bpf.vger.kernel.org archive mirror
* [PATCH bpf-next 0/3] bpf: Populate bpffs with map and prog iterators
@ 2020-07-02 20:03 Alexei Starovoitov
  2020-07-02 20:03 ` [PATCH bpf-next 1/3] bpf: Factor out bpf_link_get_by_id() helper Alexei Starovoitov
                   ` (3 more replies)
  0 siblings, 4 replies; 17+ messages in thread
From: Alexei Starovoitov @ 2020-07-02 20:03 UTC (permalink / raw)
  To: torvalds; +Cc: davem, daniel, ebiederm, netdev, bpf, kernel-team

From: Alexei Starovoitov <ast@kernel.org>

Hi Linus,

This patch set is the first real user of the user mode driver facility. The
general use case for a user mode driver is to ship vmlinux with preloaded BPF
programs. In this particular case the user mode driver populates a bpffs
instance with two BPF iterators. In several months the BPF_LSM project will
need to preload the kernel with its own set of BPF programs and attach them
to LSM hooks instead of bpffs. BPF iterators and BPF_LSM are unstable from a
uapi perspective. They are tracing based and peek into arbitrary kernel data
structures. One could ask why a kernel module cannot embed the BPF programs
itself. The reason is that libbpf is necessary to load them. First libbpf
loads the BPF Type Format, then it creates BPF maps and populates them. Then
it relocates code sections inside the BPF programs, loads the programs, and
finally attaches them to events. Theoretically libbpf could be rewritten to
work in the kernel, but that is a massive undertaking, and maintaining an
in-kernel libbpf alongside the user space libbpf would be another challenge.
Another obstacle to embedding BPF programs into a kernel module is the
sys_bpf API. Loading of programs, BTF, and maps goes through the verifier,
which validates and optimizes the code. It is possible to provide an
in-kernel API for all sys_bpf commands (load progs, create maps, update
maps, load BTF, etc.), but that is a huge amount of work and a permanent
maintenance headache. Hence the decision is to ship vmlinux with user mode
drivers that load BPF programs. Just like kernel modules extend vmlinux,
BPF programs are safe extensions of the kernel and some of them need to
ship with vmlinux.
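
To make that sequence concrete, here is a rough sketch of what a minimal
libbpf-based loader looks like in user space (the actual driver in patch 3
uses a generated skeleton; the function name and error handling here are
illustrative only):

#include <bpf/libbpf.h>

/* illustrative sketch, not part of this patch set */
int load_and_attach(void)
{
	struct bpf_object *obj;
	struct bpf_program *prog;
	struct bpf_link *link;

	/* parse ELF, load BTF, create and populate maps, relocate code */
	obj = bpf_object__open_file("iterators.bpf.o", NULL);
	if (libbpf_get_error(obj))
		return -1;

	/* push maps and programs through sys_bpf and the verifier */
	if (bpf_object__load(obj))
		goto err;

	/* attach one program; for iterators this creates a bpf_link */
	prog = bpf_object__find_program_by_name(obj, "dump_bpf_prog");
	if (!prog)
		goto err;
	link = bpf_program__attach(prog);
	if (libbpf_get_error(link))
		goto err;
	return 0;
err:
	bpf_object__close(obj);
	return -1;
}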

This patch set adds a kernel module with a user mode driver that populates bpffs
with two BPF iterators. The patches are based on Eric's v3 set.

$ mount bpffs /sys/fs/bpf/ -t bpf
$ ls -la /sys/fs/bpf/
total 4
drwxrwxrwt  2 root root    0 Jul  2 00:27 .
drwxr-xr-x 19 root root 4096 Jul  2 00:09 ..
-rw-------  1 root root    0 Jul  2 00:27 maps
-rw-------  1 root root    0 Jul  2 00:27 progs

The user mode driver will load the BPF Type Format, create and populate BPF
maps, load two BPF programs, attach them as BPF iterators, and finally send
the two bpf_link IDs back to the kernel.
The kernel will pin the two bpf_links into the newly mounted bpffs instance
under the names "progs" and "maps". These two files become human readable.

$ cat /sys/fs/bpf/progs
  id name            pages attached
  11    dump_bpf_map     1 bpf_iter_bpf_map
  12   dump_bpf_prog     1 bpf_iter_bpf_prog
  27 test_pkt_access     1
  32       test_main     1 test_pkt_access test_pkt_access
  33   test_subprog1     1 test_pkt_access_subprog1 test_pkt_access
  34   test_subprog2     1 test_pkt_access_subprog2 test_pkt_access
  35   test_subprog3     1 test_pkt_access_subprog3 test_pkt_access
  36 new_get_skb_len     1 get_skb_len test_pkt_access
  37 new_get_skb_ifi     1 get_skb_ifindex test_pkt_access
  38 new_get_constan     1 get_constant test_pkt_access

The BPF program dump_bpf_prog() in iterators.bpf.c is printing this data about
all BPF programs currently loaded in the system. This information is unstable
and will change from kernel to kernel.

In some sense this output is similar to 'bpftool prog show', which uses a
stable API to retrieve information about BPF programs. The BPF subsystem
grows quickly and there is always demand to show as much info about BPF
things as possible, but we cannot expose all of that info via the stable
uapi of the bpf syscall, since the details change so much. Right now a BPF
program can be attached to only one other BPF program. Folks are working on
patches to enable multi-attach, but for debugging it's necessary to see the
current state. There is no uapi for that, but the output above shows it:
  37 new_get_skb_ifi     1 get_skb_ifindex test_pkt_access
  38 new_get_constan     1 get_constant test_pkt_access
     [1]                   [2]          [3]
[1] is the name of the BPF prog.
[2] is the name of the function inside the target BPF prog.
[3] is the name of the target BPF prog.

[2] and [3] are not exposed via uapi, since they will change from single to
multi attach soon. There are many other cases where BPF internals are useful
for debugging, but shouldn't be exposed via uapi due to the high rate of
change.

systemd mounts /sys/fs/bpf at startup, so this kernel module with the user
mode driver needs to be available early. BPF_LSM will most likely need to
preload BPF programs even earlier.

A few interesting observations:
- though bpffs comes with two human readable files "progs" and "maps", they
  can be removed. 'rm -f /sys/fs/bpf/progs' will remove the bpf_link and the
  kernel will automatically unload the corresponding BPF progs, maps, BTFs.

- 'ps aux|grep bpf_preload' shows nothing. The user mode driver loaded the
  BPF iterators and exited. Nothing is lingering in user space at this point.

- We can consider giving 0644 permissions to "progs" and "maps" to allow
  unprivileged users to see the BPF objects loaded in the system.
  We cannot do so with "bpftool prog show", since it relies on cap_sys_admin
  parts of the bpf syscall.

- The functionality split between the core kernel, the bpf_preload kernel
  module and the user mode driver is very similar to the bpfilter style of
  interaction. Once this patch set lands, bpfilter can be removed, since the
  user mode driver facility will have a real user. Do you still insist on
  removing bpfilter?

- Similar BPF iterators can be used as unstable extensions to /proc.
  For example, mounting /proc could prepopulate some subdirectory in there
  with a BPF iterator that prints QUIC sockets instead of tcp and udp.

TODO:
- The patches are rough in error handling.
- My Makefile skills are rusty. I will need to clean that up.
- Both CONFIG_BPF_PRELOAD_UMD=m and =y are tested, but -static doesn't work
  yet. The user mode driver depends on libelf because libbpf uses it.
  That can be fixed up later.
- I've decided to avoid the clang 10 dependency and instead check the
  generated BPF skeleton into git. I think it's a reasonable compromise.
  See patch 2.

Thoughts? Comments?

Alexei Starovoitov (3):
  bpf: Factor out bpf_link_get_by_id() helper.
  bpf: Add BPF program and map iterators as built-in BPF programs.
  bpf: Add kernel module with user mode driver that populates bpffs.

 include/linux/bpf.h                           |   1 +
 init/Kconfig                                  |   2 +
 kernel/bpf/Makefile                           |   1 +
 kernel/bpf/inode.c                            |  75 ++++
 kernel/bpf/preload/Kconfig                    |  15 +
 kernel/bpf/preload/Makefile                   |  21 +
 kernel/bpf/preload/bpf_preload.h              |  15 +
 kernel/bpf/preload/bpf_preload_kern.c         |  87 +++++
 kernel/bpf/preload/bpf_preload_umd_blob.S     |   7 +
 kernel/bpf/preload/iterators/.gitignore       |   2 +
 kernel/bpf/preload/iterators/Makefile         |  57 +++
 kernel/bpf/preload/iterators/README           |   4 +
 .../preload/iterators/bpf_preload_common.h    |   8 +
 kernel/bpf/preload/iterators/iterators.bpf.c  |  81 ++++
 kernel/bpf/preload/iterators/iterators.c      |  81 ++++
 kernel/bpf/preload/iterators/iterators.skel.h | 359 ++++++++++++++++++
 kernel/bpf/syscall.c                          |  46 ++-
 17 files changed, 844 insertions(+), 18 deletions(-)
 create mode 100644 kernel/bpf/preload/Kconfig
 create mode 100644 kernel/bpf/preload/Makefile
 create mode 100644 kernel/bpf/preload/bpf_preload.h
 create mode 100644 kernel/bpf/preload/bpf_preload_kern.c
 create mode 100644 kernel/bpf/preload/bpf_preload_umd_blob.S
 create mode 100644 kernel/bpf/preload/iterators/.gitignore
 create mode 100644 kernel/bpf/preload/iterators/Makefile
 create mode 100644 kernel/bpf/preload/iterators/README
 create mode 100644 kernel/bpf/preload/iterators/bpf_preload_common.h
 create mode 100644 kernel/bpf/preload/iterators/iterators.bpf.c
 create mode 100644 kernel/bpf/preload/iterators/iterators.c
 create mode 100644 kernel/bpf/preload/iterators/iterators.skel.h

-- 
2.23.0



* [PATCH bpf-next 1/3] bpf: Factor out bpf_link_get_by_id() helper.
  2020-07-02 20:03 [PATCH bpf-next 0/3] bpf: Populate bpffs with map and prog iterators Alexei Starovoitov
@ 2020-07-02 20:03 ` Alexei Starovoitov
  2020-07-06 23:08   ` Andrii Nakryiko
  2020-07-02 20:03 ` [PATCH bpf-next 2/3] bpf: Add BPF program and map iterators as built-in BPF programs Alexei Starovoitov
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 17+ messages in thread
From: Alexei Starovoitov @ 2020-07-02 20:03 UTC (permalink / raw)
  To: torvalds; +Cc: davem, daniel, ebiederm, netdev, bpf, kernel-team

From: Alexei Starovoitov <ast@kernel.org>

Refactor the code a bit to extract a bpf_link_by_id() helper.
It's similar to the existing bpf_prog_by_id().
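
The helper takes a reference on success and returns an ERR_PTR() on
failure, so an in-kernel caller (such as the bpffs preload code added
later in this series) is expected to look roughly like this sketch:

	/* sketch, not part of this patch */
	static int use_link(u32 id)
	{
		struct bpf_link *link;

		link = bpf_link_by_id(id);
		if (IS_ERR(link))
			return PTR_ERR(link);
		/* ... use the link, e.g. pin it in bpffs ... */
		bpf_link_put(link);
		return 0;
	}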

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
---
 include/linux/bpf.h  |  1 +
 kernel/bpf/syscall.c | 46 +++++++++++++++++++++++++++-----------------
 2 files changed, 29 insertions(+), 18 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 3c659f36591d..4fcff4895f27 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1319,6 +1319,7 @@ int btf_check_type_match(struct bpf_verifier_env *env, struct bpf_prog *prog,
 			 struct btf *btf, const struct btf_type *t);
 
 struct bpf_prog *bpf_prog_by_id(u32 id);
+struct bpf_link *bpf_link_by_id(u32 id);
 
 const struct bpf_func_proto *bpf_base_func_proto(enum bpf_func_id func_id);
 #else /* !CONFIG_BPF_SYSCALL */
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index e236a6c0aea4..a2ce46f4a987 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -3969,40 +3969,50 @@ static int link_update(union bpf_attr *attr)
 	return ret;
 }
 
-static int bpf_link_inc_not_zero(struct bpf_link *link)
+static struct bpf_link *bpf_link_inc_not_zero(struct bpf_link *link)
 {
-	return atomic64_fetch_add_unless(&link->refcnt, 1, 0) ? 0 : -ENOENT;
+	return atomic64_fetch_add_unless(&link->refcnt, 1, 0) ? link : ERR_PTR(-ENOENT);
 }
 
-#define BPF_LINK_GET_FD_BY_ID_LAST_FIELD link_id
-
-static int bpf_link_get_fd_by_id(const union bpf_attr *attr)
+struct bpf_link *bpf_link_by_id(u32 id)
 {
 	struct bpf_link *link;
-	u32 id = attr->link_id;
-	int fd, err;
 
-	if (CHECK_ATTR(BPF_LINK_GET_FD_BY_ID))
-		return -EINVAL;
-
-	if (!capable(CAP_SYS_ADMIN))
-		return -EPERM;
+	if (!id)
+		return ERR_PTR(-ENOENT);
 
 	spin_lock_bh(&link_idr_lock);
-	link = idr_find(&link_idr, id);
 	/* before link is "settled", ID is 0, pretend it doesn't exist yet */
+	link = idr_find(&link_idr, id);
 	if (link) {
 		if (link->id)
-			err = bpf_link_inc_not_zero(link);
+			link = bpf_link_inc_not_zero(link);
 		else
-			err = -EAGAIN;
+			link = ERR_PTR(-EAGAIN);
 	} else {
-		err = -ENOENT;
+		link = ERR_PTR(-ENOENT);
 	}
 	spin_unlock_bh(&link_idr_lock);
+	return link;
+}
 
-	if (err)
-		return err;
+#define BPF_LINK_GET_FD_BY_ID_LAST_FIELD link_id
+
+static int bpf_link_get_fd_by_id(const union bpf_attr *attr)
+{
+	struct bpf_link *link;
+	u32 id = attr->link_id;
+	int fd;
+
+	if (CHECK_ATTR(BPF_LINK_GET_FD_BY_ID))
+		return -EINVAL;
+
+	if (!capable(CAP_SYS_ADMIN))
+		return -EPERM;
+
+	link = bpf_link_by_id(id);
+	if (IS_ERR(link))
+		return PTR_ERR(link);
 
 	fd = bpf_link_new_fd(link);
 	if (fd < 0)
-- 
2.23.0



* [PATCH bpf-next 2/3] bpf: Add BPF program and map iterators as built-in BPF programs.
  2020-07-02 20:03 [PATCH bpf-next 0/3] bpf: Populate bpffs with map and prog iterators Alexei Starovoitov
  2020-07-02 20:03 ` [PATCH bpf-next 1/3] bpf: Factor out bpf_link_get_by_id() helper Alexei Starovoitov
@ 2020-07-02 20:03 ` Alexei Starovoitov
  2020-07-08 21:55   ` Andrii Nakryiko
  2020-07-02 20:03 ` [PATCH bpf-next 3/3] bpf: Add kernel module with user mode driver that populates bpffs Alexei Starovoitov
  2020-07-03 11:18 ` [PATCH bpf-next 0/3] bpf: Populate bpffs with map and prog iterators Toke Høiland-Jørgensen
  3 siblings, 1 reply; 17+ messages in thread
From: Alexei Starovoitov @ 2020-07-02 20:03 UTC (permalink / raw)
  To: torvalds; +Cc: davem, daniel, ebiederm, netdev, bpf, kernel-team

From: Alexei Starovoitov <ast@kernel.org>

The program and map iterators work similarly to seq_files.
Once a program is pinned in bpffs it can be read with the "cat" tool
to print human readable output, in this case about BPF programs and maps.
For example:
$ cat /sys/fs/bpf/progs
  id name            pages attached
   5    dump_bpf_map     1 bpf_iter_bpf_map
   6   dump_bpf_prog     1 bpf_iter_bpf_prog
$ cat /sys/fs/bpf/maps
  id name            pages
   3 iterator.rodata     2

To avoid a kernel build dependency on clang 10, the BPF skeleton generation
is split out into a manual "make" step and the generated .skel.h is checked
into git.
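
For reference, user space is expected to consume the generated skeleton
roughly like this (a sketch; the real user mode driver doing this is
added in the next patch):

	/* sketch, not part of this patch */
	struct iterators_bpf *skel;

	skel = iterators_bpf__open_and_load();
	if (!skel)
		return -1;
	if (iterators_bpf__attach(skel)) {
		iterators_bpf__destroy(skel);
		return -1;
	}
	/* dump_bpf_map and dump_bpf_prog links are now in skel->links */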

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
---
 kernel/bpf/preload/iterators/.gitignore       |   2 +
 kernel/bpf/preload/iterators/Makefile         |  57 +++
 kernel/bpf/preload/iterators/README           |   4 +
 kernel/bpf/preload/iterators/iterators.bpf.c  |  81 ++++
 kernel/bpf/preload/iterators/iterators.skel.h | 359 ++++++++++++++++++
 5 files changed, 503 insertions(+)
 create mode 100644 kernel/bpf/preload/iterators/.gitignore
 create mode 100644 kernel/bpf/preload/iterators/Makefile
 create mode 100644 kernel/bpf/preload/iterators/README
 create mode 100644 kernel/bpf/preload/iterators/iterators.bpf.c
 create mode 100644 kernel/bpf/preload/iterators/iterators.skel.h

diff --git a/kernel/bpf/preload/iterators/.gitignore b/kernel/bpf/preload/iterators/.gitignore
new file mode 100644
index 000000000000..ffdb70230c8b
--- /dev/null
+++ b/kernel/bpf/preload/iterators/.gitignore
@@ -0,0 +1,2 @@
+# SPDX-License-Identifier: GPL-2.0-only
+/.output
diff --git a/kernel/bpf/preload/iterators/Makefile b/kernel/bpf/preload/iterators/Makefile
new file mode 100644
index 000000000000..28fa8c1440f4
--- /dev/null
+++ b/kernel/bpf/preload/iterators/Makefile
@@ -0,0 +1,57 @@
+# SPDX-License-Identifier: GPL-2.0
+OUTPUT := .output
+CLANG ?= clang
+LLC ?= llc
+LLVM_STRIP ?= llvm-strip
+DEFAULT_BPFTOOL := $(OUTPUT)/sbin/bpftool
+BPFTOOL ?= $(DEFAULT_BPFTOOL)
+LIBBPF_SRC := $(abspath ../../../../tools/lib/bpf)
+BPFOBJ := $(OUTPUT)/libbpf.a
+BPF_INCLUDE := $(OUTPUT)
+INCLUDES := -I$(OUTPUT) -I$(BPF_INCLUDE) -I$(abspath ../../../../tools/lib)        \
+       -I$(abspath ../../../../tools/include/uapi)
+CFLAGS := -g -Wall
+
+abs_out := $(abspath $(OUTPUT))
+ifeq ($(V),1)
+Q =
+msg =
+else
+Q = @
+msg = @printf '  %-8s %s%s\n' "$(1)" "$(notdir $(2))" "$(if $(3), $(3))";
+MAKEFLAGS += --no-print-directory
+submake_extras := feature_display=0
+endif
+
+.DELETE_ON_ERROR:
+
+.PHONY: all clean
+
+all: iterators.skel.h
+
+clean:
+	$(call msg,CLEAN)
+	$(Q)rm -rf $(OUTPUT) iterators
+
+iterators.skel.h: $(OUTPUT)/iterators.bpf.o | $(BPFTOOL)
+	$(call msg,GEN-SKEL,$@)
+	$(Q)$(BPFTOOL) gen skeleton $< > $@
+
+
+$(OUTPUT)/iterators.bpf.o: iterators.bpf.c $(BPFOBJ) | $(OUTPUT)
+	$(call msg,BPF,$@)
+	$(Q)$(CLANG) -g -O2 -target bpf $(INCLUDES)			      \
+		 -c $(filter %.c,$^) -o $@ &&				      \
+	$(LLVM_STRIP) -g $@
+
+$(OUTPUT):
+	$(call msg,MKDIR,$@)
+	$(Q)mkdir -p $(OUTPUT)
+
+$(BPFOBJ): $(wildcard $(LIBBPF_SRC)/*.[ch] $(LIBBPF_SRC)/Makefile) | $(OUTPUT)
+	$(Q)$(MAKE) $(submake_extras) -C $(LIBBPF_SRC)			       \
+		    OUTPUT=$(abspath $(dir $@))/ $(abspath $@)
+
+$(DEFAULT_BPFTOOL):
+	$(Q)$(MAKE) $(submake_extras) -C ../../../../tools/bpf/bpftool			      \
+		    prefix= OUTPUT=$(abs_out)/ DESTDIR=$(abs_out) install
diff --git a/kernel/bpf/preload/iterators/README b/kernel/bpf/preload/iterators/README
new file mode 100644
index 000000000000..7fd6d39a9ad2
--- /dev/null
+++ b/kernel/bpf/preload/iterators/README
@@ -0,0 +1,4 @@
+WARNING:
+If you change "iterators.bpf.c" do "make -j" in this directory to rebuild "iterators.skel.h".
+Make sure to have clang 10 installed.
+See Documentation/bpf/bpf_devel_QA.rst
diff --git a/kernel/bpf/preload/iterators/iterators.bpf.c b/kernel/bpf/preload/iterators/iterators.bpf.c
new file mode 100644
index 000000000000..bea214b0e354
--- /dev/null
+++ b/kernel/bpf/preload/iterators/iterators.bpf.c
@@ -0,0 +1,81 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2020 Facebook */
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+struct seq_file;
+struct bpf_iter_meta {
+	struct seq_file *seq;
+	__u64 session_id;
+	__u64 seq_num;
+} __attribute__((preserve_access_index));
+
+struct bpf_map_memory {
+	__u32 pages;
+};
+struct bpf_map {
+	__u32 id;
+	struct bpf_map_memory memory;
+	char name[16];
+} __attribute__((preserve_access_index));
+
+struct bpf_iter__bpf_map {
+	struct bpf_iter_meta *meta;
+	struct bpf_map *map;
+} __attribute__((preserve_access_index));
+
+struct bpf_prog_aux {
+	__u32 id;
+	char name[16];
+	const char *attach_func_name;
+	struct bpf_prog *linked_prog;
+} __attribute__((preserve_access_index));
+
+struct bpf_prog {
+	struct bpf_prog_aux *aux;
+	__u16 pages;
+} __attribute__((preserve_access_index));
+
+struct bpf_iter__bpf_prog {
+	struct bpf_iter_meta *meta;
+	struct bpf_prog *prog;
+} __attribute__((preserve_access_index));
+
+SEC("iter/bpf_map")
+int dump_bpf_map(struct bpf_iter__bpf_map *ctx)
+{
+	struct seq_file *seq = ctx->meta->seq;
+	__u64 seq_num = ctx->meta->seq_num;
+	struct bpf_map *map = ctx->map;
+
+	if (!map)
+		return 0;
+
+	if (seq_num == 0)
+		BPF_SEQ_PRINTF(seq, "  id name            pages\n");
+
+	BPF_SEQ_PRINTF(seq, "%4u%16s%6d\n", map->id, map->name, map->memory.pages);
+	return 0;
+}
+
+SEC("iter/bpf_prog")
+int dump_bpf_prog(struct bpf_iter__bpf_prog *ctx)
+{
+	struct seq_file *seq = ctx->meta->seq;
+	__u64 seq_num = ctx->meta->seq_num;
+	struct bpf_prog *prog = ctx->prog;
+	struct bpf_prog_aux *aux;
+
+	if (!prog)
+		return 0;
+
+	aux = prog->aux;
+	if (seq_num == 0)
+		BPF_SEQ_PRINTF(seq, "  id name            pages attached\n");
+
+	BPF_SEQ_PRINTF(seq, "%4u%16s%6d %s %s\n", aux->id, aux->name, prog->pages,
+		       aux->attach_func_name, aux->linked_prog->aux->name);
+	return 0;
+}
+char LICENSE[] SEC("license") = "GPL";
diff --git a/kernel/bpf/preload/iterators/iterators.skel.h b/kernel/bpf/preload/iterators/iterators.skel.h
new file mode 100644
index 000000000000..bbee5a1f346c
--- /dev/null
+++ b/kernel/bpf/preload/iterators/iterators.skel.h
@@ -0,0 +1,359 @@
+/* SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause) */
+
+/* THIS FILE IS AUTOGENERATED! */
+#ifndef __ITERATORS_BPF_SKEL_H__
+#define __ITERATORS_BPF_SKEL_H__
+
+#include <stdlib.h>
+#include <bpf/libbpf.h>
+
+struct iterators_bpf {
+	struct bpf_object_skeleton *skeleton;
+	struct bpf_object *obj;
+	struct {
+		struct bpf_map *rodata;
+	} maps;
+	struct {
+		struct bpf_program *dump_bpf_map;
+		struct bpf_program *dump_bpf_prog;
+	} progs;
+	struct {
+		struct bpf_link *dump_bpf_map;
+		struct bpf_link *dump_bpf_prog;
+	} links;
+	struct iterators_bpf__rodata {
+		const char dump_bpf_map____fmt[28];
+		const char dump_bpf_map____fmt_1[12];
+		const char dump_bpf_prog____fmt[37];
+		const char dump_bpf_prog____fmt_2[18];
+	} *rodata;
+};
+
+static void
+iterators_bpf__destroy(struct iterators_bpf *obj)
+{
+	if (!obj)
+		return;
+	if (obj->skeleton)
+		bpf_object__destroy_skeleton(obj->skeleton);
+	free(obj);
+}
+
+static inline int
+iterators_bpf__create_skeleton(struct iterators_bpf *obj);
+
+static inline struct iterators_bpf *
+iterators_bpf__open_opts(const struct bpf_object_open_opts *opts)
+{
+	struct iterators_bpf *obj;
+
+	obj = (typeof(obj))calloc(1, sizeof(*obj));
+	if (!obj)
+		return NULL;
+	if (iterators_bpf__create_skeleton(obj))
+		goto err;
+	if (bpf_object__open_skeleton(obj->skeleton, opts))
+		goto err;
+
+	return obj;
+err:
+	iterators_bpf__destroy(obj);
+	return NULL;
+}
+
+static inline struct iterators_bpf *
+iterators_bpf__open(void)
+{
+	return iterators_bpf__open_opts(NULL);
+}
+
+static inline int
+iterators_bpf__load(struct iterators_bpf *obj)
+{
+	return bpf_object__load_skeleton(obj->skeleton);
+}
+
+static inline struct iterators_bpf *
+iterators_bpf__open_and_load(void)
+{
+	struct iterators_bpf *obj;
+
+	obj = iterators_bpf__open();
+	if (!obj)
+		return NULL;
+	if (iterators_bpf__load(obj)) {
+		iterators_bpf__destroy(obj);
+		return NULL;
+	}
+	return obj;
+}
+
+static inline int
+iterators_bpf__attach(struct iterators_bpf *obj)
+{
+	return bpf_object__attach_skeleton(obj->skeleton);
+}
+
+static inline void
+iterators_bpf__detach(struct iterators_bpf *obj)
+{
+	return bpf_object__detach_skeleton(obj->skeleton);
+}
+
+static inline int
+iterators_bpf__create_skeleton(struct iterators_bpf *obj)
+{
+	struct bpf_object_skeleton *s;
+
+	s = (typeof(s))calloc(1, sizeof(*s));
+	if (!s)
+		return -1;
+	obj->skeleton = s;
+
+	s->sz = sizeof(*s);
+	s->name = "iterators_bpf";
+	s->obj = &obj->obj;
+
+	/* maps */
+	s->map_cnt = 1;
+	s->map_skel_sz = sizeof(*s->maps);
+	s->maps = (typeof(s->maps))calloc(s->map_cnt, s->map_skel_sz);
+	if (!s->maps)
+		goto err;
+
+	s->maps[0].name = "iterator.rodata";
+	s->maps[0].map = &obj->maps.rodata;
+	s->maps[0].mmaped = (void **)&obj->rodata;
+
+	/* programs */
+	s->prog_cnt = 2;
+	s->prog_skel_sz = sizeof(*s->progs);
+	s->progs = (typeof(s->progs))calloc(s->prog_cnt, s->prog_skel_sz);
+	if (!s->progs)
+		goto err;
+
+	s->progs[0].name = "dump_bpf_map";
+	s->progs[0].prog = &obj->progs.dump_bpf_map;
+	s->progs[0].link = &obj->links.dump_bpf_map;
+
+	s->progs[1].name = "dump_bpf_prog";
+	s->progs[1].prog = &obj->progs.dump_bpf_prog;
+	s->progs[1].link = &obj->links.dump_bpf_prog;
+
+	s->data_sz = 5744;
+	s->data = (void *)"\
+\x7f\x45\x4c\x46\x02\x01\x01\0\0\0\0\0\0\0\0\0\x01\0\xf7\0\x01\0\0\0\0\0\0\0\0\
+\0\0\0\0\0\0\0\0\0\0\0\xb0\x12\0\0\0\0\0\0\0\0\0\0\x40\0\0\0\0\0\x40\0\x0f\0\
+\x0e\0\x79\x12\0\0\0\0\0\0\x79\x26\0\0\0\0\0\0\x79\x17\x08\0\0\0\0\0\x15\x07\
+\x1a\0\0\0\0\0\x79\x21\x10\0\0\0\0\0\x55\x01\x08\0\0\0\0\0\xbf\xa4\0\0\0\0\0\0\
+\x07\x04\0\0\xe8\xff\xff\xff\xbf\x61\0\0\0\0\0\0\x18\x02\0\0\0\0\0\0\0\0\0\0\0\
+\0\0\0\xb7\x03\0\0\x1c\0\0\0\xb7\x05\0\0\0\0\0\0\x85\0\0\0\x7e\0\0\0\x61\x71\0\
+\0\0\0\0\0\x7b\x1a\xe8\xff\0\0\0\0\xb7\x01\0\0\x08\0\0\0\xbf\x72\0\0\0\0\0\0\
+\x0f\x12\0\0\0\0\0\0\x7b\x2a\xf0\xff\0\0\0\0\x61\x71\x04\0\0\0\0\0\x7b\x1a\xf8\
+\xff\0\0\0\0\xbf\xa4\0\0\0\0\0\0\x07\x04\0\0\xe8\xff\xff\xff\xbf\x61\0\0\0\0\0\
+\0\x18\x02\0\0\x1c\0\0\0\0\0\0\0\0\0\0\0\xb7\x03\0\0\x0c\0\0\0\xb7\x05\0\0\x18\
+\0\0\0\x85\0\0\0\x7e\0\0\0\xb7\0\0\0\0\0\0\0\x95\0\0\0\0\0\0\0\x79\x12\0\0\0\0\
+\0\0\x79\x26\0\0\0\0\0\0\x79\x17\x08\0\0\0\0\0\x15\x07\x21\0\0\0\0\0\x79\x78\0\
+\0\0\0\0\0\x79\x21\x10\0\0\0\0\0\x55\x01\x08\0\0\0\0\0\xbf\xa4\0\0\0\0\0\0\x07\
+\x04\0\0\xd8\xff\xff\xff\xbf\x61\0\0\0\0\0\0\x18\x02\0\0\x28\0\0\0\0\0\0\0\0\0\
+\0\0\xb7\x03\0\0\x25\0\0\0\xb7\x05\0\0\0\0\0\0\x85\0\0\0\x7e\0\0\0\x61\x81\0\0\
+\0\0\0\0\x7b\x1a\xd8\xff\0\0\0\0\xb7\x01\0\0\x04\0\0\0\xbf\x82\0\0\0\0\0\0\x0f\
+\x12\0\0\0\0\0\0\x7b\x2a\xe0\xff\0\0\0\0\x69\x72\x08\0\0\0\0\0\x7b\x2a\xe8\xff\
+\0\0\0\0\x79\x82\x18\0\0\0\0\0\x7b\x2a\xf0\xff\0\0\0\0\x79\x82\x20\0\0\0\0\0\
+\x79\x22\0\0\0\0\0\0\x0f\x12\0\0\0\0\0\0\x7b\x2a\xf8\xff\0\0\0\0\xbf\xa4\0\0\0\
+\0\0\0\x07\x04\0\0\xd8\xff\xff\xff\xbf\x61\0\0\0\0\0\0\x18\x02\0\0\x4d\0\0\0\0\
+\0\0\0\0\0\0\0\xb7\x03\0\0\x12\0\0\0\xb7\x05\0\0\x28\0\0\0\x85\0\0\0\x7e\0\0\0\
+\xb7\0\0\0\0\0\0\0\x95\0\0\0\0\0\0\0\x20\x20\x69\x64\x20\x6e\x61\x6d\x65\x20\
+\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x70\x61\x67\x65\x73\x0a\0\x25\x34\
+\x75\x25\x31\x36\x73\x25\x36\x64\x0a\0\x20\x20\x69\x64\x20\x6e\x61\x6d\x65\x20\
+\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x70\x61\x67\x65\x73\x20\x61\x74\
+\x74\x61\x63\x68\x65\x64\x0a\0\x25\x34\x75\x25\x31\x36\x73\x25\x36\x64\x20\x25\
+\x73\x20\x25\x73\x0a\0\x47\x50\x4c\0\x9f\xeb\x01\0\x18\0\0\0\0\0\0\0\x90\x03\0\
+\0\x90\x03\0\0\xf8\x03\0\0\0\0\0\0\0\0\0\x02\x02\0\0\0\x01\0\0\0\x02\0\0\x04\
+\x10\0\0\0\x13\0\0\0\x03\0\0\0\0\0\0\0\x18\0\0\0\x04\0\0\0\x40\0\0\0\0\0\0\0\0\
+\0\0\x02\x08\0\0\0\0\0\0\0\0\0\0\x02\x0d\0\0\0\0\0\0\0\x01\0\0\x0d\x06\0\0\0\
+\x1c\0\0\0\x01\0\0\0\x20\0\0\0\0\0\0\x01\x04\0\0\0\x20\0\0\x01\x24\0\0\0\x01\0\
+\0\x0c\x05\0\0\0\xa3\0\0\0\x03\0\0\x04\x18\0\0\0\xb1\0\0\0\x09\0\0\0\0\0\0\0\
+\xb5\0\0\0\x0b\0\0\0\x40\0\0\0\xc0\0\0\0\x0b\0\0\0\x80\0\0\0\0\0\0\0\0\0\0\x02\
+\x0a\0\0\0\xc8\0\0\0\0\0\0\x07\0\0\0\0\xd1\0\0\0\0\0\0\x08\x0c\0\0\0\xd7\0\0\0\
+\0\0\0\x01\x08\0\0\0\x40\0\0\0\x91\x01\0\0\x03\0\0\x04\x18\0\0\0\x99\x01\0\0\
+\x0e\0\0\0\0\0\0\0\x9c\x01\0\0\x10\0\0\0\x20\0\0\0\xa3\x01\0\0\x12\0\0\0\x40\0\
+\0\0\xa8\x01\0\0\0\0\0\x08\x0f\0\0\0\xae\x01\0\0\0\0\0\x01\x04\0\0\0\x20\0\0\0\
+\xbb\x01\0\0\x01\0\0\x04\x04\0\0\0\xca\x01\0\0\x0e\0\0\0\0\0\0\0\xd0\x01\0\0\0\
+\0\0\x01\x01\0\0\0\x08\0\0\x01\0\0\0\0\0\0\0\x03\0\0\0\0\x11\0\0\0\x13\0\0\0\
+\x10\0\0\0\xd5\x01\0\0\0\0\0\x01\x04\0\0\0\x20\0\0\0\0\0\0\0\0\0\0\x02\x15\0\0\
+\0\x38\x02\0\0\x02\0\0\x04\x10\0\0\0\x13\0\0\0\x03\0\0\0\0\0\0\0\x4b\x02\0\0\
+\x16\0\0\0\x40\0\0\0\0\0\0\0\0\0\0\x02\x19\0\0\0\0\0\0\0\x01\0\0\x0d\x06\0\0\0\
+\x1c\0\0\0\x14\0\0\0\x50\x02\0\0\x01\0\0\x0c\x17\0\0\0\x9c\x02\0\0\x02\0\0\x04\
+\x10\0\0\0\xa5\x02\0\0\x1a\0\0\0\0\0\0\0\xca\x01\0\0\x1b\0\0\0\x40\0\0\0\0\0\0\
+\0\0\0\0\x02\x1d\0\0\0\xa9\x02\0\0\0\0\0\x08\x1c\0\0\0\xaf\x02\0\0\0\0\0\x01\
+\x02\0\0\0\x10\0\0\0\x10\x03\0\0\x04\0\0\x04\x28\0\0\0\x99\x01\0\0\x0e\0\0\0\0\
+\0\0\0\xa3\x01\0\0\x12\0\0\0\x20\0\0\0\x1d\x03\0\0\x1e\0\0\0\xc0\0\0\0\x2e\x03\
+\0\0\x16\0\0\0\0\x01\0\0\0\0\0\0\0\0\0\x02\x1f\0\0\0\0\0\0\0\0\0\0\x0a\x11\0\0\
+\0\0\0\0\0\0\0\0\x03\0\0\0\0\x1f\0\0\0\x13\0\0\0\x1c\0\0\0\x8a\x03\0\0\0\0\0\
+\x0e\x20\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x03\0\0\0\0\x1f\0\0\0\x13\0\0\0\x0c\0\0\0\
+\x9e\x03\0\0\0\0\0\x0e\x22\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x03\0\0\0\0\x1f\0\0\0\
+\x13\0\0\0\x25\0\0\0\xb4\x03\0\0\0\0\0\x0e\x24\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x03\
+\0\0\0\0\x1f\0\0\0\x13\0\0\0\x12\0\0\0\xc9\x03\0\0\0\0\0\x0e\x26\0\0\0\0\0\0\0\
+\0\0\0\0\0\0\0\x03\0\0\0\0\x11\0\0\0\x13\0\0\0\x04\0\0\0\xe0\x03\0\0\0\0\0\x0e\
+\x28\0\0\0\x01\0\0\0\xe8\x03\0\0\x04\0\0\x0f\0\0\0\0\x21\0\0\0\0\0\0\0\x1c\0\0\
+\0\x23\0\0\0\x1c\0\0\0\x0c\0\0\0\x25\0\0\0\x28\0\0\0\x25\0\0\0\x27\0\0\0\x4d\0\
+\0\0\x12\0\0\0\xf0\x03\0\0\x01\0\0\x0f\0\0\0\0\x29\0\0\0\0\0\0\0\x04\0\0\0\0\
+\x62\x70\x66\x5f\x69\x74\x65\x72\x5f\x5f\x62\x70\x66\x5f\x6d\x61\x70\0\x6d\x65\
+\x74\x61\0\x6d\x61\x70\0\x63\x74\x78\0\x69\x6e\x74\0\x64\x75\x6d\x70\x5f\x62\
+\x70\x66\x5f\x6d\x61\x70\0\x69\x74\x65\x72\x2f\x62\x70\x66\x5f\x6d\x61\x70\0\
+\x30\x3a\x30\0\x2f\x77\x2f\x6e\x65\x74\x2d\x6e\x65\x78\x74\x2f\x6b\x65\x72\x6e\
+\x65\x6c\x2f\x62\x70\x66\x2f\x70\x72\x65\x6c\x6f\x61\x64\x2f\x69\x74\x65\x72\
+\x61\x74\x6f\x72\x73\x2f\x69\x74\x65\x72\x61\x74\x6f\x72\x73\x2e\x62\x70\x66\
+\x2e\x63\0\x09\x73\x74\x72\x75\x63\x74\x20\x73\x65\x71\x5f\x66\x69\x6c\x65\x20\
+\x2a\x73\x65\x71\x20\x3d\x20\x63\x74\x78\x2d\x3e\x6d\x65\x74\x61\x2d\x3e\x73\
+\x65\x71\x3b\0\x62\x70\x66\x5f\x69\x74\x65\x72\x5f\x6d\x65\x74\x61\0\x73\x65\
+\x71\0\x73\x65\x73\x73\x69\x6f\x6e\x5f\x69\x64\0\x73\x65\x71\x5f\x6e\x75\x6d\0\
+\x73\x65\x71\x5f\x66\x69\x6c\x65\0\x5f\x5f\x75\x36\x34\0\x6c\x6f\x6e\x67\x20\
+\x6c\x6f\x6e\x67\x20\x75\x6e\x73\x69\x67\x6e\x65\x64\x20\x69\x6e\x74\0\x30\x3a\
+\x31\0\x09\x73\x74\x72\x75\x63\x74\x20\x62\x70\x66\x5f\x6d\x61\x70\x20\x2a\x6d\
+\x61\x70\x20\x3d\x20\x63\x74\x78\x2d\x3e\x6d\x61\x70\x3b\0\x09\x69\x66\x20\x28\
+\x21\x6d\x61\x70\x29\0\x30\x3a\x32\0\x09\x5f\x5f\x75\x36\x34\x20\x73\x65\x71\
+\x5f\x6e\x75\x6d\x20\x3d\x20\x63\x74\x78\x2d\x3e\x6d\x65\x74\x61\x2d\x3e\x73\
+\x65\x71\x5f\x6e\x75\x6d\x3b\0\x09\x69\x66\x20\x28\x73\x65\x71\x5f\x6e\x75\x6d\
+\x20\x3d\x3d\x20\x30\x29\0\x09\x09\x42\x50\x46\x5f\x53\x45\x51\x5f\x50\x52\x49\
+\x4e\x54\x46\x28\x73\x65\x71\x2c\x20\x22\x20\x20\x69\x64\x20\x6e\x61\x6d\x65\
+\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x70\x61\x67\x65\x73\x5c\x6e\
+\x22\x29\x3b\0\x62\x70\x66\x5f\x6d\x61\x70\0\x69\x64\0\x6d\x65\x6d\x6f\x72\x79\
+\0\x6e\x61\x6d\x65\0\x5f\x5f\x75\x33\x32\0\x75\x6e\x73\x69\x67\x6e\x65\x64\x20\
+\x69\x6e\x74\0\x62\x70\x66\x5f\x6d\x61\x70\x5f\x6d\x65\x6d\x6f\x72\x79\0\x70\
+\x61\x67\x65\x73\0\x63\x68\x61\x72\0\x5f\x5f\x41\x52\x52\x41\x59\x5f\x53\x49\
+\x5a\x45\x5f\x54\x59\x50\x45\x5f\x5f\0\x09\x42\x50\x46\x5f\x53\x45\x51\x5f\x50\
+\x52\x49\x4e\x54\x46\x28\x73\x65\x71\x2c\x20\x22\x25\x34\x75\x25\x31\x36\x73\
+\x25\x36\x64\x5c\x6e\x22\x2c\x20\x6d\x61\x70\x2d\x3e\x69\x64\x2c\x20\x6d\x61\
+\x70\x2d\x3e\x6e\x61\x6d\x65\x2c\x20\x6d\x61\x70\x2d\x3e\x6d\x65\x6d\x6f\x72\
+\x79\x2e\x70\x61\x67\x65\x73\x29\x3b\0\x7d\0\x62\x70\x66\x5f\x69\x74\x65\x72\
+\x5f\x5f\x62\x70\x66\x5f\x70\x72\x6f\x67\0\x70\x72\x6f\x67\0\x64\x75\x6d\x70\
+\x5f\x62\x70\x66\x5f\x70\x72\x6f\x67\0\x69\x74\x65\x72\x2f\x62\x70\x66\x5f\x70\
+\x72\x6f\x67\0\x09\x73\x74\x72\x75\x63\x74\x20\x62\x70\x66\x5f\x70\x72\x6f\x67\
+\x20\x2a\x70\x72\x6f\x67\x20\x3d\x20\x63\x74\x78\x2d\x3e\x70\x72\x6f\x67\x3b\0\
+\x09\x69\x66\x20\x28\x21\x70\x72\x6f\x67\x29\0\x62\x70\x66\x5f\x70\x72\x6f\x67\
+\0\x61\x75\x78\0\x5f\x5f\x75\x31\x36\0\x75\x6e\x73\x69\x67\x6e\x65\x64\x20\x73\
+\x68\x6f\x72\x74\0\x09\x61\x75\x78\x20\x3d\x20\x70\x72\x6f\x67\x2d\x3e\x61\x75\
+\x78\x3b\0\x09\x09\x42\x50\x46\x5f\x53\x45\x51\x5f\x50\x52\x49\x4e\x54\x46\x28\
+\x73\x65\x71\x2c\x20\x22\x20\x20\x69\x64\x20\x6e\x61\x6d\x65\x20\x20\x20\x20\
+\x20\x20\x20\x20\x20\x20\x20\x20\x70\x61\x67\x65\x73\x20\x61\x74\x74\x61\x63\
+\x68\x65\x64\x5c\x6e\x22\x29\x3b\0\x62\x70\x66\x5f\x70\x72\x6f\x67\x5f\x61\x75\
+\x78\0\x61\x74\x74\x61\x63\x68\x5f\x66\x75\x6e\x63\x5f\x6e\x61\x6d\x65\0\x6c\
+\x69\x6e\x6b\x65\x64\x5f\x70\x72\x6f\x67\0\x09\x42\x50\x46\x5f\x53\x45\x51\x5f\
+\x50\x52\x49\x4e\x54\x46\x28\x73\x65\x71\x2c\x20\x22\x25\x34\x75\x25\x31\x36\
+\x73\x25\x36\x64\x20\x25\x73\x20\x25\x73\x5c\x6e\x22\x2c\x20\x61\x75\x78\x2d\
+\x3e\x69\x64\x2c\x20\x61\x75\x78\x2d\x3e\x6e\x61\x6d\x65\x2c\x20\x70\x72\x6f\
+\x67\x2d\x3e\x70\x61\x67\x65\x73\x2c\0\x30\x3a\x33\0\x64\x75\x6d\x70\x5f\x62\
+\x70\x66\x5f\x6d\x61\x70\x2e\x5f\x5f\x5f\x66\x6d\x74\0\x64\x75\x6d\x70\x5f\x62\
+\x70\x66\x5f\x6d\x61\x70\x2e\x5f\x5f\x5f\x66\x6d\x74\x2e\x31\0\x64\x75\x6d\x70\
+\x5f\x62\x70\x66\x5f\x70\x72\x6f\x67\x2e\x5f\x5f\x5f\x66\x6d\x74\0\x64\x75\x6d\
+\x70\x5f\x62\x70\x66\x5f\x70\x72\x6f\x67\x2e\x5f\x5f\x5f\x66\x6d\x74\x2e\x32\0\
+\x4c\x49\x43\x45\x4e\x53\x45\0\x2e\x72\x6f\x64\x61\x74\x61\0\x6c\x69\x63\x65\
+\x6e\x73\x65\0\x9f\xeb\x01\0\x20\0\0\0\0\0\0\0\x24\0\0\0\x24\0\0\0\x44\x01\0\0\
+\x68\x01\0\0\x34\x01\0\0\x08\0\0\0\x31\0\0\0\x01\0\0\0\0\0\0\0\x07\0\0\0\x5e\
+\x02\0\0\x01\0\0\0\0\0\0\0\x18\0\0\0\x10\0\0\0\x31\0\0\0\x09\0\0\0\0\0\0\0\x42\
+\0\0\0\x7b\0\0\0\x1e\xc0\0\0\x08\0\0\0\x42\0\0\0\x7b\0\0\0\x24\xc0\0\0\x10\0\0\
+\0\x42\0\0\0\xf2\0\0\0\x1d\xc8\0\0\x18\0\0\0\x42\0\0\0\x13\x01\0\0\x06\xd0\0\0\
+\x20\0\0\0\x42\0\0\0\x22\x01\0\0\x1d\xc4\0\0\x28\0\0\0\x42\0\0\0\x47\x01\0\0\
+\x06\xdc\0\0\x38\0\0\0\x42\0\0\0\x5a\x01\0\0\x03\xe0\0\0\x70\0\0\0\x42\0\0\0\
+\xe9\x01\0\0\x02\xe8\0\0\xf0\0\0\0\x42\0\0\0\x36\x02\0\0\x01\xf0\0\0\x5e\x02\0\
+\0\x0a\0\0\0\0\0\0\0\x42\0\0\0\x7b\0\0\0\x1e\x04\x01\0\x08\0\0\0\x42\0\0\0\x7b\
+\0\0\0\x24\x04\x01\0\x10\0\0\0\x42\0\0\0\x6c\x02\0\0\x1f\x0c\x01\0\x18\0\0\0\
+\x42\0\0\0\x90\x02\0\0\x06\x18\x01\0\x20\0\0\0\x42\0\0\0\xbe\x02\0\0\x0e\x24\
+\x01\0\x28\0\0\0\x42\0\0\0\x22\x01\0\0\x1d\x08\x01\0\x30\0\0\0\x42\0\0\0\x47\
+\x01\0\0\x06\x28\x01\0\x40\0\0\0\x42\0\0\0\xd0\x02\0\0\x03\x2c\x01\0\x78\0\0\0\
+\x42\0\0\0\x3a\x03\0\0\x02\x34\x01\0\x28\x01\0\0\x42\0\0\0\x36\x02\0\0\x01\x40\
+\x01\0\x10\0\0\0\x31\0\0\0\x07\0\0\0\0\0\0\0\x02\0\0\0\x3e\0\0\0\0\0\0\0\x08\0\
+\0\0\x08\0\0\0\x3e\0\0\0\0\0\0\0\x10\0\0\0\x02\0\0\0\xee\0\0\0\0\0\0\0\x20\0\0\
+\0\x08\0\0\0\x1e\x01\0\0\0\0\0\0\x70\0\0\0\x0d\0\0\0\x3e\0\0\0\0\0\0\0\x80\0\0\
+\0\x0d\0\0\0\x1e\x01\0\0\0\0\0\0\xa0\0\0\0\x0d\0\0\0\xee\0\0\0\0\0\0\0\x5e\x02\
+\0\0\x0b\0\0\0\0\0\0\0\x15\0\0\0\x3e\0\0\0\0\0\0\0\x08\0\0\0\x08\0\0\0\x3e\0\0\
+\0\0\0\0\0\x10\0\0\0\x15\0\0\0\xee\0\0\0\0\0\0\0\x20\0\0\0\x19\0\0\0\x3e\0\0\0\
+\0\0\0\0\x28\0\0\0\x08\0\0\0\x1e\x01\0\0\0\0\0\0\x78\0\0\0\x1d\0\0\0\x3e\0\0\0\
+\0\0\0\0\x88\0\0\0\x1d\0\0\0\xee\0\0\0\0\0\0\0\xa8\0\0\0\x19\0\0\0\xee\0\0\0\0\
+\0\0\0\xb8\0\0\0\x1d\0\0\0\x1e\x01\0\0\0\0\0\0\xc8\0\0\0\x1d\0\0\0\x86\x03\0\0\
+\0\0\0\0\xd0\0\0\0\x19\0\0\0\x3e\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\
+\0\0\0\0\0\0\0\0\0\xcf\0\0\0\0\0\x02\0\x70\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\xc1\0\
+\0\0\0\0\x02\0\xf0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\xc8\0\0\0\0\0\x03\0\x78\0\0\0\
+\0\0\0\0\0\0\0\0\0\0\0\0\xba\0\0\0\0\0\x03\0\x28\x01\0\0\0\0\0\0\0\0\0\0\0\0\0\
+\0\x14\0\0\0\x01\0\x04\0\0\0\0\0\0\0\0\0\x1c\0\0\0\0\0\0\0\xed\0\0\0\x01\0\x04\
+\0\x1c\0\0\0\0\0\0\0\x0c\0\0\0\0\0\0\0\x28\0\0\0\x01\0\x04\0\x28\0\0\0\0\0\0\0\
+\x25\0\0\0\0\0\0\0\xd6\0\0\0\x01\0\x04\0\x4d\0\0\0\0\0\0\0\x12\0\0\0\0\0\0\0\0\
+\0\0\0\x03\0\x02\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x03\0\x03\0\0\0\0\0\
+\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x03\0\x04\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\
+\xb2\0\0\0\x11\0\x05\0\0\0\0\0\0\0\0\0\x04\0\0\0\0\0\0\0\x3d\0\0\0\x12\0\x02\0\
+\0\0\0\0\0\0\0\0\0\x01\0\0\0\0\0\0\x5b\0\0\0\x12\0\x03\0\0\0\0\0\0\0\0\0\x38\
+\x01\0\0\0\0\0\0\x48\0\0\0\0\0\0\0\x01\0\0\0\x0b\0\0\0\xc8\0\0\0\0\0\0\0\x01\0\
+\0\0\x0b\0\0\0\x50\0\0\0\0\0\0\0\x01\0\0\0\x0b\0\0\0\0\x01\0\0\0\0\0\0\x01\0\0\
+\0\x0b\0\0\0\x64\x03\0\0\0\0\0\0\x0a\0\0\0\x0b\0\0\0\x70\x03\0\0\0\0\0\0\x0a\0\
+\0\0\x0b\0\0\0\x7c\x03\0\0\0\0\0\0\x0a\0\0\0\x0b\0\0\0\x88\x03\0\0\0\0\0\0\x0a\
+\0\0\0\x0b\0\0\0\xa0\x03\0\0\0\0\0\0\0\0\0\0\x0c\0\0\0\x2c\0\0\0\0\0\0\0\0\0\0\
+\0\x09\0\0\0\x3c\0\0\0\0\0\0\0\0\0\0\0\x0a\0\0\0\x50\0\0\0\0\0\0\0\0\0\0\0\x09\
+\0\0\0\x60\0\0\0\0\0\0\0\0\0\0\0\x09\0\0\0\x70\0\0\0\0\0\0\0\0\0\0\0\x09\0\0\0\
+\x80\0\0\0\0\0\0\0\0\0\0\0\x09\0\0\0\x90\0\0\0\0\0\0\0\0\0\0\0\x09\0\0\0\xa0\0\
+\0\0\0\0\0\0\0\0\0\0\x09\0\0\0\xb0\0\0\0\0\0\0\0\0\0\0\0\x09\0\0\0\xc0\0\0\0\0\
+\0\0\0\0\0\0\0\x09\0\0\0\xd0\0\0\0\0\0\0\0\0\0\0\0\x09\0\0\0\xe8\0\0\0\0\0\0\0\
+\0\0\0\0\x0a\0\0\0\xf8\0\0\0\0\0\0\0\0\0\0\0\x0a\0\0\0\x08\x01\0\0\0\0\0\0\0\0\
+\0\0\x0a\0\0\0\x18\x01\0\0\0\0\0\0\0\0\0\0\x0a\0\0\0\x28\x01\0\0\0\0\0\0\0\0\0\
+\0\x0a\0\0\0\x38\x01\0\0\0\0\0\0\0\0\0\0\x0a\0\0\0\x48\x01\0\0\0\0\0\0\0\0\0\0\
+\x0a\0\0\0\x58\x01\0\0\0\0\0\0\0\0\0\0\x0a\0\0\0\x68\x01\0\0\0\0\0\0\0\0\0\0\
+\x0a\0\0\0\x78\x01\0\0\0\0\0\0\0\0\0\0\x0a\0\0\0\x94\x01\0\0\0\0\0\0\0\0\0\0\
+\x09\0\0\0\xa4\x01\0\0\0\0\0\0\0\0\0\0\x09\0\0\0\xb4\x01\0\0\0\0\0\0\0\0\0\0\
+\x09\0\0\0\xc4\x01\0\0\0\0\0\0\0\0\0\0\x09\0\0\0\xd4\x01\0\0\0\0\0\0\0\0\0\0\
+\x09\0\0\0\xe4\x01\0\0\0\0\0\0\0\0\0\0\x09\0\0\0\xf4\x01\0\0\0\0\0\0\0\0\0\0\
+\x09\0\0\0\x0c\x02\0\0\0\0\0\0\0\0\0\0\x0a\0\0\0\x1c\x02\0\0\0\0\0\0\0\0\0\0\
+\x0a\0\0\0\x2c\x02\0\0\0\0\0\0\0\0\0\0\x0a\0\0\0\x3c\x02\0\0\0\0\0\0\0\0\0\0\
+\x0a\0\0\0\x4c\x02\0\0\0\0\0\0\0\0\0\0\x0a\0\0\0\x5c\x02\0\0\0\0\0\0\0\0\0\0\
+\x0a\0\0\0\x6c\x02\0\0\0\0\0\0\0\0\0\0\x0a\0\0\0\x7c\x02\0\0\0\0\0\0\0\0\0\0\
+\x0a\0\0\0\x8c\x02\0\0\0\0\0\0\0\0\0\0\x0a\0\0\0\x9c\x02\0\0\0\0\0\0\0\0\0\0\
+\x0a\0\0\0\xac\x02\0\0\0\0\0\0\0\0\0\0\x0a\0\0\0\x3d\x3e\x30\x31\x32\x33\x3c\0\
+\x2e\x74\x65\x78\x74\0\x2e\x72\x65\x6c\x2e\x42\x54\x46\x2e\x65\x78\x74\0\x64\
+\x75\x6d\x70\x5f\x62\x70\x66\x5f\x6d\x61\x70\x2e\x5f\x5f\x5f\x66\x6d\x74\0\x64\
+\x75\x6d\x70\x5f\x62\x70\x66\x5f\x70\x72\x6f\x67\x2e\x5f\x5f\x5f\x66\x6d\x74\0\
+\x64\x75\x6d\x70\x5f\x62\x70\x66\x5f\x6d\x61\x70\0\x2e\x72\x65\x6c\x69\x74\x65\
+\x72\x2f\x62\x70\x66\x5f\x6d\x61\x70\0\x64\x75\x6d\x70\x5f\x62\x70\x66\x5f\x70\
+\x72\x6f\x67\0\x2e\x72\x65\x6c\x69\x74\x65\x72\x2f\x62\x70\x66\x5f\x70\x72\x6f\
+\x67\0\x2e\x6c\x6c\x76\x6d\x5f\x61\x64\x64\x72\x73\x69\x67\0\x6c\x69\x63\x65\
+\x6e\x73\x65\0\x2e\x73\x74\x72\x74\x61\x62\0\x2e\x73\x79\x6d\x74\x61\x62\0\x2e\
+\x72\x6f\x64\x61\x74\x61\0\x2e\x72\x65\x6c\x2e\x42\x54\x46\0\x4c\x49\x43\x45\
+\x4e\x53\x45\0\x4c\x42\x42\x31\x5f\x34\0\x4c\x42\x42\x30\x5f\x34\0\x4c\x42\x42\
+\x31\x5f\x33\0\x4c\x42\x42\x30\x5f\x33\0\x64\x75\x6d\x70\x5f\x62\x70\x66\x5f\
+\x70\x72\x6f\x67\x2e\x5f\x5f\x5f\x66\x6d\x74\x2e\x32\0\x64\x75\x6d\x70\x5f\x62\
+\x70\x66\x5f\x6d\x61\x70\x2e\x5f\x5f\x5f\x66\x6d\x74\x2e\x31\0\0\0\0\0\0\0\0\0\
+\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\
+\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x01\0\0\0\x01\0\0\0\x06\0\0\0\0\
+\0\0\0\0\0\0\0\0\0\0\0\x40\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x04\0\
+\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x4e\0\0\0\x01\0\0\0\x06\0\0\0\0\0\0\0\0\0\0\0\0\0\
+\0\0\x40\0\0\0\0\0\0\0\0\x01\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x08\0\0\0\0\0\0\0\0\0\
+\0\0\0\0\0\0\x6d\0\0\0\x01\0\0\0\x06\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x40\x01\0\0\
+\0\0\0\0\x38\x01\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x08\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\
+\xa1\0\0\0\x01\0\0\0\x02\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x78\x02\0\0\0\0\0\0\x5f\
+\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x01\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x89\0\0\0\x01\
+\0\0\0\x03\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\xd7\x02\0\0\0\0\0\0\x04\0\0\0\0\0\0\0\
+\0\0\0\0\0\0\0\0\x01\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\xad\0\0\0\x01\0\0\0\0\0\0\0\
+\0\0\0\0\0\0\0\0\0\0\0\0\xdb\x02\0\0\0\0\0\0\xa0\x07\0\0\0\0\0\0\0\0\0\0\0\0\0\
+\0\x01\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x0b\0\0\0\x01\0\0\0\0\0\0\0\0\0\0\0\0\0\0\
+\0\0\0\0\0\x7b\x0a\0\0\0\0\0\0\xbc\x02\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x01\0\0\0\0\
+\0\0\0\0\0\0\0\0\0\0\0\x99\0\0\0\x02\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x38\
+\x0d\0\0\0\0\0\0\x68\x01\0\0\0\0\0\0\x0e\0\0\0\x0c\0\0\0\x08\0\0\0\0\0\0\0\x18\
+\0\0\0\0\0\0\0\x4a\0\0\0\x09\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\xa0\x0e\0\0\
+\0\0\0\0\x20\0\0\0\0\0\0\0\x08\0\0\0\x02\0\0\0\x08\0\0\0\0\0\0\0\x10\0\0\0\0\0\
+\0\0\x69\0\0\0\x09\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\xc0\x0e\0\0\0\0\0\0\
+\x20\0\0\0\0\0\0\0\x08\0\0\0\x03\0\0\0\x08\0\0\0\0\0\0\0\x10\0\0\0\0\0\0\0\xa9\
+\0\0\0\x09\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\xe0\x0e\0\0\0\0\0\0\x50\0\0\0\
+\0\0\0\0\x08\0\0\0\x06\0\0\0\x08\0\0\0\0\0\0\0\x10\0\0\0\0\0\0\0\x07\0\0\0\x09\
+\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x30\x0f\0\0\0\0\0\0\x70\x02\0\0\0\0\0\0\
+\x08\0\0\0\x07\0\0\0\x08\0\0\0\0\0\0\0\x10\0\0\0\0\0\0\0\x7b\0\0\0\x03\x4c\xff\
+\x6f\0\0\0\x80\0\0\0\0\0\0\0\0\0\0\0\0\xa0\x11\0\0\0\0\0\0\x07\0\0\0\0\0\0\0\0\
+\0\0\0\0\0\0\0\x01\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x91\0\0\0\x03\0\0\0\0\0\0\0\0\
+\0\0\0\0\0\0\0\0\0\0\0\xa7\x11\0\0\0\0\0\0\x03\x01\0\0\0\0\0\0\0\0\0\0\0\0\0\0\
+\x01\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0";
+
+	return 0;
+err:
+	bpf_object__destroy_skeleton(s);
+	return -1;
+}
+
+#endif /* __ITERATORS_BPF_SKEL_H__ */
-- 
2.23.0



* [PATCH bpf-next 3/3] bpf: Add kernel module with user mode driver that populates bpffs.
  2020-07-02 20:03 [PATCH bpf-next 0/3] bpf: Populate bpffs with map and prog iterators Alexei Starovoitov
  2020-07-02 20:03 ` [PATCH bpf-next 1/3] bpf: Factor out bpf_link_get_by_id() helper Alexei Starovoitov
  2020-07-02 20:03 ` [PATCH bpf-next 2/3] bpf: Add BPF program and map iterators as built-in BPF programs Alexei Starovoitov
@ 2020-07-02 20:03 ` Alexei Starovoitov
  2020-07-02 22:14   ` kernel test robot
                     ` (3 more replies)
  2020-07-03 11:18 ` [PATCH bpf-next 0/3] bpf: Populate bpffs with map and prog iterators Toke Høiland-Jørgensen
  3 siblings, 4 replies; 17+ messages in thread
From: Alexei Starovoitov @ 2020-07-02 20:03 UTC (permalink / raw)
  To: torvalds; +Cc: davem, daniel, ebiederm, netdev, bpf, kernel-team

From: Alexei Starovoitov <ast@kernel.org>

Add kernel module with user mode driver that populates bpffs with
BPF iterators.

$ mount bpffs /sys/fs/bpf/ -t bpf
$ ls -la /sys/fs/bpf/
total 4
drwxrwxrwt  2 root root    0 Jul  2 00:27 .
drwxr-xr-x 19 root root 4096 Jul  2 00:09 ..
-rw-------  1 root root    0 Jul  2 00:27 maps
-rw-------  1 root root    0 Jul  2 00:27 progs

The user mode driver will load the BPF Type Format, create and populate BPF
maps, load two BPF programs, attach them as BPF iterators, and finally send
the two bpf_link IDs back to the kernel.
The kernel will pin the two bpf_links into the newly mounted bpffs instance
under the names "progs" and "maps". These two files become human readable.

$ cat /sys/fs/bpf/progs
  id name            pages attached
  11    dump_bpf_map     1 bpf_iter_bpf_map
  12   dump_bpf_prog     1 bpf_iter_bpf_prog
  27 test_pkt_access     1
  32       test_main     1 test_pkt_access test_pkt_access
  33   test_subprog1     1 test_pkt_access_subprog1 test_pkt_access
  34   test_subprog2     1 test_pkt_access_subprog2 test_pkt_access
  35   test_subprog3     1 test_pkt_access_subprog3 test_pkt_access
  36 new_get_skb_len     1 get_skb_len test_pkt_access
  37 new_get_skb_ifi     1 get_skb_ifindex test_pkt_access
  38 new_get_constan     1 get_constant test_pkt_access

The BPF program dump_bpf_prog() in iterators.bpf.c is printing this data about
all BPF programs currently loaded in the system. This information is unstable
and will change from kernel to kernel.
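
At its core, the handoff from the user mode driver back to the kernel is
just two bpf_link IDs written over a pipe. A condensed sketch of the driver
side, where 'link' is an attached skeleton link and 'to_kernel' is the
write end of the pipe (the full version is in iterators/iterators.c below):

	/* sketch: report one attached link's ID to the kernel over the pipe */
	struct bpf_link_info info = {};
	__u32 len = sizeof(info);

	if (!bpf_obj_get_info_by_fd(bpf_link__fd(link), &info, &len))
		write(to_kernel, &info.id, sizeof(info.id));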

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
---
 init/Kconfig                                  |  2 +
 kernel/bpf/Makefile                           |  1 +
 kernel/bpf/inode.c                            | 75 ++++++++++++++++
 kernel/bpf/preload/Kconfig                    | 15 ++++
 kernel/bpf/preload/Makefile                   | 21 +++++
 kernel/bpf/preload/bpf_preload.h              | 15 ++++
 kernel/bpf/preload/bpf_preload_kern.c         | 87 +++++++++++++++++++
 kernel/bpf/preload/bpf_preload_umd_blob.S     |  7 ++
 .../preload/iterators/bpf_preload_common.h    |  8 ++
 kernel/bpf/preload/iterators/iterators.c      | 81 +++++++++++++++++
 10 files changed, 312 insertions(+)
 create mode 100644 kernel/bpf/preload/Kconfig
 create mode 100644 kernel/bpf/preload/Makefile
 create mode 100644 kernel/bpf/preload/bpf_preload.h
 create mode 100644 kernel/bpf/preload/bpf_preload_kern.c
 create mode 100644 kernel/bpf/preload/bpf_preload_umd_blob.S
 create mode 100644 kernel/bpf/preload/iterators/bpf_preload_common.h
 create mode 100644 kernel/bpf/preload/iterators/iterators.c

diff --git a/init/Kconfig b/init/Kconfig
index a46aa8f3174d..278975a5daf2 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -2313,3 +2313,5 @@ config ARCH_HAS_SYNC_CORE_BEFORE_USERMODE
 # <asm/syscall_wrapper.h>.
 config ARCH_HAS_SYSCALL_WRAPPER
 	def_bool n
+
+source "kernel/bpf/preload/Kconfig"
diff --git a/kernel/bpf/Makefile b/kernel/bpf/Makefile
index e6eb9c0402da..19e137aae40e 100644
--- a/kernel/bpf/Makefile
+++ b/kernel/bpf/Makefile
@@ -29,3 +29,4 @@ ifeq ($(CONFIG_BPF_JIT),y)
 obj-$(CONFIG_BPF_SYSCALL) += bpf_struct_ops.o
 obj-${CONFIG_BPF_LSM} += bpf_lsm.o
 endif
+obj-$(CONFIG_BPF_PRELOAD) += preload/
diff --git a/kernel/bpf/inode.c b/kernel/bpf/inode.c
index fb878ba3f22f..8d33edd5c69c 100644
--- a/kernel/bpf/inode.c
+++ b/kernel/bpf/inode.c
@@ -20,6 +20,7 @@
 #include <linux/filter.h>
 #include <linux/bpf.h>
 #include <linux/bpf_trace.h>
+#include "preload/bpf_preload.h"
 
 enum bpf_type {
 	BPF_TYPE_UNSPEC	= 0,
@@ -409,6 +410,26 @@ static const struct inode_operations bpf_dir_iops = {
 	.unlink		= simple_unlink,
 };
 
+static int bpf_link_pin_kernel(struct dentry *parent,
+			       const char *name, struct bpf_link *link)
+{
+	umode_t mode = S_IFREG | S_IRUSR | S_IWUSR;
+	struct dentry *dentry;
+	int ret;
+
+	inode_lock(parent->d_inode);
+	dentry = lookup_one_len(name, parent, strlen(name));
+	if (IS_ERR(dentry)) {
+		inode_unlock(parent->d_inode);
+		return PTR_ERR(dentry);
+	}
+	ret = bpf_mkobj_ops(dentry, mode, link, &bpf_link_iops,
+			    &bpf_iter_fops);
+	dput(dentry);
+	inode_unlock(parent->d_inode);
+	return ret;
+}
+
 static int bpf_obj_do_pin(const char __user *pathname, void *raw,
 			  enum bpf_type type)
 {
@@ -638,6 +659,57 @@ static int bpf_parse_param(struct fs_context *fc, struct fs_parameter *param)
 	return 0;
 }
 
+struct bpf_preload_ops bpf_preload_ops = { .info.driver_name = "bpf_preload" };
+EXPORT_SYMBOL_GPL(bpf_preload_ops);
+
+static int populate_bpffs(struct dentry *parent)
+{
+	struct bpf_link *links[BPF_PRELOAD_LINKS] = {};
+	u32 link_id[BPF_PRELOAD_LINKS] = {};
+	int err = 0, i;
+
+	mutex_lock(&bpf_preload_ops.lock);
+	if (!bpf_preload_ops.do_preload) {
+		mutex_unlock(&bpf_preload_ops.lock);
+		request_module("bpf_preload");
+		mutex_lock(&bpf_preload_ops.lock);
+
+		if (!bpf_preload_ops.do_preload) {
+			pr_err("bpf_preload module is missing.\n"
+			       "bpffs will not have iterators.\n");
+			goto out;
+		}
+	}
+
+	if (!bpf_preload_ops.info.tgid) {
+		err = bpf_preload_ops.do_preload(link_id);
+		if (err)
+			goto out;
+		for (i = 0; i < BPF_PRELOAD_LINKS; i++) {
+			links[i] = bpf_link_by_id(link_id[i]);
+			if (IS_ERR(links[i])) {
+				err = PTR_ERR(links[i]);
+				goto out;
+			}
+		}
+		err = bpf_link_pin_kernel(parent, "maps", links[0]);
+		if (err)
+			goto out;
+		err = bpf_link_pin_kernel(parent, "progs", links[1]);
+		if (err)
+			goto out;
+		err = bpf_preload_ops.do_finish();
+		if (err)
+			goto out;
+	}
+out:
+	mutex_unlock(&bpf_preload_ops.lock);
+	for (i = 0; i < BPF_PRELOAD_LINKS && err; i++)
+		if (!IS_ERR_OR_NULL(links[i]))
+			bpf_link_put(links[i]);
+	return err;
+}
+
 static int bpf_fill_super(struct super_block *sb, struct fs_context *fc)
 {
 	static const struct tree_descr bpf_rfiles[] = { { "" } };
@@ -656,6 +728,7 @@ static int bpf_fill_super(struct super_block *sb, struct fs_context *fc)
 	inode->i_mode &= ~S_IALLUGO;
 	inode->i_mode |= S_ISVTX | opts->mode;
 
+	populate_bpffs(sb->s_root);
 	return 0;
 }
 
@@ -705,6 +778,8 @@ static int __init bpf_init(void)
 {
 	int ret;
 
+	mutex_init(&bpf_preload_ops.lock);
+
 	ret = sysfs_create_mount_point(fs_kobj, "bpf");
 	if (ret)
 		return ret;
diff --git a/kernel/bpf/preload/Kconfig b/kernel/bpf/preload/Kconfig
new file mode 100644
index 000000000000..b737ce4c2bab
--- /dev/null
+++ b/kernel/bpf/preload/Kconfig
@@ -0,0 +1,15 @@
+# SPDX-License-Identifier: GPL-2.0-only
+menuconfig BPF_PRELOAD
+	bool "Load kernel specific BPF programs at kernel boot time (BPF_PRELOAD)"
+	depends on BPF
+	help
+	  tbd
+
+if BPF_PRELOAD
+config BPF_PRELOAD_UMD
+	tristate "bpf_preload kernel module with user mode driver"
+	depends on CC_CAN_LINK_STATIC
+	default m
+	help
+	  This builds bpf_preload kernel module with embedded user mode driver
+endif
diff --git a/kernel/bpf/preload/Makefile b/kernel/bpf/preload/Makefile
new file mode 100644
index 000000000000..191d82209842
--- /dev/null
+++ b/kernel/bpf/preload/Makefile
@@ -0,0 +1,21 @@
+# SPDX-License-Identifier: GPL-2.0
+
+LIBBPF := $(srctree)/../../tools/lib/bpf
+userccflags += -I $(srctree)/tools/include/ -I $(srctree)/tools/include/uapi -I $(LIBBPF) \
+	-I $(srctree)/tools/lib/ \
+	-I $(srctree)/kernel/bpf/preload/iterators/ -Wno-int-conversion \
+	-DCOMPAT_NEED_REALLOCARRAY
+
+userprogs := bpf_preload_umd
+
+LIBBPF_O := $(LIBBPF)/bpf.o $(LIBBPF)/libbpf.o $(LIBBPF)/btf.o $(LIBBPF)/libbpf_errno.o \
+	$(LIBBPF)/str_error.o $(LIBBPF)/hashmap.o $(LIBBPF)/libbpf_probes.o
+
+bpf_preload_umd-objs := iterators/iterators.o $(LIBBPF_O)
+
+userldflags += -lelf -lz
+
+$(obj)/bpf_preload_umd_blob.o: $(obj)/bpf_preload_umd
+
+obj-$(CONFIG_BPF_PRELOAD_UMD) += bpf_preload.o
+bpf_preload-objs += bpf_preload_kern.o bpf_preload_umd_blob.o
diff --git a/kernel/bpf/preload/bpf_preload.h b/kernel/bpf/preload/bpf_preload.h
new file mode 100644
index 000000000000..0d852574c02a
--- /dev/null
+++ b/kernel/bpf/preload/bpf_preload.h
@@ -0,0 +1,15 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _BPF_PRELOAD_H
+#define _BPF_PRELOAD_H
+
+#include <linux/usermode_driver.h>
+
+struct bpf_preload_ops {
+        struct umd_info info;
+        struct mutex lock;
+	int (*do_preload)(u32 *);
+	int (*do_finish)(void);
+};
+extern struct bpf_preload_ops bpf_preload_ops;
+#define BPF_PRELOAD_LINKS 2
+#endif
diff --git a/kernel/bpf/preload/bpf_preload_kern.c b/kernel/bpf/preload/bpf_preload_kern.c
new file mode 100644
index 000000000000..bfcd1fb3891c
--- /dev/null
+++ b/kernel/bpf/preload/bpf_preload_kern.c
@@ -0,0 +1,87 @@
+// SPDX-License-Identifier: GPL-2.0
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/pid.h>
+#include <linux/fs.h>
+#include <linux/sched/signal.h>
+#include "bpf_preload.h"
+#include "iterators/bpf_preload_common.h"
+
+extern char bpf_preload_umd_start;
+extern char bpf_preload_umd_end;
+
+static int do_preload(u32 *link_id)
+{
+	int magic = BPF_PRELOAD_START;
+	struct pid *tgid;
+	int id, i, err;
+	loff_t pos;
+	ssize_t n;
+
+	err = fork_usermode_driver(&bpf_preload_ops.info);
+	if (err)
+		return err;
+	tgid = bpf_preload_ops.info.tgid;
+
+	/* send the start magic to let UMD proceed with loading BPF progs */
+	n = __kernel_write(bpf_preload_ops.info.pipe_to_umh,
+			   &magic, sizeof(magic), &pos);
+	if (n != sizeof(magic))
+		return -EPIPE;
+
+	/* receive bpf_link IDs from UMD */
+	pos = 0;
+	for (i = 0; i < BPF_PRELOAD_LINKS; i++) {
+		n = kernel_read(bpf_preload_ops.info.pipe_from_umh,
+				&id, sizeof(id), &pos);
+		if (n != sizeof(id))
+			return -EPIPE;
+		link_id[i] = id;
+	}
+	return 0;
+}
+
+static int do_finish(void)
+{
+	int magic = BPF_PRELOAD_END;
+	struct pid *tgid;
+	loff_t pos;
+	ssize_t n;
+
+	/* send the last magic to UMD. It will do a normal exit. */
+	n = __kernel_write(bpf_preload_ops.info.pipe_to_umh,
+			   &magic, sizeof(magic), &pos);
+	if (n != sizeof(magic))
+		return -EPIPE;
+	tgid = bpf_preload_ops.info.tgid;
+	wait_event(tgid->wait_pidfd, thread_group_exited(tgid));
+	bpf_preload_ops.info.tgid = NULL;
+	return 0;
+}
+
+static int __init load_umd(void)
+{
+	int err;
+
+	err = umd_load_blob(&bpf_preload_ops.info, &bpf_preload_umd_start,
+			    &bpf_preload_umd_end - &bpf_preload_umd_start);
+	if (err)
+		return err;
+	bpf_preload_ops.do_preload = do_preload;
+	bpf_preload_ops.do_finish = do_finish;
+	return err;
+}
+
+static void __exit fini_umd(void)
+{
+	bpf_preload_ops.do_preload = NULL;
+	bpf_preload_ops.do_finish = NULL;
+	/* kill UMD in case it's still there due to earlier error */
+	kill_pid(bpf_preload_ops.info.tgid, SIGKILL, 1);
+	bpf_preload_ops.info.tgid = NULL;
+	umd_unload_blob(&bpf_preload_ops.info);
+}
+late_initcall(load_umd);
+module_exit(fini_umd);
+MODULE_LICENSE("GPL");
diff --git a/kernel/bpf/preload/bpf_preload_umd_blob.S b/kernel/bpf/preload/bpf_preload_umd_blob.S
new file mode 100644
index 000000000000..d0fe58c0734a
--- /dev/null
+++ b/kernel/bpf/preload/bpf_preload_umd_blob.S
@@ -0,0 +1,7 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+	.section .init.rodata, "a"
+	.global bpf_preload_umd_start
+bpf_preload_umd_start:
+	.incbin "bpf_preload_umd"
+	.global bpf_preload_umd_end
+bpf_preload_umd_end:
diff --git a/kernel/bpf/preload/iterators/bpf_preload_common.h b/kernel/bpf/preload/iterators/bpf_preload_common.h
new file mode 100644
index 000000000000..f2e77711cd95
--- /dev/null
+++ b/kernel/bpf/preload/iterators/bpf_preload_common.h
@@ -0,0 +1,8 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _BPF_PRELOAD_COMMON_H
+#define _BPF_PRELOAD_COMMON_H
+
+#define BPF_PRELOAD_START 0x5555
+#define BPF_PRELOAD_END 0xAAAA
+
+#endif
diff --git a/kernel/bpf/preload/iterators/iterators.c b/kernel/bpf/preload/iterators/iterators.c
new file mode 100644
index 000000000000..74f23580b25f
--- /dev/null
+++ b/kernel/bpf/preload/iterators/iterators.c
@@ -0,0 +1,81 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2020 Facebook */
+#include <argp.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <unistd.h>
+#include <fcntl.h>
+#include <bpf/libbpf.h>
+#include <bpf/bpf.h>
+#include <sys/mount.h>
+#include "iterators.skel.h"
+#include "bpf_preload_common.h"
+
+int to_kernel = -1;
+int from_kernel = 0;
+
+static int send_id_to_kernel(struct bpf_link *link)
+{
+	struct bpf_link_info info = {};
+	__u32 info_len = sizeof(info);
+	int err;
+
+	err = bpf_obj_get_info_by_fd(bpf_link__fd(link), &info, &info_len);
+	if (err)
+		return err;
+	if (write(to_kernel, &info.id, sizeof(info.id)) != sizeof(info.id))
+		return -EPIPE;
+	return 0;
+}
+
+int main(int argc, char **argv)
+{
+	struct iterators_bpf *skel;
+	int err, magic;
+	int debug_fd;
+
+	debug_fd = open("/dev/console", O_WRONLY | O_NOCTTY | O_CLOEXEC);
+	if (debug_fd < 0)
+		return -1;
+	to_kernel = dup(1);
+	close(1);
+	dup(debug_fd);
+	/* now fd 1 (stdout) points to /dev/console */
+
+	read(from_kernel, &magic, sizeof(magic));
+	if (magic != BPF_PRELOAD_START) {
+		printf("bad start magic %d\n", magic);
+		return -1;
+	}
+
+	/* libbpf opens BPF object and loads it into the kernel */
+	skel = iterators_bpf__open_and_load();
+	if (!skel)
+		return -1;
+
+	err = iterators_bpf__attach(skel);
+	if (err)
+		goto cleanup;
+
+	/* send two bpf_link IDs to the kernel */
+	err = send_id_to_kernel(skel->links.dump_bpf_map);
+	if (err)
+		goto cleanup;
+	err = send_id_to_kernel(skel->links.dump_bpf_prog);
+	if (err)
+		goto cleanup;
+
+	/* The kernel will proceed with pinning the links in bpffs.
+	 * UMD will wait on read from pipe.
+	 */
+	read(from_kernel, &magic, sizeof(magic));
+	if (magic != BPF_PRELOAD_END) {
+		printf("bad final magic %d\n", magic);
+		err = -EINVAL;
+	}
+cleanup:
+	iterators_bpf__destroy(skel);
+
+	return err != 0;
+}
-- 
2.23.0



* Re: [PATCH bpf-next 3/3] bpf: Add kernel module with user mode driver that populates bpffs.
  2020-07-02 20:03 ` [PATCH bpf-next 3/3] bpf: Add kernel module with user mode driver that populates bpffs Alexei Starovoitov
@ 2020-07-02 22:14   ` kernel test robot
  2020-07-02 22:53   ` kernel test robot
                     ` (2 subsequent siblings)
  3 siblings, 0 replies; 17+ messages in thread
From: kernel test robot @ 2020-07-02 22:14 UTC (permalink / raw)
  To: Alexei Starovoitov, torvalds
  Cc: kbuild-all, davem, daniel, ebiederm, netdev, bpf, kernel-team

Hi Alexei,

I love your patch! Yet something to improve:

[auto build test ERROR on bpf-next/master]

url:    https://github.com/0day-ci/linux/commits/Alexei-Starovoitov/bpf-Populate-bpffs-with-map-and-prog-iterators/20200703-040602
base:   https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git master
config: alpha-allyesconfig (attached as .config)
compiler: alpha-linux-gcc (GCC) 9.3.0
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # save the attached .config to linux build tree
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-9.3.0 make.cross ARCH=alpha 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

   In file included from kernel/bpf/inode.c:23:
>> kernel/bpf/preload/bpf_preload.h:5:10: fatal error: linux/usermode_driver.h: No such file or directory
       5 | #include <linux/usermode_driver.h>
         |          ^~~~~~~~~~~~~~~~~~~~~~~~~
   compilation terminated.

vim +5 kernel/bpf/preload/bpf_preload.h

     4	
   > 5	#include <linux/usermode_driver.h>
     6	

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org


* Re: [PATCH bpf-next 3/3] bpf: Add kernel module with user mode driver that populates bpffs.
  2020-07-02 20:03 ` [PATCH bpf-next 3/3] bpf: Add kernel module with user mode driver that populates bpffs Alexei Starovoitov
  2020-07-02 22:14   ` kernel test robot
@ 2020-07-02 22:53   ` kernel test robot
  2020-07-03  1:05   ` Linus Torvalds
  2020-07-09  3:15   ` Andrii Nakryiko
  3 siblings, 0 replies; 17+ messages in thread
From: kernel test robot @ 2020-07-02 22:53 UTC (permalink / raw)
  To: Alexei Starovoitov, torvalds
  Cc: kbuild-all, davem, daniel, ebiederm, netdev, bpf, kernel-team

Hi Alexei,

I love your patch! Yet something to improve:

[auto build test ERROR on bpf-next/master]

url:    https://github.com/0day-ci/linux/commits/Alexei-Starovoitov/bpf-Populate-bpffs-with-map-and-prog-iterators/20200703-040602
base:   https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git master
config: um-allmodconfig (attached as .config)
compiler: gcc-9 (Debian 9.3.0-13) 9.3.0
reproduce (this is a W=1 build):
        # save the attached .config to linux build tree
        make W=1 ARCH=um 

If you fix the issue, kindly add the following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

>> make[4]: *** No rule to make target 'kernel/bpf/preload/../../tools/lib/bpf/bpf.c', needed by 'kernel/bpf/preload/../../tools/lib/bpf/bpf.o'.
>> make[4]: *** No rule to make target 'kernel/bpf/preload/../../tools/lib/bpf/libbpf.c', needed by 'kernel/bpf/preload/../../tools/lib/bpf/libbpf.o'.
>> make[4]: *** No rule to make target 'kernel/bpf/preload/../../tools/lib/bpf/btf.c', needed by 'kernel/bpf/preload/../../tools/lib/bpf/btf.o'.
>> make[4]: *** No rule to make target 'kernel/bpf/preload/../../tools/lib/bpf/libbpf_errno.c', needed by 'kernel/bpf/preload/../../tools/lib/bpf/libbpf_errno.o'.
>> make[4]: *** No rule to make target 'kernel/bpf/preload/../../tools/lib/bpf/str_error.c', needed by 'kernel/bpf/preload/../../tools/lib/bpf/str_error.o'.
>> make[4]: *** No rule to make target 'kernel/bpf/preload/../../tools/lib/bpf/hashmap.c', needed by 'kernel/bpf/preload/../../tools/lib/bpf/hashmap.o'.
>> make[4]: *** No rule to make target 'kernel/bpf/preload/../../tools/lib/bpf/libbpf_probes.c', needed by 'kernel/bpf/preload/../../tools/lib/bpf/libbpf_probes.o'.
   make[4]: *** [scripts/Makefile.build:281: kernel/bpf/preload/bpf_preload_kern.o] Error 1
   make[4]: Target '__build' not remade because of errors.

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 23003 bytes --]

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH bpf-next 3/3] bpf: Add kernel module with user mode driver that populates bpffs.
  2020-07-02 20:03 ` [PATCH bpf-next 3/3] bpf: Add kernel module with user mode driver that populates bpffs Alexei Starovoitov
  2020-07-02 22:14   ` kernel test robot
  2020-07-02 22:53   ` kernel test robot
@ 2020-07-03  1:05   ` Linus Torvalds
  2020-07-03  2:35     ` Alexei Starovoitov
  2020-07-09  3:15   ` Andrii Nakryiko
  3 siblings, 1 reply; 17+ messages in thread
From: Linus Torvalds @ 2020-07-03  1:05 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: David Miller, Daniel Borkmann, Eric W. Biederman, Netdev, bpf,
	kernel-team

On Thu, Jul 2, 2020 at 1:03 PM Alexei Starovoitov
<alexei.starovoitov@gmail.com> wrote:
>
> The BPF program dump_bpf_prog() in iterators.bpf.c is printing this data about
> all BPF programs currently loaded in the system. This information is unstable
> and will change from kernel to kernel.

If so, it should probably be in debugfs, not in /sys/fs/

                Linus

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH bpf-next 3/3] bpf: Add kernel module with user mode driver that populates bpffs.
  2020-07-03  1:05   ` Linus Torvalds
@ 2020-07-03  2:35     ` Alexei Starovoitov
  2020-07-03  3:34       ` Linus Torvalds
  0 siblings, 1 reply; 17+ messages in thread
From: Alexei Starovoitov @ 2020-07-03  2:35 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: David Miller, Daniel Borkmann, Eric W. Biederman, Netdev, bpf,
	kernel-team

On Thu, Jul 02, 2020 at 06:05:29PM -0700, Linus Torvalds wrote:
> On Thu, Jul 2, 2020 at 1:03 PM Alexei Starovoitov
> <alexei.starovoitov@gmail.com> wrote:
> >
> > The BPF program dump_bpf_prog() in iterators.bpf.c is printing this data about
> > all BPF programs currently loaded in the system. This information is unstable
> > and will change from kernel to kernel.
> 
> If so, it should probably be in debugfs, not in /sys/fs/

/sys/fs/bpf/ is just a historic location where we chose to mount bpffs.
iirc iproute2 picked that location and systemd followed.
bpffs itself is not a single mount and does not share anything with sysfs or debugfs.
By default it's not mounted anywhere.
Every instance is independent and can contain only pinned BPF objects:
progs, maps, links.
Folks are using bpffs to store BPF objects that need to survive the lifetime
of the process that created them.
Another use is to share a BPF object with another process.
For example, a firewall service can load a bpf prog, attach it to a netdev, and
pin a bpf map with firewall rules at some path like /var/my_firewall/bpf/my_fw_rules.
Then another process can do map lookup/delete on that map if it can access the path.
I've seen such a use case in production.
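
As a rough sketch of that consumer side (the path and the key/value layout
are made up for the example, error handling trimmed):

#include <bpf/bpf.h>
#include <stdio.h>

int main(void)
{
	__u32 key = 0, verdict = 0;
	int map_fd;

	/* open the map that the firewall service pinned in bpffs;
	 * the path is hypothetical, matching the example above */
	map_fd = bpf_obj_get("/var/my_firewall/bpf/my_fw_rules");
	if (map_fd < 0)
		return 1;

	/* read one rule/verdict out of the shared map */
	if (!bpf_map_lookup_elem(map_fd, &key, &verdict))
		printf("rule 0 -> verdict %u\n", verdict);
	return 0;
}
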
As far as preloading the "progs" and "maps" iterators into bpffs goes, the
following works just as well:
$ mkdir /aa
$ mount bpffs /aa -t bpf
$ ll /aa
total 4
drwxrwxrwt  2 root root    0 Jul  2 00:27 .
drwxr-xr-x 19 root root 4096 Jul  2 00:09 ..
-rw-------  1 root root    0 Jul  2 00:27 maps
-rw-------  1 root root    0 Jul  2 00:27 progs
$ cat /aa/progs
  id name            pages attached
  17    dump_bpf_map     1 bpf_iter_bpf_map
  18   dump_bpf_prog     1 bpf_iter_bpf_prog

Maybe I misunderstood what you meant?

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH bpf-next 3/3] bpf: Add kernel module with user mode driver that populates bpffs.
  2020-07-03  2:35     ` Alexei Starovoitov
@ 2020-07-03  3:34       ` Linus Torvalds
  2020-07-04  0:06         ` Alexei Starovoitov
  0 siblings, 1 reply; 17+ messages in thread
From: Linus Torvalds @ 2020-07-03  3:34 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: David Miller, Daniel Borkmann, Eric W. Biederman, Netdev, bpf,
	kernel-team

On Thu, Jul 2, 2020 at 7:35 PM Alexei Starovoitov
<alexei.starovoitov@gmail.com> wrote:
>
> On Thu, Jul 02, 2020 at 06:05:29PM -0700, Linus Torvalds wrote:
> > On Thu, Jul 2, 2020 at 1:03 PM Alexei Starovoitov
> > <alexei.starovoitov@gmail.com> wrote:
> > >
> > > The BPF program dump_bpf_prog() in iterators.bpf.c is printing this data about
> > > all BPF programs currently loaded in the system. This information is unstable
> > > and will change from kernel to kernel.
> >
> > If so, it should probably be in debugfs, not in /sys/fs/
>
> /sys/fs/bpf/ is just a historic location where we chose to mount bpffs.

It's more the "information is unstable and will change from kernel to kernel"

No such interfaces exist. If people start parsing it and depending on it,
it's suddenly an ABI, whether you want to or not (and whether you
documented it or not).

At least if it's in /sys/kernel/debug/bpf/ or something, it's less
likely that anybody will do that.

               Linus

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH bpf-next 0/3] bpf: Populate bpffs with map and prog iterators
  2020-07-02 20:03 [PATCH bpf-next 0/3] bpf: Populate bpffs with map and prog iterators Alexei Starovoitov
                   ` (2 preceding siblings ...)
  2020-07-02 20:03 ` [PATCH bpf-next 3/3] bpf: Add kernel module with user mode driver that populates bpffs Alexei Starovoitov
@ 2020-07-03 11:18 ` Toke Høiland-Jørgensen
  2020-07-04  0:07   ` Alexei Starovoitov
  3 siblings, 1 reply; 17+ messages in thread
From: Toke Høiland-Jørgensen @ 2020-07-03 11:18 UTC (permalink / raw)
  To: Alexei Starovoitov, torvalds
  Cc: davem, daniel, ebiederm, netdev, bpf, kernel-team

> The user mode driver will load BPF Type Formats, create BPF maps, populate BPF
> maps, load two BPF programs, attach them to BPF iterators, and finally send two
> bpf_link IDs back to the kernel.
> The kernel will pin two bpf_links into newly mounted bpffs instance under
> names "progs" and "maps". These two files become human readable.
>
> $ cat /sys/fs/bpf/progs
>   id name            pages attached
>   11    dump_bpf_map     1 bpf_iter_bpf_map
>   12   dump_bpf_prog     1 bpf_iter_bpf_prog
>   27 test_pkt_access     1
>   32       test_main     1 test_pkt_access test_pkt_access
>   33   test_subprog1     1 test_pkt_access_subprog1 test_pkt_access
>   34   test_subprog2     1 test_pkt_access_subprog2 test_pkt_access
>   35   test_subprog3     1 test_pkt_access_subprog3 test_pkt_access
>   36 new_get_skb_len     1 get_skb_len test_pkt_access
>   37 new_get_skb_ifi     1 get_skb_ifindex test_pkt_access
>   38 new_get_constan     1 get_constant test_pkt_access

Do the iterators respect namespace boundaries? Or will I see all
programs/maps on the host if I cat the file inside a container?

> Few interesting observations:
> - though bpffs comes with two human readable files "progs" and "maps" they
>   can be removed. 'rm -f /sys/fs/bpf/progs' will remove bpf_link and kernel
>   will automatically unload corresponding BPF progs, maps, BTFs.

Is there any way to get the files back if one does this by mistake
(other than re-mounting the bpffs)?

-Toke


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH bpf-next 3/3] bpf: Add kernel module with user mode driver that populates bpffs.
  2020-07-03  3:34       ` Linus Torvalds
@ 2020-07-04  0:06         ` Alexei Starovoitov
  0 siblings, 0 replies; 17+ messages in thread
From: Alexei Starovoitov @ 2020-07-04  0:06 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: David Miller, Daniel Borkmann, Eric W. Biederman, Netdev, bpf,
	kernel-team

On Thu, Jul 02, 2020 at 08:34:17PM -0700, Linus Torvalds wrote:
> On Thu, Jul 2, 2020 at 7:35 PM Alexei Starovoitov
> <alexei.starovoitov@gmail.com> wrote:
> >
> > On Thu, Jul 02, 2020 at 06:05:29PM -0700, Linus Torvalds wrote:
> > > On Thu, Jul 2, 2020 at 1:03 PM Alexei Starovoitov
> > > <alexei.starovoitov@gmail.com> wrote:
> > > >
> > > > The BPF program dump_bpf_prog() in iterators.bpf.c is printing this data about
> > > > all BPF programs currently loaded in the system. This information is unstable
> > > > and will change from kernel to kernel.
> > >
> > > If so, it should probably be in debugfs, not in /sys/fs/
> >
> > /sys/fs/bpf/ is just a historic location where we chose to mount bpffs.
> 
> It's more the "information is unstable and will change from kernel to kernel"
> 
> No such interfaces exist. If people start parsing it and depending on it,
> it's suddenly an ABI, whether you want to or not (and whether you
> documented it or not).
> 
> At least if it's in /sys/kernel/debug/bpf/ or something, it's less
> likely that anybody will do that.

I think I will go with a "debug" mount option then.
By default nothing will be preloaded, so de facto /sys/fs/bpf/ will stay empty.

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH bpf-next 0/3] bpf: Populate bpffs with map and prog iterators
  2020-07-03 11:18 ` [PATCH bpf-next 0/3] bpf: Populate bpffs with map and prog iterators Toke Høiland-Jørgensen
@ 2020-07-04  0:07   ` Alexei Starovoitov
  2020-07-04 10:48     ` Toke Høiland-Jørgensen
  0 siblings, 1 reply; 17+ messages in thread
From: Alexei Starovoitov @ 2020-07-04  0:07 UTC (permalink / raw)
  To: Toke Høiland-Jørgensen
  Cc: torvalds, davem, daniel, ebiederm, netdev, bpf, kernel-team

On Fri, Jul 03, 2020 at 01:18:43PM +0200, Toke Høiland-Jørgensen wrote:
> > The user mode driver will load BPF Type Formats, create BPF maps, populate BPF
> > maps, load two BPF programs, attach them to BPF iterators, and finally send two
> > bpf_link IDs back to the kernel.
> > The kernel will pin two bpf_links into newly mounted bpffs instance under
> > names "progs" and "maps". These two files become human readable.
> >
> > $ cat /sys/fs/bpf/progs
> >   id name            pages attached
> >   11    dump_bpf_map     1 bpf_iter_bpf_map
> >   12   dump_bpf_prog     1 bpf_iter_bpf_prog
> >   27 test_pkt_access     1
> >   32       test_main     1 test_pkt_access test_pkt_access
> >   33   test_subprog1     1 test_pkt_access_subprog1 test_pkt_access
> >   34   test_subprog2     1 test_pkt_access_subprog2 test_pkt_access
> >   35   test_subprog3     1 test_pkt_access_subprog3 test_pkt_access
> >   36 new_get_skb_len     1 get_skb_len test_pkt_access
> >   37 new_get_skb_ifi     1 get_skb_ifindex test_pkt_access
> >   38 new_get_constan     1 get_constant test_pkt_access
> 
> Do the iterators respect namespace boundaries? Or will I see all
> programs/maps on the host if I cat the file inside a container?

why are you asking? I'm pretty sure you know that bpf infra isn't namespaced yet.

> > Few interesting observations:
> > - though bpffs comes with two human readable files "progs" and "maps" they
> >   can be removed. 'rm -f /sys/fs/bpf/progs' will remove bpf_link and kernel
> >   will automatically unload corresponding BPF progs, maps, BTFs.
> 
> Is there any way to get the files back if one does this by mistake
> (other than re-mounting the bpffs)?

Same as user A pinning their prog/map/link in bpffs and user B removing it.

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH bpf-next 0/3] bpf: Populate bpffs with map and prog iterators
  2020-07-04  0:07   ` Alexei Starovoitov
@ 2020-07-04 10:48     ` Toke Høiland-Jørgensen
  0 siblings, 0 replies; 17+ messages in thread
From: Toke Høiland-Jørgensen @ 2020-07-04 10:48 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: torvalds, davem, daniel, ebiederm, netdev, bpf, kernel-team

Alexei Starovoitov <alexei.starovoitov@gmail.com> writes:

> On Fri, Jul 03, 2020 at 01:18:43PM +0200, Toke Høiland-Jørgensen wrote:
>> > The user mode driver will load BPF Type Formats, create BPF maps, populate BPF
>> > maps, load two BPF programs, attach them to BPF iterators, and finally send two
>> > bpf_link IDs back to the kernel.
>> > The kernel will pin two bpf_links into newly mounted bpffs instance under
>> > names "progs" and "maps". These two files become human readable.
>> >
>> > $ cat /sys/fs/bpf/progs
>> >   id name            pages attached
>> >   11    dump_bpf_map     1 bpf_iter_bpf_map
>> >   12   dump_bpf_prog     1 bpf_iter_bpf_prog
>> >   27 test_pkt_access     1
>> >   32       test_main     1 test_pkt_access test_pkt_access
>> >   33   test_subprog1     1 test_pkt_access_subprog1 test_pkt_access
>> >   34   test_subprog2     1 test_pkt_access_subprog2 test_pkt_access
>> >   35   test_subprog3     1 test_pkt_access_subprog3 test_pkt_access
>> >   36 new_get_skb_len     1 get_skb_len test_pkt_access
>> >   37 new_get_skb_ifi     1 get_skb_ifindex test_pkt_access
>> >   38 new_get_constan     1 get_constant test_pkt_access
>> 
>> Do the iterators respect namespace boundaries? Or will I see all
>> programs/maps on the host if I cat the file inside a container?
>
> why are you asking? I'm pretty sure you know that bpf infra isn't namespaced yet.

I thought I did, but this seemed to imply that I was wrong, so I
figured maybe I missed something.

>> > Few interesting observations:
>> > - though bpffs comes with two human readable files "progs" and "maps" they
>> >   can be removed. 'rm -f /sys/fs/bpf/progs' will remove bpf_link and kernel
>> >   will automatically unload corresponding BPF progs, maps, BTFs.
>> 
>> Is there any way to get the files back if one does this by mistake
>> (other than re-mounting the bpffs)?
>
> Same as user A pinning their prog/map/link in bpffs and user B removing it.

Right, sure, but in that case A can presumably restart their application
and get it back, whereas in this case that's not possible, short of
clearing the whole bpffs and remounting it. Would it be possible to
re-trigger the UMH on an existing fs, say by issuing 'mount -o remount
bpf /sys/fs/bpf' (or some other debug mechanism)?

-Toke


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH bpf-next 1/3] bpf: Factor out bpf_link_get_by_id() helper.
  2020-07-02 20:03 ` [PATCH bpf-next 1/3] bpf: Factor out bpf_link_get_by_id() helper Alexei Starovoitov
@ 2020-07-06 23:08   ` Andrii Nakryiko
  0 siblings, 0 replies; 17+ messages in thread
From: Andrii Nakryiko @ 2020-07-06 23:08 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: Linus Torvalds, David S. Miller, Daniel Borkmann,
	Eric W. Biederman, Networking, bpf, Kernel Team

On Thu, Jul 2, 2020 at 1:04 PM Alexei Starovoitov
<alexei.starovoitov@gmail.com> wrote:
>
> From: Alexei Starovoitov <ast@kernel.org>
>
> Refactor the code a bit to extract bpf_link_get_by_id() helper.
> It's similar to existing bpf_prog_by_id().
>
> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
> ---

LGTM.

Acked-by: Andrii Nakryiko <andriin@fb.com>


>  include/linux/bpf.h  |  1 +
>  kernel/bpf/syscall.c | 46 +++++++++++++++++++++++++++-----------------
>  2 files changed, 29 insertions(+), 18 deletions(-)
>

[...]

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH bpf-next 2/3] bpf: Add BPF program and map iterators as built-in BPF programs.
  2020-07-02 20:03 ` [PATCH bpf-next 2/3] bpf: Add BPF program and map iterators as built-in BPF programs Alexei Starovoitov
@ 2020-07-08 21:55   ` Andrii Nakryiko
  0 siblings, 0 replies; 17+ messages in thread
From: Andrii Nakryiko @ 2020-07-08 21:55 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: Linus Torvalds, David S. Miller, Daniel Borkmann,
	Eric W. Biederman, Networking, bpf, Kernel Team

On Thu, Jul 2, 2020 at 1:04 PM Alexei Starovoitov
<alexei.starovoitov@gmail.com> wrote:
>
> From: Alexei Starovoitov <ast@kernel.org>
>
> The program and map iterators work similarly to seq_file-s.
> Once the program is pinned in bpffs it can be read with the "cat" tool
> to print human-readable output, in this case about BPF programs and maps.
> For example:
> $ cat /sys/fs/bpf/progs
>   id name            pages attached
>    5    dump_bpf_map     1 bpf_iter_bpf_map
>    6   dump_bpf_prog     1 bpf_iter_bpf_prog



> $ cat /sys/fs/bpf/maps
>   id name            pages
>    3 iterator.rodata     2
>
> To avoid a kernel build dependency on clang 10, separate bpf skeleton generation
> into a manual "make" step and instead check in the generated .skel.h into git.
>
> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
> ---
>  kernel/bpf/preload/iterators/.gitignore       |   2 +
>  kernel/bpf/preload/iterators/Makefile         |  57 +++
>  kernel/bpf/preload/iterators/README           |   4 +
>  kernel/bpf/preload/iterators/iterators.bpf.c  |  81 ++++
>  kernel/bpf/preload/iterators/iterators.skel.h | 359 ++++++++++++++++++
>  5 files changed, 503 insertions(+)
>  create mode 100644 kernel/bpf/preload/iterators/.gitignore
>  create mode 100644 kernel/bpf/preload/iterators/Makefile
>  create mode 100644 kernel/bpf/preload/iterators/README
>  create mode 100644 kernel/bpf/preload/iterators/iterators.bpf.c
>  create mode 100644 kernel/bpf/preload/iterators/iterators.skel.h
>

[...]

> +struct seq_file;
> +struct bpf_iter_meta {
> +       struct seq_file *seq;
> +       __u64 session_id;
> +       __u64 seq_num;
> +} __attribute__((preserve_access_index));
> +
> +struct bpf_map_memory {
> +       __u32 pages;
> +};

forgot __attribute__((preserve_access_index)) here?
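
i.e. presumably it should be:

struct bpf_map_memory {
	__u32 pages;
} __attribute__((preserve_access_index));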

> +struct bpf_map {
> +       __u32 id;
> +       struct bpf_map_memory memory;
> +       char name[16];
> +} __attribute__((preserve_access_index));
> +

[...]

> +SEC("iter/bpf_map")
> +int dump_bpf_map(struct bpf_iter__bpf_map *ctx)
> +{
> +       struct seq_file *seq = ctx->meta->seq;
> +       __u64 seq_num = ctx->meta->seq_num;
> +       struct bpf_map *map = ctx->map;
> +
> +       if (!map)
> +               return 0;
> +
> +       if (seq_num == 0)
> +               BPF_SEQ_PRINTF(seq, "  id name            pages\n");
> +
> +       BPF_SEQ_PRINTF(seq, "%4u%16s%6d\n", map->id, map->name, map->memory.pages);

Here and below, please use %-16s for left-aligned strings for map name
and prog name.
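
Something like this, presumably (untested):

	BPF_SEQ_PRINTF(seq, "%4u %-16s%6d\n", map->id, map->name,
		       map->memory.pages);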

> +       return 0;
> +}
> +
> +SEC("iter/bpf_prog")
> +int dump_bpf_prog(struct bpf_iter__bpf_prog *ctx)
> +{
> +       struct seq_file *seq = ctx->meta->seq;
> +       __u64 seq_num = ctx->meta->seq_num;
> +       struct bpf_prog *prog = ctx->prog;
> +       struct bpf_prog_aux *aux;
> +
> +       if (!prog)
> +               return 0;
> +
> +       aux = prog->aux;
> +       if (seq_num == 0)
> +               BPF_SEQ_PRINTF(seq, "  id name            pages attached\n");
> +
> +       BPF_SEQ_PRINTF(seq, "%4u%16s%6d %s %s\n", aux->id, aux->name, prog->pages,
> +                      aux->attach_func_name, aux->linked_prog->aux->name);
> +       return 0;
> +}
> +char LICENSE[] SEC("license") = "GPL";

[...]

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH bpf-next 3/3] bpf: Add kernel module with user mode driver that populates bpffs.
  2020-07-02 20:03 ` [PATCH bpf-next 3/3] bpf: Add kernel module with user mode driver that populates bpffs Alexei Starovoitov
                     ` (2 preceding siblings ...)
  2020-07-03  1:05   ` Linus Torvalds
@ 2020-07-09  3:15   ` Andrii Nakryiko
  2020-07-10  6:31     ` Yonghong Song
  3 siblings, 1 reply; 17+ messages in thread
From: Andrii Nakryiko @ 2020-07-09  3:15 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: Linus Torvalds, David S. Miller, Daniel Borkmann,
	Eric W. Biederman, Networking, bpf, Kernel Team

On Thu, Jul 2, 2020 at 1:04 PM Alexei Starovoitov
<alexei.starovoitov@gmail.com> wrote:
>
> From: Alexei Starovoitov <ast@kernel.org>
>
> Add kernel module with user mode driver that populates bpffs with
> BPF iterators.
>
> $ mount bpffs /sys/fs/bpf/ -t bpf
> $ ls -la /sys/fs/bpf/
> total 4
> drwxrwxrwt  2 root root    0 Jul  2 00:27 .
> drwxr-xr-x 19 root root 4096 Jul  2 00:09 ..
> -rw-------  1 root root    0 Jul  2 00:27 maps
> -rw-------  1 root root    0 Jul  2 00:27 progs
>
> The user mode driver will load BPF Type Formats, create BPF maps, populate BPF
> maps, load two BPF programs, attach them to BPF iterators, and finally send two
> bpf_link IDs back to the kernel.
> The kernel will pin two bpf_links into newly mounted bpffs instance under
> names "progs" and "maps". These two files become human readable.
>
> $ cat /sys/fs/bpf/progs
>   id name            pages attached
>   11    dump_bpf_map     1 bpf_iter_bpf_map
>   12   dump_bpf_prog     1 bpf_iter_bpf_prog
>   27 test_pkt_access     1
>   32       test_main     1 test_pkt_access test_pkt_access
>   33   test_subprog1     1 test_pkt_access_subprog1 test_pkt_access
>   34   test_subprog2     1 test_pkt_access_subprog2 test_pkt_access
>   35   test_subprog3     1 test_pkt_access_subprog3 test_pkt_access
>   36 new_get_skb_len     1 get_skb_len test_pkt_access
>   37 new_get_skb_ifi     1 get_skb_ifindex test_pkt_access
>   38 new_get_constan     1 get_constant test_pkt_access
>
> The BPF program dump_bpf_prog() in iterators.bpf.c is printing this data about
> all BPF programs currently loaded in the system. This information is unstable
> and will change from kernel to kernel.
>
> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
> ---

[...]

> +static int bpf_link_pin_kernel(struct dentry *parent,
> +                              const char *name, struct bpf_link *link)
> +{
> +       umode_t mode = S_IFREG | S_IRUSR | S_IWUSR;
> +       struct dentry *dentry;
> +       int ret;
> +
> +       inode_lock(parent->d_inode);
> +       dentry = lookup_one_len(name, parent, strlen(name));
> +       if (IS_ERR(dentry)) {
> +               inode_unlock(parent->d_inode);
> +               return PTR_ERR(dentry);
> +       }
> +       ret = bpf_mkobj_ops(dentry, mode, link, &bpf_link_iops,
> +                           &bpf_iter_fops);

bpf_iter_fops only applies to bpf_iter links, while
bpf_link_pin_kernel allows any link type. See bpf_mklink(), which checks
bpf_link_is_iter() to decide between bpf_iter_fops and bpffs_obj_fops.


> +       dput(dentry);
> +       inode_unlock(parent->d_inode);
> +       return ret;
> +}
> +
>  static int bpf_obj_do_pin(const char __user *pathname, void *raw,
>                           enum bpf_type type)
>  {
> @@ -638,6 +659,57 @@ static int bpf_parse_param(struct fs_context *fc, struct fs_parameter *param)
>         return 0;
>  }
>
> +struct bpf_preload_ops bpf_preload_ops = { .info.driver_name = "bpf_preload" };
> +EXPORT_SYMBOL_GPL(bpf_preload_ops);
> +
> +static int populate_bpffs(struct dentry *parent)

So all the pinning has to happen from the kernel side because at the
time bpf_fill_super is called, user-space can't yet see the mounted
BPFFS. Do I understand the problem correctly? Would it be possible
to add a callback to fs_context_operations that would be called after
the FS is mounted and visible to user-space? At that point the
kernel can spawn the user-mode blob and just instruct it to do both
BPF object loading and pinning?

Or are there some other complications with such an approach?

> +{
> +       struct bpf_link *links[BPF_PRELOAD_LINKS] = {};
> +       u32 link_id[BPF_PRELOAD_LINKS] = {};
> +       int err = 0, i;
> +
> +       mutex_lock(&bpf_preload_ops.lock);
> +       if (!bpf_preload_ops.do_preload) {
> +               mutex_unlock(&bpf_preload_ops.lock);
> +               request_module("bpf_preload");
> +               mutex_lock(&bpf_preload_ops.lock);
> +
> +               if (!bpf_preload_ops.do_preload) {
> +                       pr_err("bpf_preload module is missing.\n"
> +                              "bpffs will not have iterators.\n");
> +                       goto out;
> +               }
> +       }
> +
> +       if (!bpf_preload_ops.info.tgid) {
> +               err = bpf_preload_ops.do_preload(link_id);
> +               if (err)
> +                       goto out;
> +               for (i = 0; i < BPF_PRELOAD_LINKS; i++) {
> +                       links[i] = bpf_link_by_id(link_id[i]);
> +                       if (IS_ERR(links[i])) {
> +                               err = PTR_ERR(links[i]);
> +                               goto out;
> +                       }
> +               }
> +               err = bpf_link_pin_kernel(parent, "maps", links[0]);
> +               if (err)
> +                       goto out;
> +               err = bpf_link_pin_kernel(parent, "progs", links[1]);
> +               if (err)
> +                       goto out;

This hard coded "maps" -> link #0, "progs" -> link #1 mapping is what
motivated the question above about letting user-space do all pinning.
It would significantly simplify the kernel part, right?
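
For illustration, a rough sketch of what the user-mode driver side could look
like if it did the pinning itself once the mount is visible (the helper name
and buffer size are made up; "maps"/"progs" mirror the names the kernel pins
today):

#include <stdio.h>
#include <bpf/bpf.h>

/* hypothetical UMD-side helper: pin the two iterator links under the
 * freshly mounted bpffs instance instead of sending link IDs back to
 * the kernel */
static int pin_iterators(const char *mnt, int maps_link_fd, int progs_link_fd)
{
	char path[256];

	snprintf(path, sizeof(path), "%s/maps", mnt);
	if (bpf_obj_pin(maps_link_fd, path))
		return -1;
	snprintf(path, sizeof(path), "%s/progs", mnt);
	if (bpf_obj_pin(progs_link_fd, path))
		return -1;
	return 0;
}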

> +               err = bpf_preload_ops.do_finish();
> +               if (err)
> +                       goto out;
> +       }

[...]

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH bpf-next 3/3] bpf: Add kernel module with user mode driver that populates bpffs.
  2020-07-09  3:15   ` Andrii Nakryiko
@ 2020-07-10  6:31     ` Yonghong Song
  0 siblings, 0 replies; 17+ messages in thread
From: Yonghong Song @ 2020-07-10  6:31 UTC (permalink / raw)
  To: Andrii Nakryiko, Alexei Starovoitov
  Cc: Linus Torvalds, David S. Miller, Daniel Borkmann,
	Eric W. Biederman, Networking, bpf, Kernel Team



On 7/8/20 8:15 PM, Andrii Nakryiko wrote:
> On Thu, Jul 2, 2020 at 1:04 PM Alexei Starovoitov
> <alexei.starovoitov@gmail.com> wrote:
>>
>> From: Alexei Starovoitov <ast@kernel.org>
>>
>> Add kernel module with user mode driver that populates bpffs with
>> BPF iterators.
>>
>> $ mount bpffs /sys/fs/bpf/ -t bpf
>> $ ls -la /sys/fs/bpf/
>> total 4
>> drwxrwxrwt  2 root root    0 Jul  2 00:27 .
>> drwxr-xr-x 19 root root 4096 Jul  2 00:09 ..
>> -rw-------  1 root root    0 Jul  2 00:27 maps
>> -rw-------  1 root root    0 Jul  2 00:27 progs
>>
>> The user mode driver will load BPF Type Formats, create BPF maps, populate BPF
>> maps, load two BPF programs, attach them to BPF iterators, and finally send two
>> bpf_link IDs back to the kernel.
>> The kernel will pin two bpf_links into newly mounted bpffs instance under
>> names "progs" and "maps". These two files become human readable.
>>
>> $ cat /sys/fs/bpf/progs
>>    id name            pages attached
>>    11    dump_bpf_map     1 bpf_iter_bpf_map
>>    12   dump_bpf_prog     1 bpf_iter_bpf_prog
>>    27 test_pkt_access     1
>>    32       test_main     1 test_pkt_access test_pkt_access
>>    33   test_subprog1     1 test_pkt_access_subprog1 test_pkt_access
>>    34   test_subprog2     1 test_pkt_access_subprog2 test_pkt_access
>>    35   test_subprog3     1 test_pkt_access_subprog3 test_pkt_access
>>    36 new_get_skb_len     1 get_skb_len test_pkt_access
>>    37 new_get_skb_ifi     1 get_skb_ifindex test_pkt_access
>>    38 new_get_constan     1 get_constant test_pkt_access
>>
>> The BPF program dump_bpf_prog() in iterators.bpf.c is printing this data about
>> all BPF programs currently loaded in the system. This information is unstable
>> and will change from kernel to kernel.
>>
>> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
>> ---
> 
> [...]
> 
>> +static int bpf_link_pin_kernel(struct dentry *parent,
>> +                              const char *name, struct bpf_link *link)
>> +{
>> +       umode_t mode = S_IFREG | S_IRUSR | S_IWUSR;
>> +       struct dentry *dentry;
>> +       int ret;
>> +
>> +       inode_lock(parent->d_inode);
>> +       dentry = lookup_one_len(name, parent, strlen(name));
>> +       if (IS_ERR(dentry)) {
>> +               inode_unlock(parent->d_inode);
>> +               return PTR_ERR(dentry);
>> +       }
>> +       ret = bpf_mkobj_ops(dentry, mode, link, &bpf_link_iops,
>> +                           &bpf_iter_fops);
> 
> bpf_iter_fops only applies to bpf_iter links, while
> bpf_link_pin_kernel allows any link type. See bpf_mklink(), which checks
> bpf_link_is_iter() to decide between bpf_iter_fops and bpffs_obj_fops.
> 
> 
>> +       dput(dentry);
>> +       inode_unlock(parent->d_inode);
>> +       return ret;
>> +}
>> +
>>   static int bpf_obj_do_pin(const char __user *pathname, void *raw,
>>                            enum bpf_type type)
>>   {
>> @@ -638,6 +659,57 @@ static int bpf_parse_param(struct fs_context *fc, struct fs_parameter *param)
>>          return 0;
>>   }
>>
>> +struct bpf_preload_ops bpf_preload_ops = { .info.driver_name = "bpf_preload" };
>> +EXPORT_SYMBOL_GPL(bpf_preload_ops);
>> +
>> +static int populate_bpffs(struct dentry *parent)
> 
> So all the pinning has to happen from the kernel side because at the
> time bpf_fill_super is called, user-space can't yet see the mounted
> BPFFS. Do I understand the problem correctly? Would it be possible
> to add a callback to fs_context_operations that would be called after
> the FS is mounted and visible to user-space? At that point the
> kernel can spawn the user-mode blob and just instruct it to do both
> BPF object loading and pinning?

This is possible during bpf_fill_super(), which is called when the `mount`
syscall is issued. I experimented with this a little bit in my early
bpf_iter work with bpffs, re-populating every existing
iterator in a new bpffs mount.

In this case, we probably do not want to repopulate them in
every new bpffs mount. I think we just want to put them in a fixed
location. Since this is a fixed location, the system can go ahead
and do the mount, I think. But it could also just set up all the necessary
data structures and do the eventual mount after the file system is up
in user space. Just my 2 cents.

> 
> Or are there some other complications with such an approach?
> 
>> +{
>> +       struct bpf_link *links[BPF_PRELOAD_LINKS] = {};
>> +       u32 link_id[BPF_PRELOAD_LINKS] = {};
>> +       int err = 0, i;
>> +
>> +       mutex_lock(&bpf_preload_ops.lock);
>> +       if (!bpf_preload_ops.do_preload) {
>> +               mutex_unlock(&bpf_preload_ops.lock);
>> +               request_module("bpf_preload");
>> +               mutex_lock(&bpf_preload_ops.lock);
>> +
>> +               if (!bpf_preload_ops.do_preload) {
>> +                       pr_err("bpf_preload module is missing.\n"
>> +                              "bpffs will not have iterators.\n");
>> +                       goto out;
>> +               }
>> +       }
>> +
>> +       if (!bpf_preload_ops.info.tgid) {
>> +               err = bpf_preload_ops.do_preload(link_id);
>> +               if (err)
>> +                       goto out;
>> +               for (i = 0; i < BPF_PRELOAD_LINKS; i++) {
>> +                       links[i] = bpf_link_by_id(link_id[i]);
>> +                       if (IS_ERR(links[i])) {
>> +                               err = PTR_ERR(links[i]);
>> +                               goto out;
>> +                       }
>> +               }
>> +               err = bpf_link_pin_kernel(parent, "maps", links[0]);
>> +               if (err)
>> +                       goto out;
>> +               err = bpf_link_pin_kernel(parent, "progs", links[1]);
>> +               if (err)
>> +                       goto out;
> 
> This hard coded "maps" -> link #0, "progs" -> link #1 mapping is what
> motivated the question above about letting user-space do all pinning.
> It would significantly simplify the kernel part, right?
> 
>> +               err = bpf_preload_ops.do_finish();
>> +               if (err)
>> +                       goto out;
>> +       }
> 
> [...]
> 

^ permalink raw reply	[flat|nested] 17+ messages in thread

end of thread, other threads:[~2020-07-10  6:32 UTC | newest]

Thread overview: 17+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-07-02 20:03 [PATCH bpf-next 0/3] bpf: Populate bpffs with map and prog iterators Alexei Starovoitov
2020-07-02 20:03 ` [PATCH bpf-next 1/3] bpf: Factor out bpf_link_get_by_id() helper Alexei Starovoitov
2020-07-06 23:08   ` Andrii Nakryiko
2020-07-02 20:03 ` [PATCH bpf-next 2/3] bpf: Add BPF program and map iterators as built-in BPF programs Alexei Starovoitov
2020-07-08 21:55   ` Andrii Nakryiko
2020-07-02 20:03 ` [PATCH bpf-next 3/3] bpf: Add kernel module with user mode driver that populates bpffs Alexei Starovoitov
2020-07-02 22:14   ` kernel test robot
2020-07-02 22:53   ` kernel test robot
2020-07-03  1:05   ` Linus Torvalds
2020-07-03  2:35     ` Alexei Starovoitov
2020-07-03  3:34       ` Linus Torvalds
2020-07-04  0:06         ` Alexei Starovoitov
2020-07-09  3:15   ` Andrii Nakryiko
2020-07-10  6:31     ` Yonghong Song
2020-07-03 11:18 ` [PATCH bpf-next 0/3] bpf: Populate bpffs with map and prog iterators Toke Høiland-Jørgensen
2020-07-04  0:07   ` Alexei Starovoitov
2020-07-04 10:48     ` Toke Høiland-Jørgensen

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).