* [PATCH bpf-next 0/4] libbpf: allow to opt-out from BPF map creation
@ 2022-04-28  4:15 Andrii Nakryiko
  2022-04-28  4:15 ` [PATCH bpf-next 1/4] libbpf: append "..." in fixed up log if CO-RE spec is truncated Andrii Nakryiko
                   ` (4 more replies)
  0 siblings, 5 replies; 6+ messages in thread
From: Andrii Nakryiko @ 2022-04-28  4:15 UTC (permalink / raw)
  To: bpf, ast, daniel; +Cc: andrii, kernel-team

Add the bpf_map__set_autocreate() API, which is the BPF map counterpart of
bpf_program__set_autoload() and serves a similar goal of allowing more
flexible CO-RE applications to be built. See patch #3 for example scenarios
in which the need for such an API came up previously.

Patch #1 is a follow-up to the previous patch set that added verifier log
fixup logic, making sure bpf_core_format_spec()'s return value is used for
something useful.

Patch #2 is a small refactoring to avoid unnecessarily verbose memory
management around the obj->maps array.

Patch #3 adds the API and the corresponding BPF verifier log fixup logic to
provide a human-comprehensible error message with useful details.

Patch #4 adds a simple selftest validating both the API itself and libbpf's
log fixup logic for it.
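
As a quick illustration of how the two knobs are meant to be used together,
here is a minimal user-space sketch; the skeleton, program, and map names
below are hypothetical:

  #include <bpf/libbpf.h>
  #include "my_obj.skel.h"        /* hypothetical skeleton */

  static int open_and_load(void)
  {
          struct my_obj *skel = my_obj__open();

          if (!skel)
                  return -1;

          /* both knobs must be flipped after open and before load */
          bpf_program__set_autoload(skel->progs.optional_prog, false);
          bpf_map__set_autocreate(skel->maps.optional_map, false);

          return my_obj__load(skel);
  }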

Andrii Nakryiko (4):
  libbpf: append "..." in fixed up log if CO-RE spec is truncated
  libbpf: use libbpf_mem_ensure() when allocating new map
  libbpf: allow to opt-out from creating BPF maps
  selftests/bpf: test bpf_map__set_autocreate() and related log fixup
    logic

 tools/lib/bpf/libbpf.c                        | 169 +++++++++++++-----
 tools/lib/bpf/libbpf.h                        |  22 +++
 tools/lib/bpf/libbpf.map                      |   4 +-
 .../selftests/bpf/prog_tests/log_fixup.c      |  37 +++-
 .../selftests/bpf/progs/test_log_fixup.c      |  26 +++
 5 files changed, 209 insertions(+), 49 deletions(-)

-- 
2.30.2



* [PATCH bpf-next 1/4] libbpf: append "..." in fixed up log if CO-RE spec is truncated
  2022-04-28  4:15 [PATCH bpf-next 0/4] libbpf: allow to opt-out from BPF map creation Andrii Nakryiko
@ 2022-04-28  4:15 ` Andrii Nakryiko
  2022-04-28  4:15 ` [PATCH bpf-next 2/4] libbpf: use libbpf_mem_ensure() when allocating new map Andrii Nakryiko
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 6+ messages in thread
From: Andrii Nakryiko @ 2022-04-28  4:15 UTC (permalink / raw)
  To: bpf, ast, daniel; +Cc: andrii, kernel-team

Detect CO-RE spec truncation and append "..." to make the user aware that
there was supposed to be more of the spec there.
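
The check added below treats bpf_core_format_spec()'s return value the way
snprintf()'s is treated: as the full length the formatter would have
produced, so a value greater than or equal to the buffer size signals
truncation. A standalone sketch of that idiom using plain snprintf()
(function and variable names are illustrative only):

  #include <stdio.h>

  /* format into a bounded buffer and flag truncation by comparing the
   * would-be length against the buffer size
   */
  static void format_with_ellipsis(char *out, size_t out_sz, const char *spec)
  {
          char spec_buf[256];
          int spec_len;

          spec_len = snprintf(spec_buf, sizeof(spec_buf), "%s", spec);
          snprintf(out, out_sz, "failed to resolve CO-RE relocation %s%s\n",
                   spec_buf, spec_len >= (int)sizeof(spec_buf) ? "..." : "");
  }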

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
 tools/lib/bpf/libbpf.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index 73a5192defb3..a85d83390d67 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -6947,7 +6947,7 @@ static void fixup_log_failed_core_relo(struct bpf_program *prog,
 	const struct bpf_core_relo *relo;
 	struct bpf_core_spec spec;
 	char patch[512], spec_buf[256];
-	int insn_idx, err;
+	int insn_idx, err, spec_len;
 
 	if (sscanf(line1, "%d: (%*d) call unknown#195896080\n", &insn_idx) != 1)
 		return;
@@ -6960,11 +6960,11 @@ static void fixup_log_failed_core_relo(struct bpf_program *prog,
 	if (err)
 		return;
 
-	bpf_core_format_spec(spec_buf, sizeof(spec_buf), &spec);
+	spec_len = bpf_core_format_spec(spec_buf, sizeof(spec_buf), &spec);
 	snprintf(patch, sizeof(patch),
 		 "%d: <invalid CO-RE relocation>\n"
-		 "failed to resolve CO-RE relocation %s\n",
-		 insn_idx, spec_buf);
+		 "failed to resolve CO-RE relocation %s%s\n",
+		 insn_idx, spec_buf, spec_len >= sizeof(spec_buf) ? "..." : "");
 
 	patch_log(buf, buf_sz, log_sz, line1, line3 - line1, patch);
 }
-- 
2.30.2



* [PATCH bpf-next 2/4] libbpf: use libbpf_mem_ensure() when allocating new map
  2022-04-28  4:15 [PATCH bpf-next 0/4] libbpf: allow to opt-out from BPF map creation Andrii Nakryiko
  2022-04-28  4:15 ` [PATCH bpf-next 1/4] libbpf: append "..." in fixed up log if CO-RE spec is truncated Andrii Nakryiko
@ 2022-04-28  4:15 ` Andrii Nakryiko
  2022-04-28  4:15 ` [PATCH bpf-next 3/4] libbpf: allow to opt-out from creating BPF maps Andrii Nakryiko
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 6+ messages in thread
From: Andrii Nakryiko @ 2022-04-28  4:15 UTC (permalink / raw)
  To: bpf, ast, daniel; +Cc: andrii, kernel-team

Reuse libbpf_ensure_mem() when adding a new map to the list of maps inside
bpf_object. It takes care of properly resizing and reallocating the map
array and zeroing out newly allocated memory.
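
A rough standalone sketch of the grow-and-zero pattern such a helper
follows (this is an illustration, not libbpf's exact implementation):

  #include <errno.h>
  #include <stdlib.h>
  #include <string.h>

  static int ensure_mem(void **data, size_t *cap_cnt, size_t elem_sz,
                        size_t need_cnt)
  {
          size_t new_cap;
          void *tmp;

          if (need_cnt <= *cap_cnt)
                  return 0;

          /* grow geometrically, but never below the requested count */
          new_cap = *cap_cnt * 3 / 2;
          if (new_cap < 4)
                  new_cap = 4;
          if (new_cap < need_cnt)
                  new_cap = need_cnt;

          tmp = realloc(*data, new_cap * elem_sz);
          if (!tmp)
                  return -ENOMEM;

          /* zero only the newly added tail; existing elements are untouched */
          memset((char *)tmp + *cap_cnt * elem_sz, 0,
                 (new_cap - *cap_cnt) * elem_sz);

          *data = tmp;
          *cap_cnt = new_cap;
          return 0;
  }

Note that zero-filling leaves fd fields at 0 (stdin), which is why
bpf_object__add_map() below still sets fd and inner_map_fd to -1 explicitly.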

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
 tools/lib/bpf/libbpf.c | 37 ++++++++++---------------------------
 1 file changed, 10 insertions(+), 27 deletions(-)

diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index a85d83390d67..ee43719a0376 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -1433,36 +1433,19 @@ static int find_elf_var_offset(const struct bpf_object *obj, const char *name, _
 
 static struct bpf_map *bpf_object__add_map(struct bpf_object *obj)
 {
-	struct bpf_map *new_maps;
-	size_t new_cap;
-	int i;
-
-	if (obj->nr_maps < obj->maps_cap)
-		return &obj->maps[obj->nr_maps++];
-
-	new_cap = max((size_t)4, obj->maps_cap * 3 / 2);
-	new_maps = libbpf_reallocarray(obj->maps, new_cap, sizeof(*obj->maps));
-	if (!new_maps) {
-		pr_warn("alloc maps for object failed\n");
-		return ERR_PTR(-ENOMEM);
-	}
+	struct bpf_map *map;
+	int err;
 
-	obj->maps_cap = new_cap;
-	obj->maps = new_maps;
+	err = libbpf_ensure_mem((void **)&obj->maps, &obj->maps_cap,
+				sizeof(*obj->maps), obj->nr_maps + 1);
+	if (err)
+		return ERR_PTR(err);
 
-	/* zero out new maps */
-	memset(obj->maps + obj->nr_maps, 0,
-	       (obj->maps_cap - obj->nr_maps) * sizeof(*obj->maps));
-	/*
-	 * fill all fd with -1 so won't close incorrect fd (fd=0 is stdin)
-	 * when failure (zclose won't close negative fd)).
-	 */
-	for (i = obj->nr_maps; i < obj->maps_cap; i++) {
-		obj->maps[i].fd = -1;
-		obj->maps[i].inner_map_fd = -1;
-	}
+	map = &obj->maps[obj->nr_maps++];
+	map->fd = -1;
+	map->inner_map_fd = -1;
 
-	return &obj->maps[obj->nr_maps++];
+	return map;
 }
 
 static size_t bpf_map_mmap_sz(const struct bpf_map *map)
-- 
2.30.2



* [PATCH bpf-next 3/4] libbpf: allow to opt-out from creating BPF maps
  2022-04-28  4:15 [PATCH bpf-next 0/4] libbpf: allow to opt-out from BPF map creation Andrii Nakryiko
  2022-04-28  4:15 ` [PATCH bpf-next 1/4] libbpf: append "..." in fixed up log if CO-RE spec is truncated Andrii Nakryiko
  2022-04-28  4:15 ` [PATCH bpf-next 2/4] libbpf: use libbpf_mem_ensure() when allocating new map Andrii Nakryiko
@ 2022-04-28  4:15 ` Andrii Nakryiko
  2022-04-28  4:15 ` [PATCH bpf-next 4/4] selftests/bpf: test bpf_map__set_autocreate() and related log fixup logic Andrii Nakryiko
  2022-04-29  3:20 ` [PATCH bpf-next 0/4] libbpf: allow to opt-out from BPF map creation patchwork-bot+netdevbpf
  4 siblings, 0 replies; 6+ messages in thread
From: Andrii Nakryiko @ 2022-04-28  4:15 UTC (permalink / raw)
  To: bpf, ast, daniel; +Cc: andrii, kernel-team

Add a bpf_map__set_autocreate() API that allows the user to opt out of
libbpf automatically creating a BPF map during BPF object load.

This is a useful feature when building a CO-RE-enabled BPF application that
takes advantage of a newer BPF map type (e.g., socket-local storage) if the
kernel supports it, but otherwise falls back to an alternative (e.g., an
extra HASH map). In such a case, being able to disable the creation of a
map that the kernel doesn't support allows the BPF object file to be
created and loaded successfully with all its other maps and programs.
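
As a rough user-space sketch of that flow (the skeleton, map, and program
names are hypothetical; libbpf_probe_bpf_map_type() is just one way to
detect kernel support):

  #include <errno.h>
  #include <bpf/libbpf.h>
  #include "my_app.skel.h"        /* hypothetical skeleton */

  static int load_with_fallback(void)
  {
          struct my_app *skel;
          int err;

          skel = my_app__open();
          if (!skel)
                  return -errno;

          /* probe the kernel; treat probe errors as "not supported" */
          if (libbpf_probe_bpf_map_type(BPF_MAP_TYPE_SK_STORAGE, NULL) != 1) {
                  /* no socket-local storage: don't create that map and
                   * don't load the one program that uses it
                   */
                  bpf_map__set_autocreate(skel->maps.sk_storage, false);
                  bpf_program__set_autoload(skel->progs.use_sk_storage, false);
          } else {
                  /* new map type works: skip the fallback HASH map instead */
                  bpf_map__set_autocreate(skel->maps.fallback_hash, false);
          }

          err = my_app__load(skel);
          if (err)
                  my_app__destroy(skel);
          return err;
  }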

It's still up to the user to make sure that no "live" code in any of their
BPF programs references such a map instance, which can be achieved by
guarding such code with a CO-RE relocation check or with a .rodata global
variable.
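
A minimal BPF-side sketch of the .rodata guard approach (program and map
names are hypothetical, and plain ARRAY maps stand in for the unsupported
map type to keep the example small):

  #include <stdbool.h>
  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  /* lives in .rodata; user space leaves it at false whenever it also sets
   * autocreate to false for new_map, so the branch below is dead code
   */
  const volatile bool have_new_map = false;

  struct {
          __uint(type, BPF_MAP_TYPE_ARRAY);
          __uint(max_entries, 1);
          __type(key, int);
          __type(value, long);
  } new_map SEC(".maps");

  struct {
          __uint(type, BPF_MAP_TYPE_ARRAY);
          __uint(max_entries, 1);
          __type(key, int);
          __type(value, long);
  } fallback_map SEC(".maps");

  SEC("raw_tp/sys_enter")
  int guarded_lookup(const void *ctx)
  {
          int zero = 0;
          long *val;

          if (have_new_map)
                  /* never reached (and never verified) when the flag is false */
                  val = bpf_map_lookup_elem(&new_map, &zero);
          else
                  val = bpf_map_lookup_elem(&fallback_map, &zero);

          return val ? 1 : 0;
  }

  char _license[] SEC("license") = "GPL";

When the kernel does support the new map type, user space would instead set
the flag to true through the skeleton's rodata view before load and leave
autocreate enabled for new_map.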

If the user fails to properly guard such code so that it becomes "dead
code", libbpf will helpfully post-process the BPF verifier log and provide
a more meaningful error along with the name of the map that needs to be
guarded. So, instead of:

  ; value = bpf_map_lookup_elem(&missing_map, &zero);
  4: (85) call unknown#2001000000
  invalid func unknown#2001000000

... the user will see:

  ; value = bpf_map_lookup_elem(&missing_map, &zero);
  4: <invalid BPF map reference>
  BPF map 'missing_map' is referenced but wasn't created

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
 tools/lib/bpf/libbpf.c   | 124 ++++++++++++++++++++++++++++++++++-----
 tools/lib/bpf/libbpf.h   |  22 +++++++
 tools/lib/bpf/libbpf.map |   4 +-
 3 files changed, 133 insertions(+), 17 deletions(-)

diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index ee43719a0376..ad6f8669234a 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -357,6 +357,7 @@ enum libbpf_map_type {
 };
 
 struct bpf_map {
+	struct bpf_object *obj;
 	char *name;
 	/* real_name is defined for special internal maps (.rodata*,
 	 * .data*, .bss, .kconfig) and preserves their original ELF section
@@ -386,7 +387,7 @@ struct bpf_map {
 	char *pin_path;
 	bool pinned;
 	bool reused;
-	bool skipped;
+	bool autocreate;
 	__u64 map_extra;
 };
 
@@ -1442,8 +1443,10 @@ static struct bpf_map *bpf_object__add_map(struct bpf_object *obj)
 		return ERR_PTR(err);
 
 	map = &obj->maps[obj->nr_maps++];
+	map->obj = obj;
 	map->fd = -1;
 	map->inner_map_fd = -1;
+	map->autocreate = true;
 
 	return map;
 }
@@ -4307,6 +4310,20 @@ static int bpf_get_map_info_from_fdinfo(int fd, struct bpf_map_info *info)
 	return 0;
 }
 
+bool bpf_map__autocreate(const struct bpf_map *map)
+{
+	return map->autocreate;
+}
+
+int bpf_map__set_autocreate(struct bpf_map *map, bool autocreate)
+{
+	if (map->obj->loaded)
+		return libbpf_err(-EBUSY);
+
+	map->autocreate = autocreate;
+	return 0;
+}
+
 int bpf_map__reuse_fd(struct bpf_map *map, int fd)
 {
 	struct bpf_map_info info = {};
@@ -5163,9 +5180,11 @@ bpf_object__create_maps(struct bpf_object *obj)
 		 * bpf_object loading will succeed just fine even on old
 		 * kernels.
 		 */
-		if (bpf_map__is_internal(map) &&
-		    !kernel_supports(obj, FEAT_GLOBAL_DATA)) {
-			map->skipped = true;
+		if (bpf_map__is_internal(map) && !kernel_supports(obj, FEAT_GLOBAL_DATA))
+			map->autocreate = false;
+
+		if (!map->autocreate) {
+			pr_debug("map '%s': skipped auto-creating...\n", map->name);
 			continue;
 		}
 
@@ -5788,6 +5807,36 @@ bpf_object__relocate_core(struct bpf_object *obj, const char *targ_btf_path)
 	return err;
 }
 
+/* base map load ldimm64 special constant, used also for log fixup logic */
+#define MAP_LDIMM64_POISON_BASE 2001000000
+#define MAP_LDIMM64_POISON_PFX "200100"
+
+static void poison_map_ldimm64(struct bpf_program *prog, int relo_idx,
+			       int insn_idx, struct bpf_insn *insn,
+			       int map_idx, const struct bpf_map *map)
+{
+	int i;
+
+	pr_debug("prog '%s': relo #%d: poisoning insn #%d that loads map #%d '%s'\n",
+		 prog->name, relo_idx, insn_idx, map_idx, map->name);
+
+	/* we turn single ldimm64 into two identical invalid calls */
+	for (i = 0; i < 2; i++) {
+		insn->code = BPF_JMP | BPF_CALL;
+		insn->dst_reg = 0;
+		insn->src_reg = 0;
+		insn->off = 0;
+		/* if this instruction is reachable (not a dead code),
+		 * verifier will complain with something like:
+		 * invalid func unknown#2001000123
+		 * where lower 123 is map index into obj->maps[] array
+		 */
+		insn->imm = MAP_LDIMM64_POISON_BASE + map_idx;
+
+		insn++;
+	}
+}
+
 /* Relocate data references within program code:
  *  - map references;
  *  - global variable references;
@@ -5801,33 +5850,35 @@ bpf_object__relocate_data(struct bpf_object *obj, struct bpf_program *prog)
 	for (i = 0; i < prog->nr_reloc; i++) {
 		struct reloc_desc *relo = &prog->reloc_desc[i];
 		struct bpf_insn *insn = &prog->insns[relo->insn_idx];
+		const struct bpf_map *map;
 		struct extern_desc *ext;
 
 		switch (relo->type) {
 		case RELO_LD64:
+			map = &obj->maps[relo->map_idx];
 			if (obj->gen_loader) {
 				insn[0].src_reg = BPF_PSEUDO_MAP_IDX;
 				insn[0].imm = relo->map_idx;
-			} else {
+			} else if (map->autocreate) {
 				insn[0].src_reg = BPF_PSEUDO_MAP_FD;
-				insn[0].imm = obj->maps[relo->map_idx].fd;
+				insn[0].imm = map->fd;
+			} else {
+				poison_map_ldimm64(prog, i, relo->insn_idx, insn,
+						   relo->map_idx, map);
 			}
 			break;
 		case RELO_DATA:
+			map = &obj->maps[relo->map_idx];
 			insn[1].imm = insn[0].imm + relo->sym_off;
 			if (obj->gen_loader) {
 				insn[0].src_reg = BPF_PSEUDO_MAP_IDX_VALUE;
 				insn[0].imm = relo->map_idx;
-			} else {
-				const struct bpf_map *map = &obj->maps[relo->map_idx];
-
-				if (map->skipped) {
-					pr_warn("prog '%s': relo #%d: kernel doesn't support global data\n",
-						prog->name, i);
-					return -ENOTSUP;
-				}
+			} else if (map->autocreate) {
 				insn[0].src_reg = BPF_PSEUDO_MAP_VALUE;
-				insn[0].imm = obj->maps[relo->map_idx].fd;
+				insn[0].imm = map->fd;
+			} else {
+				poison_map_ldimm64(prog, i, relo->insn_idx, insn,
+						   relo->map_idx, map);
 			}
 			break;
 		case RELO_EXTERN_VAR:
@@ -6952,6 +7003,39 @@ static void fixup_log_failed_core_relo(struct bpf_program *prog,
 	patch_log(buf, buf_sz, log_sz, line1, line3 - line1, patch);
 }
 
+static void fixup_log_missing_map_load(struct bpf_program *prog,
+				       char *buf, size_t buf_sz, size_t log_sz,
+				       char *line1, char *line2, char *line3)
+{
+	/* Expected log for failed and not properly guarded CO-RE relocation:
+	 * line1 -> 123: (85) call unknown#2001000345
+	 * line2 -> invalid func unknown#2001000345
+	 * line3 -> <anything else or end of buffer>
+	 *
+	 * "123" is the index of the instruction that was poisoned.
+	 * "345" in "2001000345" are map index in obj->maps to fetch map name.
+	 */
+	struct bpf_object *obj = prog->obj;
+	const struct bpf_map *map;
+	int insn_idx, map_idx;
+	char patch[128];
+
+	if (sscanf(line1, "%d: (%*d) call unknown#%d\n", &insn_idx, &map_idx) != 2)
+		return;
+
+	map_idx -= MAP_LDIMM64_POISON_BASE;
+	if (map_idx < 0 || map_idx >= obj->nr_maps)
+		return;
+	map = &obj->maps[map_idx];
+
+	snprintf(patch, sizeof(patch),
+		 "%d: <invalid BPF map reference>\n"
+		 "BPF map '%s' is referenced but wasn't created\n",
+		 insn_idx, map->name);
+
+	patch_log(buf, buf_sz, log_sz, line1, line3 - line1, patch);
+}
+
 static void fixup_verifier_log(struct bpf_program *prog, char *buf, size_t buf_sz)
 {
 	/* look for familiar error patterns in last N lines of the log */
@@ -6980,6 +7064,14 @@ static void fixup_verifier_log(struct bpf_program *prog, char *buf, size_t buf_s
 			fixup_log_failed_core_relo(prog, buf, buf_sz, log_sz,
 						   prev_line, cur_line, next_line);
 			return;
+		} else if (str_has_pfx(cur_line, "invalid func unknown#"MAP_LDIMM64_POISON_PFX)) {
+			prev_line = find_prev_line(buf, cur_line);
+			if (!prev_line)
+				continue;
+
+			fixup_log_missing_map_load(prog, buf, buf_sz, log_sz,
+						   prev_line, cur_line, next_line);
+			return;
 		}
 	}
 }
@@ -8168,7 +8260,7 @@ int bpf_object__pin_maps(struct bpf_object *obj, const char *path)
 		char *pin_path = NULL;
 		char buf[PATH_MAX];
 
-		if (map->skipped)
+		if (!map->autocreate)
 			continue;
 
 		if (path) {
diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
index cdbfee60ea3e..114b1f6f73a5 100644
--- a/tools/lib/bpf/libbpf.h
+++ b/tools/lib/bpf/libbpf.h
@@ -866,6 +866,28 @@ struct bpf_map *bpf_map__prev(const struct bpf_map *map, const struct bpf_object
 LIBBPF_API struct bpf_map *
 bpf_object__prev_map(const struct bpf_object *obj, const struct bpf_map *map);
 
+/**
+ * @brief **bpf_map__set_autocreate()** sets whether libbpf has to auto-create
+ * BPF map during BPF object load phase.
+ * @param map the BPF map instance
+ * @param autocreate whether to create BPF map during BPF object load
+ * @return 0 on success; -EBUSY if BPF object was already loaded
+ *
+ * **bpf_map__set_autocreate()** allows to opt-out from libbpf auto-creating
+ * BPF map. By default, libbpf will attempt to create every single BPF map
+ * defined in BPF object file using BPF_MAP_CREATE command of bpf() syscall
+ * and fill in map FD in BPF instructions.
+ *
+ * This API allows to opt-out of this process for specific map instance. This
+ * can be useful if host kernel doesn't support such BPF map type or used
+ * combination of flags and user application wants to avoid creating such
+ * a map in the first place. User is still responsible to make sure that their
+ * BPF-side code that expects to use such missing BPF map is recognized by BPF
+ * verifier as dead code, otherwise BPF verifier will reject such BPF program.
+ */
+LIBBPF_API int bpf_map__set_autocreate(struct bpf_map *map, bool autocreate);
+LIBBPF_API bool bpf_map__autocreate(const struct bpf_map *map);
+
 /**
  * @brief **bpf_map__fd()** gets the file descriptor of the passed
  * BPF map
diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map
index 82f6d62176dd..b5bc84039407 100644
--- a/tools/lib/bpf/libbpf.map
+++ b/tools/lib/bpf/libbpf.map
@@ -442,10 +442,12 @@ LIBBPF_0.7.0 {
 
 LIBBPF_0.8.0 {
 	global:
+		bpf_map__autocreate;
+		bpf_map__set_autocreate;
 		bpf_object__destroy_subskeleton;
 		bpf_object__open_subskeleton;
+		bpf_program__attach_kprobe_multi_opts;
 		bpf_program__attach_usdt;
 		libbpf_register_prog_handler;
 		libbpf_unregister_prog_handler;
-		bpf_program__attach_kprobe_multi_opts;
 } LIBBPF_0.7.0;
-- 
2.30.2



* [PATCH bpf-next 4/4] selftests/bpf: test bpf_map__set_autocreate() and related log fixup logic
  2022-04-28  4:15 [PATCH bpf-next 0/4] libbpf: allow to opt-out from BPF map creation Andrii Nakryiko
                   ` (2 preceding siblings ...)
  2022-04-28  4:15 ` [PATCH bpf-next 3/4] libbpf: allow to opt-out from creating BPF maps Andrii Nakryiko
@ 2022-04-28  4:15 ` Andrii Nakryiko
  2022-04-29  3:20 ` [PATCH bpf-next 0/4] libbpf: allow to opt-out from BPF map creation patchwork-bot+netdevbpf
  4 siblings, 0 replies; 6+ messages in thread
From: Andrii Nakryiko @ 2022-04-28  4:15 UTC (permalink / raw)
  To: bpf, ast, daniel; +Cc: andrii, kernel-team

Add a subtest that exercises the bpf_map__set_autocreate() API and
validates that libbpf properly fixes up the BPF verifier log with correct
map information.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
 .../selftests/bpf/prog_tests/log_fixup.c      | 37 ++++++++++++++++++-
 .../selftests/bpf/progs/test_log_fixup.c      | 26 +++++++++++++
 2 files changed, 62 insertions(+), 1 deletion(-)

diff --git a/tools/testing/selftests/bpf/prog_tests/log_fixup.c b/tools/testing/selftests/bpf/prog_tests/log_fixup.c
index be3a956cb3a5..f4ffdcabf4e4 100644
--- a/tools/testing/selftests/bpf/prog_tests/log_fixup.c
+++ b/tools/testing/selftests/bpf/prog_tests/log_fixup.c
@@ -85,7 +85,6 @@ static void bad_core_relo_subprog(void)
 	if (!ASSERT_ERR(err, "load_fail"))
 		goto cleanup;
 
-	/* there should be no prog loading log because we specified per-prog log buf */
 	ASSERT_HAS_SUBSTR(log_buf,
 			  ": <invalid CO-RE relocation>\n"
 			  "failed to resolve CO-RE relocation <byte_off> ",
@@ -101,6 +100,40 @@ static void bad_core_relo_subprog(void)
 	test_log_fixup__destroy(skel);
 }
 
+static void missing_map(void)
+{
+	char log_buf[8 * 1024];
+	struct test_log_fixup* skel;
+	int err;
+
+	skel = test_log_fixup__open();
+	if (!ASSERT_OK_PTR(skel, "skel_open"))
+		return;
+
+	bpf_map__set_autocreate(skel->maps.missing_map, false);
+
+	bpf_program__set_autoload(skel->progs.use_missing_map, true);
+	bpf_program__set_log_buf(skel->progs.use_missing_map, log_buf, sizeof(log_buf));
+
+	err = test_log_fixup__load(skel);
+	if (!ASSERT_ERR(err, "load_fail"))
+		goto cleanup;
+
+	ASSERT_TRUE(bpf_map__autocreate(skel->maps.existing_map), "existing_map_autocreate");
+	ASSERT_FALSE(bpf_map__autocreate(skel->maps.missing_map), "missing_map_autocreate");
+
+	ASSERT_HAS_SUBSTR(log_buf,
+			  "8: <invalid BPF map reference>\n"
+			  "BPF map 'missing_map' is referenced but wasn't created\n",
+			  "log_buf");
+
+	if (env.verbosity > VERBOSE_NONE)
+		printf("LOG:   \n=================\n%s=================\n", log_buf);
+
+cleanup:
+	test_log_fixup__destroy(skel);
+}
+
 void test_log_fixup(void)
 {
 	if (test__start_subtest("bad_core_relo_trunc_none"))
@@ -111,4 +144,6 @@ void test_log_fixup(void)
 		bad_core_relo(250, TRUNC_FULL  /* truncate also libbpf's message patch */);
 	if (test__start_subtest("bad_core_relo_subprog"))
 		bad_core_relo_subprog();
+	if (test__start_subtest("missing_map"))
+		missing_map();
 }
diff --git a/tools/testing/selftests/bpf/progs/test_log_fixup.c b/tools/testing/selftests/bpf/progs/test_log_fixup.c
index a78980d897b3..60450cb0e72e 100644
--- a/tools/testing/selftests/bpf/progs/test_log_fixup.c
+++ b/tools/testing/selftests/bpf/progs/test_log_fixup.c
@@ -35,4 +35,30 @@ int bad_relo_subprog(const void *ctx)
 	return bad_subprog() + bpf_core_field_size(t->pid);
 }
 
+struct {
+	__uint(type, BPF_MAP_TYPE_ARRAY);
+	__uint(max_entries, 1);
+	__type(key, int);
+	__type(value, int);
+} existing_map SEC(".maps");
+
+struct {
+	__uint(type, BPF_MAP_TYPE_ARRAY);
+	__uint(max_entries, 1);
+	__type(key, int);
+	__type(value, int);
+} missing_map SEC(".maps");
+
+SEC("?raw_tp/sys_enter")
+int use_missing_map(const void *ctx)
+{
+	int zero = 0, *value;
+
+	value = bpf_map_lookup_elem(&existing_map, &zero);
+
+	value = bpf_map_lookup_elem(&missing_map, &zero);
+
+	return value != NULL;
+}
+
 char _license[] SEC("license") = "GPL";
-- 
2.30.2



* Re: [PATCH bpf-next 0/4] libbpf: allow to opt-out from BPF map creation
  2022-04-28  4:15 [PATCH bpf-next 0/4] libbpf: allow to opt-out from BPF map creation Andrii Nakryiko
                   ` (3 preceding siblings ...)
  2022-04-28  4:15 ` [PATCH bpf-next 4/4] selftests/bpf: test bpf_map__set_autocreate() and related log fixup logic Andrii Nakryiko
@ 2022-04-29  3:20 ` patchwork-bot+netdevbpf
  4 siblings, 0 replies; 6+ messages in thread
From: patchwork-bot+netdevbpf @ 2022-04-29  3:20 UTC (permalink / raw)
  To: Andrii Nakryiko; +Cc: bpf, ast, daniel, kernel-team

Hello:

This series was applied to bpf/bpf-next.git (master)
by Alexei Starovoitov <ast@kernel.org>:

On Wed, 27 Apr 2022 21:15:19 -0700 you wrote:
> Add the bpf_map__set_autocreate() API, which is the BPF map counterpart of
> bpf_program__set_autoload() and serves a similar goal of allowing more
> flexible CO-RE applications to be built. See patch #3 for example scenarios
> in which the need for such an API came up previously.
> 
> Patch #1 is a follow-up to the previous patch set that added verifier log
> fixup logic, making sure bpf_core_format_spec()'s return value is used for
> something useful.
> 
> [...]

Here is the summary with links:
  - [bpf-next,1/4] libbpf: append "..." in fixed up log if CO-RE spec is truncated
    https://git.kernel.org/bpf/bpf-next/c/b198881d4b4c
  - [bpf-next,2/4] libbpf: use libbpf_mem_ensure() when allocating new map
    https://git.kernel.org/bpf/bpf-next/c/69721203b1f3
  - [bpf-next,3/4] libbpf: allow to opt-out from creating BPF maps
    https://git.kernel.org/bpf/bpf-next/c/ec41817b4af5
  - [bpf-next,4/4] selftests/bpf: test bpf_map__set_autocreate() and related log fixup logic
    https://git.kernel.org/bpf/bpf-next/c/68964e155677

You are awesome, thank you!
-- 
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html


