bpf.vger.kernel.org archive mirror
* [PATCH v2 bpf-next 0/2] libbpf: capability for resizing datasec maps
@ 2023-05-10 22:33 JP Kobryn
  2023-05-10 22:33 ` [PATCH v2 bpf-next 1/2] libbpf: add " JP Kobryn
  2023-05-10 22:33 ` [PATCH v2 bpf-next 2/2] libbpf: selftests " JP Kobryn
  0 siblings, 2 replies; 5+ messages in thread
From: JP Kobryn @ 2023-05-10 22:33 UTC (permalink / raw)
  To: bpf, andrii; +Cc: kernel-team, inwardvessel

Because datasec maps such as bss, data, and rodata are memory mapped,
they cannot be resized with bpf_map__set_value_size() the way
non-datasec maps can. This series adds support for resizing datasec
maps by remapping the mapped regions as needed and also adjusting the
associated BTF info where possible.

The thought behind this is to allow for use cases where a given datasec
needs to scale with, for example, the number of CPUs present. A bpf program
can have a global array in a data section with an initial length and
before loading the bpf program, the array length could be extended to
match the CPU count. The selftests included in this series perform this
scaling to an arbitrary value to demonstrate how it can work.
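As a rough sketch, the user-side flow might look like the following. The
helper below and the skeleton usage described in its comment are
illustrative only, not part of this series:

```c
#include <unistd.h>

/* Illustrative sketch: compute the new datasec value size so the
 * trailing array gets one slot per CPU. In a real program the result
 * would be passed to bpf_map__set_value_size(skel->maps.bss, ...)
 * after <skel>__open() and before <skel>__load(). */
static size_t compute_value_size(size_t fixed_sz, size_t elem_sz)
{
	/* stand-in for libbpf_num_possible_cpus() */
	long ncpus = sysconf(_SC_NPROCESSORS_CONF);

	if (ncpus < 1)
		ncpus = 1;
	return fixed_sz + elem_sz * (size_t)ncpus;
}
```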

JP Kobryn (2):
  add capability for resizing datasec maps
  selftests for resizing datasec maps

 tools/lib/bpf/libbpf.c                        | 158 +++++++++++-
 .../bpf/prog_tests/global_map_resize.c        | 236 ++++++++++++++++++
 .../bpf/progs/test_global_map_resize.c        |  58 +++++
 3 files changed, 451 insertions(+), 1 deletion(-)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/global_map_resize.c
 create mode 100644 tools/testing/selftests/bpf/progs/test_global_map_resize.c

-- 
2.40.0



* [PATCH v2 bpf-next 1/2] libbpf: add capability for resizing datasec maps
  2023-05-10 22:33 [PATCH v2 bpf-next 0/2] libbpf: capability for resizing datasec maps JP Kobryn
@ 2023-05-10 22:33 ` JP Kobryn
  2023-05-16 21:10   ` Andrii Nakryiko
  2023-05-10 22:33 ` [PATCH v2 bpf-next 2/2] libbpf: selftests " JP Kobryn
  1 sibling, 1 reply; 5+ messages in thread
From: JP Kobryn @ 2023-05-10 22:33 UTC (permalink / raw)
  To: bpf, andrii; +Cc: kernel-team, inwardvessel

This patch updates bpf_map__set_value_size() so that if the given map is
memory mapped, it will attempt to resize the mapped region. Initial
contents of the mapped region are preserved. BTF is not required, but
after the mapping is resized an attempt is made to adjust the associated
BTF information if the following criteria are met:
 - BTF info is present
 - the map is a datasec
 - the final variable in the datasec is an array

... the resulting BTF info will be updated so that the final array
variable is associated with a new BTF array type sized to cover the
requested size.

Note that the initial resizing of the memory mapped region can succeed
while the subsequent BTF adjustment can fail. In this case, BTF info is
dropped from the map by clearing the key and value type.

Signed-off-by: JP Kobryn <inwardvessel@gmail.com>
---
 tools/lib/bpf/libbpf.c | 158 ++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 157 insertions(+), 1 deletion(-)

diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index 1cbacf9e71f3..50cfe2bd4ba0 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -1510,6 +1510,39 @@ static size_t bpf_map_mmap_sz(const struct bpf_map *map)
 	return map_sz;
 }
 
+static int bpf_map_mmap_resize(struct bpf_map *map, size_t old_sz, size_t new_sz)
+{
+	void *mmaped;
+
+	if (!map->mmaped)
+		return -EINVAL;
+
+	if (old_sz == new_sz)
+		return 0;
+
+	mmaped = mmap(NULL, new_sz, PROT_READ | PROT_WRITE,
+			MAP_SHARED | MAP_ANONYMOUS, -1, 0);
+	if (mmaped == MAP_FAILED)
+		return libbpf_err(-errno);
+
+	/* copy pre-existing contents to new region,
+	 * using the minimum of old/new size
+	 */
+	memcpy(mmaped, map->mmaped, min(old_sz, new_sz));
+
+	if (munmap(map->mmaped, old_sz)) {
+		pr_warn("map '%s': failed to unmap\n", bpf_map__name(map));
+		if (munmap(mmaped, new_sz))
+			pr_warn("map '%s': failed to unmap temp region\n",
+					bpf_map__name(map));
+		return libbpf_err(-errno);
+	}
+
+	map->mmaped = mmaped;
+
+	return 0;
+}
+
 static char *internal_map_name(struct bpf_object *obj, const char *real_name)
 {
 	char map_name[BPF_OBJ_NAME_LEN], *p;
@@ -9412,12 +9445,135 @@ __u32 bpf_map__value_size(const struct bpf_map *map)
 	return map->def.value_size;
 }
 
+static int map_btf_datasec_resize(struct bpf_map *map, __u32 size)
+{
+	int err;
+	int i, vlen;
+	struct btf *btf;
+	const struct btf_type *array_type, *array_element_type;
+	struct btf_type *datasec_type, *var_type;
+	struct btf_var_secinfo *var;
+	const struct btf_array *array;
+	__u32 offset, nr_elements, new_array_id;
+
+	/* check btf existence */
+	btf = bpf_object__btf(map->obj);
+	if (!btf)
+		return -ENOENT;
+
+	/* verify map is datasec */
+	datasec_type = btf_type_by_id(btf, bpf_map__btf_value_type_id(map));
+	if (!btf_is_datasec(datasec_type)) {
+		pr_warn("map '%s': attempted to resize but map is not a datasec\n",
+				bpf_map__name(map));
+		return -EINVAL;
+	}
+
+	/* verify datasec has at least one var */
+	vlen = btf_vlen(datasec_type);
+	if (vlen == 0) {
+		pr_warn("map '%s': attempted to resize but map vlen == 0\n",
+				bpf_map__name(map));
+		return -EINVAL;
+	}
+
+	/* walk to the last var in the datasec,
+	 * increasing the offset as we pass each var
+	 */
+	var = btf_var_secinfos(datasec_type);
+	offset = 0;
+	for (i = 0; i < vlen - 1; i++) {
+		offset += var->size;
+		var++;
+	}
+
+	/* verify last var in the datasec is an array */
+	var_type = btf_type_by_id(btf, var->type);
+	array_type = skip_mods_and_typedefs(btf, var_type->type, NULL);
+	if (!btf_is_array(array_type)) {
+		pr_warn("map '%s': cannot be resized last var must be array\n",
+				bpf_map__name(map));
+		return -EINVAL;
+	}
+
+	/* verify request size aligns with array */
+	array = btf_array(array_type);
+	array_element_type = btf_type_by_id(btf, array->type);
+	if ((size - offset) % array_element_type->size != 0) {
+		pr_warn("map '%s': attempted to resize but requested size does not align\n",
+				bpf_map__name(map));
+		return -EINVAL;
+	}
+
+	/* create a new array based on the existing array,
+	 * but with new length
+	 */
+	nr_elements = (size - offset) / array_element_type->size;
+	new_array_id = btf__add_array(btf, array->index_type, array->type,
+			nr_elements);
+	if (new_array_id < 0) {
+		pr_warn("map '%s': failed to create new array\n",
+				bpf_map__name(map));
+		err = new_array_id;
+		return err;
+	}
+
+	/* adding a new btf type invalidates existing pointers to btf objects,
+	 * so refresh pointers before proceeding
+	 */
+	datasec_type = btf_type_by_id(btf, map->btf_value_type_id);
+	var = btf_var_secinfos(datasec_type);
+	for (i = 0; i < vlen - 1; i++)
+		var++;
+	var_type = btf_type_by_id(btf, var->type);
+
+	/* finally update btf info */
+	datasec_type->size = size;
+	var->size = size - offset;
+	var_type->type = new_array_id;
+
+	return 0;
+}
+
 int bpf_map__set_value_size(struct bpf_map *map, __u32 size)
 {
+	int err;
+	__u32 old_size;
+
 	if (map->fd >= 0)
 		return libbpf_err(-EBUSY);
-	map->def.value_size = size;
+
+	old_size = map->def.value_size;
+
+	if (map->mmaped) {
+		size_t mmap_old_sz, mmap_new_sz;
+
+		mmap_old_sz = bpf_map_mmap_sz(map);
+		map->def.value_size = size;
+		mmap_new_sz = bpf_map_mmap_sz(map);
+
+		err = bpf_map_mmap_resize(map, mmap_old_sz, mmap_new_sz);
+		if (err) {
+			pr_warn("map '%s': failed to resize memory mapped region\n",
+					bpf_map__name(map));
+			goto err_out;
+		}
+		err = map_btf_datasec_resize(map, size);
+		if (err && err != -ENOENT) {
+			pr_warn("map '%s': failed to adjust btf for resized map. dropping btf info\n",
+					bpf_map__name(map));
+			map->btf_value_type_id = 0;
+			map->btf_key_type_id = 0;
+		}
+	} else {
+		map->def.value_size = size;
+	}
+
 	return 0;
+
+err_out:
+	map->def.value_size = old_size;
+	return libbpf_err(err);
 }
 
 __u32 bpf_map__btf_key_type_id(const struct bpf_map *map)
-- 
2.40.0



* [PATCH v2 bpf-next 2/2] libbpf: selftests for resizing datasec maps
  2023-05-10 22:33 [PATCH v2 bpf-next 0/2] libbpf: capability for resizing datasec maps JP Kobryn
  2023-05-10 22:33 ` [PATCH v2 bpf-next 1/2] libbpf: add " JP Kobryn
@ 2023-05-10 22:33 ` JP Kobryn
  2023-05-16 21:18   ` Andrii Nakryiko
  1 sibling, 1 reply; 5+ messages in thread
From: JP Kobryn @ 2023-05-10 22:33 UTC (permalink / raw)
  To: bpf, andrii; +Cc: kernel-team, inwardvessel

This patch adds test coverage for resizing datasec maps. The first two
subtests resize the bss and custom data sections. In both cases, an
initial array (of length one) has its element set to one. After resizing,
the rest of the array is filled with ones as well. A BPF program is then
run to sum the respective arrays, and on the userspace side the sum is
verified to equal the number of elements.
The third subtest attempts to perform resizing under conditions that
will result in either the resize failing or the BTF info being dropped.

Signed-off-by: JP Kobryn <inwardvessel@gmail.com>
---
 .../bpf/prog_tests/global_map_resize.c        | 236 ++++++++++++++++++
 .../bpf/progs/test_global_map_resize.c        |  58 +++++
 2 files changed, 294 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/global_map_resize.c
 create mode 100644 tools/testing/selftests/bpf/progs/test_global_map_resize.c

diff --git a/tools/testing/selftests/bpf/prog_tests/global_map_resize.c b/tools/testing/selftests/bpf/prog_tests/global_map_resize.c
new file mode 100644
index 000000000000..58961789d0b3
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/global_map_resize.c
@@ -0,0 +1,236 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2023 Meta Platforms, Inc. and affiliates. */
+#include <errno.h>
+#include <sys/syscall.h>
+#include <unistd.h>
+
+#include "test_global_map_resize.skel.h"
+#include "test_progs.h"
+
+static void run_prog_bss_array_sum(void)
+{
+	(void)syscall(__NR_getpid);
+}
+
+static void run_prog_data_array_sum(void)
+{
+	(void)syscall(__NR_getuid);
+}
+
+static void global_map_resize_bss_subtest(void)
+{
+	int err;
+	struct test_global_map_resize *skel;
+	struct bpf_map *map;
+	const __u32 desired_sz = sizeof(skel->bss->sum) + (__u32)sysconf(_SC_PAGE_SIZE) * 2;
+	size_t array_len, actual_sz;
+
+	skel = test_global_map_resize__open();
+	if (!ASSERT_OK_PTR(skel, "test_global_map_resize__open"))
+		goto teardown;
+
+	/* set some initial value before resizing.
+	 * it is expected this non-zero value will be preserved
+	 * while resizing.
+	 */
+	skel->bss->array[0] = 1;
+
+	/* resize map value and verify the new size */
+	map = skel->maps.bss;
+	err = bpf_map__set_value_size(map, desired_sz);
+	if (!ASSERT_OK(err, "bpf_map__set_value_size"))
+		goto teardown;
+	if (!ASSERT_EQ(bpf_map__value_size(map), desired_sz, "resize"))
+		goto teardown;
+
+	/* set the expected number of elements based on the resized array */
+	array_len = (desired_sz - sizeof(skel->bss->sum)) /
+		(__u32)sizeof(skel->bss->array[0]);
+	if (!ASSERT_GT(array_len, 1, "array_len"))
+		goto teardown;
+
+	skel->bss =
+		(struct test_global_map_resize__bss *)bpf_map__initial_value(
+				skel->maps.bss, &actual_sz);
+	if (!ASSERT_OK_PTR(skel->bss, "bpf_map__initial_value (ptr)"))
+		goto teardown;
+	if (!ASSERT_EQ(actual_sz, desired_sz, "bpf_map__initial_value (size)"))
+		goto teardown;
+
+	/* fill the newly resized array with ones,
+	 * skipping the first element which was previously set
+	 */
+	for (int i = 1; i < array_len; i++)
+		skel->bss->array[i] = 1;
+
+	/* set global const values before loading */
+	skel->rodata->pid = getpid();
+	skel->rodata->bss_array_len = array_len;
+	skel->rodata->data_array_len = 1;
+
+	err = test_global_map_resize__load(skel);
+	if (!ASSERT_OK(err, "test_global_map_resize__load"))
+		goto teardown;
+	err = test_global_map_resize__attach(skel);
+	if (!ASSERT_OK(err, "test_global_map_resize__attach"))
+		goto teardown;
+
+	/* run the bpf program which will sum the contents of the array.
+	 * since the array was filled with ones, verify the sum equals array_len
+	 */
+	run_prog_bss_array_sum();
+	if (!ASSERT_EQ(skel->bss->sum, array_len, "sum"))
+		goto teardown;
+
+teardown:
+	test_global_map_resize__destroy(skel);
+}
+
+static void global_map_resize_data_subtest(void)
+{
+	int err;
+	struct test_global_map_resize *skel;
+	struct bpf_map *map;
+	const __u32 desired_sz = (__u32)sysconf(_SC_PAGE_SIZE) * 2;
+	size_t array_len, actual_sz;
+
+	skel = test_global_map_resize__open();
+	if (!ASSERT_OK_PTR(skel, "test_global_map_resize__open"))
+		goto teardown;
+
+	/* set some initial value before resizing.
+	 * it is expected this non-zero value will be preserved
+	 * while resizing.
+	 */
+	skel->data_custom->my_array[0] = 1;
+
+	/* resize map value and verify the new size */
+	map = skel->maps.data_custom;
+	err = bpf_map__set_value_size(map, desired_sz);
+	if (!ASSERT_OK(err, "bpf_map__set_value_size"))
+		goto teardown;
+	if (!ASSERT_EQ(bpf_map__value_size(map), desired_sz, "resize"))
+		goto teardown;
+
+	/* set the expected number of elements based on the resized array */
+	array_len = (desired_sz - sizeof(skel->bss->sum)) /
+		(__u32)sizeof(skel->data_custom->my_array[0]);
+	if (!ASSERT_GT(array_len, 1, "array_len"))
+		goto teardown;
+
+	skel->data_custom =
+		(struct test_global_map_resize__data_custom *)bpf_map__initial_value(
+				skel->maps.data_custom, &actual_sz);
+	if (!ASSERT_OK_PTR(skel->data_custom, "bpf_map__initial_value (ptr)"))
+		goto teardown;
+	if (!ASSERT_EQ(actual_sz, desired_sz, "bpf_map__initial_value (size)"))
+		goto teardown;
+
+	/* fill the newly resized array with ones,
+	 * skipping the first element which was previously set
+	 */
+	for (int i = 1; i < array_len; i++)
+		skel->data_custom->my_array[i] = 1;
+
+	/* set global const values before loading */
+	skel->rodata->pid = getpid();
+	skel->rodata->bss_array_len = 1;
+	skel->rodata->data_array_len = array_len;
+
+	err = test_global_map_resize__load(skel);
+	if (!ASSERT_OK(err, "test_global_map_resize__load"))
+		goto teardown;
+	err = test_global_map_resize__attach(skel);
+	if (!ASSERT_OK(err, "test_global_map_resize__attach"))
+		goto teardown;
+
+	/* run the bpf program which will sum the contents of the array.
+	 * since the array was filled with ones,verify the sum equals array_len
+	 */
+	run_prog_data_array_sum();
+	if (!ASSERT_EQ(skel->bss->sum, array_len, "sum"))
+		goto teardown;
+
+teardown:
+	test_global_map_resize__destroy(skel);
+}
+
+static void global_map_resize_invalid_subtest(void)
+{
+	int err;
+	struct test_global_map_resize *skel;
+	struct bpf_map *map;
+	__u32 element_sz, desired_sz;
+
+	skel = test_global_map_resize__open();
+	if (!ASSERT_OK_PTR(skel, "test_global_map_resize__open"))
+		return;
+
+	 /* attempt to resize a global datasec map to size
+	  * which does NOT align with array
+	  */
+	map = skel->maps.data_custom;
+	if (!ASSERT_NEQ(bpf_map__btf_value_type_id(map), 0, ".data.custom initial btf"))
+		goto teardown;
+	/* set desired size a fraction of element size beyond an aligned size */
+	element_sz = (__u32)sizeof(skel->data_custom->my_array[0]);
+	desired_sz = element_sz + element_sz / 2;
+	/* confirm desired size does NOT align with array */
+	if (!ASSERT_NEQ(desired_sz % element_sz, 0, "my_array alignment"))
+		goto teardown;
+	err = bpf_map__set_value_size(map, desired_sz);
+	/* confirm resize is OK but BTF info is dropped */
+	if (!ASSERT_OK(err, ".data.custom bpf_map__set_value_size") ||
+		!ASSERT_EQ(bpf_map__btf_key_type_id(map), 0, ".data.custom drop btf key") ||
+		!ASSERT_EQ(bpf_map__btf_value_type_id(map), 0, ".data.custom drop btf val"))
+		goto teardown;
+
+	/* attempt to resize a global datasec map
+	 * whose only var is NOT an array
+	 */
+	map = skel->maps.data_non_array;
+	if (!ASSERT_NEQ(bpf_map__btf_value_type_id(map), 0, ".data.non_array initial btf"))
+		goto teardown;
+	/* set desired size to arbitrary value */
+	desired_sz = 1024;
+	err = bpf_map__set_value_size(map, desired_sz);
+	/* confirm resize is OK but BTF info is dropped */
+	if (!ASSERT_OK(err, ".data.non_array bpf_map__set_value_size") ||
+		!ASSERT_EQ(bpf_map__btf_key_type_id(map), 0, ".data.non_array drop btf key") ||
+		!ASSERT_EQ(bpf_map__btf_value_type_id(map), 0, ".data.non_array drop btf val"))
+		goto teardown;
+
+	/* attempt to resize a global datasec map
+	 * whose last var is NOT an array
+	 */
+	map = skel->maps.data_array_not_last;
+	if (!ASSERT_NEQ(bpf_map__btf_value_type_id(map), 0, ".data.array_not_last initial btf"))
+		goto teardown;
+	/* set desired size to a multiple of element size */
+	element_sz = (__u32)sizeof(skel->data_array_not_last->my_array_first[0]);
+	desired_sz = element_sz * 8;
+	/* confirm desired size aligns with array */
+	if (!ASSERT_EQ(desired_sz % element_sz, 0, "my_array_first alignment"))
+		goto teardown;
+	err = bpf_map__set_value_size(map, desired_sz);
+	/* confirm resize is OK but BTF info is dropped */
+	if (!ASSERT_OK(err, ".data.array_not_last bpf_map__set_value_size") ||
+		!ASSERT_EQ(bpf_map__btf_key_type_id(map), 0, ".data.array_not_last drop btf key") ||
+		!ASSERT_EQ(bpf_map__btf_value_type_id(map), 0, ".data.array_not_last drop btf val"))
+		goto teardown;
+
+teardown:
+	test_global_map_resize__destroy(skel);
+}
+
+void test_global_map_resize(void)
+{
+	if (test__start_subtest("global_map_resize_bss"))
+		global_map_resize_bss_subtest();
+
+	if (test__start_subtest("global_map_resize_data"))
+		global_map_resize_data_subtest();
+
+	if (test__start_subtest("global_map_resize_invalid"))
+		global_map_resize_invalid_subtest();
+}
diff --git a/tools/testing/selftests/bpf/progs/test_global_map_resize.c b/tools/testing/selftests/bpf/progs/test_global_map_resize.c
new file mode 100644
index 000000000000..2588f2384246
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/test_global_map_resize.c
@@ -0,0 +1,58 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2023 Meta Platforms, Inc. and affiliates. */
+
+#include "vmlinux.h"
+#include <bpf/bpf_helpers.h>
+
+char _license[] SEC("license") = "GPL";
+
+/* rodata section */
+const volatile pid_t pid;
+const volatile size_t bss_array_len;
+const volatile size_t data_array_len;
+
+/* bss section */
+int sum = 0;
+int array[1];
+
+/* custom data section */
+int my_array[1] SEC(".data.custom");
+
+/* custom data section which should NOT be resizable,
+ * since it contains a single var which is not an array
+ */
+int my_int SEC(".data.non_array");
+
+/* custom data section which should NOT be resizable,
+ * since its last var is not an array
+ */
+int my_array_first[1] SEC(".data.array_not_last");
+int my_int_last SEC(".data.array_not_last");
+
+SEC("tp/syscalls/sys_enter_getpid")
+int bss_array_sum(void *ctx)
+{
+	if (pid != (bpf_get_current_pid_tgid() >> 32))
+		return 0;
+
+	sum = 0;
+
+	for (size_t i = 0; i < bss_array_len; ++i)
+		sum += array[i];
+
+	return 0;
+}
+
+SEC("tp/syscalls/sys_enter_getuid")
+int data_array_sum(void *ctx)
+{
+	if (pid != (bpf_get_current_pid_tgid() >> 32))
+		return 0;
+
+	sum = 0;
+
+	for (size_t i = 0; i < data_array_len; ++i)
+		sum += my_array[i];
+
+	return 0;
+}
-- 
2.40.0



* Re: [PATCH v2 bpf-next 1/2] libbpf: add capability for resizing datasec maps
  2023-05-10 22:33 ` [PATCH v2 bpf-next 1/2] libbpf: add " JP Kobryn
@ 2023-05-16 21:10   ` Andrii Nakryiko
  0 siblings, 0 replies; 5+ messages in thread
From: Andrii Nakryiko @ 2023-05-16 21:10 UTC (permalink / raw)
  To: JP Kobryn; +Cc: bpf, andrii, kernel-team

On Wed, May 10, 2023 at 3:33 PM JP Kobryn <inwardvessel@gmail.com> wrote:
>
> This patch updates bpf_map__set_value_size() so that if the given map is
> memory mapped, it will attempt to resize the mapped region. Initial
> contents of the mapped region are preserved. BTF is not required, but
> after the mapping is resized an attempt is made to adjust the associated
> BTF information if the following criteria are met:
>  - BTF info is present
>  - the map is a datasec
>  - the final variable in the datasec is an array
>
> ... the resulting BTF info will be updated so that the final array
> variable is associated with a new BTF array type sized to cover the
> requested size.
>
> Note that the initial resizing of the memory mapped region can succeed
> while the subsequent BTF adjustment can fail. In this case, BTF info is
> dropped from the map by clearing the key and value type.
>
> Signed-off-by: JP Kobryn <inwardvessel@gmail.com>
> ---

It's coming along pretty nicely; still a few bugs and unnecessary
complications, but they're easy to fix.

Can you please also add a doc-comment in libbpf.h for
bpf_map__set_value_size() describing what this API is doing and
explicitly point out that once mmap-able BPF map is resized, all
previous pointers returned from bpf_map__initial_value() and BPF
skeleton pointers for corresponding data section will be invalidated
and have to be re-initialized. This is an important and subtle point,
best to call it out very explicitly.


>  tools/lib/bpf/libbpf.c | 158 ++++++++++++++++++++++++++++++++++++++++-
>  1 file changed, 157 insertions(+), 1 deletion(-)
>
> diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
> index 1cbacf9e71f3..50cfe2bd4ba0 100644
> --- a/tools/lib/bpf/libbpf.c
> +++ b/tools/lib/bpf/libbpf.c
> @@ -1510,6 +1510,39 @@ static size_t bpf_map_mmap_sz(const struct bpf_map *map)
>         return map_sz;
>  }
>
> +static int bpf_map_mmap_resize(struct bpf_map *map, size_t old_sz, size_t new_sz)
> +{
> +       void *mmaped;
> +
> +       if (!map->mmaped)
> +               return -EINVAL;
> +
> +       if (old_sz == new_sz)
> +               return 0;
> +
> +       mmaped = mmap(NULL, new_sz, PROT_READ | PROT_WRITE,
> +                       MAP_SHARED | MAP_ANONYMOUS, -1, 0);
> +       if (mmaped == MAP_FAILED)
> +               return libbpf_err(-errno);

no need to use libbpf_err() here, it's internal function

> +
> +       /* copy pre-existing contents to new region,
> +        * using the minimum of old/new size
> +        */
> +       memcpy(mmaped, map->mmaped, min(old_sz, new_sz));
> +
> +       if (munmap(map->mmaped, old_sz)) {
> +               pr_warn("map '%s': failed to unmap\n", bpf_map__name(map));
> +               if (munmap(mmaped, new_sz))
> +                       pr_warn("map '%s': failed to unmap temp region\n",
> +                                       bpf_map__name(map));
> +               return libbpf_err(-errno);
> +       }

this seems a bit too paranoid. Let's just unconditionally
`munmap(map->mmaped, old_sz);` and that's it. Don't add warning and
definitely don't try to unmap newly mapped region. As is this code
could lead to double-free, effectively.

> +
> +       map->mmaped = mmaped;
> +
> +       return 0;
> +}
> +
>  static char *internal_map_name(struct bpf_object *obj, const char *real_name)
>  {
>         char map_name[BPF_OBJ_NAME_LEN], *p;
> @@ -9412,12 +9445,135 @@ __u32 bpf_map__value_size(const struct bpf_map *map)
>         return map->def.value_size;
>  }
>
> +static int map_btf_datasec_resize(struct bpf_map *map, __u32 size)
> +{
> +       int err;
> +       int i, vlen;
> +       struct btf *btf;
> +       const struct btf_type *array_type, *array_element_type;
> +       struct btf_type *datasec_type, *var_type;
> +       struct btf_var_secinfo *var;
> +       const struct btf_array *array;
> +       __u32 offset, nr_elements, new_array_id;
> +
> +       /* check btf existence */
> +       btf = bpf_object__btf(map->obj);
> +       if (!btf)
> +               return -ENOENT;
> +
> +       /* verify map is datasec */
> +       datasec_type = btf_type_by_id(btf, bpf_map__btf_value_type_id(map));
> +       if (!btf_is_datasec(datasec_type)) {
> +               pr_warn("map '%s': attempted to resize but map is not a datasec\n",

"map value type is not a datasec". The map itself is not a datasec; its value type is.

> +                               bpf_map__name(map));
> +               return -EINVAL;
> +       }
> +
> +       /* verify datasec has at least one var */
> +       vlen = btf_vlen(datasec_type);
> +       if (vlen == 0) {
> +               pr_warn("map '%s': attempted to resize but map vlen == 0\n",

maybe "map value datasec is empty"? "vlen" is not necessarily
something that users will easily recognize and understand

> +                               bpf_map__name(map));
> +               return -EINVAL;
> +       }
> +
> +       /* walk to the last var in the datasec,
> +        * increasing the offset as we pass each var
> +        */
> +       var = btf_var_secinfos(datasec_type);
> +       offset = 0;
> +       for (i = 0; i < vlen - 1; i++) {
> +               offset += var->size;
> +               var++;
> +       }

it's both incorrect and overcomplicated. Just:

var = &btf_var_secinfos(datasec_type)[vlen - 1];
offset = var->offset;

> +
> +       /* verify last var in the datasec is an array */
> +       var_type = btf_type_by_id(btf, var->type);
> +       array_type = skip_mods_and_typedefs(btf, var_type->type, NULL);
> +       if (!btf_is_array(array_type)) {
> +               pr_warn("map '%s': cannot be resized last var must be array\n",

"cannot be resized, last var must be an array"?

> +                               bpf_map__name(map));
> +               return -EINVAL;
> +       }
> +
> +       /* verify request size aligns with array */
> +       array = btf_array(array_type);
> +       array_element_type = btf_type_by_id(btf, array->type);

not enough, need to skip_mods_and_typedefs() first, etc. But probably
simpler to just use btf__resolve_size(array->type)? And don't forget
to check that we get > 0 result, otherwise we run a risk of division
by zero below
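
A pure-arithmetic sketch of that validation, treating the element size
as the (possibly non-positive) result of btf__resolve_size() and
rejecting it before any division (names here are illustrative):

```c
/* Validate a requested datasec size against the last var's array:
 * elem_sz mimics btf__resolve_size()'s return value, which must be
 * checked for <= 0 before being used as a divisor. */
static int nr_elems_for(unsigned int req_sz, unsigned int off, long elem_sz)
{
	if (elem_sz <= 0)
		return -1;	/* bad or unresolved element size */
	if ((req_sz - off) % elem_sz != 0)
		return -1;	/* requested size does not align */
	return (req_sz - off) / elem_sz;
}
```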

> +       if ((size - offset) % array_element_type->size != 0) {
> +               pr_warn("map '%s': attempted to resize but requested size does not align\n",
> +                               bpf_map__name(map));
> +               return -EINVAL;
> +       }
> +
> +       /* create a new array based on the existing array,
> +        * but with new length
> +        */
> +       nr_elements = (size - offset) / array_element_type->size;
> +       new_array_id = btf__add_array(btf, array->index_type, array->type,
> +                       nr_elements);
> +       if (new_array_id < 0) {
> +               pr_warn("map '%s': failed to create new array\n",

this is a very unlikely error to happen, unless there is some bug (the
only legitimate reason is -ENOMEM, which we generally don't log
everywhere). So let's drop unnecessary pr_warn() here.

> +                               bpf_map__name(map));
> +               err = new_array_id;
> +               return err;
> +       }
> +
> +       /* adding a new btf type invalidates existing pointers to btf objects,
> +        * so refresh pointers before proceeding
> +        */
> +       datasec_type = btf_type_by_id(btf, map->btf_value_type_id);
> +       var = btf_var_secinfos(datasec_type);
> +       for (i = 0; i < vlen - 1; i++)
> +               var++;

as I showed above, &btf_var_secinfos(datasec_type)[vlen - 1], no need
for linear search

> +       var_type = btf_type_by_id(btf, var->type);
> +
> +       /* finally update btf info */
> +       datasec_type->size = size;
> +       var->size = size - offset;
> +       var_type->type = new_array_id;
> +
> +       return 0;
> +}
> +
>  int bpf_map__set_value_size(struct bpf_map *map, __u32 size)
>  {
> +       int err;
> +       __u32 old_size;
> +
>         if (map->fd >= 0)
>                 return libbpf_err(-EBUSY);
> -       map->def.value_size = size;
> +
> +       old_size = map->def.value_size;
> +
> +       if (map->mmaped) {
> +               size_t mmap_old_sz, mmap_new_sz;
> +
> +               mmap_old_sz = bpf_map_mmap_sz(map);
> +               map->def.value_size = size;
> +               mmap_new_sz = bpf_map_mmap_sz(map);

it's ugly that we need to modify map->def.value_size just to calculate
mmap region size. Let's add a helper that would take value_size and
max_entries explicitly and return mmap size? This will make this
function simpler as well as map->def.value_size will be updated once
at the very end, regardless of map->mmaped or not.
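
A sketch of such a helper, with the sizes passed in explicitly. The
rounding rules (value size up to 8 bytes, total up to the page size)
are assumptions about bpf_map_mmap_sz()'s behavior, not taken from the
patch:

```c
#include <unistd.h>

/* Compute the mmap region size from explicit value_sz/max_entries,
 * so callers don't have to mutate map->def.value_size first. */
static size_t mmap_sz_for(size_t value_sz, size_t max_entries, size_t page_sz)
{
	/* round value size up to 8 bytes, then scale by entries */
	size_t sz = (value_sz + 7) / 8 * 8 * max_entries;

	/* round the total up to a whole number of pages */
	return (sz + page_sz - 1) / page_sz * page_sz;
}
```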

> +
> +               err = bpf_map_mmap_resize(map, mmap_old_sz, mmap_new_sz);
> +               if (err) {
> +                       pr_warn("map '%s': failed to resize memory mapped region\n",
> +                                       bpf_map__name(map));
> +                       goto err_out;
> +               }
> +               err = map_btf_datasec_resize(map, size);
> +               if (err && err != -ENOENT) {
> +                       pr_warn("map '%s': failed to adjust btf for resized map. dropping btf info\n",

nit, either capitalize "Dropping", or (preferably) turn it into a
single sentence: "failed to adjust BTF for resized map, clearing BTF
key/value type info\n". Dropping is ambiguous and sounds more
dangerous than it really is


> +                                       bpf_map__name(map));
> +                       map->btf_value_type_id = 0;
> +                       map->btf_key_type_id = 0;
> +               }
> +       } else {
> +               map->def.value_size = size;
> +       }
> +
>         return 0;
> +
> +err_out:
> +       map->def.value_size = old_size;
> +       return libbpf_err(err);
>  }
>
>  __u32 bpf_map__btf_key_type_id(const struct bpf_map *map)
> --
> 2.40.0
>


* Re: [PATCH v2 bpf-next 2/2] libbpf: selftests for resizing datasec maps
  2023-05-10 22:33 ` [PATCH v2 bpf-next 2/2] libbpf: selftests " JP Kobryn
@ 2023-05-16 21:18   ` Andrii Nakryiko
  0 siblings, 0 replies; 5+ messages in thread
From: Andrii Nakryiko @ 2023-05-16 21:18 UTC (permalink / raw)
  To: JP Kobryn; +Cc: bpf, andrii, kernel-team

On Wed, May 10, 2023 at 3:33 PM JP Kobryn <inwardvessel@gmail.com> wrote:
>
> This patch adds test coverage for resizing datasec maps. The first two
> subtests resize the bss and custom data sections. In both cases, an
> initial array (of length one) has its element set to one. After resizing,
> the rest of the array is filled with ones as well. A BPF program is then
> run to sum the respective arrays, and on the userspace side the sum is
> verified to equal the number of elements.
> The third subtest attempts to perform resizing under conditions that
> will result in either the resize failing or the BTF info being dropped.
>
> Signed-off-by: JP Kobryn <inwardvessel@gmail.com>
> ---
>  .../bpf/prog_tests/global_map_resize.c        | 236 ++++++++++++++++++
>  .../bpf/progs/test_global_map_resize.c        |  58 +++++
>  2 files changed, 294 insertions(+)
>  create mode 100644 tools/testing/selftests/bpf/prog_tests/global_map_resize.c
>  create mode 100644 tools/testing/selftests/bpf/progs/test_global_map_resize.c
>
> diff --git a/tools/testing/selftests/bpf/prog_tests/global_map_resize.c b/tools/testing/selftests/bpf/prog_tests/global_map_resize.c
> new file mode 100644
> index 000000000000..58961789d0b3
> --- /dev/null
> +++ b/tools/testing/selftests/bpf/prog_tests/global_map_resize.c
> @@ -0,0 +1,236 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/* Copyright (c) 2023 Meta Platforms, Inc. and affiliates. */
> +#include <errno.h>
> +#include <sys/syscall.h>
> +#include <unistd.h>
> +
> +#include "test_global_map_resize.skel.h"
> +#include "test_progs.h"
> +
> +static void run_prog_bss_array_sum(void)
> +{
> +       (void)syscall(__NR_getpid);
> +}
> +
> +static void run_prog_data_array_sum(void)
> +{
> +       (void)syscall(__NR_getuid);
> +}
> +
> +static void global_map_resize_bss_subtest(void)
> +{
> +       int err;
> +       struct test_global_map_resize *skel;
> +       struct bpf_map *map;
> +       const __u32 desired_sz = sizeof(skel->bss->sum) + (__u32)sysconf(_SC_PAGE_SIZE) * 2;
> +       size_t array_len, actual_sz;
> +
> +       skel = test_global_map_resize__open();
> +       if (!ASSERT_OK_PTR(skel, "test_global_map_resize__open"))
> +               goto teardown;
> +
> +       /* set some initial value before resizing.
> +        * it is expected this non-zero value will be preserved
> +        * while resizing.
> +        */
> +       skel->bss->array[0] = 1;
> +
> +       /* resize map value and verify the new size */
> +       map = skel->maps.bss;
> +       err = bpf_map__set_value_size(map, desired_sz);
> +       if (!ASSERT_OK(err, "bpf_map__set_value_size"))
> +               goto teardown;
> +       if (!ASSERT_EQ(bpf_map__value_size(map), desired_sz, "resize"))
> +               goto teardown;
> +
> +       /* set the expected number of elements based on the resized array */
> +       array_len = (desired_sz - sizeof(skel->bss->sum)) /
> +               (__u32)sizeof(skel->bss->array[0]);

(__u32) cast is not necessary, overly pedantic here :) same above for
desired_sz initialization


> +       if (!ASSERT_GT(array_len, 1, "array_len"))
> +               goto teardown;
> +
> +       skel->bss =
> +               (struct test_global_map_resize__bss *)bpf_map__initial_value(
> +                               skel->maps.bss, &actual_sz);

another unnecessary cast making the code ugly, please drop the casting part

> +       if (!ASSERT_OK_PTR(skel->bss, "bpf_map__initial_value (ptr)"))
> +               goto teardown;
> +       if (!ASSERT_EQ(actual_sz, desired_sz, "bpf_map__initial_value (size)"))
> +               goto teardown;
> +
> +       /* fill the newly resized array with ones,
> +        * skipping the first element which was previously set
> +        */
> +       for (int i = 1; i < array_len; i++)
> +               skel->bss->array[i] = 1;
> +
> +       /* set global const values before loading */
> +       skel->rodata->pid = getpid();
> +       skel->rodata->bss_array_len = array_len;
> +       skel->rodata->data_array_len = 1;
> +
> +       err = test_global_map_resize__load(skel);
> +       if (!ASSERT_OK(err, "test_global_map_resize__load"))
> +               goto teardown;
> +       err = test_global_map_resize__attach(skel);
> +       if (!ASSERT_OK(err, "test_global_map_resize__attach"))
> +               goto teardown;
> +
> +       /* run the bpf program which will sum the contents of the array.
> +        * since the array was filled with ones, verify the sum equals array_len
> +        */
> +       run_prog_bss_array_sum();
> +       if (!ASSERT_EQ(skel->bss->sum, array_len, "sum"))
> +               goto teardown;
> +
> +teardown:
> +       test_global_map_resize__destroy(skel);
> +}
> +
> +static void global_map_resize_data_subtest(void)
> +{
> +       int err;
> +       struct test_global_map_resize *skel;
> +       struct bpf_map *map;
> +       const __u32 desired_sz = (__u32)sysconf(_SC_PAGE_SIZE) * 2;
> +       size_t array_len, actual_sz;
> +
> +       skel = test_global_map_resize__open();
> +       if (!ASSERT_OK_PTR(skel, "test_global_map_resize__open"))
> +               goto teardown;
> +
> +       /* set some initial value before resizing.
> +        * it is expected this non-zero value will be preserved
> +        * while resizing.
> +        */
> +       skel->data_custom->my_array[0] = 1;
> +
> +       /* resize map value and verify the new size */
> +       map = skel->maps.data_custom;
> +       err = bpf_map__set_value_size(map, desired_sz);
> +       if (!ASSERT_OK(err, "bpf_map__set_value_size"))
> +               goto teardown;
> +       if (!ASSERT_EQ(bpf_map__value_size(map), desired_sz, "resize"))
> +               goto teardown;
> +
> +       /* set the expected number of elements based on the resized array */
> +       array_len = (desired_sz - sizeof(skel->bss->sum)) /
> +               (__u32)sizeof(skel->data_custom->my_array[0]);
> +       if (!ASSERT_GT(array_len, 1, "array_len"))
> +               goto teardown;
> +
> +       skel->data_custom =
> +               (struct test_global_map_resize__data_custom *)bpf_map__initial_value(
> +                               skel->maps.data_custom, &actual_sz);

all the same points about pedantic and unnecessary casts, please simplify

> +       if (!ASSERT_OK_PTR(skel->data_custom, "bpf_map__initial_value (ptr)"))
> +               goto teardown;
> +       if (!ASSERT_EQ(actual_sz, desired_sz, "bpf_map__initial_value (size)"))
> +               goto teardown;
> +
> +       /* fill the newly resized array with ones,
> +        * skipping the first element which was previously set
> +        */
> +       for (int i = 1; i < array_len; i++)
> +               skel->data_custom->my_array[i] = 1;
> +
> +       /* set global const values before loading */
> +       skel->rodata->pid = getpid();
> +       skel->rodata->bss_array_len = 1;
> +       skel->rodata->data_array_len = array_len;
> +
> +       err = test_global_map_resize__load(skel);
> +       if (!ASSERT_OK(err, "test_global_map_resize__load"))
> +               goto teardown;
> +       err = test_global_map_resize__attach(skel);
> +       if (!ASSERT_OK(err, "test_global_map_resize__attach"))
> +               goto teardown;
> +
> +       /* run the bpf program which will sum the contents of the array.
> +        * since the array was filled with ones, verify the sum equals array_len
> +        */
> +       run_prog_data_array_sum();
> +       if (!ASSERT_EQ(skel->bss->sum, array_len, "sum"))
> +               goto teardown;
> +
> +teardown:
> +       test_global_map_resize__destroy(skel);
> +}
> +
> +static void global_map_resize_invalid_subtest(void)
> +{
> +       int err;
> +       struct test_global_map_resize *skel;
> +       struct bpf_map *map;
> +       __u32 element_sz, desired_sz;
> +
> +       skel = test_global_map_resize__open();
> +       if (!ASSERT_OK_PTR(skel, "test_global_map_resize__open"))
> +               return;
> +
> +        /* attempt to resize a global datasec map to size
> +         * which does NOT align with array
> +         */

indentation seems off, please double check

> +       map = skel->maps.data_custom;
> +       if (!ASSERT_NEQ(bpf_map__btf_value_type_id(map), 0, ".data.custom initial btf"))
> +               goto teardown;
> +       /* set desired size a fraction of element size beyond an aligned size */
> +       element_sz = (__u32)sizeof(skel->data_custom->my_array[0]);
> +       desired_sz = element_sz + element_sz / 2;
> +       /* confirm desired size does NOT align with array */
> +       if (!ASSERT_NEQ(desired_sz % element_sz, 0, "my_array alignment"))
> +               goto teardown;
> +       err = bpf_map__set_value_size(map, desired_sz);
> +       /* confirm resize is OK but BTF info is dropped */
> +       if (!ASSERT_OK(err, ".data.custom bpf_map__set_value_size") ||
> +               !ASSERT_EQ(bpf_map__btf_key_type_id(map), 0, ".data.custom drop btf key") ||
> +               !ASSERT_EQ(bpf_map__btf_value_type_id(map), 0, ".data.custom drop btf val"))
> +               goto teardown;
> +
> +       /* attempt to resize a global datasec map
> +        * whose only var is NOT an array
> +        */
> +       map = skel->maps.data_non_array;
> +       if (!ASSERT_NEQ(bpf_map__btf_value_type_id(map), 0, ".data.non_array initial btf"))
> +               goto teardown;
> +       /* set desired size to arbitrary value */
> +       desired_sz = 1024;
> +       err = bpf_map__set_value_size(map, desired_sz);
> +       /* confirm resize is OK but BTF info is dropped */
> +       if (!ASSERT_OK(err, ".data.non_array bpf_map__set_value_size") ||
> +               !ASSERT_EQ(bpf_map__btf_key_type_id(map), 0, ".data.non_array drop btf key") ||
> +               !ASSERT_EQ(bpf_map__btf_value_type_id(map), 0, ".data.non_array drop btf val"))
> +               goto teardown;
> +
> +       /* attempt to resize a global datasec map
> +        * whose last var is NOT an array
> +        */
> +       map = skel->maps.data_array_not_last;
> +       if (!ASSERT_NEQ(bpf_map__btf_value_type_id(map), 0, ".data.array_not_last initial btf"))
> +               goto teardown;
> +       /* set desired size to a multiple of element size */
> +       element_sz = (__u32)sizeof(skel->data_array_not_last->my_array_first[0]);
> +       desired_sz = element_sz * 8;
> +       /* confirm desired size aligns with array */
> +       if (!ASSERT_EQ(desired_sz % element_sz, 0, "my_array_first alignment"))
> +               goto teardown;
> +       err = bpf_map__set_value_size(map, desired_sz);
> +       /* confirm resize is OK but BTF info is dropped */
> +       if (!ASSERT_OK(err, ".data.array_not_last bpf_map__set_value_size") ||
> +               !ASSERT_EQ(bpf_map__btf_key_type_id(map), 0, ".data.array_not_last drop btf key") ||
> +               !ASSERT_EQ(bpf_map__btf_value_type_id(map), 0, ".data.array_not_last drop btf val"))
> +               goto teardown;
> +
> +teardown:
> +       test_global_map_resize__destroy(skel);
> +}
> +
> +void test_global_map_resize(void)
> +{
> +       if (test__start_subtest("global_map_resize_bss"))
> +               global_map_resize_bss_subtest();
> +
> +       if (test__start_subtest("global_map_resize_data"))
> +               global_map_resize_data_subtest();
> +
> +       if (test__start_subtest("global_map_resize_invalid"))
> +               global_map_resize_invalid_subtest();
> +}
> diff --git a/tools/testing/selftests/bpf/progs/test_global_map_resize.c b/tools/testing/selftests/bpf/progs/test_global_map_resize.c
> new file mode 100644
> index 000000000000..2588f2384246
> --- /dev/null
> +++ b/tools/testing/selftests/bpf/progs/test_global_map_resize.c
> @@ -0,0 +1,58 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/* Copyright (c) 2023 Meta Platforms, Inc. and affiliates. */
> +
> +#include "vmlinux.h"
> +#include <bpf/bpf_helpers.h>
> +
> +char _license[] SEC("license") = "GPL";
> +
> +/* rodata section */
> +const volatile pid_t pid;
> +const volatile size_t bss_array_len;
> +const volatile size_t data_array_len;
> +
> +/* bss section */
> +int sum = 0;
> +int array[1];
> +
> +/* custom data secton */

typo: section

> +int my_array[1] SEC(".data.custom");
> +
> +/* custom data section which should NOT be resizable,
> + * since it contains a single var which is not an array
> + */
> +int my_int SEC(".data.non_array");
> +
> +/* custom data section which should NOT be resizable,
> + * since its last var is not an array
> + */
> +int my_array_first[1] SEC(".data.array_not_last");
> +int my_int_last SEC(".data.array_not_last");
> +
> +SEC("tp/syscalls/sys_enter_getpid")
> +int bss_array_sum(void *ctx)
> +{
> +       if (pid != (bpf_get_current_pid_tgid() >> 32))
> +               return 0;
> +
> +       sum = 0;
> +
> +       for (size_t i = 0; i < bss_array_len; ++i)
> +               sum += array[i];
> +
> +       return 0;
> +}
> +
> +SEC("tp/syscalls/sys_enter_getuid")
> +int data_array_sum(void *ctx)
> +{
> +       if (pid != (bpf_get_current_pid_tgid() >> 32))
> +               return 0;
> +
> +       sum = 0;
> +
> +       for (size_t i = 0; i < data_array_len; ++i)
> +               sum += my_array[i];
> +
> +       return 0;
> +}
> --
> 2.40.0
>
