* [PATCH bpf-next v4 0/3] add batched ops for percpu array
@ 2021-04-15 17:46 Pedro Tammela
  2021-04-15 17:46 ` [PATCH bpf-next v4 1/3] bpf: add batched ops support " Pedro Tammela
                   ` (3 more replies)
  0 siblings, 4 replies; 10+ messages in thread
From: Pedro Tammela @ 2021-04-15 17:46 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Shuah Khan, Pedro Tammela, David Verbeiren,
	Matthieu Baerts, netdev, bpf, linux-kernel, linux-kselftest

This patchset introduces batched operations for the per-cpu variant of
the array map.

It also removes the percpu macros from 'bpf_util.h'. This change was
suggested by Andrii in an earlier iteration of this patchset.

The tests were updated to reflect these changes.

v3 -> v4:
- Prefer 'calloc()' over 'malloc()' in the batch ops tests
- Add missing 'static' keyword in a couple of test functions
- Rename 'offset' to 'cpu_offset' as suggested by Martin

v2 -> v3:
- Remove percpu macros as suggested by Andrii
- Update tests that used the per cpu macros

v1 -> v2:
- Amended the commit message to be more descriptive

Pedro Tammela (3):
  bpf: add batched ops support for percpu array
  bpf: selftests: remove percpu macros from bpf_util.h
  bpf: selftests: update array map tests for per-cpu batched ops

 kernel/bpf/arraymap.c                         |   2 +
 tools/testing/selftests/bpf/bpf_util.h        |   7 --
 .../bpf/map_tests/array_map_batch_ops.c       | 104 +++++++++++++-----
 .../bpf/map_tests/htab_map_batch_ops.c        |  87 +++++++--------
 .../selftests/bpf/prog_tests/map_init.c       |   9 +-
 tools/testing/selftests/bpf/test_maps.c       |  84 ++++++++------
 6 files changed, 173 insertions(+), 120 deletions(-)

-- 
2.25.1


* [PATCH bpf-next v4 1/3] bpf: add batched ops support for percpu array
  2021-04-15 17:46 [PATCH bpf-next v4 0/3] add batched ops for percpu array Pedro Tammela
@ 2021-04-15 17:46 ` Pedro Tammela
  2021-04-15 17:46 ` [PATCH bpf-next v4 2/3] bpf: selftests: remove percpu macros from bpf_util.h Pedro Tammela
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 10+ messages in thread
From: Pedro Tammela @ 2021-04-15 17:46 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Shuah Khan, Pedro Tammela, David Verbeiren,
	Matthieu Baerts, netdev, bpf, linux-kernel, linux-kselftest
  Cc: Jamal Hadi Salim

Use the infrastructure already provided by the 'generic_map_*_batch'
functions.

No tweak was needed, as the generic code transparently handles the
percpu variant.

As arrays don't have a delete operation, batched deletes fall back to
the default behaviour of returning an error to user space.
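
For illustration, here is a minimal user-space sketch of a batched
lookup this enables (not part of the patch; map_fd, max_entries and
err are assumed to exist, and the libbpf calls are the same ones the
selftests later in this series use):

    /* Per-cpu maps expose one 8-byte slot per possible CPU, per entry. */
    int nr_cpus = libbpf_num_possible_cpus();
    int *keys = calloc(max_entries, sizeof(int));
    __s64 *values = calloc(max_entries, nr_cpus * sizeof(__s64));
    __u32 count = max_entries;
    __u64 batch = 0;
    DECLARE_LIBBPF_OPTS(bpf_map_batch_opts, opts,
        .elem_flags = 0,
        .flags = 0,
    );

    /* First call passes in_batch == NULL; entry i's value for CPU j
     * lands at values[i * nr_cpus + j].
     */
    err = bpf_map_lookup_batch(map_fd, NULL, &batch, keys, values,
                               &count, &opts);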

Suggested-by: Jamal Hadi Salim <jhs@mojatatu.com>
Signed-off-by: Pedro Tammela <pctammela@mojatatu.com>
---
 kernel/bpf/arraymap.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/kernel/bpf/arraymap.c b/kernel/bpf/arraymap.c
index 463d25e1e67e..3c4105603f9d 100644
--- a/kernel/bpf/arraymap.c
+++ b/kernel/bpf/arraymap.c
@@ -698,6 +698,8 @@ const struct bpf_map_ops percpu_array_map_ops = {
 	.map_delete_elem = array_map_delete_elem,
 	.map_seq_show_elem = percpu_array_map_seq_show_elem,
 	.map_check_btf = array_map_check_btf,
+	.map_lookup_batch = generic_map_lookup_batch,
+	.map_update_batch = generic_map_update_batch,
 	.map_set_for_each_callback_args = map_set_for_each_callback_args,
 	.map_for_each_callback = bpf_for_each_array_elem,
 	.map_btf_name = "bpf_array",
-- 
2.25.1


* [PATCH bpf-next v4 2/3] bpf: selftests: remove percpu macros from bpf_util.h
  2021-04-15 17:46 [PATCH bpf-next v4 0/3] add batched ops for percpu array Pedro Tammela
  2021-04-15 17:46 ` [PATCH bpf-next v4 1/3] bpf: add batched ops support " Pedro Tammela
@ 2021-04-15 17:46 ` Pedro Tammela
  2021-04-20  1:17   ` Alexei Starovoitov
  2021-04-15 17:46 ` [PATCH bpf-next v4 3/3] bpf: selftests: update array map tests for per-cpu batched ops Pedro Tammela
  2021-04-16 21:35 ` [PATCH bpf-next v4 0/3] add batched ops for percpu array Martin KaFai Lau
  3 siblings, 1 reply; 10+ messages in thread
From: Pedro Tammela @ 2021-04-15 17:46 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Shuah Khan, Pedro Tammela, David Verbeiren,
	Matthieu Baerts, netdev, bpf, linux-kernel, linux-kselftest

Andrii suggested removing this abstraction layer and making the percpu
handling more explicit[1].

This patch also updates the tests that relied on the macros.

[1] https://lore.kernel.org/bpf/CAEf4BzYmj_ZPDq8Zi4dbntboJKRPU2TVopysBNrdd9foHTfLZw@mail.gmail.com/
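
For a concrete before/after sketch (illustrative only; the macro form
is what the hunks below remove, and the explicit form mirrors what the
updated tests do):

    int cpu, nr_cpus = libbpf_num_possible_cpus();

    /* Before: the macro hid the 8-byte-aligned per-CPU slot layout. */
    BPF_DECLARE_PERCPU(long, old_values);
    for (cpu = 0; cpu < nr_cpus; cpu++)
        bpf_percpu(old_values, cpu) = cpu + 100;

    /* After: the layout is explicit, one 8-byte slot per possible CPU. */
    __s64 *values = alloca(nr_cpus * sizeof(__s64));
    for (cpu = 0; cpu < nr_cpus; cpu++)
        values[cpu] = cpu + 100;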

Suggested-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Pedro Tammela <pctammela@mojatatu.com>
---
 tools/testing/selftests/bpf/bpf_util.h        |  7 --
 .../bpf/map_tests/htab_map_batch_ops.c        | 87 +++++++++----------
 .../selftests/bpf/prog_tests/map_init.c       |  9 +-
 tools/testing/selftests/bpf/test_maps.c       | 84 +++++++++++-------
 4 files changed, 96 insertions(+), 91 deletions(-)

diff --git a/tools/testing/selftests/bpf/bpf_util.h b/tools/testing/selftests/bpf/bpf_util.h
index a3352a64c067..105db3120ab4 100644
--- a/tools/testing/selftests/bpf/bpf_util.h
+++ b/tools/testing/selftests/bpf/bpf_util.h
@@ -20,13 +20,6 @@ static inline unsigned int bpf_num_possible_cpus(void)
 	return possible_cpus;
 }
 
-#define __bpf_percpu_val_align	__attribute__((__aligned__(8)))
-
-#define BPF_DECLARE_PERCPU(type, name)				\
-	struct { type v; /* padding */ } __bpf_percpu_val_align	\
-		name[bpf_num_possible_cpus()]
-#define bpf_percpu(name, cpu) name[(cpu)].v
-
 #ifndef ARRAY_SIZE
 # define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))
 #endif
diff --git a/tools/testing/selftests/bpf/map_tests/htab_map_batch_ops.c b/tools/testing/selftests/bpf/map_tests/htab_map_batch_ops.c
index 976bf415fbdd..8562600ad1df 100644
--- a/tools/testing/selftests/bpf/map_tests/htab_map_batch_ops.c
+++ b/tools/testing/selftests/bpf/map_tests/htab_map_batch_ops.c
@@ -7,65 +7,60 @@
 #include <bpf/bpf.h>
 #include <bpf/libbpf.h>
 
-#include <bpf_util.h>
 #include <test_maps.h>
 
+static int nr_cpus;
+
 static void map_batch_update(int map_fd, __u32 max_entries, int *keys,
-			     void *values, bool is_pcpu)
+			     __s64 *values, bool is_pcpu)
 {
-	typedef BPF_DECLARE_PERCPU(int, value);
-	value *v = NULL;
 	int i, j, err;
+	int cpu_offset = 0;
 	DECLARE_LIBBPF_OPTS(bpf_map_batch_opts, opts,
 		.elem_flags = 0,
 		.flags = 0,
 	);
 
-	if (is_pcpu)
-		v = (value *)values;
-
 	for (i = 0; i < max_entries; i++) {
 		keys[i] = i + 1;
-		if (is_pcpu)
-			for (j = 0; j < bpf_num_possible_cpus(); j++)
-				bpf_percpu(v[i], j) = i + 2 + j;
-		else
-			((int *)values)[i] = i + 2;
+		if (is_pcpu) {
+			cpu_offset = i * nr_cpus;
+			for (j = 0; j < nr_cpus; j++)
+				(values + cpu_offset)[j] = i + 2 + j;
+
+		} else {
+			values[i] = i + 2;
+		}
 	}
 
 	err = bpf_map_update_batch(map_fd, keys, values, &max_entries, &opts);
 	CHECK(err, "bpf_map_update_batch()", "error:%s\n", strerror(errno));
 }
 
-static void map_batch_verify(int *visited, __u32 max_entries,
-			     int *keys, void *values, bool is_pcpu)
+static void map_batch_verify(int *visited, __u32 max_entries, int *keys,
+			     __s64 *values, bool is_pcpu)
 {
-	typedef BPF_DECLARE_PERCPU(int, value);
-	value *v = NULL;
 	int i, j;
-
-	if (is_pcpu)
-		v = (value *)values;
+	int cpu_offset = 0;
 
 	memset(visited, 0, max_entries * sizeof(*visited));
 	for (i = 0; i < max_entries; i++) {
-
 		if (is_pcpu) {
-			for (j = 0; j < bpf_num_possible_cpus(); j++) {
-				CHECK(keys[i] + 1 + j != bpf_percpu(v[i], j),
+			cpu_offset = i * nr_cpus;
+			for (j = 0; j < nr_cpus; j++) {
+				__s64 value = (values + cpu_offset)[j];
+				CHECK(keys[i] + 1 + j != value,
 				      "key/value checking",
-				      "error: i %d j %d key %d value %d\n",
-				      i, j, keys[i], bpf_percpu(v[i],  j));
+				      "error: i %d j %d key %d value %lld\n", i,
+				      j, keys[i], value);
 			}
+
 		} else {
-			CHECK(keys[i] + 1 != ((int *)values)[i],
-			      "key/value checking",
-			      "error: i %d key %d value %d\n", i, keys[i],
-			      ((int *)values)[i]);
+			CHECK(keys[i] + 1 != values[i], "key/value checking",
+			      "error: i %d key %d value %lld\n", i, keys[i],
+			      values[i]);
 		}
-
 		visited[i] = 1;
-
 	}
 	for (i = 0; i < max_entries; i++) {
 		CHECK(visited[i] != 1, "visited checking",
@@ -73,13 +68,11 @@ static void map_batch_verify(int *visited, __u32 max_entries,
 	}
 }
 
-void __test_map_lookup_and_delete_batch(bool is_pcpu)
+static void __test_map_lookup_and_delete_batch(bool is_pcpu)
 {
 	__u32 batch, count, total, total_success;
-	typedef BPF_DECLARE_PERCPU(int, value);
 	int map_fd, *keys, *visited, key;
 	const __u32 max_entries = 10;
-	value pcpu_values[max_entries];
 	int err, step, value_size;
 	bool nospace_err;
 	void *values;
@@ -88,7 +81,7 @@ void __test_map_lookup_and_delete_batch(bool is_pcpu)
 		.map_type = is_pcpu ? BPF_MAP_TYPE_PERCPU_HASH :
 			    BPF_MAP_TYPE_HASH,
 		.key_size = sizeof(int),
-		.value_size = sizeof(int),
+		.value_size = sizeof(__s64),
 	};
 	DECLARE_LIBBPF_OPTS(bpf_map_batch_opts, opts,
 		.elem_flags = 0,
@@ -100,13 +93,13 @@ void __test_map_lookup_and_delete_batch(bool is_pcpu)
 	CHECK(map_fd == -1,
 	      "bpf_create_map_xattr()", "error:%s\n", strerror(errno));
 
-	value_size = is_pcpu ? sizeof(value) : sizeof(int);
-	keys = malloc(max_entries * sizeof(int));
+	value_size = sizeof(__s64);
 	if (is_pcpu)
-		values = pcpu_values;
-	else
-		values = malloc(max_entries * sizeof(int));
-	visited = malloc(max_entries * sizeof(int));
+		value_size *= nr_cpus;
+
+	keys = calloc(max_entries, sizeof(int));
+	values = calloc(max_entries, value_size);
+	visited = calloc(max_entries, sizeof(int));
 	CHECK(!keys || !values || !visited, "malloc()",
 	      "error:%s\n", strerror(errno));
 
@@ -203,7 +196,7 @@ void __test_map_lookup_and_delete_batch(bool is_pcpu)
 		CHECK(total != max_entries, "delete with steps",
 		      "total = %u, max_entries = %u\n", total, max_entries);
 
-		/* check map is empty, errono == ENOENT */
+		/* check map is empty, errno == ENOENT */
 		err = bpf_map_get_next_key(map_fd, NULL, &key);
 		CHECK(!err || errno != ENOENT, "bpf_map_get_next_key()",
 		      "error: %s\n", strerror(errno));
@@ -260,17 +253,16 @@ void __test_map_lookup_and_delete_batch(bool is_pcpu)
 	      "unexpected failure\n");
 	free(keys);
 	free(visited);
-	if (!is_pcpu)
-		free(values);
+	free(values);
 }
 
-void htab_map_batch_ops(void)
+static void htab_map_batch_ops(void)
 {
 	__test_map_lookup_and_delete_batch(false);
 	printf("test_%s:PASS\n", __func__);
 }
 
-void htab_percpu_map_batch_ops(void)
+static void htab_percpu_map_batch_ops(void)
 {
 	__test_map_lookup_and_delete_batch(true);
 	printf("test_%s:PASS\n", __func__);
@@ -278,6 +270,11 @@ void htab_percpu_map_batch_ops(void)
 
 void test_htab_map_batch_ops(void)
 {
+	nr_cpus = libbpf_num_possible_cpus();
+
+	CHECK(nr_cpus < 0, "nr_cpus checking",
+	      "error: get possible cpus failed");
+
 	htab_map_batch_ops();
 	htab_percpu_map_batch_ops();
 }
diff --git a/tools/testing/selftests/bpf/prog_tests/map_init.c b/tools/testing/selftests/bpf/prog_tests/map_init.c
index 14a31109dd0e..49386d0aa684 100644
--- a/tools/testing/selftests/bpf/prog_tests/map_init.c
+++ b/tools/testing/selftests/bpf/prog_tests/map_init.c
@@ -12,10 +12,7 @@ static int duration;
 
 typedef unsigned long long map_key_t;
 typedef unsigned long long map_value_t;
-typedef struct {
-	map_value_t v; /* padding */
-} __bpf_percpu_val_align pcpu_map_value_t;
-
+typedef __s64 pcpu_map_value_t;
 
 static int map_populate(int map_fd, int num)
 {
@@ -24,7 +21,7 @@ static int map_populate(int map_fd, int num)
 	map_key_t key;
 
 	for (i = 0; i < nr_cpus; i++)
-		bpf_percpu(value, i) = FILL_VALUE;
+		value[i] = FILL_VALUE;
 
 	for (key = 1; key <= num; key++) {
 		err = bpf_map_update_elem(map_fd, &key, value, BPF_NOEXIST);
@@ -103,7 +100,7 @@ static int check_values_one_cpu(pcpu_map_value_t *value, map_value_t expected)
 	map_value_t val;
 
 	for (i = 0; i < nr_cpus; i++) {
-		val = bpf_percpu(value, i);
+		val = value[i];
 		if (val) {
 			if (CHECK(val != expected, "map value",
 				  "unexpected for cpu %d: 0x%llx\n", i, val))
diff --git a/tools/testing/selftests/bpf/test_maps.c b/tools/testing/selftests/bpf/test_maps.c
index 51adc42b2b40..b8ce837a7ada 100644
--- a/tools/testing/selftests/bpf/test_maps.c
+++ b/tools/testing/selftests/bpf/test_maps.c
@@ -146,63 +146,69 @@ static void test_hashmap_sizes(unsigned int task, void *data)
 
 static void test_hashmap_percpu(unsigned int task, void *data)
 {
-	unsigned int nr_cpus = bpf_num_possible_cpus();
-	BPF_DECLARE_PERCPU(long, value);
+	int nr_cpus = libbpf_num_possible_cpus();
+	__s64 *values;
 	long long key, next_key, first_key;
 	int expected_key_mask = 0;
 	int fd, i;
 
+	if (nr_cpus < 0) {
+		printf("Failed get possible cpus\n");
+		exit(1);
+	}
+
+	values = alloca(nr_cpus * sizeof(__s64));
+
 	fd = bpf_create_map(BPF_MAP_TYPE_PERCPU_HASH, sizeof(key),
-			    sizeof(bpf_percpu(value, 0)), 2, map_flags);
+			    sizeof(*values), 2, map_flags);
 	if (fd < 0) {
 		printf("Failed to create hashmap '%s'!\n", strerror(errno));
 		exit(1);
 	}
 
 	for (i = 0; i < nr_cpus; i++)
-		bpf_percpu(value, i) = i + 100;
+		values[i] = i + 100;
 
 	key = 1;
 	/* Insert key=1 element. */
 	assert(!(expected_key_mask & key));
-	assert(bpf_map_update_elem(fd, &key, value, BPF_ANY) == 0);
+	assert(bpf_map_update_elem(fd, &key, values, BPF_ANY) == 0);
 	expected_key_mask |= key;
 
 	/* BPF_NOEXIST means add new element if it doesn't exist. */
-	assert(bpf_map_update_elem(fd, &key, value, BPF_NOEXIST) == -1 &&
+	assert(bpf_map_update_elem(fd, &key, values, BPF_NOEXIST) == -1 &&
 	       /* key=1 already exists. */
 	       errno == EEXIST);
 
 	/* -1 is an invalid flag. */
-	assert(bpf_map_update_elem(fd, &key, value, -1) == -1 &&
+	assert(bpf_map_update_elem(fd, &key, values, -1) == -1 &&
 	       errno == EINVAL);
 
 	/* Check that key=1 can be found. Value could be 0 if the lookup
 	 * was run from a different CPU.
 	 */
-	bpf_percpu(value, 0) = 1;
-	assert(bpf_map_lookup_elem(fd, &key, value) == 0 &&
-	       bpf_percpu(value, 0) == 100);
+	values[0] = 1;
+	assert(bpf_map_lookup_elem(fd, &key, values) == 0 && values[0] == 100);
 
 	key = 2;
 	/* Check that key=2 is not found. */
-	assert(bpf_map_lookup_elem(fd, &key, value) == -1 && errno == ENOENT);
+	assert(bpf_map_lookup_elem(fd, &key, values) == -1 && errno == ENOENT);
 
 	/* BPF_EXIST means update existing element. */
-	assert(bpf_map_update_elem(fd, &key, value, BPF_EXIST) == -1 &&
+	assert(bpf_map_update_elem(fd, &key, values, BPF_EXIST) == -1 &&
 	       /* key=2 is not there. */
 	       errno == ENOENT);
 
 	/* Insert key=2 element. */
 	assert(!(expected_key_mask & key));
-	assert(bpf_map_update_elem(fd, &key, value, BPF_NOEXIST) == 0);
+	assert(bpf_map_update_elem(fd, &key, values, BPF_NOEXIST) == 0);
 	expected_key_mask |= key;
 
 	/* key=1 and key=2 were inserted, check that key=0 cannot be
 	 * inserted due to max_entries limit.
 	 */
 	key = 0;
-	assert(bpf_map_update_elem(fd, &key, value, BPF_NOEXIST) == -1 &&
+	assert(bpf_map_update_elem(fd, &key, values, BPF_NOEXIST) == -1 &&
 	       errno == E2BIG);
 
 	/* Check that key = 0 doesn't exist. */
@@ -219,10 +225,10 @@ static void test_hashmap_percpu(unsigned int task, void *data)
 		assert((expected_key_mask & next_key) == next_key);
 		expected_key_mask &= ~next_key;
 
-		assert(bpf_map_lookup_elem(fd, &next_key, value) == 0);
+		assert(bpf_map_lookup_elem(fd, &next_key, values) == 0);
 
 		for (i = 0; i < nr_cpus; i++)
-			assert(bpf_percpu(value, i) == i + 100);
+			assert(values[i] == i + 100);
 
 		key = next_key;
 	}
@@ -230,7 +236,7 @@ static void test_hashmap_percpu(unsigned int task, void *data)
 
 	/* Update with BPF_EXIST. */
 	key = 1;
-	assert(bpf_map_update_elem(fd, &key, value, BPF_EXIST) == 0);
+	assert(bpf_map_update_elem(fd, &key, values, BPF_EXIST) == 0);
 
 	/* Delete both elements. */
 	key = 1;
@@ -399,37 +405,42 @@ static void test_arraymap(unsigned int task, void *data)
 
 static void test_arraymap_percpu(unsigned int task, void *data)
 {
-	unsigned int nr_cpus = bpf_num_possible_cpus();
-	BPF_DECLARE_PERCPU(long, values);
+	int nr_cpus = libbpf_num_possible_cpus();
+	__s64 *values;
 	int key, next_key, fd, i;
 
+	if (nr_cpus < 0) {
+		printf("Failed get possible cpus\n");
+		exit(1);
+	}
+
+	values = alloca(nr_cpus * sizeof(__s64));
+
 	fd = bpf_create_map(BPF_MAP_TYPE_PERCPU_ARRAY, sizeof(key),
-			    sizeof(bpf_percpu(values, 0)), 2, 0);
+			    sizeof(*values), 2, 0);
 	if (fd < 0) {
 		printf("Failed to create arraymap '%s'!\n", strerror(errno));
 		exit(1);
 	}
 
 	for (i = 0; i < nr_cpus; i++)
-		bpf_percpu(values, i) = i + 100;
+		values[i] = i + 100;
 
 	key = 1;
 	/* Insert key=1 element. */
 	assert(bpf_map_update_elem(fd, &key, values, BPF_ANY) == 0);
 
-	bpf_percpu(values, 0) = 0;
+	values[0] = 0;
 	assert(bpf_map_update_elem(fd, &key, values, BPF_NOEXIST) == -1 &&
 	       errno == EEXIST);
 
 	/* Check that key=1 can be found. */
-	assert(bpf_map_lookup_elem(fd, &key, values) == 0 &&
-	       bpf_percpu(values, 0) == 100);
+	assert(bpf_map_lookup_elem(fd, &key, values) == 0 && values[0] == 100);
 
 	key = 0;
 	/* Check that key=0 is also found and zero initialized. */
-	assert(bpf_map_lookup_elem(fd, &key, values) == 0 &&
-	       bpf_percpu(values, 0) == 0 &&
-	       bpf_percpu(values, nr_cpus - 1) == 0);
+	assert(bpf_map_lookup_elem(fd, &key, values) == 0 && values[0] == 0 &&
+	       values[nr_cpus - 1] == 0);
 
 	/* Check that key=2 cannot be inserted due to max_entries limit. */
 	key = 2;
@@ -458,16 +469,23 @@ static void test_arraymap_percpu(unsigned int task, void *data)
 
 static void test_arraymap_percpu_many_keys(void)
 {
-	unsigned int nr_cpus = bpf_num_possible_cpus();
-	BPF_DECLARE_PERCPU(long, values);
+	unsigned int nr_cpus = libbpf_num_possible_cpus();
+	__s64 *values;
 	/* nr_keys is not too large otherwise the test stresses percpu
 	 * allocator more than anything else
 	 */
 	unsigned int nr_keys = 2000;
 	int key, fd, i;
 
+	if (nr_cpus < 0) {
+		printf("Failed get possible cpus\n");
+		exit(1);
+	}
+
+	values = alloca(nr_cpus * sizeof(__s64));
+
 	fd = bpf_create_map(BPF_MAP_TYPE_PERCPU_ARRAY, sizeof(key),
-			    sizeof(bpf_percpu(values, 0)), nr_keys, 0);
+			    sizeof(*values), nr_keys, 0);
 	if (fd < 0) {
 		printf("Failed to create per-cpu arraymap '%s'!\n",
 		       strerror(errno));
@@ -475,19 +493,19 @@ static void test_arraymap_percpu_many_keys(void)
 	}
 
 	for (i = 0; i < nr_cpus; i++)
-		bpf_percpu(values, i) = i + 10;
+		values[i] = i + 10;
 
 	for (key = 0; key < nr_keys; key++)
 		assert(bpf_map_update_elem(fd, &key, values, BPF_ANY) == 0);
 
 	for (key = 0; key < nr_keys; key++) {
 		for (i = 0; i < nr_cpus; i++)
-			bpf_percpu(values, i) = 0;
+			values[i] = 0;
 
 		assert(bpf_map_lookup_elem(fd, &key, values) == 0);
 
 		for (i = 0; i < nr_cpus; i++)
-			assert(bpf_percpu(values, i) == i + 10);
+			assert(values[i] == i + 10);
 	}
 
 	close(fd);
-- 
2.25.1


* [PATCH bpf-next v4 3/3] bpf: selftests: update array map tests for per-cpu batched ops
  2021-04-15 17:46 [PATCH bpf-next v4 0/3] add batched ops for percpu array Pedro Tammela
  2021-04-15 17:46 ` [PATCH bpf-next v4 1/3] bpf: add batched ops support " Pedro Tammela
  2021-04-15 17:46 ` [PATCH bpf-next v4 2/3] bpf: selftests: remove percpu macros from bpf_util.h Pedro Tammela
@ 2021-04-15 17:46 ` Pedro Tammela
  2021-04-16 21:35 ` [PATCH bpf-next v4 0/3] add batched ops for percpu array Martin KaFai Lau
  3 siblings, 0 replies; 10+ messages in thread
From: Pedro Tammela @ 2021-04-15 17:46 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Shuah Khan, Pedro Tammela, David Verbeiren,
	Matthieu Baerts, netdev, bpf, linux-kernel, linux-kselftest

Follows the same logic as the hashtable batch ops tests.
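
For reference, a sketch of the buffer indexing the updated test relies
on (pcpu_slot() is a hypothetical helper, not part of the patch):

    /* For per-cpu maps, entry i's value for CPU j lives at
     * values[i * nr_cpus + j]; the whole buffer spans
     * max_entries * nr_cpus __s64 slots.
     */
    static __s64 *pcpu_slot(__s64 *values, int i, int j, int nr_cpus)
    {
        return &values[i * nr_cpus + j];
    }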

Signed-off-by: Pedro Tammela <pctammela@mojatatu.com>
---
 .../bpf/map_tests/array_map_batch_ops.c       | 104 +++++++++++++-----
 1 file changed, 75 insertions(+), 29 deletions(-)

diff --git a/tools/testing/selftests/bpf/map_tests/array_map_batch_ops.c b/tools/testing/selftests/bpf/map_tests/array_map_batch_ops.c
index e42ea1195d18..f4d870da7684 100644
--- a/tools/testing/selftests/bpf/map_tests/array_map_batch_ops.c
+++ b/tools/testing/selftests/bpf/map_tests/array_map_batch_ops.c
@@ -9,10 +9,13 @@
 
 #include <test_maps.h>
 
+static int nr_cpus;
+
 static void map_batch_update(int map_fd, __u32 max_entries, int *keys,
-			     int *values)
+			     __s64 *values, bool is_pcpu)
 {
-	int i, err;
+	int i, j, err;
+	int cpu_offset = 0;
 	DECLARE_LIBBPF_OPTS(bpf_map_batch_opts, opts,
 		.elem_flags = 0,
 		.flags = 0,
@@ -20,22 +23,41 @@ static void map_batch_update(int map_fd, __u32 max_entries, int *keys,
 
 	for (i = 0; i < max_entries; i++) {
 		keys[i] = i;
-		values[i] = i + 1;
+		if (is_pcpu) {
+			cpu_offset = i * nr_cpus;
+			for (j = 0; j < nr_cpus; j++)
+				(values + cpu_offset)[j] = i + 1 + j;
+		} else {
+			values[i] = i + 1;
+		}
 	}
 
 	err = bpf_map_update_batch(map_fd, keys, values, &max_entries, &opts);
 	CHECK(err, "bpf_map_update_batch()", "error:%s\n", strerror(errno));
 }
 
-static void map_batch_verify(int *visited, __u32 max_entries,
-			     int *keys, int *values)
+static void map_batch_verify(int *visited, __u32 max_entries, int *keys,
+			     __s64 *values, bool is_pcpu)
 {
-	int i;
+	int i, j;
+	int cpu_offset = 0;
 
 	memset(visited, 0, max_entries * sizeof(*visited));
 	for (i = 0; i < max_entries; i++) {
-		CHECK(keys[i] + 1 != values[i], "key/value checking",
-		      "error: i %d key %d value %d\n", i, keys[i], values[i]);
+		if (is_pcpu) {
+			cpu_offset = i * nr_cpus;
+			for (j = 0; j < nr_cpus; j++) {
+				__s64 value = (values + cpu_offset)[j];
+				CHECK(keys[i] + j + 1 != value,
+				      "key/value checking",
+				      "error: i %d j %d key %d value %lld\n", i,
+				      j, keys[i], value);
+			}
+		} else {
+			CHECK(keys[i] + 1 != values[i], "key/value checking",
+			      "error: i %d key %d value %lld\n", i, keys[i],
+			      values[i]);
+		}
 		visited[i] = 1;
 	}
 	for (i = 0; i < max_entries; i++) {
@@ -44,19 +66,21 @@ static void map_batch_verify(int *visited, __u32 max_entries,
 	}
 }
 
-void test_array_map_batch_ops(void)
+static void __test_map_lookup_and_update_batch(bool is_pcpu)
 {
 	struct bpf_create_map_attr xattr = {
 		.name = "array_map",
-		.map_type = BPF_MAP_TYPE_ARRAY,
+		.map_type = is_pcpu ? BPF_MAP_TYPE_PERCPU_ARRAY :
+				      BPF_MAP_TYPE_ARRAY,
 		.key_size = sizeof(int),
-		.value_size = sizeof(int),
+		.value_size = sizeof(__s64),
 	};
-	int map_fd, *keys, *values, *visited;
+	int map_fd, *keys, *visited;
 	__u32 count, total, total_success;
 	const __u32 max_entries = 10;
 	__u64 batch = 0;
-	int err, step;
+	int err, step, value_size;
+	void *values;
 	DECLARE_LIBBPF_OPTS(bpf_map_batch_opts, opts,
 		.elem_flags = 0,
 		.flags = 0,
@@ -67,22 +91,23 @@ void test_array_map_batch_ops(void)
 	CHECK(map_fd == -1,
 	      "bpf_create_map_xattr()", "error:%s\n", strerror(errno));
 
-	keys = malloc(max_entries * sizeof(int));
-	values = malloc(max_entries * sizeof(int));
-	visited = malloc(max_entries * sizeof(int));
+	value_size = sizeof(__s64);
+	if (is_pcpu)
+		value_size *= nr_cpus;
+
+	keys = calloc(max_entries, sizeof(*keys));
+	values = calloc(max_entries, value_size);
+	visited = calloc(max_entries, sizeof(*visited));
 	CHECK(!keys || !values || !visited, "malloc()", "error:%s\n",
 	      strerror(errno));
 
-	/* populate elements to the map */
-	map_batch_update(map_fd, max_entries, keys, values);
-
 	/* test 1: lookup in a loop with various steps. */
 	total_success = 0;
 	for (step = 1; step < max_entries; step++) {
-		map_batch_update(map_fd, max_entries, keys, values);
-		map_batch_verify(visited, max_entries, keys, values);
+		map_batch_update(map_fd, max_entries, keys, values, is_pcpu);
+		map_batch_verify(visited, max_entries, keys, values, is_pcpu);
 		memset(keys, 0, max_entries * sizeof(*keys));
-		memset(values, 0, max_entries * sizeof(*values));
+		memset(values, 0, max_entries * value_size);
 		batch = 0;
 		total = 0;
 		/* iteratively lookup/delete elements with 'step'
@@ -91,10 +116,10 @@ void test_array_map_batch_ops(void)
 		count = step;
 		while (true) {
 			err = bpf_map_lookup_batch(map_fd,
-						total ? &batch : NULL, &batch,
-						keys + total,
-						values + total,
-						&count, &opts);
+						   total ? &batch : NULL,
+						   &batch, keys + total,
+						   values + total * value_size,
+						   &count, &opts);
 
 			CHECK((err && errno != ENOENT), "lookup with steps",
 			      "error: %s\n", strerror(errno));
@@ -108,7 +133,7 @@ void test_array_map_batch_ops(void)
 		CHECK(total != max_entries, "lookup with steps",
 		      "total = %u, max_entries = %u\n", total, max_entries);
 
-		map_batch_verify(visited, max_entries, keys, values);
+		map_batch_verify(visited, max_entries, keys, values, is_pcpu);
 
 		total_success++;
 	}
@@ -116,9 +141,30 @@ void test_array_map_batch_ops(void)
 	CHECK(total_success == 0, "check total_success",
 	      "unexpected failure\n");
 
-	printf("%s:PASS\n", __func__);
-
 	free(keys);
 	free(values);
 	free(visited);
 }
+
+static void array_map_batch_ops(void)
+{
+	__test_map_lookup_and_update_batch(false);
+	printf("test_%s:PASS\n", __func__);
+}
+
+static void array_percpu_map_batch_ops(void)
+{
+	__test_map_lookup_and_update_batch(true);
+	printf("test_%s:PASS\n", __func__);
+}
+
+void test_array_map_batch_ops(void)
+{
+	nr_cpus = libbpf_num_possible_cpus();
+
+	CHECK(nr_cpus < 0, "nr_cpus checking",
+	      "error: get possible cpus failed");
+
+	array_map_batch_ops();
+	array_percpu_map_batch_ops();
+}
-- 
2.25.1


* Re: [PATCH bpf-next v4 0/3] add batched ops for percpu array
  2021-04-15 17:46 [PATCH bpf-next v4 0/3] add batched ops for percpu array Pedro Tammela
                   ` (2 preceding siblings ...)
  2021-04-15 17:46 ` [PATCH bpf-next v4 3/3] bpf: selftests: update array map tests for per-cpu batched ops Pedro Tammela
@ 2021-04-16 21:35 ` Martin KaFai Lau
  2021-04-19 17:25   ` Brian Vazquez
  3 siblings, 1 reply; 10+ messages in thread
From: Martin KaFai Lau @ 2021-04-16 21:35 UTC (permalink / raw)
  To: Pedro Tammela
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, Song Liu,
	Yonghong Song, John Fastabend, KP Singh, Shuah Khan,
	Pedro Tammela, David Verbeiren, Matthieu Baerts, netdev, bpf,
	linux-kernel, linux-kselftest

On Thu, Apr 15, 2021 at 02:46:16PM -0300, Pedro Tammela wrote:
> This patchset introduces batched operations for the per-cpu variant of
> the array map.
> 
> It also removes the percpu macros from 'bpf_util.h'. This change was
> suggested by Andrii in an earlier iteration of this patchset.
> 
> The tests were updated to reflect these changes.
Acked-by: Martin KaFai Lau <kafai@fb.com>

* Re: [PATCH bpf-next v4 0/3] add batched ops for percpu array
  2021-04-16 21:35 ` [PATCH bpf-next v4 0/3] add batched ops for percpu array Martin KaFai Lau
@ 2021-04-19 17:25   ` Brian Vazquez
  0 siblings, 0 replies; 10+ messages in thread
From: Brian Vazquez @ 2021-04-19 17:25 UTC (permalink / raw)
  To: Martin KaFai Lau
  Cc: Pedro Tammela, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Shuah Khan, Pedro Tammela, David Verbeiren,
	Matthieu Baerts, netdev, bpf, LKML, linux-kselftest,
	Brian Vazquez

Sorry I missed this.
I don't recall why I didn't add the array percpu variant, but LGTM.

Acked-by: Brian Vazquez <brianvv@google.com>

On Fri, Apr 16, 2021 at 3:09 PM Martin KaFai Lau <kafai@fb.com> wrote:
>
> On Thu, Apr 15, 2021 at 02:46:16PM -0300, Pedro Tammela wrote:
> > This patchset introduces batched operations for the per-cpu variant of
> > the array map.
> >
> > It also removes the percpu macros from 'bpf_util.h'. This change was
> > suggested by Andrii in an earlier iteration of this patchset.
> >
> > The tests were updated to reflect these changes.
> Acked-by: Martin KaFai Lau <kafai@fb.com>

* Re: [PATCH bpf-next v4 2/3] bpf: selftests: remove percpu macros from bpf_util.h
  2021-04-15 17:46 ` [PATCH bpf-next v4 2/3] bpf: selftests: remove percpu macros from bpf_util.h Pedro Tammela
@ 2021-04-20  1:17   ` Alexei Starovoitov
  2021-04-20 15:58     ` Daniel Borkmann
  0 siblings, 1 reply; 10+ messages in thread
From: Alexei Starovoitov @ 2021-04-20  1:17 UTC (permalink / raw)
  To: Pedro Tammela
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Shuah Khan, Pedro Tammela, David Verbeiren,
	Matthieu Baerts, Network Development, bpf

On Thu, Apr 15, 2021 at 10:47 AM Pedro Tammela <pctammela@gmail.com> wrote:
>
> Andrii suggested removing this abstraction layer and making the percpu
> handling more explicit[1].
>
> This patch also updates the tests that relied on the macros.
>
> [1] https://lore.kernel.org/bpf/CAEf4BzYmj_ZPDq8Zi4dbntboJKRPU2TVopysBNrdd9foHTfLZw@mail.gmail.com/
>
> Suggested-by: Andrii Nakryiko <andrii@kernel.org>
> Signed-off-by: Pedro Tammela <pctammela@mojatatu.com>
> ---
>  tools/testing/selftests/bpf/bpf_util.h        |  7 --
>  .../bpf/map_tests/htab_map_batch_ops.c        | 87 +++++++++----------
>  .../selftests/bpf/prog_tests/map_init.c       |  9 +-
>  tools/testing/selftests/bpf/test_maps.c       | 84 +++++++++++-------
>  4 files changed, 96 insertions(+), 91 deletions(-)
>
> diff --git a/tools/testing/selftests/bpf/bpf_util.h b/tools/testing/selftests/bpf/bpf_util.h
> index a3352a64c067..105db3120ab4 100644
> --- a/tools/testing/selftests/bpf/bpf_util.h
> +++ b/tools/testing/selftests/bpf/bpf_util.h
> @@ -20,13 +20,6 @@ static inline unsigned int bpf_num_possible_cpus(void)
>         return possible_cpus;
>  }
>
> -#define __bpf_percpu_val_align __attribute__((__aligned__(8)))
> -
> -#define BPF_DECLARE_PERCPU(type, name)                         \
> -       struct { type v; /* padding */ } __bpf_percpu_val_align \
> -               name[bpf_num_possible_cpus()]
> -#define bpf_percpu(name, cpu) name[(cpu)].v
> -

Hmm. I wonder what Daniel has to say about it, since he
introduced it in commit f3515b5d0b71 ("bpf: provide a generic macro
for percpu values for selftests")
to address a class of bugs.

* Re: [PATCH bpf-next v4 2/3] bpf: selftests: remove percpu macros from bpf_util.h
  2021-04-20  1:17   ` Alexei Starovoitov
@ 2021-04-20 15:58     ` Daniel Borkmann
  2021-04-20 16:42       ` Andrii Nakryiko
  0 siblings, 1 reply; 10+ messages in thread
From: Daniel Borkmann @ 2021-04-20 15:58 UTC (permalink / raw)
  To: Alexei Starovoitov, Pedro Tammela
  Cc: Alexei Starovoitov, Andrii Nakryiko, Martin KaFai Lau, Song Liu,
	Yonghong Song, John Fastabend, KP Singh, Shuah Khan,
	Pedro Tammela, David Verbeiren, Matthieu Baerts,
	Network Development, bpf

On 4/20/21 3:17 AM, Alexei Starovoitov wrote:
> On Thu, Apr 15, 2021 at 10:47 AM Pedro Tammela <pctammela@gmail.com> wrote:
>>
>> Andrii suggested removing this abstraction layer and making the percpu
>> handling more explicit[1].
>>
>> This patch also updates the tests that relied on the macros.
>>
>> [1] https://lore.kernel.org/bpf/CAEf4BzYmj_ZPDq8Zi4dbntboJKRPU2TVopysBNrdd9foHTfLZw@mail.gmail.com/
>>
>> Suggested-by: Andrii Nakryiko <andrii@kernel.org>
>> Signed-off-by: Pedro Tammela <pctammela@mojatatu.com>
>> ---
>>   tools/testing/selftests/bpf/bpf_util.h        |  7 --
>>   .../bpf/map_tests/htab_map_batch_ops.c        | 87 +++++++++----------
>>   .../selftests/bpf/prog_tests/map_init.c       |  9 +-
>>   tools/testing/selftests/bpf/test_maps.c       | 84 +++++++++++-------
>>   4 files changed, 96 insertions(+), 91 deletions(-)
>>
>> diff --git a/tools/testing/selftests/bpf/bpf_util.h b/tools/testing/selftests/bpf/bpf_util.h
>> index a3352a64c067..105db3120ab4 100644
>> --- a/tools/testing/selftests/bpf/bpf_util.h
>> +++ b/tools/testing/selftests/bpf/bpf_util.h
>> @@ -20,13 +20,6 @@ static inline unsigned int bpf_num_possible_cpus(void)
>>          return possible_cpus;
>>   }
>>
>> -#define __bpf_percpu_val_align __attribute__((__aligned__(8)))
>> -
>> -#define BPF_DECLARE_PERCPU(type, name)                         \
>> -       struct { type v; /* padding */ } __bpf_percpu_val_align \
>> -               name[bpf_num_possible_cpus()]
>> -#define bpf_percpu(name, cpu) name[(cpu)].v
>> -
> 
> Hmm. I wonder what Daniel has to say about it, since he
> introduced it in commit f3515b5d0b71 ("bpf: provide a generic macro
> for percpu values for selftests")
> to address a class of bugs.

I would probably even move those into libbpf instead. ;-) The problem is that this can
be missed easily, and innocent changes would lead to corruption of the application's
memory if there's a map lookup. Having this at least in selftest code, or even in libbpf,
would document code-wise that care needs to be taken with per-CPU maps. Even if we'd put
a note under Documentation/bpf/ or such, this might get missed easily, and finding such
bugs is like looking for a needle in a haystack... so I don't think this should be removed.
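
To make the hazard concrete, a sketch (assuming an int-valued per-CPU
map and an existing map_fd/key; the kernel copies round_up(value_size, 8)
bytes per CPU on lookup):

    int nr_cpus = libbpf_num_possible_cpus();

    /* Buggy: lookup writes nr_cpus * 8 bytes, but this buffer only
     * holds nr_cpus * 4, so adjacent memory gets corrupted.
     */
    int bad[nr_cpus];
    bpf_map_lookup_elem(map_fd, &key, bad);

    /* Safe: BPF_DECLARE_PERCPU() pads each slot to 8 bytes. */
    BPF_DECLARE_PERCPU(int, good);
    bpf_map_lookup_elem(map_fd, &key, good);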

Thanks,
Daniel

* Re: [PATCH bpf-next v4 2/3] bpf: selftests: remove percpu macros from bpf_util.h
  2021-04-20 15:58     ` Daniel Borkmann
@ 2021-04-20 16:42       ` Andrii Nakryiko
  2021-04-22 19:27         ` Pedro Tammela
  0 siblings, 1 reply; 10+ messages in thread
From: Andrii Nakryiko @ 2021-04-20 16:42 UTC (permalink / raw)
  To: Daniel Borkmann
  Cc: Alexei Starovoitov, Pedro Tammela, Alexei Starovoitov,
	Andrii Nakryiko, Martin KaFai Lau, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, Shuah Khan, Pedro Tammela,
	David Verbeiren, Matthieu Baerts, Network Development, bpf

On Tue, Apr 20, 2021 at 8:58 AM Daniel Borkmann <daniel@iogearbox.net> wrote:
>
> On 4/20/21 3:17 AM, Alexei Starovoitov wrote:
> > On Thu, Apr 15, 2021 at 10:47 AM Pedro Tammela <pctammela@gmail.com> wrote:
> >>
> >> Andrii suggested removing this abstraction layer and making the percpu
> >> handling more explicit[1].
> >>
> >> This patch also updates the tests that relied on the macros.
> >>
> >> [1] https://lore.kernel.org/bpf/CAEf4BzYmj_ZPDq8Zi4dbntboJKRPU2TVopysBNrdd9foHTfLZw@mail.gmail.com/
> >>
> >> Suggested-by: Andrii Nakryiko <andrii@kernel.org>
> >> Signed-off-by: Pedro Tammela <pctammela@mojatatu.com>
> >> ---
> >>   tools/testing/selftests/bpf/bpf_util.h        |  7 --
> >>   .../bpf/map_tests/htab_map_batch_ops.c        | 87 +++++++++----------
> >>   .../selftests/bpf/prog_tests/map_init.c       |  9 +-
> >>   tools/testing/selftests/bpf/test_maps.c       | 84 +++++++++++-------
> >>   4 files changed, 96 insertions(+), 91 deletions(-)
> >>
> >> diff --git a/tools/testing/selftests/bpf/bpf_util.h b/tools/testing/selftests/bpf/bpf_util.h
> >> index a3352a64c067..105db3120ab4 100644
> >> --- a/tools/testing/selftests/bpf/bpf_util.h
> >> +++ b/tools/testing/selftests/bpf/bpf_util.h
> >> @@ -20,13 +20,6 @@ static inline unsigned int bpf_num_possible_cpus(void)
> >>          return possible_cpus;
> >>   }
> >>
> >> -#define __bpf_percpu_val_align __attribute__((__aligned__(8)))
> >> -
> >> -#define BPF_DECLARE_PERCPU(type, name)                         \
> >> -       struct { type v; /* padding */ } __bpf_percpu_val_align \
> >> -               name[bpf_num_possible_cpus()]
> >> -#define bpf_percpu(name, cpu) name[(cpu)].v
> >> -
> >
> > Hmm. I wonder what Daniel has to say about it, since he
> > introduced it in commit f3515b5d0b71 ("bpf: provide a generic macro
> > for percpu values for selftests")
> > to address a class of bugs.
>
> I would probably even move those into libbpf instead. ;-) The problem is that this can
> be missed easily, and innocent changes would lead to corruption of the application's
> memory if there's a map lookup. Having this at least in selftest code, or even in libbpf,
> would document code-wise that care needs to be taken with per-CPU maps. Even if we'd put
> a note under Documentation/bpf/ or such, this might get missed easily, and finding such
> bugs is like looking for a needle in a haystack... so I don't think this should be removed.
>

See [0] for the previous discussion. I don't mind leaving bpf_percpu() in
selftests; I'm not sure I ever suggested removing it from selftests,
but I don't think it's a good idea to add it to libbpf. I think it's
better to have an extra paragraph on bpf_map_lookup_elem() in
uapi/linux/bpf.h mentioning how per-CPU values should be read/updated.
I think we should just recommend using u64 for primitive values (or
users can embed their int in a custom aligned(8) struct, if they
insist on <u64) and __attribute__((aligned(8))) for structs.

  [0] https://lore.kernel.org/bpf/CAEf4BzaLKm_fy4oO4Rdp76q2KoC6yC1WcJLuehoZUu9JobG-Cw@mail.gmail.com/
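
Something like this (sketch):

    /* Recommended: 64-bit values need no extra padding. */
    __u64 vals64[nr_cpus];

    /* If a smaller type is really needed, pad each per-CPU slot
     * to 8 bytes explicitly.
     */
    struct { int v; } __attribute__((aligned(8))) vals32[nr_cpus];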


> Thanks,
> Daniel

* Re: [PATCH bpf-next v4 2/3] bpf: selftests: remove percpu macros from bpf_util.h
  2021-04-20 16:42       ` Andrii Nakryiko
@ 2021-04-22 19:27         ` Pedro Tammela
  0 siblings, 0 replies; 10+ messages in thread
From: Pedro Tammela @ 2021-04-22 19:27 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: Daniel Borkmann, Alexei Starovoitov, Alexei Starovoitov,
	Andrii Nakryiko, Martin KaFai Lau, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, Shuah Khan, Pedro Tammela,
	David Verbeiren, Matthieu Baerts, Network Development, bpf

On Tue, Apr 20, 2021 at 1:42 PM Andrii Nakryiko
<andrii.nakryiko@gmail.com> wrote:
>
> On Tue, Apr 20, 2021 at 8:58 AM Daniel Borkmann <daniel@iogearbox.net> wrote:
> >
> > On 4/20/21 3:17 AM, Alexei Starovoitov wrote:
> > > On Thu, Apr 15, 2021 at 10:47 AM Pedro Tammela <pctammela@gmail.com> wrote:
> > >>
> > >> Andrii suggested removing this abstraction layer and making the percpu
> > >> handling more explicit[1].
> > >>
> > >> This patch also updates the tests that relied on the macros.
> > >>
> > >> [1] https://lore.kernel.org/bpf/CAEf4BzYmj_ZPDq8Zi4dbntboJKRPU2TVopysBNrdd9foHTfLZw@mail.gmail.com/
> > >>
> > >> Suggested-by: Andrii Nakryiko <andrii@kernel.org>
> > >> Signed-off-by: Pedro Tammela <pctammela@mojatatu.com>
> > >> ---
> > >>   tools/testing/selftests/bpf/bpf_util.h        |  7 --
> > >>   .../bpf/map_tests/htab_map_batch_ops.c        | 87 +++++++++----------
> > >>   .../selftests/bpf/prog_tests/map_init.c       |  9 +-
> > >>   tools/testing/selftests/bpf/test_maps.c       | 84 +++++++++++-------
> > >>   4 files changed, 96 insertions(+), 91 deletions(-)
> > >>
> > >> diff --git a/tools/testing/selftests/bpf/bpf_util.h b/tools/testing/selftests/bpf/bpf_util.h
> > >> index a3352a64c067..105db3120ab4 100644
> > >> --- a/tools/testing/selftests/bpf/bpf_util.h
> > >> +++ b/tools/testing/selftests/bpf/bpf_util.h
> > >> @@ -20,13 +20,6 @@ static inline unsigned int bpf_num_possible_cpus(void)
> > >>          return possible_cpus;
> > >>   }
> > >>
> > >> -#define __bpf_percpu_val_align __attribute__((__aligned__(8)))
> > >> -
> > >> -#define BPF_DECLARE_PERCPU(type, name)                         \
> > >> -       struct { type v; /* padding */ } __bpf_percpu_val_align \
> > >> -               name[bpf_num_possible_cpus()]
> > >> -#define bpf_percpu(name, cpu) name[(cpu)].v
> > >> -
> > >
> > > Hmm. I wonder what Daniel has to say about it, since he
> > > introduced it in commit f3515b5d0b71 ("bpf: provide a generic macro
> > > for percpu values for selftests")
> > > to address a class of bugs.
> >
> > I would probably even move those into libbpf instead. ;-) The problem is that this can
> > be missed easily, and innocent changes would lead to corruption of the application's
> > memory if there's a map lookup. Having this at least in selftest code, or even in libbpf,
> > would document code-wise that care needs to be taken with per-CPU maps. Even if we'd put
> > a note under Documentation/bpf/ or such, this might get missed easily, and finding such
> > bugs is like looking for a needle in a haystack... so I don't think this should be removed.
> >
>
> See [0] for the previous discussion. I don't mind leaving bpf_percpu() in
> selftests; I'm not sure I ever suggested removing it from selftests,
> but I don't think it's a good idea to add it to libbpf. I think it's
> better to have an extra paragraph on bpf_map_lookup_elem() in
> uapi/linux/bpf.h mentioning how per-CPU values should be read/updated.
> I think we should just recommend using u64 for primitive values (or
> users can embed their int in a custom aligned(8) struct, if they
> insist on <u64) and __attribute__((aligned(8))) for structs.
>
>   [0] https://lore.kernel.org/bpf/CAEf4BzaLKm_fy4oO4Rdp76q2KoC6yC1WcJLuehoZUu9JobG-Cw@mail.gmail.com/
>
>
> > Thanks,
> > Daniel

OK, since this is not the main topic of this series, I will revert this
patch in v5.
