* [PATCH bpf-next v3 0/3] add batched ops for percpu array
@ 2021-04-12 19:39 Pedro Tammela
  2021-04-12 19:39 ` [PATCH bpf-next v3 1/3] bpf: add batched ops support " Pedro Tammela
                   ` (2 more replies)
From: Pedro Tammela @ 2021-04-12 19:39 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Shuah Khan, Pedro Tammela, Matthieu Baerts,
	David Verbeiren, open list:BPF (Safe dynamic programs and tools),
	open list:BPF (Safe dynamic programs and tools),
	open list, open list:KERNEL SELFTEST FRAMEWORK

This patchset introduces batched operations for the per-cpu variant of
the array map.

It also removes the percpu macros from 'bpf_util.h', a change suggested
by Andrii in an earlier iteration of this patchset.

The tests were updated to reflect these changes.
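
For reference, a minimal sketch of the resulting userspace usage on a
percpu array (assumptions: __s64 values, an already created map_fd,
error handling elided; names follow the selftests):

	int err, keys[10], nr_cpus = libbpf_num_possible_cpus();
	const __u32 max_entries = 10;
	__s64 *values = calloc(max_entries, nr_cpus * sizeof(__s64));
	__u32 count = max_entries;
	__u64 batch = 0;
	DECLARE_LIBBPF_OPTS(bpf_map_batch_opts, opts);

	/* fill keys[0..9]; key i's slot for cpu j is values[i * nr_cpus + j] */
	err = bpf_map_update_batch(map_fd, keys, values, &count, &opts);

	/* read every entry back in (at most) one syscall */
	count = max_entries;
	err = bpf_map_lookup_batch(map_fd, NULL, &batch, keys, values,
				   &count, &opts);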

v2 -> v3:
- Remove percpu macros as suggested by Andrii
- Update tests that used the percpu macros

v1 -> v2:
- Amended the commit message to be more descriptive

Pedro Tammela (3):
  bpf: add batched ops support for percpu array
  bpf: selftests: remove percpu macros from bpf_util.h
  bpf: selftests: update array map tests for per-cpu batched ops

 kernel/bpf/arraymap.c                         |   2 +
 tools/testing/selftests/bpf/bpf_util.h        |   7 --
 .../bpf/map_tests/array_map_batch_ops.c       | 110 +++++++++++++-----
 .../bpf/map_tests/htab_map_batch_ops.c        |  71 ++++++-----
 .../selftests/bpf/prog_tests/map_init.c       |   9 +-
 tools/testing/selftests/bpf/test_maps.c       |  84 +++++++------
 6 files changed, 171 insertions(+), 112 deletions(-)

-- 
2.25.1



* [PATCH bpf-next v3 1/3] bpf: add batched ops support for percpu array
  2021-04-12 19:39 [PATCH bpf-next v3 0/3] add batched ops for percpu array Pedro Tammela
@ 2021-04-12 19:39 ` Pedro Tammela
  2021-04-12 19:40 ` [PATCH bpf-next v3 2/3] bpf: selftests: remove percpu macros from bpf_util.h Pedro Tammela
  2021-04-12 19:40 ` [PATCH bpf-next v3 3/3] bpf: selftests: update array map tests for per-cpu batched ops Pedro Tammela
From: Pedro Tammela @ 2021-04-12 19:39 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Shuah Khan, Pedro Tammela, Matthieu Baerts,
	David Verbeiren, open list:BPF (Safe dynamic programs and tools),
	open list:BPF (Safe dynamic programs and tools),
	open list, open list:KERNEL SELFTEST FRAMEWORK
  Cc: Jamal Hadi Salim

Use the existing infrastructure provided by the 'generic_map_*_batch'
functions.

No tweaks were needed: they transparently handle the percpu variant.

As arrays don't have delete operations, batched deletes return an error
to user space (default behaviour).
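
For context, the generic batch helpers expect the user space value
buffer for a percpu map to hold one slot per possible CPU for each key
(8 bytes per slot with the selftests' __s64 values). A sketch of the
allocation the updated selftests perform:

	int nr_cpus = libbpf_num_possible_cpus(); /* possible, not online */
	size_t stride = nr_cpus * sizeof(__s64);  /* per-key value region */
	void *values = calloc(max_entries, stride);
	/* key i's value for cpu j: ((__s64 *)values)[i * nr_cpus + j] */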

Suggested-by: Jamal Hadi Salim <jhs@mojatatu.com>
Signed-off-by: Pedro Tammela <pctammela@mojatatu.com>
---
 kernel/bpf/arraymap.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/kernel/bpf/arraymap.c b/kernel/bpf/arraymap.c
index 463d25e1e67e..3c4105603f9d 100644
--- a/kernel/bpf/arraymap.c
+++ b/kernel/bpf/arraymap.c
@@ -698,6 +698,8 @@ const struct bpf_map_ops percpu_array_map_ops = {
 	.map_delete_elem = array_map_delete_elem,
 	.map_seq_show_elem = percpu_array_map_seq_show_elem,
 	.map_check_btf = array_map_check_btf,
+	.map_lookup_batch = generic_map_lookup_batch,
+	.map_update_batch = generic_map_update_batch,
 	.map_set_for_each_callback_args = map_set_for_each_callback_args,
 	.map_for_each_callback = bpf_for_each_array_elem,
 	.map_btf_name = "bpf_array",
-- 
2.25.1



* [PATCH bpf-next v3 2/3] bpf: selftests: remove percpu macros from bpf_util.h
  2021-04-12 19:39 [PATCH bpf-next v3 0/3] add batched ops for percpu array Pedro Tammela
  2021-04-12 19:39 ` [PATCH bpf-next v3 1/3] bpf: add batched ops support " Pedro Tammela
@ 2021-04-12 19:40 ` Pedro Tammela
  2021-04-12 19:40 ` [PATCH bpf-next v3 3/3] bpf: selftests: update array map tests for per-cpu batched ops Pedro Tammela
From: Pedro Tammela @ 2021-04-12 19:40 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Shuah Khan, Pedro Tammela, Matthieu Baerts,
	David Verbeiren, open list:BPF (Safe dynamic programs and tools),
	open list:BPF (Safe dynamic programs and tools),
	open list, open list:KERNEL SELFTEST FRAMEWORK

Andrii suggested removing this abstraction layer and making the percpu
handling more explicit [1].

This patch also updates the tests that relied on the macros.

[1] https://lore.kernel.org/bpf/CAEf4BzYmj_ZPDq8Zi4dbntboJKRPU2TVopysBNrdd9foHTfLZw@mail.gmail.com/
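
Roughly, declarations that used to read:

	BPF_DECLARE_PERCPU(long, values);
	bpf_percpu(values, cpu) = ...;

now become explicit per-cpu arrays:

	__s64 *values = alloca(nr_cpus * sizeof(__s64));
	values[cpu] = ...;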

Suggested-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Pedro Tammela <pctammela@mojatatu.com>
---
 tools/testing/selftests/bpf/bpf_util.h        |  7 --
 .../bpf/map_tests/htab_map_batch_ops.c        | 71 ++++++++--------
 .../selftests/bpf/prog_tests/map_init.c       |  9 +-
 tools/testing/selftests/bpf/test_maps.c       | 84 +++++++++++--------
 4 files changed, 89 insertions(+), 82 deletions(-)

diff --git a/tools/testing/selftests/bpf/bpf_util.h b/tools/testing/selftests/bpf/bpf_util.h
index a3352a64c067..105db3120ab4 100644
--- a/tools/testing/selftests/bpf/bpf_util.h
+++ b/tools/testing/selftests/bpf/bpf_util.h
@@ -20,13 +20,6 @@ static inline unsigned int bpf_num_possible_cpus(void)
 	return possible_cpus;
 }
 
-#define __bpf_percpu_val_align	__attribute__((__aligned__(8)))
-
-#define BPF_DECLARE_PERCPU(type, name)				\
-	struct { type v; /* padding */ } __bpf_percpu_val_align	\
-		name[bpf_num_possible_cpus()]
-#define bpf_percpu(name, cpu) name[(cpu)].v
-
 #ifndef ARRAY_SIZE
 # define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))
 #endif
diff --git a/tools/testing/selftests/bpf/map_tests/htab_map_batch_ops.c b/tools/testing/selftests/bpf/map_tests/htab_map_batch_ops.c
index 976bf415fbdd..16e64bd89b1a 100644
--- a/tools/testing/selftests/bpf/map_tests/htab_map_batch_ops.c
+++ b/tools/testing/selftests/bpf/map_tests/htab_map_batch_ops.c
@@ -7,65 +7,63 @@
 #include <bpf/bpf.h>
 #include <bpf/libbpf.h>
 
-#include <bpf_util.h>
 #include <test_maps.h>
 
 static void map_batch_update(int map_fd, __u32 max_entries, int *keys,
-			     void *values, bool is_pcpu)
+			     __s64 *values, bool is_pcpu)
 {
-	typedef BPF_DECLARE_PERCPU(int, value);
-	value *v = NULL;
+	int nr_cpus = libbpf_num_possible_cpus();
 	int i, j, err;
+	int offset = 0;
 	DECLARE_LIBBPF_OPTS(bpf_map_batch_opts, opts,
 		.elem_flags = 0,
 		.flags = 0,
 	);
 
-	if (is_pcpu)
-		v = (value *)values;
+	CHECK(nr_cpus < 0, "nr_cpus checking",
+	      "error: get possible cpus failed");
 
 	for (i = 0; i < max_entries; i++) {
 		keys[i] = i + 1;
 		if (is_pcpu)
-			for (j = 0; j < bpf_num_possible_cpus(); j++)
-				bpf_percpu(v[i], j) = i + 2 + j;
+			for (j = 0; j < nr_cpus; j++)
+				(values + offset)[j] = i + 2 + j;
 		else
-			((int *)values)[i] = i + 2;
+			values[i] = i + 2;
+		offset += nr_cpus;
 	}
 
 	err = bpf_map_update_batch(map_fd, keys, values, &max_entries, &opts);
 	CHECK(err, "bpf_map_update_batch()", "error:%s\n", strerror(errno));
 }
 
-static void map_batch_verify(int *visited, __u32 max_entries,
-			     int *keys, void *values, bool is_pcpu)
+static void map_batch_verify(int *visited, __u32 max_entries, int *keys,
+			     __s64 *values, bool is_pcpu)
 {
-	typedef BPF_DECLARE_PERCPU(int, value);
-	value *v = NULL;
+	int nr_cpus = libbpf_num_possible_cpus();
 	int i, j;
+	int offset = 0;
 
-	if (is_pcpu)
-		v = (value *)values;
+	CHECK(nr_cpus < 0, "nr_cpus checking",
+	      "error: get possible cpus failed");
 
 	memset(visited, 0, max_entries * sizeof(*visited));
 	for (i = 0; i < max_entries; i++) {
-
 		if (is_pcpu) {
-			for (j = 0; j < bpf_num_possible_cpus(); j++) {
-				CHECK(keys[i] + 1 + j != bpf_percpu(v[i], j),
+			for (j = 0; j < nr_cpus; j++) {
+				__s64 value = (values + offset)[j];
+				CHECK(keys[i] + 1 + j != value,
 				      "key/value checking",
-				      "error: i %d j %d key %d value %d\n",
-				      i, j, keys[i], bpf_percpu(v[i],  j));
+				      "error: i %d j %d key %d value %d\n", i,
+				      j, keys[i], value);
 			}
 		} else {
-			CHECK(keys[i] + 1 != ((int *)values)[i],
-			      "key/value checking",
+			CHECK(keys[i] + 1 != values[i], "key/value checking",
 			      "error: i %d key %d value %d\n", i, keys[i],
-			      ((int *)values)[i]);
+			      values[i]);
 		}
-
+		offset += nr_cpus;
 		visited[i] = 1;
-
 	}
 	for (i = 0; i < max_entries; i++) {
 		CHECK(visited[i] != 1, "visited checking",
@@ -75,11 +73,10 @@ static void map_batch_verify(int *visited, __u32 max_entries,
 
 void __test_map_lookup_and_delete_batch(bool is_pcpu)
 {
+	int nr_cpus = libbpf_num_possible_cpus();
 	__u32 batch, count, total, total_success;
-	typedef BPF_DECLARE_PERCPU(int, value);
 	int map_fd, *keys, *visited, key;
 	const __u32 max_entries = 10;
-	value pcpu_values[max_entries];
 	int err, step, value_size;
 	bool nospace_err;
 	void *values;
@@ -88,24 +85,27 @@ void __test_map_lookup_and_delete_batch(bool is_pcpu)
 		.map_type = is_pcpu ? BPF_MAP_TYPE_PERCPU_HASH :
 			    BPF_MAP_TYPE_HASH,
 		.key_size = sizeof(int),
-		.value_size = sizeof(int),
+		.value_size = sizeof(__s64),
 	};
 	DECLARE_LIBBPF_OPTS(bpf_map_batch_opts, opts,
 		.elem_flags = 0,
 		.flags = 0,
 	);
 
+	CHECK(nr_cpus < 0, "nr_cpus checking",
+	      "error: get possible cpus failed");
+
 	xattr.max_entries = max_entries;
 	map_fd = bpf_create_map_xattr(&xattr);
 	CHECK(map_fd == -1,
 	      "bpf_create_map_xattr()", "error:%s\n", strerror(errno));
 
-	value_size = is_pcpu ? sizeof(value) : sizeof(int);
-	keys = malloc(max_entries * sizeof(int));
+	value_size = sizeof(__s64);
 	if (is_pcpu)
-		values = pcpu_values;
-	else
-		values = malloc(max_entries * sizeof(int));
+		value_size *= nr_cpus;
+
+	keys = malloc(max_entries * sizeof(int));
+	values = calloc(max_entries, value_size);
 	visited = malloc(max_entries * sizeof(int));
 	CHECK(!keys || !values || !visited, "malloc()",
 	      "error:%s\n", strerror(errno));
@@ -203,7 +203,7 @@ void __test_map_lookup_and_delete_batch(bool is_pcpu)
 		CHECK(total != max_entries, "delete with steps",
 		      "total = %u, max_entries = %u\n", total, max_entries);
 
-		/* check map is empty, errono == ENOENT */
+		/* check map is empty, errno == ENOENT */
 		err = bpf_map_get_next_key(map_fd, NULL, &key);
 		CHECK(!err || errno != ENOENT, "bpf_map_get_next_key()",
 		      "error: %s\n", strerror(errno));
@@ -260,8 +260,7 @@ void __test_map_lookup_and_delete_batch(bool is_pcpu)
 	      "unexpected failure\n");
 	free(keys);
 	free(visited);
-	if (!is_pcpu)
-		free(values);
+	free(values);
 }
 
 void htab_map_batch_ops(void)
diff --git a/tools/testing/selftests/bpf/prog_tests/map_init.c b/tools/testing/selftests/bpf/prog_tests/map_init.c
index 14a31109dd0e..49386d0aa684 100644
--- a/tools/testing/selftests/bpf/prog_tests/map_init.c
+++ b/tools/testing/selftests/bpf/prog_tests/map_init.c
@@ -12,10 +12,7 @@ static int duration;
 
 typedef unsigned long long map_key_t;
 typedef unsigned long long map_value_t;
-typedef struct {
-	map_value_t v; /* padding */
-} __bpf_percpu_val_align pcpu_map_value_t;
-
+typedef __s64 pcpu_map_value_t;
 
 static int map_populate(int map_fd, int num)
 {
@@ -24,7 +21,7 @@ static int map_populate(int map_fd, int num)
 	map_key_t key;
 
 	for (i = 0; i < nr_cpus; i++)
-		bpf_percpu(value, i) = FILL_VALUE;
+		value[i] = FILL_VALUE;
 
 	for (key = 1; key <= num; key++) {
 		err = bpf_map_update_elem(map_fd, &key, value, BPF_NOEXIST);
@@ -103,7 +100,7 @@ static int check_values_one_cpu(pcpu_map_value_t *value, map_value_t expected)
 	map_value_t val;
 
 	for (i = 0; i < nr_cpus; i++) {
-		val = bpf_percpu(value, i);
+		val = value[i];
 		if (val) {
 			if (CHECK(val != expected, "map value",
 				  "unexpected for cpu %d: 0x%llx\n", i, val))
diff --git a/tools/testing/selftests/bpf/test_maps.c b/tools/testing/selftests/bpf/test_maps.c
index 51adc42b2b40..b8ce837a7ada 100644
--- a/tools/testing/selftests/bpf/test_maps.c
+++ b/tools/testing/selftests/bpf/test_maps.c
@@ -146,63 +146,69 @@ static void test_hashmap_sizes(unsigned int task, void *data)
 
 static void test_hashmap_percpu(unsigned int task, void *data)
 {
-	unsigned int nr_cpus = bpf_num_possible_cpus();
-	BPF_DECLARE_PERCPU(long, value);
+	int nr_cpus = libbpf_num_possible_cpus();
+	__s64 *values;
 	long long key, next_key, first_key;
 	int expected_key_mask = 0;
 	int fd, i;
 
+	if (nr_cpus < 0) {
+		printf("Failed get possible cpus\n");
+		exit(1);
+	}
+
+	values = alloca(nr_cpus * sizeof(__s64));
+
 	fd = bpf_create_map(BPF_MAP_TYPE_PERCPU_HASH, sizeof(key),
-			    sizeof(bpf_percpu(value, 0)), 2, map_flags);
+			    sizeof(*values), 2, map_flags);
 	if (fd < 0) {
 		printf("Failed to create hashmap '%s'!\n", strerror(errno));
 		exit(1);
 	}
 
 	for (i = 0; i < nr_cpus; i++)
-		bpf_percpu(value, i) = i + 100;
+		values[i] = i + 100;
 
 	key = 1;
 	/* Insert key=1 element. */
 	assert(!(expected_key_mask & key));
-	assert(bpf_map_update_elem(fd, &key, value, BPF_ANY) == 0);
+	assert(bpf_map_update_elem(fd, &key, values, BPF_ANY) == 0);
 	expected_key_mask |= key;
 
 	/* BPF_NOEXIST means add new element if it doesn't exist. */
-	assert(bpf_map_update_elem(fd, &key, value, BPF_NOEXIST) == -1 &&
+	assert(bpf_map_update_elem(fd, &key, values, BPF_NOEXIST) == -1 &&
 	       /* key=1 already exists. */
 	       errno == EEXIST);
 
 	/* -1 is an invalid flag. */
-	assert(bpf_map_update_elem(fd, &key, value, -1) == -1 &&
+	assert(bpf_map_update_elem(fd, &key, values, -1) == -1 &&
 	       errno == EINVAL);
 
 	/* Check that key=1 can be found. Value could be 0 if the lookup
 	 * was run from a different CPU.
 	 */
-	bpf_percpu(value, 0) = 1;
-	assert(bpf_map_lookup_elem(fd, &key, value) == 0 &&
-	       bpf_percpu(value, 0) == 100);
+	values[0] = 1;
+	assert(bpf_map_lookup_elem(fd, &key, values) == 0 && values[0] == 100);
 
 	key = 2;
 	/* Check that key=2 is not found. */
-	assert(bpf_map_lookup_elem(fd, &key, value) == -1 && errno == ENOENT);
+	assert(bpf_map_lookup_elem(fd, &key, values) == -1 && errno == ENOENT);
 
 	/* BPF_EXIST means update existing element. */
-	assert(bpf_map_update_elem(fd, &key, value, BPF_EXIST) == -1 &&
+	assert(bpf_map_update_elem(fd, &key, values, BPF_EXIST) == -1 &&
 	       /* key=2 is not there. */
 	       errno == ENOENT);
 
 	/* Insert key=2 element. */
 	assert(!(expected_key_mask & key));
-	assert(bpf_map_update_elem(fd, &key, value, BPF_NOEXIST) == 0);
+	assert(bpf_map_update_elem(fd, &key, values, BPF_NOEXIST) == 0);
 	expected_key_mask |= key;
 
 	/* key=1 and key=2 were inserted, check that key=0 cannot be
 	 * inserted due to max_entries limit.
 	 */
 	key = 0;
-	assert(bpf_map_update_elem(fd, &key, value, BPF_NOEXIST) == -1 &&
+	assert(bpf_map_update_elem(fd, &key, values, BPF_NOEXIST) == -1 &&
 	       errno == E2BIG);
 
 	/* Check that key = 0 doesn't exist. */
@@ -219,10 +225,10 @@ static void test_hashmap_percpu(unsigned int task, void *data)
 		assert((expected_key_mask & next_key) == next_key);
 		expected_key_mask &= ~next_key;
 
-		assert(bpf_map_lookup_elem(fd, &next_key, value) == 0);
+		assert(bpf_map_lookup_elem(fd, &next_key, values) == 0);
 
 		for (i = 0; i < nr_cpus; i++)
-			assert(bpf_percpu(value, i) == i + 100);
+			assert(values[i] == i + 100);
 
 		key = next_key;
 	}
@@ -230,7 +236,7 @@ static void test_hashmap_percpu(unsigned int task, void *data)
 
 	/* Update with BPF_EXIST. */
 	key = 1;
-	assert(bpf_map_update_elem(fd, &key, value, BPF_EXIST) == 0);
+	assert(bpf_map_update_elem(fd, &key, values, BPF_EXIST) == 0);
 
 	/* Delete both elements. */
 	key = 1;
@@ -399,37 +405,42 @@ static void test_arraymap(unsigned int task, void *data)
 
 static void test_arraymap_percpu(unsigned int task, void *data)
 {
-	unsigned int nr_cpus = bpf_num_possible_cpus();
-	BPF_DECLARE_PERCPU(long, values);
+	int nr_cpus = libbpf_num_possible_cpus();
+	__s64 *values;
 	int key, next_key, fd, i;
 
+	if (nr_cpus < 0) {
+		printf("Failed get possible cpus\n");
+		exit(1);
+	}
+
+	values = alloca(nr_cpus * sizeof(__s64));
+
 	fd = bpf_create_map(BPF_MAP_TYPE_PERCPU_ARRAY, sizeof(key),
-			    sizeof(bpf_percpu(values, 0)), 2, 0);
+			    sizeof(*values), 2, 0);
 	if (fd < 0) {
 		printf("Failed to create arraymap '%s'!\n", strerror(errno));
 		exit(1);
 	}
 
 	for (i = 0; i < nr_cpus; i++)
-		bpf_percpu(values, i) = i + 100;
+		values[i] = i + 100;
 
 	key = 1;
 	/* Insert key=1 element. */
 	assert(bpf_map_update_elem(fd, &key, values, BPF_ANY) == 0);
 
-	bpf_percpu(values, 0) = 0;
+	values[0] = 0;
 	assert(bpf_map_update_elem(fd, &key, values, BPF_NOEXIST) == -1 &&
 	       errno == EEXIST);
 
 	/* Check that key=1 can be found. */
-	assert(bpf_map_lookup_elem(fd, &key, values) == 0 &&
-	       bpf_percpu(values, 0) == 100);
+	assert(bpf_map_lookup_elem(fd, &key, values) == 0 && values[0] == 100);
 
 	key = 0;
 	/* Check that key=0 is also found and zero initialized. */
-	assert(bpf_map_lookup_elem(fd, &key, values) == 0 &&
-	       bpf_percpu(values, 0) == 0 &&
-	       bpf_percpu(values, nr_cpus - 1) == 0);
+	assert(bpf_map_lookup_elem(fd, &key, values) == 0 && values[0] == 0 &&
+	       values[nr_cpus - 1] == 0);
 
 	/* Check that key=2 cannot be inserted due to max_entries limit. */
 	key = 2;
@@ -458,16 +469,23 @@ static void test_arraymap_percpu(unsigned int task, void *data)
 
 static void test_arraymap_percpu_many_keys(void)
 {
-	unsigned int nr_cpus = bpf_num_possible_cpus();
-	BPF_DECLARE_PERCPU(long, values);
+	unsigned int nr_cpus = libbpf_num_possible_cpus();
+	__s64 *values;
 	/* nr_keys is not too large otherwise the test stresses percpu
 	 * allocator more than anything else
 	 */
 	unsigned int nr_keys = 2000;
 	int key, fd, i;
 
+	if (nr_cpus < 0) {
+		printf("Failed get possible cpus\n");
+		exit(1);
+	}
+
+	values = alloca(nr_cpus * sizeof(__s64));
+
 	fd = bpf_create_map(BPF_MAP_TYPE_PERCPU_ARRAY, sizeof(key),
-			    sizeof(bpf_percpu(values, 0)), nr_keys, 0);
+			    sizeof(*values), nr_keys, 0);
 	if (fd < 0) {
 		printf("Failed to create per-cpu arraymap '%s'!\n",
 		       strerror(errno));
@@ -475,19 +493,19 @@ static void test_arraymap_percpu_many_keys(void)
 	}
 
 	for (i = 0; i < nr_cpus; i++)
-		bpf_percpu(values, i) = i + 10;
+		values[i] = i + 10;
 
 	for (key = 0; key < nr_keys; key++)
 		assert(bpf_map_update_elem(fd, &key, values, BPF_ANY) == 0);
 
 	for (key = 0; key < nr_keys; key++) {
 		for (i = 0; i < nr_cpus; i++)
-			bpf_percpu(values, i) = 0;
+			values[i] = 0;
 
 		assert(bpf_map_lookup_elem(fd, &key, values) == 0);
 
 		for (i = 0; i < nr_cpus; i++)
-			assert(bpf_percpu(values, i) == i + 10);
+			assert(values[i] == i + 10);
 	}
 
 	close(fd);
-- 
2.25.1



* [PATCH bpf-next v3 3/3] bpf: selftests: update array map tests for per-cpu batched ops
  2021-04-12 19:39 [PATCH bpf-next v3 0/3] add batched ops for percpu array Pedro Tammela
  2021-04-12 19:39 ` [PATCH bpf-next v3 1/3] bpf: add batched ops support " Pedro Tammela
  2021-04-12 19:40 ` [PATCH bpf-next v3 2/3] bpf: selftests: remove percpu macros from bpf_util.h Pedro Tammela
@ 2021-04-12 19:40 ` Pedro Tammela
  2021-04-12 21:49   ` Martin KaFai Lau
From: Pedro Tammela @ 2021-04-12 19:40 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Shuah Khan, Pedro Tammela, Matthieu Baerts,
	David Verbeiren, open list:BPF (Safe dynamic programs and tools),
	open list:BPF (Safe dynamic programs and tools),
	open list, open list:KERNEL SELFTEST FRAMEWORK

Follows the same logic as the hashtable tests.

Signed-off-by: Pedro Tammela <pctammela@mojatatu.com>
---
 .../bpf/map_tests/array_map_batch_ops.c       | 110 +++++++++++++-----
 1 file changed, 80 insertions(+), 30 deletions(-)

diff --git a/tools/testing/selftests/bpf/map_tests/array_map_batch_ops.c b/tools/testing/selftests/bpf/map_tests/array_map_batch_ops.c
index e42ea1195d18..707d17414dee 100644
--- a/tools/testing/selftests/bpf/map_tests/array_map_batch_ops.c
+++ b/tools/testing/selftests/bpf/map_tests/array_map_batch_ops.c
@@ -10,32 +10,59 @@
 #include <test_maps.h>
 
 static void map_batch_update(int map_fd, __u32 max_entries, int *keys,
-			     int *values)
+			     __s64 *values, bool is_pcpu)
 {
-	int i, err;
+	int nr_cpus = libbpf_num_possible_cpus();
+	int i, j, err;
+	int offset = 0;
 	DECLARE_LIBBPF_OPTS(bpf_map_batch_opts, opts,
 		.elem_flags = 0,
 		.flags = 0,
 	);
 
+	CHECK(nr_cpus < 0, "nr_cpus checking",
+	      "error: get possible cpus failed");
+
 	for (i = 0; i < max_entries; i++) {
 		keys[i] = i;
-		values[i] = i + 1;
+		if (is_pcpu)
+			for (j = 0; j < nr_cpus; j++)
+				(values + offset)[j] = i + 1 + j;
+		else
+			values[i] = i + 1;
+		offset += nr_cpus;
 	}
 
 	err = bpf_map_update_batch(map_fd, keys, values, &max_entries, &opts);
 	CHECK(err, "bpf_map_update_batch()", "error:%s\n", strerror(errno));
 }
 
-static void map_batch_verify(int *visited, __u32 max_entries,
-			     int *keys, int *values)
+static void map_batch_verify(int *visited, __u32 max_entries, int *keys,
+			     __s64 *values, bool is_pcpu)
 {
-	int i;
+	int nr_cpus = libbpf_num_possible_cpus();
+	int i, j;
+	int offset = 0;
+
+	CHECK(nr_cpus < 0, "nr_cpus checking",
+	      "error: get possible cpus failed");
 
 	memset(visited, 0, max_entries * sizeof(*visited));
 	for (i = 0; i < max_entries; i++) {
-		CHECK(keys[i] + 1 != values[i], "key/value checking",
-		      "error: i %d key %d value %d\n", i, keys[i], values[i]);
+		if (is_pcpu) {
+			for (j = 0; j < nr_cpus; j++) {
+				__s64 value = (values + offset)[j];
+				CHECK(keys[i] + j + 1 != value,
+				      "key/value checking",
+				      "error: i %d j %d key %d value %d\n", i,
+				      j, keys[i], value);
+			}
+		} else {
+			CHECK(keys[i] + 1 != values[i], "key/value checking",
+			      "error: i %d key %d value %d\n", i, keys[i],
+			      values[i]);
+		}
+		offset += nr_cpus;
 		visited[i] = 1;
 	}
 	for (i = 0; i < max_entries; i++) {
@@ -44,45 +71,52 @@ static void map_batch_verify(int *visited, __u32 max_entries,
 	}
 }
 
-void test_array_map_batch_ops(void)
+void __test_map_lookup_and_update_batch(bool is_pcpu)
 {
+	int nr_cpus = libbpf_num_possible_cpus();
 	struct bpf_create_map_attr xattr = {
 		.name = "array_map",
-		.map_type = BPF_MAP_TYPE_ARRAY,
+		.map_type = is_pcpu ? BPF_MAP_TYPE_PERCPU_ARRAY :
+				      BPF_MAP_TYPE_ARRAY,
 		.key_size = sizeof(int),
-		.value_size = sizeof(int),
+		.value_size = sizeof(__s64),
 	};
-	int map_fd, *keys, *values, *visited;
+	int map_fd, *keys, *visited;
 	__u32 count, total, total_success;
 	const __u32 max_entries = 10;
 	__u64 batch = 0;
-	int err, step;
+	int err, step, value_size;
+	void *values;
 	DECLARE_LIBBPF_OPTS(bpf_map_batch_opts, opts,
 		.elem_flags = 0,
 		.flags = 0,
 	);
 
+	CHECK(nr_cpus < 0, "nr_cpus checking",
+	      "error: get possible cpus failed");
+
 	xattr.max_entries = max_entries;
 	map_fd = bpf_create_map_xattr(&xattr);
 	CHECK(map_fd == -1,
 	      "bpf_create_map_xattr()", "error:%s\n", strerror(errno));
 
-	keys = malloc(max_entries * sizeof(int));
-	values = malloc(max_entries * sizeof(int));
-	visited = malloc(max_entries * sizeof(int));
+	value_size = sizeof(__s64);
+	if (is_pcpu)
+		value_size *= nr_cpus;
+
+	keys = malloc(max_entries * sizeof(*keys));
+	values = calloc(max_entries, value_size);
+	visited = malloc(max_entries * sizeof(*visited));
 	CHECK(!keys || !values || !visited, "malloc()", "error:%s\n",
 	      strerror(errno));
 
-	/* populate elements to the map */
-	map_batch_update(map_fd, max_entries, keys, values);
-
 	/* test 1: lookup in a loop with various steps. */
 	total_success = 0;
 	for (step = 1; step < max_entries; step++) {
-		map_batch_update(map_fd, max_entries, keys, values);
-		map_batch_verify(visited, max_entries, keys, values);
+		map_batch_update(map_fd, max_entries, keys, values, is_pcpu);
+		map_batch_verify(visited, max_entries, keys, values, is_pcpu);
 		memset(keys, 0, max_entries * sizeof(*keys));
-		memset(values, 0, max_entries * sizeof(*values));
+		memset(values, 0, max_entries * value_size);
 		batch = 0;
 		total = 0;
 		/* iteratively lookup/delete elements with 'step'
@@ -91,10 +125,10 @@ void test_array_map_batch_ops(void)
 		count = step;
 		while (true) {
 			err = bpf_map_lookup_batch(map_fd,
-						total ? &batch : NULL, &batch,
-						keys + total,
-						values + total,
-						&count, &opts);
+						   total ? &batch : NULL,
+						   &batch, keys + total,
+						   values + total * value_size,
+						   &count, &opts);
 
 			CHECK((err && errno != ENOENT), "lookup with steps",
 			      "error: %s\n", strerror(errno));
@@ -108,7 +142,7 @@ void test_array_map_batch_ops(void)
 		CHECK(total != max_entries, "lookup with steps",
 		      "total = %u, max_entries = %u\n", total, max_entries);
 
-		map_batch_verify(visited, max_entries, keys, values);
+		map_batch_verify(visited, max_entries, keys, values, is_pcpu);
 
 		total_success++;
 	}
@@ -116,9 +150,25 @@ void test_array_map_batch_ops(void)
 	CHECK(total_success == 0, "check total_success",
 	      "unexpected failure\n");
 
-	printf("%s:PASS\n", __func__);
-
 	free(keys);
-	free(values);
 	free(visited);
+	free(values);
+}
+
+void array_map_batch_ops(void)
+{
+	__test_map_lookup_and_update_batch(false);
+	printf("test_%s:PASS\n", __func__);
+}
+
+void array_percpu_map_batch_ops(void)
+{
+	__test_map_lookup_and_update_batch(true);
+	printf("test_%s:PASS\n", __func__);
+}
+
+void test_array_map_batch_ops(void)
+{
+	array_map_batch_ops();
+	array_percpu_map_batch_ops();
 }
-- 
2.25.1



* Re: [PATCH bpf-next v3 3/3] bpf: selftests: update array map tests for per-cpu batched ops
  2021-04-12 19:40 ` [PATCH bpf-next v3 3/3] bpf: selftests: update array map tests for per-cpu batched ops Pedro Tammela
@ 2021-04-12 21:49   ` Martin KaFai Lau
From: Martin KaFai Lau @ 2021-04-12 21:49 UTC (permalink / raw)
  To: Pedro Tammela
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, Song Liu,
	Yonghong Song, John Fastabend, KP Singh, Shuah Khan,
	Pedro Tammela, Matthieu Baerts, David Verbeiren,
	open list:BPF (Safe dynamic programs and tools),
	open list:BPF (Safe dynamic programs and tools),
	open list, open list:KERNEL SELFTEST FRAMEWORK

On Mon, Apr 12, 2021 at 04:40:01PM -0300, Pedro Tammela wrote:
> Follows the same logic as the hashtable tests.
> 
> Signed-off-by: Pedro Tammela <pctammela@mojatatu.com>
> ---
>  .../bpf/map_tests/array_map_batch_ops.c       | 110 +++++++++++++-----
>  1 file changed, 80 insertions(+), 30 deletions(-)
> 
> diff --git a/tools/testing/selftests/bpf/map_tests/array_map_batch_ops.c b/tools/testing/selftests/bpf/map_tests/array_map_batch_ops.c
> index e42ea1195d18..707d17414dee 100644
> --- a/tools/testing/selftests/bpf/map_tests/array_map_batch_ops.c
> +++ b/tools/testing/selftests/bpf/map_tests/array_map_batch_ops.c
> @@ -10,32 +10,59 @@
>  #include <test_maps.h>
>  
>  static void map_batch_update(int map_fd, __u32 max_entries, int *keys,
> -			     int *values)
> +			     __s64 *values, bool is_pcpu)
>  {
> -	int i, err;
> +	int nr_cpus = libbpf_num_possible_cpus();
Instead of getting it multiple times, how about moving it out to a
static global and initializing it in test_array_map_batch_ops()?
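
i.e. something along these lines (untested sketch):

	static int nr_cpus;

	void test_array_map_batch_ops(void)
	{
		nr_cpus = libbpf_num_possible_cpus();
		CHECK(nr_cpus < 0, "nr_cpus checking",
		      "error: get possible cpus failed");
		array_map_batch_ops();
		array_percpu_map_batch_ops();
	}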


> +	int i, j, err;
> +	int offset = 0;
>  	DECLARE_LIBBPF_OPTS(bpf_map_batch_opts, opts,
>  		.elem_flags = 0,
>  		.flags = 0,
>  	);
>  
> +	CHECK(nr_cpus < 0, "nr_cpus checking",
> +	      "error: get possible cpus failed");
> +
>  	for (i = 0; i < max_entries; i++) {
>  		keys[i] = i;
> -		values[i] = i + 1;
> +		if (is_pcpu)
> +			for (j = 0; j < nr_cpus; j++)
> +				(values + offset)[j] = i + 1 + j;
> +		else
> +			values[i] = i + 1;
> +		offset += nr_cpus;
This "offset" update here is confusing to read because it is only
used in the is_pcpu case but it always gets updated regardless.
How about only defines and uses offset in the "if (is_pcpu)" case and
rename it to "cpu_offset": cpu_offset = i * nr_cpus.

The same goes for other occasions.
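
i.e. (untested sketch):

	for (i = 0; i < max_entries; i++) {
		keys[i] = i;
		if (is_pcpu) {
			int cpu_offset = i * nr_cpus;

			for (j = 0; j < nr_cpus; j++)
				values[cpu_offset + j] = i + 1 + j;
		} else {
			values[i] = i + 1;
		}
	}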

>  	}
>  
>  	err = bpf_map_update_batch(map_fd, keys, values, &max_entries, &opts);
>  	CHECK(err, "bpf_map_update_batch()", "error:%s\n", strerror(errno));
>  }
>  
> -static void map_batch_verify(int *visited, __u32 max_entries,
> -			     int *keys, int *values)
> +static void map_batch_verify(int *visited, __u32 max_entries, int *keys,
> +			     __s64 *values, bool is_pcpu)
>  {
> -	int i;
> +	int nr_cpus = libbpf_num_possible_cpus();
> +	int i, j;
> +	int offset = 0;
> +
> +	CHECK(nr_cpus < 0, "nr_cpus checking",
> +	      "error: get possible cpus failed");
>  
>  	memset(visited, 0, max_entries * sizeof(*visited));
>  	for (i = 0; i < max_entries; i++) {
> -		CHECK(keys[i] + 1 != values[i], "key/value checking",
> -		      "error: i %d key %d value %d\n", i, keys[i], values[i]);
> +		if (is_pcpu) {
> +			for (j = 0; j < nr_cpus; j++) {
> +				__s64 value = (values + offset)[j];
> +				CHECK(keys[i] + j + 1 != value,
> +				      "key/value checking",
> +				      "error: i %d j %d key %d value %d\n", i,
> +				      j, keys[i], value);
> +			}
> +		} else {
> +			CHECK(keys[i] + 1 != values[i], "key/value checking",
> +			      "error: i %d key %d value %d\n", i, keys[i],
> +			      values[i]);
> +		}
> +		offset += nr_cpus;
>  		visited[i] = 1;
>  	}
>  	for (i = 0; i < max_entries; i++) {
> @@ -44,45 +71,52 @@ static void map_batch_verify(int *visited, __u32 max_entries,
>  	}
>  }
>  
> -void test_array_map_batch_ops(void)
> +void __test_map_lookup_and_update_batch(bool is_pcpu)
static

>  {
> +	int nr_cpus = libbpf_num_possible_cpus();
>  	struct bpf_create_map_attr xattr = {
>  		.name = "array_map",
> -		.map_type = BPF_MAP_TYPE_ARRAY,
> +		.map_type = is_pcpu ? BPF_MAP_TYPE_PERCPU_ARRAY :
> +				      BPF_MAP_TYPE_ARRAY,
>  		.key_size = sizeof(int),
> -		.value_size = sizeof(int),
> +		.value_size = sizeof(__s64),
>  	};
> -	int map_fd, *keys, *values, *visited;
> +	int map_fd, *keys, *visited;
>  	__u32 count, total, total_success;
>  	const __u32 max_entries = 10;
>  	__u64 batch = 0;
> -	int err, step;
> +	int err, step, value_size;
> +	void *values;
>  	DECLARE_LIBBPF_OPTS(bpf_map_batch_opts, opts,
>  		.elem_flags = 0,
>  		.flags = 0,
>  	);
>  
> +	CHECK(nr_cpus < 0, "nr_cpus checking",
> +	      "error: get possible cpus failed");
> +
>  	xattr.max_entries = max_entries;
>  	map_fd = bpf_create_map_xattr(&xattr);
>  	CHECK(map_fd == -1,
>  	      "bpf_create_map_xattr()", "error:%s\n", strerror(errno));
>  
> -	keys = malloc(max_entries * sizeof(int));
> -	values = malloc(max_entries * sizeof(int));
> -	visited = malloc(max_entries * sizeof(int));
> +	value_size = sizeof(__s64);
> +	if (is_pcpu)
> +		value_size *= nr_cpus;
> +
> +	keys = malloc(max_entries * sizeof(*keys));
> +	values = calloc(max_entries, value_size);
Why does only this one use calloc?

> +	visited = malloc(max_entries * sizeof(*visited));
>  	CHECK(!keys || !values || !visited, "malloc()", "error:%s\n",
>  	      strerror(errno));
>  
> -	/* populate elements to the map */
> -	map_batch_update(map_fd, max_entries, keys, values);
> -
>  	/* test 1: lookup in a loop with various steps. */
>  	total_success = 0;
>  	for (step = 1; step < max_entries; step++) {
> -		map_batch_update(map_fd, max_entries, keys, values);
> -		map_batch_verify(visited, max_entries, keys, values);
> +		map_batch_update(map_fd, max_entries, keys, values, is_pcpu);
> +		map_batch_verify(visited, max_entries, keys, values, is_pcpu);
>  		memset(keys, 0, max_entries * sizeof(*keys));
> -		memset(values, 0, max_entries * sizeof(*values));
> +		memset(values, 0, max_entries * value_size);
>  		batch = 0;
>  		total = 0;
>  		/* iteratively lookup/delete elements with 'step'
> @@ -91,10 +125,10 @@ void test_array_map_batch_ops(void)
>  		count = step;
>  		while (true) {
>  			err = bpf_map_lookup_batch(map_fd,
> -						total ? &batch : NULL, &batch,
> -						keys + total,
> -						values + total,
> -						&count, &opts);
> +						   total ? &batch : NULL,
> +						   &batch, keys + total,
> +						   values + total * value_size,
> +						   &count, &opts);
>  
>  			CHECK((err && errno != ENOENT), "lookup with steps",
>  			      "error: %s\n", strerror(errno));
> @@ -108,7 +142,7 @@ void test_array_map_batch_ops(void)
>  		CHECK(total != max_entries, "lookup with steps",
>  		      "total = %u, max_entries = %u\n", total, max_entries);
>  
> -		map_batch_verify(visited, max_entries, keys, values);
> +		map_batch_verify(visited, max_entries, keys, values, is_pcpu);
>  
>  		total_success++;
>  	}
> @@ -116,9 +150,25 @@ void test_array_map_batch_ops(void)
>  	CHECK(total_success == 0, "check total_success",
>  	      "unexpected failure\n");
>  
> -	printf("%s:PASS\n", __func__);
> -
>  	free(keys);
> -	free(values);
>  	free(visited);
> +	free(values);
This re-ordering is unnecessary.

> +}
> +
> +void array_map_batch_ops(void)
static

> +{
> +	__test_map_lookup_and_update_batch(false);
> +	printf("test_%s:PASS\n", __func__);
> +}
> +
> +void array_percpu_map_batch_ops(void)
static

> +{
> +	__test_map_lookup_and_update_batch(true);
> +	printf("test_%s:PASS\n", __func__);
> +}
> +
> +void test_array_map_batch_ops(void)
> +{
> +	array_map_batch_ops();
> +	array_percpu_map_batch_ops();
>  }
> -- 
> 2.25.1
> 

