bpf.vger.kernel.org archive mirror
* [PATCH bpf-next v2 0/4] samples: bpf: refactor BPF map test with libbpf
@ 2020-07-07 18:48 Daniel T. Lee
  2020-07-07 18:48 ` [PATCH bpf-next v2 1/4] samples: bpf: fix bpf programs with kprobe/sys_connect event Daniel T. Lee
                   ` (4 more replies)
  0 siblings, 5 replies; 9+ messages in thread
From: Daniel T. Lee @ 2020-07-07 18:48 UTC (permalink / raw)
  To: Daniel Borkmann, Alexei Starovoitov, Yonghong Song,
	Martin KaFai Lau, Andrii Nakryiko, Andrii Nakryiko
  Cc: netdev, bpf

There have been many changes in how BPF programs define maps. The
development of libbpf has led to BTF-defined maps, a new way of
defining BPF maps that differs considerably from the existing map
definition method.
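
For reference, the same hash map in the legacy bpf_load style and in
BTF-defined form looks roughly as follows (simplified from the diffs
later in this series):

    /* legacy definition, consumed by bpf_load */
    struct bpf_map_def_legacy SEC("maps") hash_map = {
            .type = BPF_MAP_TYPE_HASH,
            .key_size = sizeof(u32),
            .value_size = sizeof(long),
            .max_entries = MAX_ENTRIES,
    };

    /* equivalent BTF-defined map, consumed by libbpf */
    struct {
            __uint(type, BPF_MAP_TYPE_HASH);
            __type(key, u32);
            __type(value, long);
            __uint(max_entries, MAX_ENTRIES);
    } hash_map SEC(".maps");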

Although bpf_load also uses libbpf internally, its implementation has
begun to diverge, for example by using its own structure,
bpf_load_map_def, to define maps.

Therefore, this patch set refactors the map test programs, which are
closely tied to how BPF maps are defined, to use libbpf directly.

---
Changes in V2:
 - instead of changing event from __x64_sys_connect to __sys_connect,
 fetch and set register values directly
 - fix wrong error check logic with bpf_program
 - set numa_node 0 declaratively at map definition instead of setting it
 from user-space
 - static initialization of ARRAY_OF_MAPS element with '.values'

Daniel T. Lee (4):
  samples: bpf: fix bpf programs with kprobe/sys_connect event
  samples: bpf: refactor BPF map in map test with libbpf
  samples: bpf: refactor BPF map performance test with libbpf
  selftests: bpf: remove unused bpf_map_def_legacy struct

 samples/bpf/Makefile                     |   2 +-
 samples/bpf/map_perf_test_kern.c         | 188 ++++++++++++-----------
 samples/bpf/map_perf_test_user.c         | 164 +++++++++++++-------
 samples/bpf/test_map_in_map_kern.c       |  94 ++++++------
 samples/bpf/test_map_in_map_user.c       |  53 ++++++-
 samples/bpf/test_probe_write_user_kern.c |   9 +-
 tools/testing/selftests/bpf/bpf_legacy.h |  14 --
 7 files changed, 305 insertions(+), 219 deletions(-)

-- 
2.25.1


^ permalink raw reply	[flat|nested] 9+ messages in thread

* [PATCH bpf-next v2 1/4] samples: bpf: fix bpf programs with kprobe/sys_connect event
  2020-07-07 18:48 [PATCH bpf-next v2 0/4] samples: bpf: refactor BPF map test with libbpf Daniel T. Lee
@ 2020-07-07 18:48 ` Daniel T. Lee
  2020-07-07 18:48 ` [PATCH bpf-next v2 2/4] samples: bpf: refactor BPF map in map test with libbpf Daniel T. Lee
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 9+ messages in thread
From: Daniel T. Lee @ 2020-07-07 18:48 UTC (permalink / raw)
  To: Daniel Borkmann, Alexei Starovoitov, Yonghong Song,
	Martin KaFai Lau, Andrii Nakryiko, Andrii Nakryiko
  Cc: netdev, bpf

Currently, BPF programs with a kprobe/sys_connect event do not work
properly.

Commit 34745aed515c ("samples/bpf: fix kprobe attachment issue on x64")
changed the bpf_load handling of kprobe events on the x64 architecture:
if the kprobe event target starts with "sys_*", the "__x64_" prefix is
prepended to the event name.

Prepending the "__x64_" prefix to kprobe/sys_* events was an appropriate
solution to most of the problems caused by the commit below.

    commit d5a00528b58c ("syscalls/core, syscalls/x86: Rename struct
    pt_regs-based sys_*() to __x64_sys_*()")

However, the sys_connect kprobe event still does not work properly.
With the __sys_connect event, parameters can be fetched normally, but
with __x64_sys_connect they cannot.

    ffffffff818d3520 <__x64_sys_connect>:
    ffffffff818d3520: e8 fb df 32 00        callq   0xffffffff81c01520
    <__fentry__>
    ffffffff818d3525: 48 8b 57 60           movq    96(%rdi), %rdx
    ffffffff818d3529: 48 8b 77 68           movq    104(%rdi), %rsi
    ffffffff818d352d: 48 8b 7f 70           movq    112(%rdi), %rdi
    ffffffff818d3531: e8 1a ff ff ff        callq   0xffffffff818d3450
    <__sys_connect>
    ffffffff818d3536: 48 98                 cltq
    ffffffff818d3538: c3                    retq
    ffffffff818d3539: 0f 1f 80 00 00 00 00  nopl    (%rax)

As the assembly code of __x64_sys_connect shows, the syscall parameters
are fetched from the struct pt_regs passed in rdi and set into the rdi,
rsi, and rdx registers before __sys_connect is called.

Because of this, this commit fixes the sys_connect event handling by
first fetching the value of the rdi register (the pointer to the real
struct pt_regs) and then reading the rdi, rsi, and rdx values from that
structure at the corresponding offsets.
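
In BPF program terms, the resulting pattern (mirroring the diffs below,
with bpf_tracing.h, bpf_core_read.h and trace_common.h included) is
roughly:

    SEC("kprobe/" SYSCALL(sys_connect))
    int trace_sys_connect(struct pt_regs *ctx)
    {
            /* PARM1 of the __x64_sys_connect wrapper is the real pt_regs */
            struct pt_regs *real_regs = (struct pt_regs *)PT_REGS_PARM1_CORE(ctx);
            struct sockaddr_in6 *in6;
            int addrlen;

            /* the actual syscall arguments live in the real pt_regs */
            in6 = (struct sockaddr_in6 *)PT_REGS_PARM2_CORE(real_regs);
            addrlen = (int)PT_REGS_PARM3_CORE(real_regs);

            if (addrlen != sizeof(*in6))
                    return 0;

            /* ... rest of the probe logic ... */
            return 0;
    }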

Fixes: 34745aed515c ("samples/bpf: fix kprobe attachment issue on x64")
Signed-off-by: Daniel T. Lee <danieltimlee@gmail.com>

---
Changes in V2:
 - instead of changing event from __x64_sys_connect to __sys_connect,
 fetch and set register values directly

 samples/bpf/map_perf_test_kern.c         | 9 ++++++---
 samples/bpf/test_map_in_map_kern.c       | 9 ++++++---
 samples/bpf/test_probe_write_user_kern.c | 9 ++++++---
 3 files changed, 18 insertions(+), 9 deletions(-)

diff --git a/samples/bpf/map_perf_test_kern.c b/samples/bpf/map_perf_test_kern.c
index 12e91ae64d4d..c9b31193ca12 100644
--- a/samples/bpf/map_perf_test_kern.c
+++ b/samples/bpf/map_perf_test_kern.c
@@ -11,6 +11,8 @@
 #include <bpf/bpf_helpers.h>
 #include "bpf_legacy.h"
 #include <bpf/bpf_tracing.h>
+#include <bpf/bpf_core_read.h>
+#include "trace_common.h"
 
 #define MAX_ENTRIES 1000
 #define MAX_NR_CPUS 1024
@@ -154,9 +156,10 @@ int stress_percpu_hmap_alloc(struct pt_regs *ctx)
 	return 0;
 }
 
-SEC("kprobe/sys_connect")
+SEC("kprobe/" SYSCALL(sys_connect))
 int stress_lru_hmap_alloc(struct pt_regs *ctx)
 {
+	struct pt_regs *real_regs = (struct pt_regs *)PT_REGS_PARM1_CORE(ctx);
 	char fmt[] = "Failed at stress_lru_hmap_alloc. ret:%dn";
 	union {
 		u16 dst6[8];
@@ -175,8 +178,8 @@ int stress_lru_hmap_alloc(struct pt_regs *ctx)
 	long val = 1;
 	u32 key = 0;
 
-	in6 = (struct sockaddr_in6 *)PT_REGS_PARM2(ctx);
-	addrlen = (int)PT_REGS_PARM3(ctx);
+	in6 = (struct sockaddr_in6 *)PT_REGS_PARM2_CORE(real_regs);
+	addrlen = (int)PT_REGS_PARM3_CORE(real_regs);
 
 	if (addrlen != sizeof(*in6))
 		return 0;
diff --git a/samples/bpf/test_map_in_map_kern.c b/samples/bpf/test_map_in_map_kern.c
index 6cee61e8ce9b..36a203e69064 100644
--- a/samples/bpf/test_map_in_map_kern.c
+++ b/samples/bpf/test_map_in_map_kern.c
@@ -13,6 +13,8 @@
 #include <bpf/bpf_helpers.h>
 #include "bpf_legacy.h"
 #include <bpf/bpf_tracing.h>
+#include <bpf/bpf_core_read.h>
+#include "trace_common.h"
 
 #define MAX_NR_PORTS 65536
 
@@ -102,9 +104,10 @@ static __always_inline int do_inline_hash_lookup(void *inner_map, u32 port)
 	return result ? *result : -ENOENT;
 }
 
-SEC("kprobe/sys_connect")
+SEC("kprobe/" SYSCALL(sys_connect))
 int trace_sys_connect(struct pt_regs *ctx)
 {
+	struct pt_regs *real_regs = (struct pt_regs *)PT_REGS_PARM1_CORE(ctx);
 	struct sockaddr_in6 *in6;
 	u16 test_case, port, dst6[8];
 	int addrlen, ret, inline_ret, ret_key = 0;
@@ -112,8 +115,8 @@ int trace_sys_connect(struct pt_regs *ctx)
 	void *outer_map, *inner_map;
 	bool inline_hash = false;
 
-	in6 = (struct sockaddr_in6 *)PT_REGS_PARM2(ctx);
-	addrlen = (int)PT_REGS_PARM3(ctx);
+	in6 = (struct sockaddr_in6 *)PT_REGS_PARM2_CORE(real_regs);
+	addrlen = (int)PT_REGS_PARM3_CORE(real_regs);
 
 	if (addrlen != sizeof(*in6))
 		return 0;
diff --git a/samples/bpf/test_probe_write_user_kern.c b/samples/bpf/test_probe_write_user_kern.c
index f033f36a13a3..fd651a65281e 100644
--- a/samples/bpf/test_probe_write_user_kern.c
+++ b/samples/bpf/test_probe_write_user_kern.c
@@ -10,6 +10,8 @@
 #include <linux/version.h>
 #include <bpf/bpf_helpers.h>
 #include <bpf/bpf_tracing.h>
+#include <bpf/bpf_core_read.h>
+#include "trace_common.h"
 
 struct bpf_map_def SEC("maps") dnat_map = {
 	.type = BPF_MAP_TYPE_HASH,
@@ -26,13 +28,14 @@ struct bpf_map_def SEC("maps") dnat_map = {
  * This example sits on a syscall, and the syscall ABI is relatively stable
  * of course, across platforms, and over time, the ABI may change.
  */
-SEC("kprobe/sys_connect")
+SEC("kprobe/" SYSCALL(sys_connect))
 int bpf_prog1(struct pt_regs *ctx)
 {
+	struct pt_regs *real_regs = (struct pt_regs *)PT_REGS_PARM1_CORE(ctx);
+	void *sockaddr_arg = (void *)PT_REGS_PARM2_CORE(real_regs);
+	int sockaddr_len = (int)PT_REGS_PARM3_CORE(real_regs);
 	struct sockaddr_in new_addr, orig_addr = {};
 	struct sockaddr_in *mapped_addr;
-	void *sockaddr_arg = (void *)PT_REGS_PARM2(ctx);
-	int sockaddr_len = (int)PT_REGS_PARM3(ctx);
 
 	if (sockaddr_len > sizeof(orig_addr))
 		return 0;
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 9+ messages in thread

* [PATCH bpf-next v2 2/4] samples: bpf: refactor BPF map in map test with libbpf
  2020-07-07 18:48 [PATCH bpf-next v2 0/4] samples: bpf: refactor BPF map test with libbpf Daniel T. Lee
  2020-07-07 18:48 ` [PATCH bpf-next v2 1/4] samples: bpf: fix bpf programs with kprobe/sys_connect event Daniel T. Lee
@ 2020-07-07 18:48 ` Daniel T. Lee
  2020-07-07 18:48 ` [PATCH bpf-next v2 3/4] samples: bpf: refactor BPF map performance " Daniel T. Lee
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 9+ messages in thread
From: Daniel T. Lee @ 2020-07-07 18:48 UTC (permalink / raw)
  To: Daniel Borkmann, Alexei Starovoitov, Yonghong Song,
	Martin KaFai Lau, Andrii Nakryiko, Andrii Nakryiko
  Cc: netdev, bpf

Commit 646f02ffdd49 ("libbpf: Add BTF-defined map-in-map support")
added a way to define an inner map inside a BTF-defined map.

Instead of the previous 'inner_map_idx' field, the structure to be used
for the inner map can now be referenced directly with the __array
directive:

    __array(values, struct inner_map)

This commit refactors the map-in-map test program with libbpf by
explicitly defining the inner maps in BTF-defined format.
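
Concretely, as in the diff below, the inner map and the outer
ARRAY_OF_MAPS that references it now look like this:

    struct inner_a {
            __uint(type, BPF_MAP_TYPE_ARRAY);
            __type(key, u32);
            __type(value, int);
            __uint(max_entries, MAX_NR_PORTS);
    } port_a SEC(".maps");

    struct {
            __uint(type, BPF_MAP_TYPE_ARRAY_OF_MAPS);
            __uint(max_entries, MAX_NR_PORTS);
            __uint(key_size, sizeof(u32));
            __array(values, struct inner_a); /* use inner_a as inner map */
    } a_of_port_a SEC(".maps");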

Signed-off-by: Daniel T. Lee <danieltimlee@gmail.com>

---
Changes in V2:
 - fix wrong error check logic with bpf_program

 samples/bpf/Makefile               |  2 +-
 samples/bpf/test_map_in_map_kern.c | 85 +++++++++++++++---------------
 samples/bpf/test_map_in_map_user.c | 53 +++++++++++++++++--
 3 files changed, 91 insertions(+), 49 deletions(-)

diff --git a/samples/bpf/Makefile b/samples/bpf/Makefile
index 8403e4762306..f87ee02073ba 100644
--- a/samples/bpf/Makefile
+++ b/samples/bpf/Makefile
@@ -93,7 +93,7 @@ sampleip-objs := sampleip_user.o $(TRACE_HELPERS)
 tc_l2_redirect-objs := bpf_load.o tc_l2_redirect_user.o
 lwt_len_hist-objs := bpf_load.o lwt_len_hist_user.o
 xdp_tx_iptunnel-objs := xdp_tx_iptunnel_user.o
-test_map_in_map-objs := bpf_load.o test_map_in_map_user.o
+test_map_in_map-objs := test_map_in_map_user.o
 per_socket_stats_example-objs := cookie_uid_helper_example.o
 xdp_redirect-objs := xdp_redirect_user.o
 xdp_redirect_map-objs := xdp_redirect_map_user.o
diff --git a/samples/bpf/test_map_in_map_kern.c b/samples/bpf/test_map_in_map_kern.c
index 36a203e69064..8def45c5b697 100644
--- a/samples/bpf/test_map_in_map_kern.c
+++ b/samples/bpf/test_map_in_map_kern.c
@@ -11,7 +11,6 @@
 #include <uapi/linux/bpf.h>
 #include <uapi/linux/in6.h>
 #include <bpf/bpf_helpers.h>
-#include "bpf_legacy.h"
 #include <bpf/bpf_tracing.h>
 #include <bpf/bpf_core_read.h>
 #include "trace_common.h"
@@ -19,60 +18,60 @@
 #define MAX_NR_PORTS 65536
 
 /* map #0 */
-struct bpf_map_def_legacy SEC("maps") port_a = {
-	.type = BPF_MAP_TYPE_ARRAY,
-	.key_size = sizeof(u32),
-	.value_size = sizeof(int),
-	.max_entries = MAX_NR_PORTS,
-};
+struct inner_a {
+	__uint(type, BPF_MAP_TYPE_ARRAY);
+	__type(key, u32);
+	__type(value, int);
+	__uint(max_entries, MAX_NR_PORTS);
+} port_a SEC(".maps");
 
 /* map #1 */
-struct bpf_map_def_legacy SEC("maps") port_h = {
-	.type = BPF_MAP_TYPE_HASH,
-	.key_size = sizeof(u32),
-	.value_size = sizeof(int),
-	.max_entries = 1,
-};
+struct inner_h {
+	__uint(type, BPF_MAP_TYPE_HASH);
+	__type(key, u32);
+	__type(value, int);
+	__uint(max_entries, 1);
+} port_h SEC(".maps");
 
 /* map #2 */
-struct bpf_map_def_legacy SEC("maps") reg_result_h = {
-	.type = BPF_MAP_TYPE_HASH,
-	.key_size = sizeof(u32),
-	.value_size = sizeof(int),
-	.max_entries = 1,
-};
+struct {
+	__uint(type, BPF_MAP_TYPE_HASH);
+	__type(key, u32);
+	__type(value, int);
+	__uint(max_entries, 1);
+} reg_result_h SEC(".maps");
 
 /* map #3 */
-struct bpf_map_def_legacy SEC("maps") inline_result_h = {
-	.type = BPF_MAP_TYPE_HASH,
-	.key_size = sizeof(u32),
-	.value_size = sizeof(int),
-	.max_entries = 1,
-};
+struct {
+	__uint(type, BPF_MAP_TYPE_HASH);
+	__type(key, u32);
+	__type(value, int);
+	__uint(max_entries, 1);
+} inline_result_h SEC(".maps");
 
 /* map #4 */ /* Test case #0 */
-struct bpf_map_def_legacy SEC("maps") a_of_port_a = {
-	.type = BPF_MAP_TYPE_ARRAY_OF_MAPS,
-	.key_size = sizeof(u32),
-	.inner_map_idx = 0, /* map_fd[0] is port_a */
-	.max_entries = MAX_NR_PORTS,
-};
+struct {
+	__uint(type, BPF_MAP_TYPE_ARRAY_OF_MAPS);
+	__uint(max_entries, MAX_NR_PORTS);
+	__uint(key_size, sizeof(u32));
+	__array(values, struct inner_a); /* use inner_a as inner map */
+} a_of_port_a SEC(".maps");
 
 /* map #5 */ /* Test case #1 */
-struct bpf_map_def_legacy SEC("maps") h_of_port_a = {
-	.type = BPF_MAP_TYPE_HASH_OF_MAPS,
-	.key_size = sizeof(u32),
-	.inner_map_idx = 0, /* map_fd[0] is port_a */
-	.max_entries = 1,
-};
+struct {
+	__uint(type, BPF_MAP_TYPE_HASH_OF_MAPS);
+	__uint(max_entries, 1);
+	__uint(key_size, sizeof(u32));
+	__array(values, struct inner_a); /* use inner_a as inner map */
+} h_of_port_a SEC(".maps");
 
 /* map #6 */ /* Test case #2 */
-struct bpf_map_def_legacy SEC("maps") h_of_port_h = {
-	.type = BPF_MAP_TYPE_HASH_OF_MAPS,
-	.key_size = sizeof(u32),
-	.inner_map_idx = 1, /* map_fd[1] is port_h */
-	.max_entries = 1,
-};
+struct {
+	__uint(type, BPF_MAP_TYPE_HASH_OF_MAPS);
+	__uint(max_entries, 1);
+	__uint(key_size, sizeof(u32));
+	__array(values, struct inner_h); /* use inner_h as inner map */
+} h_of_port_h SEC(".maps");
 
 static __always_inline int do_reg_lookup(void *inner_map, u32 port)
 {
diff --git a/samples/bpf/test_map_in_map_user.c b/samples/bpf/test_map_in_map_user.c
index eb29bcb76f3f..98656de56b83 100644
--- a/samples/bpf/test_map_in_map_user.c
+++ b/samples/bpf/test_map_in_map_user.c
@@ -11,7 +11,9 @@
 #include <stdlib.h>
 #include <stdio.h>
 #include <bpf/bpf.h>
-#include "bpf_load.h"
+#include <bpf/libbpf.h>
+
+static int map_fd[7];
 
 #define PORT_A		(map_fd[0])
 #define PORT_H		(map_fd[1])
@@ -113,18 +115,59 @@ static void test_map_in_map(void)
 int main(int argc, char **argv)
 {
 	struct rlimit r = {RLIM_INFINITY, RLIM_INFINITY};
+	struct bpf_link *link = NULL;
+	struct bpf_program *prog;
+	struct bpf_object *obj;
 	char filename[256];
 
-	assert(!setrlimit(RLIMIT_MEMLOCK, &r));
+	if (setrlimit(RLIMIT_MEMLOCK, &r)) {
+		perror("setrlimit(RLIMIT_MEMLOCK)");
+		return 1;
+	}
 
 	snprintf(filename, sizeof(filename), "%s_kern.o", argv[0]);
+	obj = bpf_object__open_file(filename, NULL);
+	if (libbpf_get_error(obj)) {
+		fprintf(stderr, "ERROR: opening BPF object file failed\n");
+		return 0;
+	}
 
-	if (load_bpf_file(filename)) {
-		printf("%s", bpf_log_buf);
-		return 1;
+	prog = bpf_object__find_program_by_name(obj, "trace_sys_connect");
+	if (!prog) {
+		printf("finding a prog in obj file failed\n");
+		goto cleanup;
+	}
+
+	/* load BPF program */
+	if (bpf_object__load(obj)) {
+		fprintf(stderr, "ERROR: loading BPF object file failed\n");
+		goto cleanup;
+	}
+
+	map_fd[0] = bpf_object__find_map_fd_by_name(obj, "port_a");
+	map_fd[1] = bpf_object__find_map_fd_by_name(obj, "port_h");
+	map_fd[2] = bpf_object__find_map_fd_by_name(obj, "reg_result_h");
+	map_fd[3] = bpf_object__find_map_fd_by_name(obj, "inline_result_h");
+	map_fd[4] = bpf_object__find_map_fd_by_name(obj, "a_of_port_a");
+	map_fd[5] = bpf_object__find_map_fd_by_name(obj, "h_of_port_a");
+	map_fd[6] = bpf_object__find_map_fd_by_name(obj, "h_of_port_h");
+	if (map_fd[0] < 0 || map_fd[1] < 0 || map_fd[2] < 0 ||
+	    map_fd[3] < 0 || map_fd[4] < 0 || map_fd[5] < 0 || map_fd[6] < 0) {
+		fprintf(stderr, "ERROR: finding a map in obj file failed\n");
+		goto cleanup;
+	}
+
+	link = bpf_program__attach(prog);
+	if (libbpf_get_error(link)) {
+		fprintf(stderr, "ERROR: bpf_program__attach failed\n");
+		link = NULL;
+		goto cleanup;
 	}
 
 	test_map_in_map();
 
+cleanup:
+	bpf_link__destroy(link);
+	bpf_object__close(obj);
 	return 0;
 }
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 9+ messages in thread

* [PATCH bpf-next v2 3/4] samples: bpf: refactor BPF map performance test with libbpf
  2020-07-07 18:48 [PATCH bpf-next v2 0/4] samples: bpf: refactor BPF map test with libbpf Daniel T. Lee
  2020-07-07 18:48 ` [PATCH bpf-next v2 1/4] samples: bpf: fix bpf programs with kprobe/sys_connect event Daniel T. Lee
  2020-07-07 18:48 ` [PATCH bpf-next v2 2/4] samples: bpf: refactor BPF map in map test with libbpf Daniel T. Lee
@ 2020-07-07 18:48 ` Daniel T. Lee
  2020-07-07 18:48 ` [PATCH bpf-next v2 4/4] selftests: bpf: remove unused bpf_map_def_legacy struct Daniel T. Lee
  2020-07-07 19:01 ` [PATCH bpf-next v2 0/4] samples: bpf: refactor BPF map test with libbpf Andrii Nakryiko
  4 siblings, 0 replies; 9+ messages in thread
From: Daniel T. Lee @ 2020-07-07 18:48 UTC (permalink / raw)
  To: Daniel Borkmann, Alexei Starovoitov, Yonghong Song,
	Martin KaFai Lau, Andrii Nakryiko, Andrii Nakryiko
  Cc: netdev, bpf

Previously, in order to set the numa_node attribute at map creation
time with libbpf, it was necessary to call bpf_create_map_node()
directly (the bpf_load approach) instead of bpf_object__load(), which
handles everything on its own, including map creation. Because of this,
this sample could not easily be refactored from bpf_load to libbpf.

However, commit 1bdb6c9a1c43 ("libbpf: Add a bunch of attribute
getters/setters for map definitions") added the numa_node attribute and
allows it to be set in the map definition.
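
As in the diff below, this lets the numa_node be declared directly in
the BTF-defined map:

    struct inner_lru {
            __uint(type, BPF_MAP_TYPE_LRU_HASH);
            __type(key, u32);
            __type(value, long);
            __uint(max_entries, MAX_ENTRIES);
            __uint(map_flags, BPF_F_NUMA_NODE);
            __uint(numa_node, 0);
    } inner_lru_hash_map SEC(".maps");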

By using libbpf instead of bpf_load, the inner map definition is now
explicitly declared in BTF-defined format, and the element of the
ARRAY_OF_MAPS is statically initialized in the BTF format as well. As a
result, some logic in fixup_map() is no longer needed and has been
changed or removed.
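
Likewise, the first element of the ARRAY_OF_MAPS is now statically
initialized via '.values', as in the diff below:

    struct {
            __uint(type, BPF_MAP_TYPE_ARRAY_OF_MAPS);
            __uint(max_entries, MAX_NR_CPUS);
            __uint(key_size, sizeof(u32));
            __array(values, struct inner_lru); /* use inner_lru as inner map */
    } array_of_lru_hashs SEC(".maps") = {
            /* statically initialize the first element */
            .values = { &inner_lru_hash_map },
    };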

Signed-off-by: Daniel T. Lee <danieltimlee@gmail.com>

---
Changes in V2:
 - set numa_node 0 declaratively at map definition instead of setting it
 from user-space
 - static initialization of ARRAY_OF_MAPS element with '.values'

 samples/bpf/map_perf_test_kern.c | 179 ++++++++++++++++---------------
 samples/bpf/map_perf_test_user.c | 164 ++++++++++++++++++----------
 2 files changed, 196 insertions(+), 147 deletions(-)

diff --git a/samples/bpf/map_perf_test_kern.c b/samples/bpf/map_perf_test_kern.c
index c9b31193ca12..8773f22b6a98 100644
--- a/samples/bpf/map_perf_test_kern.c
+++ b/samples/bpf/map_perf_test_kern.c
@@ -9,7 +9,6 @@
 #include <linux/version.h>
 #include <uapi/linux/bpf.h>
 #include <bpf/bpf_helpers.h>
-#include "bpf_legacy.h"
 #include <bpf/bpf_tracing.h>
 #include <bpf/bpf_core_read.h>
 #include "trace_common.h"
@@ -17,89 +16,93 @@
 #define MAX_ENTRIES 1000
 #define MAX_NR_CPUS 1024
 
-struct bpf_map_def_legacy SEC("maps") hash_map = {
-	.type = BPF_MAP_TYPE_HASH,
-	.key_size = sizeof(u32),
-	.value_size = sizeof(long),
-	.max_entries = MAX_ENTRIES,
+struct {
+	__uint(type, BPF_MAP_TYPE_HASH);
+	__type(key, u32);
+	__type(value, long);
+	__uint(max_entries, MAX_ENTRIES);
+} hash_map SEC(".maps");
+
+struct {
+	__uint(type, BPF_MAP_TYPE_LRU_HASH);
+	__type(key, u32);
+	__type(value, long);
+	__uint(max_entries, 10000);
+} lru_hash_map SEC(".maps");
+
+struct {
+	__uint(type, BPF_MAP_TYPE_LRU_HASH);
+	__type(key, u32);
+	__type(value, long);
+	__uint(max_entries, 10000);
+	__uint(map_flags, BPF_F_NO_COMMON_LRU);
+} nocommon_lru_hash_map SEC(".maps");
+
+struct inner_lru {
+	__uint(type, BPF_MAP_TYPE_LRU_HASH);
+	__type(key, u32);
+	__type(value, long);
+	__uint(max_entries, MAX_ENTRIES);
+	__uint(map_flags, BPF_F_NUMA_NODE);
+	__uint(numa_node, 0);
+} inner_lru_hash_map SEC(".maps");
+
+struct {
+	__uint(type, BPF_MAP_TYPE_ARRAY_OF_MAPS);
+	__uint(max_entries, MAX_NR_CPUS);
+	__uint(key_size, sizeof(u32));
+	__array(values, struct inner_lru); /* use inner_lru as inner map */
+} array_of_lru_hashs SEC(".maps") = {
+	/* statically initialize the first element */
+	.values = { &inner_lru_hash_map },
 };
 
-struct bpf_map_def_legacy SEC("maps") lru_hash_map = {
-	.type = BPF_MAP_TYPE_LRU_HASH,
-	.key_size = sizeof(u32),
-	.value_size = sizeof(long),
-	.max_entries = 10000,
-};
-
-struct bpf_map_def_legacy SEC("maps") nocommon_lru_hash_map = {
-	.type = BPF_MAP_TYPE_LRU_HASH,
-	.key_size = sizeof(u32),
-	.value_size = sizeof(long),
-	.max_entries = 10000,
-	.map_flags = BPF_F_NO_COMMON_LRU,
-};
-
-struct bpf_map_def_legacy SEC("maps") inner_lru_hash_map = {
-	.type = BPF_MAP_TYPE_LRU_HASH,
-	.key_size = sizeof(u32),
-	.value_size = sizeof(long),
-	.max_entries = MAX_ENTRIES,
-	.map_flags = BPF_F_NUMA_NODE,
-	.numa_node = 0,
-};
-
-struct bpf_map_def_legacy SEC("maps") array_of_lru_hashs = {
-	.type = BPF_MAP_TYPE_ARRAY_OF_MAPS,
-	.key_size = sizeof(u32),
-	.max_entries = MAX_NR_CPUS,
-};
-
-struct bpf_map_def_legacy SEC("maps") percpu_hash_map = {
-	.type = BPF_MAP_TYPE_PERCPU_HASH,
-	.key_size = sizeof(u32),
-	.value_size = sizeof(long),
-	.max_entries = MAX_ENTRIES,
-};
-
-struct bpf_map_def_legacy SEC("maps") hash_map_alloc = {
-	.type = BPF_MAP_TYPE_HASH,
-	.key_size = sizeof(u32),
-	.value_size = sizeof(long),
-	.max_entries = MAX_ENTRIES,
-	.map_flags = BPF_F_NO_PREALLOC,
-};
-
-struct bpf_map_def_legacy SEC("maps") percpu_hash_map_alloc = {
-	.type = BPF_MAP_TYPE_PERCPU_HASH,
-	.key_size = sizeof(u32),
-	.value_size = sizeof(long),
-	.max_entries = MAX_ENTRIES,
-	.map_flags = BPF_F_NO_PREALLOC,
-};
-
-struct bpf_map_def_legacy SEC("maps") lpm_trie_map_alloc = {
-	.type = BPF_MAP_TYPE_LPM_TRIE,
-	.key_size = 8,
-	.value_size = sizeof(long),
-	.max_entries = 10000,
-	.map_flags = BPF_F_NO_PREALLOC,
-};
-
-struct bpf_map_def_legacy SEC("maps") array_map = {
-	.type = BPF_MAP_TYPE_ARRAY,
-	.key_size = sizeof(u32),
-	.value_size = sizeof(long),
-	.max_entries = MAX_ENTRIES,
-};
-
-struct bpf_map_def_legacy SEC("maps") lru_hash_lookup_map = {
-	.type = BPF_MAP_TYPE_LRU_HASH,
-	.key_size = sizeof(u32),
-	.value_size = sizeof(long),
-	.max_entries = MAX_ENTRIES,
-};
-
-SEC("kprobe/sys_getuid")
+struct {
+	__uint(type, BPF_MAP_TYPE_PERCPU_HASH);
+	__uint(key_size, sizeof(u32));
+	__uint(value_size, sizeof(long));
+	__uint(max_entries, MAX_ENTRIES);
+} percpu_hash_map SEC(".maps");
+
+struct {
+	__uint(type, BPF_MAP_TYPE_HASH);
+	__type(key, u32);
+	__type(value, long);
+	__uint(max_entries, MAX_ENTRIES);
+	__uint(map_flags, BPF_F_NO_PREALLOC);
+} hash_map_alloc SEC(".maps");
+
+struct {
+	__uint(type, BPF_MAP_TYPE_PERCPU_HASH);
+	__uint(key_size, sizeof(u32));
+	__uint(value_size, sizeof(long));
+	__uint(max_entries, MAX_ENTRIES);
+	__uint(map_flags, BPF_F_NO_PREALLOC);
+} percpu_hash_map_alloc SEC(".maps");
+
+struct {
+	__uint(type, BPF_MAP_TYPE_LPM_TRIE);
+	__uint(key_size, 8);
+	__uint(value_size, sizeof(long));
+	__uint(max_entries, 10000);
+	__uint(map_flags, BPF_F_NO_PREALLOC);
+} lpm_trie_map_alloc SEC(".maps");
+
+struct {
+	__uint(type, BPF_MAP_TYPE_ARRAY);
+	__type(key, u32);
+	__type(value, long);
+	__uint(max_entries, MAX_ENTRIES);
+} array_map SEC(".maps");
+
+struct {
+	__uint(type, BPF_MAP_TYPE_LRU_HASH);
+	__type(key, u32);
+	__type(value, long);
+	__uint(max_entries, MAX_ENTRIES);
+} lru_hash_lookup_map SEC(".maps");
+
+SEC("kprobe/" SYSCALL(sys_getuid))
 int stress_hmap(struct pt_regs *ctx)
 {
 	u32 key = bpf_get_current_pid_tgid();
@@ -114,7 +117,7 @@ int stress_hmap(struct pt_regs *ctx)
 	return 0;
 }
 
-SEC("kprobe/sys_geteuid")
+SEC("kprobe/" SYSCALL(sys_geteuid))
 int stress_percpu_hmap(struct pt_regs *ctx)
 {
 	u32 key = bpf_get_current_pid_tgid();
@@ -128,7 +131,7 @@ int stress_percpu_hmap(struct pt_regs *ctx)
 	return 0;
 }
 
-SEC("kprobe/sys_getgid")
+SEC("kprobe/" SYSCALL(sys_getgid))
 int stress_hmap_alloc(struct pt_regs *ctx)
 {
 	u32 key = bpf_get_current_pid_tgid();
@@ -142,7 +145,7 @@ int stress_hmap_alloc(struct pt_regs *ctx)
 	return 0;
 }
 
-SEC("kprobe/sys_getegid")
+SEC("kprobe/" SYSCALL(sys_getegid))
 int stress_percpu_hmap_alloc(struct pt_regs *ctx)
 {
 	u32 key = bpf_get_current_pid_tgid();
@@ -236,7 +239,7 @@ int stress_lru_hmap_alloc(struct pt_regs *ctx)
 	return 0;
 }
 
-SEC("kprobe/sys_gettid")
+SEC("kprobe/" SYSCALL(sys_gettid))
 int stress_lpm_trie_map_alloc(struct pt_regs *ctx)
 {
 	union {
@@ -258,7 +261,7 @@ int stress_lpm_trie_map_alloc(struct pt_regs *ctx)
 	return 0;
 }
 
-SEC("kprobe/sys_getpgid")
+SEC("kprobe/" SYSCALL(sys_getpgid))
 int stress_hash_map_lookup(struct pt_regs *ctx)
 {
 	u32 key = 1, i;
@@ -271,7 +274,7 @@ int stress_hash_map_lookup(struct pt_regs *ctx)
 	return 0;
 }
 
-SEC("kprobe/sys_getppid")
+SEC("kprobe/" SYSCALL(sys_getppid))
 int stress_array_map_lookup(struct pt_regs *ctx)
 {
 	u32 key = 1, i;
diff --git a/samples/bpf/map_perf_test_user.c b/samples/bpf/map_perf_test_user.c
index fe5564bff39b..8b13230b4c46 100644
--- a/samples/bpf/map_perf_test_user.c
+++ b/samples/bpf/map_perf_test_user.c
@@ -11,7 +11,6 @@
 #include <sys/wait.h>
 #include <stdlib.h>
 #include <signal.h>
-#include <linux/bpf.h>
 #include <string.h>
 #include <time.h>
 #include <sys/resource.h>
@@ -19,7 +18,7 @@
 #include <errno.h>
 
 #include <bpf/bpf.h>
-#include "bpf_load.h"
+#include <bpf/libbpf.h>
 
 #define TEST_BIT(t) (1U << (t))
 #define MAX_NR_CPUS 1024
@@ -61,12 +60,18 @@ const char *test_map_names[NR_TESTS] = {
 	[LRU_HASH_LOOKUP] = "lru_hash_lookup_map",
 };
 
+enum map_idx {
+	array_of_lru_hashs_idx,
+	hash_map_alloc_idx,
+	lru_hash_lookup_idx,
+	NR_IDXES,
+};
+
+static int map_fd[NR_IDXES];
+
 static int test_flags = ~0;
 static uint32_t num_map_entries;
 static uint32_t inner_lru_hash_size;
-static int inner_lru_hash_idx = -1;
-static int array_of_lru_hashs_idx = -1;
-static int lru_hash_lookup_idx = -1;
 static int lru_hash_lookup_test_entries = 32;
 static uint32_t max_cnt = 1000000;
 
@@ -122,30 +127,30 @@ static void do_test_lru(enum test_type test, int cpu)
 	__u64 start_time;
 	int i, ret;
 
-	if (test == INNER_LRU_HASH_PREALLOC) {
+	if (test == INNER_LRU_HASH_PREALLOC && cpu) {
+		/* If CPU is not 0, create inner_lru hash map and insert the fd
+		 * value into the array_of_lru_hash map. In case of CPU 0,
+		 * 'inner_lru_hash_map' was statically inserted on the map init
+		 */
 		int outer_fd = map_fd[array_of_lru_hashs_idx];
 		unsigned int mycpu, mynode;
 
 		assert(cpu < MAX_NR_CPUS);
 
-		if (cpu) {
-			ret = syscall(__NR_getcpu, &mycpu, &mynode, NULL);
-			assert(!ret);
-
-			inner_lru_map_fds[cpu] =
-				bpf_create_map_node(BPF_MAP_TYPE_LRU_HASH,
-						    test_map_names[INNER_LRU_HASH_PREALLOC],
-						    sizeof(uint32_t),
-						    sizeof(long),
-						    inner_lru_hash_size, 0,
-						    mynode);
-			if (inner_lru_map_fds[cpu] == -1) {
-				printf("cannot create BPF_MAP_TYPE_LRU_HASH %s(%d)\n",
-				       strerror(errno), errno);
-				exit(1);
-			}
-		} else {
-			inner_lru_map_fds[cpu] = map_fd[inner_lru_hash_idx];
+		ret = syscall(__NR_getcpu, &mycpu, &mynode, NULL);
+		assert(!ret);
+
+		inner_lru_map_fds[cpu] =
+			bpf_create_map_node(BPF_MAP_TYPE_LRU_HASH,
+					    test_map_names[INNER_LRU_HASH_PREALLOC],
+					    sizeof(uint32_t),
+					    sizeof(long),
+					    inner_lru_hash_size, 0,
+					    mynode);
+		if (inner_lru_map_fds[cpu] == -1) {
+			printf("cannot create BPF_MAP_TYPE_LRU_HASH %s(%d)\n",
+			       strerror(errno), errno);
+			exit(1);
 		}
 
 		ret = bpf_map_update_elem(outer_fd, &cpu,
@@ -377,7 +382,8 @@ static void fill_lpm_trie(void)
 		key->data[1] = rand() & 0xff;
 		key->data[2] = rand() & 0xff;
 		key->data[3] = rand() & 0xff;
-		r = bpf_map_update_elem(map_fd[6], key, &value, 0);
+		r = bpf_map_update_elem(map_fd[hash_map_alloc_idx],
+					key, &value, 0);
 		assert(!r);
 	}
 
@@ -388,59 +394,52 @@ static void fill_lpm_trie(void)
 	key->data[3] = 1;
 	value = 128;
 
-	r = bpf_map_update_elem(map_fd[6], key, &value, 0);
+	r = bpf_map_update_elem(map_fd[hash_map_alloc_idx], key, &value, 0);
 	assert(!r);
 }
 
-static void fixup_map(struct bpf_map_data *map, int idx)
+static void fixup_map(struct bpf_object *obj)
 {
+	struct bpf_map *map;
 	int i;
 
-	if (!strcmp("inner_lru_hash_map", map->name)) {
-		inner_lru_hash_idx = idx;
-		inner_lru_hash_size = map->def.max_entries;
-	}
+	bpf_object__for_each_map(map, obj) {
+		const char *name = bpf_map__name(map);
 
-	if (!strcmp("array_of_lru_hashs", map->name)) {
-		if (inner_lru_hash_idx == -1) {
-			printf("inner_lru_hash_map must be defined before array_of_lru_hashs\n");
-			exit(1);
+		/* Only change the max_entries for the enabled test(s) */
+		for (i = 0; i < NR_TESTS; i++) {
+			if (!strcmp(test_map_names[i], name) &&
+			    (check_test_flags(i))) {
+				bpf_map__resize(map, num_map_entries);
+				continue;
+			}
 		}
-		map->def.inner_map_idx = inner_lru_hash_idx;
-		array_of_lru_hashs_idx = idx;
 	}
 
-	if (!strcmp("lru_hash_lookup_map", map->name))
-		lru_hash_lookup_idx = idx;
-
-	if (num_map_entries <= 0)
-		return;
-
 	inner_lru_hash_size = num_map_entries;
-
-	/* Only change the max_entries for the enabled test(s) */
-	for (i = 0; i < NR_TESTS; i++) {
-		if (!strcmp(test_map_names[i], map->name) &&
-		    (check_test_flags(i))) {
-			map->def.max_entries = num_map_entries;
-		}
-	}
 }
 
 int main(int argc, char **argv)
 {
 	struct rlimit r = {RLIM_INFINITY, RLIM_INFINITY};
+	int nr_cpus = sysconf(_SC_NPROCESSORS_ONLN);
+	struct bpf_link *links[8];
+	struct bpf_program *prog;
+	struct bpf_object *obj;
+	struct bpf_map *map;
 	char filename[256];
-	int num_cpu = 8;
+	int i = 0;
 
-	snprintf(filename, sizeof(filename), "%s_kern.o", argv[0]);
-	setrlimit(RLIMIT_MEMLOCK, &r);
+	if (setrlimit(RLIMIT_MEMLOCK, &r)) {
+		perror("setrlimit(RLIMIT_MEMLOCK)");
+		return 1;
+	}
 
 	if (argc > 1)
 		test_flags = atoi(argv[1]) ? : test_flags;
 
 	if (argc > 2)
-		num_cpu = atoi(argv[2]) ? : num_cpu;
+		nr_cpus = atoi(argv[2]) ? : nr_cpus;
 
 	if (argc > 3)
 		num_map_entries = atoi(argv[3]);
@@ -448,14 +447,61 @@ int main(int argc, char **argv)
 	if (argc > 4)
 		max_cnt = atoi(argv[4]);
 
-	if (load_bpf_file_fixup_map(filename, fixup_map)) {
-		printf("%s", bpf_log_buf);
-		return 1;
+	snprintf(filename, sizeof(filename), "%s_kern.o", argv[0]);
+	obj = bpf_object__open_file(filename, NULL);
+	if (libbpf_get_error(obj)) {
+		fprintf(stderr, "ERROR: opening BPF object file failed\n");
+		return 0;
+	}
+
+	map = bpf_object__find_map_by_name(obj, "inner_lru_hash_map");
+	if (libbpf_get_error(map)) {
+		fprintf(stderr, "ERROR: finding a map in obj file failed\n");
+		goto cleanup;
+	}
+
+	inner_lru_hash_size = bpf_map__max_entries(map);
+	if (!inner_lru_hash_size) {
+		fprintf(stderr, "ERROR: failed to get map attribute\n");
+		goto cleanup;
+	}
+
+	/* resize BPF map prior to loading */
+	if (num_map_entries > 0)
+		fixup_map(obj);
+
+	/* load BPF program */
+	if (bpf_object__load(obj)) {
+		fprintf(stderr, "ERROR: loading BPF object file failed\n");
+		goto cleanup;
+	}
+
+	map_fd[0] = bpf_object__find_map_fd_by_name(obj, "array_of_lru_hashs");
+	map_fd[1] = bpf_object__find_map_fd_by_name(obj, "hash_map_alloc");
+	map_fd[2] = bpf_object__find_map_fd_by_name(obj, "lru_hash_lookup_map");
+	if (map_fd[0] < 0 || map_fd[1] < 0 || map_fd[2] < 0) {
+		fprintf(stderr, "ERROR: finding a map in obj file failed\n");
+		goto cleanup;
+	}
+
+	bpf_object__for_each_program(prog, obj) {
+		links[i] = bpf_program__attach(prog);
+		if (libbpf_get_error(links[i])) {
+			fprintf(stderr, "ERROR: bpf_program__attach failed\n");
+			links[i] = NULL;
+			goto cleanup;
+		}
+		i++;
 	}
 
 	fill_lpm_trie();
 
-	run_perf_test(num_cpu);
+	run_perf_test(nr_cpus);
+
+cleanup:
+	for (i--; i >= 0; i--)
+		bpf_link__destroy(links[i]);
 
+	bpf_object__close(obj);
 	return 0;
 }
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 9+ messages in thread

* [PATCH bpf-next v2 4/4] selftests: bpf: remove unused bpf_map_def_legacy struct
  2020-07-07 18:48 [PATCH bpf-next v2 0/4] samples: bpf: refactor BPF map test with libbpf Daniel T. Lee
                   ` (2 preceding siblings ...)
  2020-07-07 18:48 ` [PATCH bpf-next v2 3/4] samples: bpf: refactor BPF map performance " Daniel T. Lee
@ 2020-07-07 18:48 ` Daniel T. Lee
  2020-07-07 19:00   ` Andrii Nakryiko
  2020-07-07 19:01 ` [PATCH bpf-next v2 0/4] samples: bpf: refactor BPF map test with libbpf Andrii Nakryiko
  4 siblings, 1 reply; 9+ messages in thread
From: Daniel T. Lee @ 2020-07-07 18:48 UTC (permalink / raw)
  To: Daniel Borkmann, Alexei Starovoitov, Yonghong Song,
	Martin KaFai Lau, Andrii Nakryiko, Andrii Nakryiko
  Cc: netdev, bpf

samples/bpf no longer uses bpf_map_def_legacy and instead uses libbpf's
bpf_map_def or the new BTF-defined map format. This commit removes the
unused bpf_map_def_legacy struct from selftests/bpf/bpf_legacy.h.

Signed-off-by: Daniel T. Lee <danieltimlee@gmail.com>
---
 tools/testing/selftests/bpf/bpf_legacy.h | 14 --------------
 1 file changed, 14 deletions(-)

diff --git a/tools/testing/selftests/bpf/bpf_legacy.h b/tools/testing/selftests/bpf/bpf_legacy.h
index 6f8988738bc1..719ab56cdb5d 100644
--- a/tools/testing/selftests/bpf/bpf_legacy.h
+++ b/tools/testing/selftests/bpf/bpf_legacy.h
@@ -2,20 +2,6 @@
 #ifndef __BPF_LEGACY__
 #define __BPF_LEGACY__
 
-/*
- * legacy bpf_map_def with extra fields supported only by bpf_load(), do not
- * use outside of samples/bpf
- */
-struct bpf_map_def_legacy {
-	unsigned int type;
-	unsigned int key_size;
-	unsigned int value_size;
-	unsigned int max_entries;
-	unsigned int map_flags;
-	unsigned int inner_map_idx;
-	unsigned int numa_node;
-};
-
 #define BPF_ANNOTATE_KV_PAIR(name, type_key, type_val)		\
 	struct ____btf_map_##name {				\
 		type_key key;					\
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 9+ messages in thread

* Re: [PATCH bpf-next v2 4/4] selftests: bpf: remove unused bpf_map_def_legacy struct
  2020-07-07 18:48 ` [PATCH bpf-next v2 4/4] selftests: bpf: remove unused bpf_map_def_legacy struct Daniel T. Lee
@ 2020-07-07 19:00   ` Andrii Nakryiko
  2020-07-08  1:56     ` Daniel T. Lee
  0 siblings, 1 reply; 9+ messages in thread
From: Andrii Nakryiko @ 2020-07-07 19:00 UTC (permalink / raw)
  To: Daniel T. Lee
  Cc: Daniel Borkmann, Alexei Starovoitov, Yonghong Song,
	Martin KaFai Lau, Andrii Nakryiko, Networking, bpf

On Tue, Jul 7, 2020 at 11:49 AM Daniel T. Lee <danieltimlee@gmail.com> wrote:
>
> samples/bpf no longer uses bpf_map_def_legacy and instead uses
> libbpf's bpf_map_def or the new BTF-defined map format. This commit
> removes the unused bpf_map_def_legacy struct from
> selftests/bpf/bpf_legacy.h.
>
> Signed-off-by: Daniel T. Lee <danieltimlee@gmail.com>
> ---

Next time please don't forget to keep Ack's you've received on
previous revision.

>  tools/testing/selftests/bpf/bpf_legacy.h | 14 --------------
>  1 file changed, 14 deletions(-)
>
> diff --git a/tools/testing/selftests/bpf/bpf_legacy.h b/tools/testing/selftests/bpf/bpf_legacy.h
> index 6f8988738bc1..719ab56cdb5d 100644
> --- a/tools/testing/selftests/bpf/bpf_legacy.h
> +++ b/tools/testing/selftests/bpf/bpf_legacy.h
> @@ -2,20 +2,6 @@
>  #ifndef __BPF_LEGACY__
>  #define __BPF_LEGACY__
>
> -/*
> - * legacy bpf_map_def with extra fields supported only by bpf_load(), do not
> - * use outside of samples/bpf
> - */
> -struct bpf_map_def_legacy {
> -       unsigned int type;
> -       unsigned int key_size;
> -       unsigned int value_size;
> -       unsigned int max_entries;
> -       unsigned int map_flags;
> -       unsigned int inner_map_idx;
> -       unsigned int numa_node;
> -};
> -
>  #define BPF_ANNOTATE_KV_PAIR(name, type_key, type_val)         \
>         struct ____btf_map_##name {                             \
>                 type_key key;                                   \
> --
> 2.25.1
>

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCH bpf-next v2 0/4] samples: bpf: refactor BPF map test with libbpf
  2020-07-07 18:48 [PATCH bpf-next v2 0/4] samples: bpf: refactor BPF map test with libbpf Daniel T. Lee
                   ` (3 preceding siblings ...)
  2020-07-07 18:48 ` [PATCH bpf-next v2 4/4] selftests: bpf: remove unused bpf_map_def_legacy struct Daniel T. Lee
@ 2020-07-07 19:01 ` Andrii Nakryiko
  2020-07-07 23:59   ` Daniel Borkmann
  4 siblings, 1 reply; 9+ messages in thread
From: Andrii Nakryiko @ 2020-07-07 19:01 UTC (permalink / raw)
  To: Daniel T. Lee
  Cc: Daniel Borkmann, Alexei Starovoitov, Yonghong Song,
	Martin KaFai Lau, Andrii Nakryiko, Networking, bpf

On Tue, Jul 7, 2020 at 11:49 AM Daniel T. Lee <danieltimlee@gmail.com> wrote:
>
> There have been many changes in how the current bpf program defines
> map. The development of libbpf has led to the new method called
> BTF-defined map, which is a new way of defining BPF maps, and thus has
> a lot of differences from the existing MAP definition method.
>
> Although bpf_load was also internally using libbpf, fragmentation in
> its implementation began to occur, such as using its own structure,
> bpf_load_map_def, to define the map.
>
> Therefore, in this patch set, map test programs, which are closely
> related to changes in the definition method of BPF map, were refactored
> with libbpf.
>
> ---

For the series:

Acked-by: Andrii Nakryiko <andriin@fb.com>

> Changes in V2:
>  - instead of changing event from __x64_sys_connect to __sys_connect,
>  fetch and set register values directly
>  - fix wrong error check logic with bpf_program
>  - set numa_node 0 declaratively at map definition instead of setting it
>  from user-space
>  - static initialization of ARRAY_OF_MAPS element with '.values'
>
> Daniel T. Lee (4):
>   samples: bpf: fix bpf programs with kprobe/sys_connect event
>   samples: bpf: refactor BPF map in map test with libbpf
>   samples: bpf: refactor BPF map performance test with libbpf
>   selftests: bpf: remove unused bpf_map_def_legacy struct
>
>  samples/bpf/Makefile                     |   2 +-
>  samples/bpf/map_perf_test_kern.c         | 188 ++++++++++++-----------
>  samples/bpf/map_perf_test_user.c         | 164 +++++++++++++-------
>  samples/bpf/test_map_in_map_kern.c       |  94 ++++++------
>  samples/bpf/test_map_in_map_user.c       |  53 ++++++-
>  samples/bpf/test_probe_write_user_kern.c |   9 +-
>  tools/testing/selftests/bpf/bpf_legacy.h |  14 --
>  7 files changed, 305 insertions(+), 219 deletions(-)
>
> --
> 2.25.1
>

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCH bpf-next v2 0/4] samples: bpf: refactor BPF map test with libbpf
  2020-07-07 19:01 ` [PATCH bpf-next v2 0/4] samples: bpf: refactor BPF map test with libbpf Andrii Nakryiko
@ 2020-07-07 23:59   ` Daniel Borkmann
  0 siblings, 0 replies; 9+ messages in thread
From: Daniel Borkmann @ 2020-07-07 23:59 UTC (permalink / raw)
  To: Andrii Nakryiko, Daniel T. Lee
  Cc: Alexei Starovoitov, Yonghong Song, Martin KaFai Lau,
	Andrii Nakryiko, Networking, bpf

On 7/7/20 9:01 PM, Andrii Nakryiko wrote:
> On Tue, Jul 7, 2020 at 11:49 AM Daniel T. Lee <danieltimlee@gmail.com> wrote:
>>
>> There have been many changes in how the current bpf program defines
>> map. The development of libbpf has led to the new method called
>> BTF-defined map, which is a new way of defining BPF maps, and thus has
>> a lot of differences from the existing MAP definition method.
>>
>> Although bpf_load was also internally using libbpf, fragmentation in
>> its implementation began to occur, such as using its own structure,
>> bpf_load_map_def, to define the map.
>>
>> Therefore, in this patch set, map test programs, which are closely
>> related to changes in the definition method of BPF map, were refactored
>> with libbpf.
>>
>> ---
> 
> For the series:
> 
> Acked-by: Andrii Nakryiko <andriin@fb.com>

Applied, thanks everyone!

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCH bpf-next v2 4/4] selftests: bpf: remove unused bpf_map_def_legacy struct
  2020-07-07 19:00   ` Andrii Nakryiko
@ 2020-07-08  1:56     ` Daniel T. Lee
  0 siblings, 0 replies; 9+ messages in thread
From: Daniel T. Lee @ 2020-07-08  1:56 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: Daniel Borkmann, Alexei Starovoitov, Yonghong Song,
	Martin KaFai Lau, Andrii Nakryiko, Networking, bpf

On Wed, Jul 8, 2020 at 4:00 AM Andrii Nakryiko
<andrii.nakryiko@gmail.com> wrote:
>
> On Tue, Jul 7, 2020 at 11:49 AM Daniel T. Lee <danieltimlee@gmail.com> wrote:
> >
> > samples/bpf no longer uses bpf_map_def_legacy and instead uses
> > libbpf's bpf_map_def or the new BTF-defined map format. This commit
> > removes the unused bpf_map_def_legacy struct from
> > selftests/bpf/bpf_legacy.h.
> >
> > Signed-off-by: Daniel T. Lee <danieltimlee@gmail.com>
> > ---
>
> Next time please don't forget to keep Ack's you've received on
> previous revision.
>

I'll keep that in mind.

Thank you for your time and effort for the review.
Daniel.

> >  tools/testing/selftests/bpf/bpf_legacy.h | 14 --------------
> >  1 file changed, 14 deletions(-)
> >
> > diff --git a/tools/testing/selftests/bpf/bpf_legacy.h b/tools/testing/selftests/bpf/bpf_legacy.h
> > index 6f8988738bc1..719ab56cdb5d 100644
> > --- a/tools/testing/selftests/bpf/bpf_legacy.h
> > +++ b/tools/testing/selftests/bpf/bpf_legacy.h
> > @@ -2,20 +2,6 @@
> >  #ifndef __BPF_LEGACY__
> >  #define __BPF_LEGACY__
> >
> > -/*
> > - * legacy bpf_map_def with extra fields supported only by bpf_load(), do not
> > - * use outside of samples/bpf
> > - */
> > -struct bpf_map_def_legacy {
> > -       unsigned int type;
> > -       unsigned int key_size;
> > -       unsigned int value_size;
> > -       unsigned int max_entries;
> > -       unsigned int map_flags;
> > -       unsigned int inner_map_idx;
> > -       unsigned int numa_node;
> > -};
> > -
> >  #define BPF_ANNOTATE_KV_PAIR(name, type_key, type_val)         \
> >         struct ____btf_map_##name {                             \
> >                 type_key key;                                   \
> > --
> > 2.25.1
> >

^ permalink raw reply	[flat|nested] 9+ messages in thread
