netdev.vger.kernel.org archive mirror
* [bpf-next v2 00/10] Test the 32bit narrow reads
@ 2019-06-25 19:42 Krzesimir Nowak
  2019-06-25 19:42 ` [bpf-next v2 01/10] selftests/bpf: Print a message when tester could not run a program Krzesimir Nowak
                   ` (9 more replies)
  0 siblings, 10 replies; 16+ messages in thread
From: Krzesimir Nowak @ 2019-06-25 19:42 UTC (permalink / raw)
  To: netdev
  Cc: Alban Crequy, Iago López Galeiras, Alexei Starovoitov,
	Daniel Borkmann, Martin KaFai Lau, Song Liu, Yonghong Song,
	linux-kernel, bpf, Krzesimir Nowak

These patches test the fix made in commit e2f7fc0ac695 ("bpf:
fix undefined behavior in narrow load handling"). The problem was in
generated BPF bytecode doing a 32bit narrow read of a 64bit field,
so to test the fix the code needs to be actually executed.
Currently the only such field exists in BPF_PROG_TYPE_PERF_EVENT,
which was not supported by bpf_prog_test_run().

I'm sending these patches to bpf-next now as they introduce a new
feature. But maybe some of those patches could go to the bpf branch?


There is a bit of yak shaving to do for the test to be run:

1. Print why the program could not be run (patch 1).

2. Some fixes for errno clobbering (patches 2 and 3).

3. Using bpf_prog_test_run_xattr, so I can pass ctx_in stuff too
   (patch 4).

4. Adding ctx stuff to struct bpf_test (patch 5).

5. Some tools headers syncing (patches 6 and 7).

6. Implement bpf_prog_test_run for perf event programs and test it
   (patches 8 and 9).


The last point is where I'm least sure how things should be done
properly:

1. There is a bunch of stuff to prepare for the
   bpf_perf_prog_read_value to work, and that stuff is very hacky. I
   would welcome some hints about how to set up the perf_event and
   perf_sample_data structs in a way that is a bit more future-proof
   than just setting some fields in a specific way, so some other code
   won't use some other fields (like setting event.oncpu to -1 to
   avoid event.pmu being used for reading the value of the event).

2. The tests try to see if the test run for perf event sets up the
   context properly, so they verify the struct pt_regs contents. The
   way it is currently written Works For Me, but surely it won't work
   on other architectures. So what would be the way forward? Just put
   the test case inside #ifdef __x86_64__?

3. Another thing in tests - I'm trying to make sure that the
   bpf_perf_prog_read_value helper works, as it seems to be the only
   bpf perf helper that takes the ctx as a parameter. Is that enough,
   or should I test other helpers too?


About the test itself - I'm not sure if it will work on a big endian
machine. I think it should, but I don't have anything handy here to
verify it.

Krzesimir Nowak (10):
  selftests/bpf: Print a message when tester could not run a program
  selftests/bpf: Avoid a clobbering of errno
  selftests/bpf: Avoid another case of errno clobbering
  selftests/bpf: Use bpf_prog_test_run_xattr
  selftests/bpf: Allow passing more information to BPF prog test run
  tools headers: Adopt compiletime_assert from kernel sources
  tools headers: sync struct bpf_perf_event_data
  bpf: Implement bpf_prog_test_run for perf event programs
  selftests/bpf: Add tests for bpf_prog_test_run for perf events progs
  selftests/bpf: Test correctness of narrow 32bit read on 64bit field

 kernel/trace/bpf_trace.c                      | 107 +++++++++++
 tools/include/linux/compiler.h                |  28 +++
 tools/include/uapi/linux/bpf_perf_event.h     |   1 +
 tools/testing/selftests/bpf/test_verifier.c   | 172 ++++++++++++++++--
 .../selftests/bpf/verifier/perf_event_run.c   |  93 ++++++++++
 .../bpf/verifier/perf_event_sample_period.c   |   8 +
 .../testing/selftests/bpf/verifier/var_off.c  |  20 ++
 7 files changed, 414 insertions(+), 15 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/verifier/perf_event_run.c

-- 
2.20.1


^ permalink raw reply	[flat|nested] 16+ messages in thread

* [bpf-next v2 01/10] selftests/bpf: Print a message when tester could not run a program
  2019-06-25 19:42 [bpf-next v2 00/10] Test the 32bit narrow reads Krzesimir Nowak
@ 2019-06-25 19:42 ` Krzesimir Nowak
  2019-06-25 19:42 ` [bpf-next v2 02/10] selftests/bpf: Avoid a clobbering of errno Krzesimir Nowak
                   ` (8 subsequent siblings)
  9 siblings, 0 replies; 16+ messages in thread
From: Krzesimir Nowak @ 2019-06-25 19:42 UTC (permalink / raw)
  To: netdev
  Cc: Alban Crequy, Iago López Galeiras, Alexei Starovoitov,
	Daniel Borkmann, Martin KaFai Lau, Song Liu, Yonghong Song,
	linux-kernel, bpf, Krzesimir Nowak

This prints a message when the error says that the program type is not
supported by the test runner or that there is a permissions problem.
This lets us see whether the program we expected to run was actually
executed.

The messages are open-coded because strerror(ENOTSUPP) returns
"Unknown error 524".

Signed-off-by: Krzesimir Nowak <krzesimir@kinvolk.io>
---
 tools/testing/selftests/bpf/test_verifier.c | 17 +++++++++++++----
 1 file changed, 13 insertions(+), 4 deletions(-)

diff --git a/tools/testing/selftests/bpf/test_verifier.c b/tools/testing/selftests/bpf/test_verifier.c
index c5514daf8865..9e17bda016ef 100644
--- a/tools/testing/selftests/bpf/test_verifier.c
+++ b/tools/testing/selftests/bpf/test_verifier.c
@@ -831,11 +831,20 @@ static int do_prog_test_run(int fd_prog, bool unpriv, uint32_t expected_val,
 				tmp, &size_tmp, &retval, NULL);
 	if (unpriv)
 		set_admin(false);
-	if (err && errno != 524/*ENOTSUPP*/ && errno != EPERM) {
-		printf("Unexpected bpf_prog_test_run error ");
-		return err;
+	if (err) {
+		switch (errno) {
+		case 524/*ENOTSUPP*/:
+			printf("Did not run the program (not supported) ");
+			return 0;
+		case EPERM:
+			printf("Did not run the program (no permission) ");
+			return 0;
+		default:
+			printf("Unexpected bpf_prog_test_run error (%s) ", strerror(errno));
+			return err;
+		}
 	}
-	if (!err && retval != expected_val &&
+	if (retval != expected_val &&
 	    expected_val != POINTER_VALUE) {
 		printf("FAIL retval %d != %d ", retval, expected_val);
 		return 1;
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [bpf-next v2 02/10] selftests/bpf: Avoid a clobbering of errno
  2019-06-25 19:42 [bpf-next v2 00/10] Test the 32bit narrow reads Krzesimir Nowak
  2019-06-25 19:42 ` [bpf-next v2 01/10] selftests/bpf: Print a message when tester could not run a program Krzesimir Nowak
@ 2019-06-25 19:42 ` Krzesimir Nowak
  2019-06-25 19:42 ` [bpf-next v2 03/10] selftests/bpf: Avoid another case of errno clobbering Krzesimir Nowak
                   ` (7 subsequent siblings)
  9 siblings, 0 replies; 16+ messages in thread
From: Krzesimir Nowak @ 2019-06-25 19:42 UTC (permalink / raw)
  To: netdev
  Cc: Alban Crequy, Iago López Galeiras, Alexei Starovoitov,
	Daniel Borkmann, Martin KaFai Lau, Song Liu, Yonghong Song,
	linux-kernel, bpf, Krzesimir Nowak

Save errno right after bpf_prog_test_run returns, so we later check
the error code actually set by bpf_prog_test_run, not by some libcap
function.

Cc: Daniel Borkmann <daniel@iogearbox.net>
Fixes: 832c6f2c29ec ("bpf: test make sure to run unpriv test cases in test_verifier")
Signed-off-by: Krzesimir Nowak <krzesimir@kinvolk.io>
---
 tools/testing/selftests/bpf/test_verifier.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/tools/testing/selftests/bpf/test_verifier.c b/tools/testing/selftests/bpf/test_verifier.c
index 9e17bda016ef..12589da13487 100644
--- a/tools/testing/selftests/bpf/test_verifier.c
+++ b/tools/testing/selftests/bpf/test_verifier.c
@@ -824,15 +824,17 @@ static int do_prog_test_run(int fd_prog, bool unpriv, uint32_t expected_val,
 	__u32 size_tmp = sizeof(tmp);
 	uint32_t retval;
 	int err;
+	int saved_errno;
 
 	if (unpriv)
 		set_admin(true);
 	err = bpf_prog_test_run(fd_prog, 1, data, size_data,
 				tmp, &size_tmp, &retval, NULL);
+	saved_errno = errno;
 	if (unpriv)
 		set_admin(false);
 	if (err) {
-		switch (errno) {
+		switch (saved_errno) {
 		case 524/*ENOTSUPP*/:
 			printf("Did not run the program (not supported) ");
 			return 0;
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [bpf-next v2 03/10] selftests/bpf: Avoid another case of errno clobbering
  2019-06-25 19:42 [bpf-next v2 00/10] Test the 32bit narrow reads Krzesimir Nowak
  2019-06-25 19:42 ` [bpf-next v2 01/10] selftests/bpf: Print a message when tester could not run a program Krzesimir Nowak
  2019-06-25 19:42 ` [bpf-next v2 02/10] selftests/bpf: Avoid a clobbering of errno Krzesimir Nowak
@ 2019-06-25 19:42 ` Krzesimir Nowak
  2019-06-25 20:08   ` Stanislav Fomichev
  2019-06-25 19:42 ` [bpf-next v2 04/10] selftests/bpf: Use bpf_prog_test_run_xattr Krzesimir Nowak
                   ` (6 subsequent siblings)
  9 siblings, 1 reply; 16+ messages in thread
From: Krzesimir Nowak @ 2019-06-25 19:42 UTC (permalink / raw)
  To: netdev
  Cc: Alban Crequy, Iago López Galeiras, Alexei Starovoitov,
	Daniel Borkmann, Martin KaFai Lau, Song Liu, Yonghong Song,
	linux-kernel, bpf, Krzesimir Nowak, Stanislav Fomichev

Commit 8184d44c9a57 ("selftests/bpf: skip verifier tests for
unsupported program types") added a check for an unsupported program
type. The function doing the check clobbers errno, so test_verifier
should save errno before calling it if it wants to print the reason
why verifying a BPF program of a supported type failed.

Fixes: 8184d44c9a57 ("selftests/bpf: skip verifier tests for unsupported program types")
Cc: Stanislav Fomichev <sdf@google.com>
Signed-off-by: Krzesimir Nowak <krzesimir@kinvolk.io>
---
 tools/testing/selftests/bpf/test_verifier.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/tools/testing/selftests/bpf/test_verifier.c b/tools/testing/selftests/bpf/test_verifier.c
index 12589da13487..779e30b96ded 100644
--- a/tools/testing/selftests/bpf/test_verifier.c
+++ b/tools/testing/selftests/bpf/test_verifier.c
@@ -867,6 +867,7 @@ static void do_test_single(struct bpf_test *test, bool unpriv,
 	int fixup_skips;
 	__u32 pflags;
 	int i, err;
+	int saved_errno;
 
 	for (i = 0; i < MAX_NR_MAPS; i++)
 		map_fds[i] = -1;
@@ -894,6 +895,7 @@ static void do_test_single(struct bpf_test *test, bool unpriv,
 		pflags |= BPF_F_ANY_ALIGNMENT;
 	fd_prog = bpf_verify_program(prog_type, prog, prog_len, pflags,
 				     "GPL", 0, bpf_vlog, sizeof(bpf_vlog), 4);
+	saved_errno = errno;
 	if (fd_prog < 0 && !bpf_probe_prog_type(prog_type, 0)) {
 		printf("SKIP (unsupported program type %d)\n", prog_type);
 		skips++;
@@ -910,7 +912,7 @@ static void do_test_single(struct bpf_test *test, bool unpriv,
 	if (expected_ret == ACCEPT) {
 		if (fd_prog < 0) {
 			printf("FAIL\nFailed to load prog '%s'!\n",
-			       strerror(errno));
+			       strerror(saved_errno));
 			goto fail_log;
 		}
 #ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [bpf-next v2 04/10] selftests/bpf: Use bpf_prog_test_run_xattr
  2019-06-25 19:42 [bpf-next v2 00/10] Test the 32bit narrow reads Krzesimir Nowak
                   ` (2 preceding siblings ...)
  2019-06-25 19:42 ` [bpf-next v2 03/10] selftests/bpf: Avoid another case of errno clobbering Krzesimir Nowak
@ 2019-06-25 19:42 ` Krzesimir Nowak
  2019-06-25 19:42 ` [bpf-next v2 05/10] selftests/bpf: Allow passing more information to BPF prog test run Krzesimir Nowak
                   ` (5 subsequent siblings)
  9 siblings, 0 replies; 16+ messages in thread
From: Krzesimir Nowak @ 2019-06-25 19:42 UTC (permalink / raw)
  To: netdev
  Cc: Alban Crequy, Iago López Galeiras, Alexei Starovoitov,
	Daniel Borkmann, Martin KaFai Lau, Song Liu, Yonghong Song,
	linux-kernel, bpf, Krzesimir Nowak

The bpf_prog_test_run_xattr function gives more options to set up a
test run of a BPF program than the bpf_prog_test_run function.

We will need this extra flexibility to pass ctx data later.

Signed-off-by: Krzesimir Nowak <krzesimir@kinvolk.io>
---
 tools/testing/selftests/bpf/test_verifier.c | 16 +++++++++++-----
 1 file changed, 11 insertions(+), 5 deletions(-)

diff --git a/tools/testing/selftests/bpf/test_verifier.c b/tools/testing/selftests/bpf/test_verifier.c
index 779e30b96ded..db1f0f758f81 100644
--- a/tools/testing/selftests/bpf/test_verifier.c
+++ b/tools/testing/selftests/bpf/test_verifier.c
@@ -822,14 +822,20 @@ static int do_prog_test_run(int fd_prog, bool unpriv, uint32_t expected_val,
 {
 	__u8 tmp[TEST_DATA_LEN << 2];
 	__u32 size_tmp = sizeof(tmp);
-	uint32_t retval;
 	int err;
 	int saved_errno;
+	struct bpf_prog_test_run_attr attr = {
+		.prog_fd = fd_prog,
+		.repeat = 1,
+		.data_in = data,
+		.data_size_in = size_data,
+		.data_out = tmp,
+		.data_size_out = size_tmp,
+	};
 
 	if (unpriv)
 		set_admin(true);
-	err = bpf_prog_test_run(fd_prog, 1, data, size_data,
-				tmp, &size_tmp, &retval, NULL);
+	err = bpf_prog_test_run_xattr(&attr);
 	saved_errno = errno;
 	if (unpriv)
 		set_admin(false);
@@ -846,9 +852,9 @@ static int do_prog_test_run(int fd_prog, bool unpriv, uint32_t expected_val,
 			return err;
 		}
 	}
-	if (retval != expected_val &&
+	if (attr.retval != expected_val &&
 	    expected_val != POINTER_VALUE) {
-		printf("FAIL retval %d != %d ", retval, expected_val);
+		printf("FAIL retval %d != %d ", attr.retval, expected_val);
 		return 1;
 	}
 
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [bpf-next v2 05/10] selftests/bpf: Allow passing more information to BPF prog test run
  2019-06-25 19:42 [bpf-next v2 00/10] Test the 32bit narrow reads Krzesimir Nowak
                   ` (3 preceding siblings ...)
  2019-06-25 19:42 ` [bpf-next v2 04/10] selftests/bpf: Use bpf_prog_test_run_xattr Krzesimir Nowak
@ 2019-06-25 19:42 ` Krzesimir Nowak
  2019-06-25 19:42 ` [bpf-next v2 06/10] tools headers: Adopt compiletime_assert from kernel sources Krzesimir Nowak
                   ` (4 subsequent siblings)
  9 siblings, 0 replies; 16+ messages in thread
From: Krzesimir Nowak @ 2019-06-25 19:42 UTC (permalink / raw)
  To: netdev
  Cc: Alban Crequy, Iago López Galeiras, Alexei Starovoitov,
	Daniel Borkmann, Martin KaFai Lau, Song Liu, Yonghong Song,
	linux-kernel, bpf, Krzesimir Nowak

The test case can specify a custom length of the data member, and
context data with its length, which will be passed to
bpf_prog_test_run_xattr. For backward compatibility, if the data
length is 0 (which is what happens when the field is left
unspecified in the designated initializer of a struct), then the
length passed to bpf_prog_test_run_xattr is TEST_DATA_LEN.

Also for backward compatibility, if context data length is 0, NULL is
passed as a context to bpf_prog_test_run_xattr. This is to avoid
breaking other tests, where context data being NULL and context data
length being 0 is handled differently from the case where context data
is not NULL and context data length is 0.

Custom lengths still can't be greater than hardcoded 64 bytes for data
and 192 for context data.

192 for context data was picked to allow passing struct
bpf_perf_event_data as a context for perf event programs. The struct
is quite large, because it contains struct pt_regs.

Signed-off-by: Krzesimir Nowak <krzesimir@kinvolk.io>
---
 tools/testing/selftests/bpf/test_verifier.c | 68 +++++++++++++++++++--
 1 file changed, 62 insertions(+), 6 deletions(-)

diff --git a/tools/testing/selftests/bpf/test_verifier.c b/tools/testing/selftests/bpf/test_verifier.c
index db1f0f758f81..05bad54f481f 100644
--- a/tools/testing/selftests/bpf/test_verifier.c
+++ b/tools/testing/selftests/bpf/test_verifier.c
@@ -54,6 +54,7 @@
 #define MAX_TEST_RUNS	8
 #define POINTER_VALUE	0xcafe4all
 #define TEST_DATA_LEN	64
+#define TEST_CTX_LEN	192
 
 #define F_NEEDS_EFFICIENT_UNALIGNED_ACCESS	(1 << 0)
 #define F_LOAD_WITH_STRICT_ALIGNMENT		(1 << 1)
@@ -96,6 +97,9 @@ struct bpf_test {
 	enum bpf_prog_type prog_type;
 	uint8_t flags;
 	__u8 data[TEST_DATA_LEN];
+	__u32 data_len;
+	__u8 ctx[TEST_CTX_LEN];
+	__u32 ctx_len;
 	void (*fill_helper)(struct bpf_test *self);
 	uint8_t runs;
 	struct {
@@ -104,6 +108,9 @@ struct bpf_test {
 			__u8 data[TEST_DATA_LEN];
 			__u64 data64[TEST_DATA_LEN / 8];
 		};
+		__u32 data_len;
+		__u8 ctx[TEST_CTX_LEN];
+		__u32 ctx_len;
 	} retvals[MAX_TEST_RUNS];
 };
 
@@ -818,7 +825,7 @@ static int set_admin(bool admin)
 }
 
 static int do_prog_test_run(int fd_prog, bool unpriv, uint32_t expected_val,
-			    void *data, size_t size_data)
+			    void *data, size_t size_data, void *ctx, size_t size_ctx)
 {
 	__u8 tmp[TEST_DATA_LEN << 2];
 	__u32 size_tmp = sizeof(tmp);
@@ -831,6 +838,8 @@ static int do_prog_test_run(int fd_prog, bool unpriv, uint32_t expected_val,
 		.data_size_in = size_data,
 		.data_out = tmp,
 		.data_size_out = size_tmp,
+		.ctx_in = ctx,
+		.ctx_size_in = size_ctx,
 	};
 
 	if (unpriv)
@@ -956,13 +965,39 @@ static void do_test_single(struct bpf_test *test, bool unpriv,
 	if (!alignment_prevented_execution && fd_prog >= 0) {
 		uint32_t expected_val;
 		int i;
+		__u32 size_data;
+		__u32 size_ctx;
+		bool bad_size;
+		void *ctx;
 
 		if (!test->runs) {
+			if (test->data_len > 0)
+				size_data = test->data_len;
+			else
+				size_data = sizeof(test->data);
+			size_ctx = test->ctx_len;
+			bad_size = false;
 			expected_val = unpriv && test->retval_unpriv ?
 				test->retval_unpriv : test->retval;
 
-			err = do_prog_test_run(fd_prog, unpriv, expected_val,
-					       test->data, sizeof(test->data));
+			if (size_data > sizeof(test->data)) {
+				printf("FAIL: data size (%u) greater than TEST_DATA_LEN (%lu) ", size_data, sizeof(test->data));
+				bad_size = true;
+			}
+			if (size_ctx > sizeof(test->ctx)) {
+				printf("FAIL: ctx size (%u) greater than TEST_CTX_LEN (%lu) ", size_ctx, sizeof(test->ctx));
+				bad_size = true;
+			}
+			if (size_ctx)
+				ctx = test->ctx;
+			else
+				ctx = NULL;
+			if (bad_size)
+				err = 1;
+			else
+				err = do_prog_test_run(fd_prog, unpriv, expected_val,
+						       test->data, size_data,
+						       ctx, size_ctx);
 			if (err)
 				run_errs++;
 			else
@@ -970,14 +1005,35 @@ static void do_test_single(struct bpf_test *test, bool unpriv,
 		}
 
 		for (i = 0; i < test->runs; i++) {
+			if (test->retvals[i].data_len > 0)
+				size_data = test->retvals[i].data_len;
+			else
+				size_data = sizeof(test->retvals[i].data);
+			size_ctx = test->retvals[i].ctx_len;
+			bad_size = false;
 			if (unpriv && test->retvals[i].retval_unpriv)
 				expected_val = test->retvals[i].retval_unpriv;
 			else
 				expected_val = test->retvals[i].retval;
 
-			err = do_prog_test_run(fd_prog, unpriv, expected_val,
-					       test->retvals[i].data,
-					       sizeof(test->retvals[i].data));
+			if (size_data > sizeof(test->retvals[i].data)) {
+				printf("FAIL: data size (%u) at run %i greater than TEST_DATA_LEN (%lu) ", size_data, i + 1, sizeof(test->retvals[i].data));
+				bad_size = true;
+			}
+			if (size_ctx > sizeof(test->retvals[i].ctx)) {
+				printf("FAIL: ctx size (%u) at run %i greater than TEST_CTX_LEN (%lu) ", size_ctx, i + 1, sizeof(test->retvals[i].ctx));
+				bad_size = true;
+			}
+			if (size_ctx)
+				ctx = test->retvals[i].ctx;
+			else
+				ctx = NULL;
+			if (bad_size)
+				err = 1;
+			else
+				err = do_prog_test_run(fd_prog, unpriv, expected_val,
+						       test->retvals[i].data, size_data,
+						       ctx, size_ctx);
 			if (err) {
 				printf("(run %d/%d) ", i + 1, test->runs);
 				run_errs++;
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [bpf-next v2 06/10] tools headers: Adopt compiletime_assert from kernel sources
  2019-06-25 19:42 [bpf-next v2 00/10] Test the 32bit narrow reads Krzesimir Nowak
                   ` (4 preceding siblings ...)
  2019-06-25 19:42 ` [bpf-next v2 05/10] selftests/bpf: Allow passing more information to BPF prog test run Krzesimir Nowak
@ 2019-06-25 19:42 ` Krzesimir Nowak
  2019-06-25 19:42 ` [bpf-next v2 07/10] tools headers: sync struct bpf_perf_event_data Krzesimir Nowak
                   ` (3 subsequent siblings)
  9 siblings, 0 replies; 16+ messages in thread
From: Krzesimir Nowak @ 2019-06-25 19:42 UTC (permalink / raw)
  To: netdev
  Cc: Alban Crequy, Iago López Galeiras, Alexei Starovoitov,
	Daniel Borkmann, Martin KaFai Lau, Song Liu, Yonghong Song,
	linux-kernel, bpf, Krzesimir Nowak

This will come in handy for verifying that the hardcoded size of the
context data in struct bpf_test is large enough to hold certain structs.

Signed-off-by: Krzesimir Nowak <krzesimir@kinvolk.io>
---
 tools/include/linux/compiler.h | 28 ++++++++++++++++++++++++++++
 1 file changed, 28 insertions(+)

diff --git a/tools/include/linux/compiler.h b/tools/include/linux/compiler.h
index 1827c2f973f9..b4e97751000a 100644
--- a/tools/include/linux/compiler.h
+++ b/tools/include/linux/compiler.h
@@ -172,4 +172,32 @@ static __always_inline void __write_once_size(volatile void *p, void *res, int s
 # define __fallthrough
 #endif
 
+
+#ifdef __OPTIMIZE__
+# define __compiletime_assert(condition, msg, prefix, suffix)		\
+	do {								\
+		extern void prefix ## suffix(void) __compiletime_error(msg); \
+		if (!(condition))					\
+			prefix ## suffix();				\
+	} while (0)
+#else
+# define __compiletime_assert(condition, msg, prefix, suffix) do { } while (0)
+#endif
+
+#define _compiletime_assert(condition, msg, prefix, suffix) \
+	__compiletime_assert(condition, msg, prefix, suffix)
+
+/**
+ * compiletime_assert - break build and emit msg if condition is false
+ * @condition: a compile-time constant condition to check
+ * @msg:       a message to emit if condition is false
+ *
+ * In tradition of POSIX assert, this macro will break the build if the
+ * supplied condition is *false*, emitting the supplied error message if the
+ * compiler has support to do so.
+ */
+#define compiletime_assert(condition, msg) \
+	_compiletime_assert(condition, msg, __compiletime_assert_, __LINE__)
+
+
 #endif /* _TOOLS_LINUX_COMPILER_H */
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [bpf-next v2 07/10] tools headers: sync struct bpf_perf_event_data
  2019-06-25 19:42 [bpf-next v2 00/10] Test the 32bit narrow reads Krzesimir Nowak
                   ` (5 preceding siblings ...)
  2019-06-25 19:42 ` [bpf-next v2 06/10] tools headers: Adopt compiletime_assert from kernel sources Krzesimir Nowak
@ 2019-06-25 19:42 ` Krzesimir Nowak
  2019-06-25 19:42 ` [bpf-next v2 08/10] bpf: Implement bpf_prog_test_run for perf event programs Krzesimir Nowak
                   ` (2 subsequent siblings)
  9 siblings, 0 replies; 16+ messages in thread
From: Krzesimir Nowak @ 2019-06-25 19:42 UTC (permalink / raw)
  To: netdev
  Cc: Alban Crequy, Iago López Galeiras, Alexei Starovoitov,
	Daniel Borkmann, Martin KaFai Lau, Song Liu, Yonghong Song,
	linux-kernel, bpf, Krzesimir Nowak

struct bpf_perf_event_data in the kernel headers has the addr field,
which is missing in the tools version of the struct. This will be
important for the bpf prog test run implementation for perf events, as
it will expect the ctx data to be an instance of struct
bpf_perf_event_data, so the size of that data needs to match
sizeof(struct bpf_perf_event_data).

Signed-off-by: Krzesimir Nowak <krzesimir@kinvolk.io>
---
 tools/include/uapi/linux/bpf_perf_event.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/tools/include/uapi/linux/bpf_perf_event.h b/tools/include/uapi/linux/bpf_perf_event.h
index 8f95303f9d80..eb1b9d21250c 100644
--- a/tools/include/uapi/linux/bpf_perf_event.h
+++ b/tools/include/uapi/linux/bpf_perf_event.h
@@ -13,6 +13,7 @@
 struct bpf_perf_event_data {
 	bpf_user_pt_regs_t regs;
 	__u64 sample_period;
+	__u64 addr;
 };
 
 #endif /* _UAPI__LINUX_BPF_PERF_EVENT_H__ */
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [bpf-next v2 08/10] bpf: Implement bpf_prog_test_run for perf event programs
  2019-06-25 19:42 [bpf-next v2 00/10] Test the 32bit narrow reads Krzesimir Nowak
                   ` (6 preceding siblings ...)
  2019-06-25 19:42 ` [bpf-next v2 07/10] tools headers: sync struct bpf_perf_event_data Krzesimir Nowak
@ 2019-06-25 19:42 ` Krzesimir Nowak
  2019-06-25 20:12   ` Stanislav Fomichev
  2019-06-25 19:42 ` [bpf-next v2 09/10] selftests/bpf: Add tests for bpf_prog_test_run for perf events progs Krzesimir Nowak
  2019-06-25 19:42 ` [bpf-next v2 10/10] selftests/bpf: Test correctness of narrow 32bit read on 64bit field Krzesimir Nowak
  9 siblings, 1 reply; 16+ messages in thread
From: Krzesimir Nowak @ 2019-06-25 19:42 UTC (permalink / raw)
  To: netdev
  Cc: Alban Crequy, Iago López Galeiras, Alexei Starovoitov,
	Daniel Borkmann, Martin KaFai Lau, Song Liu, Yonghong Song,
	linux-kernel, bpf, Krzesimir Nowak

As input, a test run for a perf event program takes struct
bpf_perf_event_data as ctx_in and struct bpf_perf_event_value as
data_in. As output, it basically ignores ctx_out and data_out.

The implementation sets up an instance of struct
bpf_perf_event_data_kern in such a way that a BPF program reading data
from the context will receive what we passed to the bpf prog test run
in ctx_in. The BPF program can also call bpf_perf_prog_read_value to
receive what was passed in data_in.

Signed-off-by: Krzesimir Nowak <krzesimir@kinvolk.io>
---
 kernel/trace/bpf_trace.c                      | 107 ++++++++++++++++++
 .../bpf/verifier/perf_event_sample_period.c   |   8 ++
 2 files changed, 115 insertions(+)

diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index c102c240bb0b..2fa49ea8a475 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -16,6 +16,8 @@
 
 #include <asm/tlb.h>
 
+#include <trace/events/bpf_test_run.h>
+
 #include "trace_probe.h"
 #include "trace.h"
 
@@ -1160,7 +1162,112 @@ const struct bpf_verifier_ops perf_event_verifier_ops = {
 	.convert_ctx_access	= pe_prog_convert_ctx_access,
 };
 
+static int pe_prog_test_run(struct bpf_prog *prog,
+			    const union bpf_attr *kattr,
+			    union bpf_attr __user *uattr)
+{
+	void __user *ctx_in = u64_to_user_ptr(kattr->test.ctx_in);
+	void __user *data_in = u64_to_user_ptr(kattr->test.data_in);
+	u32 data_size_in = kattr->test.data_size_in;
+	u32 ctx_size_in = kattr->test.ctx_size_in;
+	u32 repeat = kattr->test.repeat;
+	u32 retval = 0, duration = 0;
+	int err = -EINVAL;
+	u64 time_start, time_spent = 0;
+	int i;
+	struct perf_sample_data sample_data = {0, };
+	struct perf_event event = {0, };
+	struct bpf_perf_event_data_kern real_ctx = {0, };
+	struct bpf_perf_event_data fake_ctx = {0, };
+	struct bpf_perf_event_value value = {0, };
+
+	if (ctx_size_in != sizeof(fake_ctx))
+		goto out;
+	if (data_size_in != sizeof(value))
+		goto out;
+
+	if (copy_from_user(&fake_ctx, ctx_in, ctx_size_in)) {
+		err = -EFAULT;
+		goto out;
+	}
+	if (copy_from_user(&value, data_in, data_size_in)) {
+		err = -EFAULT;
+		goto out;
+	}
+
+	real_ctx.regs = &fake_ctx.regs;
+	real_ctx.data = &sample_data;
+	real_ctx.event = &event;
+	perf_sample_data_init(&sample_data, fake_ctx.addr,
+			      fake_ctx.sample_period);
+	event.cpu = smp_processor_id();
+	event.oncpu = -1;
+	event.state = PERF_EVENT_STATE_OFF;
+	local64_set(&event.count, value.counter);
+	event.total_time_enabled = value.enabled;
+	event.total_time_running = value.running;
+	/* make self as a leader - it is used only for checking the
+	 * state field
+	 */
+	event.group_leader = &event;
+
+	/* slightly changed copy pasta from bpf_test_run() in
+	 * net/bpf/test_run.c
+	 */
+	if (!repeat)
+		repeat = 1;
+
+	rcu_read_lock();
+	preempt_disable();
+	time_start = ktime_get_ns();
+	for (i = 0; i < repeat; i++) {
+		retval = BPF_PROG_RUN(prog, &real_ctx);
+
+		if (signal_pending(current)) {
+			err = -EINTR;
+			preempt_enable();
+			rcu_read_unlock();
+			goto out;
+		}
+
+		if (need_resched()) {
+			time_spent += ktime_get_ns() - time_start;
+			preempt_enable();
+			rcu_read_unlock();
+
+			cond_resched();
+
+			rcu_read_lock();
+			preempt_disable();
+			time_start = ktime_get_ns();
+		}
+	}
+	time_spent += ktime_get_ns() - time_start;
+	preempt_enable();
+	rcu_read_unlock();
+
+	do_div(time_spent, repeat);
+	duration = time_spent > U32_MAX ? U32_MAX : (u32)time_spent;
+	/* end of slightly changed copy pasta from bpf_test_run() in
+	 * net/bpf/test_run.c
+	 */
+
+	if (copy_to_user(&uattr->test.retval, &retval, sizeof(retval))) {
+		err = -EFAULT;
+		goto out;
+	}
+	if (copy_to_user(&uattr->test.duration, &duration, sizeof(duration))) {
+		err = -EFAULT;
+		goto out;
+	}
+	err = 0;
+out:
+	trace_bpf_test_finish(&err);
+	return err;
+}
+
 const struct bpf_prog_ops perf_event_prog_ops = {
+	.test_run	= pe_prog_test_run,
 };
 
 static DEFINE_MUTEX(bpf_event_mutex);
diff --git a/tools/testing/selftests/bpf/verifier/perf_event_sample_period.c b/tools/testing/selftests/bpf/verifier/perf_event_sample_period.c
index 471c1a5950d8..16e9e5824d14 100644
--- a/tools/testing/selftests/bpf/verifier/perf_event_sample_period.c
+++ b/tools/testing/selftests/bpf/verifier/perf_event_sample_period.c
@@ -13,6 +13,8 @@
 	},
 	.result = ACCEPT,
 	.prog_type = BPF_PROG_TYPE_PERF_EVENT,
+	.ctx_len = sizeof(struct bpf_perf_event_data),
+	.data_len = sizeof(struct bpf_perf_event_value),
 },
 {
 	"check bpf_perf_event_data->sample_period half load permitted",
@@ -29,6 +31,8 @@
 	},
 	.result = ACCEPT,
 	.prog_type = BPF_PROG_TYPE_PERF_EVENT,
+	.ctx_len = sizeof(struct bpf_perf_event_data),
+	.data_len = sizeof(struct bpf_perf_event_value),
 },
 {
 	"check bpf_perf_event_data->sample_period word load permitted",
@@ -45,6 +49,8 @@
 	},
 	.result = ACCEPT,
 	.prog_type = BPF_PROG_TYPE_PERF_EVENT,
+	.ctx_len = sizeof(struct bpf_perf_event_data),
+	.data_len = sizeof(struct bpf_perf_event_value),
 },
 {
 	"check bpf_perf_event_data->sample_period dword load permitted",
@@ -56,4 +62,6 @@
 	},
 	.result = ACCEPT,
 	.prog_type = BPF_PROG_TYPE_PERF_EVENT,
+	.ctx_len = sizeof(struct bpf_perf_event_data),
+	.data_len = sizeof(struct bpf_perf_event_value),
 },
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [bpf-next v2 09/10] selftests/bpf: Add tests for bpf_prog_test_run for perf events progs
  2019-06-25 19:42 [bpf-next v2 00/10] Test the 32bit narrow reads Krzesimir Nowak
                   ` (7 preceding siblings ...)
  2019-06-25 19:42 ` [bpf-next v2 08/10] bpf: Implement bpf_prog_test_run for perf event programs Krzesimir Nowak
@ 2019-06-25 19:42 ` Krzesimir Nowak
  2019-06-25 19:42 ` [bpf-next v2 10/10] selftests/bpf: Test correctness of narrow 32bit read on 64bit field Krzesimir Nowak
  9 siblings, 0 replies; 16+ messages in thread
From: Krzesimir Nowak @ 2019-06-25 19:42 UTC (permalink / raw)
  To: netdev
  Cc: Alban Crequy, Iago López Galeiras, Alexei Starovoitov,
	Daniel Borkmann, Martin KaFai Lau, Song Liu, Yonghong Song,
	linux-kernel, bpf, Krzesimir Nowak

The tests check whether ctx and data are correctly prepared from
ctx_in and data_in, so that accessing the ctx and using the
bpf_perf_prog_read_value helper work as expected.

Signed-off-by: Krzesimir Nowak <krzesimir@kinvolk.io>
---
 tools/testing/selftests/bpf/test_verifier.c   | 48 ++++++++++
 .../selftests/bpf/verifier/perf_event_run.c   | 93 +++++++++++++++++++
 2 files changed, 141 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/verifier/perf_event_run.c

diff --git a/tools/testing/selftests/bpf/test_verifier.c b/tools/testing/selftests/bpf/test_verifier.c
index 05bad54f481f..6fa962014b64 100644
--- a/tools/testing/selftests/bpf/test_verifier.c
+++ b/tools/testing/selftests/bpf/test_verifier.c
@@ -293,6 +293,54 @@ static void bpf_fill_scale(struct bpf_test *self)
 	}
 }
 
+static void bpf_fill_perf_event_test_run_check(struct bpf_test *self)
+{
+	compiletime_assert(
+		sizeof(struct bpf_perf_event_data) <= TEST_CTX_LEN,
+		"buffer for ctx is too short to fit struct bpf_perf_event_data");
+	compiletime_assert(
+		sizeof(struct bpf_perf_event_value) <= TEST_DATA_LEN,
+		"buffer for data is too short to fit struct bpf_perf_event_value");
+
+	struct bpf_perf_event_data ctx = {
+		.regs = (bpf_user_pt_regs_t) {
+			.r15 = 1,
+			.r14 = 2,
+			.r13 = 3,
+			.r12 = 4,
+			.rbp = 5,
+			.rbx = 6,
+			.r11 = 7,
+			.r10 = 8,
+			.r9 = 9,
+			.r8 = 10,
+			.rax = 11,
+			.rcx = 12,
+			.rdx = 13,
+			.rsi = 14,
+			.rdi = 15,
+			.orig_rax = 16,
+			.rip = 17,
+			.cs = 18,
+			.eflags = 19,
+			.rsp = 20,
+			.ss = 21,
+		},
+		.sample_period = 1,
+		.addr = 2,
+	};
+	struct bpf_perf_event_value data = {
+		.counter = 1,
+		.enabled = 2,
+		.running = 3,
+	};
+
+	memcpy(self->ctx, &ctx, sizeof(ctx));
+	memcpy(self->data, &data, sizeof(data));
+	free(self->fill_insns);
+	self->fill_insns = NULL;
+}
+
 /* BPF_SK_LOOKUP contains 13 instructions, if you need to fix up maps */
 #define BPF_SK_LOOKUP(func)						\
 	/* struct bpf_sock_tuple tuple = {} */				\
diff --git a/tools/testing/selftests/bpf/verifier/perf_event_run.c b/tools/testing/selftests/bpf/verifier/perf_event_run.c
new file mode 100644
index 000000000000..d451932a6fc0
--- /dev/null
+++ b/tools/testing/selftests/bpf/verifier/perf_event_run.c
@@ -0,0 +1,93 @@
+#define PER_LOAD_AND_CHECK_PTREG(PT_REG_FIELD, VALUE)			\
+	PER_LOAD_AND_CHECK_CTX(offsetof(bpf_user_pt_regs_t, PT_REG_FIELD), VALUE)
+#define PER_LOAD_AND_CHECK_EVENT(PED_FIELD, VALUE)			\
+	PER_LOAD_AND_CHECK_CTX(offsetof(struct bpf_perf_event_data, PED_FIELD), VALUE)
+#define PER_LOAD_AND_CHECK_CTX(OFFSET, VALUE)				\
+	PER_LOAD_AND_CHECK_64(BPF_REG_4, BPF_REG_1, OFFSET, VALUE)
+#define PER_LOAD_AND_CHECK_VALUE(PEV_FIELD, VALUE)			\
+	PER_LOAD_AND_CHECK_64(BPF_REG_7, BPF_REG_6, offsetof(struct bpf_perf_event_value, PEV_FIELD), VALUE)
+#define PER_LOAD_AND_CHECK_64(DST, SRC, OFFSET, VALUE)			\
+	BPF_LDX_MEM(BPF_DW, DST, SRC, OFFSET),				\
+	BPF_JMP_IMM(BPF_JEQ, DST, VALUE, 2),				\
+	BPF_MOV64_IMM(BPF_REG_0, VALUE),				\
+	BPF_EXIT_INSN()
+
+{
+	"check if regs contain expected values",
+	.insns = {
+	PER_LOAD_AND_CHECK_PTREG(r15, 1),
+	PER_LOAD_AND_CHECK_PTREG(r14, 2),
+	PER_LOAD_AND_CHECK_PTREG(r13, 3),
+	PER_LOAD_AND_CHECK_PTREG(r12, 4),
+	PER_LOAD_AND_CHECK_PTREG(rbp, 5),
+	PER_LOAD_AND_CHECK_PTREG(rbx, 6),
+	PER_LOAD_AND_CHECK_PTREG(r11, 7),
+	PER_LOAD_AND_CHECK_PTREG(r10, 8),
+	PER_LOAD_AND_CHECK_PTREG(r9, 9),
+	PER_LOAD_AND_CHECK_PTREG(r8, 10),
+	PER_LOAD_AND_CHECK_PTREG(rax, 11),
+	PER_LOAD_AND_CHECK_PTREG(rcx, 12),
+	PER_LOAD_AND_CHECK_PTREG(rdx, 13),
+	PER_LOAD_AND_CHECK_PTREG(rsi, 14),
+	PER_LOAD_AND_CHECK_PTREG(rdi, 15),
+	PER_LOAD_AND_CHECK_PTREG(orig_rax, 16),
+	PER_LOAD_AND_CHECK_PTREG(rip, 17),
+	PER_LOAD_AND_CHECK_PTREG(cs, 18),
+	PER_LOAD_AND_CHECK_PTREG(eflags, 19),
+	PER_LOAD_AND_CHECK_PTREG(rsp, 20),
+	PER_LOAD_AND_CHECK_PTREG(ss, 21),
+	BPF_MOV64_IMM(BPF_REG_0, 0),
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+	.prog_type = BPF_PROG_TYPE_PERF_EVENT,
+	.ctx_len = sizeof(struct bpf_perf_event_data),
+	.data_len = sizeof(struct bpf_perf_event_value),
+	.fill_helper = bpf_fill_perf_event_test_run_check,
+},
+{
+	"check if sample period and addr contain expected values",
+	.insns = {
+	PER_LOAD_AND_CHECK_EVENT(sample_period, 1),
+	PER_LOAD_AND_CHECK_EVENT(addr, 2),
+	BPF_MOV64_IMM(BPF_REG_0, 0),
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+	.prog_type = BPF_PROG_TYPE_PERF_EVENT,
+	.ctx_len = sizeof(struct bpf_perf_event_data),
+	.data_len = sizeof(struct bpf_perf_event_value),
+	.fill_helper = bpf_fill_perf_event_test_run_check,
+},
+{
+	"check if bpf_perf_prog_read_value returns expected data",
+	.insns = {
+	// allocate space for a struct bpf_perf_event_value
+	BPF_MOV64_REG(BPF_REG_6, BPF_REG_10),
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, -(int)sizeof(struct bpf_perf_event_value)),
+	// prepare parameters for bpf_perf_prog_read_value(ctx, struct bpf_perf_event_value*, u32)
+	// BPF_REG_1 already contains the context
+	BPF_MOV64_REG(BPF_REG_2, BPF_REG_6),
+	BPF_MOV64_IMM(BPF_REG_3, sizeof(struct bpf_perf_event_value)),
+	BPF_EMIT_CALL(BPF_FUNC_perf_prog_read_value),
+	// check the return value
+	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
+	BPF_EXIT_INSN(),
+	// check if the fields match the expected values
+	PER_LOAD_AND_CHECK_VALUE(counter, 1),
+	PER_LOAD_AND_CHECK_VALUE(enabled, 2),
+	PER_LOAD_AND_CHECK_VALUE(running, 3),
+	BPF_MOV64_IMM(BPF_REG_0, 0),
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+	.prog_type = BPF_PROG_TYPE_PERF_EVENT,
+	.ctx_len = sizeof(struct bpf_perf_event_data),
+	.data_len = sizeof(struct bpf_perf_event_value),
+	.fill_helper = bpf_fill_perf_event_test_run_check,
+},
+#undef PER_LOAD_AND_CHECK_64
+#undef PER_LOAD_AND_CHECK_VALUE
+#undef PER_LOAD_AND_CHECK_CTX
+#undef PER_LOAD_AND_CHECK_EVENT
+#undef PER_LOAD_AND_CHECK_PTREG
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [bpf-next v2 10/10] selftests/bpf: Test correctness of narrow 32bit read on 64bit field
  2019-06-25 19:42 [bpf-next v2 00/10] Test the 32bit narrow reads Krzesimir Nowak
                   ` (8 preceding siblings ...)
  2019-06-25 19:42 ` [bpf-next v2 09/10] selftests/bpf: Add tests for bpf_prog_test_run for perf events progs Krzesimir Nowak
@ 2019-06-25 19:42 ` Krzesimir Nowak
  9 siblings, 0 replies; 16+ messages in thread
From: Krzesimir Nowak @ 2019-06-25 19:42 UTC (permalink / raw)
  To: netdev
  Cc: Alban Crequy, Iago López Galeiras, Alexei Starovoitov,
	Daniel Borkmann, Martin KaFai Lau, Song Liu, Yonghong Song,
	linux-kernel, bpf, Krzesimir Nowak

Test the correctness of 32bit narrow reads by reading both halves of
a 64bit field and combining them with a shift and a bitwise OR to see
if we get the original value.

It succeeds as it should, but with commit e2f7fc0ac695 ("bpf: fix
undefined behavior in narrow load handling") reverted, the test fails
with the following message:

> $ sudo ./test_verifier
> ...
> #967/p 32bit loads of a 64bit field (both least and most significant words) FAIL retval -1985229329 != 0
> verification time 17 usec
> stack depth 0
> processed 8 insns (limit 1000000) max_states_per_insn 0 total_states 0 peak_states 0 mark_read 0
> ...
> Summary: 1519 PASSED, 0 SKIPPED, 1 FAILED

Signed-off-by: Krzesimir Nowak <krzesimir@kinvolk.io>
---
 tools/testing/selftests/bpf/test_verifier.c   | 19 ++++++++++++++++++
 .../testing/selftests/bpf/verifier/var_off.c  | 20 +++++++++++++++++++
 2 files changed, 39 insertions(+)

diff --git a/tools/testing/selftests/bpf/test_verifier.c b/tools/testing/selftests/bpf/test_verifier.c
index 6fa962014b64..444c1ea1e326 100644
--- a/tools/testing/selftests/bpf/test_verifier.c
+++ b/tools/testing/selftests/bpf/test_verifier.c
@@ -24,6 +24,7 @@
 
 #include <sys/capability.h>
 
+#include <linux/compiler.h>
 #include <linux/unistd.h>
 #include <linux/filter.h>
 #include <linux/bpf_perf_event.h>
@@ -341,6 +342,24 @@ static void bpf_fill_perf_event_test_run_check(struct bpf_test *self)
 	self->fill_insns = NULL;
 }
 
+static void bpf_fill_32bit_loads(struct bpf_test *self)
+{
+	compiletime_assert(
+		sizeof(struct bpf_perf_event_data) <= TEST_CTX_LEN,
+		"buffer for ctx is too short to fit struct bpf_perf_event_data");
+	compiletime_assert(
+		sizeof(struct bpf_perf_event_value) <= TEST_DATA_LEN,
+		"buffer for data is too short to fit struct bpf_perf_event_value");
+
+	struct bpf_perf_event_data ctx = {
+		.sample_period = 0x0123456789abcdef,
+	};
+
+	memcpy(self->ctx, &ctx, sizeof(ctx));
+	free(self->fill_insns);
+	self->fill_insns = NULL;
+}
+
 /* BPF_SK_LOOKUP contains 13 instructions, if you need to fix up maps */
 #define BPF_SK_LOOKUP(func)						\
 	/* struct bpf_sock_tuple tuple = {} */				\
diff --git a/tools/testing/selftests/bpf/verifier/var_off.c b/tools/testing/selftests/bpf/verifier/var_off.c
index 8504ac937809..14d222f37081 100644
--- a/tools/testing/selftests/bpf/verifier/var_off.c
+++ b/tools/testing/selftests/bpf/verifier/var_off.c
@@ -246,3 +246,23 @@
 	.result = ACCEPT,
 	.prog_type = BPF_PROG_TYPE_LWT_IN,
 },
+{
+	"32bit loads of a 64bit field (both least and most significant words)",
+	.insns = {
+	BPF_LDX_MEM(BPF_W, BPF_REG_4, BPF_REG_1, offsetof(struct bpf_perf_event_data, sample_period)),
+	BPF_LDX_MEM(BPF_W, BPF_REG_5, BPF_REG_1, offsetof(struct bpf_perf_event_data, sample_period) + 4),
+	BPF_LDX_MEM(BPF_DW, BPF_REG_6, BPF_REG_1, offsetof(struct bpf_perf_event_data, sample_period)),
+	BPF_ALU64_IMM(BPF_LSH, BPF_REG_5, 32),
+	BPF_ALU64_REG(BPF_OR, BPF_REG_4, BPF_REG_5),
+	BPF_ALU64_REG(BPF_XOR, BPF_REG_4, BPF_REG_6),
+	BPF_MOV64_REG(BPF_REG_0, BPF_REG_4),
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+	.prog_type = BPF_PROG_TYPE_PERF_EVENT,
+	.ctx = { 0, },
+	.ctx_len = sizeof(struct bpf_perf_event_data),
+	.data = { 0, },
+	.data_len = sizeof(struct bpf_perf_event_value),
+	.fill_helper = bpf_fill_32bit_loads,
+},
-- 
2.20.1



* Re: [bpf-next v2 03/10] selftests/bpf: Avoid another case of errno clobbering
  2019-06-25 19:42 ` [bpf-next v2 03/10] selftests/bpf: Avoid another case of errno clobbering Krzesimir Nowak
@ 2019-06-25 20:08   ` Stanislav Fomichev
  0 siblings, 0 replies; 16+ messages in thread
From: Stanislav Fomichev @ 2019-06-25 20:08 UTC (permalink / raw)
  To: Krzesimir Nowak
  Cc: netdev, Alban Crequy, Iago López Galeiras,
	Alexei Starovoitov, Daniel Borkmann, Martin KaFai Lau, Song Liu,
	Yonghong Song, linux-kernel, bpf, Stanislav Fomichev

On 06/25, Krzesimir Nowak wrote:
> Commit 8184d44c9a57 ("selftests/bpf: skip verifier tests for
> unsupported program types") added a check for an unsupported program
> type. The function doing it changes errno, so test_verifier should
> save it before calling it if test_verifier wants to print a reason why
> verifying a BPF program of a supported type failed.
> 
> Fixes: 8184d44c9a57 ("selftests/bpf: skip verifier tests for unsupported program types")
> Cc: Stanislav Fomichev <sdf@google.com>
> Signed-off-by: Krzesimir Nowak <krzesimir@kinvolk.io>
> ---
>  tools/testing/selftests/bpf/test_verifier.c | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
> 
> diff --git a/tools/testing/selftests/bpf/test_verifier.c b/tools/testing/selftests/bpf/test_verifier.c
> index 12589da13487..779e30b96ded 100644
> --- a/tools/testing/selftests/bpf/test_verifier.c
> +++ b/tools/testing/selftests/bpf/test_verifier.c
> @@ -867,6 +867,7 @@ static void do_test_single(struct bpf_test *test, bool unpriv,
>  	int fixup_skips;
>  	__u32 pflags;
>  	int i, err;
> +	int saved_errno;
Reverse Christmas tree. Otherwise LGTM.

>  
>  	for (i = 0; i < MAX_NR_MAPS; i++)
>  		map_fds[i] = -1;
> @@ -894,6 +895,7 @@ static void do_test_single(struct bpf_test *test, bool unpriv,
>  		pflags |= BPF_F_ANY_ALIGNMENT;
>  	fd_prog = bpf_verify_program(prog_type, prog, prog_len, pflags,
>  				     "GPL", 0, bpf_vlog, sizeof(bpf_vlog), 4);
> +	saved_errno = errno;
>  	if (fd_prog < 0 && !bpf_probe_prog_type(prog_type, 0)) {
>  		printf("SKIP (unsupported program type %d)\n", prog_type);
>  		skips++;
> @@ -910,7 +912,7 @@ static void do_test_single(struct bpf_test *test, bool unpriv,
>  	if (expected_ret == ACCEPT) {
>  		if (fd_prog < 0) {
>  			printf("FAIL\nFailed to load prog '%s'!\n",
> -			       strerror(errno));
> +			       strerror(saved_errno));
>  			goto fail_log;
>  		}
>  #ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
> -- 
> 2.20.1
> 
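The clobbering this patch guards against can be modelled in userspace
(probe_feature() below is a hypothetical stand-in for
bpf_probe_prog_type(), whose internal syscalls can overwrite errno):

```c
#include <assert.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>

/* hypothetical stand-in for bpf_probe_prog_type(), which may
 * reset errno as a side effect of its own syscalls
 */
static int probe_feature(void)
{
	errno = 0;
	return 1;
}

int main(void)
{
	errno = EACCES;			/* pretend bpf_verify_program() just failed */
	int saved_errno = errno;	/* save before any call that may clobber it */

	(void)probe_feature();		/* errno is now 0 */

	assert(errno != EACCES);	/* the original error is gone... */
	assert(saved_errno == EACCES);	/* ...but the saved copy survives */
	printf("load failed: %s\n", strerror(saved_errno));
	return 0;
}
```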


* Re: [bpf-next v2 08/10] bpf: Implement bpf_prog_test_run for perf event programs
  2019-06-25 19:42 ` [bpf-next v2 08/10] bpf: Implement bpf_prog_test_run for perf event programs Krzesimir Nowak
@ 2019-06-25 20:12   ` Stanislav Fomichev
  2019-06-26  9:10     ` Krzesimir Nowak
  0 siblings, 1 reply; 16+ messages in thread
From: Stanislav Fomichev @ 2019-06-25 20:12 UTC (permalink / raw)
  To: Krzesimir Nowak
  Cc: netdev, Alban Crequy, Iago López Galeiras,
	Alexei Starovoitov, Daniel Borkmann, Martin KaFai Lau, Song Liu,
	Yonghong Song, linux-kernel, bpf

On 06/25, Krzesimir Nowak wrote:
> As an input, test run for perf event program takes struct
> bpf_perf_event_data as ctx_in and struct bpf_perf_event_value as
> data_in. For an output, it basically ignores ctx_out and data_out.
> 
> The implementation sets an instance of struct bpf_perf_event_data_kern
> in such a way that the BPF program reading data from context will
> receive what we passed to the bpf prog test run in ctx_in. Also BPF
> program can call bpf_perf_prog_read_value to receive what was passed
> in data_in.
> 
> Signed-off-by: Krzesimir Nowak <krzesimir@kinvolk.io>
> ---
>  kernel/trace/bpf_trace.c                      | 107 ++++++++++++++++++
>  .../bpf/verifier/perf_event_sample_period.c   |   8 ++
>  2 files changed, 115 insertions(+)
> 
> diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> index c102c240bb0b..2fa49ea8a475 100644
> --- a/kernel/trace/bpf_trace.c
> +++ b/kernel/trace/bpf_trace.c
> @@ -16,6 +16,8 @@
>  
>  #include <asm/tlb.h>
>  
> +#include <trace/events/bpf_test_run.h>
> +
>  #include "trace_probe.h"
>  #include "trace.h"
>  
> @@ -1160,7 +1162,112 @@ const struct bpf_verifier_ops perf_event_verifier_ops = {
>  	.convert_ctx_access	= pe_prog_convert_ctx_access,
>  };
>  
> +static int pe_prog_test_run(struct bpf_prog *prog,
> +			    const union bpf_attr *kattr,
> +			    union bpf_attr __user *uattr)
> +{
> +	void __user *ctx_in = u64_to_user_ptr(kattr->test.ctx_in);
> +	void __user *data_in = u64_to_user_ptr(kattr->test.data_in);
> +	u32 data_size_in = kattr->test.data_size_in;
> +	u32 ctx_size_in = kattr->test.ctx_size_in;
> +	u32 repeat = kattr->test.repeat;
> +	u32 retval = 0, duration = 0;
> +	int err = -EINVAL;
> +	u64 time_start, time_spent = 0;
> +	int i;
> +	struct perf_sample_data sample_data = {0, };
> +	struct perf_event event = {0, };
> +	struct bpf_perf_event_data_kern real_ctx = {0, };
> +	struct bpf_perf_event_data fake_ctx = {0, };
> +	struct bpf_perf_event_value value = {0, };
> +
> +	if (ctx_size_in != sizeof(fake_ctx))
> +		goto out;
> +	if (data_size_in != sizeof(value))
> +		goto out;
> +
> +	if (copy_from_user(&fake_ctx, ctx_in, ctx_size_in)) {
> +		err = -EFAULT;
> +		goto out;
> +	}
Move this to net/bpf/test_run.c? I have a bpf_ctx_init helper to deal
with ctx input, might save you some code above wrt ctx size/etc.

> +	if (copy_from_user(&value, data_in, data_size_in)) {
> +		err = -EFAULT;
> +		goto out;
> +	}
> +
> +	real_ctx.regs = &fake_ctx.regs;
> +	real_ctx.data = &sample_data;
> +	real_ctx.event = &event;
> +	perf_sample_data_init(&sample_data, fake_ctx.addr,
> +			      fake_ctx.sample_period);
> +	event.cpu = smp_processor_id();
> +	event.oncpu = -1;
> +	event.state = PERF_EVENT_STATE_OFF;
> +	local64_set(&event.count, value.counter);
> +	event.total_time_enabled = value.enabled;
> +	event.total_time_running = value.running;
> +	/* make self as a leader - it is used only for checking the
> +	 * state field
> +	 */
> +	event.group_leader = &event;
> +
> +	/* slightly changed copy pasta from bpf_test_run() in
> +	 * net/bpf/test_run.c
> +	 */
> +	if (!repeat)
> +		repeat = 1;
> +
> +	rcu_read_lock();
> +	preempt_disable();
> +	time_start = ktime_get_ns();
> +	for (i = 0; i < repeat; i++) {
Any reason for not using bpf_test_run?

> +		retval = BPF_PROG_RUN(prog, &real_ctx);
> +
> +		if (signal_pending(current)) {
> +			err = -EINTR;
> +			preempt_enable();
> +			rcu_read_unlock();
> +			goto out;
> +		}
> +
> +		if (need_resched()) {
> +			time_spent += ktime_get_ns() - time_start;
> +			preempt_enable();
> +			rcu_read_unlock();
> +
> +			cond_resched();
> +
> +			rcu_read_lock();
> +			preempt_disable();
> +			time_start = ktime_get_ns();
> +		}
> +	}
> +	time_spent += ktime_get_ns() - time_start;
> +	preempt_enable();
> +	rcu_read_unlock();
> +
> +	do_div(time_spent, repeat);
> +	duration = time_spent > U32_MAX ? U32_MAX : (u32)time_spent;
> +	/* end of slightly changed copy pasta from bpf_test_run() in
> +	 * net/bpf/test_run.c
> +	 */
> +
> +	if (copy_to_user(&uattr->test.retval, &retval, sizeof(retval))) {
> +		err = -EFAULT;
> +		goto out;
> +	}
> +	if (copy_to_user(&uattr->test.duration, &duration, sizeof(duration))) {
> +		err = -EFAULT;
> +		goto out;
> +	}
Can BPF program modify fake_ctx? Do we need/want to copy it back?

> +	err = 0;
> +out:
> +	trace_bpf_test_finish(&err);
> +	return err;
> +}
> +
>  const struct bpf_prog_ops perf_event_prog_ops = {
> +	.test_run	= pe_prog_test_run,
>  };
>  
>  static DEFINE_MUTEX(bpf_event_mutex);
> diff --git a/tools/testing/selftests/bpf/verifier/perf_event_sample_period.c b/tools/testing/selftests/bpf/verifier/perf_event_sample_period.c
> index 471c1a5950d8..16e9e5824d14 100644
> --- a/tools/testing/selftests/bpf/verifier/perf_event_sample_period.c
> +++ b/tools/testing/selftests/bpf/verifier/perf_event_sample_period.c
This should probably go in another patch.

> @@ -13,6 +13,8 @@
>  	},
>  	.result = ACCEPT,
>  	.prog_type = BPF_PROG_TYPE_PERF_EVENT,
> +	.ctx_len = sizeof(struct bpf_perf_event_data),
> +	.data_len = sizeof(struct bpf_perf_event_value),
>  },
>  {
>  	"check bpf_perf_event_data->sample_period half load permitted",
> @@ -29,6 +31,8 @@
>  	},
>  	.result = ACCEPT,
>  	.prog_type = BPF_PROG_TYPE_PERF_EVENT,
> +	.ctx_len = sizeof(struct bpf_perf_event_data),
> +	.data_len = sizeof(struct bpf_perf_event_value),
>  },
>  {
>  	"check bpf_perf_event_data->sample_period word load permitted",
> @@ -45,6 +49,8 @@
>  	},
>  	.result = ACCEPT,
>  	.prog_type = BPF_PROG_TYPE_PERF_EVENT,
> +	.ctx_len = sizeof(struct bpf_perf_event_data),
> +	.data_len = sizeof(struct bpf_perf_event_value),
>  },
>  {
>  	"check bpf_perf_event_data->sample_period dword load permitted",
> @@ -56,4 +62,6 @@
>  	},
>  	.result = ACCEPT,
>  	.prog_type = BPF_PROG_TYPE_PERF_EVENT,
> +	.ctx_len = sizeof(struct bpf_perf_event_data),
> +	.data_len = sizeof(struct bpf_perf_event_value),
>  },
> -- 
> 2.20.1
> 


* Re: [bpf-next v2 08/10] bpf: Implement bpf_prog_test_run for perf event programs
  2019-06-25 20:12   ` Stanislav Fomichev
@ 2019-06-26  9:10     ` Krzesimir Nowak
  2019-06-26 16:12       ` Stanislav Fomichev
  0 siblings, 1 reply; 16+ messages in thread
From: Krzesimir Nowak @ 2019-06-26  9:10 UTC (permalink / raw)
  To: Stanislav Fomichev
  Cc: netdev, Alban Crequy, Iago López Galeiras,
	Alexei Starovoitov, Daniel Borkmann, Martin KaFai Lau, Song Liu,
	Yonghong Song, linux-kernel, bpf

On Tue, Jun 25, 2019 at 10:12 PM Stanislav Fomichev <sdf@fomichev.me> wrote:
>
> On 06/25, Krzesimir Nowak wrote:
> > As an input, test run for perf event program takes struct
> > bpf_perf_event_data as ctx_in and struct bpf_perf_event_value as
> > data_in. For an output, it basically ignores ctx_out and data_out.
> >
> > The implementation sets an instance of struct bpf_perf_event_data_kern
> > in such a way that the BPF program reading data from context will
> > receive what we passed to the bpf prog test run in ctx_in. Also BPF
> > program can call bpf_perf_prog_read_value to receive what was passed
> > in data_in.
> >
> > Signed-off-by: Krzesimir Nowak <krzesimir@kinvolk.io>
> > ---
> >  kernel/trace/bpf_trace.c                      | 107 ++++++++++++++++++
> >  .../bpf/verifier/perf_event_sample_period.c   |   8 ++
> >  2 files changed, 115 insertions(+)
> >
> > diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> > index c102c240bb0b..2fa49ea8a475 100644
> > --- a/kernel/trace/bpf_trace.c
> > +++ b/kernel/trace/bpf_trace.c
> > @@ -16,6 +16,8 @@
> >
> >  #include <asm/tlb.h>
> >
> > +#include <trace/events/bpf_test_run.h>
> > +
> >  #include "trace_probe.h"
> >  #include "trace.h"
> >
> > @@ -1160,7 +1162,112 @@ const struct bpf_verifier_ops perf_event_verifier_ops = {
> >       .convert_ctx_access     = pe_prog_convert_ctx_access,
> >  };
> >
> > +static int pe_prog_test_run(struct bpf_prog *prog,
> > +                         const union bpf_attr *kattr,
> > +                         union bpf_attr __user *uattr)
> > +{
> > +     void __user *ctx_in = u64_to_user_ptr(kattr->test.ctx_in);
> > +     void __user *data_in = u64_to_user_ptr(kattr->test.data_in);
> > +     u32 data_size_in = kattr->test.data_size_in;
> > +     u32 ctx_size_in = kattr->test.ctx_size_in;
> > +     u32 repeat = kattr->test.repeat;
> > +     u32 retval = 0, duration = 0;
> > +     int err = -EINVAL;
> > +     u64 time_start, time_spent = 0;
> > +     int i;
> > +     struct perf_sample_data sample_data = {0, };
> > +     struct perf_event event = {0, };
> > +     struct bpf_perf_event_data_kern real_ctx = {0, };
> > +     struct bpf_perf_event_data fake_ctx = {0, };
> > +     struct bpf_perf_event_value value = {0, };
> > +
> > +     if (ctx_size_in != sizeof(fake_ctx))
> > +             goto out;
> > +     if (data_size_in != sizeof(value))
> > +             goto out;
> > +
> > +     if (copy_from_user(&fake_ctx, ctx_in, ctx_size_in)) {
> > +             err = -EFAULT;
> > +             goto out;
> > +     }
> Move this to net/bpf/test_run.c? I have a bpf_ctx_init helper to deal
> with ctx input, might save you some code above wrt ctx size/etc.

My impression about net/bpf/test_run.c was that it was a collection of
helpers for test runs of the network-related BPF programs, because
they are so similar to each other. So kernel/trace/bpf_trace.c looked
like an obvious place for the test_run implementation since other perf
trace BPF stuff was already there.

And about bpf_ctx_init - it looks useful, as it seems to handle the
scenario where the size of the ctx struct grows while still allowing
userspace to pass an older (thus smaller) version of the struct for
compatibility. Maybe the checking and copying part of the function
could be moved into some non-static helper, so I could use it and
still skip the need for allocating memory for the context?

>
> > +     if (copy_from_user(&value, data_in, data_size_in)) {
> > +             err = -EFAULT;
> > +             goto out;
> > +     }
> > +
> > +     real_ctx.regs = &fake_ctx.regs;
> > +     real_ctx.data = &sample_data;
> > +     real_ctx.event = &event;
> > +     perf_sample_data_init(&sample_data, fake_ctx.addr,
> > +                           fake_ctx.sample_period);
> > +     event.cpu = smp_processor_id();
> > +     event.oncpu = -1;
> > +     event.state = PERF_EVENT_STATE_OFF;
> > +     local64_set(&event.count, value.counter);
> > +     event.total_time_enabled = value.enabled;
> > +     event.total_time_running = value.running;
> > +     /* make self as a leader - it is used only for checking the
> > +      * state field
> > +      */
> > +     event.group_leader = &event;
> > +
> > +     /* slightly changed copy pasta from bpf_test_run() in
> > +      * net/bpf/test_run.c
> > +      */
> > +     if (!repeat)
> > +             repeat = 1;
> > +
> > +     rcu_read_lock();
> > +     preempt_disable();
> > +     time_start = ktime_get_ns();
> > +     for (i = 0; i < repeat; i++) {
> Any reason for not using bpf_test_run?

Two, mostly. One was that it is a static function and my code was
elsewhere. Second was that it does some cgroup storage setup and I'm
not sure if the perf event BPF program needs that.

>
> > +             retval = BPF_PROG_RUN(prog, &real_ctx);
> > +
> > +             if (signal_pending(current)) {
> > +                     err = -EINTR;
> > +                     preempt_enable();
> > +                     rcu_read_unlock();
> > +                     goto out;
> > +             }
> > +
> > +             if (need_resched()) {
> > +                     time_spent += ktime_get_ns() - time_start;
> > +                     preempt_enable();
> > +                     rcu_read_unlock();
> > +
> > +                     cond_resched();
> > +
> > +                     rcu_read_lock();
> > +                     preempt_disable();
> > +                     time_start = ktime_get_ns();
> > +             }
> > +     }
> > +     time_spent += ktime_get_ns() - time_start;
> > +     preempt_enable();
> > +     rcu_read_unlock();
> > +
> > +     do_div(time_spent, repeat);
> > +     duration = time_spent > U32_MAX ? U32_MAX : (u32)time_spent;
> > +     /* end of slightly changed copy pasta from bpf_test_run() in
> > +      * net/bpf/test_run.c
> > +      */
> > +
> > +     if (copy_to_user(&uattr->test.retval, &retval, sizeof(retval))) {
> > +             err = -EFAULT;
> > +             goto out;
> > +     }
> > +     if (copy_to_user(&uattr->test.duration, &duration, sizeof(duration))) {
> > +             err = -EFAULT;
> > +             goto out;
> > +     }
> Can BPF program modify fake_ctx? Do we need/want to copy it back?

Reading the pe_prog_is_valid_access function tells me that it's not
possible - the only valid access type is read. So maybe I should be
stricter about the requirements for the data_out and ctx_out sizes
(they should be zero, otherwise return -EINVAL).

>
> > +     err = 0;
> > +out:
> > +     trace_bpf_test_finish(&err);
> > +     return err;
> > +}
> > +
> >  const struct bpf_prog_ops perf_event_prog_ops = {
> > +     .test_run       = pe_prog_test_run,
> >  };
> >
> >  static DEFINE_MUTEX(bpf_event_mutex);
> > diff --git a/tools/testing/selftests/bpf/verifier/perf_event_sample_period.c b/tools/testing/selftests/bpf/verifier/perf_event_sample_period.c
> > index 471c1a5950d8..16e9e5824d14 100644
> > --- a/tools/testing/selftests/bpf/verifier/perf_event_sample_period.c
> > +++ b/tools/testing/selftests/bpf/verifier/perf_event_sample_period.c
> This should probably go in another patch.

Yeah, I was wondering about it. These changes are here to avoid
breaking the tests, since perf event programs can actually be run now
and the test_run for perf events requires certain sizes for ctx and
data.

So I will either move them to a separate patch or rework the test_run
for perf events to accept any size between 0 and sizeof(struct
something), so the changes in the tests may not be necessary.

>
> > @@ -13,6 +13,8 @@
> >       },
> >       .result = ACCEPT,
> >       .prog_type = BPF_PROG_TYPE_PERF_EVENT,
> > +     .ctx_len = sizeof(struct bpf_perf_event_data),
> > +     .data_len = sizeof(struct bpf_perf_event_value),
> >  },
> >  {
> >       "check bpf_perf_event_data->sample_period half load permitted",
> > @@ -29,6 +31,8 @@
> >       },
> >       .result = ACCEPT,
> >       .prog_type = BPF_PROG_TYPE_PERF_EVENT,
> > +     .ctx_len = sizeof(struct bpf_perf_event_data),
> > +     .data_len = sizeof(struct bpf_perf_event_value),
> >  },
> >  {
> >       "check bpf_perf_event_data->sample_period word load permitted",
> > @@ -45,6 +49,8 @@
> >       },
> >       .result = ACCEPT,
> >       .prog_type = BPF_PROG_TYPE_PERF_EVENT,
> > +     .ctx_len = sizeof(struct bpf_perf_event_data),
> > +     .data_len = sizeof(struct bpf_perf_event_value),
> >  },
> >  {
> >       "check bpf_perf_event_data->sample_period dword load permitted",
> > @@ -56,4 +62,6 @@
> >       },
> >       .result = ACCEPT,
> >       .prog_type = BPF_PROG_TYPE_PERF_EVENT,
> > +     .ctx_len = sizeof(struct bpf_perf_event_data),
> > +     .data_len = sizeof(struct bpf_perf_event_value),
> >  },
> > --
> > 2.20.1
> >



-- 
Kinvolk GmbH | Adalbertstr.6a, 10999 Berlin | tel: +491755589364
Geschäftsführer/Directors: Alban Crequy, Chris Kühl, Iago López Galeiras
Registergericht/Court of registration: Amtsgericht Charlottenburg
Registernummer/Registration number: HRB 171414 B
Ust-ID-Nummer/VAT ID number: DE302207000


* Re: [bpf-next v2 08/10] bpf: Implement bpf_prog_test_run for perf event programs
  2019-06-26  9:10     ` Krzesimir Nowak
@ 2019-06-26 16:12       ` Stanislav Fomichev
  2019-07-08 16:51         ` Krzesimir Nowak
  0 siblings, 1 reply; 16+ messages in thread
From: Stanislav Fomichev @ 2019-06-26 16:12 UTC (permalink / raw)
  To: Krzesimir Nowak
  Cc: netdev, Alban Crequy, Iago López Galeiras,
	Alexei Starovoitov, Daniel Borkmann, Martin KaFai Lau, Song Liu,
	Yonghong Song, linux-kernel, bpf

On 06/26, Krzesimir Nowak wrote:
> On Tue, Jun 25, 2019 at 10:12 PM Stanislav Fomichev <sdf@fomichev.me> wrote:
> >
> > On 06/25, Krzesimir Nowak wrote:
> > > As an input, test run for perf event program takes struct
> > > bpf_perf_event_data as ctx_in and struct bpf_perf_event_value as
> > > data_in. For an output, it basically ignores ctx_out and data_out.
> > >
> > > The implementation sets an instance of struct bpf_perf_event_data_kern
> > > in such a way that the BPF program reading data from context will
> > > receive what we passed to the bpf prog test run in ctx_in. Also BPF
> > > program can call bpf_perf_prog_read_value to receive what was passed
> > > in data_in.
> > >
> > > Signed-off-by: Krzesimir Nowak <krzesimir@kinvolk.io>
> > > ---
> > >  kernel/trace/bpf_trace.c                      | 107 ++++++++++++++++++
> > >  .../bpf/verifier/perf_event_sample_period.c   |   8 ++
> > >  2 files changed, 115 insertions(+)
> > >
> > > diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> > > index c102c240bb0b..2fa49ea8a475 100644
> > > --- a/kernel/trace/bpf_trace.c
> > > +++ b/kernel/trace/bpf_trace.c
> > > @@ -16,6 +16,8 @@
> > >
> > >  #include <asm/tlb.h>
> > >
> > > +#include <trace/events/bpf_test_run.h>
> > > +
> > >  #include "trace_probe.h"
> > >  #include "trace.h"
> > >
> > > @@ -1160,7 +1162,112 @@ const struct bpf_verifier_ops perf_event_verifier_ops = {
> > >       .convert_ctx_access     = pe_prog_convert_ctx_access,
> > >  };
> > >
> > > +static int pe_prog_test_run(struct bpf_prog *prog,
> > > +                         const union bpf_attr *kattr,
> > > +                         union bpf_attr __user *uattr)
> > > +{
> > > +     void __user *ctx_in = u64_to_user_ptr(kattr->test.ctx_in);
> > > +     void __user *data_in = u64_to_user_ptr(kattr->test.data_in);
> > > +     u32 data_size_in = kattr->test.data_size_in;
> > > +     u32 ctx_size_in = kattr->test.ctx_size_in;
> > > +     u32 repeat = kattr->test.repeat;
> > > +     u32 retval = 0, duration = 0;
> > > +     int err = -EINVAL;
> > > +     u64 time_start, time_spent = 0;
> > > +     int i;
> > > +     struct perf_sample_data sample_data = {0, };
> > > +     struct perf_event event = {0, };
> > > +     struct bpf_perf_event_data_kern real_ctx = {0, };
> > > +     struct bpf_perf_event_data fake_ctx = {0, };
> > > +     struct bpf_perf_event_value value = {0, };
> > > +
> > > +     if (ctx_size_in != sizeof(fake_ctx))
> > > +             goto out;
> > > +     if (data_size_in != sizeof(value))
> > > +             goto out;
> > > +
> > > +     if (copy_from_user(&fake_ctx, ctx_in, ctx_size_in)) {
> > > +             err = -EFAULT;
> > > +             goto out;
> > > +     }
> > Move this to net/bpf/test_run.c? I have a bpf_ctx_init helper to deal
> > with ctx input, might save you some code above wrt ctx size/etc.
> 
> My impression about net/bpf/test_run.c was that it was a collection of
> helpers for test runs of the network-related BPF programs, because
> they are so similar to each other. So kernel/trace/bpf_trace.c looked
> like an obvious place for the test_run implementation since other perf
> trace BPF stuff was already there.
Maybe net/bpf/test_run.c should be renamed to kernel/bpf/test_run.c?

> And about bpf_ctx_init - looks useful as it seems to me that it
> handles the scenario where the size of the ctx struct grows, but still
> allows passing older version of the struct (thus smaller) from
> userspace for compatibility. Maybe that checking and copying part of
> the function could be moved into some non-static helper function, so I
> could use it and still skip the need for allocating memory for the
> context?
You can always make bpf_ctx_init non-static and export it.
But, again, consider adding your stuff to the net/bpf/test_run.c
and exporting only pe_prog_test_run. That way you can reuse
bpf_ctx_init and bpf_test_run.

Why do you care about memory allocation though? It's a one-time
operation and doesn't affect the performance measurements.

> > > +     if (copy_from_user(&value, data_in, data_size_in)) {
> > > +             err = -EFAULT;
> > > +             goto out;
> > > +     }
> > > +
> > > +     real_ctx.regs = &fake_ctx.regs;
> > > +     real_ctx.data = &sample_data;
> > > +     real_ctx.event = &event;
> > > +     perf_sample_data_init(&sample_data, fake_ctx.addr,
> > > +                           fake_ctx.sample_period);
> > > +     event.cpu = smp_processor_id();
> > > +     event.oncpu = -1;
> > > +     event.state = PERF_EVENT_STATE_OFF;
> > > +     local64_set(&event.count, value.counter);
> > > +     event.total_time_enabled = value.enabled;
> > > +     event.total_time_running = value.running;
> > > +     /* make self as a leader - it is used only for checking the
> > > +      * state field
> > > +      */
> > > +     event.group_leader = &event;
> > > +
> > > +     /* slightly changed copy pasta from bpf_test_run() in
> > > +      * net/bpf/test_run.c
> > > +      */
> > > +     if (!repeat)
> > > +             repeat = 1;
> > > +
> > > +     rcu_read_lock();
> > > +     preempt_disable();
> > > +     time_start = ktime_get_ns();
> > > +     for (i = 0; i < repeat; i++) {
> > Any reason for not using bpf_test_run?
> 
> Two, mostly. One was that it is a static function and my code was
> elsewhere. Second was that it does some cgroup storage setup and I'm
> not sure if the perf event BPF program needs that.
You can always make it non-static.

Regarding cgroup storage: do we care? If you can see it affecting
your performance numbers, then yes, but you can try to measure to see
if it gives you any noticeable overhead. Maybe add an argument to
bpf_test_run to skip cgroup storage stuff?

> > > +             retval = BPF_PROG_RUN(prog, &real_ctx);
> > > +
> > > +             if (signal_pending(current)) {
> > > +                     err = -EINTR;
> > > +                     preempt_enable();
> > > +                     rcu_read_unlock();
> > > +                     goto out;
> > > +             }
> > > +
> > > +             if (need_resched()) {
> > > +                     time_spent += ktime_get_ns() - time_start;
> > > +                     preempt_enable();
> > > +                     rcu_read_unlock();
> > > +
> > > +                     cond_resched();
> > > +
> > > +                     rcu_read_lock();
> > > +                     preempt_disable();
> > > +                     time_start = ktime_get_ns();
> > > +             }
> > > +     }
> > > +     time_spent += ktime_get_ns() - time_start;
> > > +     preempt_enable();
> > > +     rcu_read_unlock();
> > > +
> > > +     do_div(time_spent, repeat);
> > > +     duration = time_spent > U32_MAX ? U32_MAX : (u32)time_spent;
> > > +     /* end of slightly changed copy pasta from bpf_test_run() in
> > > +      * net/bpf/test_run.c
> > > +      */
> > > +
> > > +     if (copy_to_user(&uattr->test.retval, &retval, sizeof(retval))) {
> > > +             err = -EFAULT;
> > > +             goto out;
> > > +     }
> > > +     if (copy_to_user(&uattr->test.duration, &duration, sizeof(duration))) {
> > > +             err = -EFAULT;
> > > +             goto out;
> > > +     }
> > Can BPF program modify fake_ctx? Do we need/want to copy it back?
> 
> Reading the pe_prog_is_valid_access function tells me that it's not
> possible - the only type of valid access is read. So maybe I should be
> stricter about the requirements for the data_out and ctx_out sizes
> (should be zero or return -EINVAL).
Yes, better to explicitly prohibit anything that we don't support.

> > > +     err = 0;
> > > +out:
> > > +     trace_bpf_test_finish(&err);
> > > +     return err;
> > > +}
> > > +
> > >  const struct bpf_prog_ops perf_event_prog_ops = {
> > > +     .test_run       = pe_prog_test_run,
> > >  };
> > >
> > >  static DEFINE_MUTEX(bpf_event_mutex);
> > > diff --git a/tools/testing/selftests/bpf/verifier/perf_event_sample_period.c b/tools/testing/selftests/bpf/verifier/perf_event_sample_period.c
> > > index 471c1a5950d8..16e9e5824d14 100644
> > > --- a/tools/testing/selftests/bpf/verifier/perf_event_sample_period.c
> > > +++ b/tools/testing/selftests/bpf/verifier/perf_event_sample_period.c
> > This should probably go in another patch.
> 
> Yeah, I was wondering about it. These changes are here to avoid
> breaking the tests, since a perf event program can actually be run now
> and test_run for perf events requires certain sizes for ctx and
> data.
You need to make sure the context is optional, that way you don't break
any existing tests out in the wild and can move those changes to
another patch.

> So, I will either move them to a separate patch or rework the test_run
> for perf event to accept any size between 0 and sizeof(struct
> something), so the changes in the tests may not be necessary.
> 
> >
> > > @@ -13,6 +13,8 @@
> > >       },
> > >       .result = ACCEPT,
> > >       .prog_type = BPF_PROG_TYPE_PERF_EVENT,
> > > +     .ctx_len = sizeof(struct bpf_perf_event_data),
> > > +     .data_len = sizeof(struct bpf_perf_event_value),
> > >  },
> > >  {
> > >       "check bpf_perf_event_data->sample_period half load permitted",
> > > @@ -29,6 +31,8 @@
> > >       },
> > >       .result = ACCEPT,
> > >       .prog_type = BPF_PROG_TYPE_PERF_EVENT,
> > > +     .ctx_len = sizeof(struct bpf_perf_event_data),
> > > +     .data_len = sizeof(struct bpf_perf_event_value),
> > >  },
> > >  {
> > >       "check bpf_perf_event_data->sample_period word load permitted",
> > > @@ -45,6 +49,8 @@
> > >       },
> > >       .result = ACCEPT,
> > >       .prog_type = BPF_PROG_TYPE_PERF_EVENT,
> > > +     .ctx_len = sizeof(struct bpf_perf_event_data),
> > > +     .data_len = sizeof(struct bpf_perf_event_value),
> > >  },
> > >  {
> > >       "check bpf_perf_event_data->sample_period dword load permitted",
> > > @@ -56,4 +62,6 @@
> > >       },
> > >       .result = ACCEPT,
> > >       .prog_type = BPF_PROG_TYPE_PERF_EVENT,
> > > +     .ctx_len = sizeof(struct bpf_perf_event_data),
> > > +     .data_len = sizeof(struct bpf_perf_event_value),
> > >  },
> > > --
> > > 2.20.1
> > >
> 
> 
> 
> -- 
> Kinvolk GmbH | Adalbertstr.6a, 10999 Berlin | tel: +491755589364
> Geschäftsführer/Directors: Alban Crequy, Chris Kühl, Iago López Galeiras
> Registergericht/Court of registration: Amtsgericht Charlottenburg
> Registernummer/Registration number: HRB 171414 B
> Ust-ID-Nummer/VAT ID number: DE302207000


* Re: [bpf-next v2 08/10] bpf: Implement bpf_prog_test_run for perf event programs
  2019-06-26 16:12       ` Stanislav Fomichev
@ 2019-07-08 16:51         ` Krzesimir Nowak
  0 siblings, 0 replies; 16+ messages in thread
From: Krzesimir Nowak @ 2019-07-08 16:51 UTC (permalink / raw)
  To: Stanislav Fomichev
  Cc: netdev, Alban Crequy, Iago López Galeiras,
	Alexei Starovoitov, Daniel Borkmann, Martin KaFai Lau, Song Liu,
	Yonghong Song, linux-kernel, bpf

On Wed, Jun 26, 2019 at 6:12 PM Stanislav Fomichev <sdf@fomichev.me> wrote:
>
> On 06/26, Krzesimir Nowak wrote:
> > On Tue, Jun 25, 2019 at 10:12 PM Stanislav Fomichev <sdf@fomichev.me> wrote:
> > >
> > > On 06/25, Krzesimir Nowak wrote:
> > > > As an input, test run for perf event program takes struct
> > > > bpf_perf_event_data as ctx_in and struct bpf_perf_event_value as
> > > > data_in. For an output, it basically ignores ctx_out and data_out.
> > > >
> > > > The implementation sets an instance of struct bpf_perf_event_data_kern
> > > > in such a way that the BPF program reading data from context will
> > > > receive what we passed to the bpf prog test run in ctx_in. Also BPF
> > > > program can call bpf_perf_prog_read_value to receive what was passed
> > > > in data_in.
> > > >
> > > > Signed-off-by: Krzesimir Nowak <krzesimir@kinvolk.io>
> > > > ---
> > > >  kernel/trace/bpf_trace.c                      | 107 ++++++++++++++++++
> > > >  .../bpf/verifier/perf_event_sample_period.c   |   8 ++
> > > >  2 files changed, 115 insertions(+)
> > > >
> > > > diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> > > > index c102c240bb0b..2fa49ea8a475 100644
> > > > --- a/kernel/trace/bpf_trace.c
> > > > +++ b/kernel/trace/bpf_trace.c
> > > > @@ -16,6 +16,8 @@
> > > >
> > > >  #include <asm/tlb.h>
> > > >
> > > > +#include <trace/events/bpf_test_run.h>
> > > > +
> > > >  #include "trace_probe.h"
> > > >  #include "trace.h"
> > > >
> > > > @@ -1160,7 +1162,112 @@ const struct bpf_verifier_ops perf_event_verifier_ops = {
> > > >       .convert_ctx_access     = pe_prog_convert_ctx_access,
> > > >  };
> > > >
> > > > +static int pe_prog_test_run(struct bpf_prog *prog,
> > > > +                         const union bpf_attr *kattr,
> > > > +                         union bpf_attr __user *uattr)
> > > > +{
> > > > +     void __user *ctx_in = u64_to_user_ptr(kattr->test.ctx_in);
> > > > +     void __user *data_in = u64_to_user_ptr(kattr->test.data_in);
> > > > +     u32 data_size_in = kattr->test.data_size_in;
> > > > +     u32 ctx_size_in = kattr->test.ctx_size_in;
> > > > +     u32 repeat = kattr->test.repeat;
> > > > +     u32 retval = 0, duration = 0;
> > > > +     int err = -EINVAL;
> > > > +     u64 time_start, time_spent = 0;
> > > > +     int i;
> > > > +     struct perf_sample_data sample_data = {0, };
> > > > +     struct perf_event event = {0, };
> > > > +     struct bpf_perf_event_data_kern real_ctx = {0, };
> > > > +     struct bpf_perf_event_data fake_ctx = {0, };
> > > > +     struct bpf_perf_event_value value = {0, };
> > > > +
> > > > +     if (ctx_size_in != sizeof(fake_ctx))
> > > > +             goto out;
> > > > +     if (data_size_in != sizeof(value))
> > > > +             goto out;
> > > > +
> > > > +     if (copy_from_user(&fake_ctx, ctx_in, ctx_size_in)) {
> > > > +             err = -EFAULT;
> > > > +             goto out;
> > > > +     }
> > > Move this to net/bpf/test_run.c? I have a bpf_ctx_init helper to deal
> > > with ctx input, might save you some code above wrt ctx size/etc.
> >
> > My impression about net/bpf/test_run.c was that it was a collection of
> > helpers for test runs of the network-related BPF programs, because
> > they are so similar to each other. So kernel/trace/bpf_trace.c looked
> > like an obvious place for the test_run implementation since other perf
> > trace BPF stuff was already there.
> Maybe net/bpf/test_run.c should be renamed to kernel/bpf/test_run.c?

Just sent another version of this patch series. I went with a slightly
different approach - moved some functions to kernel/bpf/test_run.c and
left the network-specific stuff in net/bpf/test_run.c.

>
> > And about bpf_ctx_init - looks useful as it seems to me that it
> > handles the scenario where the size of the ctx struct grows, but still
> > allows passing older version of the struct (thus smaller) from
> > userspace for compatibility. Maybe that checking and copying part of
> > the function could be moved into some non-static helper function, so I
> > could use it and still skip the need for allocating memory for the
> > context?
> You can always make bpf_ctx_init non-static and export it.
> But, again, consider adding your stuff to the net/bpf/test_run.c
> and exporting only pe_prog_test_run. That way you can reuse
> bpf_ctx_init and bpf_test_run.
>
> Why do you care about memory allocation though? It's a one time
> operation and doesn't affect the performance measurements.
>
> > > > +     if (copy_from_user(&value, data_in, data_size_in)) {
> > > > +             err = -EFAULT;
> > > > +             goto out;
> > > > +     }
> > > > +
> > > > +     real_ctx.regs = &fake_ctx.regs;
> > > > +     real_ctx.data = &sample_data;
> > > > +     real_ctx.event = &event;
> > > > +     perf_sample_data_init(&sample_data, fake_ctx.addr,
> > > > +                           fake_ctx.sample_period);
> > > > +     event.cpu = smp_processor_id();
> > > > +     event.oncpu = -1;
> > > > +     event.state = PERF_EVENT_STATE_OFF;
> > > > +     local64_set(&event.count, value.counter);
> > > > +     event.total_time_enabled = value.enabled;
> > > > +     event.total_time_running = value.running;
> > > > +     /* make self as a leader - it is used only for checking the
> > > > +      * state field
> > > > +      */
> > > > +     event.group_leader = &event;
> > > > +
> > > > +     /* slightly changed copy pasta from bpf_test_run() in
> > > > +      * net/bpf/test_run.c
> > > > +      */
> > > > +     if (!repeat)
> > > > +             repeat = 1;
> > > > +
> > > > +     rcu_read_lock();
> > > > +     preempt_disable();
> > > > +     time_start = ktime_get_ns();
> > > > +     for (i = 0; i < repeat; i++) {
> > > Any reason for not using bpf_test_run?
> >
> > Two, mostly. One was that it is a static function and my code was
> > elsewhere. Second was that it does some cgroup storage setup and I'm
> > not sure if the perf event BPF program needs that.
> You can always make it non-static.
>
> Regarding cgroup storage: do we care? If you can see it affecting
> your performance numbers, then yes, but you can try to measure to see
> if it gives you any noticeable overhead. Maybe add an argument to
> bpf_test_run to skip cgroup storage stuff?
>
> > > > +             retval = BPF_PROG_RUN(prog, &real_ctx);
> > > > +
> > > > +             if (signal_pending(current)) {
> > > > +                     err = -EINTR;
> > > > +                     preempt_enable();
> > > > +                     rcu_read_unlock();
> > > > +                     goto out;
> > > > +             }
> > > > +
> > > > +             if (need_resched()) {
> > > > +                     time_spent += ktime_get_ns() - time_start;
> > > > +                     preempt_enable();
> > > > +                     rcu_read_unlock();
> > > > +
> > > > +                     cond_resched();
> > > > +
> > > > +                     rcu_read_lock();
> > > > +                     preempt_disable();
> > > > +                     time_start = ktime_get_ns();
> > > > +             }
> > > > +     }
> > > > +     time_spent += ktime_get_ns() - time_start;
> > > > +     preempt_enable();
> > > > +     rcu_read_unlock();
> > > > +
> > > > +     do_div(time_spent, repeat);
> > > > +     duration = time_spent > U32_MAX ? U32_MAX : (u32)time_spent;
> > > > +     /* end of slightly changed copy pasta from bpf_test_run() in
> > > > +      * net/bpf/test_run.c
> > > > +      */
> > > > +
> > > > +     if (copy_to_user(&uattr->test.retval, &retval, sizeof(retval))) {
> > > > +             err = -EFAULT;
> > > > +             goto out;
> > > > +     }
> > > > +     if (copy_to_user(&uattr->test.duration, &duration, sizeof(duration))) {
> > > > +             err = -EFAULT;
> > > > +             goto out;
> > > > +     }
> > > Can BPF program modify fake_ctx? Do we need/want to copy it back?
> >
> > Reading the pe_prog_is_valid_access function tells me that it's not
> > possible - the only type of valid access is read. So maybe I should be
> > stricter about the requirements for the data_out and ctx_out sizes
> > (should be zero or return -EINVAL).
> Yes, better to explicitly prohibit anything that we don't support.
>
> > > > +     err = 0;
> > > > +out:
> > > > +     trace_bpf_test_finish(&err);
> > > > +     return err;
> > > > +}
> > > > +
> > > >  const struct bpf_prog_ops perf_event_prog_ops = {
> > > > +     .test_run       = pe_prog_test_run,
> > > >  };
> > > >
> > > >  static DEFINE_MUTEX(bpf_event_mutex);
> > > > diff --git a/tools/testing/selftests/bpf/verifier/perf_event_sample_period.c b/tools/testing/selftests/bpf/verifier/perf_event_sample_period.c
> > > > index 471c1a5950d8..16e9e5824d14 100644
> > > > --- a/tools/testing/selftests/bpf/verifier/perf_event_sample_period.c
> > > > +++ b/tools/testing/selftests/bpf/verifier/perf_event_sample_period.c
> > > This should probably go in another patch.
> >
> > Yeah, I was wondering about it. These changes are here to avoid
> > breaking the tests, since a perf event program can actually be run now
> > and test_run for perf events requires certain sizes for ctx and
> > data.
> You need to make sure the context is optional, that way you don't break
> any existing tests out in the wild and can move those changes to
> another patch.
>
> > So, I will either move them to a separate patch or rework the test_run
> > for perf event to accept any size between 0 and sizeof(struct
> > something), so the changes in the tests may not be necessary.
> >
> > >
> > > > @@ -13,6 +13,8 @@
> > > >       },
> > > >       .result = ACCEPT,
> > > >       .prog_type = BPF_PROG_TYPE_PERF_EVENT,
> > > > +     .ctx_len = sizeof(struct bpf_perf_event_data),
> > > > +     .data_len = sizeof(struct bpf_perf_event_value),
> > > >  },
> > > >  {
> > > >       "check bpf_perf_event_data->sample_period half load permitted",
> > > > @@ -29,6 +31,8 @@
> > > >       },
> > > >       .result = ACCEPT,
> > > >       .prog_type = BPF_PROG_TYPE_PERF_EVENT,
> > > > +     .ctx_len = sizeof(struct bpf_perf_event_data),
> > > > +     .data_len = sizeof(struct bpf_perf_event_value),
> > > >  },
> > > >  {
> > > >       "check bpf_perf_event_data->sample_period word load permitted",
> > > > @@ -45,6 +49,8 @@
> > > >       },
> > > >       .result = ACCEPT,
> > > >       .prog_type = BPF_PROG_TYPE_PERF_EVENT,
> > > > +     .ctx_len = sizeof(struct bpf_perf_event_data),
> > > > +     .data_len = sizeof(struct bpf_perf_event_value),
> > > >  },
> > > >  {
> > > >       "check bpf_perf_event_data->sample_period dword load permitted",
> > > > @@ -56,4 +62,6 @@
> > > >       },
> > > >       .result = ACCEPT,
> > > >       .prog_type = BPF_PROG_TYPE_PERF_EVENT,
> > > > +     .ctx_len = sizeof(struct bpf_perf_event_data),
> > > > +     .data_len = sizeof(struct bpf_perf_event_value),
> > > >  },
> > > > --
> > > > 2.20.1
> > > >
> >
> >
> >
> > --
> > Kinvolk GmbH | Adalbertstr.6a, 10999 Berlin | tel: +491755589364
> > Geschäftsführer/Directors: Alban Crequy, Chris Kühl, Iago López Galeiras
> > Registergericht/Court of registration: Amtsgericht Charlottenburg
> > Registernummer/Registration number: HRB 171414 B
> > Ust-ID-Nummer/VAT ID number: DE302207000



-- 
Kinvolk GmbH | Adalbertstr.6a, 10999 Berlin | tel: +491755589364
Geschäftsführer/Directors: Alban Crequy, Chris Kühl, Iago López Galeiras
Registergericht/Court of registration: Amtsgericht Charlottenburg
Registernummer/Registration number: HRB 171414 B
Ust-ID-Nummer/VAT ID number: DE302207000


end of thread, other threads:[~2019-07-08 16:51 UTC | newest]

Thread overview: 16+ messages
2019-06-25 19:42 [bpf-next v2 00/10] Test the 32bit narrow reads Krzesimir Nowak
2019-06-25 19:42 ` [bpf-next v2 01/10] selftests/bpf: Print a message when tester could not run a program Krzesimir Nowak
2019-06-25 19:42 ` [bpf-next v2 02/10] selftests/bpf: Avoid a clobbering of errno Krzesimir Nowak
2019-06-25 19:42 ` [bpf-next v2 03/10] selftests/bpf: Avoid another case of errno clobbering Krzesimir Nowak
2019-06-25 20:08   ` Stanislav Fomichev
2019-06-25 19:42 ` [bpf-next v2 04/10] selftests/bpf: Use bpf_prog_test_run_xattr Krzesimir Nowak
2019-06-25 19:42 ` [bpf-next v2 05/10] selftests/bpf: Allow passing more information to BPF prog test run Krzesimir Nowak
2019-06-25 19:42 ` [bpf-next v2 06/10] tools headers: Adopt compiletime_assert from kernel sources Krzesimir Nowak
2019-06-25 19:42 ` [bpf-next v2 07/10] tools headers: sync struct bpf_perf_event_data Krzesimir Nowak
2019-06-25 19:42 ` [bpf-next v2 08/10] bpf: Implement bpf_prog_test_run for perf event programs Krzesimir Nowak
2019-06-25 20:12   ` Stanislav Fomichev
2019-06-26  9:10     ` Krzesimir Nowak
2019-06-26 16:12       ` Stanislav Fomichev
2019-07-08 16:51         ` Krzesimir Nowak
2019-06-25 19:42 ` [bpf-next v2 09/10] selftests/bpf: Add tests for bpf_prog_test_run for perf events progs Krzesimir Nowak
2019-06-25 19:42 ` [bpf-next v2 10/10] selftests/bpf: Test correctness of narrow 32bit read on 64bit field Krzesimir Nowak
