* [PATCH v4 bpf-next 00/19] BPF verifier rotating log
@ 2023-04-06 23:41 Andrii Nakryiko
  2023-04-06 23:41 ` [PATCH v4 bpf-next 01/19] bpf: split off basic BPF verifier log into separate file Andrii Nakryiko
                   ` (21 more replies)
  0 siblings, 22 replies; 33+ messages in thread
From: Andrii Nakryiko @ 2023-04-06 23:41 UTC (permalink / raw)
  To: bpf, ast, daniel, martin.lau, lmb, timo, robin.goegge; +Cc: andrii, kernel-team

This patch set changes the BPF verifier log to behave as a rotating log by
default. If the user-supplied log buffer is big enough to contain the entire
verifier log output, there is no effective difference. But where previously
a too-small user-supplied log buffer resulted in the -ENOSPC error and only
the beginning part of the verifier log, now the user will get the ending part
of the verifier log filling up the user-supplied log buffer (-ENOSPC is still
returned, for backwards compatibility). This is, in the absolute majority of
cases, exactly what's useful, relevant, and what users want and need, as the
ending of the verifier log contains the details of the verifier failure and
the relevant state that got us to that failure. So this rotating mode is made
the default, but for some niche advanced debugging scenarios it's possible to
request the old behavior by specifying the additional BPF_LOG_FIXED (8) flag.
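The user-visible difference can be sketched with a small user-space simulation
(illustrative only, not the kernel implementation): for a full log that doesn't
fit, fixed mode keeps the first len-1 bytes, while rotating mode keeps the last
len-1 bytes, zero-terminated in both cases.

```c
#include <assert.h>
#include <string.h>

/* Illustrative sketch of the user-visible difference between the old
 * fixed mode and the new rotating mode. `full_log` stands for what the
 * verifier would have emitted; `buf`/`buf_sz` is the user's log buffer.
 */
static void fill_log_buf(char *buf, size_t buf_sz, int fixed,
                         const char *full_log)
{
	size_t n = strlen(full_log);
	size_t keep = n < buf_sz ? n : buf_sz - 1;

	if (fixed || n < buf_sz)
		memcpy(buf, full_log, keep);            /* keep the beginning */
	else
		memcpy(buf, full_log + n - keep, keep); /* keep the ending */
	buf[keep] = '\0';
}
```

So with a 100-byte buffer and a 1000-byte log, both modes still report
truncation, but fixed mode hands back the first 99 bytes while rotating mode
hands back the last 99.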

This patch set adjusts libbpf to allow specifying log flags beyond 1 | 2 | 4.
We also add --log-size and --log-fixed options to veristat, both to be able to
test this functionality manually and for use in various debugging scenarios.
We also add selftests that try many variants of log buffer size to stress-test
the correctness of the internal verifier log bookkeeping code.

Further, this patch set is merged with the log_size_actual v1 patch set ([0]),
which adds the ability to get the log buffer size required to fit the entire
verifier log output.

This addresses a long-standing limitation that forced users and BPF loader
library writers to guess and pre-size the log buffer, often allocating
unnecessary extra memory or doing extra program verifications just to size
logs better, ultimately wasting resources. This was requested most recently by
Go BPF library maintainers ([1]).
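With this series applied, the two-call pattern looks roughly like the sketch
below (assuming the libbpf changes in patches 15-16; `fetch_verifier_log` is a
hypothetical helper, and error handling is trimmed): first load with
log_buf == NULL to learn the required size from log_true_size, then re-load
with an exactly-sized buffer.

```c
#include <stdlib.h>
#include <unistd.h>
#include <bpf/bpf.h>

/* Hypothetical helper: attempt a buffer-less load to learn the required
 * log size, then re-load with an exactly-sized buffer to capture the
 * complete verifier log.
 */
static char *fetch_verifier_log(enum bpf_prog_type type,
				const struct bpf_insn *insns,
				size_t insn_cnt)
{
	LIBBPF_OPTS(bpf_prog_load_opts, opts, .log_level = 1);
	char *buf;
	int fd;

	/* log_buf == NULL && log_size == 0 is now allowed with
	 * log_level > 0; the kernel still fills opts.log_true_size
	 */
	fd = bpf_prog_load(type, NULL, "GPL", insns, insn_cnt, &opts);
	if (fd >= 0)
		close(fd);

	buf = malloc(opts.log_true_size);
	if (!buf)
		return NULL;

	opts.log_buf = buf;
	opts.log_size = opts.log_true_size;
	fd = bpf_prog_load(type, NULL, "GPL", insns, insn_cnt, &opts);
	if (fd >= 0)
		close(fd);
	return buf; /* zero-terminated full verifier log */
}
```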

See respective patches for details. A number of them are drive-by fixes
detected while working with the code. Some others further refactor and
compartmentalize the verifier log handling code into kernel/bpf/log.c, which
should also make it simpler to integrate such a verbose log into other
complicated bpf() syscall commands, if necessary. The rest implement the
actual logic to calculate the maximum log buffer size needed and return it to
user-space. A few patches wire this up on the libbpf side, and the rest add
selftests to test proper log truncation and log_buf==NULL handling.

This turned into a pretty sizable patch set with lots of arithmetic, but
hopefully the set of features added to the verifier log is both useful for BPF
users and self-contained and isolated enough to not cause trouble going
forward.

v3->v4:
  - s/log_size_actual/log_true_size/ (Alexei);
  - log_buf==NULL && log_size==0 don't trigger -ENOSPC (Lorenz);
  - added WARN_ON_ONCE if we try bpf_vlog_reset() forward (Lorenz);
  - added selftests for truncation in BPF_LOG_FIXED mode;
  - fixed edge case in BPF_LOG_FIXED when log_size==1, leaving buf not zero
    terminated;
v2->v3:
  - typos and comment improvement (Lorenz);
  - merged with log_size_actual v1 ([0]) patch set (Alexei);
  - allowed log_buf==NULL condition (Lorenz);
  - added BPF_BTF_LOAD logs tests (Lorenz);
  - more clean up and refactoring of internal verifier log API;
v1->v2:
  - return -ENOSPC even in rotating log mode for preserving backwards
    compatibility (Lorenz);

  [0] https://patchwork.kernel.org/project/netdevbpf/list/?series=735213&state=*
  [1] https://lore.kernel.org/bpf/CAN+4W8iNoEbQzQVbB_o1W0MWBDV4xCJAq7K3f6psVE-kkCfMqg@mail.gmail.com/


Andrii Nakryiko (19):
  bpf: split off basic BPF verifier log into separate file
  bpf: remove minimum size restrictions on verifier log buffer
  bpf: switch BPF verifier log to be a rotating log by default
  libbpf: don't enforce unnecessary verifier log restrictions on libbpf
    side
  veristat: add more veristat control over verifier log options
  selftests/bpf: add fixed vs rotating verifier log tests
  bpf: ignore verifier log reset in BPF_LOG_KERNEL mode
  bpf: fix missing -EFAULT return on user log buf error in btf_parse()
  bpf: avoid incorrect -EFAULT error in BPF_LOG_KERNEL mode
  bpf: simplify logging-related error conditions handling
  bpf: keep track of total log content size in both fixed and rolling
    modes
  bpf: add log_true_size output field to return necessary log buffer
    size
  bpf: simplify internal verifier log interface
  bpf: relax log_buf NULL conditions when log_level>0 is requested
  libbpf: wire through log_true_size returned from kernel for
    BPF_PROG_LOAD
  libbpf: wire through log_true_size for bpf_btf_load() API
  selftests/bpf: add tests to validate log_true_size feature
  selftests/bpf: add testing of log_buf==NULL condition for
    BPF_PROG_LOAD
  selftests/bpf: add verifier log tests for BPF_BTF_LOAD command

 include/linux/bpf.h                           |   2 +-
 include/linux/bpf_verifier.h                  |  41 +-
 include/linux/btf.h                           |   2 +-
 include/uapi/linux/bpf.h                      |  10 +
 kernel/bpf/Makefile                           |   3 +-
 kernel/bpf/btf.c                              |  74 +--
 kernel/bpf/log.c                              | 330 +++++++++++++
 kernel/bpf/syscall.c                          |  16 +-
 kernel/bpf/verifier.c                         | 125 ++---
 tools/include/uapi/linux/bpf.h                |  12 +-
 tools/lib/bpf/bpf.c                           |  17 +-
 tools/lib/bpf/bpf.h                           |  22 +-
 .../selftests/bpf/prog_tests/log_fixup.c      |   1 +
 .../selftests/bpf/prog_tests/verifier_log.c   | 450 ++++++++++++++++++
 tools/testing/selftests/bpf/veristat.c        |  44 +-
 15 files changed, 965 insertions(+), 184 deletions(-)
 create mode 100644 kernel/bpf/log.c
 create mode 100644 tools/testing/selftests/bpf/prog_tests/verifier_log.c

-- 
2.34.1



* [PATCH v4 bpf-next 01/19] bpf: split off basic BPF verifier log into separate file
  2023-04-06 23:41 [PATCH v4 bpf-next 00/19] BPF verifier rotating log Andrii Nakryiko
@ 2023-04-06 23:41 ` Andrii Nakryiko
  2023-04-11 10:43   ` Lorenz Bauer
  2023-04-06 23:41 ` [PATCH v4 bpf-next 02/19] bpf: remove minimum size restrictions on verifier log buffer Andrii Nakryiko
                   ` (20 subsequent siblings)
  21 siblings, 1 reply; 33+ messages in thread
From: Andrii Nakryiko @ 2023-04-06 23:41 UTC (permalink / raw)
  To: bpf, ast, daniel, martin.lau, lmb, timo, robin.goegge; +Cc: andrii, kernel-team

The kernel/bpf/verifier.c file is large and growing larger all the time. So
it's good to start splitting off more or less self-contained parts into
separate files to keep source code size somewhat under control.

This patch is one step in this direction, moving some of the BPF verifier log
routines into a separate kernel/bpf/log.c. Right now these are mostly the
low-level and isolated routines to append data to the log, reset the log to
a previous position, etc. Eventually we could probably move the verifier state
printing logic here as well, but this patch doesn't attempt to do that yet.

Subsequent patches will add more logic to verifier log management, so
having basics in a separate file will make sure verifier.c doesn't grow
more with new changes.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
 include/linux/bpf_verifier.h | 19 +++-----
 kernel/bpf/Makefile          |  3 +-
 kernel/bpf/log.c             | 85 ++++++++++++++++++++++++++++++++++++
 kernel/bpf/verifier.c        | 69 -----------------------------
 4 files changed, 94 insertions(+), 82 deletions(-)
 create mode 100644 kernel/bpf/log.c

diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index 81d525d057c7..83dff25545ee 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -498,11 +498,6 @@ struct bpf_verifier_log {
 	u32 len_total;
 };
 
-static inline bool bpf_verifier_log_full(const struct bpf_verifier_log *log)
-{
-	return log->len_used >= log->len_total - 1;
-}
-
 #define BPF_LOG_LEVEL1	1
 #define BPF_LOG_LEVEL2	2
 #define BPF_LOG_STATS	4
@@ -512,6 +507,11 @@ static inline bool bpf_verifier_log_full(const struct bpf_verifier_log *log)
 #define BPF_LOG_MIN_ALIGNMENT 8U
 #define BPF_LOG_ALIGNMENT 40U
 
+static inline bool bpf_verifier_log_full(const struct bpf_verifier_log *log)
+{
+	return log->len_used >= log->len_total - 1;
+}
+
 static inline bool bpf_verifier_log_needed(const struct bpf_verifier_log *log)
 {
 	return log &&
@@ -519,13 +519,6 @@ static inline bool bpf_verifier_log_needed(const struct bpf_verifier_log *log)
 		 log->level == BPF_LOG_KERNEL);
 }
 
-static inline bool
-bpf_verifier_log_attr_valid(const struct bpf_verifier_log *log)
-{
-	return log->len_total >= 128 && log->len_total <= UINT_MAX >> 2 &&
-	       log->level && log->ubuf && !(log->level & ~BPF_LOG_MASK);
-}
-
 #define BPF_MAX_SUBPROGS 256
 
 struct bpf_subprog_info {
@@ -608,12 +601,14 @@ struct bpf_verifier_env {
 	char type_str_buf[TYPE_STR_BUF_LEN];
 };
 
+bool bpf_verifier_log_attr_valid(const struct bpf_verifier_log *log);
 __printf(2, 0) void bpf_verifier_vlog(struct bpf_verifier_log *log,
 				      const char *fmt, va_list args);
 __printf(2, 3) void bpf_verifier_log_write(struct bpf_verifier_env *env,
 					   const char *fmt, ...);
 __printf(2, 3) void bpf_log(struct bpf_verifier_log *log,
 			    const char *fmt, ...);
+void bpf_vlog_reset(struct bpf_verifier_log *log, u32 new_pos);
 
 static inline struct bpf_func_state *cur_func(struct bpf_verifier_env *env)
 {
diff --git a/kernel/bpf/Makefile b/kernel/bpf/Makefile
index 02242614dcc7..1d3892168d32 100644
--- a/kernel/bpf/Makefile
+++ b/kernel/bpf/Makefile
@@ -6,7 +6,8 @@ cflags-nogcse-$(CONFIG_X86)$(CONFIG_CC_IS_GCC) := -fno-gcse
 endif
 CFLAGS_core.o += $(call cc-disable-warning, override-init) $(cflags-nogcse-yy)
 
-obj-$(CONFIG_BPF_SYSCALL) += syscall.o verifier.o inode.o helpers.o tnum.o bpf_iter.o map_iter.o task_iter.o prog_iter.o link_iter.o
+obj-$(CONFIG_BPF_SYSCALL) += syscall.o verifier.o inode.o helpers.o tnum.o log.o
+obj-$(CONFIG_BPF_SYSCALL) += bpf_iter.o map_iter.o task_iter.o prog_iter.o link_iter.o
 obj-$(CONFIG_BPF_SYSCALL) += hashtab.o arraymap.o percpu_freelist.o bpf_lru_list.o lpm_trie.o map_in_map.o bloom_filter.o
 obj-$(CONFIG_BPF_SYSCALL) += local_storage.o queue_stack_maps.o ringbuf.o
 obj-$(CONFIG_BPF_SYSCALL) += bpf_local_storage.o bpf_task_storage.o
diff --git a/kernel/bpf/log.c b/kernel/bpf/log.c
new file mode 100644
index 000000000000..920061e38d2e
--- /dev/null
+++ b/kernel/bpf/log.c
@@ -0,0 +1,85 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright (c) 2011-2014 PLUMgrid, http://plumgrid.com
+ * Copyright (c) 2016 Facebook
+ * Copyright (c) 2018 Covalent IO, Inc. http://covalent.io
+ */
+#include <uapi/linux/btf.h>
+#include <linux/kernel.h>
+#include <linux/types.h>
+#include <linux/bpf.h>
+#include <linux/bpf_verifier.h>
+
+bool bpf_verifier_log_attr_valid(const struct bpf_verifier_log *log)
+{
+	return log->len_total >= 128 && log->len_total <= UINT_MAX >> 2 &&
+	       log->level && log->ubuf && !(log->level & ~BPF_LOG_MASK);
+}
+
+void bpf_verifier_vlog(struct bpf_verifier_log *log, const char *fmt,
+		       va_list args)
+{
+	unsigned int n;
+
+	n = vscnprintf(log->kbuf, BPF_VERIFIER_TMP_LOG_SIZE, fmt, args);
+
+	WARN_ONCE(n >= BPF_VERIFIER_TMP_LOG_SIZE - 1,
+		  "verifier log line truncated - local buffer too short\n");
+
+	if (log->level == BPF_LOG_KERNEL) {
+		bool newline = n > 0 && log->kbuf[n - 1] == '\n';
+
+		pr_err("BPF: %s%s", log->kbuf, newline ? "" : "\n");
+		return;
+	}
+
+	n = min(log->len_total - log->len_used - 1, n);
+	log->kbuf[n] = '\0';
+	if (!copy_to_user(log->ubuf + log->len_used, log->kbuf, n + 1))
+		log->len_used += n;
+	else
+		log->ubuf = NULL;
+}
+
+void bpf_vlog_reset(struct bpf_verifier_log *log, u32 new_pos)
+{
+	char zero = 0;
+
+	if (!bpf_verifier_log_needed(log))
+		return;
+
+	log->len_used = new_pos;
+	if (put_user(zero, log->ubuf + new_pos))
+		log->ubuf = NULL;
+}
+
+/* log_level controls verbosity level of eBPF verifier.
+ * bpf_verifier_log_write() is used to dump the verification trace to the log,
+ * so the user can figure out what's wrong with the program
+ */
+__printf(2, 3) void bpf_verifier_log_write(struct bpf_verifier_env *env,
+					   const char *fmt, ...)
+{
+	va_list args;
+
+	if (!bpf_verifier_log_needed(&env->log))
+		return;
+
+	va_start(args, fmt);
+	bpf_verifier_vlog(&env->log, fmt, args);
+	va_end(args);
+}
+EXPORT_SYMBOL_GPL(bpf_verifier_log_write);
+
+__printf(2, 3) void bpf_log(struct bpf_verifier_log *log,
+			    const char *fmt, ...)
+{
+	va_list args;
+
+	if (!bpf_verifier_log_needed(log))
+		return;
+
+	va_start(args, fmt);
+	bpf_verifier_vlog(log, fmt, args);
+	va_end(args);
+}
+EXPORT_SYMBOL_GPL(bpf_log);
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 56f569811f70..2fe2beb5698f 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -335,61 +335,6 @@ find_linfo(const struct bpf_verifier_env *env, u32 insn_off)
 	return &linfo[i - 1];
 }
 
-void bpf_verifier_vlog(struct bpf_verifier_log *log, const char *fmt,
-		       va_list args)
-{
-	unsigned int n;
-
-	n = vscnprintf(log->kbuf, BPF_VERIFIER_TMP_LOG_SIZE, fmt, args);
-
-	WARN_ONCE(n >= BPF_VERIFIER_TMP_LOG_SIZE - 1,
-		  "verifier log line truncated - local buffer too short\n");
-
-	if (log->level == BPF_LOG_KERNEL) {
-		bool newline = n > 0 && log->kbuf[n - 1] == '\n';
-
-		pr_err("BPF: %s%s", log->kbuf, newline ? "" : "\n");
-		return;
-	}
-
-	n = min(log->len_total - log->len_used - 1, n);
-	log->kbuf[n] = '\0';
-	if (!copy_to_user(log->ubuf + log->len_used, log->kbuf, n + 1))
-		log->len_used += n;
-	else
-		log->ubuf = NULL;
-}
-
-static void bpf_vlog_reset(struct bpf_verifier_log *log, u32 new_pos)
-{
-	char zero = 0;
-
-	if (!bpf_verifier_log_needed(log))
-		return;
-
-	log->len_used = new_pos;
-	if (put_user(zero, log->ubuf + new_pos))
-		log->ubuf = NULL;
-}
-
-/* log_level controls verbosity level of eBPF verifier.
- * bpf_verifier_log_write() is used to dump the verification trace to the log,
- * so the user can figure out what's wrong with the program
- */
-__printf(2, 3) void bpf_verifier_log_write(struct bpf_verifier_env *env,
-					   const char *fmt, ...)
-{
-	va_list args;
-
-	if (!bpf_verifier_log_needed(&env->log))
-		return;
-
-	va_start(args, fmt);
-	bpf_verifier_vlog(&env->log, fmt, args);
-	va_end(args);
-}
-EXPORT_SYMBOL_GPL(bpf_verifier_log_write);
-
 __printf(2, 3) static void verbose(void *private_data, const char *fmt, ...)
 {
 	struct bpf_verifier_env *env = private_data;
@@ -403,20 +348,6 @@ __printf(2, 3) static void verbose(void *private_data, const char *fmt, ...)
 	va_end(args);
 }
 
-__printf(2, 3) void bpf_log(struct bpf_verifier_log *log,
-			    const char *fmt, ...)
-{
-	va_list args;
-
-	if (!bpf_verifier_log_needed(log))
-		return;
-
-	va_start(args, fmt);
-	bpf_verifier_vlog(log, fmt, args);
-	va_end(args);
-}
-EXPORT_SYMBOL_GPL(bpf_log);
-
 static const char *ltrim(const char *s)
 {
 	while (isspace(*s))
-- 
2.34.1



* [PATCH v4 bpf-next 02/19] bpf: remove minimum size restrictions on verifier log buffer
  2023-04-06 23:41 [PATCH v4 bpf-next 00/19] BPF verifier rotating log Andrii Nakryiko
  2023-04-06 23:41 ` [PATCH v4 bpf-next 01/19] bpf: split off basic BPF verifier log into separate file Andrii Nakryiko
@ 2023-04-06 23:41 ` Andrii Nakryiko
  2023-04-06 23:41 ` [PATCH v4 bpf-next 03/19] bpf: switch BPF verifier log to be a rotating log by default Andrii Nakryiko
                   ` (19 subsequent siblings)
  21 siblings, 0 replies; 33+ messages in thread
From: Andrii Nakryiko @ 2023-04-06 23:41 UTC (permalink / raw)
  To: bpf, ast, daniel, martin.lau, lmb, timo, robin.goegge; +Cc: andrii, kernel-team

It's not clear why we have 128 as the minimum size, but it makes testing
harder and seems unnecessary, as we carefully handle truncation scenarios and
use proper snprintf variants. So remove this limitation and just enforce
a positive length for the log buffer.

Acked-by: Lorenz Bauer <lmb@isovalent.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
 kernel/bpf/log.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/bpf/log.c b/kernel/bpf/log.c
index 920061e38d2e..1974891fc324 100644
--- a/kernel/bpf/log.c
+++ b/kernel/bpf/log.c
@@ -11,7 +11,7 @@
 
 bool bpf_verifier_log_attr_valid(const struct bpf_verifier_log *log)
 {
-	return log->len_total >= 128 && log->len_total <= UINT_MAX >> 2 &&
+	return log->len_total > 0 && log->len_total <= UINT_MAX >> 2 &&
 	       log->level && log->ubuf && !(log->level & ~BPF_LOG_MASK);
 }
 
-- 
2.34.1



* [PATCH v4 bpf-next 03/19] bpf: switch BPF verifier log to be a rotating log by default
  2023-04-06 23:41 [PATCH v4 bpf-next 00/19] BPF verifier rotating log Andrii Nakryiko
  2023-04-06 23:41 ` [PATCH v4 bpf-next 01/19] bpf: split off basic BPF verifier log into separate file Andrii Nakryiko
  2023-04-06 23:41 ` [PATCH v4 bpf-next 02/19] bpf: remove minimum size restrictions on verifier log buffer Andrii Nakryiko
@ 2023-04-06 23:41 ` Andrii Nakryiko
  2023-04-11 10:28   ` Lorenz Bauer
  2023-04-06 23:41 ` [PATCH v4 bpf-next 04/19] libbpf: don't enforce unnecessary verifier log restrictions on libbpf side Andrii Nakryiko
                   ` (18 subsequent siblings)
  21 siblings, 1 reply; 33+ messages in thread
From: Andrii Nakryiko @ 2023-04-06 23:41 UTC (permalink / raw)
  To: bpf, ast, daniel, martin.lau, lmb, timo, robin.goegge; +Cc: andrii, kernel-team

Currently, if the user-supplied log buffer to collect the BPF verifier log
turns out to be too small to contain the full log, the bpf() syscall returns
-ENOSPC, fails BPF program verification/load, and preserves the first N-1
bytes of the verifier log (where N is the size of the user-supplied buffer).

This is problematic in a bunch of common scenarios, especially when working
with real-world BPF programs that tend to be pretty complex as far as
verification goes and require big log buffers. Typically, this comes up when
debugging tricky cases at log level 2 (verbose). Also, when a BPF program is
successfully validated, log level 2 is the only way to actually see the
verifier state progression and all the important details.

Even with log level 1, it's possible to get -ENOSPC even if the final
verifier log fits in the log buffer, if there is a code path deep enough to
fill up the entire log, even though it would normally be reset later on
(there is logic to chop off successfully validated portions of the BPF
verifier log).

In short, it's not always possible to pre-size the log buffer. What's worse,
in practice the end of the log is most often way more important than the
beginning, but the verifier stops emitting the log as soon as the initial
log buffer is filled up.

This patch switches BPF verifier log behavior to effectively behave as
a rotating log. That is, if the user-supplied log buffer turns out to be too
short, the verifier will keep overwriting previously written log entries,
effectively treating the user's log buffer as a ring buffer. -ENOSPC is
still going to be returned at the end, to notify the user that the log
contents were truncated, but the important last N bytes of the log will be
returned, which might be all the user really needs. This consistent
-ENOSPC behavior, regardless of rotating or fixed log mode, prevents
breaking backwards compatibility. The only user-visible change is which
portion of the verifier log the user ends up seeing *if the buffer is too
small*. Given that the contents of the verifier log are not an ABI, there is
no breakage due to this behavior change. Specialized tools that rely on
specific contents of the verifier log in the -ENOSPC scenario are expected to
be easily adapted to accommodate both the old and new behaviors.

Importantly, though, to preserve a good user experience and not require
every user-space application to adapt to this new behavior, before exiting
to user-space the verifier will rotate the log (in place) so that it starts
at the very beginning of the user buffer as a continuous zero-terminated
string. The contents will be the last N-1 bytes of the full verifier log,
of course.

Given beginning of log is sometimes important as well, we add
BPF_LOG_FIXED (which equals 8) flag to force old behavior, which allows
tools like veristat to request first part of verifier log, if necessary.
BPF_LOG_FIXED flag is also a simple and straightforward way to check if
BPF verifier supports rotating behavior.

On the implementation side, conceptually, it's all simple. We maintain
64-bit logical start and end positions. If we need to truncate the log,
start position will be adjusted accordingly to lag end position by
N bytes. We then use those logical positions to calculate their matching
actual positions in user buffer and handle wrap around the end of the
buffer properly. Finally, right before returning from bpf_check(), we
rotate user log buffer contents in-place as necessary, to make log
contents contiguous. See comments in relevant functions for details.
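The in-place rotation mentioned above is the classic three-reversal trick;
a minimal user-space sketch on a plain char array (the kernel version performs
the same reversals over the user buffer via log->kbuf chunks):

```c
#include <assert.h>
#include <string.h>

/* Reverse len bytes of buf in place. */
static void reverse(char *buf, size_t len)
{
	for (size_t i = 0; i + 1 < len - i; i++) {
		char tmp = buf[i];
		buf[i] = buf[len - 1 - i];
		buf[len - 1 - i] = tmp;
	}
}

/* Shift buf left by `shift` bytes using three reversals:
 *   EFGHIJK|ABCD  --reverse all-->  DCBA|KJIHGFE
 *   reverse first 4 and last 7  -->  ABCD|EFGHIJK
 */
static void rotate_left(char *buf, size_t len, size_t shift)
{
	size_t sublen = len - shift; /* bytes that end up at the front */

	reverse(buf, len);
	reverse(buf, sublen);
	reverse(buf + sublen, len - sublen);
}
```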

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
 include/linux/bpf_verifier.h                  |  33 ++-
 kernel/bpf/btf.c                              |   3 +-
 kernel/bpf/log.c                              | 198 +++++++++++++++++-
 kernel/bpf/verifier.c                         |  19 +-
 .../selftests/bpf/prog_tests/log_fixup.c      |   1 +
 5 files changed, 228 insertions(+), 26 deletions(-)

diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index 83dff25545ee..4c926227f612 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -491,25 +491,42 @@ struct bpf_insn_aux_data {
 #define BPF_VERIFIER_TMP_LOG_SIZE	1024
 
 struct bpf_verifier_log {
-	u32 level;
-	char kbuf[BPF_VERIFIER_TMP_LOG_SIZE];
+	/* Logical start and end positions of a "log window" of the verifier log.
+	 * start_pos == 0 means we haven't truncated anything.
+	 * Once truncation starts to happen, start_pos + len_total == end_pos,
+	 * except during log reset situations, in which (end_pos - start_pos)
+	 * might get smaller than len_total (see bpf_vlog_reset()).
+	 * Generally, (end_pos - start_pos) gives number of useful data in
+	 * user log buffer.
+	 */
+	u64 start_pos;
+	u64 end_pos;
 	char __user *ubuf;
-	u32 len_used;
+	u32 level;
 	u32 len_total;
+	char kbuf[BPF_VERIFIER_TMP_LOG_SIZE];
 };
 
 #define BPF_LOG_LEVEL1	1
 #define BPF_LOG_LEVEL2	2
 #define BPF_LOG_STATS	4
+#define BPF_LOG_FIXED	8
 #define BPF_LOG_LEVEL	(BPF_LOG_LEVEL1 | BPF_LOG_LEVEL2)
-#define BPF_LOG_MASK	(BPF_LOG_LEVEL | BPF_LOG_STATS)
+#define BPF_LOG_MASK	(BPF_LOG_LEVEL | BPF_LOG_STATS | BPF_LOG_FIXED)
 #define BPF_LOG_KERNEL	(BPF_LOG_MASK + 1) /* kernel internal flag */
 #define BPF_LOG_MIN_ALIGNMENT 8U
 #define BPF_LOG_ALIGNMENT 40U
 
+static inline u32 bpf_log_used(const struct bpf_verifier_log *log)
+{
+	return log->end_pos - log->start_pos;
+}
+
 static inline bool bpf_verifier_log_full(const struct bpf_verifier_log *log)
 {
-	return log->len_used >= log->len_total - 1;
+	if (log->level & BPF_LOG_FIXED)
+		return bpf_log_used(log) >= log->len_total - 1;
+	return false;
 }
 
 static inline bool bpf_verifier_log_needed(const struct bpf_verifier_log *log)
@@ -596,7 +613,7 @@ struct bpf_verifier_env {
 	u32 scratched_regs;
 	/* Same as scratched_regs but for stack slots */
 	u64 scratched_stack_slots;
-	u32 prev_log_len, prev_insn_print_len;
+	u64 prev_log_pos, prev_insn_print_pos;
 	/* buffer used in reg_type_str() to generate reg_type string */
 	char type_str_buf[TYPE_STR_BUF_LEN];
 };
@@ -608,7 +625,9 @@ __printf(2, 3) void bpf_verifier_log_write(struct bpf_verifier_env *env,
 					   const char *fmt, ...);
 __printf(2, 3) void bpf_log(struct bpf_verifier_log *log,
 			    const char *fmt, ...);
-void bpf_vlog_reset(struct bpf_verifier_log *log, u32 new_pos);
+void bpf_vlog_reset(struct bpf_verifier_log *log, u64 new_pos);
+void bpf_vlog_finalize(struct bpf_verifier_log *log);
+bool bpf_vlog_truncated(const struct bpf_verifier_log *log);
 
 static inline struct bpf_func_state *cur_func(struct bpf_verifier_env *env)
 {
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index 593c45a294d0..20a05b8932db 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -5593,7 +5593,8 @@ static struct btf *btf_parse(bpfptr_t btf_data, u32 btf_data_size,
 		}
 	}
 
-	if (log->level && bpf_verifier_log_full(log)) {
+	bpf_vlog_finalize(log);
+	if (log->level && bpf_vlog_truncated(log)) {
 		err = -ENOSPC;
 		goto errout_meta;
 	}
diff --git a/kernel/bpf/log.c b/kernel/bpf/log.c
index 1974891fc324..92b1c8ad6601 100644
--- a/kernel/bpf/log.c
+++ b/kernel/bpf/log.c
@@ -8,6 +8,7 @@
 #include <linux/types.h>
 #include <linux/bpf.h>
 #include <linux/bpf_verifier.h>
+#include <linux/math64.h>
 
 bool bpf_verifier_log_attr_valid(const struct bpf_verifier_log *log)
 {
@@ -32,23 +33,202 @@ void bpf_verifier_vlog(struct bpf_verifier_log *log, const char *fmt,
 		return;
 	}
 
-	n = min(log->len_total - log->len_used - 1, n);
-	log->kbuf[n] = '\0';
-	if (!copy_to_user(log->ubuf + log->len_used, log->kbuf, n + 1))
-		log->len_used += n;
-	else
-		log->ubuf = NULL;
+	if (log->level & BPF_LOG_FIXED) {
+		n = min(log->len_total - bpf_log_used(log) - 1, n);
+		log->kbuf[n] = '\0';
+		n += 1;
+
+		if (copy_to_user(log->ubuf + log->end_pos, log->kbuf, n))
+			goto fail;
+
+		log->end_pos += n - 1; /* don't count terminating '\0' */
+	} else {
+		u64 new_end, new_start, cur_pos;
+		u32 buf_start, buf_end, new_n;
+
+		n += 1;
+
+		new_end = log->end_pos + n;
+		if (new_end - log->start_pos >= log->len_total)
+			new_start = new_end - log->len_total;
+		else
+			new_start = log->start_pos;
+		new_n = min(n, log->len_total);
+		cur_pos = new_end - new_n;
+
+		div_u64_rem(cur_pos, log->len_total, &buf_start);
+		div_u64_rem(new_end, log->len_total, &buf_end);
+		/* new_end and buf_end are exclusive indices, so if buf_end is
+		 * exactly zero, then it actually points right to the end of
+		 * ubuf and there is no wrap around
+		 */
+		if (buf_end == 0)
+			buf_end = log->len_total;
+
+		/* if buf_start > buf_end, we wrapped around;
+		 * if buf_start == buf_end, then we fill ubuf completely; we
+		 * can't have buf_start == buf_end to mean that there is
+		 * nothing to write, because we always write at least
+		 * something, even if terminal '\0'
+		 */
+		if (buf_start < buf_end) {
+			/* message fits within contiguous chunk of ubuf */
+			if (copy_to_user(log->ubuf + buf_start,
+					 log->kbuf + n - new_n,
+					 buf_end - buf_start))
+				goto fail;
+		} else {
+			/* message wraps around the end of ubuf, copy in two chunks */
+			if (copy_to_user(log->ubuf + buf_start,
+					 log->kbuf + n - new_n,
+					 log->len_total - buf_start))
+				goto fail;
+			if (copy_to_user(log->ubuf,
+					 log->kbuf + n - buf_end,
+					 buf_end))
+				goto fail;
+		}
+
+		log->start_pos = new_start;
+		log->end_pos = new_end - 1; /* don't count terminating '\0' */
+	}
+
+	return;
+fail:
+	log->ubuf = NULL;
 }
 
-void bpf_vlog_reset(struct bpf_verifier_log *log, u32 new_pos)
+void bpf_vlog_reset(struct bpf_verifier_log *log, u64 new_pos)
 {
 	char zero = 0;
+	u32 pos;
+
+	if (WARN_ON_ONCE(new_pos > log->end_pos))
+		return;
 
 	if (!bpf_verifier_log_needed(log))
 		return;
 
-	log->len_used = new_pos;
-	if (put_user(zero, log->ubuf + new_pos))
+	/* if position to which we reset is beyond current log window,
+	 * then we didn't preserve any useful content and should adjust
+	 * start_pos to end up with an empty log (start_pos == end_pos)
+	 */
+	log->end_pos = new_pos;
+	if (log->end_pos < log->start_pos)
+		log->start_pos = log->end_pos;
+	div_u64_rem(new_pos, log->len_total, &pos);
+	if (put_user(zero, log->ubuf + pos))
+		log->ubuf = NULL;
+}
+
+static void bpf_vlog_reverse_kbuf(char *buf, int len)
+{
+	int i, j;
+
+	for (i = 0, j = len - 1; i < j; i++, j--)
+		swap(buf[i], buf[j]);
+}
+
+static int bpf_vlog_reverse_ubuf(struct bpf_verifier_log *log, int start, int end)
+{
+	/* we split log->kbuf into two equal parts for both ends of array */
+	int n = sizeof(log->kbuf) / 2, nn;
+	char *lbuf = log->kbuf, *rbuf = log->kbuf + n;
+
+	/* Read ubuf's section [start, end) two chunks at a time, from left
+	 * and right side; within each chunk, swap all the bytes; after that
+	 * reverse the order of lbuf and rbuf and write result back to ubuf.
+	 * This way we'll end up with swapped contents of specified
+	 * [start, end) ubuf segment.
+	 */
+	while (end - start > 1) {
+		nn = min(n, (end - start) / 2);
+
+		if (copy_from_user(lbuf, log->ubuf + start, nn))
+			return -EFAULT;
+		if (copy_from_user(rbuf, log->ubuf + end - nn, nn))
+			return -EFAULT;
+
+		bpf_vlog_reverse_kbuf(lbuf, nn);
+		bpf_vlog_reverse_kbuf(rbuf, nn);
+
+		/* we write lbuf to the right end of ubuf, while rbuf to the
+		 * left one to end up with properly reversed overall ubuf
+		 */
+		if (copy_to_user(log->ubuf + start, rbuf, nn))
+			return -EFAULT;
+		if (copy_to_user(log->ubuf + end - nn, lbuf, nn))
+			return -EFAULT;
+
+		start += nn;
+		end -= nn;
+	}
+
+	return 0;
+}
+
+bool bpf_vlog_truncated(const struct bpf_verifier_log *log)
+{
+	if (log->level & BPF_LOG_FIXED)
+		return bpf_log_used(log) >= log->len_total - 1;
+	else
+		return log->start_pos > 0;
+}
+
+void bpf_vlog_finalize(struct bpf_verifier_log *log)
+{
+	u32 sublen;
+	int err;
+
+	if (!log || !log->level || !log->ubuf)
+		return;
+	if ((log->level & BPF_LOG_FIXED) || log->level == BPF_LOG_KERNEL)
+		return;
+
+	/* If we never truncated log, there is nothing to move around. */
+	if (log->start_pos == 0)
+		return;
+
+	/* Otherwise we need to rotate log contents to make it start from the
+	 * buffer beginning and be a continuous zero-terminated string. Note
+	 * that if log->start_pos != 0 then we definitely filled up entire log
+	 * buffer with no gaps, and we just need to shift buffer contents to
+	 * the left by (log->start_pos % log->len_total) bytes.
+	 *
+	 * Unfortunately, user buffer could be huge and we don't want to
+	 * allocate temporary kernel memory of the same size just to shift
+	 * contents in a straightforward fashion. Instead, we'll be clever and
+	 * do in-place array rotation. This is a leetcode-style problem, which
+	 * could be solved by three rotations.
+	 *
+	 * Let's say we have log buffer that has to be shifted left by 7 bytes
+	 * (spaces and vertical bar is just for demonstrative purposes):
+	 *   E F G H I J K | A B C D
+	 *
+	 * First, we reverse entire array:
+	 *   D C B A | K J I H G F E
+	 *
+	 * Then we rotate first 4 bytes (DCBA) and separately last 7 bytes
+	 * (KJIHGFE), resulting in a properly rotated array:
+	 *   A B C D | E F G H I J K
+	 *
+	 * We'll utilize log->kbuf to read user memory chunk by chunk, swap
+	 * bytes, and write them back. Doing it byte-by-byte would be
+	 * unnecessarily inefficient. Altogether we are going to read and
+	 * write each byte twice, for total 4 memory copies between kernel and
+	 * user space.
+	 */
+
+	/* length of the chopped off part that will be the beginning;
+	 * len(ABCD) in the example above
+	 */
+	div_u64_rem(log->start_pos, log->len_total, &sublen);
+	sublen = log->len_total - sublen;
+
+	err = bpf_vlog_reverse_ubuf(log, 0, log->len_total);
+	err = err ?: bpf_vlog_reverse_ubuf(log, 0, sublen);
+	err = err ?: bpf_vlog_reverse_ubuf(log, sublen, log->len_total);
+	if (err)
 		log->ubuf = NULL;
 }
 
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 2fe2beb5698f..1ff5f6e2a9ee 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -1439,10 +1439,10 @@ static inline u32 vlog_alignment(u32 pos)
 static void print_insn_state(struct bpf_verifier_env *env,
 			     const struct bpf_func_state *state)
 {
-	if (env->prev_log_len && env->prev_log_len == env->log.len_used) {
+	if (env->prev_log_pos && env->prev_log_pos == env->log.end_pos) {
 		/* remove new line character */
-		bpf_vlog_reset(&env->log, env->prev_log_len - 1);
-		verbose(env, "%*c;", vlog_alignment(env->prev_insn_print_len), ' ');
+		bpf_vlog_reset(&env->log, env->prev_log_pos - 1);
+		verbose(env, "%*c;", vlog_alignment(env->prev_insn_print_pos), ' ');
 	} else {
 		verbose(env, "%d:", env->insn_idx);
 	}
@@ -1750,7 +1750,7 @@ static struct bpf_verifier_state *push_stack(struct bpf_verifier_env *env,
 	elem->insn_idx = insn_idx;
 	elem->prev_insn_idx = prev_insn_idx;
 	elem->next = env->head;
-	elem->log_pos = env->log.len_used;
+	elem->log_pos = env->log.end_pos;
 	env->head = elem;
 	env->stack_size++;
 	err = copy_verifier_state(&elem->st, cur);
@@ -2286,7 +2286,7 @@ static struct bpf_verifier_state *push_async_cb(struct bpf_verifier_env *env,
 	elem->insn_idx = insn_idx;
 	elem->prev_insn_idx = prev_insn_idx;
 	elem->next = env->head;
-	elem->log_pos = env->log.len_used;
+	elem->log_pos = env->log.end_pos;
 	env->head = elem;
 	env->stack_size++;
 	if (env->stack_size > BPF_COMPLEXITY_LIMIT_JMP_SEQ) {
@@ -15618,11 +15618,11 @@ static int do_check(struct bpf_verifier_env *env)
 				print_insn_state(env, state->frame[state->curframe]);
 
 			verbose_linfo(env, env->insn_idx, "; ");
-			env->prev_log_len = env->log.len_used;
+			env->prev_log_pos = env->log.end_pos;
 			verbose(env, "%d: ", env->insn_idx);
 			print_bpf_insn(&cbs, insn, env->allow_ptr_leaks);
-			env->prev_insn_print_len = env->log.len_used - env->prev_log_len;
-			env->prev_log_len = env->log.len_used;
+			env->prev_insn_print_pos = env->log.end_pos - env->prev_log_pos;
+			env->prev_log_pos = env->log.end_pos;
 		}
 
 		if (bpf_prog_is_offloaded(env->prog->aux)) {
@@ -18840,7 +18840,8 @@ int bpf_check(struct bpf_prog **prog, union bpf_attr *attr, bpfptr_t uattr)
 	print_verification_stats(env);
 	env->prog->aux->verified_insns = env->insn_processed;
 
-	if (log->level && bpf_verifier_log_full(log))
+	bpf_vlog_finalize(log);
+	if (log->level && bpf_vlog_truncated(log))
 		ret = -ENOSPC;
 	if (log->level && !log->ubuf) {
 		ret = -EFAULT;
diff --git a/tools/testing/selftests/bpf/prog_tests/log_fixup.c b/tools/testing/selftests/bpf/prog_tests/log_fixup.c
index 239e1c5753b0..bc27170bdeb0 100644
--- a/tools/testing/selftests/bpf/prog_tests/log_fixup.c
+++ b/tools/testing/selftests/bpf/prog_tests/log_fixup.c
@@ -24,6 +24,7 @@ static void bad_core_relo(size_t log_buf_size, enum trunc_type trunc_type)
 	bpf_program__set_autoload(skel->progs.bad_relo, true);
 	memset(log_buf, 0, sizeof(log_buf));
 	bpf_program__set_log_buf(skel->progs.bad_relo, log_buf, log_buf_size ?: sizeof(log_buf));
+	bpf_program__set_log_level(skel->progs.bad_relo, 1 | 8); /* BPF_LOG_FIXED to force truncation */
 
 	err = test_log_fixup__load(skel);
 	if (!ASSERT_ERR(err, "load_fail"))
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH v4 bpf-next 04/19] libbpf: don't enforce unnecessary verifier log restrictions on libbpf side
  2023-04-06 23:41 [PATCH v4 bpf-next 00/19] BPF verifier rotating log Andrii Nakryiko
                   ` (2 preceding siblings ...)
  2023-04-06 23:41 ` [PATCH v4 bpf-next 03/19] bpf: switch BPF verifier log to be a rotating log by default Andrii Nakryiko
@ 2023-04-06 23:41 ` Andrii Nakryiko
  2023-04-11 10:44   ` Lorenz Bauer
  2023-04-06 23:41 ` [PATCH v4 bpf-next 05/19] veristat: add more veristat control over verifier log options Andrii Nakryiko
                   ` (17 subsequent siblings)
  21 siblings, 1 reply; 33+ messages in thread
From: Andrii Nakryiko @ 2023-04-06 23:41 UTC (permalink / raw)
  To: bpf, ast, daniel, martin.lau, lmb, timo, robin.goegge; +Cc: andrii, kernel-team

This check basically prevents any forward compatibility with new log
flags. And either way it just returns -EINVAL, which would otherwise be
returned from the bpf() syscall anyway.

Similarly, drop enforcement of a non-NULL log_buf when log_level > 0;
this invariant won't hold anymore soon.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
 tools/lib/bpf/bpf.c | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/tools/lib/bpf/bpf.c b/tools/lib/bpf/bpf.c
index 767035900354..f1d04ee14d83 100644
--- a/tools/lib/bpf/bpf.c
+++ b/tools/lib/bpf/bpf.c
@@ -290,10 +290,6 @@ int bpf_prog_load(enum bpf_prog_type prog_type,
 
 	if (!!log_buf != !!log_size)
 		return libbpf_err(-EINVAL);
-	if (log_level > (4 | 2 | 1))
-		return libbpf_err(-EINVAL);
-	if (log_level && !log_buf)
-		return libbpf_err(-EINVAL);
 
 	func_info_rec_size = OPTS_GET(opts, func_info_rec_size, 0);
 	func_info = OPTS_GET(opts, func_info, NULL);
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH v4 bpf-next 05/19] veristat: add more veristat control over verifier log options
  2023-04-06 23:41 [PATCH v4 bpf-next 00/19] BPF verifier rotating log Andrii Nakryiko
                   ` (3 preceding siblings ...)
  2023-04-06 23:41 ` [PATCH v4 bpf-next 04/19] libbpf: don't enforce unnecessary verifier log restrictions on libbpf side Andrii Nakryiko
@ 2023-04-06 23:41 ` Andrii Nakryiko
  2023-04-06 23:41 ` [PATCH v4 bpf-next 06/19] selftests/bpf: add fixed vs rotating verifier log tests Andrii Nakryiko
                   ` (16 subsequent siblings)
  21 siblings, 0 replies; 33+ messages in thread
From: Andrii Nakryiko @ 2023-04-06 23:41 UTC (permalink / raw)
  To: bpf, ast, daniel, martin.lau, lmb, timo, robin.goegge; +Cc: andrii, kernel-team

Add --log-size to be able to customize log buffer sent to bpf() syscall
for BPF program verification logging.

Add --log-fixed to enforce BPF_LOG_FIXED behavior for BPF verifier log.
This is useful in the unlikely event that the beginning of a truncated
verifier log is more important than its end (with rotating verifier log
behavior, keeping the end is now the default).

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
 tools/testing/selftests/bpf/veristat.c | 44 ++++++++++++++++++++------
 1 file changed, 35 insertions(+), 9 deletions(-)

diff --git a/tools/testing/selftests/bpf/veristat.c b/tools/testing/selftests/bpf/veristat.c
index 53d7ec168268..af6d7a555dbd 100644
--- a/tools/testing/selftests/bpf/veristat.c
+++ b/tools/testing/selftests/bpf/veristat.c
@@ -141,12 +141,15 @@ static struct env {
 	bool verbose;
 	bool debug;
 	bool quiet;
-	int log_level;
 	enum resfmt out_fmt;
 	bool show_version;
 	bool comparison_mode;
 	bool replay_mode;
 
+	int log_level;
+	int log_size;
+	bool log_fixed;
+
 	struct verif_stats *prog_stats;
 	int prog_stat_cnt;
 
@@ -193,12 +196,19 @@ const char argp_program_doc[] =
 "   OR: veristat -C <baseline.csv> <comparison.csv>\n"
 "   OR: veristat -R <results.csv>\n";
 
+enum {
+	OPT_LOG_FIXED = 1000,
+	OPT_LOG_SIZE = 1001,
+};
+
 static const struct argp_option opts[] = {
 	{ NULL, 'h', NULL, OPTION_HIDDEN, "Show the full help" },
 	{ "version", 'V', NULL, 0, "Print version" },
 	{ "verbose", 'v', NULL, 0, "Verbose mode" },
-	{ "log-level", 'l', "LEVEL", 0, "Verifier log level (default 0 for normal mode, 1 for verbose mode)" },
 	{ "debug", 'd', NULL, 0, "Debug mode (turns on libbpf debug logging)" },
+	{ "log-level", 'l', "LEVEL", 0, "Verifier log level (default 0 for normal mode, 1 for verbose mode)" },
+	{ "log-fixed", OPT_LOG_FIXED, NULL, 0, "Disable verifier log rotation" },
+	{ "log-size", OPT_LOG_SIZE, "BYTES", 0, "Customize verifier log size (default to 16MB)" },
 	{ "quiet", 'q', NULL, 0, "Quiet mode" },
 	{ "emit", 'e', "SPEC", 0, "Specify stats to be emitted" },
 	{ "sort", 's', "SPEC", 0, "Specify sort order" },
@@ -263,6 +273,17 @@ static error_t parse_arg(int key, char *arg, struct argp_state *state)
 			argp_usage(state);
 		}
 		break;
+	case OPT_LOG_FIXED:
+		env.log_fixed = true;
+		break;
+	case OPT_LOG_SIZE:
+		errno = 0;
+		env.log_size = strtol(arg, NULL, 10);
+		if (errno) {
+			fprintf(stderr, "invalid log size: %s\n", arg);
+			argp_usage(state);
+		}
+		break;
 	case 'C':
 		env.comparison_mode = true;
 		break;
@@ -929,8 +950,8 @@ static int process_prog(const char *filename, struct bpf_object *obj, struct bpf
 {
 	const char *prog_name = bpf_program__name(prog);
 	const char *base_filename = basename(filename);
-	size_t buf_sz = sizeof(verif_log_buf);
-	char *buf = verif_log_buf;
+	char *buf;
+	int buf_sz, log_level;
 	struct verif_stats *stats;
 	int err = 0;
 	void *tmp;
@@ -948,18 +969,23 @@ static int process_prog(const char *filename, struct bpf_object *obj, struct bpf
 	memset(stats, 0, sizeof(*stats));
 
 	if (env.verbose) {
-		buf_sz = 16 * 1024 * 1024;
+		buf_sz = env.log_size ? env.log_size : 16 * 1024 * 1024;
 		buf = malloc(buf_sz);
 		if (!buf)
 			return -ENOMEM;
-		bpf_program__set_log_buf(prog, buf, buf_sz);
-		bpf_program__set_log_level(prog, env.log_level | 4); /* stats + log */
+		/* ensure we always request stats */
+		log_level = env.log_level | 4 | (env.log_fixed ? 8 : 0);
 	} else {
-		bpf_program__set_log_buf(prog, buf, buf_sz);
-		bpf_program__set_log_level(prog, 4); /* only verifier stats */
+		buf = verif_log_buf;
+		buf_sz = sizeof(verif_log_buf);
+		/* request only verifier stats */
+		log_level = 4 | (env.log_fixed ? 8 : 0);
 	}
 	verif_log_buf[0] = '\0';
 
+	bpf_program__set_log_buf(prog, buf, buf_sz);
+	bpf_program__set_log_level(prog, log_level);
+
 	/* increase chances of successful BPF object loading */
 	fixup_obj(obj, prog, base_filename);
 
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH v4 bpf-next 06/19] selftests/bpf: add fixed vs rotating verifier log tests
  2023-04-06 23:41 [PATCH v4 bpf-next 00/19] BPF verifier rotating log Andrii Nakryiko
                   ` (4 preceding siblings ...)
  2023-04-06 23:41 ` [PATCH v4 bpf-next 05/19] veristat: add more veristat control over verifier log options Andrii Nakryiko
@ 2023-04-06 23:41 ` Andrii Nakryiko
  2023-04-06 23:41 ` [PATCH v4 bpf-next 07/19] bpf: ignore verifier log reset in BPF_LOG_KERNEL mode Andrii Nakryiko
                   ` (15 subsequent siblings)
  21 siblings, 0 replies; 33+ messages in thread
From: Andrii Nakryiko @ 2023-04-06 23:41 UTC (permalink / raw)
  To: bpf, ast, daniel, martin.lau, lmb, timo, robin.goegge; +Cc: andrii, kernel-team

Add selftests validating BPF_LOG_FIXED behavior, which used to be the
only behavior, and the now-default rotating BPF verifier log, which
returns just up to the last N bytes of the full verifier log, instead
of returning -ENOSPC.

To stress test correctness of in-kernel verifier log logic, we force it
to truncate program's verifier log to all lengths from 1 all the way to
its full size (about 450 bytes today). This was a useful stress test
while developing the feature.

For both fixed and rotating log modes we expect -ENOSPC if log contents
don't fit in the user-supplied log buffer.

Acked-by: Lorenz Bauer <lmb@isovalent.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
 .../selftests/bpf/prog_tests/verifier_log.c   | 179 ++++++++++++++++++
 1 file changed, 179 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/verifier_log.c

diff --git a/tools/testing/selftests/bpf/prog_tests/verifier_log.c b/tools/testing/selftests/bpf/prog_tests/verifier_log.c
new file mode 100644
index 000000000000..3284108a6ce8
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/verifier_log.c
@@ -0,0 +1,179 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2023 Meta Platforms, Inc. and affiliates. */
+
+#include <test_progs.h>
+#include <bpf/btf.h>
+
+#include "test_log_buf.skel.h"
+
+
+static bool check_prog_load(int prog_fd, bool expect_err, const char *tag)
+{
+	if (expect_err) {
+		if (!ASSERT_LT(prog_fd, 0, tag)) {
+			close(prog_fd);
+			return false;
+		}
+	} else /* !expect_err */ {
+		if (!ASSERT_GT(prog_fd, 0, tag))
+			return false;
+	}
+	return true;
+}
+
+static void verif_log_subtest(const char *name, bool expect_load_error, int log_level)
+{
+	LIBBPF_OPTS(bpf_prog_load_opts, opts);
+	struct {
+		/* strategically placed before others to avoid accidental modification by kernel */
+		char filler[1024];
+		char buf[1024];
+		/* strategically placed after buf[] to catch more accidental corruptions */
+		char reference[1024];
+	} logs;
+	char *exp_log, prog_name[16], op_name[32];
+	struct test_log_buf *skel;
+	struct bpf_program *prog;
+	const struct bpf_insn *insns;
+	size_t insn_cnt, fixed_log_sz;
+	int i, mode, err, prog_fd;
+
+	skel = test_log_buf__open();
+	if (!ASSERT_OK_PTR(skel, "skel_open"))
+		return;
+
+	bpf_object__for_each_program(prog, skel->obj) {
+		if (strcmp(bpf_program__name(prog), name) == 0)
+			bpf_program__set_autoload(prog, true);
+		else
+			bpf_program__set_autoload(prog, false);
+	}
+
+	err = test_log_buf__load(skel);
+	if (!expect_load_error && !ASSERT_OK(err, "unexpected_load_failure"))
+		goto cleanup;
+	if (expect_load_error && !ASSERT_ERR(err, "unexpected_load_success"))
+		goto cleanup;
+
+	insns = bpf_program__insns(skel->progs.good_prog);
+	insn_cnt = bpf_program__insn_cnt(skel->progs.good_prog);
+
+	opts.log_buf = logs.reference;
+	opts.log_size = sizeof(logs.reference);
+	opts.log_level = log_level | 8 /* BPF_LOG_FIXED */;
+	prog_fd = bpf_prog_load(BPF_PROG_TYPE_RAW_TRACEPOINT, "log_fixed",
+				"GPL", insns, insn_cnt, &opts);
+	if (!check_prog_load(prog_fd, expect_load_error, "fixed_buf_prog_load"))
+		goto cleanup;
+	close(prog_fd);
+
+	fixed_log_sz = strlen(logs.reference) + 1;
+	if (!ASSERT_GT(fixed_log_sz, 50, "fixed_log_sz"))
+		goto cleanup;
+	memset(logs.reference + fixed_log_sz, 0, sizeof(logs.reference) - fixed_log_sz);
+
+	/* validate BPF_LOG_FIXED works as verifier log used to work, that is:
+	 * we get -ENOSPC and beginning of the full verifier log. This only
+	 * works for log_level 2 and log_level 1 + failed program. For log
+	 * level 2 we don't reset log at all. For log_level 1 + failed program
+	 * we don't get to verification stats output. With log level 1
+	 * for a successful program the final result will be just verifier stats.
+	 * But if provided too short log buf, kernel will NULL-out log->ubuf
+	 * and will stop emitting further log. This means we'll never see
+	 * predictable verifier stats.
+	 * Long story short, we do the following -ENOSPC test only for
+	 * predictable combinations.
+	 */
+	if (log_level >= 2 || expect_load_error) {
+		opts.log_buf = logs.buf;
+		opts.log_level = log_level | 8; /* fixed-length log */
+		opts.log_size = 25;
+
+		prog_fd = bpf_prog_load(BPF_PROG_TYPE_RAW_TRACEPOINT, "log_fixed50",
+					"GPL", insns, insn_cnt, &opts);
+		if (!ASSERT_EQ(prog_fd, -ENOSPC, "unexpected_log_fixed_prog_load_result")) {
+			if (prog_fd >= 0)
+				close(prog_fd);
+			goto cleanup;
+		}
+		if (!ASSERT_EQ(strlen(logs.buf), 24, "log_fixed_25"))
+			goto cleanup;
+		if (!ASSERT_STRNEQ(logs.buf, logs.reference, 24, op_name))
+			goto cleanup;
+	}
+
+	/* validate rolling verifier log logic: try all variations of log buf
+	 * length to force various truncation scenarios
+	 */
+	opts.log_buf = logs.buf;
+
+	/* rotating mode, then fixed mode */
+	for (mode = 1; mode >= 0; mode--) {
+		/* prefill logs.buf with 'A's to detect any write beyond allowed length */
+		memset(logs.filler, 'A', sizeof(logs.filler));
+		logs.filler[sizeof(logs.filler) - 1] = '\0';
+		memset(logs.buf, 'A', sizeof(logs.buf));
+		logs.buf[sizeof(logs.buf) - 1] = '\0';
+
+		for (i = 1; i < fixed_log_sz; i++) {
+			opts.log_size = i;
+			opts.log_level = log_level | (mode ? 0 : 8 /* BPF_LOG_FIXED */);
+
+			snprintf(prog_name, sizeof(prog_name),
+				 "log_%s_%d", mode ? "roll" : "fixed", i);
+			prog_fd = bpf_prog_load(BPF_PROG_TYPE_RAW_TRACEPOINT, prog_name,
+						"GPL", insns, insn_cnt, &opts);
+
+			snprintf(op_name, sizeof(op_name),
+				 "log_%s_prog_load_%d", mode ? "roll" : "fixed", i);
+			if (!ASSERT_EQ(prog_fd, -ENOSPC, op_name)) {
+				if (prog_fd >= 0)
+					close(prog_fd);
+				goto cleanup;
+			}
+
+			snprintf(op_name, sizeof(op_name),
+				 "log_%s_strlen_%d", mode ? "roll" : "fixed", i);
+			ASSERT_EQ(strlen(logs.buf), i - 1, op_name);
+
+			if (mode)
+				exp_log = logs.reference + fixed_log_sz - i;
+			else
+				exp_log = logs.reference;
+
+			snprintf(op_name, sizeof(op_name),
+				 "log_%s_contents_%d", mode ? "roll" : "fixed", i);
+			if (!ASSERT_STRNEQ(logs.buf, exp_log, i - 1, op_name)) {
+				printf("CMP:%d\nS1:'%s'\nS2:'%s'\n",
+					strncmp(logs.buf, exp_log, i - 1),
+					logs.buf, exp_log);
+				goto cleanup;
+			}
+
+			/* check that unused portions of logs.buf are not overwritten */
+			snprintf(op_name, sizeof(op_name),
+				 "log_%s_unused_%d", mode ? "roll" : "fixed", i);
+			if (!ASSERT_STREQ(logs.buf + i, logs.filler + i, op_name)) {
+				printf("CMP:%d\nS1:'%s'\nS2:'%s'\n",
+					strcmp(logs.buf + i, logs.filler + i),
+					logs.buf + i, logs.filler + i);
+				goto cleanup;
+			}
+		}
+	}
+
+cleanup:
+	test_log_buf__destroy(skel);
+}
+
+void test_verifier_log(void)
+{
+	if (test__start_subtest("good_prog-level1"))
+		verif_log_subtest("good_prog", false, 1);
+	if (test__start_subtest("good_prog-level2"))
+		verif_log_subtest("good_prog", false, 2);
+	if (test__start_subtest("bad_prog-level1"))
+		verif_log_subtest("bad_prog", true, 1);
+	if (test__start_subtest("bad_prog-level2"))
+		verif_log_subtest("bad_prog", true, 2);
+}
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH v4 bpf-next 07/19] bpf: ignore verifier log reset in BPF_LOG_KERNEL mode
  2023-04-06 23:41 [PATCH v4 bpf-next 00/19] BPF verifier rotating log Andrii Nakryiko
                   ` (5 preceding siblings ...)
  2023-04-06 23:41 ` [PATCH v4 bpf-next 06/19] selftests/bpf: add fixed vs rotating verifier log tests Andrii Nakryiko
@ 2023-04-06 23:41 ` Andrii Nakryiko
  2023-04-06 23:41 ` [PATCH v4 bpf-next 08/19] bpf: fix missing -EFAULT return on user log buf error in btf_parse() Andrii Nakryiko
                   ` (14 subsequent siblings)
  21 siblings, 0 replies; 33+ messages in thread
From: Andrii Nakryiko @ 2023-04-06 23:41 UTC (permalink / raw)
  To: bpf, ast, daniel, martin.lau, lmb, timo, robin.goegge; +Cc: andrii, kernel-team

Verifier log position reset is meaningless in BPF_LOG_KERNEL mode, so
just exit early in bpf_vlog_reset() if log->level is BPF_LOG_KERNEL.

This avoids a meaningless put_user() into a NULL log->ubuf.

Acked-by: Lorenz Bauer <lmb@isovalent.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
 kernel/bpf/log.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/bpf/log.c b/kernel/bpf/log.c
index 92b1c8ad6601..d99a50f07187 100644
--- a/kernel/bpf/log.c
+++ b/kernel/bpf/log.c
@@ -106,7 +106,7 @@ void bpf_vlog_reset(struct bpf_verifier_log *log, u64 new_pos)
 	if (WARN_ON_ONCE(new_pos > log->end_pos))
 		return;
 
-	if (!bpf_verifier_log_needed(log))
+	if (!bpf_verifier_log_needed(log) || log->level == BPF_LOG_KERNEL)
 		return;
 
 	/* if position to which we reset is beyond current log window,
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH v4 bpf-next 08/19] bpf: fix missing -EFAULT return on user log buf error in btf_parse()
  2023-04-06 23:41 [PATCH v4 bpf-next 00/19] BPF verifier rotating log Andrii Nakryiko
                   ` (6 preceding siblings ...)
  2023-04-06 23:41 ` [PATCH v4 bpf-next 07/19] bpf: ignore verifier log reset in BPF_LOG_KERNEL mode Andrii Nakryiko
@ 2023-04-06 23:41 ` Andrii Nakryiko
  2023-04-06 23:41 ` [PATCH v4 bpf-next 09/19] bpf: avoid incorrect -EFAULT error in BPF_LOG_KERNEL mode Andrii Nakryiko
                   ` (13 subsequent siblings)
  21 siblings, 0 replies; 33+ messages in thread
From: Andrii Nakryiko @ 2023-04-06 23:41 UTC (permalink / raw)
  To: bpf, ast, daniel, martin.lau, lmb, timo, robin.goegge; +Cc: andrii, kernel-team

btf_parse() is missing -EFAULT error return if log->ubuf was NULL-ed out
due to error while copying data into user-provided buffer. Add it, but
handle a special case of BPF_LOG_KERNEL in which log->ubuf is always NULL.

Acked-by: Lorenz Bauer <lmb@isovalent.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
 kernel/bpf/btf.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index 20a05b8932db..6372c144a294 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -5598,6 +5598,10 @@ static struct btf *btf_parse(bpfptr_t btf_data, u32 btf_data_size,
 		err = -ENOSPC;
 		goto errout_meta;
 	}
+	if (log->level && log->level != BPF_LOG_KERNEL && !log->ubuf) {
+		err = -EFAULT;
+		goto errout_meta;
+	}
 
 	btf_verifier_env_free(env);
 	refcount_set(&btf->refcnt, 1);
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH v4 bpf-next 09/19] bpf: avoid incorrect -EFAULT error in BPF_LOG_KERNEL mode
  2023-04-06 23:41 [PATCH v4 bpf-next 00/19] BPF verifier rotating log Andrii Nakryiko
                   ` (7 preceding siblings ...)
  2023-04-06 23:41 ` [PATCH v4 bpf-next 08/19] bpf: fix missing -EFAULT return on user log buf error in btf_parse() Andrii Nakryiko
@ 2023-04-06 23:41 ` Andrii Nakryiko
  2023-04-06 23:41 ` [PATCH v4 bpf-next 10/19] bpf: simplify logging-related error conditions handling Andrii Nakryiko
                   ` (12 subsequent siblings)
  21 siblings, 0 replies; 33+ messages in thread
From: Andrii Nakryiko @ 2023-04-06 23:41 UTC (permalink / raw)
  To: bpf, ast, daniel, martin.lau, lmb, timo, robin.goegge; +Cc: andrii, kernel-team

If verifier log is in BPF_LOG_KERNEL mode, no log->ubuf is expected and
it stays NULL throughout entire verification process. Don't erroneously
return -EFAULT in such case.

Acked-by: Lorenz Bauer <lmb@isovalent.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
 kernel/bpf/verifier.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 1ff5f6e2a9ee..52fb5216f44c 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -18843,7 +18843,7 @@ int bpf_check(struct bpf_prog **prog, union bpf_attr *attr, bpfptr_t uattr)
 	bpf_vlog_finalize(log);
 	if (log->level && bpf_vlog_truncated(log))
 		ret = -ENOSPC;
-	if (log->level && !log->ubuf) {
+	if (log->level && log->level != BPF_LOG_KERNEL && !log->ubuf) {
 		ret = -EFAULT;
 		goto err_release_maps;
 	}
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH v4 bpf-next 10/19] bpf: simplify logging-related error conditions handling
  2023-04-06 23:41 [PATCH v4 bpf-next 00/19] BPF verifier rotating log Andrii Nakryiko
                   ` (8 preceding siblings ...)
  2023-04-06 23:41 ` [PATCH v4 bpf-next 09/19] bpf: avoid incorrect -EFAULT error in BPF_LOG_KERNEL mode Andrii Nakryiko
@ 2023-04-06 23:41 ` Andrii Nakryiko
  2023-04-06 23:41 ` [PATCH v4 bpf-next 11/19] bpf: keep track of total log content size in both fixed and rolling modes Andrii Nakryiko
                   ` (11 subsequent siblings)
  21 siblings, 0 replies; 33+ messages in thread
From: Andrii Nakryiko @ 2023-04-06 23:41 UTC (permalink / raw)
  To: bpf, ast, daniel, martin.lau, lmb, timo, robin.goegge; +Cc: andrii, kernel-team

Move log->level == 0 check into bpf_vlog_truncated() instead of doing it
explicitly. Also remove unnecessary goto in kernel/bpf/verifier.c.

Acked-by: Lorenz Bauer <lmb@isovalent.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
 kernel/bpf/btf.c      | 2 +-
 kernel/bpf/log.c      | 4 +++-
 kernel/bpf/verifier.c | 6 ++----
 3 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index 6372c144a294..5aa540ee611f 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -5594,7 +5594,7 @@ static struct btf *btf_parse(bpfptr_t btf_data, u32 btf_data_size,
 	}
 
 	bpf_vlog_finalize(log);
-	if (log->level && bpf_vlog_truncated(log)) {
+	if (bpf_vlog_truncated(log)) {
 		err = -ENOSPC;
 		goto errout_meta;
 	}
diff --git a/kernel/bpf/log.c b/kernel/bpf/log.c
index d99a50f07187..c778f3b290cb 100644
--- a/kernel/bpf/log.c
+++ b/kernel/bpf/log.c
@@ -169,7 +169,9 @@ static int bpf_vlog_reverse_ubuf(struct bpf_verifier_log *log, int start, int en
 
 bool bpf_vlog_truncated(const struct bpf_verifier_log *log)
 {
-	if (log->level & BPF_LOG_FIXED)
+	if (!log->level)
+		return false;
+	else if (log->level & BPF_LOG_FIXED)
 		return bpf_log_used(log) >= log->len_total - 1;
 	else
 		return log->start_pos > 0;
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 52fb5216f44c..392b821bb27c 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -18841,12 +18841,10 @@ int bpf_check(struct bpf_prog **prog, union bpf_attr *attr, bpfptr_t uattr)
 	env->prog->aux->verified_insns = env->insn_processed;
 
 	bpf_vlog_finalize(log);
-	if (log->level && bpf_vlog_truncated(log))
+	if (bpf_vlog_truncated(log))
 		ret = -ENOSPC;
-	if (log->level && log->level != BPF_LOG_KERNEL && !log->ubuf) {
+	if (log->level && log->level != BPF_LOG_KERNEL && !log->ubuf)
 		ret = -EFAULT;
-		goto err_release_maps;
-	}
 
 	if (ret)
 		goto err_release_maps;
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH v4 bpf-next 11/19] bpf: keep track of total log content size in both fixed and rolling modes
  2023-04-06 23:41 [PATCH v4 bpf-next 00/19] BPF verifier rotating log Andrii Nakryiko
                   ` (9 preceding siblings ...)
  2023-04-06 23:41 ` [PATCH v4 bpf-next 10/19] bpf: simplify logging-related error conditions handling Andrii Nakryiko
@ 2023-04-06 23:41 ` Andrii Nakryiko
  2023-04-11 10:44   ` Lorenz Bauer
  2023-04-06 23:41 ` [PATCH v4 bpf-next 12/19] bpf: add log_true_size output field to return necessary log buffer size Andrii Nakryiko
                   ` (10 subsequent siblings)
  21 siblings, 1 reply; 33+ messages in thread
From: Andrii Nakryiko @ 2023-04-06 23:41 UTC (permalink / raw)
  To: bpf, ast, daniel, martin.lau, lmb, timo, robin.goegge; +Cc: andrii, kernel-team

Change how we do accounting in BPF_LOG_FIXED mode and adopt log->end_pos
as *logical* log position. This means that we can go beyond physical log
buffer size now and be able to tell what log buffer size should be to
fit entire log contents without -ENOSPC.

To do this for BPF_LOG_FIXED mode, we need to remove the
short-circuiting logic of not vsnprintf()'ing further log content once
we filled up the user-provided buffer, which is done by
bpf_verifier_log_needed() checks. We modify these checks to always keep
going if log->level is non-zero (i.e., log is requested), even if
log->ubuf was NULL'ed out due to a failed copy of data to user-space,
or if the entire log buffer is physically full. We adapt the
bpf_verifier_vlog() routine to work correctly with the
log->ubuf == NULL condition, performing log formatting into the
temporary kernel buffer and doing all the necessary accounting, but
just avoiding copying data out if the buffer is full or NULL'ed out.

With these changes, it's now possible to do this sort of determination of
log contents size in both BPF_LOG_FIXED and default rolling log mode.
We need to keep in mind bpf_vlog_reset(), though, which shrinks log
contents after successful verification of a particular code path. This
log reset means that log->end_pos isn't always increasing, so to return
back to users what should be the log buffer size to fit all log content
without causing -ENOSPC even in the presence of log resetting, we need
to keep maximum over "lifetime" of logging. We do this accounting in
bpf_vlog_update_len_max() helper.

A related and subtle aspect is that with this logical log->end_pos even in
BPF_LOG_FIXED mode we could temporarily "overflow" the buffer, but then reset
it back with bpf_vlog_reset() to a position inside the user-supplied
log_buf. In such situation we still want to properly maintain
terminating zero. We will eventually return -ENOSPC even if final log
buffer is small (we detect this through log->len_max check). This
behavior is simpler to reason about and is consistent with current
behavior of verifier log. Handling of this required a small addition to
bpf_vlog_reset() logic to avoid doing put_user() beyond physical log
buffer dimensions.

Another issue to keep in mind is that we limit log buffer size to a
32-bit value and keep such log length as u32, but theoretically the
verifier could produce a huge log stretching beyond 4GB. Instead of
keeping (and later returning) a 64-bit log length, we cap it at
UINT_MAX. Current UAPI makes it impossible to specify a log buffer size
bigger than 4GB anyway, so we don't really lose anything here and keep
everything consistently 32-bit in UAPI. This property will be utilized
in the next patch.

Doing the same determination of maximum log buffer for rolling mode is
trivial, as log->end_pos and log->start_pos are already logical
positions, so there is nothing new there.

These changes do incidentally fix one small issue with the previous
logging logic. Previously, if the user provided a log buffer of size N
and the actual log output was exactly N-1 bytes + terminating \0,
kernel logic couldn't distinguish this condition from a log truncation
scenario, which would end up with truncated log contents of N-1 bytes
+ terminating \0 as well.

But now with log->end_pos being logical position that could go beyond
actual log buffer size, we can distinguish these two conditions, which
we do in this patch. This plays nicely with returning log_size_actual
(implemented in UAPI in the next patch), as we can now guarantee that if
user takes such log_size_actual and provides log buffer of that exact
size, they will not get -ENOSPC in return.

All in all, all these changes do conceptually unify fixed and rolling
log modes much better, and allow a nice feature requested by users:
knowing what should be the size of the buffer to avoid -ENOSPC.

We'll plumb this through the UAPI and the code in the next patch.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
 include/linux/bpf_verifier.h | 12 ++-----
 kernel/bpf/log.c             | 67 +++++++++++++++++++++++++-----------
 2 files changed, 49 insertions(+), 30 deletions(-)

diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index 4c926227f612..98d2eb382dbb 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -504,6 +504,7 @@ struct bpf_verifier_log {
 	char __user *ubuf;
 	u32 level;
 	u32 len_total;
+	u32 len_max;
 	char kbuf[BPF_VERIFIER_TMP_LOG_SIZE];
 };
 
@@ -517,23 +518,16 @@ struct bpf_verifier_log {
 #define BPF_LOG_MIN_ALIGNMENT 8U
 #define BPF_LOG_ALIGNMENT 40U
 
-static inline u32 bpf_log_used(const struct bpf_verifier_log *log)
-{
-	return log->end_pos - log->start_pos;
-}
-
 static inline bool bpf_verifier_log_full(const struct bpf_verifier_log *log)
 {
 	if (log->level & BPF_LOG_FIXED)
-		return bpf_log_used(log) >= log->len_total - 1;
+		return log->end_pos >= log->len_total;
 	return false;
 }
 
 static inline bool bpf_verifier_log_needed(const struct bpf_verifier_log *log)
 {
-	return log &&
-		((log->level && log->ubuf && !bpf_verifier_log_full(log)) ||
-		 log->level == BPF_LOG_KERNEL);
+	return log && log->level;
 }
 
 #define BPF_MAX_SUBPROGS 256
diff --git a/kernel/bpf/log.c b/kernel/bpf/log.c
index c778f3b290cb..47bea2fad6fe 100644
--- a/kernel/bpf/log.c
+++ b/kernel/bpf/log.c
@@ -16,10 +16,26 @@ bool bpf_verifier_log_attr_valid(const struct bpf_verifier_log *log)
 	       log->level && log->ubuf && !(log->level & ~BPF_LOG_MASK);
 }
 
+static void bpf_vlog_update_len_max(struct bpf_verifier_log *log, u32 add_len)
+{
+	/* add_len includes terminal \0, so no need for +1. */
+	u64 len = log->end_pos + add_len;
+
+	/* log->len_max could be larger than our current len due to
+	 * bpf_vlog_reset() calls, so we maintain the max of any length at any
+	 * previous point
+	 */
+	if (len > UINT_MAX)
+		log->len_max = UINT_MAX;
+	else if (len > log->len_max)
+		log->len_max = len;
+}
+
 void bpf_verifier_vlog(struct bpf_verifier_log *log, const char *fmt,
 		       va_list args)
 {
-	unsigned int n;
+	u64 cur_pos;
+	u32 new_n, n;
 
 	n = vscnprintf(log->kbuf, BPF_VERIFIER_TMP_LOG_SIZE, fmt, args);
 
@@ -33,21 +49,27 @@ void bpf_verifier_vlog(struct bpf_verifier_log *log, const char *fmt,
 		return;
 	}
 
-	if (log->level & BPF_LOG_FIXED) {
-		n = min(log->len_total - bpf_log_used(log) - 1, n);
-		log->kbuf[n] = '\0';
-		n += 1;
+	n += 1; /* include terminating zero */
+	bpf_vlog_update_len_max(log, n);
 
-		if (copy_to_user(log->ubuf + log->end_pos, log->kbuf, n))
-			goto fail;
+	if (log->level & BPF_LOG_FIXED) {
+		/* check if we have at least something to put into user buf */
+		new_n = 0;
+		if (log->end_pos < log->len_total) {
+			new_n = min_t(u32, log->len_total - log->end_pos, n);
+			log->kbuf[new_n - 1] = '\0';
+		}
 
+		cur_pos = log->end_pos;
 		log->end_pos += n - 1; /* don't count terminating '\0' */
+
+		if (log->ubuf && new_n &&
+		    copy_to_user(log->ubuf + cur_pos, log->kbuf, new_n))
+			goto fail;
 	} else {
-		u64 new_end, new_start, cur_pos;
+		u64 new_end, new_start;
 		u32 buf_start, buf_end, new_n;
 
-		n += 1;
-
 		new_end = log->end_pos + n;
 		if (new_end - log->start_pos >= log->len_total)
 			new_start = new_end - log->len_total;
@@ -65,6 +87,12 @@ void bpf_verifier_vlog(struct bpf_verifier_log *log, const char *fmt,
 		if (buf_end == 0)
 			buf_end = log->len_total;
 
+		log->start_pos = new_start;
+		log->end_pos = new_end - 1; /* don't count terminating '\0' */
+
+		if (!log->ubuf)
+			return;
+
 		/* if buf_start > buf_end, we wrapped around;
 		 * if buf_start == buf_end, then we fill ubuf completely; we
 		 * can't have buf_start == buf_end to mean that there is
@@ -88,9 +116,6 @@ void bpf_verifier_vlog(struct bpf_verifier_log *log, const char *fmt,
 					 buf_end))
 				goto fail;
 		}
-
-		log->start_pos = new_start;
-		log->end_pos = new_end - 1; /* don't count terminating '\0' */
 	}
 
 	return;
@@ -116,8 +141,13 @@ void bpf_vlog_reset(struct bpf_verifier_log *log, u64 new_pos)
 	log->end_pos = new_pos;
 	if (log->end_pos < log->start_pos)
 		log->start_pos = log->end_pos;
-	div_u64_rem(new_pos, log->len_total, &pos);
-	if (put_user(zero, log->ubuf + pos))
+
+	if (log->level & BPF_LOG_FIXED)
+		pos = log->end_pos + 1;
+	else
+		div_u64_rem(new_pos, log->len_total, &pos);
+
+	if (log->ubuf && pos < log->len_total && put_user(zero, log->ubuf + pos))
 		log->ubuf = NULL;
 }
 
@@ -169,12 +199,7 @@ static int bpf_vlog_reverse_ubuf(struct bpf_verifier_log *log, int start, int en
 
 bool bpf_vlog_truncated(const struct bpf_verifier_log *log)
 {
-	if (!log->level)
-		return false;
-	else if (log->level & BPF_LOG_FIXED)
-		return bpf_log_used(log) >= log->len_total - 1;
-	else
-		return log->start_pos > 0;
+	return log->len_max > log->len_total;
 }
 
 void bpf_vlog_finalize(struct bpf_verifier_log *log)
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH v4 bpf-next 12/19] bpf: add log_true_size output field to return necessary log buffer size
  2023-04-06 23:41 [PATCH v4 bpf-next 00/19] BPF verifier rotating log Andrii Nakryiko
                   ` (10 preceding siblings ...)
  2023-04-06 23:41 ` [PATCH v4 bpf-next 11/19] bpf: keep track of total log content size in both fixed and rolling modes Andrii Nakryiko
@ 2023-04-06 23:41 ` Andrii Nakryiko
  2023-04-11 10:44   ` Lorenz Bauer
  2023-04-06 23:41 ` [PATCH v4 bpf-next 13/19] bpf: simplify internal verifier log interface Andrii Nakryiko
                   ` (9 subsequent siblings)
  21 siblings, 1 reply; 33+ messages in thread
From: Andrii Nakryiko @ 2023-04-06 23:41 UTC (permalink / raw)
  To: bpf, ast, daniel, martin.lau, lmb, timo, robin.goegge; +Cc: andrii, kernel-team

Add output-only log_true_size and btf_log_true_size fields to the
BPF_PROG_LOAD and BPF_BTF_LOAD commands, respectively. They return the
size of the log buffer necessary to fit all the log contents at the
specified log_level. This is very useful for BPF loader libraries like
libbpf to be able to size the log buffer correctly, but it can also be
used by users directly, if necessary.

This patch plumbs all this through the code, taking into account the
actual bpf_attr size provided by user space to determine whether these
new fields are expected. If they are, the kernel sets them on return.

We refactor btf_parse() to accommodate this, moving attr and uattr
handling inside it. The rest is very straightforward code, which is
split from the log accounting changes in the previous patch to make it
simpler to review the logic vs the UAPI changes.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
 include/linux/bpf.h            |  2 +-
 include/linux/btf.h            |  2 +-
 include/uapi/linux/bpf.h       | 10 ++++++++++
 kernel/bpf/btf.c               | 32 ++++++++++++++++++--------------
 kernel/bpf/syscall.c           | 16 ++++++++--------
 kernel/bpf/verifier.c          |  8 +++++++-
 tools/include/uapi/linux/bpf.h | 12 +++++++++++-
 7 files changed, 56 insertions(+), 26 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 002a811b6b90..2c6095bd7d69 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -2175,7 +2175,7 @@ int bpf_check_uarg_tail_zero(bpfptr_t uaddr, size_t expected_size,
 			     size_t actual_size);
 
 /* verify correctness of eBPF program */
-int bpf_check(struct bpf_prog **fp, union bpf_attr *attr, bpfptr_t uattr);
+int bpf_check(struct bpf_prog **fp, union bpf_attr *attr, bpfptr_t uattr, u32 uattr_size);
 
 #ifndef CONFIG_BPF_JIT_ALWAYS_ON
 void bpf_patch_call_args(struct bpf_insn *insn, u32 stack_depth);
diff --git a/include/linux/btf.h b/include/linux/btf.h
index d53b10cc55f2..495250162422 100644
--- a/include/linux/btf.h
+++ b/include/linux/btf.h
@@ -125,7 +125,7 @@ extern const struct file_operations btf_fops;
 
 void btf_get(struct btf *btf);
 void btf_put(struct btf *btf);
-int btf_new_fd(const union bpf_attr *attr, bpfptr_t uattr);
+int btf_new_fd(const union bpf_attr *attr, bpfptr_t uattr, u32 uattr_sz);
 struct btf *btf_get_by_fd(int fd);
 int btf_get_info_by_fd(const struct btf *btf,
 		       const union bpf_attr *attr,
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index e3d3b5160d26..3823100b7934 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -1407,6 +1407,11 @@ union bpf_attr {
 		__aligned_u64	fd_array;	/* array of FDs */
 		__aligned_u64	core_relos;
 		__u32		core_relo_rec_size; /* sizeof(struct bpf_core_relo) */
+		/* output: actual total log contents size (including terminating zero).
+		 * It could be both larger than original log_size (if log was
+		 * truncated), or smaller (if log buffer wasn't filled completely).
+		 */
+		__u32		log_true_size;
 	};
 
 	struct { /* anonymous struct used by BPF_OBJ_* commands */
@@ -1492,6 +1497,11 @@ union bpf_attr {
 		__u32		btf_size;
 		__u32		btf_log_size;
 		__u32		btf_log_level;
+		/* output: actual total log contents size (including terminating zero).
+		 * It could be both larger than original log_size (if log was
+		 * truncated), or smaller (if log buffer wasn't filled completely).
+		 */
+		__u32		btf_log_true_size;
 	};
 
 	struct {
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index 5aa540ee611f..0748cf4b8ab6 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -5504,9 +5504,10 @@ static int btf_check_type_tags(struct btf_verifier_env *env,
 	return 0;
 }
 
-static struct btf *btf_parse(bpfptr_t btf_data, u32 btf_data_size,
-			     u32 log_level, char __user *log_ubuf, u32 log_size)
+static struct btf *btf_parse(const union bpf_attr *attr, bpfptr_t uattr, u32 uattr_size)
 {
+	bpfptr_t btf_data = make_bpfptr(attr->btf, uattr.is_kernel);
+	char __user *log_ubuf = u64_to_user_ptr(attr->btf_log_buf);
 	struct btf_struct_metas *struct_meta_tab;
 	struct btf_verifier_env *env = NULL;
 	struct bpf_verifier_log *log;
@@ -5514,7 +5515,7 @@ static struct btf *btf_parse(bpfptr_t btf_data, u32 btf_data_size,
 	u8 *data;
 	int err;
 
-	if (btf_data_size > BTF_MAX_SIZE)
+	if (attr->btf_size > BTF_MAX_SIZE)
 		return ERR_PTR(-E2BIG);
 
 	env = kzalloc(sizeof(*env), GFP_KERNEL | __GFP_NOWARN);
@@ -5522,13 +5523,13 @@ static struct btf *btf_parse(bpfptr_t btf_data, u32 btf_data_size,
 		return ERR_PTR(-ENOMEM);
 
 	log = &env->log;
-	if (log_level || log_ubuf || log_size) {
+	if (attr->btf_log_level || log_ubuf || attr->btf_log_size) {
 		/* user requested verbose verifier output
 		 * and supplied buffer to store the verification trace
 		 */
-		log->level = log_level;
+		log->level = attr->btf_log_level;
 		log->ubuf = log_ubuf;
-		log->len_total = log_size;
+		log->len_total = attr->btf_log_size;
 
 		/* log attributes have to be sane */
 		if (!bpf_verifier_log_attr_valid(log)) {
@@ -5544,16 +5545,16 @@ static struct btf *btf_parse(bpfptr_t btf_data, u32 btf_data_size,
 	}
 	env->btf = btf;
 
-	data = kvmalloc(btf_data_size, GFP_KERNEL | __GFP_NOWARN);
+	data = kvmalloc(attr->btf_size, GFP_KERNEL | __GFP_NOWARN);
 	if (!data) {
 		err = -ENOMEM;
 		goto errout;
 	}
 
 	btf->data = data;
-	btf->data_size = btf_data_size;
+	btf->data_size = attr->btf_size;
 
-	if (copy_from_bpfptr(data, btf_data, btf_data_size)) {
+	if (copy_from_bpfptr(data, btf_data, attr->btf_size)) {
 		err = -EFAULT;
 		goto errout;
 	}
@@ -5594,6 +5595,12 @@ static struct btf *btf_parse(bpfptr_t btf_data, u32 btf_data_size,
 	}
 
 	bpf_vlog_finalize(log);
+	if (uattr_size >= offsetofend(union bpf_attr, btf_log_true_size) &&
+	    copy_to_bpfptr_offset(uattr, offsetof(union bpf_attr, btf_log_true_size),
+				  &log->len_max, sizeof(log->len_max))) {
+		err = -EFAULT;
+		goto errout_meta;
+	}
 	if (bpf_vlog_truncated(log)) {
 		err = -ENOSPC;
 		goto errout_meta;
@@ -7218,15 +7225,12 @@ static int __btf_new_fd(struct btf *btf)
 	return anon_inode_getfd("btf", &btf_fops, btf, O_RDONLY | O_CLOEXEC);
 }
 
-int btf_new_fd(const union bpf_attr *attr, bpfptr_t uattr)
+int btf_new_fd(const union bpf_attr *attr, bpfptr_t uattr, u32 uattr_size)
 {
 	struct btf *btf;
 	int ret;
 
-	btf = btf_parse(make_bpfptr(attr->btf, uattr.is_kernel),
-			attr->btf_size, attr->btf_log_level,
-			u64_to_user_ptr(attr->btf_log_buf),
-			attr->btf_log_size);
+	btf = btf_parse(attr, uattr, uattr_size);
 	if (IS_ERR(btf))
 		return PTR_ERR(btf);
 
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index e18ac7fdc210..6d575505f89c 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -2501,9 +2501,9 @@ static bool is_perfmon_prog_type(enum bpf_prog_type prog_type)
 }
 
 /* last field in 'union bpf_attr' used by this command */
-#define	BPF_PROG_LOAD_LAST_FIELD core_relo_rec_size
+#define	BPF_PROG_LOAD_LAST_FIELD log_true_size
 
-static int bpf_prog_load(union bpf_attr *attr, bpfptr_t uattr)
+static int bpf_prog_load(union bpf_attr *attr, bpfptr_t uattr, u32 uattr_size)
 {
 	enum bpf_prog_type type = attr->prog_type;
 	struct bpf_prog *prog, *dst_prog = NULL;
@@ -2653,7 +2653,7 @@ static int bpf_prog_load(union bpf_attr *attr, bpfptr_t uattr)
 		goto free_prog_sec;
 
 	/* run eBPF verifier */
-	err = bpf_check(&prog, attr, uattr);
+	err = bpf_check(&prog, attr, uattr, uattr_size);
 	if (err < 0)
 		goto free_used_maps;
 
@@ -4371,9 +4371,9 @@ static int bpf_obj_get_info_by_fd(const union bpf_attr *attr,
 	return err;
 }
 
-#define BPF_BTF_LOAD_LAST_FIELD btf_log_level
+#define BPF_BTF_LOAD_LAST_FIELD btf_log_true_size
 
-static int bpf_btf_load(const union bpf_attr *attr, bpfptr_t uattr)
+static int bpf_btf_load(const union bpf_attr *attr, bpfptr_t uattr, __u32 uattr_size)
 {
 	if (CHECK_ATTR(BPF_BTF_LOAD))
 		return -EINVAL;
@@ -4381,7 +4381,7 @@ static int bpf_btf_load(const union bpf_attr *attr, bpfptr_t uattr)
 	if (!bpf_capable())
 		return -EPERM;
 
-	return btf_new_fd(attr, uattr);
+	return btf_new_fd(attr, uattr, uattr_size);
 }
 
 #define BPF_BTF_GET_FD_BY_ID_LAST_FIELD btf_id
@@ -5059,7 +5059,7 @@ static int __sys_bpf(int cmd, bpfptr_t uattr, unsigned int size)
 		err = map_freeze(&attr);
 		break;
 	case BPF_PROG_LOAD:
-		err = bpf_prog_load(&attr, uattr);
+		err = bpf_prog_load(&attr, uattr, size);
 		break;
 	case BPF_OBJ_PIN:
 		err = bpf_obj_pin(&attr);
@@ -5104,7 +5104,7 @@ static int __sys_bpf(int cmd, bpfptr_t uattr, unsigned int size)
 		err = bpf_raw_tracepoint_open(&attr);
 		break;
 	case BPF_BTF_LOAD:
-		err = bpf_btf_load(&attr, uattr);
+		err = bpf_btf_load(&attr, uattr, size);
 		break;
 	case BPF_BTF_GET_FD_BY_ID:
 		err = bpf_btf_get_fd_by_id(&attr);
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 392b821bb27c..da8b030c82ce 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -18674,7 +18674,7 @@ struct btf *bpf_get_btf_vmlinux(void)
 	return btf_vmlinux;
 }
 
-int bpf_check(struct bpf_prog **prog, union bpf_attr *attr, bpfptr_t uattr)
+int bpf_check(struct bpf_prog **prog, union bpf_attr *attr, bpfptr_t uattr, __u32 uattr_size)
 {
 	u64 start_time = ktime_get_ns();
 	struct bpf_verifier_env *env;
@@ -18841,6 +18841,12 @@ int bpf_check(struct bpf_prog **prog, union bpf_attr *attr, bpfptr_t uattr)
 	env->prog->aux->verified_insns = env->insn_processed;
 
 	bpf_vlog_finalize(log);
+	if (uattr_size >= offsetofend(union bpf_attr, log_true_size) &&
+	    copy_to_bpfptr_offset(uattr, offsetof(union bpf_attr, log_true_size),
+				  &log->len_max, sizeof(log->len_max))) {
+		ret = -EFAULT;
+		goto err_release_maps;
+	}
 	if (bpf_vlog_truncated(log))
 		ret = -ENOSPC;
 	if (log->level && log->level != BPF_LOG_KERNEL && !log->ubuf)
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index d6c5a022ae28..3823100b7934 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -1407,6 +1407,11 @@ union bpf_attr {
 		__aligned_u64	fd_array;	/* array of FDs */
 		__aligned_u64	core_relos;
 		__u32		core_relo_rec_size; /* sizeof(struct bpf_core_relo) */
+		/* output: actual total log contents size (including terminating zero).
+		 * It could be both larger than original log_size (if log was
+		 * truncated), or smaller (if log buffer wasn't filled completely).
+		 */
+		__u32		log_true_size;
 	};
 
 	struct { /* anonymous struct used by BPF_OBJ_* commands */
@@ -1492,6 +1497,11 @@ union bpf_attr {
 		__u32		btf_size;
 		__u32		btf_log_size;
 		__u32		btf_log_level;
+		/* output: actual total log contents size (including terminating zero).
+		 * It could be both larger than original log_size (if log was
+		 * truncated), or smaller (if log buffer wasn't filled completely).
+		 */
+		__u32		btf_log_true_size;
 	};
 
 	struct {
@@ -1513,7 +1523,7 @@ union bpf_attr {
 	struct { /* struct used by BPF_LINK_CREATE command */
 		union {
 			__u32		prog_fd;	/* eBPF program to attach */
-			__u32		map_fd;		/* eBPF struct_ops to attach */
+			__u32		map_fd;		/* struct_ops to attach */
 		};
 		union {
 			__u32		target_fd;	/* object to attach to */
-- 
2.34.1



* [PATCH v4 bpf-next 13/19] bpf: simplify internal verifier log interface
  2023-04-06 23:41 [PATCH v4 bpf-next 00/19] BPF verifier rotating log Andrii Nakryiko
                   ` (11 preceding siblings ...)
  2023-04-06 23:41 ` [PATCH v4 bpf-next 12/19] bpf: add log_true_size output field to return necessary log buffer size Andrii Nakryiko
@ 2023-04-06 23:41 ` Andrii Nakryiko
  2023-04-06 23:42 ` [PATCH v4 bpf-next 14/19] bpf: relax log_buf NULL conditions when log_level>0 is requested Andrii Nakryiko
                   ` (8 subsequent siblings)
  21 siblings, 0 replies; 33+ messages in thread
From: Andrii Nakryiko @ 2023-04-06 23:41 UTC (permalink / raw)
  To: bpf, ast, daniel, martin.lau, lmb, timo, robin.goegge; +Cc: andrii, kernel-team

Simplify the internal verifier log API down to bpf_vlog_init() and
bpf_vlog_finalize(). The former handles input argument validation in
one place and makes it easier to change. The latter subsumes -ENOSPC
(truncation) and -EFAULT handling and simplifies both callers' code
(bpf_check() and btf_parse()).

For btf_parse(), this patch also makes sure that verifier log
finalization happens even if there is some error condition during BTF
verification process prior to normal finalization step.

Acked-by: Lorenz Bauer <lmb@isovalent.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
 include/linux/bpf_verifier.h | 13 ++------
 kernel/bpf/btf.c             | 65 ++++++++++++++++++------------------
 kernel/bpf/log.c             | 48 +++++++++++++++++++++-----
 kernel/bpf/verifier.c        | 39 +++++++++-------------
 4 files changed, 90 insertions(+), 75 deletions(-)

diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index 98d2eb382dbb..f03852b89d28 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -518,13 +518,6 @@ struct bpf_verifier_log {
 #define BPF_LOG_MIN_ALIGNMENT 8U
 #define BPF_LOG_ALIGNMENT 40U
 
-static inline bool bpf_verifier_log_full(const struct bpf_verifier_log *log)
-{
-	if (log->level & BPF_LOG_FIXED)
-		return log->end_pos >= log->len_total;
-	return false;
-}
-
 static inline bool bpf_verifier_log_needed(const struct bpf_verifier_log *log)
 {
 	return log && log->level;
@@ -612,16 +605,16 @@ struct bpf_verifier_env {
 	char type_str_buf[TYPE_STR_BUF_LEN];
 };
 
-bool bpf_verifier_log_attr_valid(const struct bpf_verifier_log *log);
 __printf(2, 0) void bpf_verifier_vlog(struct bpf_verifier_log *log,
 				      const char *fmt, va_list args);
 __printf(2, 3) void bpf_verifier_log_write(struct bpf_verifier_env *env,
 					   const char *fmt, ...);
 __printf(2, 3) void bpf_log(struct bpf_verifier_log *log,
 			    const char *fmt, ...);
+int bpf_vlog_init(struct bpf_verifier_log *log, u32 log_level,
+		  char __user *log_buf, u32 log_size);
 void bpf_vlog_reset(struct bpf_verifier_log *log, u64 new_pos);
-void bpf_vlog_finalize(struct bpf_verifier_log *log);
-bool bpf_vlog_truncated(const struct bpf_verifier_log *log);
+int bpf_vlog_finalize(struct bpf_verifier_log *log, u32 *log_size_actual);
 
 static inline struct bpf_func_state *cur_func(struct bpf_verifier_env *env)
 {
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index 0748cf4b8ab6..ffc31a1c84af 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -5504,16 +5504,30 @@ static int btf_check_type_tags(struct btf_verifier_env *env,
 	return 0;
 }
 
+static int finalize_log(struct bpf_verifier_log *log, bpfptr_t uattr, u32 uattr_size)
+{
+	u32 log_true_size;
+	int err;
+
+	err = bpf_vlog_finalize(log, &log_true_size);
+
+	if (uattr_size >= offsetofend(union bpf_attr, btf_log_true_size) &&
+	    copy_to_bpfptr_offset(uattr, offsetof(union bpf_attr, btf_log_true_size),
+				  &log_true_size, sizeof(log_true_size)))
+		err = -EFAULT;
+
+	return err;
+}
+
 static struct btf *btf_parse(const union bpf_attr *attr, bpfptr_t uattr, u32 uattr_size)
 {
 	bpfptr_t btf_data = make_bpfptr(attr->btf, uattr.is_kernel);
 	char __user *log_ubuf = u64_to_user_ptr(attr->btf_log_buf);
 	struct btf_struct_metas *struct_meta_tab;
 	struct btf_verifier_env *env = NULL;
-	struct bpf_verifier_log *log;
 	struct btf *btf = NULL;
 	u8 *data;
-	int err;
+	int err, ret;
 
 	if (attr->btf_size > BTF_MAX_SIZE)
 		return ERR_PTR(-E2BIG);
@@ -5522,21 +5536,13 @@ static struct btf *btf_parse(const union bpf_attr *attr, bpfptr_t uattr, u32 uat
 	if (!env)
 		return ERR_PTR(-ENOMEM);
 
-	log = &env->log;
-	if (attr->btf_log_level || log_ubuf || attr->btf_log_size) {
-		/* user requested verbose verifier output
-		 * and supplied buffer to store the verification trace
-		 */
-		log->level = attr->btf_log_level;
-		log->ubuf = log_ubuf;
-		log->len_total = attr->btf_log_size;
-
-		/* log attributes have to be sane */
-		if (!bpf_verifier_log_attr_valid(log)) {
-			err = -EINVAL;
-			goto errout;
-		}
-	}
+	/* user could have requested verbose verifier output
+	 * and supplied buffer to store the verification trace
+	 */
+	err = bpf_vlog_init(&env->log, attr->btf_log_level,
+			    log_ubuf, attr->btf_log_size);
+	if (err)
+		goto errout_free;
 
 	btf = kzalloc(sizeof(*btf), GFP_KERNEL | __GFP_NOWARN);
 	if (!btf) {
@@ -5577,7 +5583,7 @@ static struct btf *btf_parse(const union bpf_attr *attr, bpfptr_t uattr, u32 uat
 	if (err)
 		goto errout;
 
-	struct_meta_tab = btf_parse_struct_metas(log, btf);
+	struct_meta_tab = btf_parse_struct_metas(&env->log, btf);
 	if (IS_ERR(struct_meta_tab)) {
 		err = PTR_ERR(struct_meta_tab);
 		goto errout;
@@ -5594,21 +5600,9 @@ static struct btf *btf_parse(const union bpf_attr *attr, bpfptr_t uattr, u32 uat
 		}
 	}
 
-	bpf_vlog_finalize(log);
-	if (uattr_size >= offsetofend(union bpf_attr, btf_log_true_size) &&
-	    copy_to_bpfptr_offset(uattr, offsetof(union bpf_attr, btf_log_true_size),
-				  &log->len_max, sizeof(log->len_max))) {
-		err = -EFAULT;
-		goto errout_meta;
-	}
-	if (bpf_vlog_truncated(log)) {
-		err = -ENOSPC;
-		goto errout_meta;
-	}
-	if (log->level && log->level != BPF_LOG_KERNEL && !log->ubuf) {
-		err = -EFAULT;
-		goto errout_meta;
-	}
+	err = finalize_log(&env->log, uattr, uattr_size);
+	if (err)
+		goto errout_free;
 
 	btf_verifier_env_free(env);
 	refcount_set(&btf->refcnt, 1);
@@ -5617,6 +5611,11 @@ static struct btf *btf_parse(const union bpf_attr *attr, bpfptr_t uattr, u32 uat
 errout_meta:
 	btf_free_struct_meta_tab(btf);
 errout:
+	/* overwrite err with -ENOSPC or -EFAULT */
+	ret = finalize_log(&env->log, uattr, uattr_size);
+	if (ret)
+		err = ret;
+errout_free:
 	btf_verifier_env_free(env);
 	if (btf)
 		btf_free(btf);
diff --git a/kernel/bpf/log.c b/kernel/bpf/log.c
index 47bea2fad6fe..1fae2c5d7ae4 100644
--- a/kernel/bpf/log.c
+++ b/kernel/bpf/log.c
@@ -10,12 +10,26 @@
 #include <linux/bpf_verifier.h>
 #include <linux/math64.h>
 
-bool bpf_verifier_log_attr_valid(const struct bpf_verifier_log *log)
+static bool bpf_verifier_log_attr_valid(const struct bpf_verifier_log *log)
 {
 	return log->len_total > 0 && log->len_total <= UINT_MAX >> 2 &&
 	       log->level && log->ubuf && !(log->level & ~BPF_LOG_MASK);
 }
 
+int bpf_vlog_init(struct bpf_verifier_log *log, u32 log_level,
+		  char __user *log_buf, u32 log_size)
+{
+	log->level = log_level;
+	log->ubuf = log_buf;
+	log->len_total = log_size;
+
+	/* log attributes have to be sane */
+	if (!bpf_verifier_log_attr_valid(log))
+		return -EINVAL;
+
+	return 0;
+}
+
 static void bpf_vlog_update_len_max(struct bpf_verifier_log *log, u32 add_len)
 {
 	/* add_len includes terminal \0, so no need for +1. */
@@ -197,24 +211,25 @@ static int bpf_vlog_reverse_ubuf(struct bpf_verifier_log *log, int start, int en
 	return 0;
 }
 
-bool bpf_vlog_truncated(const struct bpf_verifier_log *log)
+static bool bpf_vlog_truncated(const struct bpf_verifier_log *log)
 {
 	return log->len_max > log->len_total;
 }
 
-void bpf_vlog_finalize(struct bpf_verifier_log *log)
+int bpf_vlog_finalize(struct bpf_verifier_log *log, u32 *log_size_actual)
 {
 	u32 sublen;
 	int err;
 
-	if (!log || !log->level || !log->ubuf)
-		return;
-	if ((log->level & BPF_LOG_FIXED) || log->level == BPF_LOG_KERNEL)
-		return;
+	*log_size_actual = 0;
+	if (!log || log->level == 0 || log->level == BPF_LOG_KERNEL)
+		return 0;
 
+	if (!log->ubuf)
+		goto skip_log_rotate;
 	/* If we never truncated log, there is nothing to move around. */
-	if (log->start_pos == 0)
-		return;
+	if ((log->level & BPF_LOG_FIXED) || log->start_pos == 0)
+		goto skip_log_rotate;
 
 	/* Otherwise we need to rotate log contents to make it start from the
 	 * buffer beginning and be a continuous zero-terminated string. Note
@@ -257,6 +272,21 @@ void bpf_vlog_finalize(struct bpf_verifier_log *log)
 	err = err ?: bpf_vlog_reverse_ubuf(log, sublen, log->len_total);
 	if (err)
 		log->ubuf = NULL;
+
+skip_log_rotate:
+	*log_size_actual = log->len_max;
+
+	/* properly initialized log has either both ubuf!=NULL and len_total>0
+	 * or ubuf==NULL and len_total==0, so if this condition doesn't hold,
+	 * we got a fault somewhere along the way, so report it back
+	 */
+	if (!!log->ubuf != !!log->len_total)
+		return -EFAULT;
+
+	if (bpf_vlog_truncated(log))
+		return -ENOSPC;
+
+	return 0;
 }
 
 /* log_level controls verbosity level of eBPF verifier.
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index da8b030c82ce..ff10a7214861 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -18678,8 +18678,8 @@ int bpf_check(struct bpf_prog **prog, union bpf_attr *attr, bpfptr_t uattr, __u3
 {
 	u64 start_time = ktime_get_ns();
 	struct bpf_verifier_env *env;
-	struct bpf_verifier_log *log;
-	int i, len, ret = -EINVAL;
+	int i, len, ret = -EINVAL, err;
+	u32 log_true_size;
 	bool is_priv;
 
 	/* no program is valid */
@@ -18692,7 +18692,6 @@ int bpf_check(struct bpf_prog **prog, union bpf_attr *attr, bpfptr_t uattr, __u3
 	env = kzalloc(sizeof(struct bpf_verifier_env), GFP_KERNEL);
 	if (!env)
 		return -ENOMEM;
-	log = &env->log;
 
 	len = (*prog)->len;
 	env->insn_aux_data =
@@ -18713,20 +18712,14 @@ int bpf_check(struct bpf_prog **prog, union bpf_attr *attr, bpfptr_t uattr, __u3
 	if (!is_priv)
 		mutex_lock(&bpf_verifier_lock);
 
-	if (attr->log_level || attr->log_buf || attr->log_size) {
-		/* user requested verbose verifier output
-		 * and supplied buffer to store the verification trace
-		 */
-		log->level = attr->log_level;
-		log->ubuf = (char __user *) (unsigned long) attr->log_buf;
-		log->len_total = attr->log_size;
-
-		/* log attributes have to be sane */
-		if (!bpf_verifier_log_attr_valid(log)) {
-			ret = -EINVAL;
-			goto err_unlock;
-		}
-	}
+	/* user could have requested verbose verifier output
+	 * and supplied buffer to store the verification trace
+	 */
+	ret = bpf_vlog_init(&env->log, attr->log_level,
+			    (char __user *) (unsigned long) attr->log_buf,
+			    attr->log_size);
+	if (ret)
+		goto err_unlock;
 
 	mark_verifier_state_clean(env);
 
@@ -18840,17 +18833,17 @@ int bpf_check(struct bpf_prog **prog, union bpf_attr *attr, bpfptr_t uattr, __u3
 	print_verification_stats(env);
 	env->prog->aux->verified_insns = env->insn_processed;
 
-	bpf_vlog_finalize(log);
+	/* preserve original error even if log finalization is successful */
+	err = bpf_vlog_finalize(&env->log, &log_true_size);
+	if (err)
+		ret = err;
+
 	if (uattr_size >= offsetofend(union bpf_attr, log_true_size) &&
 	    copy_to_bpfptr_offset(uattr, offsetof(union bpf_attr, log_true_size),
-				  &log->len_max, sizeof(log->len_max))) {
+				  &log_true_size, sizeof(log_true_size))) {
 		ret = -EFAULT;
 		goto err_release_maps;
 	}
-	if (bpf_vlog_truncated(log))
-		ret = -ENOSPC;
-	if (log->level && log->level != BPF_LOG_KERNEL && !log->ubuf)
-		ret = -EFAULT;
 
 	if (ret)
 		goto err_release_maps;
-- 
2.34.1



* [PATCH v4 bpf-next 14/19] bpf: relax log_buf NULL conditions when log_level>0 is requested
  2023-04-06 23:41 [PATCH v4 bpf-next 00/19] BPF verifier rotating log Andrii Nakryiko
                   ` (12 preceding siblings ...)
  2023-04-06 23:41 ` [PATCH v4 bpf-next 13/19] bpf: simplify internal verifier log interface Andrii Nakryiko
@ 2023-04-06 23:42 ` Andrii Nakryiko
  2023-04-11 10:44   ` Lorenz Bauer
  2023-04-06 23:42 ` [PATCH v4 bpf-next 15/19] libbpf: wire through log_true_size returned from kernel for BPF_PROG_LOAD Andrii Nakryiko
                   ` (7 subsequent siblings)
  21 siblings, 1 reply; 33+ messages in thread
From: Andrii Nakryiko @ 2023-04-06 23:42 UTC (permalink / raw)
  To: bpf, ast, daniel, martin.lau, lmb, timo, robin.goegge; +Cc: andrii, kernel-team

Drop the log_size > 0 and log_buf != NULL conditions when log_level > 0
is requested. This allows users to request log_true_size of a full log
without providing an actual (even if small) log buffer. The verifier
log handling code was mostly ready to handle a NULL log->ubuf, so only
a few small changes were necessary to prevent a NULL log->ubuf from
causing problems.

Note that if the user provided a NULL log_buf with log_level > 0, we
don't consider this a log truncation, and thus won't return -ENOSPC.

We also enforce that either (log_buf==NULL && log_size==0) or
(log_buf!=NULL && log_size>0).

Suggested-by: Lorenz Bauer <lmb@isovalent.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
 kernel/bpf/log.c | 42 +++++++++++++++++++++++++-----------------
 1 file changed, 25 insertions(+), 17 deletions(-)

diff --git a/kernel/bpf/log.c b/kernel/bpf/log.c
index 1fae2c5d7ae4..046ddff37a76 100644
--- a/kernel/bpf/log.c
+++ b/kernel/bpf/log.c
@@ -12,8 +12,17 @@
 
 static bool bpf_verifier_log_attr_valid(const struct bpf_verifier_log *log)
 {
-	return log->len_total > 0 && log->len_total <= UINT_MAX >> 2 &&
-	       log->level && log->ubuf && !(log->level & ~BPF_LOG_MASK);
+	/* ubuf and len_total should both be specified (or not) together */
+	if (!!log->ubuf != !!log->len_total)
+		return false;
+	/* log buf without log_level is meaningless */
+	if (log->ubuf && log->level == 0)
+		return false;
+	if (log->level & ~BPF_LOG_MASK)
+		return false;
+	if (log->len_total > UINT_MAX >> 2)
+		return false;
+	return true;
 }
 
 int bpf_vlog_init(struct bpf_verifier_log *log, u32 log_level,
@@ -89,9 +98,15 @@ void bpf_verifier_vlog(struct bpf_verifier_log *log, const char *fmt,
 			new_start = new_end - log->len_total;
 		else
 			new_start = log->start_pos;
+
+		log->start_pos = new_start;
+		log->end_pos = new_end - 1; /* don't count terminating '\0' */
+
+		if (!log->ubuf)
+			return;
+
 		new_n = min(n, log->len_total);
 		cur_pos = new_end - new_n;
-
 		div_u64_rem(cur_pos, log->len_total, &buf_start);
 		div_u64_rem(new_end, log->len_total, &buf_end);
 		/* new_end and buf_end are exclusive indices, so if buf_end is
@@ -101,12 +116,6 @@ void bpf_verifier_vlog(struct bpf_verifier_log *log, const char *fmt,
 		if (buf_end == 0)
 			buf_end = log->len_total;
 
-		log->start_pos = new_start;
-		log->end_pos = new_end - 1; /* don't count terminating '\0' */
-
-		if (!log->ubuf)
-			return;
-
 		/* if buf_start > buf_end, we wrapped around;
 		 * if buf_start == buf_end, then we fill ubuf completely; we
 		 * can't have buf_start == buf_end to mean that there is
@@ -156,12 +165,15 @@ void bpf_vlog_reset(struct bpf_verifier_log *log, u64 new_pos)
 	if (log->end_pos < log->start_pos)
 		log->start_pos = log->end_pos;
 
+	if (!log->ubuf)
+		return;
+
 	if (log->level & BPF_LOG_FIXED)
 		pos = log->end_pos + 1;
 	else
 		div_u64_rem(new_pos, log->len_total, &pos);
 
-	if (log->ubuf && pos < log->len_total && put_user(zero, log->ubuf + pos))
+	if (pos < log->len_total && put_user(zero, log->ubuf + pos))
 		log->ubuf = NULL;
 }
 
@@ -211,11 +223,6 @@ static int bpf_vlog_reverse_ubuf(struct bpf_verifier_log *log, int start, int en
 	return 0;
 }
 
-static bool bpf_vlog_truncated(const struct bpf_verifier_log *log)
-{
-	return log->len_max > log->len_total;
-}
-
 int bpf_vlog_finalize(struct bpf_verifier_log *log, u32 *log_size_actual)
 {
 	u32 sublen;
@@ -228,7 +235,7 @@ int bpf_vlog_finalize(struct bpf_verifier_log *log, u32 *log_size_actual)
 	if (!log->ubuf)
 		goto skip_log_rotate;
 	/* If we never truncated log, there is nothing to move around. */
-	if ((log->level & BPF_LOG_FIXED) || log->start_pos == 0)
+	if (log->start_pos == 0)
 		goto skip_log_rotate;
 
 	/* Otherwise we need to rotate log contents to make it start from the
@@ -283,7 +290,8 @@ int bpf_vlog_finalize(struct bpf_verifier_log *log, u32 *log_size_actual)
 	if (!!log->ubuf != !!log->len_total)
 		return -EFAULT;
 
-	if (bpf_vlog_truncated(log))
+	/* did truncation actually happen? */
+	if (log->ubuf && log->len_max > log->len_total)
 		return -ENOSPC;
 
 	return 0;
-- 
2.34.1



* [PATCH v4 bpf-next 15/19] libbpf: wire through log_true_size returned from kernel for BPF_PROG_LOAD
  2023-04-06 23:41 [PATCH v4 bpf-next 00/19] BPF verifier rotating log Andrii Nakryiko
                   ` (13 preceding siblings ...)
  2023-04-06 23:42 ` [PATCH v4 bpf-next 14/19] bpf: relax log_buf NULL conditions when log_level>0 is requested Andrii Nakryiko
@ 2023-04-06 23:42 ` Andrii Nakryiko
  2023-04-06 23:42 ` [PATCH v4 bpf-next 16/19] libbpf: wire through log_true_size for bpf_btf_load() API Andrii Nakryiko
                   ` (6 subsequent siblings)
  21 siblings, 0 replies; 33+ messages in thread
From: Andrii Nakryiko @ 2023-04-06 23:42 UTC (permalink / raw)
  To: bpf, ast, daniel, martin.lau, lmb, timo, robin.goegge; +Cc: andrii, kernel-team

Add output-only log_true_size field to bpf_prog_load_opts to return
bpf_attr->log_true_size value back from bpf() syscall.

Note that we have to drop the const modifier from opts in bpf_prog_load().
This could potentially cause a compilation error for some users. But the
usual practice is to define bpf_prog_load_opts
as a local variable next to the bpf_prog_load() call and pass a pointer
to it, so const vs non-const makes no difference and won't even come up
in most (if not all) cases.

There are no runtime or ABI backwards/forward compatibility issues at all.
If the user provides an old struct bpf_prog_load_opts, libbpf won't set the
new field. If an old libbpf is given a new bpf_prog_load_opts, nothing will
happen either, as old libbpf doesn't yet know about this new field.

Adding a new variant of bpf_prog_load() just for this seems like
unnecessary overkill. Corroborating evidence is the fact that the entire
selftests/bpf code base required no adjustment whatsoever.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
 tools/lib/bpf/bpf.c |  7 +++++--
 tools/lib/bpf/bpf.h | 11 +++++++++--
 2 files changed, 14 insertions(+), 4 deletions(-)

diff --git a/tools/lib/bpf/bpf.c b/tools/lib/bpf/bpf.c
index f1d04ee14d83..a031de0a707a 100644
--- a/tools/lib/bpf/bpf.c
+++ b/tools/lib/bpf/bpf.c
@@ -230,9 +230,9 @@ alloc_zero_tailing_info(const void *orecord, __u32 cnt,
 int bpf_prog_load(enum bpf_prog_type prog_type,
 		  const char *prog_name, const char *license,
 		  const struct bpf_insn *insns, size_t insn_cnt,
-		  const struct bpf_prog_load_opts *opts)
+		  struct bpf_prog_load_opts *opts)
 {
-	const size_t attr_sz = offsetofend(union bpf_attr, fd_array);
+	const size_t attr_sz = offsetofend(union bpf_attr, log_true_size);
 	void *finfo = NULL, *linfo = NULL;
 	const char *func_info, *line_info;
 	__u32 log_size, log_level, attach_prog_fd, attach_btf_obj_fd;
@@ -312,6 +312,7 @@ int bpf_prog_load(enum bpf_prog_type prog_type,
 	}
 
 	fd = sys_bpf_prog_load(&attr, attr_sz, attempts);
+	OPTS_SET(opts, log_true_size, attr.log_true_size);
 	if (fd >= 0)
 		return fd;
 
@@ -352,6 +353,7 @@ int bpf_prog_load(enum bpf_prog_type prog_type,
 		}
 
 		fd = sys_bpf_prog_load(&attr, attr_sz, attempts);
+		OPTS_SET(opts, log_true_size, attr.log_true_size);
 		if (fd >= 0)
 			goto done;
 	}
@@ -366,6 +368,7 @@ int bpf_prog_load(enum bpf_prog_type prog_type,
 		attr.log_level = 1;
 
 		fd = sys_bpf_prog_load(&attr, attr_sz, attempts);
+		OPTS_SET(opts, log_true_size, attr.log_true_size);
 	}
 done:
 	/* free() doesn't affect errno, so we don't need to restore it */
diff --git a/tools/lib/bpf/bpf.h b/tools/lib/bpf/bpf.h
index b073e73439ef..cfa0cdfa50f8 100644
--- a/tools/lib/bpf/bpf.h
+++ b/tools/lib/bpf/bpf.h
@@ -96,13 +96,20 @@ struct bpf_prog_load_opts {
 	__u32 log_level;
 	__u32 log_size;
 	char *log_buf;
+	/* output: actual total log contents size (including terminating zero).
+	 * It could be both larger than original log_size (if log was
+	 * truncated), or smaller (if log buffer wasn't filled completely).
+	 * If kernel doesn't support this feature, log_size is left unchanged.
+	 */
+	__u32 log_true_size;
+	size_t :0;
 };
-#define bpf_prog_load_opts__last_field log_buf
+#define bpf_prog_load_opts__last_field log_true_size
 
 LIBBPF_API int bpf_prog_load(enum bpf_prog_type prog_type,
 			     const char *prog_name, const char *license,
 			     const struct bpf_insn *insns, size_t insn_cnt,
-			     const struct bpf_prog_load_opts *opts);
+			     struct bpf_prog_load_opts *opts);
 
 /* Flags to direct loading requirements */
 #define MAPS_RELAX_COMPAT	0x01
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH v4 bpf-next 16/19] libbpf: wire through log_true_size for bpf_btf_load() API
  2023-04-06 23:41 [PATCH v4 bpf-next 00/19] BPF verifier rotating log Andrii Nakryiko
                   ` (14 preceding siblings ...)
  2023-04-06 23:42 ` [PATCH v4 bpf-next 15/19] libbpf: wire through log_true_size returned from kernel for BPF_PROG_LOAD Andrii Nakryiko
@ 2023-04-06 23:42 ` Andrii Nakryiko
  2023-04-06 23:42 ` [PATCH v4 bpf-next 17/19] selftests/bpf: add tests to validate log_true_size feature Andrii Nakryiko
                   ` (5 subsequent siblings)
  21 siblings, 0 replies; 33+ messages in thread
From: Andrii Nakryiko @ 2023-04-06 23:42 UTC (permalink / raw)
  To: bpf, ast, daniel, martin.lau, lmb, timo, robin.goegge; +Cc: andrii, kernel-team

Similar to what we did for bpf_prog_load() in previous patch, wire
returning of log_true_size value from kernel back to the user through
OPTS out field.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
 tools/lib/bpf/bpf.c |  6 ++++--
 tools/lib/bpf/bpf.h | 11 +++++++++--
 2 files changed, 13 insertions(+), 4 deletions(-)

diff --git a/tools/lib/bpf/bpf.c b/tools/lib/bpf/bpf.c
index a031de0a707a..128ac723c4ea 100644
--- a/tools/lib/bpf/bpf.c
+++ b/tools/lib/bpf/bpf.c
@@ -1083,9 +1083,9 @@ int bpf_raw_tracepoint_open(const char *name, int prog_fd)
 	return libbpf_err_errno(fd);
 }
 
-int bpf_btf_load(const void *btf_data, size_t btf_size, const struct bpf_btf_load_opts *opts)
+int bpf_btf_load(const void *btf_data, size_t btf_size, struct bpf_btf_load_opts *opts)
 {
-	const size_t attr_sz = offsetofend(union bpf_attr, btf_log_level);
+	const size_t attr_sz = offsetofend(union bpf_attr, btf_log_true_size);
 	union bpf_attr attr;
 	char *log_buf;
 	size_t log_size;
@@ -1128,6 +1128,8 @@ int bpf_btf_load(const void *btf_data, size_t btf_size, const struct bpf_btf_loa
 		attr.btf_log_level = 1;
 		fd = sys_bpf_fd(BPF_BTF_LOAD, &attr, attr_sz);
 	}
+
+	OPTS_SET(opts, log_true_size, attr.btf_log_true_size);
 	return libbpf_err_errno(fd);
 }
 
diff --git a/tools/lib/bpf/bpf.h b/tools/lib/bpf/bpf.h
index cfa0cdfa50f8..a2c091389b18 100644
--- a/tools/lib/bpf/bpf.h
+++ b/tools/lib/bpf/bpf.h
@@ -124,11 +124,18 @@ struct bpf_btf_load_opts {
 	char *log_buf;
 	__u32 log_level;
 	__u32 log_size;
+	/* output: actual total log contents size (including terminating zero).
+	 * It could be both larger than original log_size (if log was
+	 * truncated), or smaller (if log buffer wasn't filled completely).
+	 * If kernel doesn't support this feature, log_size is left unchanged.
+	 */
+	__u32 log_true_size;
+	size_t :0;
 };
-#define bpf_btf_load_opts__last_field log_size
+#define bpf_btf_load_opts__last_field log_true_size
 
 LIBBPF_API int bpf_btf_load(const void *btf_data, size_t btf_size,
-			    const struct bpf_btf_load_opts *opts);
+			    struct bpf_btf_load_opts *opts);
 
 LIBBPF_API int bpf_map_update_elem(int fd, const void *key, const void *value,
 				   __u64 flags);
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH v4 bpf-next 17/19] selftests/bpf: add tests to validate log_true_size feature
  2023-04-06 23:41 [PATCH v4 bpf-next 00/19] BPF verifier rotating log Andrii Nakryiko
                   ` (15 preceding siblings ...)
  2023-04-06 23:42 ` [PATCH v4 bpf-next 16/19] libbpf: wire through log_true_size for bpf_btf_load() API Andrii Nakryiko
@ 2023-04-06 23:42 ` Andrii Nakryiko
  2023-04-11 10:44   ` Lorenz Bauer
  2023-04-06 23:42 ` [PATCH v4 bpf-next 18/19] selftests/bpf: add testing of log_buf==NULL condition for BPF_PROG_LOAD Andrii Nakryiko
                   ` (4 subsequent siblings)
  21 siblings, 1 reply; 33+ messages in thread
From: Andrii Nakryiko @ 2023-04-06 23:42 UTC (permalink / raw)
  To: bpf, ast, daniel, martin.lau, lmb, timo, robin.goegge; +Cc: andrii, kernel-team

Add additional test cases validating that log_true_size is consistent
between fixed and rotating log modes, and that log_true_size can be
used *exactly* without causing -ENOSPC, while a log buffer just one byte
shorter would cause -ENOSPC.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
 .../selftests/bpf/prog_tests/verifier_log.c   | 92 +++++++++++++++----
 1 file changed, 76 insertions(+), 16 deletions(-)

diff --git a/tools/testing/selftests/bpf/prog_tests/verifier_log.c b/tools/testing/selftests/bpf/prog_tests/verifier_log.c
index 3284108a6ce8..2ec82fc60c03 100644
--- a/tools/testing/selftests/bpf/prog_tests/verifier_log.c
+++ b/tools/testing/selftests/bpf/prog_tests/verifier_log.c
@@ -18,25 +18,41 @@ static bool check_prog_load(int prog_fd, bool expect_err, const char *tag)
 		if (!ASSERT_GT(prog_fd, 0, tag))
 			return false;
 	}
+	if (prog_fd >= 0)
+		close(prog_fd);
 	return true;
 }
 
+static struct {
+	/* strategically placed before others to avoid accidental modification by kernel */
+	char filler[1024];
+	char buf[1024];
+	/* strategically placed after buf[] to catch more accidental corruptions */
+	char reference[1024];
+} logs;
+static const struct bpf_insn *insns;
+static size_t insn_cnt;
+
+static int load_prog(struct bpf_prog_load_opts *opts, bool expect_load_error)
+{
+	int prog_fd;
+
+	prog_fd = bpf_prog_load(BPF_PROG_TYPE_RAW_TRACEPOINT, "log_prog",
+				"GPL", insns, insn_cnt, opts);
+	check_prog_load(prog_fd, expect_load_error, "prog_load");
+
+	return prog_fd;
+}
+
 static void verif_log_subtest(const char *name, bool expect_load_error, int log_level)
 {
 	LIBBPF_OPTS(bpf_prog_load_opts, opts);
-	struct {
-		/* strategically placed before others to avoid accidental modification by kernel */
-		char filler[1024];
-		char buf[1024];
-		/* strategically placed after buf[] to catch more accidental corruptions */
-		char reference[1024];
-	} logs;
 	char *exp_log, prog_name[16], op_name[32];
 	struct test_log_buf *skel;
 	struct bpf_program *prog;
-	const struct bpf_insn *insns;
-	size_t insn_cnt, fixed_log_sz;
-	int i, mode, err, prog_fd;
+	size_t fixed_log_sz;
+	__u32 log_true_sz_fixed, log_true_sz_rolling;
+	int i, mode, err, prog_fd, res;
 
 	skel = test_log_buf__open();
 	if (!ASSERT_OK_PTR(skel, "skel_open"))
@@ -61,11 +77,7 @@ static void verif_log_subtest(const char *name, bool expect_load_error, int log_
 	opts.log_buf = logs.reference;
 	opts.log_size = sizeof(logs.reference);
 	opts.log_level = log_level | 8 /* BPF_LOG_FIXED */;
-	prog_fd = bpf_prog_load(BPF_PROG_TYPE_RAW_TRACEPOINT, "log_fixed",
-				"GPL", insns, insn_cnt, &opts);
-	if (!check_prog_load(prog_fd, expect_load_error, "fixed_buf_prog_load"))
-		goto cleanup;
-	close(prog_fd);
+	load_prog(&opts, expect_load_error);
 
 	fixed_log_sz = strlen(logs.reference) + 1;
 	if (!ASSERT_GT(fixed_log_sz, 50, "fixed_log_sz"))
@@ -89,7 +101,7 @@ static void verif_log_subtest(const char *name, bool expect_load_error, int log_
 		opts.log_level = log_level | 8; /* fixed-length log */
 		opts.log_size = 25;
 
-		prog_fd = bpf_prog_load(BPF_PROG_TYPE_RAW_TRACEPOINT, "log_fixed50",
+		prog_fd = bpf_prog_load(BPF_PROG_TYPE_RAW_TRACEPOINT, "log_fixed25",
 					"GPL", insns, insn_cnt, &opts);
 		if (!ASSERT_EQ(prog_fd, -ENOSPC, "unexpected_log_fixed_prog_load_result")) {
 			if (prog_fd >= 0)
@@ -162,6 +174,54 @@ static void verif_log_subtest(const char *name, bool expect_load_error, int log_
 		}
 	}
 
+	/* (FIXED) get actual log size */
+	opts.log_buf = logs.buf;
+	opts.log_level = log_level | 8; /* BPF_LOG_FIXED */
+	opts.log_size = sizeof(logs.buf);
+	res = load_prog(&opts, expect_load_error);
+	ASSERT_NEQ(res, -ENOSPC, "prog_load_res_fixed");
+
+	log_true_sz_fixed = opts.log_true_size;
+	ASSERT_GT(log_true_sz_fixed, 0, "log_true_sz_fixed");
+
+	/* (ROLLING) get actual log size */
+	opts.log_buf = logs.buf;
+	opts.log_level = log_level;
+	opts.log_size = sizeof(logs.buf);
+	res = load_prog(&opts, expect_load_error);
+	ASSERT_NEQ(res, -ENOSPC, "prog_load_res_rolling");
+
+	log_true_sz_rolling = opts.log_true_size;
+	ASSERT_EQ(log_true_sz_rolling, log_true_sz_fixed, "log_true_sz_eq");
+
+	/* (FIXED) expect -ENOSPC for one byte short log */
+	opts.log_buf = logs.buf;
+	opts.log_level = log_level | 8; /* BPF_LOG_FIXED */
+	opts.log_size = log_true_sz_fixed - 1;
+	res = load_prog(&opts, true /* should fail */);
+	ASSERT_EQ(res, -ENOSPC, "prog_load_res_too_short_fixed");
+
+	/* (FIXED) expect *not* -ENOSPC with exact log_true_size buffer */
+	opts.log_buf = logs.buf;
+	opts.log_level = log_level | 8; /* BPF_LOG_FIXED */
+	opts.log_size = log_true_sz_fixed;
+	res = load_prog(&opts, expect_load_error);
+	ASSERT_NEQ(res, -ENOSPC, "prog_load_res_just_right_fixed");
+
+	/* (ROLLING) expect -ENOSPC for one byte short log */
+	opts.log_buf = logs.buf;
+	opts.log_level = log_level;
+	opts.log_size = log_true_sz_rolling - 1;
+	res = load_prog(&opts, true /* should fail */);
+	ASSERT_EQ(res, -ENOSPC, "prog_load_res_too_short_rolling");
+
+	/* (ROLLING) expect *not* -ENOSPC with exact log_true_size buffer */
+	opts.log_buf = logs.buf;
+	opts.log_level = log_level;
+	opts.log_size = log_true_sz_rolling;
+	res = load_prog(&opts, expect_load_error);
+	ASSERT_NEQ(res, -ENOSPC, "prog_load_res_just_right_rolling");
+
 cleanup:
 	test_log_buf__destroy(skel);
 }
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH v4 bpf-next 18/19] selftests/bpf: add testing of log_buf==NULL condition for BPF_PROG_LOAD
  2023-04-06 23:41 [PATCH v4 bpf-next 00/19] BPF verifier rotating log Andrii Nakryiko
                   ` (16 preceding siblings ...)
  2023-04-06 23:42 ` [PATCH v4 bpf-next 17/19] selftests/bpf: add tests to validate log_true_size feature Andrii Nakryiko
@ 2023-04-06 23:42 ` Andrii Nakryiko
  2023-04-11 10:44   ` Lorenz Bauer
  2023-04-06 23:42 ` [PATCH v4 bpf-next 19/19] selftests/bpf: add verifier log tests for BPF_BTF_LOAD command Andrii Nakryiko
                   ` (3 subsequent siblings)
  21 siblings, 1 reply; 33+ messages in thread
From: Andrii Nakryiko @ 2023-04-06 23:42 UTC (permalink / raw)
  To: bpf, ast, daniel, martin.lau, lmb, timo, robin.goegge; +Cc: andrii, kernel-team

Add a few extra test conditions to validate that it's OK to pass
log_buf==NULL and log_size==0 to the BPF_PROG_LOAD command with the
intent of getting log_true_size without providing a buffer.

Test that log_buf==NULL condition *does not* return -ENOSPC.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
 .../selftests/bpf/prog_tests/verifier_log.c   | 23 +++++++++++++++++++
 1 file changed, 23 insertions(+)

diff --git a/tools/testing/selftests/bpf/prog_tests/verifier_log.c b/tools/testing/selftests/bpf/prog_tests/verifier_log.c
index 2ec82fc60c03..9ae0ac6e3b25 100644
--- a/tools/testing/selftests/bpf/prog_tests/verifier_log.c
+++ b/tools/testing/selftests/bpf/prog_tests/verifier_log.c
@@ -178,26 +178,47 @@ static void verif_log_subtest(const char *name, bool expect_load_error, int log_
 	opts.log_buf = logs.buf;
 	opts.log_level = log_level | 8; /* BPF_LOG_FIXED */
 	opts.log_size = sizeof(logs.buf);
+	opts.log_true_size = 0;
 	res = load_prog(&opts, expect_load_error);
 	ASSERT_NEQ(res, -ENOSPC, "prog_load_res_fixed");
 
 	log_true_sz_fixed = opts.log_true_size;
 	ASSERT_GT(log_true_sz_fixed, 0, "log_true_sz_fixed");
 
+	/* (FIXED, NULL) get actual log size */
+	opts.log_buf = NULL;
+	opts.log_level = log_level | 8; /* BPF_LOG_FIXED */
+	opts.log_size = 0;
+	opts.log_true_size = 0;
+	res = load_prog(&opts, expect_load_error);
+	ASSERT_NEQ(res, -ENOSPC, "prog_load_res_fixed_null");
+	ASSERT_EQ(opts.log_true_size, log_true_sz_fixed, "log_sz_fixed_null_eq");
+
 	/* (ROLLING) get actual log size */
 	opts.log_buf = logs.buf;
 	opts.log_level = log_level;
 	opts.log_size = sizeof(logs.buf);
+	opts.log_true_size = 0;
 	res = load_prog(&opts, expect_load_error);
 	ASSERT_NEQ(res, -ENOSPC, "prog_load_res_rolling");
 
 	log_true_sz_rolling = opts.log_true_size;
 	ASSERT_EQ(log_true_sz_rolling, log_true_sz_fixed, "log_true_sz_eq");
 
+	/* (ROLLING, NULL) get actual log size */
+	opts.log_buf = NULL;
+	opts.log_level = log_level;
+	opts.log_size = 0;
+	opts.log_true_size = 0;
+	res = load_prog(&opts, expect_load_error);
+	ASSERT_NEQ(res, -ENOSPC, "prog_load_res_rolling_null");
+	ASSERT_EQ(opts.log_true_size, log_true_sz_rolling, "log_true_sz_null_eq");
+
 	/* (FIXED) expect -ENOSPC for one byte short log */
 	opts.log_buf = logs.buf;
 	opts.log_level = log_level | 8; /* BPF_LOG_FIXED */
 	opts.log_size = log_true_sz_fixed - 1;
+	opts.log_true_size = 0;
 	res = load_prog(&opts, true /* should fail */);
 	ASSERT_EQ(res, -ENOSPC, "prog_load_res_too_short_fixed");
 
@@ -205,6 +226,7 @@ static void verif_log_subtest(const char *name, bool expect_load_error, int log_
 	opts.log_buf = logs.buf;
 	opts.log_level = log_level | 8; /* BPF_LOG_FIXED */
 	opts.log_size = log_true_sz_fixed;
+	opts.log_true_size = 0;
 	res = load_prog(&opts, expect_load_error);
 	ASSERT_NEQ(res, -ENOSPC, "prog_load_res_just_right_fixed");
 
@@ -219,6 +241,7 @@ static void verif_log_subtest(const char *name, bool expect_load_error, int log_
 	opts.log_buf = logs.buf;
 	opts.log_level = log_level;
 	opts.log_size = log_true_sz_rolling;
+	opts.log_true_size = 0;
 	res = load_prog(&opts, expect_load_error);
 	ASSERT_NEQ(res, -ENOSPC, "prog_load_res_just_right_rolling");
 
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH v4 bpf-next 19/19] selftests/bpf: add verifier log tests for BPF_BTF_LOAD command
  2023-04-06 23:41 [PATCH v4 bpf-next 00/19] BPF verifier rotating log Andrii Nakryiko
                   ` (17 preceding siblings ...)
  2023-04-06 23:42 ` [PATCH v4 bpf-next 18/19] selftests/bpf: add testing of log_buf==NULL condition for BPF_PROG_LOAD Andrii Nakryiko
@ 2023-04-06 23:42 ` Andrii Nakryiko
  2023-04-11 10:44   ` Lorenz Bauer
  2023-04-11 14:03 ` [PATCH v4 bpf-next 00/19] BPF verifier rotating log Lorenz Bauer
                   ` (2 subsequent siblings)
  21 siblings, 1 reply; 33+ messages in thread
From: Andrii Nakryiko @ 2023-04-06 23:42 UTC (permalink / raw)
  To: bpf, ast, daniel, martin.lau, lmb, timo, robin.goegge; +Cc: andrii, kernel-team

Add verifier log tests for the BPF_BTF_LOAD command, which are
conceptually very similar to the BPF_PROG_LOAD tests. These are two
separate commands dealing with the verbose verifier log, so both should
be tested separately.

Test that log_buf==NULL condition *does not* return -ENOSPC.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
 .../selftests/bpf/prog_tests/verifier_log.c   | 188 ++++++++++++++++++
 1 file changed, 188 insertions(+)

diff --git a/tools/testing/selftests/bpf/prog_tests/verifier_log.c b/tools/testing/selftests/bpf/prog_tests/verifier_log.c
index 9ae0ac6e3b25..475092a78deb 100644
--- a/tools/testing/selftests/bpf/prog_tests/verifier_log.c
+++ b/tools/testing/selftests/bpf/prog_tests/verifier_log.c
@@ -249,6 +249,190 @@ static void verif_log_subtest(const char *name, bool expect_load_error, int log_
 	test_log_buf__destroy(skel);
 }
 
+static const void *btf_data;
+static u32 btf_data_sz;
+
+static int load_btf(struct bpf_btf_load_opts *opts, bool expect_err)
+{
+	int fd;
+
+	fd = bpf_btf_load(btf_data, btf_data_sz, opts);
+	if (fd >= 0)
+		close(fd);
+	if (expect_err)
+		ASSERT_LT(fd, 0, "btf_load_failure");
+	else /* !expect_err */
+		ASSERT_GT(fd, 0, "btf_load_success");
+	return fd;
+}
+
+static void verif_btf_log_subtest(bool bad_btf)
+{
+	LIBBPF_OPTS(bpf_btf_load_opts, opts);
+	struct btf *btf;
+	struct btf_type *t;
+	char *exp_log, op_name[32];
+	size_t fixed_log_sz;
+	__u32 log_true_sz_fixed, log_true_sz_rolling;
+	int i, res;
+
+	/* prepare simple BTF contents */
+	btf = btf__new_empty();
+	if (!ASSERT_OK_PTR(btf, "btf_new_empty"))
+		return;
+	res = btf__add_int(btf, "whatever", 4, 0);
+	if (!ASSERT_GT(res, 0, "btf_add_int_id"))
+		goto cleanup;
+	if (bad_btf) {
+		/* btf__add_int() doesn't allow bad value of size, so we'll just
+		 * force-cast btf_type pointer and manually override size to invalid
+		 * 3 if we need to simulate failure
+		 */
+		t = (void *)btf__type_by_id(btf, res);
+		if (!ASSERT_OK_PTR(t, "int_btf_type"))
+			goto cleanup;
+		t->size = 3;
+	}
+
+	btf_data = btf__raw_data(btf, &btf_data_sz);
+	if (!ASSERT_OK_PTR(btf_data, "btf_data"))
+		goto cleanup;
+
+	load_btf(&opts, bad_btf);
+
+	opts.log_buf = logs.reference;
+	opts.log_size = sizeof(logs.reference);
+	opts.log_level = 1 | 8 /* BPF_LOG_FIXED */;
+	load_btf(&opts, bad_btf);
+
+	fixed_log_sz = strlen(logs.reference) + 1;
+	if (!ASSERT_GT(fixed_log_sz, 50, "fixed_log_sz"))
+		goto cleanup;
+	memset(logs.reference + fixed_log_sz, 0, sizeof(logs.reference) - fixed_log_sz);
+
+	/* validate BPF_LOG_FIXED truncation works as verifier log used to work */
+	opts.log_buf = logs.buf;
+	opts.log_level = 1 | 8; /* fixed-length log */
+	opts.log_size = 25;
+	res = load_btf(&opts, true);
+	ASSERT_EQ(res, -ENOSPC, "half_log_fd");
+	ASSERT_EQ(strlen(logs.buf), 24, "log_fixed_25");
+	ASSERT_STRNEQ(logs.buf, logs.reference, 24, op_name);
+
+	/* validate rolling verifier log logic: try all variations of log buf
+	 * length to force various truncation scenarios
+	 */
+	opts.log_buf = logs.buf;
+	opts.log_level = 1; /* rolling log */
+
+	/* prefill logs.buf with 'A's to detect any write beyond allowed length */
+	memset(logs.filler, 'A', sizeof(logs.filler));
+	logs.filler[sizeof(logs.filler) - 1] = '\0';
+	memset(logs.buf, 'A', sizeof(logs.buf));
+	logs.buf[sizeof(logs.buf) - 1] = '\0';
+
+	for (i = 1; i < fixed_log_sz; i++) {
+		opts.log_size = i;
+
+		snprintf(op_name, sizeof(op_name), "log_roll_btf_load_%d", i);
+		res = load_btf(&opts, true);
+		if (!ASSERT_EQ(res, -ENOSPC, op_name))
+			goto cleanup;
+
+		exp_log = logs.reference + fixed_log_sz - i;
+		snprintf(op_name, sizeof(op_name), "log_roll_contents_%d", i);
+		if (!ASSERT_STREQ(logs.buf, exp_log, op_name)) {
+			printf("CMP:%d\nS1:'%s'\nS2:'%s'\n",
+				strcmp(logs.buf, exp_log),
+				logs.buf, exp_log);
+			goto cleanup;
+		}
+
+		/* check that unused portions of logs.buf are not overwritten */
+		snprintf(op_name, sizeof(op_name), "log_roll_unused_tail_%d", i);
+		if (!ASSERT_STREQ(logs.buf + i, logs.filler + i, op_name)) {
+			printf("CMP:%d\nS1:'%s'\nS2:'%s'\n",
+				strcmp(logs.buf + i, logs.filler + i),
+				logs.buf + i, logs.filler + i);
+			goto cleanup;
+		}
+	}
+
+	/* (FIXED) get actual log size */
+	opts.log_buf = logs.buf;
+	opts.log_level = 1 | 8; /* BPF_LOG_FIXED */
+	opts.log_size = sizeof(logs.buf);
+	opts.log_true_size = 0;
+	res = load_btf(&opts, bad_btf);
+	ASSERT_NEQ(res, -ENOSPC, "btf_load_res_fixed");
+
+	log_true_sz_fixed = opts.log_true_size;
+	ASSERT_GT(log_true_sz_fixed, 0, "log_true_sz_fixed");
+
+	/* (FIXED, NULL) get actual log size */
+	opts.log_buf = NULL;
+	opts.log_level = 1 | 8; /* BPF_LOG_FIXED */
+	opts.log_size = 0;
+	opts.log_true_size = 0;
+	res = load_btf(&opts, bad_btf);
+	ASSERT_NEQ(res, -ENOSPC, "btf_load_res_fixed_null");
+	ASSERT_EQ(opts.log_true_size, log_true_sz_fixed, "log_sz_fixed_null_eq");
+
+	/* (ROLLING) get actual log size */
+	opts.log_buf = logs.buf;
+	opts.log_level = 1;
+	opts.log_size = sizeof(logs.buf);
+	opts.log_true_size = 0;
+	res = load_btf(&opts, bad_btf);
+	ASSERT_NEQ(res, -ENOSPC, "btf_load_res_rolling");
+
+	log_true_sz_rolling = opts.log_true_size;
+	ASSERT_EQ(log_true_sz_rolling, log_true_sz_fixed, "log_true_sz_eq");
+
+	/* (ROLLING, NULL) get actual log size */
+	opts.log_buf = NULL;
+	opts.log_level = 1;
+	opts.log_size = 0;
+	opts.log_true_size = 0;
+	res = load_btf(&opts, bad_btf);
+	ASSERT_NEQ(res, -ENOSPC, "btf_load_res_rolling_null");
+	ASSERT_EQ(opts.log_true_size, log_true_sz_rolling, "log_true_sz_null_eq");
+
+	/* (FIXED) expect -ENOSPC for one byte short log */
+	opts.log_buf = logs.buf;
+	opts.log_level = 1 | 8; /* BPF_LOG_FIXED */
+	opts.log_size = log_true_sz_fixed - 1;
+	opts.log_true_size = 0;
+	res = load_btf(&opts, true);
+	ASSERT_EQ(res, -ENOSPC, "btf_load_res_too_short_fixed");
+
+	/* (FIXED) expect *not* -ENOSPC with exact log_true_size buffer */
+	opts.log_buf = logs.buf;
+	opts.log_level = 1 | 8; /* BPF_LOG_FIXED */
+	opts.log_size = log_true_sz_fixed;
+	opts.log_true_size = 0;
+	res = load_btf(&opts, bad_btf);
+	ASSERT_NEQ(res, -ENOSPC, "btf_load_res_just_right_fixed");
+
+	/* (ROLLING) expect -ENOSPC for one byte short log */
+	opts.log_buf = logs.buf;
+	opts.log_level = 1;
+	opts.log_size = log_true_sz_rolling - 1;
+	res = load_btf(&opts, true);
+	ASSERT_EQ(res, -ENOSPC, "btf_load_res_too_short_rolling");
+
+	/* (ROLLING) expect *not* -ENOSPC with exact log_true_size buffer */
+	opts.log_buf = logs.buf;
+	opts.log_level = 1;
+	opts.log_size = log_true_sz_rolling;
+	opts.log_true_size = 0;
+	res = load_btf(&opts, bad_btf);
+	ASSERT_NEQ(res, -ENOSPC, "btf_load_res_just_right_rolling");
+
+cleanup:
+	btf__free(btf);
+}
+
 void test_verifier_log(void)
 {
 	if (test__start_subtest("good_prog-level1"))
@@ -259,4 +443,8 @@ void test_verifier_log(void)
 		verif_log_subtest("bad_prog", true, 1);
 	if (test__start_subtest("bad_prog-level2"))
 		verif_log_subtest("bad_prog", true, 2);
+	if (test__start_subtest("bad_btf"))
+		verif_btf_log_subtest(true /* bad btf */);
+	if (test__start_subtest("good_btf"))
+		verif_btf_log_subtest(false /* !bad btf */);
 }
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 33+ messages in thread

* Re: [PATCH v4 bpf-next 03/19] bpf: switch BPF verifier log to be a rotating log by default
  2023-04-06 23:41 ` [PATCH v4 bpf-next 03/19] bpf: switch BPF verifier log to be a rotating log by default Andrii Nakryiko
@ 2023-04-11 10:28   ` Lorenz Bauer
  0 siblings, 0 replies; 33+ messages in thread
From: Lorenz Bauer @ 2023-04-11 10:28 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: bpf, ast, daniel, martin.lau, timo, robin.goegge, kernel-team

On Fri, Apr 7, 2023 at 12:42 AM Andrii Nakryiko <andrii@kernel.org> wrote:
>
> Currently, if user-supplied log buffer to collect BPF verifier log turns
> out to be too small to contain full log, bpf() syscall returns -ENOSPC,
> fails BPF program verification/load, and preserves first N-1 bytes of
> the verifier log (where N is the size of user-supplied buffer).
>
> This is problematic in a bunch of common scenarios, especially when
> working with real-world BPF programs that tend to be pretty complex as
> far as verification goes and require big log buffers. Typically, it's
> when debugging tricky cases at log level 2 (verbose). Also, when BPF program
> is successfully validated, log level 2 is the only way to actually see
> verifier state progression and all the important details.
>
> Even with log level 1, it's possible to get -ENOSPC even if the final
> verifier log fits in log buffer, if there is a code path that's deep
> enough to fill up entire log, even if normally it would be reset later
> on (there is a logic to chop off successfully validated portions of BPF
> verifier log).
>
> In short, it's not always possible to pre-size log buffer. Also, what's
> worse, in practice, the end of the log most often is way more important
> than the beginning, but verifier stops emitting log as soon as initial
> log buffer is filled up.
>
> This patch switches BPF verifier log behavior to effectively behave as
> rotating log. That is, if user-supplied log buffer turns out to be too
> short, verifier will keep overwriting previously written log,
> effectively treating user's log buffer as a ring buffer. -ENOSPC is
> still going to be returned at the end, to notify user that log contents
> was truncated, but the important last N bytes of the log would be
> returned, which might be all that user really needs. This consistent
> -ENOSPC behavior, regardless of rotating or fixed log behavior, allows
> to prevent backwards compatibility breakage. The only user-visible
> change is which portion of verifier log user ends up seeing *if buffer
> is too small*. Given contents of verifier log itself is not an ABI,
> there is no breakage due to this behavior change. Specialized tools that
> rely on specific contents of verifier log in -ENOSPC scenario are
> expected to be easily adapted to accommodate old and new behaviors.
>
> Importantly, though, to preserve good user experience and not require
> every user-space application to adopt to this new behavior, before
> exiting to user-space verifier will rotate log (in place) to make it
> start at the very beginning of user buffer as a continuous
> zero-terminated string. The contents will be a chopped off N-1 last
> bytes of full verifier log, of course.
>
> Given beginning of log is sometimes important as well, we add
> BPF_LOG_FIXED (which equals 8) flag to force old behavior, which allows
> tools like veristat to request first part of verifier log, if necessary.
> BPF_LOG_FIXED flag is also a simple and straightforward way to check if
> BPF verifier supports rotating behavior.
>
> On the implementation side, conceptually, it's all simple. We maintain
> 64-bit logical start and end positions. If we need to truncate the log,
> start position will be adjusted accordingly to lag end position by
> N bytes. We then use those logical positions to calculate their matching
> actual positions in user buffer and handle wrap around the end of the
> buffer properly. Finally, right before returning from bpf_check(), we
> rotate user log buffer contents in-place as necessary, to make log
> contents contiguous. See comments in relevant functions for details.
>
> Signed-off-by: Andrii Nakryiko <andrii@kernel.org>

Reviewed-by: Lorenz Bauer <lmb@isovalent.com>

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v4 bpf-next 01/19] bpf: split off basic BPF verifier log into separate file
  2023-04-06 23:41 ` [PATCH v4 bpf-next 01/19] bpf: split off basic BPF verifier log into separate file Andrii Nakryiko
@ 2023-04-11 10:43   ` Lorenz Bauer
  0 siblings, 0 replies; 33+ messages in thread
From: Lorenz Bauer @ 2023-04-11 10:43 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: bpf, ast, daniel, martin.lau, timo, robin.goegge, kernel-team

On Fri, Apr 7, 2023 at 12:42 AM Andrii Nakryiko <andrii@kernel.org> wrote:
>
> kernel/bpf/verifier.c file is large and growing larger all the time. So
> it's good to start splitting off more or less self-contained parts into
> separate files to keep source code size somewhat under control.
>
> This patch is one step in this direction, moving some of the BPF verifier log
> routines into a separate kernel/bpf/log.c. Right now it's the most low-level
> and isolated routines to append data to log, reset log to previous
> position, etc. Eventually we could probably move verifier state
> printing logic here as well, but this patch doesn't attempt to do that
> yet.
>
> Subsequent patches will add more logic to verifier log management, so
> having basics in a separate file will make sure verifier.c doesn't grow
> more with new changes.
>
> Signed-off-by: Andrii Nakryiko <andrii@kernel.org>

Acked-by: Lorenz Bauer <lmb@isovalent.com>

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v4 bpf-next 04/19] libbpf: don't enforce unnecessary verifier log restrictions on libbpf side
  2023-04-06 23:41 ` [PATCH v4 bpf-next 04/19] libbpf: don't enforce unnecessary verifier log restrictions on libbpf side Andrii Nakryiko
@ 2023-04-11 10:44   ` Lorenz Bauer
  0 siblings, 0 replies; 33+ messages in thread
From: Lorenz Bauer @ 2023-04-11 10:44 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: bpf, ast, daniel, martin.lau, timo, robin.goegge, kernel-team

On Fri, Apr 7, 2023 at 12:42 AM Andrii Nakryiko <andrii@kernel.org> wrote:
>
> This basically prevents any forward compatibility. And we either way
> just return -EINVAL, which would otherwise be returned from bpf()
> syscall anyways.
>
> Similarly, drop enforcement of non-NULL log_buf when log_level > 0. This
> won't be true anymore soon.
>
> Signed-off-by: Andrii Nakryiko <andrii@kernel.org>

Acked-by: Lorenz Bauer <lmb@isovalent.com>

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v4 bpf-next 11/19] bpf: keep track of total log content size in both fixed and rolling modes
  2023-04-06 23:41 ` [PATCH v4 bpf-next 11/19] bpf: keep track of total log content size in both fixed and rolling modes Andrii Nakryiko
@ 2023-04-11 10:44   ` Lorenz Bauer
  0 siblings, 0 replies; 33+ messages in thread
From: Lorenz Bauer @ 2023-04-11 10:44 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: bpf, ast, daniel, martin.lau, timo, robin.goegge, kernel-team

On Fri, Apr 7, 2023 at 12:43 AM Andrii Nakryiko <andrii@kernel.org> wrote:
>
> Change how we do accounting in BPF_LOG_FIXED mode and adopt log->end_pos
> as the *logical* log position. This means that we can now go beyond the
> physical log buffer size and tell what the log buffer size should be to
> fit the entire log contents without -ENOSPC.
>
> To do this for BPF_LOG_FIXED mode, we need to remove the short-circuiting
> logic of not vsnprintf()'ing further log content once we've filled up the
> user-provided buffer, which is done by the bpf_verifier_log_needed() checks.
> We modify these checks to always keep going if log->level is non-zero
> (i.e., log is requested), even if log->ubuf was NULL'ed out due to
> copying data to user-space, or if entire log buffer is physically full.
> We adapt the bpf_verifier_vlog() routine to work correctly with the
> log->ubuf == NULL condition, performing log formatting into a temporary
> kernel buffer and doing all the necessary accounting, but just avoiding
> copying data out if the buffer is full or NULL'ed out.
>
> With these changes, it's now possible to do this sort of determination of
> log contents size in both BPF_LOG_FIXED and default rolling log mode.
> We need to keep in mind bpf_vlog_reset(), though, which shrinks log
> contents after successful verification of a particular code path. This
> log reset means that log->end_pos isn't always increasing, so to return
> back to users what should be the log buffer size to fit all log content
> without causing -ENOSPC even in the presence of log resetting, we need
> to keep maximum over "lifetime" of logging. We do this accounting in
> bpf_vlog_update_len_max() helper.
>
> A related and subtle aspect is that with this logical log->end_pos, even in
> BPF_LOG_FIXED mode we could temporarily "overflow" the buffer, but then reset
> it back with bpf_vlog_reset() to a position inside the user-supplied
> log_buf. In such a situation we still want to properly maintain the
> terminating zero. We will eventually return -ENOSPC even if final log
> buffer is small (we detect this through log->len_max check). This
> behavior is simpler to reason about and is consistent with current
> behavior of verifier log. Handling of this required a small addition to
> bpf_vlog_reset() logic to avoid doing put_user() beyond physical log
> buffer dimensions.
>
> Another issue to keep in mind is that we limit the log buffer size to a
> 32-bit value and keep the log length as a u32, but theoretically the
> verifier could produce a huge log stretching beyond 4GB. Instead of keeping
> (and later returning) a 64-bit log length, we cap it at UINT_MAX. The
> current UAPI makes it impossible to specify a log buffer size bigger than
> 4GB anyway, so we don't really lose anything here and keep everything
> consistently 32-bit
> in UAPI. This property will be utilized in next patch.
>
> Doing the same determination of maximum log buffer for rolling mode is
> trivial, as log->end_pos and log->start_pos are already logical
> positions, so there is nothing new there.
>
> These changes do incidentally fix one small issue with previous logging
> logic. Previously, if a user provided a log buffer of size N, and the actual
> log output was exactly N-1 bytes + terminating \0, kernel logic couldn't
> distinguish this condition from a log truncation scenario, which would end
> up with truncated log contents of N-1 bytes + terminating \0 as well.
>
> But now with log->end_pos being logical position that could go beyond
> actual log buffer size, we can distinguish these two conditions, which
> we do in this patch. This plays nicely with returning log_size_actual
> (implemented in UAPI in the next patch), as we can now guarantee that if
> user takes such log_size_actual and provides log buffer of that exact
> size, they will not get -ENOSPC in return.
>
> All in all, all these changes do conceptually unify fixed and rolling
> log modes much better, and allow a nice feature requested by users:
> knowing what should be the size of the buffer to avoid -ENOSPC.
>
> We'll plumb this through the UAPI and the code in the next patch.
>
> Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Lorenz Bauer <lmb@isovalent.com>

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v4 bpf-next 12/19] bpf: add log_true_size output field to return necessary log buffer size
  2023-04-06 23:41 ` [PATCH v4 bpf-next 12/19] bpf: add log_true_size output field to return necessary log buffer size Andrii Nakryiko
@ 2023-04-11 10:44   ` Lorenz Bauer
  0 siblings, 0 replies; 33+ messages in thread
From: Lorenz Bauer @ 2023-04-11 10:44 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: bpf, ast, daniel, martin.lau, timo, robin.goegge, kernel-team

On Fri, Apr 7, 2023 at 12:43 AM Andrii Nakryiko <andrii@kernel.org> wrote:
>
> Add output-only log_true_size and btf_log_true_size fields to the
> BPF_PROG_LOAD and BPF_BTF_LOAD commands, respectively. They will return
> the size of the log buffer necessary to fit all the log contents at the
> specified log_level. This is very useful for BPF loader libraries like
> libbpf to be able to size log buffer correctly, but could be used by
> users directly, if necessary, as well.
>
> This patch plumbs all this through the code, taking into account the actual
> bpf_attr size provided by the user to determine whether these new fields are
> expected by userspace. If they are, the kernel sets them on return.
>
> We refactor the btf_parse() function to accommodate this, moving attr and
> uattr handling inside it. The rest is very straightforward code, which
> is split from the logging accounting changes in the previous patch to
> make it simpler to review logic vs UAPI changes.
>
> Signed-off-by: Andrii Nakryiko <andrii@kernel.org>

Acked-by: Lorenz Bauer <lmb@isovalent.com>

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v4 bpf-next 14/19] bpf: relax log_buf NULL conditions when log_level>0 is requested
  2023-04-06 23:42 ` [PATCH v4 bpf-next 14/19] bpf: relax log_buf NULL conditions when log_level>0 is requested Andrii Nakryiko
@ 2023-04-11 10:44   ` Lorenz Bauer
  0 siblings, 0 replies; 33+ messages in thread
From: Lorenz Bauer @ 2023-04-11 10:44 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: bpf, ast, daniel, martin.lau, timo, robin.goegge, kernel-team

On Fri, Apr 7, 2023 at 12:42 AM Andrii Nakryiko <andrii@kernel.org> wrote:
>
> Drop the log_size>0 and log_buf!=NULL condition when log_level>0. This
> allows users to request log_true_size of a full log without providing
> actual (even if small) log buffer. Verifier log handling code was mostly
> ready to handle a NULL log->ubuf, so only a few small changes were necessary
> to prevent NULL log->ubuf from causing problems.
>
> Note that if the user provides a NULL log_buf with log_level>0, we don't
> consider this a log truncation, and thus won't return -ENOSPC.
>
> We also enforce that either (log_buf==NULL && log_size==0) or
> (log_buf!=NULL && log_size>0).
>
> Suggested-by: Lorenz Bauer <lmb@isovalent.com>
> Signed-off-by: Andrii Nakryiko <andrii@kernel.org>

Reviewed-by: Lorenz Bauer <lmb@isovalent.com>

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v4 bpf-next 17/19] selftests/bpf: add tests to validate log_true_size feature
  2023-04-06 23:42 ` [PATCH v4 bpf-next 17/19] selftests/bpf: add tests to validate log_true_size feature Andrii Nakryiko
@ 2023-04-11 10:44   ` Lorenz Bauer
  0 siblings, 0 replies; 33+ messages in thread
From: Lorenz Bauer @ 2023-04-11 10:44 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: bpf, ast, daniel, martin.lau, timo, robin.goegge, kernel-team

On Fri, Apr 7, 2023 at 12:43 AM Andrii Nakryiko <andrii@kernel.org> wrote:
>
> Add additional test cases validating that log_true_size is consistent
> between fixed and rotating log modes, and that log_true_size can be
> used *exactly* without causing -ENOSPC, while using a log buffer just 1
> byte shorter would cause -ENOSPC.
>
> Signed-off-by: Andrii Nakryiko <andrii@kernel.org>

Great idea to test the size - 1 case!

Acked-by: Lorenz Bauer <lmb@isovalent.com>

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v4 bpf-next 18/19] selftests/bpf: add testing of log_buf==NULL condition for BPF_PROG_LOAD
  2023-04-06 23:42 ` [PATCH v4 bpf-next 18/19] selftests/bpf: add testing of log_buf==NULL condition for BPF_PROG_LOAD Andrii Nakryiko
@ 2023-04-11 10:44   ` Lorenz Bauer
  0 siblings, 0 replies; 33+ messages in thread
From: Lorenz Bauer @ 2023-04-11 10:44 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: bpf, ast, daniel, martin.lau, timo, robin.goegge, kernel-team

On Fri, Apr 7, 2023 at 12:45 AM Andrii Nakryiko <andrii@kernel.org> wrote:
>
> Add a few extra test conditions to validate that it's ok to pass
> log_buf==NULL and log_size==0 to BPF_PROG_LOAD command with the intent
> to get log_true_size without providing a buffer.
>
> Test that log_buf==NULL condition *does not* return -ENOSPC.
>
> Signed-off-by: Andrii Nakryiko <andrii@kernel.org>

Acked-by: Lorenz Bauer <lmb@isovalent.com>

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v4 bpf-next 19/19] selftests/bpf: add verifier log tests for BPF_BTF_LOAD command
  2023-04-06 23:42 ` [PATCH v4 bpf-next 19/19] selftests/bpf: add verifier log tests for BPF_BTF_LOAD command Andrii Nakryiko
@ 2023-04-11 10:44   ` Lorenz Bauer
  0 siblings, 0 replies; 33+ messages in thread
From: Lorenz Bauer @ 2023-04-11 10:44 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: bpf, ast, daniel, martin.lau, timo, robin.goegge, kernel-team

On Fri, Apr 7, 2023 at 12:45 AM Andrii Nakryiko <andrii@kernel.org> wrote:
>
> Add verifier log tests for BPF_BTF_LOAD command, which are very similar,
> conceptually, to BPF_PROG_LOAD tests. These are two separate commands
> dealing with the verbose verifier log, so both should be tested separately.
>
> Test that log_buf==NULL condition *does not* return -ENOSPC.
>
> Signed-off-by: Andrii Nakryiko <andrii@kernel.org>

Acked-by: Lorenz Bauer <lmb@isovalent.com>

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v4 bpf-next 00/19] BPF verifier rotating log
  2023-04-06 23:41 [PATCH v4 bpf-next 00/19] BPF verifier rotating log Andrii Nakryiko
                   ` (18 preceding siblings ...)
  2023-04-06 23:42 ` [PATCH v4 bpf-next 19/19] selftests/bpf: add verifier log tests for BPF_BTF_LOAD command Andrii Nakryiko
@ 2023-04-11 14:03 ` Lorenz Bauer
  2023-04-11 16:10 ` patchwork-bot+netdevbpf
  2023-04-12 15:31 ` Lorenz Bauer
  21 siblings, 0 replies; 33+ messages in thread
From: Lorenz Bauer @ 2023-04-11 14:03 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: bpf, ast, daniel, martin.lau, timo, robin.goegge, kernel-team

On Fri, Apr 7, 2023 at 12:42 AM Andrii Nakryiko <andrii@kernel.org> wrote:
>
> This patch set changes the BPF verifier log to behave as a rotating log by
> default. If the user-supplied log buffer is big enough to contain the
> entire verifier log output, there is no effective difference. But where
> previously a user who supplied a too-small log buffer would get an -ENOSPC
> error and the beginning part of the verifier log, there is now no error and
> the user gets the ending part of the verifier log, filling up the
> user-supplied buffer. This is, in the absolute majority of cases, exactly
> what's useful, relevant, and what users want and need, as the ending of the
> verifier log contains details of the verifier failure and the relevant
> state that led to it. So this rotating mode is made the default, but for
> some niche advanced debugging scenarios it's possible to request the old
> behavior by specifying the additional BPF_LOG_FIXED (8) flag.

Thanks for your work, this is really nice!

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v4 bpf-next 00/19] BPF verifier rotating log
  2023-04-06 23:41 [PATCH v4 bpf-next 00/19] BPF verifier rotating log Andrii Nakryiko
                   ` (19 preceding siblings ...)
  2023-04-11 14:03 ` [PATCH v4 bpf-next 00/19] BPF verifier rotating log Lorenz Bauer
@ 2023-04-11 16:10 ` patchwork-bot+netdevbpf
  2023-04-12 15:31 ` Lorenz Bauer
  21 siblings, 0 replies; 33+ messages in thread
From: patchwork-bot+netdevbpf @ 2023-04-11 16:10 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: bpf, ast, daniel, martin.lau, lmb, timo, robin.goegge, kernel-team

Hello:

This series was applied to bpf/bpf-next.git (master)
by Daniel Borkmann <daniel@iogearbox.net>:

On Thu, 6 Apr 2023 16:41:46 -0700 you wrote:
> This patch set changes the BPF verifier log to behave as a rotating log by
> default. If the user-supplied log buffer is big enough to contain the
> entire verifier log output, there is no effective difference. But where
> previously a user who supplied a too-small log buffer would get an -ENOSPC
> error and the beginning part of the verifier log, there is now no error and
> the user gets the ending part of the verifier log, filling up the
> user-supplied buffer. This is, in the absolute majority of cases, exactly
> what's useful, relevant, and what users want and need, as the ending of the
> verifier log contains details of the verifier failure and the relevant
> state that led to it. So this rotating mode is made the default, but for
> some niche advanced debugging scenarios it's possible to request the old
> behavior by specifying the additional BPF_LOG_FIXED (8) flag.
> 
> [...]

Here is the summary with links:
  - [v4,bpf-next,01/19] bpf: split off basic BPF verifier log into separate file
    https://git.kernel.org/bpf/bpf-next/c/4294a0a7ab62
  - [v4,bpf-next,02/19] bpf: remove minimum size restrictions on verifier log buffer
    https://git.kernel.org/bpf/bpf-next/c/03cc3aa6a533
  - [v4,bpf-next,03/19] bpf: switch BPF verifier log to be a rotating log by default
    https://git.kernel.org/bpf/bpf-next/c/121664093803
  - [v4,bpf-next,04/19] libbpf: don't enforce unnecessary verifier log restrictions on libbpf side
    https://git.kernel.org/bpf/bpf-next/c/e0aee1facccf
  - [v4,bpf-next,05/19] veristat: add more veristat control over verifier log options
    https://git.kernel.org/bpf/bpf-next/c/d0d75c67c45a
  - [v4,bpf-next,06/19] selftests/bpf: add fixed vs rotating verifier log tests
    https://git.kernel.org/bpf/bpf-next/c/b1a7a480a112
  - [v4,bpf-next,07/19] bpf: ignore verifier log reset in BPF_LOG_KERNEL mode
    https://git.kernel.org/bpf/bpf-next/c/24bc80887adb
  - [v4,bpf-next,08/19] bpf: fix missing -EFAULT return on user log buf error in btf_parse()
    https://git.kernel.org/bpf/bpf-next/c/971fb5057d78
  - [v4,bpf-next,09/19] bpf: avoid incorrect -EFAULT error in BPF_LOG_KERNEL mode
    https://git.kernel.org/bpf/bpf-next/c/cbedb42a0da3
  - [v4,bpf-next,10/19] bpf: simplify logging-related error conditions handling
    https://git.kernel.org/bpf/bpf-next/c/8a6ca6bc553e
  - [v4,bpf-next,11/19] bpf: keep track of total log content size in both fixed and rolling modes
    https://git.kernel.org/bpf/bpf-next/c/fa1c7d5cc404
  - [v4,bpf-next,12/19] bpf: add log_true_size output field to return necessary log buffer size
    https://git.kernel.org/bpf/bpf-next/c/47a71c1f9af0
  - [v4,bpf-next,13/19] bpf: simplify internal verifier log interface
    https://git.kernel.org/bpf/bpf-next/c/bdcab4144f5d
  - [v4,bpf-next,14/19] bpf: relax log_buf NULL conditions when log_level>0 is requested
    https://git.kernel.org/bpf/bpf-next/c/fac08d45e253
  - [v4,bpf-next,15/19] libbpf: wire through log_true_size returned from kernel for BPF_PROG_LOAD
    https://git.kernel.org/bpf/bpf-next/c/94e55c0fdaf4
  - [v4,bpf-next,16/19] libbpf: wire through log_true_size for bpf_btf_load() API
    https://git.kernel.org/bpf/bpf-next/c/097d8002b754
  - [v4,bpf-next,17/19] selftests/bpf: add tests to validate log_true_size feature
    https://git.kernel.org/bpf/bpf-next/c/5787540827a9
  - [v4,bpf-next,18/19] selftests/bpf: add testing of log_buf==NULL condition for BPF_PROG_LOAD
    https://git.kernel.org/bpf/bpf-next/c/be983f44274f
  - [v4,bpf-next,19/19] selftests/bpf: add verifier log tests for BPF_BTF_LOAD command
    https://git.kernel.org/bpf/bpf-next/c/054b6c7866c7

You are awesome, thank you!
-- 
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html



^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v4 bpf-next 00/19] BPF verifier rotating log
  2023-04-06 23:41 [PATCH v4 bpf-next 00/19] BPF verifier rotating log Andrii Nakryiko
                   ` (20 preceding siblings ...)
  2023-04-11 16:10 ` patchwork-bot+netdevbpf
@ 2023-04-12 15:31 ` Lorenz Bauer
  2023-04-12 17:03   ` Andrii Nakryiko
  21 siblings, 1 reply; 33+ messages in thread
From: Lorenz Bauer @ 2023-04-12 15:31 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: bpf, ast, daniel, martin.lau, timo, robin.goegge, kernel-team

On Fri, Apr 7, 2023 at 12:42 AM Andrii Nakryiko <andrii@kernel.org> wrote:
>
> This patch set changes BPF verifier log behavior to behave as a rotating log,
> by default. If user-supplied log buffer is big enough to contain entire
> verifier log output, there is no effective difference. But where previously
> user supplied too small log buffer and would get -ENOSPC error result and the
> beginning part of the verifier log, now there will be no error and user will
> get ending part of verifier log filling up user-supplied log buffer.  Which
> is, in absolute majority of cases, is exactly what's useful, relevant, and
> what users want and need, as the ending of the verifier log is containing
> details of verifier failure and relevant state that got us to that failure. So
> this rotating mode is made default, but for some niche advanced debugging
> scenarios it's possible to request old behavior by specifying additional
> BPF_LOG_FIXED (8) flag.

I just ran selftest/bpf/test_verifier_log on top of bpf-next which now
fails with:

Test log_level 0...
Test log_size < 128...
ERROR: Program load returned: ret:-1/errno:28, expected ret:-1/errno:22

Seems like these tests are now superseded by test_progs?

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v4 bpf-next 00/19] BPF verifier rotating log
  2023-04-12 15:31 ` Lorenz Bauer
@ 2023-04-12 17:03   ` Andrii Nakryiko
  0 siblings, 0 replies; 33+ messages in thread
From: Andrii Nakryiko @ 2023-04-12 17:03 UTC (permalink / raw)
  To: Lorenz Bauer
  Cc: Andrii Nakryiko, bpf, ast, daniel, martin.lau, timo,
	robin.goegge, kernel-team

On Wed, Apr 12, 2023 at 8:31 AM Lorenz Bauer <lmb@isovalent.com> wrote:
>
> On Fri, Apr 7, 2023 at 12:42 AM Andrii Nakryiko <andrii@kernel.org> wrote:
> >
> > This patch set changes the BPF verifier log to behave as a rotating log by
> > default. If the user-supplied log buffer is big enough to contain the
> > entire verifier log output, there is no effective difference. But where
> > previously a user who supplied a too-small log buffer would get an -ENOSPC
> > error and the beginning part of the verifier log, there is now no error and
> > the user gets the ending part of the verifier log, filling up the
> > user-supplied buffer. This is, in the absolute majority of cases, exactly
> > what's useful, relevant, and what users want and need, as the ending of the
> > verifier log contains details of the verifier failure and the relevant
> > state that led to it. So this rotating mode is made the default, but for
> > some niche advanced debugging scenarios it's possible to request the old
> > behavior by specifying the additional BPF_LOG_FIXED (8) flag.
>
> I just ran selftest/bpf/test_verifier_log on top of bpf-next which now
> fails with:
>
> Test log_level 0...
> Test log_size < 128...
> ERROR: Program load returned: ret:-1/errno:28, expected ret:-1/errno:22
>
> Seems like these tests are now superseded by test_progs?


I didn't even know we had a separate test_verifier_log binary. It
seems like the newly added tests in test_progs indeed cover all the same
use cases. I'll send a patch to remove test_verifier_log.c. Thanks for
spotting this!

^ permalink raw reply	[flat|nested] 33+ messages in thread

end of thread, other threads:[~2023-04-12 17:04 UTC | newest]

Thread overview: 33+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-04-06 23:41 [PATCH v4 bpf-next 00/19] BPF verifier rotating log Andrii Nakryiko
2023-04-06 23:41 ` [PATCH v4 bpf-next 01/19] bpf: split off basic BPF verifier log into separate file Andrii Nakryiko
2023-04-11 10:43   ` Lorenz Bauer
2023-04-06 23:41 ` [PATCH v4 bpf-next 02/19] bpf: remove minimum size restrictions on verifier log buffer Andrii Nakryiko
2023-04-06 23:41 ` [PATCH v4 bpf-next 03/19] bpf: switch BPF verifier log to be a rotating log by default Andrii Nakryiko
2023-04-11 10:28   ` Lorenz Bauer
2023-04-06 23:41 ` [PATCH v4 bpf-next 04/19] libbpf: don't enforce unnecessary verifier log restrictions on libbpf side Andrii Nakryiko
2023-04-11 10:44   ` Lorenz Bauer
2023-04-06 23:41 ` [PATCH v4 bpf-next 05/19] veristat: add more veristat control over verifier log options Andrii Nakryiko
2023-04-06 23:41 ` [PATCH v4 bpf-next 06/19] selftests/bpf: add fixed vs rotating verifier log tests Andrii Nakryiko
2023-04-06 23:41 ` [PATCH v4 bpf-next 07/19] bpf: ignore verifier log reset in BPF_LOG_KERNEL mode Andrii Nakryiko
2023-04-06 23:41 ` [PATCH v4 bpf-next 08/19] bpf: fix missing -EFAULT return on user log buf error in btf_parse() Andrii Nakryiko
2023-04-06 23:41 ` [PATCH v4 bpf-next 09/19] bpf: avoid incorrect -EFAULT error in BPF_LOG_KERNEL mode Andrii Nakryiko
2023-04-06 23:41 ` [PATCH v4 bpf-next 10/19] bpf: simplify logging-related error conditions handling Andrii Nakryiko
2023-04-06 23:41 ` [PATCH v4 bpf-next 11/19] bpf: keep track of total log content size in both fixed and rolling modes Andrii Nakryiko
2023-04-11 10:44   ` Lorenz Bauer
2023-04-06 23:41 ` [PATCH v4 bpf-next 12/19] bpf: add log_true_size output field to return necessary log buffer size Andrii Nakryiko
2023-04-11 10:44   ` Lorenz Bauer
2023-04-06 23:41 ` [PATCH v4 bpf-next 13/19] bpf: simplify internal verifier log interface Andrii Nakryiko
2023-04-06 23:42 ` [PATCH v4 bpf-next 14/19] bpf: relax log_buf NULL conditions when log_level>0 is requested Andrii Nakryiko
2023-04-11 10:44   ` Lorenz Bauer
2023-04-06 23:42 ` [PATCH v4 bpf-next 15/19] libbpf: wire through log_true_size returned from kernel for BPF_PROG_LOAD Andrii Nakryiko
2023-04-06 23:42 ` [PATCH v4 bpf-next 16/19] libbpf: wire through log_true_size for bpf_btf_load() API Andrii Nakryiko
2023-04-06 23:42 ` [PATCH v4 bpf-next 17/19] selftests/bpf: add tests to validate log_true_size feature Andrii Nakryiko
2023-04-11 10:44   ` Lorenz Bauer
2023-04-06 23:42 ` [PATCH v4 bpf-next 18/19] selftests/bpf: add testing of log_buf==NULL condition for BPF_PROG_LOAD Andrii Nakryiko
2023-04-11 10:44   ` Lorenz Bauer
2023-04-06 23:42 ` [PATCH v4 bpf-next 19/19] selftests/bpf: add verifier log tests for BPF_BTF_LOAD command Andrii Nakryiko
2023-04-11 10:44   ` Lorenz Bauer
2023-04-11 14:03 ` [PATCH v4 bpf-next 00/19] BPF verifier rotating log Lorenz Bauer
2023-04-11 16:10 ` patchwork-bot+netdevbpf
2023-04-12 15:31 ` Lorenz Bauer
2023-04-12 17:03   ` Andrii Nakryiko

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).