* [PATCH bpf-next v4 0/6] Dynamic pointers
@ 2022-05-09 22:42 Joanne Koong
  2022-05-09 22:42 ` [PATCH bpf-next v4 1/6] bpf: Add MEM_UNINIT as a bpf_type_flag Joanne Koong
From: Joanne Koong @ 2022-05-09 22:42 UTC (permalink / raw)
  To: bpf; +Cc: andrii, ast, daniel, Joanne Koong

This patchset implements the basics of dynamic pointers in bpf.

A dynamic pointer (struct bpf_dynptr) is a pointer that stores extra metadata
alongside the address it points to. This abstraction is useful in bpf, given
that every memory access in a bpf program must be safe. The verifier and bpf
helper functions can use the metadata to enforce safety guarantees for things 
such as dynamically sized strings and kernel heap allocations.

From the program side, the bpf_dynptr is an opaque struct and the verifier
will enforce that its contents are never written to by the program.
It can only be written to through specific bpf helper functions.

There are several use cases for dynamic pointers in bpf programs, among
them: dynamically sized ringbuf reservations without any extra memcpys,
dynamic string parsing and memory comparisons, dynamic memory allocations
that can be persisted in a map, and dynamic parsing of sk_buff and xdp_md
packet data.

At a high-level, the patches are as follows:
1/6 - Adds MEM_UNINIT as a bpf_type_flag
2/6 - Adds verifier support for dynptrs and implements malloc dynptrs
3/6 - Adds dynptr support for ring buffers
4/6 - Adds bpf_dynptr_read and bpf_dynptr_write
5/6 - Adds dynptr data slices (ptr to underlying dynptr memory)
6/6 - Tests to check that verifier rejects certain fail cases and passes
certain success cases

This is the first dynptr patchset in a larger series. The next series of
patches will add persisting dynamic memory allocations in maps, parsing packet
data through dynptrs, convenience helpers for using dynptrs as iterators, and
more helper functions for interacting with strings and memory dynamically.

Changelog:
----------
v3 -> v4:
v3: https://lore.kernel.org/bpf/20220428211059.4065379-1-joannelkoong@gmail.com/

1/6 - Change mem ptr + size check to use more concise inequality expression
(David + Andrii) 
2/6 - Add check for meta->uninit_dynptr_regno not already set (Andrii)
      Move DYNPTR_TYPE_FLAG_MASK to include/linux/bpf.h (Andrii) 
3/6 - Remove four underscores for invoking BPF_CALL (Andrii)
      Add __BPF_TYPE_FLAG_MAX and use it for __BPF_TYPE_LAST_FLAG (Andrii)
4/6 - Fix capacity to be bpf_dynptr size value in check_off_len (Andrii)
      Change -EINVAL to -E2BIG if len + offset is out of bounds (Andrii)
5/6 - Add check for only 1 dynptr arg for dynptr data function (Andrii)
6/6 - For ringbuf map, set max_entries from userspace (Andrii)
      Use err ?: ... for interacting with dynptr APIs (Andrii)
      Define array_map2 for the add_dynptr_to_map2 test, where the value is
      a struct with an embedded dynptr
      Remove the ref id from the missing_put_callback message, since the
      ref id is not always 1 across different environments

v2 -> v3:
v2: https://lore.kernel.org/bpf/20220416063429.3314021-1-joannelkoong@gmail.com/

* Reorder patches (move ringbuffer patch to be right after the verifier +
  malloc dynptr patchset)
* Remove local type dynptrs (Andrii + Alexei)
* Mark stack slot as STACK_MISC after any writes into a dynptr instead of
  explicitly prohibiting writes (Alexei)
* Pass number of slots, not memory size to is_spi_bounds_valid (Kumar) 
* Check reference leaks by adding dynptr id to state->refs instead of checking
stack slots (Alexei)

v1 -> v2:
v1: https://lore.kernel.org/bpf/20220402015826.3941317-1-joannekoong@fb.com/

1/7 -
    * Remove ARG_PTR_TO_MAP_VALUE_UNINIT alias and use
      ARG_PTR_TO_MAP_VALUE | MEM_UNINIT directly (Andrii)
    * Drop arg_type_is_mem_ptr() wrapper function (Andrii)
2/7 - 
    * Change name from MEM_RELEASE to OBJ_RELEASE (Andrii)
    * Use meta.release_ref instead of ref_obj_id != 0 to determine whether
      to release reference (Kumar)
    * Drop type_is_release_mem() wrapper function (Andrii) 
3/7 -
    * Add checks for nested dynptrs edge-cases, which could lead to corrupt
      writes of the dynptr stack variable
    * Add u64 flags to bpf_dynptr_from_mem() and bpf_dynptr_alloc() (Andrii)
    * Rename from bpf_malloc/bpf_free to bpf_dynptr_alloc/bpf_dynptr_put
      (Alexei)
    * Support alloc flag __GFP_ZERO (Andrii) 
    * Reserve upper 8 bits in dynptr size and offset fields instead of
      reserving just the upper 4 bits (Andrii)
    * Allow dynptr zero-slices (Andrii) 
    * Use the highest bit for is_rdonly instead of the 28th bit (Andrii)
    * Rename check_* functions to is_* functions for better readability
      (Andrii)
    * Add comment for code that checks the spi bounds (Andrii)
4/7 -
    * Fix doc description for bpf_dynptr_read (Toke)
    * Move bpf_dynptr_check_off_len() from function patch 1 to here (Andrii)
5/7 -
    * When finding the id for the dynptr to associate the data slice with,
      look for dynptr arg instead of assuming it is BPF_REG_1.
6/7 -
    * Add __force when casting from unsigned long to void * (kernel test
      robot)
    * Expand on docs for ringbuf dynptr APIs (Andrii)
7/7 -
    * Use table approach for defining test programs and error messages
      (Andrii)
    * Print out full log if there’s an error (Andrii)
    * Use bpf_object__find_program_by_name() instead of specifying
      program name as a string (Andrii)
    * Add 6 extra cases: invalid_nested_dynptrs1, invalid_nested_dynptrs2,
      invalid_ref_mem1, invalid_ref_mem2, zero_slice_access,
      and test_alloc_zero_bytes
    * Add checking for edge cases (eg allocating with invalid flags)


Joanne Koong (6):
  bpf: Add MEM_UNINIT as a bpf_type_flag
  bpf: Add verifier support for dynptrs and implement malloc dynptrs
  bpf: Dynptr support for ring buffers
  bpf: Add bpf_dynptr_read and bpf_dynptr_write
  bpf: Add dynptr data slices
  bpf: Dynptr tests

 include/linux/bpf.h                           | 107 +++-
 include/linux/bpf_verifier.h                  |  21 +
 include/uapi/linux/bpf.h                      |  96 +++
 kernel/bpf/helpers.c                          | 169 ++++-
 kernel/bpf/ringbuf.c                          |  78 +++
 kernel/bpf/verifier.c                         | 337 +++++++++-
 scripts/bpf_doc.py                            |   2 +
 tools/include/uapi/linux/bpf.h                |  96 +++
 .../testing/selftests/bpf/prog_tests/dynptr.c | 136 ++++
 .../testing/selftests/bpf/progs/dynptr_fail.c | 582 ++++++++++++++++++
 .../selftests/bpf/progs/dynptr_success.c      | 206 +++++++
 11 files changed, 1791 insertions(+), 39 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/dynptr.c
 create mode 100644 tools/testing/selftests/bpf/progs/dynptr_fail.c
 create mode 100644 tools/testing/selftests/bpf/progs/dynptr_success.c

-- 
2.30.2



* [PATCH bpf-next v4 1/6] bpf: Add MEM_UNINIT as a bpf_type_flag
  2022-05-09 22:42 [PATCH bpf-next v4 0/6] Dynamic pointers Joanne Koong
@ 2022-05-09 22:42 ` Joanne Koong
  2022-05-13 14:11   ` David Vernet
  2022-05-09 22:42 ` [PATCH bpf-next v4 2/6] bpf: Add verifier support for dynptrs and implement malloc dynptrs Joanne Koong
From: Joanne Koong @ 2022-05-09 22:42 UTC (permalink / raw)
  To: bpf; +Cc: andrii, ast, daniel, Joanne Koong

Instead of having uninitialized versions of arguments as separate
bpf_arg_types (eg ARG_PTR_TO_UNINIT_MEM as the uninitialized version
of ARG_PTR_TO_MEM), we can instead use MEM_UNINIT as a bpf_type_flag
modifier to denote that the argument is uninitialized.

Doing so cleans up some of the logic in the verifier. We no longer
need to do two checks against an argument type (eg "if
(base_type(arg_type) == ARG_PTR_TO_MEM || base_type(arg_type) ==
ARG_PTR_TO_UNINIT_MEM)"), since uninitialized and initialized
versions of the same argument type will now share the same base type.

In the near future, MEM_UNINIT will be used by dynptr helper functions
as well.

Signed-off-by: Joanne Koong <joannelkoong@gmail.com>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
---
 include/linux/bpf.h   | 17 +++++++++--------
 kernel/bpf/helpers.c  |  4 ++--
 kernel/bpf/verifier.c | 28 ++++++++--------------------
 3 files changed, 19 insertions(+), 30 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index be94833d390a..d0c167865504 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -389,7 +389,9 @@ enum bpf_type_flag {
 	 */
 	PTR_UNTRUSTED		= BIT(6 + BPF_BASE_TYPE_BITS),
 
-	__BPF_TYPE_LAST_FLAG	= PTR_UNTRUSTED,
+	MEM_UNINIT		= BIT(7 + BPF_BASE_TYPE_BITS),
+
+	__BPF_TYPE_LAST_FLAG	= MEM_UNINIT,
 };
 
 /* Max number of base types. */
@@ -408,16 +410,11 @@ enum bpf_arg_type {
 	ARG_CONST_MAP_PTR,	/* const argument used as pointer to bpf_map */
 	ARG_PTR_TO_MAP_KEY,	/* pointer to stack used as map key */
 	ARG_PTR_TO_MAP_VALUE,	/* pointer to stack used as map value */
-	ARG_PTR_TO_UNINIT_MAP_VALUE,	/* pointer to valid memory used to store a map value */
 
-	/* the following constraints used to prototype bpf_memcmp() and other
-	 * functions that access data on eBPF program stack
+	/* Used to prototype bpf_memcmp() and other functions that access data
+	 * on eBPF program stack
 	 */
 	ARG_PTR_TO_MEM,		/* pointer to valid memory (stack, packet, map value) */
-	ARG_PTR_TO_UNINIT_MEM,	/* pointer to memory does not need to be initialized,
-				 * helper function must fill all bytes or clear
-				 * them in error case.
-				 */
 
 	ARG_CONST_SIZE,		/* number of bytes accessed from memory */
 	ARG_CONST_SIZE_OR_ZERO,	/* number of bytes accessed from memory or 0 */
@@ -449,6 +446,10 @@ enum bpf_arg_type {
 	ARG_PTR_TO_ALLOC_MEM_OR_NULL	= PTR_MAYBE_NULL | ARG_PTR_TO_ALLOC_MEM,
 	ARG_PTR_TO_STACK_OR_NULL	= PTR_MAYBE_NULL | ARG_PTR_TO_STACK,
 	ARG_PTR_TO_BTF_ID_OR_NULL	= PTR_MAYBE_NULL | ARG_PTR_TO_BTF_ID,
+	/* pointer to memory does not need to be initialized, helper function must fill
+	 * all bytes or clear them in error case.
+	 */
+	ARG_PTR_TO_UNINIT_MEM		= MEM_UNINIT | ARG_PTR_TO_MEM,
 
 	/* This must be the last entry. Its purpose is to ensure the enum is
 	 * wide enough to hold the higher bits reserved for bpf_type_flag.
diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index 3e709fed5306..8a2398ac14c2 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -103,7 +103,7 @@ const struct bpf_func_proto bpf_map_pop_elem_proto = {
 	.gpl_only	= false,
 	.ret_type	= RET_INTEGER,
 	.arg1_type	= ARG_CONST_MAP_PTR,
-	.arg2_type	= ARG_PTR_TO_UNINIT_MAP_VALUE,
+	.arg2_type	= ARG_PTR_TO_MAP_VALUE | MEM_UNINIT,
 };
 
 BPF_CALL_2(bpf_map_peek_elem, struct bpf_map *, map, void *, value)
@@ -116,7 +116,7 @@ const struct bpf_func_proto bpf_map_peek_elem_proto = {
 	.gpl_only	= false,
 	.ret_type	= RET_INTEGER,
 	.arg1_type	= ARG_CONST_MAP_PTR,
-	.arg2_type	= ARG_PTR_TO_UNINIT_MAP_VALUE,
+	.arg2_type	= ARG_PTR_TO_MAP_VALUE | MEM_UNINIT,
 };
 
 const struct bpf_func_proto bpf_get_prandom_u32_proto = {
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 813f6ee80419..0fe1dea520ae 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -5378,12 +5378,6 @@ static int process_kptr_func(struct bpf_verifier_env *env, int regno,
 	return 0;
 }
 
-static bool arg_type_is_mem_ptr(enum bpf_arg_type type)
-{
-	return base_type(type) == ARG_PTR_TO_MEM ||
-	       base_type(type) == ARG_PTR_TO_UNINIT_MEM;
-}
-
 static bool arg_type_is_mem_size(enum bpf_arg_type type)
 {
 	return type == ARG_CONST_SIZE ||
@@ -5523,7 +5517,6 @@ static const struct bpf_reg_types kptr_types = { .types = { PTR_TO_MAP_VALUE } }
 static const struct bpf_reg_types *compatible_reg_types[__BPF_ARG_TYPE_MAX] = {
 	[ARG_PTR_TO_MAP_KEY]		= &map_key_value_types,
 	[ARG_PTR_TO_MAP_VALUE]		= &map_key_value_types,
-	[ARG_PTR_TO_UNINIT_MAP_VALUE]	= &map_key_value_types,
 	[ARG_CONST_SIZE]		= &scalar_types,
 	[ARG_CONST_SIZE_OR_ZERO]	= &scalar_types,
 	[ARG_CONST_ALLOC_SIZE_OR_ZERO]	= &scalar_types,
@@ -5537,7 +5530,6 @@ static const struct bpf_reg_types *compatible_reg_types[__BPF_ARG_TYPE_MAX] = {
 	[ARG_PTR_TO_BTF_ID]		= &btf_ptr_types,
 	[ARG_PTR_TO_SPIN_LOCK]		= &spin_lock_types,
 	[ARG_PTR_TO_MEM]		= &mem_types,
-	[ARG_PTR_TO_UNINIT_MEM]		= &mem_types,
 	[ARG_PTR_TO_ALLOC_MEM]		= &alloc_mem_types,
 	[ARG_PTR_TO_INT]		= &int_ptr_types,
 	[ARG_PTR_TO_LONG]		= &int_ptr_types,
@@ -5711,8 +5703,7 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
 		return -EACCES;
 	}
 
-	if (base_type(arg_type) == ARG_PTR_TO_MAP_VALUE ||
-	    base_type(arg_type) == ARG_PTR_TO_UNINIT_MAP_VALUE) {
+	if (base_type(arg_type) == ARG_PTR_TO_MAP_VALUE) {
 		err = resolve_map_arg_type(env, meta, &arg_type);
 		if (err)
 			return err;
@@ -5798,8 +5789,7 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
 		err = check_helper_mem_access(env, regno,
 					      meta->map_ptr->key_size, false,
 					      NULL);
-	} else if (base_type(arg_type) == ARG_PTR_TO_MAP_VALUE ||
-		   base_type(arg_type) == ARG_PTR_TO_UNINIT_MAP_VALUE) {
+	} else if (base_type(arg_type) == ARG_PTR_TO_MAP_VALUE) {
 		if (type_may_be_null(arg_type) && register_is_null(reg))
 			return 0;
 
@@ -5811,7 +5801,7 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
 			verbose(env, "invalid map_ptr to access map->value\n");
 			return -EACCES;
 		}
-		meta->raw_mode = (arg_type == ARG_PTR_TO_UNINIT_MAP_VALUE);
+		meta->raw_mode = arg_type & MEM_UNINIT;
 		err = check_helper_mem_access(env, regno,
 					      meta->map_ptr->value_size, false,
 					      meta);
@@ -5838,11 +5828,11 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
 			return -EACCES;
 	} else if (arg_type == ARG_PTR_TO_FUNC) {
 		meta->subprogno = reg->subprogno;
-	} else if (arg_type_is_mem_ptr(arg_type)) {
+	} else if (base_type(arg_type) == ARG_PTR_TO_MEM) {
 		/* The access to this pointer is only checked when we hit the
 		 * next is_mem_size argument below.
 		 */
-		meta->raw_mode = (arg_type == ARG_PTR_TO_UNINIT_MEM);
+		meta->raw_mode = arg_type & MEM_UNINIT;
 	} else if (arg_type_is_mem_size(arg_type)) {
 		bool zero_size_allowed = (arg_type == ARG_CONST_SIZE_OR_ZERO);
 
@@ -6189,10 +6179,8 @@ static bool check_raw_mode_ok(const struct bpf_func_proto *fn)
 static bool check_args_pair_invalid(enum bpf_arg_type arg_curr,
 				    enum bpf_arg_type arg_next)
 {
-	return (arg_type_is_mem_ptr(arg_curr) &&
-	        !arg_type_is_mem_size(arg_next)) ||
-	       (!arg_type_is_mem_ptr(arg_curr) &&
-		arg_type_is_mem_size(arg_next));
+	return (base_type(arg_curr) == ARG_PTR_TO_MEM) !=
+		arg_type_is_mem_size(arg_next);
 }
 
 static bool check_arg_pair_ok(const struct bpf_func_proto *fn)
@@ -6203,7 +6191,7 @@ static bool check_arg_pair_ok(const struct bpf_func_proto *fn)
 	 * helper function specification.
 	 */
 	if (arg_type_is_mem_size(fn->arg1_type) ||
-	    arg_type_is_mem_ptr(fn->arg5_type)  ||
+	    base_type(fn->arg5_type) == ARG_PTR_TO_MEM ||
 	    check_args_pair_invalid(fn->arg1_type, fn->arg2_type) ||
 	    check_args_pair_invalid(fn->arg2_type, fn->arg3_type) ||
 	    check_args_pair_invalid(fn->arg3_type, fn->arg4_type) ||
-- 
2.30.2



* [PATCH bpf-next v4 2/6] bpf: Add verifier support for dynptrs and implement malloc dynptrs
  2022-05-09 22:42 [PATCH bpf-next v4 0/6] Dynamic pointers Joanne Koong
  2022-05-09 22:42 ` [PATCH bpf-next v4 1/6] bpf: Add MEM_UNINIT as a bpf_type_flag Joanne Koong
@ 2022-05-09 22:42 ` Joanne Koong
  2022-05-12  0:05   ` Daniel Borkmann
  2022-05-09 22:42 ` [PATCH bpf-next v4 3/6] bpf: Dynptr support for ring buffers Joanne Koong
From: Joanne Koong @ 2022-05-09 22:42 UTC (permalink / raw)
  To: bpf; +Cc: andrii, ast, daniel, Joanne Koong

This patch adds the bulk of the verifier work for supporting dynamic
pointers (dynptrs) in bpf. This patch implements malloc-type dynptrs
through 2 new APIs (bpf_dynptr_alloc and bpf_dynptr_put) that can be
called by a bpf program. Malloc-type dynptrs are dynptrs that dynamically
allocate memory on behalf of the program.

A bpf_dynptr is opaque to the bpf program. It is a 16-byte structure
defined internally as:

struct bpf_dynptr_kern {
    void *data;
    u32 size;
    u32 offset;
} __aligned(8);

The upper 8 bits of *size* are reserved (they contain extra metadata about
the read-only status and the dynptr type); consequently, a dynptr only
supports memory sizes of less than 16 MB.

The 2 new APIs for malloc-type dynptrs are:

long bpf_dynptr_alloc(u32 size, u64 flags, struct bpf_dynptr *ptr);
void bpf_dynptr_put(struct bpf_dynptr *ptr);

Please note that there *must* be a corresponding bpf_dynptr_put for
every bpf_dynptr_alloc (even if the alloc fails). This is enforced
by the verifier.

In the verifier, dynptr state information is tracked in stack slots. When
the program passes in an uninitialized dynptr
(ARG_PTR_TO_DYNPTR | MEM_UNINIT), the stack slots where the dynptr resides
are marked STACK_DYNPTR.

For helper functions that take in initialized dynptrs (eg
bpf_dynptr_read and bpf_dynptr_write, which are added later in this
patchset), the verifier enforces that the dynptr has been initialized
properly by checking that their corresponding stack slots have been marked
as STACK_DYNPTR. Dynptr release functions (eg bpf_dynptr_put) will clear
the stack slots. The verifier enforces at program exit that there are no
referenced dynptrs that haven't been released.

Signed-off-by: Joanne Koong <joannelkoong@gmail.com>
---
 include/linux/bpf.h            |  62 ++++++++-
 include/linux/bpf_verifier.h   |  21 +++
 include/uapi/linux/bpf.h       |  30 +++++
 kernel/bpf/helpers.c           |  75 +++++++++++
 kernel/bpf/verifier.c          | 228 ++++++++++++++++++++++++++++++++-
 scripts/bpf_doc.py             |   2 +
 tools/include/uapi/linux/bpf.h |  30 +++++
 7 files changed, 445 insertions(+), 3 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index d0c167865504..e078b8a911fe 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -391,9 +391,14 @@ enum bpf_type_flag {
 
 	MEM_UNINIT		= BIT(7 + BPF_BASE_TYPE_BITS),
 
-	__BPF_TYPE_LAST_FLAG	= MEM_UNINIT,
+	/* DYNPTR points to dynamically allocated memory. */
+	DYNPTR_TYPE_MALLOC	= BIT(8 + BPF_BASE_TYPE_BITS),
+
+	__BPF_TYPE_LAST_FLAG	= DYNPTR_TYPE_MALLOC,
 };
 
+#define DYNPTR_TYPE_FLAG_MASK	DYNPTR_TYPE_MALLOC
+
 /* Max number of base types. */
 #define BPF_BASE_TYPE_LIMIT	(1UL << BPF_BASE_TYPE_BITS)
 
@@ -436,6 +441,7 @@ enum bpf_arg_type {
 	ARG_PTR_TO_CONST_STR,	/* pointer to a null terminated read-only string */
 	ARG_PTR_TO_TIMER,	/* pointer to bpf_timer */
 	ARG_PTR_TO_KPTR,	/* pointer to referenced kptr */
+	ARG_PTR_TO_DYNPTR,      /* pointer to bpf_dynptr. See bpf_type_flag for dynptr type */
 	__BPF_ARG_TYPE_MAX,
 
 	/* Extended arg_types. */
@@ -2347,4 +2353,58 @@ int bpf_bprintf_prepare(char *fmt, u32 fmt_size, const u64 *raw_args,
 			u32 **bin_buf, u32 num_args);
 void bpf_bprintf_cleanup(void);
 
+/* the implementation of the opaque uapi struct bpf_dynptr */
+struct bpf_dynptr_kern {
+	void *data;
+	/* Size represents the number of usable bytes in the dynptr.
+	 * If for example the offset is at 200 for a malloc dynptr with
+	 * allocation size 256, the number of usable bytes is 56.
+	 *
+	 * The upper 8 bits are reserved.
+	 * Bit 31 denotes whether the dynptr is read-only.
+	 * Bits 28-30 denote the dynptr type.
+	 */
+	u32 size;
+	u32 offset;
+} __aligned(8);
+
+enum bpf_dynptr_type {
+	BPF_DYNPTR_TYPE_INVALID,
+	/* Memory allocated dynamically by the kernel for the dynptr */
+	BPF_DYNPTR_TYPE_MALLOC,
+};
+
+/* Since the upper 8 bits of dynptr->size is reserved, the
+ * maximum supported size is 2^24 - 1.
+ */
+#define DYNPTR_MAX_SIZE	((1UL << 24) - 1)
+#define DYNPTR_SIZE_MASK	0xFFFFFF
+#define DYNPTR_TYPE_SHIFT	28
+#define DYNPTR_TYPE_MASK	0x7
+
+static inline enum bpf_dynptr_type bpf_dynptr_get_type(struct bpf_dynptr_kern *ptr)
+{
+	return (ptr->size >> DYNPTR_TYPE_SHIFT) & DYNPTR_TYPE_MASK;
+}
+
+static inline void bpf_dynptr_set_type(struct bpf_dynptr_kern *ptr, enum bpf_dynptr_type type)
+{
+	ptr->size |= type << DYNPTR_TYPE_SHIFT;
+}
+
+static inline u32 bpf_dynptr_get_size(struct bpf_dynptr_kern *ptr)
+{
+	return ptr->size & DYNPTR_SIZE_MASK;
+}
+
+static inline int bpf_dynptr_check_size(u32 size)
+{
+	return size > DYNPTR_MAX_SIZE ? -E2BIG : 0;
+}
+
+void bpf_dynptr_init(struct bpf_dynptr_kern *ptr, void *data, enum bpf_dynptr_type type,
+		     u32 offset, u32 size);
+
+void bpf_dynptr_set_null(struct bpf_dynptr_kern *ptr);
+
 #endif /* _LINUX_BPF_H */
diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index 1f1e7f2ea967..830a0e11ae97 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -72,6 +72,18 @@ struct bpf_reg_state {
 
 		u32 mem_size; /* for PTR_TO_MEM | PTR_TO_MEM_OR_NULL */
 
+		/* For dynptr stack slots */
+		struct {
+			enum bpf_dynptr_type type;
+			/* A dynptr is 16 bytes so it takes up 2 stack slots.
+			 * We need to track which slot is the first slot
+			 * to protect against cases where the user may try to
+			 * pass in an address starting at the second slot of the
+			 * dynptr.
+			 */
+			bool first_slot;
+		} dynptr;
+
 		/* Max size from any of the above. */
 		struct {
 			unsigned long raw1;
@@ -88,6 +100,8 @@ struct bpf_reg_state {
 	 * for the purpose of tracking that it's freed.
 	 * For PTR_TO_SOCKET this is used to share which pointers retain the
 	 * same reference to the socket, to determine proper reference freeing.
+	 * For stack slots that are dynptrs, this is used to track references to
+	 * the dynptr to determine proper reference freeing.
 	 */
 	u32 id;
 	/* PTR_TO_SOCKET and PTR_TO_TCP_SOCK could be a ptr returned
@@ -174,9 +188,16 @@ enum bpf_stack_slot_type {
 	STACK_SPILL,      /* register spilled into stack */
 	STACK_MISC,	  /* BPF program wrote some data into this slot */
 	STACK_ZERO,	  /* BPF program wrote constant zero */
+	/* A dynptr is stored in this stack slot. The type of dynptr
+	 * is stored in bpf_stack_state->spilled_ptr.dynptr.type
+	 */
+	STACK_DYNPTR,
 };
 
 #define BPF_REG_SIZE 8	/* size of eBPF register in bytes */
+/* size of a struct bpf_dynptr in bytes */
+#define BPF_DYNPTR_SIZE sizeof(struct bpf_dynptr_kern)
+#define BPF_DYNPTR_NR_SLOTS (BPF_DYNPTR_SIZE / BPF_REG_SIZE)
 
 struct bpf_stack_state {
 	struct bpf_reg_state spilled_ptr;
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 444fe6f1cf35..5a87ed654016 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -5154,6 +5154,29 @@ union bpf_attr {
  *		if not NULL, is a reference which must be released using its
  *		corresponding release function, or moved into a BPF map before
  *		program exit.
+ *
+ * long bpf_dynptr_alloc(u32 size, u64 flags, struct bpf_dynptr *ptr)
+ *	Description
+ *		Allocate memory of *size* bytes.
+ *
+ *		Every call to bpf_dynptr_alloc must have a corresponding
+ *		bpf_dynptr_put, regardless of whether the bpf_dynptr_alloc
+ *		succeeded.
+ *
+ *		The maximum *size* supported is DYNPTR_MAX_SIZE.
+ *		Supported *flags* are __GFP_ZERO.
+ *	Return
+ *		0 on success, -ENOMEM if there is not enough memory for the
+ *		allocation, -E2BIG if the size exceeds DYNPTR_MAX_SIZE, -EINVAL
+ *		if the flags is not supported.
+ *
+ * void bpf_dynptr_put(struct bpf_dynptr *ptr)
+ *	Description
+ *		Free memory allocated by bpf_dynptr_alloc.
+ *
+ *		After this operation, *ptr* will be an invalidated dynptr.
+ *	Return
+ *		Void.
  */
 #define __BPF_FUNC_MAPPER(FN)		\
 	FN(unspec),			\
@@ -5351,6 +5374,8 @@ union bpf_attr {
 	FN(skb_set_tstamp),		\
 	FN(ima_file_hash),		\
 	FN(kptr_xchg),			\
+	FN(dynptr_alloc),		\
+	FN(dynptr_put),			\
 	/* */
 
 /* integer value in 'imm' field of BPF_CALL instruction selects which helper
@@ -6498,6 +6523,11 @@ struct bpf_timer {
 	__u64 :64;
 } __attribute__((aligned(8)));
 
+struct bpf_dynptr {
+	__u64 :64;
+	__u64 :64;
+} __attribute__((aligned(8)));
+
 struct bpf_sysctl {
 	__u32	write;		/* Sysctl is being read (= 0) or written (= 1).
 				 * Allows 1,2,4-byte read, but no write.
diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index 8a2398ac14c2..a4272e9239ea 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -1396,6 +1396,77 @@ const struct bpf_func_proto bpf_kptr_xchg_proto = {
 	.arg2_btf_id  = BPF_PTR_POISON,
 };
 
+void bpf_dynptr_init(struct bpf_dynptr_kern *ptr, void *data, enum bpf_dynptr_type type,
+		     u32 offset, u32 size)
+{
+	ptr->data = data;
+	ptr->offset = offset;
+	ptr->size = size;
+	bpf_dynptr_set_type(ptr, type);
+}
+
+void bpf_dynptr_set_null(struct bpf_dynptr_kern *ptr)
+{
+	memset(ptr, 0, sizeof(*ptr));
+}
+
+BPF_CALL_3(bpf_dynptr_alloc, u32, size, u64, flags, struct bpf_dynptr_kern *, ptr)
+{
+	gfp_t gfp_flags = GFP_ATOMIC;
+	void *data;
+	int err;
+
+	err = bpf_dynptr_check_size(size);
+	if (err)
+		goto error;
+
+	if (flags) {
+		if (flags == __GFP_ZERO) {
+			gfp_flags |= flags;
+		} else {
+			err = -EINVAL;
+			goto error;
+		}
+	}
+
+	data = kmalloc(size, gfp_flags);
+	if (!data) {
+		err = -ENOMEM;
+		goto error;
+	}
+
+	bpf_dynptr_init(ptr, data, BPF_DYNPTR_TYPE_MALLOC, 0, size);
+
+	return 0;
+
+error:
+	bpf_dynptr_set_null(ptr);
+	return err;
+}
+
+const struct bpf_func_proto bpf_dynptr_alloc_proto = {
+	.func		= bpf_dynptr_alloc,
+	.gpl_only	= false,
+	.ret_type	= RET_INTEGER,
+	.arg1_type	= ARG_ANYTHING,
+	.arg2_type	= ARG_ANYTHING,
+	.arg3_type	= ARG_PTR_TO_DYNPTR | DYNPTR_TYPE_MALLOC | MEM_UNINIT,
+};
+
+BPF_CALL_1(bpf_dynptr_put, struct bpf_dynptr_kern *, dynptr)
+{
+	kfree(dynptr->data);
+	bpf_dynptr_set_null(dynptr);
+	return 0;
+}
+
+const struct bpf_func_proto bpf_dynptr_put_proto = {
+	.func		= bpf_dynptr_put,
+	.gpl_only	= false,
+	.ret_type	= RET_VOID,
+	.arg1_type	= ARG_PTR_TO_DYNPTR | DYNPTR_TYPE_MALLOC | OBJ_RELEASE,
+};
+
 const struct bpf_func_proto bpf_get_current_task_proto __weak;
 const struct bpf_func_proto bpf_get_current_task_btf_proto __weak;
 const struct bpf_func_proto bpf_probe_read_user_proto __weak;
@@ -1448,6 +1519,10 @@ bpf_base_func_proto(enum bpf_func_id func_id)
 		return &bpf_loop_proto;
 	case BPF_FUNC_strncmp:
 		return &bpf_strncmp_proto;
+	case BPF_FUNC_dynptr_alloc:
+		return &bpf_dynptr_alloc_proto;
+	case BPF_FUNC_dynptr_put:
+		return &bpf_dynptr_put_proto;
 	default:
 		break;
 	}
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 0fe1dea520ae..8cdedc776987 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -187,6 +187,10 @@ struct bpf_verifier_stack_elem {
 					  POISON_POINTER_DELTA))
 #define BPF_MAP_PTR(X)		((struct bpf_map *)((X) & ~BPF_MAP_PTR_UNPRIV))
 
+static bool arg_type_is_mem_size(enum bpf_arg_type type);
+static int acquire_reference_state(struct bpf_verifier_env *env, int insn_idx);
+static int release_reference(struct bpf_verifier_env *env, int ref_obj_id);
+
 static bool bpf_map_ptr_poisoned(const struct bpf_insn_aux_data *aux)
 {
 	return BPF_MAP_PTR(aux->map_ptr_state) == BPF_MAP_PTR_POISON;
@@ -259,6 +263,7 @@ struct bpf_call_arg_meta {
 	u32 ret_btf_id;
 	u32 subprogno;
 	struct bpf_map_value_off_desc *kptr_off_desc;
+	u8 uninit_dynptr_regno;
 };
 
 struct btf *btf_vmlinux;
@@ -580,6 +585,7 @@ static char slot_type_char[] = {
 	[STACK_SPILL]	= 'r',
 	[STACK_MISC]	= 'm',
 	[STACK_ZERO]	= '0',
+	[STACK_DYNPTR]	= 'd',
 };
 
 static void print_liveness(struct bpf_verifier_env *env,
@@ -595,6 +601,25 @@ static void print_liveness(struct bpf_verifier_env *env,
 		verbose(env, "D");
 }
 
+static inline int get_spi(s32 off)
+{
+	return (-off - 1) / BPF_REG_SIZE;
+}
+
+static bool is_spi_bounds_valid(struct bpf_func_state *state, int spi, int nr_slots)
+{
+	int allocated_slots = state->allocated_stack / BPF_REG_SIZE;
+
+	/* We need to check that slots between [spi - nr_slots + 1, spi] are
+	 * within [0, allocated_stack).
+	 *
+	 * Please note that the spi grows downwards. For example, a dynptr
+	 * takes the size of two stack slots; the first slot will be at
+	 * spi and the second slot will be at spi - 1.
+	 */
+	return spi - nr_slots + 1 >= 0 && spi < allocated_slots;
+}
+
 static struct bpf_func_state *func(struct bpf_verifier_env *env,
 				   const struct bpf_reg_state *reg)
 {
@@ -646,6 +671,130 @@ static void mark_verifier_state_scratched(struct bpf_verifier_env *env)
 	env->scratched_stack_slots = ~0ULL;
 }
 
+static int arg_to_dynptr_type(enum bpf_arg_type arg_type)
+{
+	switch (arg_type & DYNPTR_TYPE_FLAG_MASK) {
+	case DYNPTR_TYPE_MALLOC:
+		return BPF_DYNPTR_TYPE_MALLOC;
+	default:
+		return BPF_DYNPTR_TYPE_INVALID;
+	}
+}
+
+static inline bool dynptr_type_refcounted(enum bpf_dynptr_type type)
+{
+	return type == BPF_DYNPTR_TYPE_MALLOC;
+}
+
+static int mark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_reg_state *reg,
+				   enum bpf_arg_type arg_type, int insn_idx)
+{
+	struct bpf_func_state *state = cur_func(env);
+	enum bpf_dynptr_type type;
+	int spi, id, i;
+
+	spi = get_spi(reg->off);
+
+	if (!is_spi_bounds_valid(state, spi, BPF_DYNPTR_NR_SLOTS))
+		return -EINVAL;
+
+	for (i = 0; i < BPF_REG_SIZE; i++) {
+		state->stack[spi].slot_type[i] = STACK_DYNPTR;
+		state->stack[spi - 1].slot_type[i] = STACK_DYNPTR;
+	}
+
+	type = arg_to_dynptr_type(arg_type);
+	if (type == BPF_DYNPTR_TYPE_INVALID)
+		return -EINVAL;
+
+	state->stack[spi].spilled_ptr.dynptr.type = type;
+	state->stack[spi - 1].spilled_ptr.dynptr.type = type;
+
+	state->stack[spi].spilled_ptr.dynptr.first_slot = true;
+
+	if (dynptr_type_refcounted(type)) {
+		/* The id is used to track proper releasing */
+		id = acquire_reference_state(env, insn_idx);
+		if (id < 0)
+			return id;
+
+		state->stack[spi].spilled_ptr.id = id;
+		state->stack[spi - 1].spilled_ptr.id = id;
+	}
+
+	return 0;
+}
+
+static int unmark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_reg_state *reg)
+{
+	struct bpf_func_state *state = func(env, reg);
+	int spi, i;
+
+	spi = get_spi(reg->off);
+
+	if (!is_spi_bounds_valid(state, spi, BPF_DYNPTR_NR_SLOTS))
+		return -EINVAL;
+
+	for (i = 0; i < BPF_REG_SIZE; i++) {
+		state->stack[spi].slot_type[i] = STACK_INVALID;
+		state->stack[spi - 1].slot_type[i] = STACK_INVALID;
+	}
+
+	/* Invalidate any slices associated with this dynptr */
+	if (dynptr_type_refcounted(state->stack[spi].spilled_ptr.dynptr.type)) {
+		release_reference(env, state->stack[spi].spilled_ptr.id);
+		state->stack[spi].spilled_ptr.id = 0;
+		state->stack[spi - 1].spilled_ptr.id = 0;
+	}
+
+	state->stack[spi].spilled_ptr.dynptr.type = 0;
+	state->stack[spi - 1].spilled_ptr.dynptr.type = 0;
+
+	return 0;
+}
+
+static bool is_dynptr_reg_valid_uninit(struct bpf_verifier_env *env, struct bpf_reg_state *reg)
+{
+	struct bpf_func_state *state = func(env, reg);
+	int spi = get_spi(reg->off);
+	int i;
+
+	if (!is_spi_bounds_valid(state, spi, BPF_DYNPTR_NR_SLOTS))
+		return true;
+
+	for (i = 0; i < BPF_REG_SIZE; i++) {
+		if (state->stack[spi].slot_type[i] == STACK_DYNPTR ||
+		    state->stack[spi - 1].slot_type[i] == STACK_DYNPTR)
+			return false;
+	}
+
+	return true;
+}
+
+static bool is_dynptr_reg_valid_init(struct bpf_verifier_env *env, struct bpf_reg_state *reg,
+				     enum bpf_arg_type arg_type)
+{
+	struct bpf_func_state *state = func(env, reg);
+	int spi = get_spi(reg->off);
+	int i;
+
+	if (!is_spi_bounds_valid(state, spi, BPF_DYNPTR_NR_SLOTS) ||
+	    !state->stack[spi].spilled_ptr.dynptr.first_slot)
+		return false;
+
+	for (i = 0; i < BPF_REG_SIZE; i++) {
+		if (state->stack[spi].slot_type[i] != STACK_DYNPTR ||
+		    state->stack[spi - 1].slot_type[i] != STACK_DYNPTR)
+			return false;
+	}
+
+	/* ARG_PTR_TO_DYNPTR takes any type of dynptr */
+	if (arg_type == ARG_PTR_TO_DYNPTR)
+		return true;
+
+	return state->stack[spi].spilled_ptr.dynptr.type == arg_to_dynptr_type(arg_type);
+}
+
 /* The reg state of a pointer or a bounded scalar was saved when
  * it was spilled to the stack.
  */
@@ -5400,6 +5549,16 @@ static bool arg_type_is_release(enum bpf_arg_type type)
 	return type & OBJ_RELEASE;
 }
 
+static inline bool arg_type_is_dynptr(enum bpf_arg_type type)
+{
+	return base_type(type) == ARG_PTR_TO_DYNPTR;
+}
+
+static inline bool arg_type_is_dynptr_uninit(enum bpf_arg_type type)
+{
+	return arg_type_is_dynptr(type) && (type & MEM_UNINIT);
+}
+
 static int int_ptr_type_to_size(enum bpf_arg_type type)
 {
 	if (type == ARG_PTR_TO_INT)
@@ -5539,6 +5698,7 @@ static const struct bpf_reg_types *compatible_reg_types[__BPF_ARG_TYPE_MAX] = {
 	[ARG_PTR_TO_CONST_STR]		= &const_str_ptr_types,
 	[ARG_PTR_TO_TIMER]		= &timer_types,
 	[ARG_PTR_TO_KPTR]		= &kptr_types,
+	[ARG_PTR_TO_DYNPTR]		= &stack_ptr_types,
 };
 
 static int check_reg_type(struct bpf_verifier_env *env, u32 regno,
@@ -5725,7 +5885,16 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
 
 skip_type_check:
 	if (arg_type_is_release(arg_type)) {
-		if (!reg->ref_obj_id && !register_is_null(reg)) {
+		if (arg_type_is_dynptr(arg_type)) {
+			struct bpf_func_state *state = func(env, reg);
+			int spi = get_spi(reg->off);
+
+			if (!is_spi_bounds_valid(state, spi, BPF_DYNPTR_NR_SLOTS) ||
+			    !state->stack[spi].spilled_ptr.id) {
+				verbose(env, "arg %d is an unacquired reference\n", regno);
+				return -EINVAL;
+			}
+		} else if (!reg->ref_obj_id && !register_is_null(reg)) {
 			verbose(env, "R%d must be referenced when passed to release function\n",
 				regno);
 			return -EINVAL;
@@ -5837,6 +6006,43 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
 		bool zero_size_allowed = (arg_type == ARG_CONST_SIZE_OR_ZERO);
 
 		err = check_mem_size_reg(env, reg, regno, zero_size_allowed, meta);
+	} else if (arg_type_is_dynptr(arg_type)) {
+		/* Can't pass in a dynptr at a weird offset */
+		if (reg->off % BPF_REG_SIZE) {
+			verbose(env, "cannot pass in non-zero dynptr offset\n");
+			return -EINVAL;
+		}
+
+		if (arg_type & MEM_UNINIT)  {
+			if (!is_dynptr_reg_valid_uninit(env, reg)) {
+				verbose(env, "Arg #%d dynptr has to be an uninitialized dynptr\n",
+					arg + BPF_REG_1);
+				return -EINVAL;
+			}
+
+			/* We only support one dynptr being uninitialized at the moment,
+			 * which is sufficient for the helper functions we have right now.
+			 */
+			if (meta->uninit_dynptr_regno) {
+				verbose(env, "verifier internal error: more than one uninitialized dynptr arg\n");
+				return -EFAULT;
+			}
+
+			meta->uninit_dynptr_regno = arg + BPF_REG_1;
+		} else if (!is_dynptr_reg_valid_init(env, reg, arg_type)) {
+			const char *err_extra = "";
+
+			switch (arg_type & DYNPTR_TYPE_FLAG_MASK) {
+			case DYNPTR_TYPE_MALLOC:
+				err_extra = "malloc ";
+				break;
+			default:
+				break;
+			}
+			verbose(env, "Expected an initialized %sdynptr as arg #%d\n",
+				err_extra, arg + BPF_REG_1);
+			return -EINVAL;
+		}
 	} else if (arg_type_is_alloc_size(arg_type)) {
 		if (!tnum_is_const(reg->var_off)) {
 			verbose(env, "R%d is not a known constant'\n",
@@ -6963,9 +7169,27 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
 
 	regs = cur_regs(env);
 
+	if (meta.uninit_dynptr_regno) {
+		/* we write BPF_DW bits (8 bytes) at a time */
+		for (i = 0; i < BPF_DYNPTR_SIZE; i += 8) {
+			err = check_mem_access(env, insn_idx, meta.uninit_dynptr_regno,
+					       i, BPF_DW, BPF_WRITE, -1, false);
+			if (err)
+				return err;
+		}
+
+		err = mark_stack_slots_dynptr(env, &regs[meta.uninit_dynptr_regno],
+					      fn->arg_type[meta.uninit_dynptr_regno - BPF_REG_1],
+					      insn_idx);
+		if (err)
+			return err;
+	}
+
 	if (meta.release_regno) {
 		err = -EINVAL;
-		if (meta.ref_obj_id)
+		if (arg_type_is_dynptr(fn->arg_type[meta.release_regno - BPF_REG_1]))
+			err = unmark_stack_slots_dynptr(env, &regs[meta.release_regno]);
+		else if (meta.ref_obj_id)
 			err = release_reference(env, meta.ref_obj_id);
 		/* meta.ref_obj_id can only be 0 if register that is meant to be
 		 * released is NULL, which must be > R0.
diff --git a/scripts/bpf_doc.py b/scripts/bpf_doc.py
index 096625242475..766dcbc73897 100755
--- a/scripts/bpf_doc.py
+++ b/scripts/bpf_doc.py
@@ -633,6 +633,7 @@ class PrinterHelpers(Printer):
             'struct socket',
             'struct file',
             'struct bpf_timer',
+            'struct bpf_dynptr',
     ]
     known_types = {
             '...',
@@ -682,6 +683,7 @@ class PrinterHelpers(Printer):
             'struct socket',
             'struct file',
             'struct bpf_timer',
+            'struct bpf_dynptr',
     }
     mapped_types = {
             'u8': '__u8',
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 444fe6f1cf35..5a87ed654016 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -5154,6 +5154,29 @@ union bpf_attr {
  *		if not NULL, is a reference which must be released using its
  *		corresponding release function, or moved into a BPF map before
  *		program exit.
+ *
+ * long bpf_dynptr_alloc(u32 size, u64 flags, struct bpf_dynptr *ptr)
+ *	Description
+ *		Allocate memory of *size* bytes.
+ *
+ *		Every call to bpf_dynptr_alloc must have a corresponding
+ *		bpf_dynptr_put, regardless of whether the bpf_dynptr_alloc
+ *		succeeded.
+ *
+ *		The maximum *size* supported is DYNPTR_MAX_SIZE.
+ *		Supported *flags* are __GFP_ZERO.
+ *	Return
+ *		0 on success, -ENOMEM if there is not enough memory for the
+ *		allocation, -E2BIG if the size exceeds DYNPTR_MAX_SIZE, -EINVAL
+ *		if *flags* is not supported.
+ *
+ * void bpf_dynptr_put(struct bpf_dynptr *ptr)
+ *	Description
+ *		Free memory allocated by bpf_dynptr_alloc.
+ *
+ *		After this operation, *ptr* will be an invalidated dynptr.
+ *	Return
+ *		Void.
  */
 #define __BPF_FUNC_MAPPER(FN)		\
 	FN(unspec),			\
@@ -5351,6 +5374,8 @@ union bpf_attr {
 	FN(skb_set_tstamp),		\
 	FN(ima_file_hash),		\
 	FN(kptr_xchg),			\
+	FN(dynptr_alloc),		\
+	FN(dynptr_put),			\
 	/* */
 
 /* integer value in 'imm' field of BPF_CALL instruction selects which helper
@@ -6498,6 +6523,11 @@ struct bpf_timer {
 	__u64 :64;
 } __attribute__((aligned(8)));
 
+struct bpf_dynptr {
+	__u64 :64;
+	__u64 :64;
+} __attribute__((aligned(8)));
+
 struct bpf_sysctl {
 	__u32	write;		/* Sysctl is being read (= 0) or written (= 1).
 				 * Allows 1,2,4-byte read, but no write.
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PATCH bpf-next v4 3/6] bpf: Dynptr support for ring buffers
  2022-05-09 22:42 [PATCH bpf-next v4 0/6] Dynamic pointers Joanne Koong
  2022-05-09 22:42 ` [PATCH bpf-next v4 1/6] bpf: Add MEM_UNINIT as a bpf_type_flag Joanne Koong
  2022-05-09 22:42 ` [PATCH bpf-next v4 2/6] bpf: Add verifier support for dynptrs and implement malloc dynptrs Joanne Koong
@ 2022-05-09 22:42 ` Joanne Koong
  2022-05-13 21:02   ` Andrii Nakryiko
  2022-05-16 16:09   ` David Vernet
  2022-05-09 22:42 ` [PATCH bpf-next v4 4/6] bpf: Add bpf_dynptr_read and bpf_dynptr_write Joanne Koong
                   ` (2 subsequent siblings)
  5 siblings, 2 replies; 27+ messages in thread
From: Joanne Koong @ 2022-05-09 22:42 UTC (permalink / raw)
  To: bpf; +Cc: andrii, ast, daniel, Joanne Koong

Currently, our only way of writing dynamically-sized data into a ring
buffer is through bpf_ringbuf_output, but this incurs an extra memcpy
cost. bpf_ringbuf_reserve + bpf_ringbuf_submit avoids this extra
memcpy, but it can only safely support reservation sizes that are
statically known, since the verifier cannot guarantee that the bpf
program won't access memory outside the reserved space.

The bpf_dynptr abstraction allows for dynamically-sized ring buffer
reservations without the extra memcpy.

There are 3 new APIs:

long bpf_ringbuf_reserve_dynptr(void *ringbuf, u32 size, u64 flags, struct bpf_dynptr *ptr);
void bpf_ringbuf_submit_dynptr(struct bpf_dynptr *ptr, u64 flags);
void bpf_ringbuf_discard_dynptr(struct bpf_dynptr *ptr, u64 flags);

These closely follow the functionality of the original ringbuf APIs.
For example, all ringbuffer dynptrs that have been reserved must be
either submitted or discarded before the program exits.
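The submit-or-discard pairing can be insisted on unconditionally because a
failed reservation leaves the dynptr nulled and submit/discard are no-ops on
a null dynptr. A minimal user-space model of that contract (plain C, not
kernel code; the struct layout and names are simplified stand-ins for the
patch's bpf_dynptr_kern handling) would look like:

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative model only; mirrors the patch's behavior, not its layout. */
struct dynptr_model {
	void *data;
	unsigned int size;
};

static char ringbuf_storage[64];

/* On failure the dynptr is nulled out, mirroring bpf_dynptr_set_null(). */
static int reserve_dynptr(struct dynptr_model *ptr, unsigned int size)
{
	if (size > sizeof(ringbuf_storage)) {
		ptr->data = NULL;
		ptr->size = 0;
		return -1;	/* stands in for -EINVAL */
	}
	ptr->data = ringbuf_storage;
	ptr->size = size;
	return 0;
}

/* Submit (and likewise discard) is a no-op on an invalid/null dynptr, so
 * the program may call it unconditionally after a reserve attempt. */
static void submit_dynptr(struct dynptr_model *ptr)
{
	if (!ptr->data)
		return;
	/* a real implementation would commit the record to the ringbuf here */
	ptr->data = NULL;
	ptr->size = 0;
}
```

In this model, a program can always follow reserve with submit, regardless of
whether the reservation succeeded, which is exactly the invariant the verifier
enforces for the real helpers.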

Signed-off-by: Joanne Koong <joannelkoong@gmail.com>
---
 include/linux/bpf.h            | 14 +++++-
 include/uapi/linux/bpf.h       | 35 +++++++++++++++
 kernel/bpf/helpers.c           |  6 +++
 kernel/bpf/ringbuf.c           | 78 ++++++++++++++++++++++++++++++++++
 kernel/bpf/verifier.c          | 16 ++++++-
 tools/include/uapi/linux/bpf.h | 35 +++++++++++++++
 6 files changed, 180 insertions(+), 4 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index e078b8a911fe..8fbe739b0dec 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -394,10 +394,15 @@ enum bpf_type_flag {
 	/* DYNPTR points to dynamically allocated memory. */
 	DYNPTR_TYPE_MALLOC	= BIT(8 + BPF_BASE_TYPE_BITS),
 
-	__BPF_TYPE_LAST_FLAG	= DYNPTR_TYPE_MALLOC,
+	/* DYNPTR points to a ringbuf record. */
+	DYNPTR_TYPE_RINGBUF	= BIT(9 + BPF_BASE_TYPE_BITS),
+
+	__BPF_TYPE_FLAG_MAX,
+
+	__BPF_TYPE_LAST_FLAG	= __BPF_TYPE_FLAG_MAX - 1,
 };
 
-#define DYNPTR_TYPE_FLAG_MASK	DYNPTR_TYPE_MALLOC
+#define DYNPTR_TYPE_FLAG_MASK	(DYNPTR_TYPE_MALLOC | DYNPTR_TYPE_RINGBUF)
 
 /* Max number of base types. */
 #define BPF_BASE_TYPE_LIMIT	(1UL << BPF_BASE_TYPE_BITS)
@@ -2205,6 +2210,9 @@ extern const struct bpf_func_proto bpf_ringbuf_reserve_proto;
 extern const struct bpf_func_proto bpf_ringbuf_submit_proto;
 extern const struct bpf_func_proto bpf_ringbuf_discard_proto;
 extern const struct bpf_func_proto bpf_ringbuf_query_proto;
+extern const struct bpf_func_proto bpf_ringbuf_reserve_dynptr_proto;
+extern const struct bpf_func_proto bpf_ringbuf_submit_dynptr_proto;
+extern const struct bpf_func_proto bpf_ringbuf_discard_dynptr_proto;
 extern const struct bpf_func_proto bpf_skc_to_tcp6_sock_proto;
 extern const struct bpf_func_proto bpf_skc_to_tcp_sock_proto;
 extern const struct bpf_func_proto bpf_skc_to_tcp_timewait_sock_proto;
@@ -2372,6 +2380,8 @@ enum bpf_dynptr_type {
 	BPF_DYNPTR_TYPE_INVALID,
 	/* Memory allocated dynamically by the kernel for the dynptr */
 	BPF_DYNPTR_TYPE_MALLOC,
+	/* Underlying data is a ringbuf record */
+	BPF_DYNPTR_TYPE_RINGBUF,
 };
 
 /* Since the upper 8 bits of dynptr->size is reserved, the
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 5a87ed654016..679f960d2514 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -5177,6 +5177,38 @@ union bpf_attr {
  *		After this operation, *ptr* will be an invalidated dynptr.
  *	Return
  *		Void.
+ *
+ * long bpf_ringbuf_reserve_dynptr(void *ringbuf, u32 size, u64 flags, struct bpf_dynptr *ptr)
+ *	Description
+ *		Reserve *size* bytes of payload in a ring buffer *ringbuf*
+ *		through the dynptr interface. *flags* must be 0.
+ *
+ *		Please note that a corresponding bpf_ringbuf_submit_dynptr or
+ *		bpf_ringbuf_discard_dynptr must be called on *ptr*, even if the
+ *		reservation fails. This is enforced by the verifier.
+ *	Return
+ *		0 on success, or a negative error in case of failure.
+ *
+ * void bpf_ringbuf_submit_dynptr(struct bpf_dynptr *ptr, u64 flags)
+ *	Description
+ *		Submit reserved ring buffer sample, pointed to by *ptr*,
+ *		through the dynptr interface. This is a no-op if the dynptr is
+ *		invalid/null.
+ *
+ *		For more information on *flags*, please see
+ *		'bpf_ringbuf_submit'.
+ *	Return
+ *		Nothing. Always succeeds.
+ *
+ * void bpf_ringbuf_discard_dynptr(struct bpf_dynptr *ptr, u64 flags)
+ *	Description
+ *		Discard reserved ring buffer sample through the dynptr
+ *		interface. This is a no-op if the dynptr is invalid/null.
+ *
+ *		For more information on *flags*, please see
+ *		'bpf_ringbuf_discard'.
+ *	Return
+ *		Nothing. Always succeeds.
  */
 #define __BPF_FUNC_MAPPER(FN)		\
 	FN(unspec),			\
@@ -5376,6 +5408,9 @@ union bpf_attr {
 	FN(kptr_xchg),			\
 	FN(dynptr_alloc),		\
 	FN(dynptr_put),			\
+	FN(ringbuf_reserve_dynptr),	\
+	FN(ringbuf_submit_dynptr),	\
+	FN(ringbuf_discard_dynptr),	\
 	/* */
 
 /* integer value in 'imm' field of BPF_CALL instruction selects which helper
diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index a4272e9239ea..2d6f2e28b580 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -1513,6 +1513,12 @@ bpf_base_func_proto(enum bpf_func_id func_id)
 		return &bpf_ringbuf_discard_proto;
 	case BPF_FUNC_ringbuf_query:
 		return &bpf_ringbuf_query_proto;
+	case BPF_FUNC_ringbuf_reserve_dynptr:
+		return &bpf_ringbuf_reserve_dynptr_proto;
+	case BPF_FUNC_ringbuf_submit_dynptr:
+		return &bpf_ringbuf_submit_dynptr_proto;
+	case BPF_FUNC_ringbuf_discard_dynptr:
+		return &bpf_ringbuf_discard_dynptr_proto;
 	case BPF_FUNC_for_each_map_elem:
 		return &bpf_for_each_map_elem_proto;
 	case BPF_FUNC_loop:
diff --git a/kernel/bpf/ringbuf.c b/kernel/bpf/ringbuf.c
index 311264ab80c4..ded4faeca192 100644
--- a/kernel/bpf/ringbuf.c
+++ b/kernel/bpf/ringbuf.c
@@ -475,3 +475,81 @@ const struct bpf_func_proto bpf_ringbuf_query_proto = {
 	.arg1_type	= ARG_CONST_MAP_PTR,
 	.arg2_type	= ARG_ANYTHING,
 };
+
+BPF_CALL_4(bpf_ringbuf_reserve_dynptr, struct bpf_map *, map, u32, size, u64, flags,
+	   struct bpf_dynptr_kern *, ptr)
+{
+	struct bpf_ringbuf_map *rb_map;
+	void *sample;
+	int err;
+
+	if (unlikely(flags)) {
+		bpf_dynptr_set_null(ptr);
+		return -EINVAL;
+	}
+
+	err = bpf_dynptr_check_size(size);
+	if (err) {
+		bpf_dynptr_set_null(ptr);
+		return err;
+	}
+
+	rb_map = container_of(map, struct bpf_ringbuf_map, map);
+
+	sample = __bpf_ringbuf_reserve(rb_map->rb, size);
+	if (!sample) {
+		bpf_dynptr_set_null(ptr);
+		return -EINVAL;
+	}
+
+	bpf_dynptr_init(ptr, sample, BPF_DYNPTR_TYPE_RINGBUF, 0, size);
+
+	return 0;
+}
+
+const struct bpf_func_proto bpf_ringbuf_reserve_dynptr_proto = {
+	.func		= bpf_ringbuf_reserve_dynptr,
+	.ret_type	= RET_INTEGER,
+	.arg1_type	= ARG_CONST_MAP_PTR,
+	.arg2_type	= ARG_ANYTHING,
+	.arg3_type	= ARG_ANYTHING,
+	.arg4_type	= ARG_PTR_TO_DYNPTR | DYNPTR_TYPE_RINGBUF | MEM_UNINIT,
+};
+
+BPF_CALL_2(bpf_ringbuf_submit_dynptr, struct bpf_dynptr_kern *, ptr, u64, flags)
+{
+	if (!ptr->data)
+		return 0;
+
+	bpf_ringbuf_commit(ptr->data, flags, false /* discard */);
+
+	bpf_dynptr_set_null(ptr);
+
+	return 0;
+}
+
+const struct bpf_func_proto bpf_ringbuf_submit_dynptr_proto = {
+	.func		= bpf_ringbuf_submit_dynptr,
+	.ret_type	= RET_VOID,
+	.arg1_type	= ARG_PTR_TO_DYNPTR | DYNPTR_TYPE_RINGBUF | OBJ_RELEASE,
+	.arg2_type	= ARG_ANYTHING,
+};
+
+BPF_CALL_2(bpf_ringbuf_discard_dynptr, struct bpf_dynptr_kern *, ptr, u64, flags)
+{
+	if (!ptr->data)
+		return 0;
+
+	bpf_ringbuf_commit(ptr->data, flags, true /* discard */);
+
+	bpf_dynptr_set_null(ptr);
+
+	return 0;
+}
+
+const struct bpf_func_proto bpf_ringbuf_discard_dynptr_proto = {
+	.func		= bpf_ringbuf_discard_dynptr,
+	.ret_type	= RET_VOID,
+	.arg1_type	= ARG_PTR_TO_DYNPTR | DYNPTR_TYPE_RINGBUF | OBJ_RELEASE,
+	.arg2_type	= ARG_ANYTHING,
+};
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 8cdedc776987..c17df5f17ba1 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -676,6 +676,8 @@ static int arg_to_dynptr_type(enum bpf_arg_type arg_type)
 	switch (arg_type & DYNPTR_TYPE_FLAG_MASK) {
 	case DYNPTR_TYPE_MALLOC:
 		return BPF_DYNPTR_TYPE_MALLOC;
+	case DYNPTR_TYPE_RINGBUF:
+		return BPF_DYNPTR_TYPE_RINGBUF;
 	default:
 		return BPF_DYNPTR_TYPE_INVALID;
 	}
@@ -683,7 +685,7 @@ static int arg_to_dynptr_type(enum bpf_arg_type arg_type)
 
 static inline bool dynptr_type_refcounted(enum bpf_dynptr_type type)
 {
-	return type == BPF_DYNPTR_TYPE_MALLOC;
+	return type == BPF_DYNPTR_TYPE_MALLOC || type == BPF_DYNPTR_TYPE_RINGBUF;
 }
 
 static int mark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_reg_state *reg,
@@ -6036,9 +6038,13 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
 			case DYNPTR_TYPE_MALLOC:
 				err_extra = "malloc ";
 				break;
+			case DYNPTR_TYPE_RINGBUF:
+				err_extra = "ringbuf ";
+				break;
 			default:
 				break;
 			}
+
 			verbose(env, "Expected an initialized %sdynptr as arg #%d\n",
 				err_extra, arg + BPF_REG_1);
 			return -EINVAL;
@@ -6164,7 +6170,10 @@ static int check_map_func_compatibility(struct bpf_verifier_env *env,
 	case BPF_MAP_TYPE_RINGBUF:
 		if (func_id != BPF_FUNC_ringbuf_output &&
 		    func_id != BPF_FUNC_ringbuf_reserve &&
-		    func_id != BPF_FUNC_ringbuf_query)
+		    func_id != BPF_FUNC_ringbuf_query &&
+		    func_id != BPF_FUNC_ringbuf_reserve_dynptr &&
+		    func_id != BPF_FUNC_ringbuf_submit_dynptr &&
+		    func_id != BPF_FUNC_ringbuf_discard_dynptr)
 			goto error;
 		break;
 	case BPF_MAP_TYPE_STACK_TRACE:
@@ -6280,6 +6289,9 @@ static int check_map_func_compatibility(struct bpf_verifier_env *env,
 	case BPF_FUNC_ringbuf_output:
 	case BPF_FUNC_ringbuf_reserve:
 	case BPF_FUNC_ringbuf_query:
+	case BPF_FUNC_ringbuf_reserve_dynptr:
+	case BPF_FUNC_ringbuf_submit_dynptr:
+	case BPF_FUNC_ringbuf_discard_dynptr:
 		if (map->map_type != BPF_MAP_TYPE_RINGBUF)
 			goto error;
 		break;
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 5a87ed654016..679f960d2514 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -5177,6 +5177,38 @@ union bpf_attr {
  *		After this operation, *ptr* will be an invalidated dynptr.
  *	Return
  *		Void.
+ *
+ * long bpf_ringbuf_reserve_dynptr(void *ringbuf, u32 size, u64 flags, struct bpf_dynptr *ptr)
+ *	Description
+ *		Reserve *size* bytes of payload in a ring buffer *ringbuf*
+ *		through the dynptr interface. *flags* must be 0.
+ *
+ *		Please note that a corresponding bpf_ringbuf_submit_dynptr or
+ *		bpf_ringbuf_discard_dynptr must be called on *ptr*, even if the
+ *		reservation fails. This is enforced by the verifier.
+ *	Return
+ *		0 on success, or a negative error in case of failure.
+ *
+ * void bpf_ringbuf_submit_dynptr(struct bpf_dynptr *ptr, u64 flags)
+ *	Description
+ *		Submit reserved ring buffer sample, pointed to by *ptr*,
+ *		through the dynptr interface. This is a no-op if the dynptr is
+ *		invalid/null.
+ *
+ *		For more information on *flags*, please see
+ *		'bpf_ringbuf_submit'.
+ *	Return
+ *		Nothing. Always succeeds.
+ *
+ * void bpf_ringbuf_discard_dynptr(struct bpf_dynptr *ptr, u64 flags)
+ *	Description
+ *		Discard reserved ring buffer sample through the dynptr
+ *		interface. This is a no-op if the dynptr is invalid/null.
+ *
+ *		For more information on *flags*, please see
+ *		'bpf_ringbuf_discard'.
+ *	Return
+ *		Nothing. Always succeeds.
  */
 #define __BPF_FUNC_MAPPER(FN)		\
 	FN(unspec),			\
@@ -5376,6 +5408,9 @@ union bpf_attr {
 	FN(kptr_xchg),			\
 	FN(dynptr_alloc),		\
 	FN(dynptr_put),			\
+	FN(ringbuf_reserve_dynptr),	\
+	FN(ringbuf_submit_dynptr),	\
+	FN(ringbuf_discard_dynptr),	\
 	/* */
 
 /* integer value in 'imm' field of BPF_CALL instruction selects which helper
-- 
2.30.2



* [PATCH bpf-next v4 4/6] bpf: Add bpf_dynptr_read and bpf_dynptr_write
  2022-05-09 22:42 [PATCH bpf-next v4 0/6] Dynamic pointers Joanne Koong
                   ` (2 preceding siblings ...)
  2022-05-09 22:42 ` [PATCH bpf-next v4 3/6] bpf: Dynptr support for ring buffers Joanne Koong
@ 2022-05-09 22:42 ` Joanne Koong
  2022-05-13 21:06   ` Andrii Nakryiko
  2022-05-16 16:56   ` David Vernet
  2022-05-09 22:42 ` [PATCH bpf-next v4 5/6] bpf: Add dynptr data slices Joanne Koong
  2022-05-09 22:42 ` [PATCH bpf-next v4 6/6] bpf: Dynptr tests Joanne Koong
  5 siblings, 2 replies; 27+ messages in thread
From: Joanne Koong @ 2022-05-09 22:42 UTC (permalink / raw)
  To: bpf; +Cc: andrii, ast, daniel, Joanne Koong

This patch adds two helper functions, bpf_dynptr_read and
bpf_dynptr_write:

long bpf_dynptr_read(void *dst, u32 len, struct bpf_dynptr *src, u32 offset);

long bpf_dynptr_write(struct bpf_dynptr *dst, u32 offset, void *src, u32 len);

The dynptrs passed into these functions must be valid, initialized
dynptrs.
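Every read and write is gated by a bounds check on the requested offset and
length; a runnable sketch of that check (modeled on the patch's
bpf_dynptr_check_off_len, with plain C stand-ins for the kernel types) shows
why it is written as "offset > size - len" rather than the naive
"offset + len > size", which can wrap around in u32 arithmetic:

```c
#include <assert.h>
#include <stdint.h>

/* Model of the off/len validation done by bpf_dynptr_read/write.
 * Checking len first guarantees size - len cannot underflow, and
 * comparing offset against size - len avoids the u32 wraparound a
 * naive offset + len > size comparison would suffer. */
static int check_off_len(uint32_t size, uint32_t offset, uint32_t len)
{
	if (len > size || offset > size - len)
		return -1;	/* stands in for -E2BIG */
	return 0;
}
```

For example, with size 16, offset UINT32_MAX, and len 2, the naive sum wraps
to 1 and would incorrectly pass, while the form above correctly rejects it.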

Signed-off-by: Joanne Koong <joannelkoong@gmail.com>
---
 include/linux/bpf.h            | 16 ++++++++++
 include/uapi/linux/bpf.h       | 19 ++++++++++++
 kernel/bpf/helpers.c           | 56 ++++++++++++++++++++++++++++++++++
 tools/include/uapi/linux/bpf.h | 19 ++++++++++++
 4 files changed, 110 insertions(+)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 8fbe739b0dec..6f4fa0627620 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -2391,6 +2391,12 @@ enum bpf_dynptr_type {
 #define DYNPTR_SIZE_MASK	0xFFFFFF
 #define DYNPTR_TYPE_SHIFT	28
 #define DYNPTR_TYPE_MASK	0x7
+#define DYNPTR_RDONLY_BIT	BIT(31)
+
+static inline bool bpf_dynptr_is_rdonly(struct bpf_dynptr_kern *ptr)
+{
+	return ptr->size & DYNPTR_RDONLY_BIT;
+}
 
 static inline enum bpf_dynptr_type bpf_dynptr_get_type(struct bpf_dynptr_kern *ptr)
 {
@@ -2412,6 +2418,16 @@ static inline int bpf_dynptr_check_size(u32 size)
 	return size > DYNPTR_MAX_SIZE ? -E2BIG : 0;
 }
 
+static inline int bpf_dynptr_check_off_len(struct bpf_dynptr_kern *ptr, u32 offset, u32 len)
+{
+	u32 size = bpf_dynptr_get_size(ptr);
+
+	if (len > size || offset > size - len)
+		return -E2BIG;
+
+	return 0;
+}
+
 void bpf_dynptr_init(struct bpf_dynptr_kern *ptr, void *data, enum bpf_dynptr_type type,
 		     u32 offset, u32 size);
 
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 679f960d2514..f0c5ca220d8e 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -5209,6 +5209,23 @@ union bpf_attr {
  *		'bpf_ringbuf_discard'.
  *	Return
  *		Nothing. Always succeeds.
+ *
+ * long bpf_dynptr_read(void *dst, u32 len, struct bpf_dynptr *src, u32 offset)
+ *	Description
+ *		Read *len* bytes from *src* into *dst*, starting from *offset*
+ *		into *src*.
+ *	Return
+ *		0 on success, -E2BIG if *offset* + *len* exceeds the length
+ *		of *src*'s data, -EINVAL if *src* is an invalid dynptr.
+ *
+ * long bpf_dynptr_write(struct bpf_dynptr *dst, u32 offset, void *src, u32 len)
+ *	Description
+ *		Write *len* bytes from *src* into *dst*, starting from *offset*
+ *		into *dst*.
+ *	Return
+ *		0 on success, -E2BIG if *offset* + *len* exceeds the length
+ *		of *dst*'s data, -EINVAL if *dst* is an invalid dynptr or if *dst*
+ *		is a read-only dynptr.
  */
 #define __BPF_FUNC_MAPPER(FN)		\
 	FN(unspec),			\
@@ -5411,6 +5428,8 @@ union bpf_attr {
 	FN(ringbuf_reserve_dynptr),	\
 	FN(ringbuf_submit_dynptr),	\
 	FN(ringbuf_discard_dynptr),	\
+	FN(dynptr_read),		\
+	FN(dynptr_write),		\
 	/* */
 
 /* integer value in 'imm' field of BPF_CALL instruction selects which helper
diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index 2d6f2e28b580..7206b9e5322f 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -1467,6 +1467,58 @@ const struct bpf_func_proto bpf_dynptr_put_proto = {
 	.arg1_type	= ARG_PTR_TO_DYNPTR | DYNPTR_TYPE_MALLOC | OBJ_RELEASE,
 };
 
+BPF_CALL_4(bpf_dynptr_read, void *, dst, u32, len, struct bpf_dynptr_kern *, src, u32, offset)
+{
+	int err;
+
+	if (!src->data)
+		return -EINVAL;
+
+	err = bpf_dynptr_check_off_len(src, offset, len);
+	if (err)
+		return err;
+
+	memcpy(dst, src->data + src->offset + offset, len);
+
+	return 0;
+}
+
+const struct bpf_func_proto bpf_dynptr_read_proto = {
+	.func		= bpf_dynptr_read,
+	.gpl_only	= false,
+	.ret_type	= RET_INTEGER,
+	.arg1_type	= ARG_PTR_TO_UNINIT_MEM,
+	.arg2_type	= ARG_CONST_SIZE_OR_ZERO,
+	.arg3_type	= ARG_PTR_TO_DYNPTR,
+	.arg4_type	= ARG_ANYTHING,
+};
+
+BPF_CALL_4(bpf_dynptr_write, struct bpf_dynptr_kern *, dst, u32, offset, void *, src, u32, len)
+{
+	int err;
+
+	if (!dst->data || bpf_dynptr_is_rdonly(dst))
+		return -EINVAL;
+
+	err = bpf_dynptr_check_off_len(dst, offset, len);
+	if (err)
+		return err;
+
+	memcpy(dst->data + dst->offset + offset, src, len);
+
+	return 0;
+}
+
+const struct bpf_func_proto bpf_dynptr_write_proto = {
+	.func		= bpf_dynptr_write,
+	.gpl_only	= false,
+	.ret_type	= RET_INTEGER,
+	.arg1_type	= ARG_PTR_TO_DYNPTR,
+	.arg2_type	= ARG_ANYTHING,
+	.arg3_type	= ARG_PTR_TO_MEM | MEM_RDONLY,
+	.arg4_type	= ARG_CONST_SIZE_OR_ZERO,
+};
+
 const struct bpf_func_proto bpf_get_current_task_proto __weak;
 const struct bpf_func_proto bpf_get_current_task_btf_proto __weak;
 const struct bpf_func_proto bpf_probe_read_user_proto __weak;
@@ -1529,6 +1581,10 @@ bpf_base_func_proto(enum bpf_func_id func_id)
 		return &bpf_dynptr_alloc_proto;
 	case BPF_FUNC_dynptr_put:
 		return &bpf_dynptr_put_proto;
+	case BPF_FUNC_dynptr_read:
+		return &bpf_dynptr_read_proto;
+	case BPF_FUNC_dynptr_write:
+		return &bpf_dynptr_write_proto;
 	default:
 		break;
 	}
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 679f960d2514..f0c5ca220d8e 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -5209,6 +5209,23 @@ union bpf_attr {
  *		'bpf_ringbuf_discard'.
  *	Return
  *		Nothing. Always succeeds.
+ *
+ * long bpf_dynptr_read(void *dst, u32 len, struct bpf_dynptr *src, u32 offset)
+ *	Description
+ *		Read *len* bytes from *src* into *dst*, starting from *offset*
+ *		into *src*.
+ *	Return
+ *		0 on success, -E2BIG if *offset* + *len* exceeds the length
+ *		of *src*'s data, -EINVAL if *src* is an invalid dynptr.
+ *
+ * long bpf_dynptr_write(struct bpf_dynptr *dst, u32 offset, void *src, u32 len)
+ *	Description
+ *		Write *len* bytes from *src* into *dst*, starting from *offset*
+ *		into *dst*.
+ *	Return
+ *		0 on success, -E2BIG if *offset* + *len* exceeds the length
+ *		of *dst*'s data, -EINVAL if *dst* is an invalid dynptr or if *dst*
+ *		is a read-only dynptr.
  */
 #define __BPF_FUNC_MAPPER(FN)		\
 	FN(unspec),			\
@@ -5411,6 +5428,8 @@ union bpf_attr {
 	FN(ringbuf_reserve_dynptr),	\
 	FN(ringbuf_submit_dynptr),	\
 	FN(ringbuf_discard_dynptr),	\
+	FN(dynptr_read),		\
+	FN(dynptr_write),		\
 	/* */
 
 /* integer value in 'imm' field of BPF_CALL instruction selects which helper
-- 
2.30.2



* [PATCH bpf-next v4 5/6] bpf: Add dynptr data slices
  2022-05-09 22:42 [PATCH bpf-next v4 0/6] Dynamic pointers Joanne Koong
                   ` (3 preceding siblings ...)
  2022-05-09 22:42 ` [PATCH bpf-next v4 4/6] bpf: Add bpf_dynptr_read and bpf_dynptr_write Joanne Koong
@ 2022-05-09 22:42 ` Joanne Koong
  2022-05-13 21:11   ` Andrii Nakryiko
  2022-05-13 21:37   ` Alexei Starovoitov
  2022-05-09 22:42 ` [PATCH bpf-next v4 6/6] bpf: Dynptr tests Joanne Koong
  5 siblings, 2 replies; 27+ messages in thread
From: Joanne Koong @ 2022-05-09 22:42 UTC (permalink / raw)
  To: bpf; +Cc: andrii, ast, daniel, Joanne Koong

This patch adds a new helper function

void *bpf_dynptr_data(struct bpf_dynptr *ptr, u32 offset, u32 len);

which returns a pointer to the underlying data of a dynptr. *len*
must be a statically known value. The bpf program may access the returned
data slice as a normal buffer (e.g. it can do direct reads and writes),
since the verifier associates the length with the returned pointer and
enforces that no out-of-bounds accesses occur.
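The helper's NULL-returning conditions can be summarized with a small
user-space model (plain C, not kernel code; field names are illustrative
stand-ins, not the exact bpf_dynptr_kern layout): the slice is refused when
the dynptr is invalid, when it is read-only, or when the requested window
falls outside the dynptr's bounds.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Loose model of bpf_dynptr_data(): NULL on an invalid or read-only
 * dynptr or an out-of-bounds slice, otherwise a pointer into the data. */
struct dynptr_model {
	void *data;
	uint32_t offset;	/* start of the dynptr's view into data */
	uint32_t size;		/* length of that view */
	bool rdonly;
};

static void *dynptr_data(struct dynptr_model *ptr, uint32_t off, uint32_t len)
{
	if (!ptr->data || ptr->rdonly)
		return NULL;
	/* overflow-safe bounds check, as in bpf_dynptr_check_off_len() */
	if (len > ptr->size || off > ptr->size - len)
		return NULL;
	return (char *)ptr->data + ptr->offset + off;
}
```

In the real helper the verifier additionally ties *len* to the returned
register, so out-of-bounds accesses on a non-NULL slice are rejected at load
time rather than at run time.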

Signed-off-by: Joanne Koong <joannelkoong@gmail.com>
---
 include/linux/bpf.h            |  4 ++
 include/uapi/linux/bpf.h       | 12 ++++++
 kernel/bpf/helpers.c           | 28 ++++++++++++++
 kernel/bpf/verifier.c          | 67 +++++++++++++++++++++++++++++++---
 tools/include/uapi/linux/bpf.h | 12 ++++++
 5 files changed, 117 insertions(+), 6 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 6f4fa0627620..1893c8d41301 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -397,6 +397,9 @@ enum bpf_type_flag {
 	/* DYNPTR points to a ringbuf record. */
 	DYNPTR_TYPE_RINGBUF	= BIT(9 + BPF_BASE_TYPE_BITS),
 
+	/* MEM is memory owned by a dynptr */
+	MEM_DYNPTR		= BIT(10 + BPF_BASE_TYPE_BITS),
+
 	__BPF_TYPE_FLAG_MAX,
 
 	__BPF_TYPE_LAST_FLAG	= __BPF_TYPE_FLAG_MAX - 1,
@@ -488,6 +491,7 @@ enum bpf_return_type {
 	RET_PTR_TO_TCP_SOCK_OR_NULL	= PTR_MAYBE_NULL | RET_PTR_TO_TCP_SOCK,
 	RET_PTR_TO_SOCK_COMMON_OR_NULL	= PTR_MAYBE_NULL | RET_PTR_TO_SOCK_COMMON,
 	RET_PTR_TO_ALLOC_MEM_OR_NULL	= PTR_MAYBE_NULL | MEM_ALLOC | RET_PTR_TO_ALLOC_MEM,
+	RET_PTR_TO_DYNPTR_MEM_OR_NULL	= PTR_MAYBE_NULL | MEM_DYNPTR | RET_PTR_TO_ALLOC_MEM,
 	RET_PTR_TO_BTF_ID_OR_NULL	= PTR_MAYBE_NULL | RET_PTR_TO_BTF_ID,
 
 	/* This must be the last entry. Its purpose is to ensure the enum is
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index f0c5ca220d8e..edeff26fbccd 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -5226,6 +5226,17 @@ union bpf_attr {
  *		0 on success, -E2BIG if *offset* + *len* exceeds the length
  *		of *dst*'s data, -EINVAL if *dst* is an invalid dynptr or if *dst*
  *		is a read-only dynptr.
+ *
+ * void *bpf_dynptr_data(struct bpf_dynptr *ptr, u32 offset, u32 len)
+ *	Description
+ *		Get a pointer to the underlying dynptr data.
+ *
+ *		*len* must be a statically known value. The returned data slice
+ *		is invalidated whenever the dynptr is invalidated.
+ *	Return
+ *		Pointer to the underlying dynptr data, NULL if the dynptr is
+ *		read-only, if the dynptr is invalid, or if the offset and length
+ *		are out of bounds.
  */
 #define __BPF_FUNC_MAPPER(FN)		\
 	FN(unspec),			\
@@ -5430,6 +5441,7 @@ union bpf_attr {
 	FN(ringbuf_discard_dynptr),	\
 	FN(dynptr_read),		\
 	FN(dynptr_write),		\
+	FN(dynptr_data),		\
 	/* */
 
 /* integer value in 'imm' field of BPF_CALL instruction selects which helper
diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index 7206b9e5322f..065815b9fb9f 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -1519,6 +1519,32 @@ const struct bpf_func_proto bpf_dynptr_write_proto = {
 	.arg4_type	= ARG_CONST_SIZE_OR_ZERO,
 };
 
+BPF_CALL_3(bpf_dynptr_data, struct bpf_dynptr_kern *, ptr, u32, offset, u32, len)
+{
+	int err;
+
+	if (!ptr->data)
+		return 0;
+
+	err = bpf_dynptr_check_off_len(ptr, offset, len);
+	if (err)
+		return 0;
+
+	if (bpf_dynptr_is_rdonly(ptr))
+		return 0;
+
+	return (unsigned long)(ptr->data + ptr->offset + offset);
+}
+
+const struct bpf_func_proto bpf_dynptr_data_proto = {
+	.func		= bpf_dynptr_data,
+	.gpl_only	= false,
+	.ret_type	= RET_PTR_TO_DYNPTR_MEM_OR_NULL,
+	.arg1_type	= ARG_PTR_TO_DYNPTR,
+	.arg2_type	= ARG_ANYTHING,
+	.arg3_type	= ARG_CONST_ALLOC_SIZE_OR_ZERO,
+};
+
 const struct bpf_func_proto bpf_get_current_task_proto __weak;
 const struct bpf_func_proto bpf_get_current_task_btf_proto __weak;
 const struct bpf_func_proto bpf_probe_read_user_proto __weak;
@@ -1585,6 +1611,8 @@ bpf_base_func_proto(enum bpf_func_id func_id)
 		return &bpf_dynptr_read_proto;
 	case BPF_FUNC_dynptr_write:
 		return &bpf_dynptr_write_proto;
+	case BPF_FUNC_dynptr_data:
+		return &bpf_dynptr_data_proto;
 	default:
 		break;
 	}
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index c17df5f17ba1..4d6e25c1113e 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -484,7 +484,8 @@ static bool may_be_acquire_function(enum bpf_func_id func_id)
 		func_id == BPF_FUNC_sk_lookup_udp ||
 		func_id == BPF_FUNC_skc_lookup_tcp ||
 		func_id == BPF_FUNC_map_lookup_elem ||
-	        func_id == BPF_FUNC_ringbuf_reserve;
+		func_id == BPF_FUNC_ringbuf_reserve ||
+		func_id == BPF_FUNC_dynptr_data;
 }
 
 static bool is_acquire_function(enum bpf_func_id func_id,
@@ -496,7 +497,8 @@ static bool is_acquire_function(enum bpf_func_id func_id,
 	    func_id == BPF_FUNC_sk_lookup_udp ||
 	    func_id == BPF_FUNC_skc_lookup_tcp ||
 	    func_id == BPF_FUNC_ringbuf_reserve ||
-	    func_id == BPF_FUNC_kptr_xchg)
+	    func_id == BPF_FUNC_kptr_xchg ||
+	    func_id == BPF_FUNC_dynptr_data)
 		return true;
 
 	if (func_id == BPF_FUNC_map_lookup_elem &&
@@ -518,6 +520,11 @@ static bool is_ptr_cast_function(enum bpf_func_id func_id)
 		func_id == BPF_FUNC_skc_to_tcp_request_sock;
 }
 
+static inline bool is_dynptr_ref_function(enum bpf_func_id func_id)
+{
+	return func_id == BPF_FUNC_dynptr_data;
+}
+
 static bool is_cmpxchg_insn(const struct bpf_insn *insn)
 {
 	return BPF_CLASS(insn->code) == BPF_STX &&
@@ -568,6 +575,8 @@ static const char *reg_type_str(struct bpf_verifier_env *env,
 		strncpy(prefix, "rdonly_", 32);
 	if (type & MEM_ALLOC)
 		strncpy(prefix, "alloc_", 32);
+	if (type & MEM_DYNPTR)
+		strncpy(prefix, "dynptr_", 32);
 	if (type & MEM_USER)
 		strncpy(prefix, "user_", 32);
 	if (type & MEM_PERCPU)
@@ -797,6 +806,20 @@ static bool is_dynptr_reg_valid_init(struct bpf_verifier_env *env, struct bpf_re
 	return state->stack[spi].spilled_ptr.dynptr.type == arg_to_dynptr_type(arg_type);
 }
 
+static bool is_ref_obj_id_dynptr(struct bpf_func_state *state, u32 id)
+{
+	int allocated_slots = state->allocated_stack / BPF_REG_SIZE;
+	int i;
+
+	for (i = 0; i < allocated_slots; i++) {
+		if (state->stack[i].slot_type[0] == STACK_DYNPTR &&
+		    state->stack[i].spilled_ptr.id == id)
+			return true;
+	}
+
+	return false;
+}
+
 /* The reg state of a pointer or a bounded scalar was saved when
  * it was spilled to the stack.
  */
@@ -5647,6 +5670,7 @@ static const struct bpf_reg_types mem_types = {
 		PTR_TO_MAP_VALUE,
 		PTR_TO_MEM,
 		PTR_TO_MEM | MEM_ALLOC,
+		PTR_TO_MEM | MEM_DYNPTR,
 		PTR_TO_BUF,
 	},
 };
@@ -5799,6 +5823,7 @@ int check_func_arg_reg_off(struct bpf_verifier_env *env,
 	case PTR_TO_MEM:
 	case PTR_TO_MEM | MEM_RDONLY:
 	case PTR_TO_MEM | MEM_ALLOC:
+	case PTR_TO_MEM | MEM_DYNPTR:
 	case PTR_TO_BUF:
 	case PTR_TO_BUF | MEM_RDONLY:
 	case PTR_TO_STACK:
@@ -5833,6 +5858,14 @@ int check_func_arg_reg_off(struct bpf_verifier_env *env,
 	return __check_ptr_off_reg(env, reg, regno, fixed_off_ok);
 }
 
+static inline u32 stack_slot_get_id(struct bpf_verifier_env *env, struct bpf_reg_state *reg)
+{
+	struct bpf_func_state *state = func(env, reg);
+	int spi = get_spi(reg->off);
+
+	return state->stack[spi].spilled_ptr.id;
+}
+
 static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
 			  struct bpf_call_arg_meta *meta,
 			  const struct bpf_func_proto *fn)
@@ -7371,10 +7404,31 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
 		/* For release_reference() */
 		regs[BPF_REG_0].ref_obj_id = meta.ref_obj_id;
 	} else if (is_acquire_function(func_id, meta.map_ptr)) {
-		int id = acquire_reference_state(env, insn_idx);
+		int id = 0;
+
+		if (is_dynptr_ref_function(func_id)) {
+			int i;
+
+			/* Find the id of the dynptr we're acquiring a reference to */
+			for (i = 0; i < MAX_BPF_FUNC_REG_ARGS; i++) {
+				if (arg_type_is_dynptr(fn->arg_type[i])) {
+					if (id) {
+						verbose(env, "verifier internal error: more than one dynptr arg in a dynptr ref func\n");
+						return -EFAULT;
+					}
+					id = stack_slot_get_id(env, &regs[BPF_REG_1 + i]);
+				}
+			}
+			if (!id) {
+				verbose(env, "verifier internal error: no dynptr args to a dynptr ref func\n");
+				return -EFAULT;
+			}
+		} else {
+			id = acquire_reference_state(env, insn_idx);
+			if (id < 0)
+				return id;
+		}
 
-		if (id < 0)
-			return id;
 		/* For mark_ptr_or_null_reg() */
 		regs[BPF_REG_0].id = id;
 		/* For release_reference() */
@@ -9810,7 +9864,8 @@ static void mark_ptr_or_null_regs(struct bpf_verifier_state *vstate, u32 regno,
 	u32 id = regs[regno].id;
 	int i;
 
-	if (ref_obj_id && ref_obj_id == id && is_null)
+	if (ref_obj_id && ref_obj_id == id && is_null &&
+	    !is_ref_obj_id_dynptr(state, id))
 		/* regs[regno] is in the " == NULL" branch.
 		 * No one could have freed the reference state before
 		 * doing the NULL check.
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index f0c5ca220d8e..edeff26fbccd 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -5226,6 +5226,17 @@ union bpf_attr {
  *		0 on success, -E2BIG if *offset* + *len* exceeds the length
  *		of *dst*'s data, -EINVAL if *dst* is an invalid dynptr or if *dst*
  *		is a read-only dynptr.
+ *
+ * void *bpf_dynptr_data(struct bpf_dynptr *ptr, u32 offset, u32 len)
+ *	Description
+ *		Get a pointer to the underlying dynptr data.
+ *
+ *		*len* must be a statically known value. The returned data slice
+ *		is invalidated whenever the dynptr is invalidated.
+ *	Return
+ *		Pointer to the underlying dynptr data, NULL if the dynptr is
+ *		read-only, if the dynptr is invalid, or if the offset and length
+ *		are out of bounds.
  */
 #define __BPF_FUNC_MAPPER(FN)		\
 	FN(unspec),			\
@@ -5430,6 +5441,7 @@ union bpf_attr {
 	FN(ringbuf_discard_dynptr),	\
 	FN(dynptr_read),		\
 	FN(dynptr_write),		\
+	FN(dynptr_data),		\
 	/* */
 
 /* integer value in 'imm' field of BPF_CALL instruction selects which helper
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 27+ messages in thread
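[Editorial aside: the checks that bpf_dynptr_data() performs in the patch above can be sketched as a self-contained userspace model. All names and the field layout below are illustrative assumptions for exposition only — they are not the kernel's actual struct bpf_dynptr_kern, and in the real series the read-only flag is packed inside the dynptr rather than kept as a separate field.]

```c
#include <errno.h>
#include <stddef.h>
#include <stdint.h>

/* Userspace model of the kernel-side checks; the field layout is an
 * assumption for illustration, not the real struct bpf_dynptr_kern. */
struct dynptr_model {
	void *data;		/* NULL means the dynptr is invalid */
	uint32_t offset;	/* view offset into the underlying buffer */
	uint32_t size;		/* number of bytes visible through the dynptr */
	int rdonly;		/* modeled as a flag; the kernel packs this bit */
};

/* Models bpf_dynptr_check_off_len(): reject any (offset, len) window
 * that does not fit inside the dynptr's view.  The 64-bit cast avoids
 * a 32-bit overflow of offset + len. */
static int check_off_len(const struct dynptr_model *ptr, uint32_t off, uint32_t len)
{
	if ((uint64_t)off + len > ptr->size)
		return -E2BIG;
	return 0;
}

/* Models bpf_dynptr_data(): NULL for an invalid or read-only dynptr or
 * an out-of-bounds window, otherwise a pointer into the data. */
static void *dynptr_data(struct dynptr_model *ptr, uint32_t off, uint32_t len)
{
	if (!ptr->data)
		return NULL;
	if (check_off_len(ptr, off, len))
		return NULL;
	if (ptr->rdonly)
		return NULL;
	return (char *)ptr->data + ptr->offset + off;
}
```

Note the ordering mirrors the helper: validity, then bounds, then writability — so a NULL return alone does not tell the caller which condition failed, which matches the single NULL-or-pointer return contract documented in the uapi comment.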

* [PATCH bpf-next v4 6/6] bpf: Dynptr tests
  2022-05-09 22:42 [PATCH bpf-next v4 0/6] Dynamic pointers Joanne Koong
                   ` (4 preceding siblings ...)
  2022-05-09 22:42 ` [PATCH bpf-next v4 5/6] bpf: Add dynptr data slices Joanne Koong
@ 2022-05-09 22:42 ` Joanne Koong
  5 siblings, 0 replies; 27+ messages in thread
From: Joanne Koong @ 2022-05-09 22:42 UTC (permalink / raw)
  To: bpf; +Cc: andrii, ast, daniel, Joanne Koong

This patch adds tests for dynptrs, including cases that the
verifier needs to reject (for example, invalid bpf_dynptr_put usages
and invalid writes/reads), as well as cases that should successfully
pass.

Signed-off-by: Joanne Koong <joannelkoong@gmail.com>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
---
 .../testing/selftests/bpf/prog_tests/dynptr.c | 136 ++++
 .../testing/selftests/bpf/progs/dynptr_fail.c | 582 ++++++++++++++++++
 .../selftests/bpf/progs/dynptr_success.c      | 206 +++++++
 3 files changed, 924 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/dynptr.c
 create mode 100644 tools/testing/selftests/bpf/progs/dynptr_fail.c
 create mode 100644 tools/testing/selftests/bpf/progs/dynptr_success.c

diff --git a/tools/testing/selftests/bpf/prog_tests/dynptr.c b/tools/testing/selftests/bpf/prog_tests/dynptr.c
new file mode 100644
index 000000000000..fa287d498d0b
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/dynptr.c
@@ -0,0 +1,136 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2022 Facebook */
+
+#include <test_progs.h>
+#include "dynptr_fail.skel.h"
+#include "dynptr_success.skel.h"
+
+static size_t log_buf_sz = 1048576; /* 1 MB */
+static char obj_log_buf[1048576];
+
+static struct {
+	const char *prog_name;
+	const char *expected_err_msg;
+} dynptr_tests[] = {
+	/* failure cases */
+	{"missing_put", "Unreleased reference id=1"},
+	{"missing_put_callback", "Unreleased reference id"},
+	{"put_nonalloc", "Expected an initialized malloc dynptr as arg #1"},
+	{"put_data_slice", "type=dynptr_mem expected=fp"},
+	{"put_uninit_dynptr", "arg 1 is an unacquired reference"},
+	{"use_after_put", "Expected an initialized dynptr as arg #3"},
+	{"alloc_twice", "Arg #3 dynptr has to be an uninitialized dynptr"},
+	{"add_dynptr_to_map1", "invalid indirect read from stack"},
+	{"add_dynptr_to_map2", "invalid indirect read from stack"},
+	{"ringbuf_invalid_access", "invalid mem access 'scalar'"},
+	{"ringbuf_invalid_api", "type=dynptr_mem expected=alloc_mem"},
+	{"ringbuf_out_of_bounds", "value is outside of the allowed memory range"},
+	{"data_slice_out_of_bounds", "value is outside of the allowed memory range"},
+	{"data_slice_use_after_put", "invalid mem access 'scalar'"},
+	{"invalid_helper1", "invalid indirect read from stack"},
+	{"invalid_helper2", "Expected an initialized dynptr as arg #3"},
+	{"invalid_write1", "Expected an initialized malloc dynptr as arg #1"},
+	{"invalid_write2", "Expected an initialized dynptr as arg #3"},
+	{"invalid_write3", "Expected an initialized malloc dynptr as arg #1"},
+	{"invalid_write4", "arg 1 is an unacquired reference"},
+	{"invalid_read1", "invalid read from stack"},
+	{"invalid_read2", "cannot pass in non-zero dynptr offset"},
+	{"invalid_read3", "invalid read from stack"},
+	{"invalid_offset", "invalid write to stack"},
+	{"global", "R3 type=map_value expected=fp"},
+	{"put_twice", "arg 1 is an unacquired reference"},
+	{"put_twice_callback", "arg 1 is an unacquired reference"},
+	{"zero_slice_access", "invalid access to memory, mem_size=0 off=0 size=1"},
+	/* success cases */
+	{"test_basic", NULL},
+	{"test_data_slice", NULL},
+	{"test_ringbuf", NULL},
+	{"test_alloc_zero_bytes", NULL},
+};
+
+static void verify_fail(const char *prog_name, const char *expected_err_msg)
+{
+	LIBBPF_OPTS(bpf_object_open_opts, opts);
+	struct bpf_program *prog;
+	struct dynptr_fail *skel;
+	int err;
+
+	opts.kernel_log_buf = obj_log_buf;
+	opts.kernel_log_size = log_buf_sz;
+	opts.kernel_log_level = 1;
+
+	skel = dynptr_fail__open_opts(&opts);
+	if (!ASSERT_OK_PTR(skel, "dynptr_fail__open_opts"))
+		goto cleanup;
+
+	prog = bpf_object__find_program_by_name(skel->obj, prog_name);
+	if (!ASSERT_OK_PTR(prog, "bpf_object__find_program_by_name"))
+		goto cleanup;
+
+	bpf_program__set_autoload(prog, true);
+
+	bpf_map__set_max_entries(skel->maps.ringbuf, getpagesize());
+
+	err = dynptr_fail__load(skel);
+	if (!ASSERT_ERR(err, "unexpected load success"))
+		goto cleanup;
+
+	if (!ASSERT_OK_PTR(strstr(obj_log_buf, expected_err_msg), "expected_err_msg")) {
+		fprintf(stderr, "Expected err_msg: %s\n", expected_err_msg);
+		fprintf(stderr, "Verifier output: %s\n", obj_log_buf);
+	}
+
+cleanup:
+	dynptr_fail__destroy(skel);
+}
+
+static void verify_success(const char *prog_name)
+{
+	struct dynptr_success *skel;
+	struct bpf_program *prog;
+	struct bpf_link *link;
+
+	skel = dynptr_success__open();
+	if (!ASSERT_OK_PTR(skel, "dynptr_success__open"))
+		return;
+
+	skel->bss->pid = getpid();
+
+	bpf_map__set_max_entries(skel->maps.ringbuf, getpagesize());
+
+	if (!ASSERT_OK(dynptr_success__load(skel),
+		       "dynptr_success__load"))
+		goto cleanup;
+
+	prog = bpf_object__find_program_by_name(skel->obj, prog_name);
+	if (!ASSERT_OK_PTR(prog, "bpf_object__find_program_by_name"))
+		goto cleanup;
+
+	link = bpf_program__attach(prog);
+	if (!ASSERT_OK_PTR(link, "bpf_program__attach"))
+		goto cleanup;
+
+	usleep(1);
+
+	ASSERT_EQ(skel->bss->err, 0, "err");
+
+	bpf_link__destroy(link);
+
+cleanup:
+	dynptr_success__destroy(skel);
+}
+
+void test_dynptr(void)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(dynptr_tests); i++) {
+		if (!test__start_subtest(dynptr_tests[i].prog_name))
+			continue;
+
+		if (dynptr_tests[i].expected_err_msg)
+			verify_fail(dynptr_tests[i].prog_name, dynptr_tests[i].expected_err_msg);
+		else
+			verify_success(dynptr_tests[i].prog_name);
+	}
+}
diff --git a/tools/testing/selftests/bpf/progs/dynptr_fail.c b/tools/testing/selftests/bpf/progs/dynptr_fail.c
new file mode 100644
index 000000000000..dfa4593fa94a
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/dynptr_fail.c
@@ -0,0 +1,582 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2022 Facebook */
+
+#include <string.h>
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include "bpf_misc.h"
+
+char _license[] SEC("license") = "GPL";
+
+struct test_info {
+	int x;
+	struct bpf_dynptr ptr;
+};
+
+struct {
+	__uint(type, BPF_MAP_TYPE_ARRAY);
+	__uint(max_entries, 1);
+	__type(key, __u32);
+	__type(value, struct bpf_dynptr);
+} array_map1 SEC(".maps");
+
+struct {
+	__uint(type, BPF_MAP_TYPE_ARRAY);
+	__uint(max_entries, 1);
+	__type(key, __u32);
+	__type(value, struct test_info);
+} array_map2 SEC(".maps");
+
+struct sample {
+	int pid;
+	long value;
+	char comm[16];
+};
+
+struct {
+	__uint(type, BPF_MAP_TYPE_RINGBUF);
+} ringbuf SEC(".maps");
+
+int err = 0;
+int val;
+
+/* Every bpf_dynptr_alloc call must have a corresponding bpf_dynptr_put call */
+SEC("?raw_tp/sys_nanosleep")
+int missing_put(void *ctx)
+{
+	struct bpf_dynptr mem;
+
+	bpf_dynptr_alloc(8, 0, &mem);
+
+	/* missing a call to bpf_dynptr_put(&mem) */
+
+	return 0;
+}
+
+static int missing_put_callback_fn(__u32 index, void *data)
+{
+	struct bpf_dynptr ptr;
+
+	bpf_dynptr_alloc(8, 0, &ptr);
+
+	val = index;
+
+	/* missing bpf_dynptr_put(&ptr) */
+
+	return 0;
+}
+
+/* Any dynptr initialized within a callback must have bpf_dynptr_put called */
+SEC("?raw_tp/sys_nanosleep")
+int missing_put_callback(void *ctx)
+{
+	bpf_loop(10, missing_put_callback_fn, NULL, 0);
+	return 0;
+}
+
+/* A non-alloc-ed dynptr can't be used by bpf_dynptr_put */
+SEC("?raw_tp/sys_nanosleep")
+int put_nonalloc(void *ctx)
+{
+	struct bpf_dynptr ptr;
+
+	bpf_ringbuf_reserve_dynptr(&ringbuf, val, 0, &ptr);
+
+	/* this should fail */
+	bpf_dynptr_put(&ptr);
+
+	return 0;
+}
+
+/* A data slice from a dynptr can't be used by bpf_dynptr_put */
+SEC("?raw_tp/sys_nanosleep")
+int put_data_slice(void *ctx)
+{
+	struct bpf_dynptr ptr;
+	void *data;
+
+	bpf_dynptr_alloc(8, 0, &ptr);
+
+	data = bpf_dynptr_data(&ptr, 0, 8);
+	if (!data)
+		goto done;
+
+	/* this should fail */
+	bpf_dynptr_put(data);
+
+done:
+	bpf_dynptr_put(&ptr);
+	return 0;
+}
+
+/* Can't call bpf_dynptr_put on a non-initialized dynptr */
+SEC("?raw_tp/sys_nanosleep")
+int put_uninit_dynptr(void *ctx)
+{
+	struct bpf_dynptr ptr;
+
+	/* this should fail */
+	bpf_dynptr_put(&ptr);
+
+	return 0;
+}
+
+/* A dynptr can't be used after bpf_dynptr_put has been called on it */
+SEC("?raw_tp/sys_nanosleep")
+int use_after_put(void *ctx)
+{
+	struct bpf_dynptr ptr = {};
+	char read_data[64] = {};
+
+	bpf_dynptr_alloc(8, 0, &ptr);
+
+	bpf_dynptr_read(read_data, sizeof(read_data), &ptr, 0);
+
+	bpf_dynptr_put(&ptr);
+
+	/* this should fail */
+	bpf_dynptr_read(read_data, sizeof(read_data), &ptr, 0);
+
+	return 0;
+}
+
+/*
+ * Can't call bpf_dynptr_alloc on an already-allocated bpf_dynptr that
+ * bpf_dynptr_put hasn't been called on yet
+ */
+SEC("?raw_tp/sys_nanosleep")
+int alloc_twice(void *ctx)
+{
+	struct bpf_dynptr ptr;
+
+	bpf_dynptr_alloc(8, 0, &ptr);
+
+	/* this should fail */
+	bpf_dynptr_alloc(2, 0, &ptr);
+
+	bpf_dynptr_put(&ptr);
+
+	return 0;
+}
+
+/*
+ * Can't access a ring buffer record after submit or discard has been called
+ * on the dynptr
+ */
+SEC("?raw_tp/sys_nanosleep")
+int ringbuf_invalid_access(void *ctx)
+{
+	struct bpf_dynptr ptr;
+	struct sample *sample;
+
+	err = bpf_ringbuf_reserve_dynptr(&ringbuf, sizeof(*sample), 0, &ptr);
+	sample = bpf_dynptr_data(&ptr, 0, sizeof(*sample));
+	if (!sample)
+		goto done;
+
+	sample->pid = 123;
+
+	bpf_ringbuf_submit_dynptr(&ptr, 0);
+
+	/* this should fail */
+	err = sample->pid;
+
+	return 0;
+
+done:
+	bpf_ringbuf_discard_dynptr(&ptr, 0);
+	return 0;
+}
+
+/* Can't call non-dynptr ringbuf APIs on a dynptr ringbuf sample */
+SEC("?raw_tp/sys_nanosleep")
+int ringbuf_invalid_api(void *ctx)
+{
+	struct bpf_dynptr ptr;
+	struct sample *sample;
+
+	err = bpf_ringbuf_reserve_dynptr(&ringbuf, sizeof(*sample), 0, &ptr);
+	sample = bpf_dynptr_data(&ptr, 0, sizeof(*sample));
+	if (!sample)
+		goto done;
+
+	sample->pid = 123;
+
+	/* invalid API use. need to use dynptr API to submit/discard */
+	bpf_ringbuf_submit(sample, 0);
+
+done:
+	bpf_ringbuf_discard_dynptr(&ptr, 0);
+	return 0;
+}
+
+/* Can't access memory outside a ringbuf record range */
+SEC("?raw_tp/sys_nanosleep")
+int ringbuf_out_of_bounds(void *ctx)
+{
+	struct bpf_dynptr ptr;
+	struct sample *sample;
+
+	err = bpf_ringbuf_reserve_dynptr(&ringbuf, sizeof(*sample), 0, &ptr);
+	sample = bpf_dynptr_data(&ptr, 0, sizeof(*sample));
+	if (!sample)
+		goto done;
+
+	/* Can't access beyond sample range */
+	*(__u8 *)((void *)sample + sizeof(*sample)) = 123;
+
+	bpf_ringbuf_submit_dynptr(&ptr, 0);
+
+	return 0;
+
+done:
+	bpf_ringbuf_discard_dynptr(&ptr, 0);
+	return 0;
+}
+
+/* Can't add a dynptr to a map */
+SEC("?raw_tp/sys_nanosleep")
+int add_dynptr_to_map1(void *ctx)
+{
+	struct bpf_dynptr ptr = {};
+	int key = 0;
+
+	err = bpf_dynptr_alloc(sizeof(val), 0, &ptr);
+
+	/* this should fail */
+	bpf_map_update_elem(&array_map1, &key, &ptr, 0);
+
+	bpf_dynptr_put(&ptr);
+
+	return 0;
+}
+
+/* Can't add a struct with an embedded dynptr to a map */
+SEC("?raw_tp/sys_nanosleep")
+int add_dynptr_to_map2(void *ctx)
+{
+	struct test_info x;
+	int key = 0;
+
+	bpf_dynptr_alloc(sizeof(val), 0, &x.ptr);
+
+	/* this should fail */
+	bpf_map_update_elem(&array_map2, &key, &x, 0);
+
+	return 0;
+}
+
+/* Can't pass in a dynptr as an arg to a helper function that doesn't take in a
+ * dynptr argument
+ */
+SEC("?raw_tp/sys_nanosleep")
+int invalid_helper1(void *ctx)
+{
+	struct bpf_dynptr ptr = {};
+
+	bpf_dynptr_alloc(8, 0, &ptr);
+
+	/* this should fail */
+	bpf_strncmp((const char *)&ptr, sizeof(ptr), "hello!");
+
+	bpf_dynptr_put(&ptr);
+
+	return 0;
+}
+
+/* A dynptr can't be passed into a helper function at a non-zero offset */
+SEC("?raw_tp/sys_nanosleep")
+int invalid_helper2(void *ctx)
+{
+	struct bpf_dynptr ptr = {};
+	char read_data[64] = {};
+
+	bpf_dynptr_alloc(sizeof(val), 0, &ptr);
+
+	/* this should fail */
+	bpf_dynptr_read(read_data, sizeof(read_data), (void *)&ptr + 8, 0);
+
+	bpf_dynptr_put(&ptr);
+
+	return 0;
+}
+
+/* A data slice can't be accessed out of bounds */
+SEC("?raw_tp/sys_nanosleep")
+int data_slice_out_of_bounds(void *ctx)
+{
+	struct bpf_dynptr ptr = {};
+	void *data;
+
+	bpf_dynptr_alloc(8, 0, &ptr);
+
+	data = bpf_dynptr_data(&ptr, 0, 8);
+	if (!data)
+		goto done;
+
+	/* can't index out of bounds of the data slice */
+	val = *((char *)data + 8);
+
+done:
+	bpf_dynptr_put(&ptr);
+	return 0;
+}
+
+/* A data slice can't be used after bpf_dynptr_put is called */
+SEC("?raw_tp/sys_nanosleep")
+int data_slice_use_after_put(void *ctx)
+{
+	struct bpf_dynptr ptr = {};
+	void *data;
+
+	bpf_dynptr_alloc(8, 0, &ptr);
+
+	data = bpf_dynptr_data(&ptr, 0, 8);
+	if (!data)
+		goto done;
+
+	bpf_dynptr_put(&ptr);
+
+	/* this should fail */
+	val = *(__u8 *)data;
+
+done:
+	bpf_dynptr_put(&ptr);
+	return 0;
+}
+
+/* A bpf_dynptr can't be used as a dynptr if it's been written into */
+SEC("?raw_tp/sys_nanosleep")
+int invalid_write1(void *ctx)
+{
+	struct bpf_dynptr ptr = {};
+	__u8 x = 0;
+
+	bpf_dynptr_alloc(8, 0, &ptr);
+
+	memcpy(&ptr, &x, sizeof(x));
+
+	/* this should fail */
+	bpf_dynptr_put(&ptr);
+
+	return 0;
+}
+
+/*
+ * A bpf_dynptr can't be used as a dynptr if an offset into it has been
+ * written into
+ */
+SEC("?raw_tp/sys_nanosleep")
+int invalid_write2(void *ctx)
+{
+	struct bpf_dynptr ptr = {};
+	char read_data[64] = {};
+	__u8 x = 0, y = 0;
+
+	bpf_dynptr_alloc(sizeof(x), 0, &ptr);
+
+	memcpy((void *)&ptr + 8, &y, sizeof(y));
+
+	/* this should fail */
+	bpf_dynptr_read(read_data, sizeof(read_data), &ptr, 0);
+
+	bpf_dynptr_put(&ptr);
+
+	return 0;
+}
+
+/*
+ * A bpf_dynptr can't be used as a dynptr if a non-const offset into it
+ * has been written into
+ */
+SEC("?raw_tp/sys_nanosleep")
+int invalid_write3(void *ctx)
+{
+	struct bpf_dynptr ptr = {};
+	char stack_buf[16];
+	unsigned long len;
+	__u8 x = 0;
+
+	bpf_dynptr_alloc(8, 0, &ptr);
+
+	memcpy(stack_buf, &val, sizeof(val));
+	len = stack_buf[0] & 0xf;
+
+	memcpy((void *)&ptr + len, &x, sizeof(x));
+
+	/* this should fail */
+	bpf_dynptr_put(&ptr);
+
+	return 0;
+}
+
+static int invalid_write4_callback(__u32 index, void *data)
+{
+	*(__u32 *)data = 123;
+
+	return 0;
+}
+
+/* If the dynptr is written into in a callback function, it should
+ * be invalidated as a dynptr
+ */
+SEC("?raw_tp/sys_nanosleep")
+int invalid_write4(void *ctx)
+{
+	struct bpf_dynptr ptr;
+	__u64 x = 0;
+
+	bpf_dynptr_alloc(sizeof(x), 0, &ptr);
+
+	bpf_loop(10, invalid_write4_callback, &ptr, 0);
+
+	/* this should fail */
+	bpf_dynptr_put(&ptr);
+
+	return 0;
+}
+
+/* A globally-defined bpf_dynptr can't be used (it must reside on the stack) */
+struct bpf_dynptr global_dynptr;
+SEC("?raw_tp/sys_nanosleep")
+int global(void *ctx)
+{
+	/* this should fail */
+	bpf_dynptr_alloc(4, 0, &global_dynptr);
+
+	bpf_dynptr_put(&global_dynptr);
+
+	return 0;
+}
+
+/* A direct read should fail */
+SEC("?raw_tp/sys_nanosleep")
+int invalid_read1(void *ctx)
+{
+	struct bpf_dynptr ptr = {};
+	__u32 x = 2;
+
+	bpf_dynptr_alloc(sizeof(x), 0, &ptr);
+
+	/* this should fail */
+	val = *(int *)&ptr;
+
+	bpf_dynptr_put(&ptr);
+
+	return 0;
+}
+
+/* A direct read at an offset should fail */
+SEC("?raw_tp/sys_nanosleep")
+int invalid_read2(void *ctx)
+{
+	struct bpf_dynptr ptr = {};
+	char read_data[64] = {};
+	__u64 x = 0;
+
+	bpf_dynptr_alloc(sizeof(x), 0, &ptr);
+
+	/* this should fail */
+	bpf_dynptr_read(read_data, sizeof(read_data), (void *)&ptr + 1, 0);
+
+	bpf_dynptr_put(&ptr);
+
+	return 0;
+}
+
+/* A direct read at an offset into the lower stack slot should fail */
+SEC("?raw_tp/sys_nanosleep")
+int invalid_read3(void *ctx)
+{
+	struct bpf_dynptr ptr1 = {};
+	struct bpf_dynptr ptr2 = {};
+
+	bpf_dynptr_alloc(sizeof(val), 0, &ptr1);
+	bpf_dynptr_alloc(sizeof(val), 0, &ptr2);
+
+	/* this should fail */
+	memcpy(&val, (void *)&ptr1 + 8, sizeof(val));
+
+	bpf_dynptr_put(&ptr1);
+	bpf_dynptr_put(&ptr2);
+
+	return 0;
+}
+
+/* Calling bpf_dynptr_alloc on an offset should fail */
+SEC("?raw_tp/sys_nanosleep")
+int invalid_offset(void *ctx)
+{
+	struct bpf_dynptr ptr = {};
+
+	/* this should fail */
+	bpf_dynptr_alloc(sizeof(val), 0, &ptr + 1);
+
+	bpf_dynptr_put(&ptr);
+
+	return 0;
+}
+
+/* Can't call bpf_dynptr_put twice */
+SEC("?raw_tp/sys_nanosleep")
+int put_twice(void *ctx)
+{
+	struct bpf_dynptr ptr;
+
+	bpf_dynptr_alloc(8, 0, &ptr);
+
+	bpf_dynptr_put(&ptr);
+
+	/* this second put should fail */
+	bpf_dynptr_put(&ptr);
+
+	return 0;
+}
+
+static int put_twice_callback_fn(__u32 index, void *data)
+{
+	/* this should fail */
+	bpf_dynptr_put(data);
+	val = index;
+	return 0;
+}
+
+/* Test that calling bpf_dynptr_put twice, where the 2nd put happens within a
+ * callback function, fails
+ */
+SEC("?raw_tp/sys_nanosleep")
+int put_twice_callback(void *ctx)
+{
+	struct bpf_dynptr ptr;
+
+	bpf_dynptr_alloc(8, 0, &ptr);
+
+	bpf_dynptr_put(&ptr);
+
+	bpf_loop(10, put_twice_callback_fn, &ptr, 0);
+
+	return 0;
+}
+
+/* Can't access memory in a zero-slice */
+SEC("?raw_tp/sys_nanosleep")
+int zero_slice_access(void *ctx)
+{
+	struct bpf_dynptr ptr;
+	void *data;
+
+	bpf_dynptr_alloc(0, 0, &ptr);
+
+	data = bpf_dynptr_data(&ptr, 0, 0);
+	if (!data)
+		goto done;
+
+	/* this should fail */
+	*(__u8 *)data = 23;
+
+	val = *(__u8 *)data;
+
+done:
+	bpf_dynptr_put(&ptr);
+	return 0;
+}
diff --git a/tools/testing/selftests/bpf/progs/dynptr_success.c b/tools/testing/selftests/bpf/progs/dynptr_success.c
new file mode 100644
index 000000000000..bbb34943a6c6
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/dynptr_success.c
@@ -0,0 +1,206 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2022 Facebook */
+
+#include <string.h>
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include "bpf_misc.h"
+#include "errno.h"
+
+char _license[] SEC("license") = "GPL";
+
+int pid;
+int err;
+int val;
+
+struct sample {
+	int pid;
+	int seq;
+	long value;
+	char comm[16];
+};
+
+struct {
+	__uint(type, BPF_MAP_TYPE_RINGBUF);
+} ringbuf SEC(".maps");
+
+SEC("tp/syscalls/sys_enter_nanosleep")
+int test_basic(void *ctx)
+{
+	char write_data[64] = "hello there, world!!";
+	char read_data[64] = {}, buf[64] = {};
+	struct bpf_dynptr ptr = {};
+	int i;
+
+	if (bpf_get_current_pid_tgid() >> 32 != pid)
+		return 0;
+
+	err = bpf_dynptr_alloc(sizeof(write_data), 0, &ptr);
+
+	/* Write data into the dynptr */
+	err = err ?: bpf_dynptr_write(&ptr, 0, write_data, sizeof(write_data));
+
+	/* Read the data that was written into the dynptr */
+	err = err ?: bpf_dynptr_read(read_data, sizeof(read_data), &ptr, 0);
+
+	/* Ensure the data we read matches the data we wrote */
+	for (i = 0; i < sizeof(read_data); i++) {
+		if (read_data[i] != write_data[i]) {
+			err = 1;
+			break;
+		}
+	}
+
+	bpf_dynptr_put(&ptr);
+	return 0;
+}
+
+SEC("tp/syscalls/sys_enter_nanosleep")
+int test_data_slice(void *ctx)
+{
+	struct bpf_dynptr ptr;
+	__u32 alloc_size = 16;
+	void *data;
+
+	if (bpf_get_current_pid_tgid() >> 32 != pid)
+		return 0;
+
+	/* test passing in an invalid flag */
+	err = bpf_dynptr_alloc(alloc_size, 1, &ptr);
+	if (err != -EINVAL) {
+		err = 1;
+		goto done;
+	}
+	bpf_dynptr_put(&ptr);
+
+	err = bpf_dynptr_alloc(alloc_size, 0, &ptr);
+	if (err)
+		goto done;
+
+	/* Try getting a data slice that is out of range */
+	data = bpf_dynptr_data(&ptr, alloc_size + 1, 1);
+	if (data) {
+		err = 2;
+		goto done;
+	}
+
+	/* Try getting more bytes than available */
+	data = bpf_dynptr_data(&ptr, 0, alloc_size + 1);
+	if (data) {
+		err = 3;
+		goto done;
+	}
+
+	data = bpf_dynptr_data(&ptr, 0, sizeof(int));
+	if (!data) {
+		err = 4;
+		goto done;
+	}
+
+	*(__u32 *)data = 999;
+
+	err = bpf_probe_read_kernel(&val, sizeof(val), data);
+	if (err)
+		goto done;
+
+	if (val != *(int *)data)
+		err = 5;
+
+done:
+	bpf_dynptr_put(&ptr);
+	return 0;
+}
+
+static int ringbuf_callback(__u32 index, void *data)
+{
+	struct sample *sample;
+
+	struct bpf_dynptr *ptr = (struct bpf_dynptr *)data;
+
+	sample = bpf_dynptr_data(ptr, 0, sizeof(*sample));
+	if (!sample)
+		err = 2;
+	else
+		sample->pid += val;
+
+	return 0;
+}
+
+SEC("tp/syscalls/sys_enter_nanosleep")
+int test_ringbuf(void *ctx)
+{
+	struct bpf_dynptr ptr;
+	struct sample *sample;
+
+	if (bpf_get_current_pid_tgid() >> 32 != pid)
+		return 0;
+
+	val = 100;
+
+	/* check that a dynamically sized reservation can be made */
+	err = bpf_ringbuf_reserve_dynptr(&ringbuf, val, 0, &ptr);
+
+	sample = err ? NULL : bpf_dynptr_data(&ptr, 0, sizeof(*sample));
+	if (!sample) {
+		err = 1;
+		goto done;
+	}
+
+	sample->pid = 123;
+
+	/* Can pass dynptr to callback functions */
+	bpf_loop(10, ringbuf_callback, &ptr, 0);
+
+	bpf_ringbuf_submit_dynptr(&ptr, 0);
+
+	return 0;
+
+done:
+	bpf_ringbuf_discard_dynptr(&ptr, 0);
+	return 0;
+}
+
+SEC("tp/syscalls/sys_enter_nanosleep")
+int test_alloc_zero_bytes(void *ctx)
+{
+	struct bpf_dynptr ptr;
+	void *data;
+	__u8 x = 0;
+
+	if (bpf_get_current_pid_tgid() >> 32 != pid)
+		return 0;
+
+	err = bpf_dynptr_alloc(0, 0, &ptr);
+	if (err)
+		goto done;
+
+	err = bpf_dynptr_write(&ptr, 0, &x, sizeof(x));
+	if (err != -E2BIG) {
+		err = 1;
+		goto done;
+	}
+
+	err = bpf_dynptr_read(&x, sizeof(x), &ptr, 0);
+	if (err != -E2BIG) {
+		err = 2;
+		goto done;
+	}
+	err = 0;
+
+	/* try to access memory we don't have access to */
+	data = bpf_dynptr_data(&ptr, 0, 1);
+	if (data) {
+		err = 3;
+		goto done;
+	}
+
+	data = bpf_dynptr_data(&ptr, 0, 0);
+	if (!data) {
+		err = 4;
+		goto done;
+	}
+
+done:
+	bpf_dynptr_put(&ptr);
+	return 0;
+}
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* Re: [PATCH bpf-next v4 2/6] bpf: Add verifier support for dynptrs and implement malloc dynptrs
  2022-05-09 22:42 ` [PATCH bpf-next v4 2/6] bpf: Add verifier support for dynptrs and implement malloc dynptrs Joanne Koong
@ 2022-05-12  0:05   ` Daniel Borkmann
  2022-05-12 20:03     ` Joanne Koong
  2022-05-16 20:52     ` Joanne Koong
  2022-05-13 20:59   ` Andrii Nakryiko
  2022-05-13 21:36   ` David Vernet
  2 siblings, 2 replies; 27+ messages in thread
From: Daniel Borkmann @ 2022-05-12  0:05 UTC (permalink / raw)
  To: Joanne Koong, bpf; +Cc: andrii, ast

On 5/10/22 12:42 AM, Joanne Koong wrote:
[...]
> @@ -6498,6 +6523,11 @@ struct bpf_timer {
>   	__u64 :64;
>   } __attribute__((aligned(8)));
>   
> +struct bpf_dynptr {
> +	__u64 :64;
> +	__u64 :64;
> +} __attribute__((aligned(8)));
> +
>   struct bpf_sysctl {
>   	__u32	write;		/* Sysctl is being read (= 0) or written (= 1).
>   				 * Allows 1,2,4-byte read, but no write.
> diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
> index 8a2398ac14c2..a4272e9239ea 100644
> --- a/kernel/bpf/helpers.c
> +++ b/kernel/bpf/helpers.c
> @@ -1396,6 +1396,77 @@ const struct bpf_func_proto bpf_kptr_xchg_proto = {
>   	.arg2_btf_id  = BPF_PTR_POISON,
>   };
>   
> +void bpf_dynptr_init(struct bpf_dynptr_kern *ptr, void *data, enum bpf_dynptr_type type,
> +		     u32 offset, u32 size)
> +{
> +	ptr->data = data;
> +	ptr->offset = offset;
> +	ptr->size = size;
> +	bpf_dynptr_set_type(ptr, type);
> +}
> +
> +void bpf_dynptr_set_null(struct bpf_dynptr_kern *ptr)
> +{
> +	memset(ptr, 0, sizeof(*ptr));
> +}
> +
> +BPF_CALL_3(bpf_dynptr_alloc, u32, size, u64, flags, struct bpf_dynptr_kern *, ptr)
> +{
> +	gfp_t gfp_flags = GFP_ATOMIC;

nit: should also have __GFP_NOWARN

I presume mem accounting cannot be done on this one given there is no real "ownership"
of this piece of mem?

Was planning to run some more local tests tomorrow, but from glance at selftest side
I haven't seen sanity checks like these:

bpf_dynptr_alloc(8, 0, &ptr);
data = bpf_dynptr_data(&ptr, 0, 0);
bpf_dynptr_put(&ptr);
*(__u8 *)data = 23;

How is this prevented? I think you do a ptr id check in the is_dynptr_ref_function
check on the acquire function, but with above use, would our data pointer escape, or
get invalidated via last put?

Thanks,
Daniel

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH bpf-next v4 2/6] bpf: Add verifier support for dynptrs and implement malloc dynptrs
  2022-05-12  0:05   ` Daniel Borkmann
@ 2022-05-12 20:03     ` Joanne Koong
  2022-05-13 13:12       ` Daniel Borkmann
  2022-05-16 20:52     ` Joanne Koong
  1 sibling, 1 reply; 27+ messages in thread
From: Joanne Koong @ 2022-05-12 20:03 UTC (permalink / raw)
  To: Daniel Borkmann; +Cc: bpf, Andrii Nakryiko, Alexei Starovoitov

On Wed, May 11, 2022 at 5:05 PM Daniel Borkmann <daniel@iogearbox.net> wrote:
>
> On 5/10/22 12:42 AM, Joanne Koong wrote:
> [...]
> > @@ -6498,6 +6523,11 @@ struct bpf_timer {
> >       __u64 :64;
> >   } __attribute__((aligned(8)));
> >
> > +struct bpf_dynptr {
> > +     __u64 :64;
> > +     __u64 :64;
> > +} __attribute__((aligned(8)));
> > +
> >   struct bpf_sysctl {
> >       __u32   write;          /* Sysctl is being read (= 0) or written (= 1).
> >                                * Allows 1,2,4-byte read, but no write.
> > diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
> > index 8a2398ac14c2..a4272e9239ea 100644
> > --- a/kernel/bpf/helpers.c
> > +++ b/kernel/bpf/helpers.c
> > @@ -1396,6 +1396,77 @@ const struct bpf_func_proto bpf_kptr_xchg_proto = {
> >       .arg2_btf_id  = BPF_PTR_POISON,
> >   };
> >
> > +void bpf_dynptr_init(struct bpf_dynptr_kern *ptr, void *data, enum bpf_dynptr_type type,
> > +                  u32 offset, u32 size)
> > +{
> > +     ptr->data = data;
> > +     ptr->offset = offset;
> > +     ptr->size = size;
> > +     bpf_dynptr_set_type(ptr, type);
> > +}
> > +
> > +void bpf_dynptr_set_null(struct bpf_dynptr_kern *ptr)
> > +{
> > +     memset(ptr, 0, sizeof(*ptr));
> > +}
> > +
> > +BPF_CALL_3(bpf_dynptr_alloc, u32, size, u64, flags, struct bpf_dynptr_kern *, ptr)
> > +{
> > +     gfp_t gfp_flags = GFP_ATOMIC;
>
> nit: should also have __GFP_NOWARN
I will add this in v5
>
> I presume mem accounting cannot be done on this one given there is no real "ownership"
> of this piece of mem?
I'm not too familiar with memory accounting, but I think the ownership
can get ambiguous given that the memory can be persisted in a map and
"owned" by different bpf programs (e.g. the one that frees it may not be
the same one that allocated it).
>
> Was planning to run some more local tests tomorrow, but from glance at selftest side
> I haven't seen sanity checks like these:
>
> bpf_dynptr_alloc(8, 0, &ptr);
> data = bpf_dynptr_data(&ptr, 0, 0);
> bpf_dynptr_put(&ptr);
> *(__u8 *)data = 23;
>
> How is this prevented? I think you do a ptr id check in the is_dynptr_ref_function
> check on the acquire function, but with above use, would our data pointer escape, or
> get invalidated via last put?

There's a subtest inside the dynptr_fail.c file called
"data_slice_use_after_put" that does:

bpf_dynptr_alloc(8, 0, &ptr);
data = bpf_dynptr_data(&ptr, 0, 8);
bpf_dynptr_put(&ptr);
val = *(__u8 *)data;

and checks that trying to dereference the data slice in that last line
fails the verifier (with error msg "invalid mem access 'scalar'")

In the verifier, the call to bpf_dynptr_put will invalidate any data
slices associated with the dynptr. This happens in
unmark_stack_slots_dynptr() which calls release_reference() which
marks the data slice reg as an unknown scalar value. When you try to
then dereference the data slice, the verifier rejects it with an
"invalid mem access 'scalar'" message.

Thanks for your comments.
>
> Thanks,
> Daniel

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH bpf-next v4 2/6] bpf: Add verifier support for dynptrs and implement malloc dynptrs
  2022-05-12 20:03     ` Joanne Koong
@ 2022-05-13 13:12       ` Daniel Borkmann
  2022-05-13 16:39         ` Alexei Starovoitov
  0 siblings, 1 reply; 27+ messages in thread
From: Daniel Borkmann @ 2022-05-13 13:12 UTC (permalink / raw)
  To: Joanne Koong; +Cc: bpf, Andrii Nakryiko, Alexei Starovoitov

On 5/12/22 10:03 PM, Joanne Koong wrote:
> On Wed, May 11, 2022 at 5:05 PM Daniel Borkmann <daniel@iogearbox.net> wrote:
>> On 5/10/22 12:42 AM, Joanne Koong wrote:
>> [...]
>>> @@ -6498,6 +6523,11 @@ struct bpf_timer {
>>>        __u64 :64;
>>>    } __attribute__((aligned(8)));
>>>
>>> +struct bpf_dynptr {
>>> +     __u64 :64;
>>> +     __u64 :64;
>>> +} __attribute__((aligned(8)));
>>> +
>>>    struct bpf_sysctl {
>>>        __u32   write;          /* Sysctl is being read (= 0) or written (= 1).
>>>                                 * Allows 1,2,4-byte read, but no write.
>>> diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
>>> index 8a2398ac14c2..a4272e9239ea 100644
>>> --- a/kernel/bpf/helpers.c
>>> +++ b/kernel/bpf/helpers.c
>>> @@ -1396,6 +1396,77 @@ const struct bpf_func_proto bpf_kptr_xchg_proto = {
>>>        .arg2_btf_id  = BPF_PTR_POISON,
>>>    };
>>>
>>> +void bpf_dynptr_init(struct bpf_dynptr_kern *ptr, void *data, enum bpf_dynptr_type type,
>>> +                  u32 offset, u32 size)
>>> +{
>>> +     ptr->data = data;
>>> +     ptr->offset = offset;
>>> +     ptr->size = size;
>>> +     bpf_dynptr_set_type(ptr, type);
>>> +}
>>> +
>>> +void bpf_dynptr_set_null(struct bpf_dynptr_kern *ptr)
>>> +{
>>> +     memset(ptr, 0, sizeof(*ptr));
>>> +}
>>> +
>>> +BPF_CALL_3(bpf_dynptr_alloc, u32, size, u64, flags, struct bpf_dynptr_kern *, ptr)
>>> +{
>>> +     gfp_t gfp_flags = GFP_ATOMIC;
>>
>> nit: should also have __GFP_NOWARN
> I will add this in v5
>>
>> I presume mem accounting cannot be done on this one given there is no real "ownership"
>> of this piece of mem?
> I'm not too familiar with memory accounting, but I think the ownership
> can get ambiguous given that the memory can be persisted in a map and
> "owned" by different bpf programs (eg the one that frees it may not be
> the same one that allocated it)

Right, it's ambiguous. My worry in particular is that since you added
BPF_FUNC_dynptr_alloc to bpf_base_func_proto() with this, even unprivileged
bpf would be able to allocate huge chunks of up to DYNPTR_MAX_SIZE (or
whatever max kmalloc memory under atomic constraints can provide) without
holding anyone accountable for it.

Thinking more about it, is there even any value for BPF_FUNC_dynptr_* for
fully unpriv BPF if these are rejected anyway by the Spectre mitigations
in the verifier?

>> Was planning to run some more local tests tomorrow, but from glance at selftest side
>> I haven't seen sanity checks like these:
>>
>> bpf_dynptr_alloc(8, 0, &ptr);
>> data = bpf_dynptr_data(&ptr, 0, 0);
>> bpf_dynptr_put(&ptr);
>> *(__u8 *)data = 23;
>>
>> How is this prevented? I think you do a ptr id check in the is_dynptr_ref_function
>> check on the acquire function, but with above use, would our data pointer escape, or
>> get invalidated via last put?
> 
> There's a subtest inside the dynptr_fail.c file called
> "data_slice_use_after_put" that does:
> 
> bpf_dynptr_alloc(8, 0, &ptr);
> data = bpf_dynptr_data(&ptr, 0, 8);
> bpf_dynptr_put(&ptr);
> val = *(__u8 *)data;
> 
> and checks that trying to dereference the data slice in that last line
> fails the verifier (with error msg "invalid mem access 'scalar'")
> 
> In the verifier, the call to bpf_dynptr_put will invalidate any data
> slices associated with the dynptr. This happens in
> unmark_stack_slots_dynptr() which calls release_reference() which
> marks the data slice reg as an unknown scalar value. When you try to
> then dereference the data slice, the verifier rejects it with an
> "invalid mem access 'scalar'" message.

Got it, thanks for clarifying! While playing around a bit locally with the
selftests, and in relation to the earlier statement above, one thing I noticed
is that a bpf_dynptr_alloc() and subsequent read is allowed:

bpf_dynptr_alloc(8, 0, &ptr);
bpf_dynptr_read(tmp, sizeof(tmp), &ptr, 0);
bpf_dynptr_put(&ptr);

bpf_dynptr_alloc(8, 0, &ptr);
data = bpf_dynptr_data(&ptr, 0, 8);
// read access uninit data[x]
bpf_dynptr_put(&ptr);

So either for alloc we always build in __GFP_ZERO, or bpf_dynptr_alloc()
helper usage should go under perfmon_capable(), where it's allowed to read
kernel mem.

Thanks,
Daniel


* Re: [PATCH bpf-next v4 1/6] bpf: Add MEM_UNINIT as a bpf_type_flag
  2022-05-09 22:42 ` [PATCH bpf-next v4 1/6] bpf: Add MEM_UNINIT as a bpf_type_flag Joanne Koong
@ 2022-05-13 14:11   ` David Vernet
  0 siblings, 0 replies; 27+ messages in thread
From: David Vernet @ 2022-05-13 14:11 UTC (permalink / raw)
  To: Joanne Koong; +Cc: bpf, andrii, ast, daniel

On Mon, May 09, 2022 at 03:42:52PM -0700, Joanne Koong wrote:
> Instead of having uninitialized versions of arguments as separate
> bpf_arg_types (eg ARG_PTR_TO_UNINIT_MEM as the uninitialized version
> of ARG_PTR_TO_MEM), we can instead use MEM_UNINIT as a bpf_type_flag
> modifier to denote that the argument is uninitialized.
> 
> Doing so cleans up some of the logic in the verifier. We no longer
> need to do two checks against an argument type (eg "if
> (base_type(arg_type) == ARG_PTR_TO_MEM || base_type(arg_type) ==
> ARG_PTR_TO_UNINIT_MEM)"), since uninitialized and initialized
> versions of the same argument type will now share the same base type.
> 
> In the near future, MEM_UNINIT will be used by dynptr helper functions
> as well.
> 
> Signed-off-by: Joanne Koong <joannelkoong@gmail.com>
> Acked-by: Andrii Nakryiko <andrii@kernel.org>

Looks great, thanks.

Acked-by: David Vernet <void@manifault.com>


* Re: [PATCH bpf-next v4 2/6] bpf: Add verifier support for dynptrs and implement malloc dynptrs
  2022-05-13 13:12       ` Daniel Borkmann
@ 2022-05-13 16:39         ` Alexei Starovoitov
  2022-05-13 19:28           ` Daniel Borkmann
  0 siblings, 1 reply; 27+ messages in thread
From: Alexei Starovoitov @ 2022-05-13 16:39 UTC (permalink / raw)
  To: Daniel Borkmann; +Cc: Joanne Koong, bpf, Andrii Nakryiko, Alexei Starovoitov

On Fri, May 13, 2022 at 03:12:06PM +0200, Daniel Borkmann wrote:
> 
> Thinking more about it, is there even any value for BPF_FUNC_dynptr_* for
> fully unpriv BPF if these are rejected anyway by the spectre mitigations
> from verifier?
...
> So either for alloc we always build in __GFP_ZERO, or bpf_dynptr_alloc()
> helper usage should go under perfmon_capable() where it's allowed to read
> kernel mem.

dynptr should probably be cap_bpf and cap_perfmon for now.
Otherwise we will start adding cap_perfmon checks at run-time to helpers,
which is not easy to do. Some sort of prog or user context would need
to be passed as a hidden arg into the helper. That's too much hassle just
to enable dynptr for cap_bpf only.

Similar problem with gfp_account... remembering the memcg and passing it all
the way to the bpf_dynptr_alloc helper is not easy. And it's not clear
which memcg to use. The one of the task that loaded that bpf prog?
That task could have been gone and its cgroup could be in a dying stage.
A bpf prog is executing in some context and allocating memory for itself.
Like kernel allocates memory for its needs. It doesn't feel right to
charge prog's memcg in that case. It probably should be an explicit choice
by bpf program author. Maybe in the future we can introduce a fake map
for such accounting needs and bpf prog could pass a map pointer to
bpf_dynptr_alloc. When such fake and empty map is created the memcg
would be recorded the same way we do for existing normal maps.
Then the helper will look like:
bpf_dynptr_alloc(struct bpf_map *map, u32 size, u64 flags, struct bpf_dynptr *ptr)
{
  set_active_memcg(map->memcg);
  kmalloc into dynptr;
}

Should we do this change now and allow NULL to be passed as a map ?
This way the bpf prog will have a choice whether to account into memcg or not.
Maybe it's all overkill and none of this is needed?

On the other hand, maybe map should be a mandatory argument and dynptr_alloc
can do its own memory accounting for stats? atomic inc and dec is probably
an acceptable overhead? bpftool will print the dynptr allocation stats.
All sounds nice and extra visibility is great, but the kernel code that
allocates for the kernel doesn't use memcg. bpf progs semantically are part of
the kernel, whereas memcg is a mechanism to restrict memory that the kernel
allocated on behalf of user tasks. We abused memcg for bpf progs/maps
to have a limit. Not clear whether we should continue doing so for dynptr_alloc
and in the future for kptr_alloc. gfp_account adds overhead too. It's not free.
Thoughts?


* Re: [PATCH bpf-next v4 2/6] bpf: Add verifier support for dynptrs and implement malloc dynptrs
  2022-05-13 16:39         ` Alexei Starovoitov
@ 2022-05-13 19:28           ` Daniel Borkmann
  2022-05-13 21:04             ` Andrii Nakryiko
  2022-05-13 22:16             ` Alexei Starovoitov
  0 siblings, 2 replies; 27+ messages in thread
From: Daniel Borkmann @ 2022-05-13 19:28 UTC (permalink / raw)
  To: Alexei Starovoitov; +Cc: Joanne Koong, bpf, Andrii Nakryiko, Alexei Starovoitov

On 5/13/22 6:39 PM, Alexei Starovoitov wrote:
> On Fri, May 13, 2022 at 03:12:06PM +0200, Daniel Borkmann wrote:
>>
>> Thinking more about it, is there even any value for BPF_FUNC_dynptr_* for
>> fully unpriv BPF if these are rejected anyway by the spectre mitigations
>> from verifier?
> ...
>> So either for alloc we always build in __GFP_ZERO, or bpf_dynptr_alloc()
>> helper usage should go under perfmon_capable() where it's allowed to read
>> kernel mem.
> 
> dynptr should probably be cap_bpf and cap_perfmon for now.
> Otherwise we will start adding cap_perfmon checks at run-time to helpers,
> which is not easy to do. Some sort of prog or user context would need
> to be passed as hidden arg into helper. That's too much hassle just
> to enable dynptr for cap_bpf only.
> 
> Similar problem with gfp_account... remembering memcg and passing all
> the way to bpf_dynptr_alloc helper is not easy. And it's not clear
> which memcg to use. The one of the task that loaded that bpf prog?
> That task could have been gone and cgroup is in dying stage.
> bpf prog is executing some context and allocating memory for itself.
> Like kernel allocates memory for its needs. It doesn't feel right to
> charge prog's memcg in that case. It probably should be an explicit choice
> by bpf program author. Maybe in the future we can introduce a fake map
> for such accounting needs and bpf prog could pass a map pointer to
> bpf_dynptr_alloc. When such fake and empty map is created the memcg
> would be recorded the same way we do for existing normal maps.
> Then the helper will look like:
> bpf_dynptr_alloc(struct bpf_map *map, u32 size, u64 flags, struct bpf_dynptr *ptr)
> {
>    set_active_memcg(map->memcg);
>    kmalloc into dynptr;
> }
> 
> Should we do this change now and allow NULL to be passed as a map ?

Hm, this looks a bit too much like a hack, I wouldn't do that, fwiw.

> This way the bpf prog will have a choice whether to account into memcg or not.
> Maybe it's all overkill and none of this needed?
> 
> On the other side maybe map should be a mandatory argument and dynptr_alloc
> can do its own memory accounting for stats ? atomic inc and dec is probably
> an acceptable overhead? bpftool will print the dynptr allocation stats.
> All sounds nice and extra visibility is great, but the kernel code that
> allocates for the kernel doesn't use memcg. bpf progs semantically are part of
> the kernel whereas memcg is a mechanism to restrict memory that kernel
> allocated on behalf of user tasks. We abused memcg for bpf progs/maps
> to have a limit. Not clear whether we should continue doing so for dynptr_alloc
> and in the future for kptr_alloc. gfp_account adds overhead too. It's not free.
> Thoughts?

Great question. I think the memcg is useful; it's just that the ownership for
bpf progs/maps has been relying on current, whereas current is not a real
'owner', just the entity which did the loading.

Maybe we need some sort of memcg object for bpf where we can "bind" the prog
and map to it at load time, which is then different from current and can be
flexibly set, e.g. fd = open(/sys/fs/cgroup/memory/<foo>) and pass that fd to
BPF_PROG_LOAD and BPF_MAP_CREATE via bpf_attr (otherwise, if not set, then
no accounting)?

Thanks,
Daniel


* Re: [PATCH bpf-next v4 2/6] bpf: Add verifier support for dynptrs and implement malloc dynptrs
  2022-05-09 22:42 ` [PATCH bpf-next v4 2/6] bpf: Add verifier support for dynptrs and implement malloc dynptrs Joanne Koong
  2022-05-12  0:05   ` Daniel Borkmann
@ 2022-05-13 20:59   ` Andrii Nakryiko
  2022-05-13 21:36   ` David Vernet
  2 siblings, 0 replies; 27+ messages in thread
From: Andrii Nakryiko @ 2022-05-13 20:59 UTC (permalink / raw)
  To: Joanne Koong; +Cc: bpf, Andrii Nakryiko, Alexei Starovoitov, Daniel Borkmann

On Mon, May 9, 2022 at 3:44 PM Joanne Koong <joannelkoong@gmail.com> wrote:
>
> This patch adds the bulk of the verifier work for supporting dynamic
> pointers (dynptrs) in bpf. This patch implements malloc-type dynptrs
> through 2 new APIs (bpf_dynptr_alloc and bpf_dynptr_put) that can be
> called by a bpf program. Malloc-type dynptrs are dynptrs that dynamically
> allocate memory on behalf of the program.
>
> A bpf_dynptr is opaque to the bpf program. It is a 16-byte structure
> defined internally as:
>
> struct bpf_dynptr_kern {
>     void *data;
>     u32 size;
>     u32 offset;
> } __aligned(8);
>
> The upper 8 bits of *size* is reserved (it contains extra metadata about
> read-only status and dynptr type); consequently, a dynptr only supports
> memory less than 16 MB.
>
> The 2 new APIs for malloc-type dynptrs are:
>
> long bpf_dynptr_alloc(u32 size, u64 flags, struct bpf_dynptr *ptr);
> void bpf_dynptr_put(struct bpf_dynptr *ptr);
>
> Please note that there *must* be a corresponding bpf_dynptr_put for
> every bpf_dynptr_alloc (even if the alloc fails). This is enforced
> by the verifier.
>
> In the verifier, dynptr state information will be tracked in stack
> slots. When the program passes in an uninitialized dynptr
> (ARG_PTR_TO_DYNPTR | MEM_UNINIT), the stack slots corresponding
> to the frame pointer where the dynptr resides are marked STACK_DYNPTR.
>
> For helper functions that take in initialized dynptrs (eg
> bpf_dynptr_read + bpf_dynptr_write which are added later in this
> patchset), the verifier enforces that the dynptr has been initialized
> properly by checking that their corresponding stack slots have been marked
> as STACK_DYNPTR. Dynptr release functions (eg bpf_dynptr_put) will clear
> the stack slots. The verifier enforces at program exit that there are no
> referenced dynptrs that haven't been released.
>
> Signed-off-by: Joanne Koong <joannelkoong@gmail.com>
> ---
>  include/linux/bpf.h            |  62 ++++++++-
>  include/linux/bpf_verifier.h   |  21 +++
>  include/uapi/linux/bpf.h       |  30 +++++
>  kernel/bpf/helpers.c           |  75 +++++++++++
>  kernel/bpf/verifier.c          | 228 ++++++++++++++++++++++++++++++++-
>  scripts/bpf_doc.py             |   2 +
>  tools/include/uapi/linux/bpf.h |  30 +++++
>  7 files changed, 445 insertions(+), 3 deletions(-)
>

Apart from what Daniel and Alexei are discussing, LGTM

Acked-by: Andrii Nakryiko <andrii@kernel.org>

[...]


* Re: [PATCH bpf-next v4 3/6] bpf: Dynptr support for ring buffers
  2022-05-09 22:42 ` [PATCH bpf-next v4 3/6] bpf: Dynptr support for ring buffers Joanne Koong
@ 2022-05-13 21:02   ` Andrii Nakryiko
  2022-05-16 16:09   ` David Vernet
  1 sibling, 0 replies; 27+ messages in thread
From: Andrii Nakryiko @ 2022-05-13 21:02 UTC (permalink / raw)
  To: Joanne Koong; +Cc: bpf, Andrii Nakryiko, Alexei Starovoitov, Daniel Borkmann

On Mon, May 9, 2022 at 3:44 PM Joanne Koong <joannelkoong@gmail.com> wrote:
>
> Currently, our only way of writing dynamically-sized data into a ring
> buffer is through bpf_ringbuf_output but this incurs an extra memcpy
> cost. bpf_ringbuf_reserve + bpf_ringbuf_commit avoids this extra
> memcpy, but it can only safely support reservation sizes that are
> statically known since the verifier cannot guarantee that the bpf
> program won’t access memory outside the reserved space.
>
> The bpf_dynptr abstraction allows for dynamically-sized ring buffer
> reservations without the extra memcpy.
>
> There are 3 new APIs:
>
> long bpf_ringbuf_reserve_dynptr(void *ringbuf, u32 size, u64 flags, struct bpf_dynptr *ptr);
> void bpf_ringbuf_submit_dynptr(struct bpf_dynptr *ptr, u64 flags);
> void bpf_ringbuf_discard_dynptr(struct bpf_dynptr *ptr, u64 flags);
>
> These closely follow the functionalities of the original ringbuf APIs.
> For example, all ringbuffer dynptrs that have been reserved must be
> either submitted or discarded before the program exits.
>
> Signed-off-by: Joanne Koong <joannelkoong@gmail.com>
> ---

LGTM

Acked-by: Andrii Nakryiko <andrii@kernel.org>

>  include/linux/bpf.h            | 14 +++++-
>  include/uapi/linux/bpf.h       | 35 +++++++++++++++
>  kernel/bpf/helpers.c           |  6 +++
>  kernel/bpf/ringbuf.c           | 78 ++++++++++++++++++++++++++++++++++
>  kernel/bpf/verifier.c          | 16 ++++++-
>  tools/include/uapi/linux/bpf.h | 35 +++++++++++++++
>  6 files changed, 180 insertions(+), 4 deletions(-)
>

[...]


* Re: [PATCH bpf-next v4 2/6] bpf: Add verifier support for dynptrs and implement malloc dynptrs
  2022-05-13 19:28           ` Daniel Borkmann
@ 2022-05-13 21:04             ` Andrii Nakryiko
  2022-05-13 22:16             ` Alexei Starovoitov
  1 sibling, 0 replies; 27+ messages in thread
From: Andrii Nakryiko @ 2022-05-13 21:04 UTC (permalink / raw)
  To: Daniel Borkmann
  Cc: Alexei Starovoitov, Joanne Koong, bpf, Andrii Nakryiko,
	Alexei Starovoitov

On Fri, May 13, 2022 at 12:28 PM Daniel Borkmann <daniel@iogearbox.net> wrote:
>
> On 5/13/22 6:39 PM, Alexei Starovoitov wrote:
> > On Fri, May 13, 2022 at 03:12:06PM +0200, Daniel Borkmann wrote:
> >>
> >> Thinking more about it, is there even any value for BPF_FUNC_dynptr_* for
> >> fully unpriv BPF if these are rejected anyway by the spectre mitigations
> >> from verifier?
> > ...
> >> So either for alloc we always build in __GFP_ZERO, or bpf_dynptr_alloc()
> >> helper usage should go under perfmon_capable() where it's allowed to read
> >> kernel mem.
> >
> > dynptr should probably be cap_bpf and cap_perfmon for now.
> > Otherwise we will start adding cap_perfmon checks at run-time to helpers,
> > which is not easy to do. Some sort of prog or user context would need
> > to be passed as hidden arg into helper. That's too much hassle just
> > to enable dynptr for cap_bpf only.
> >
> > Similar problem with gfp_account... remembering memcg and passing all
> > the way to bpf_dynptr_alloc helper is not easy. And it's not clear
> > which memcg to use. The one of the task that loaded that bpf prog?
> > That task could have been gone and cgroup is in dying stage.
> > bpf prog is executing some context and allocating memory for itself.
> > Like kernel allocates memory for its needs. It doesn't feel right to
> > charge prog's memcg in that case. It probably should be an explicit choice
> > by bpf program author. Maybe in the future we can introduce a fake map
> > for such accounting needs and bpf prog could pass a map pointer to
> > bpf_dynptr_alloc. When such fake and empty map is created the memcg
> > would be recorded the same way we do for existing normal maps.
> > Then the helper will look like:
> > bpf_dynptr_alloc(struct bpf_map *map, u32 size, u64 flags, struct bpf_dynptr *ptr)
> > {
> >    set_active_memcg(map->memcg);
> >    kmalloc into dynptr;
> > }
> >
> > Should we do this change now and allow NULL to be passed as a map ?
>
> Hm, this looks a bit too much like a hack, I wouldn't do that, fwiw.
>
> > This way the bpf prog will have a choice whether to account into memcg or not.
> > Maybe it's all overkill and none of this needed?
> >
> > On the other side maybe map should be a mandatory argument and dynptr_alloc
> > can do its own memory accounting for stats ? atomic inc and dec is probably
> > an acceptable overhead? bpftool will print the dynptr allocation stats.
> > All sounds nice and extra visibility is great, but the kernel code that
> > allocates for the kernel doesn't use memcg. bpf progs semantically are part of
> > the kernel whereas memcg is a mechanism to restrict memory that kernel
> > allocated on behalf of user tasks. We abused memcg for bpf progs/maps
> > to have a limit. Not clear whether we should continue doing so for dynptr_alloc
> > and in the future for kptr_alloc. gfp_account adds overhead too. It's not free.
> > Thoughts?
>
> Great question, I think the memcg is useful, just that the ownership for bpf
> progs/maps has been relying on current whereas current is not a real 'owner',
> just the entity which did the loading.
>
> Maybe we need some sort of memcg object for bpf where we can "bind" the prog
> and map to it at load time, which is then different from current and can be
> flexibly set, e.g. fd = open(/sys/fs/cgroup/memory/<foo>) and pass that fd to
> BPF_PROG_LOAD and BPF_MAP_CREATE via bpf_attr (otherwise, if not set, then
> no accounting)?
>

I think it would be great to have memory accounting for BPF program as
a separate entity from current. BPF program is sort of like a special
process w.r.t. memory that it owns. Good thing is that with
bpf_run_ctx (once wired for all program types) such "ambient" entities
can be easily accessed from helpers to do accounting without any
verifier magic involved.

> Thanks,
> Daniel


* Re: [PATCH bpf-next v4 4/6] bpf: Add bpf_dynptr_read and bpf_dynptr_write
  2022-05-09 22:42 ` [PATCH bpf-next v4 4/6] bpf: Add bpf_dynptr_read and bpf_dynptr_write Joanne Koong
@ 2022-05-13 21:06   ` Andrii Nakryiko
  2022-05-16 16:56   ` David Vernet
  1 sibling, 0 replies; 27+ messages in thread
From: Andrii Nakryiko @ 2022-05-13 21:06 UTC (permalink / raw)
  To: Joanne Koong; +Cc: bpf, Andrii Nakryiko, Alexei Starovoitov, Daniel Borkmann

On Mon, May 9, 2022 at 3:44 PM Joanne Koong <joannelkoong@gmail.com> wrote:
>
> This patch adds two helper functions, bpf_dynptr_read and
> bpf_dynptr_write:
>
> long bpf_dynptr_read(void *dst, u32 len, struct bpf_dynptr *src, u32 offset);
>
> long bpf_dynptr_write(struct bpf_dynptr *dst, u32 offset, void *src, u32 len);
>
> The dynptr passed into these functions must be valid dynptrs that have
> been initialized.
>
> Signed-off-by: Joanne Koong <joannelkoong@gmail.com>
> ---

LGTM

Acked-by: Andrii Nakryiko <andrii@kernel.org>

>  include/linux/bpf.h            | 16 ++++++++++
>  include/uapi/linux/bpf.h       | 19 ++++++++++++
>  kernel/bpf/helpers.c           | 56 ++++++++++++++++++++++++++++++++++
>  tools/include/uapi/linux/bpf.h | 19 ++++++++++++
>  4 files changed, 110 insertions(+)
>

[...]


* Re: [PATCH bpf-next v4 5/6] bpf: Add dynptr data slices
  2022-05-09 22:42 ` [PATCH bpf-next v4 5/6] bpf: Add dynptr data slices Joanne Koong
@ 2022-05-13 21:11   ` Andrii Nakryiko
  2022-05-13 21:37   ` Alexei Starovoitov
  1 sibling, 0 replies; 27+ messages in thread
From: Andrii Nakryiko @ 2022-05-13 21:11 UTC (permalink / raw)
  To: Joanne Koong; +Cc: bpf, Andrii Nakryiko, Alexei Starovoitov, Daniel Borkmann

On Mon, May 9, 2022 at 3:44 PM Joanne Koong <joannelkoong@gmail.com> wrote:
>
> This patch adds a new helper function
>
> void *bpf_dynptr_data(struct bpf_dynptr *ptr, u32 offset, u32 len);
>
> which returns a pointer to the underlying data of a dynptr. *len*
> must be a statically known value. The bpf program may access the returned
> data slice as a normal buffer (eg can do direct reads and writes), since
> the verifier associates the length with the returned pointer, and
> enforces that no out of bounds accesses occur.
>
> Signed-off-by: Joanne Koong <joannelkoong@gmail.com>
> ---

Minor nit below.

Acked-by: Andrii Nakryiko <andrii@kernel.org>


>  include/linux/bpf.h            |  4 ++
>  include/uapi/linux/bpf.h       | 12 ++++++
>  kernel/bpf/helpers.c           | 28 ++++++++++++++
>  kernel/bpf/verifier.c          | 67 +++++++++++++++++++++++++++++++---
>  tools/include/uapi/linux/bpf.h | 12 ++++++
>  5 files changed, 117 insertions(+), 6 deletions(-)
>

[...]

> @@ -797,6 +806,20 @@ static bool is_dynptr_reg_valid_init(struct bpf_verifier_env *env, struct bpf_re
>         return state->stack[spi].spilled_ptr.dynptr.type == arg_to_dynptr_type(arg_type);
>  }
>
> +static bool is_ref_obj_id_dynptr(struct bpf_func_state *state, u32 id)
> +{
> +       int allocated_slots = state->allocated_stack / BPF_REG_SIZE;
> +       int i;
> +
> +       for (i = 0; i < allocated_slots; i++) {
> +               if (state->stack[i].slot_type[0] == STACK_DYNPTR &&
> +                   state->stack[i].spilled_ptr.id == id)
> +                       return true;

there is probably no harm, but strictly speaking we should check only the
stack slot that corresponds to the start of the dynptr, right?

> +       }
> +
> +       return false;
> +}
> +
>  /* The reg state of a pointer or a bounded scalar was saved when
>   * it was spilled to the stack.
>   */

[...]


* Re: [PATCH bpf-next v4 2/6] bpf: Add verifier support for dynptrs and implement malloc dynptrs
  2022-05-09 22:42 ` [PATCH bpf-next v4 2/6] bpf: Add verifier support for dynptrs and implement malloc dynptrs Joanne Koong
  2022-05-12  0:05   ` Daniel Borkmann
  2022-05-13 20:59   ` Andrii Nakryiko
@ 2022-05-13 21:36   ` David Vernet
  2 siblings, 0 replies; 27+ messages in thread
From: David Vernet @ 2022-05-13 21:36 UTC (permalink / raw)
  To: Joanne Koong; +Cc: bpf, andrii, ast, daniel

On Mon, May 09, 2022 at 03:42:53PM -0700, Joanne Koong wrote:
> This patch adds the bulk of the verifier work for supporting dynamic
> pointers (dynptrs) in bpf. This patch implements malloc-type dynptrs
> through 2 new APIs (bpf_dynptr_alloc and bpf_dynptr_put) that can be
> called by a bpf program. Malloc-type dynptrs are dynptrs that dynamically
> allocate memory on behalf of the program.
> 
> A bpf_dynptr is opaque to the bpf program. It is a 16-byte structure
> defined internally as:
> 
> struct bpf_dynptr_kern {
>     void *data;
>     u32 size;
>     u32 offset;
> } __aligned(8);
> 
> The upper 8 bits of *size* is reserved (it contains extra metadata about
> read-only status and dynptr type); consequently, a dynptr only supports
> memory less than 16 MB.
> 

Small nit: s/less than/up to?


[...]

> +/* the implementation of the opaque uapi struct bpf_dynptr */
> +struct bpf_dynptr_kern {
> +	void *data;
> +	/* Size represents the number of usable bytes in the dynptr.
> +	 * If for example the offset is at 200 for a malloc dynptr with
> +	 * allocation size 256, the number of usable bytes is 56.
> +	 *
> +	 * The upper 8 bits are reserved.
> +	 * Bit 31 denotes whether the dynptr is read-only.
> +	 * Bits 28-30 denote the dynptr type.

It's pretty clear from context, but just for completeness, could you also
explicitly specify what bits 0 - 27 denote (24 - 27 reserved, 0 - 23 size)?

> +	 */
> +	u32 size;
> +	u32 offset;
> +} __aligned(8);
> +
> +enum bpf_dynptr_type {
> +	BPF_DYNPTR_TYPE_INVALID,
> +	/* Memory allocated dynamically by the kernel for the dynptr */
> +	BPF_DYNPTR_TYPE_MALLOC,
> +};
> +
> +/* Since the upper 8 bits of dynptr->size is reserved, the
> + * maximum supported size is 2^24 - 1.
> + */
> +#define DYNPTR_MAX_SIZE	((1UL << 24) - 1)
> +#define DYNPTR_SIZE_MASK	0xFFFFFF
> +#define DYNPTR_TYPE_SHIFT	28
> +#define DYNPTR_TYPE_MASK	0x7

Should we add a static_assert(DYNPTR_SIZE_MASK >= DYNPTR_MAX_SIZE)?
Potentially overkill, but if we're going to have separate macros for them,
it might be prudent to add it.

[...]

> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 0fe1dea520ae..8cdedc776987 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -187,6 +187,10 @@ struct bpf_verifier_stack_elem {
>  					  POISON_POINTER_DELTA))
>  #define BPF_MAP_PTR(X)		((struct bpf_map *)((X) & ~BPF_MAP_PTR_UNPRIV))
>  
> +static bool arg_type_is_mem_size(enum bpf_arg_type type);
> +static int acquire_reference_state(struct bpf_verifier_env *env, int insn_idx);
> +static int release_reference(struct bpf_verifier_env *env, int ref_obj_id);
> +
>  static bool bpf_map_ptr_poisoned(const struct bpf_insn_aux_data *aux)
>  {
>  	return BPF_MAP_PTR(aux->map_ptr_state) == BPF_MAP_PTR_POISON;
> @@ -259,6 +263,7 @@ struct bpf_call_arg_meta {
>  	u32 ret_btf_id;
>  	u32 subprogno;
>  	struct bpf_map_value_off_desc *kptr_off_desc;
> +	u8 uninit_dynptr_regno;
>  };
>  
>  struct btf *btf_vmlinux;
> @@ -580,6 +585,7 @@ static char slot_type_char[] = {
>  	[STACK_SPILL]	= 'r',
>  	[STACK_MISC]	= 'm',
>  	[STACK_ZERO]	= '0',
> +	[STACK_DYNPTR]	= 'd',
>  };
>  
>  static void print_liveness(struct bpf_verifier_env *env,
> @@ -595,6 +601,25 @@ static void print_liveness(struct bpf_verifier_env *env,
>  		verbose(env, "D");
>  }
>  
> +static inline int get_spi(s32 off)
> +{
> +	return (-off - 1) / BPF_REG_SIZE;
> +}

Small / optional nit: It's probably harmless to leave this as inline as the
compiler will almost certainly inline it for you, but to that point, it's
probably not necessary to mark this as inline. It looks like most other
static functions in verifier.c are non-inline, so IMO it's probably best to
follow that lead.

[...]

>  static int check_reg_type(struct bpf_verifier_env *env, u32 regno,
> @@ -5725,7 +5885,16 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
>  
>  skip_type_check:
>  	if (arg_type_is_release(arg_type)) {
> -		if (!reg->ref_obj_id && !register_is_null(reg)) {
> +		if (arg_type_is_dynptr(arg_type)) {
> +			struct bpf_func_state *state = func(env, reg);
> +			int spi = get_spi(reg->off);
> +
> +			if (!is_spi_bounds_valid(state, spi, BPF_DYNPTR_NR_SLOTS) ||
> +			    !state->stack[spi].spilled_ptr.id) {
> +				verbose(env, "arg %d is an unacquired reference\n", regno);
> +				return -EINVAL;
> +			}
> +		} else if (!reg->ref_obj_id && !register_is_null(reg)) {
>  			verbose(env, "R%d must be referenced when passed to release function\n",
>  				regno);
>  			return -EINVAL;
> @@ -5837,6 +6006,43 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
>  		bool zero_size_allowed = (arg_type == ARG_CONST_SIZE_OR_ZERO);
>  
>  		err = check_mem_size_reg(env, reg, regno, zero_size_allowed, meta);
> +	} else if (arg_type_is_dynptr(arg_type)) {
> +		/* Can't pass in a dynptr at a weird offset */
> +		if (reg->off % BPF_REG_SIZE) {
> +			verbose(env, "cannot pass in non-zero dynptr offset\n");
> +			return -EINVAL;
> +		}

Should this check be moved to check_func_arg_reg_off()?

> +
> +		if (arg_type & MEM_UNINIT)  {
> +			if (!is_dynptr_reg_valid_uninit(env, reg)) {
> +				verbose(env, "Arg #%d dynptr has to be an uninitialized dynptr\n",
> +					arg + BPF_REG_1);
> +				return -EINVAL;
> +			}
> +
> +			/* We only support one dynptr being uninitialized at the moment,
> +			 * which is sufficient for the helper functions we have right now.
> +			 */
> +			if (meta->uninit_dynptr_regno) {
> +				verbose(env, "verifier internal error: more than one uninitialized dynptr arg\n");
> +				return -EFAULT;
> +			}
> +
> +			meta->uninit_dynptr_regno = arg + BPF_REG_1;

Can this be simplified to:

meta->uninit_dynptr_regno = regno;

[...]

Looks good otherwise, thanks!

Acked-by: David Vernet <void@manifault.com>

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH bpf-next v4 5/6] bpf: Add dynptr data slices
  2022-05-09 22:42 ` [PATCH bpf-next v4 5/6] bpf: Add dynptr data slices Joanne Koong
  2022-05-13 21:11   ` Andrii Nakryiko
@ 2022-05-13 21:37   ` Alexei Starovoitov
  2022-05-16 17:13     ` Joanne Koong
  1 sibling, 1 reply; 27+ messages in thread
From: Alexei Starovoitov @ 2022-05-13 21:37 UTC (permalink / raw)
  To: Joanne Koong; +Cc: bpf, andrii, ast, daniel

On Mon, May 09, 2022 at 03:42:56PM -0700, Joanne Koong wrote:
>  	} else if (is_acquire_function(func_id, meta.map_ptr)) {
> -		int id = acquire_reference_state(env, insn_idx);
> +		int id = 0;
> +
> +		if (is_dynptr_ref_function(func_id)) {
> +			int i;
> +
> +			/* Find the id of the dynptr we're acquiring a reference to */
> +			for (i = 0; i < MAX_BPF_FUNC_REG_ARGS; i++) {
> +				if (arg_type_is_dynptr(fn->arg_type[i])) {
> +					if (id) {
> +						verbose(env, "verifier internal error: more than one dynptr arg in a dynptr ref func\n");
> +						return -EFAULT;
> +					}
> +					id = stack_slot_get_id(env, &regs[BPF_REG_1 + i]);

I'm afraid this approach doesn't work.
Consider:
  struct bpf_dynptr ptr;
  u32 *data1, *data2;

  bpf_dynptr_alloc(8, 0, &ptr);
  data1 = bpf_dynptr_data(&ptr, 0, 8);
  data2 = bpf_dynptr_data(&ptr, 8, 8);
  if (data1)
     *data2 = 0; /* this will succeed, but shouldn't */

The same 'id' is being reused for data1 and data2 to make sure
that bpf_dynptr_put(&ptr); will clear data1/data2,
but data1 and data2 will look the same in mark_ptr_or_null_reg().

> +				}
> +			}
> +			if (!id) {
> +				verbose(env, "verifier internal error: no dynptr args to a dynptr ref func\n");
> +				return -EFAULT;
> +			}
> +		} else {
> +			id = acquire_reference_state(env, insn_idx);
> +			if (id < 0)
> +				return id;
> +		}
>  
> -		if (id < 0)
> -			return id;
>  		/* For mark_ptr_or_null_reg() */
>  		regs[BPF_REG_0].id = id;
>  		/* For release_reference() */
> @@ -9810,7 +9864,8 @@ static void mark_ptr_or_null_regs(struct bpf_verifier_state *vstate, u32 regno,
>  	u32 id = regs[regno].id;
>  	int i;
>  
> -	if (ref_obj_id && ref_obj_id == id && is_null)
> +	if (ref_obj_id && ref_obj_id == id && is_null &&
> +	    !is_ref_obj_id_dynptr(state, id))

This bit is avoiding doing release of dynptr's id,
because id is shared between dynptr and slice's id.

In this patch I'm not sure what is the purpose of bpf_dynptr_data()
being an acquire function. data1 and data2 are not acquiring.
They're not incrementing refcnt of dynptr.

I think normal logic of check_helper_call() that does:
        if (type_may_be_null(regs[BPF_REG_0].type))
                regs[BPF_REG_0].id = ++env->id_gen;

should be preserved.
It will give different id-s to data1 and data2 and the problem
described earlier will not exist.

The transfer of ref_obj_id from dynptr into data1 and data2 needs to happen,
but this part:
        u32 ref_obj_id = regs[regno].ref_obj_id;
        u32 id = regs[regno].id;
        int i;

        if (ref_obj_id && ref_obj_id == id && is_null)
                /* regs[regno] is in the " == NULL" branch.
                 * No one could have freed the reference state before
                 * doing the NULL check.
                 */
                WARN_ON_ONCE(release_reference_state(state, id));

should be left alone.
bpf_dynptr_put(&ptr); will release dynptr and will clear data1 and data2.
if (!data1)
   will not release dynptr, because data1->id != data1->ref_obj_id.

In other words bpf_dynptr_data() should behave like is_ptr_cast_function().
It should transfer ref_obj_id to R0, but should give a new R0->id.
See big comment in bpf_verifier.h next to ref_obj_id.


* Re: [PATCH bpf-next v4 2/6] bpf: Add verifier support for dynptrs and implement malloc dynptrs
  2022-05-13 19:28           ` Daniel Borkmann
  2022-05-13 21:04             ` Andrii Nakryiko
@ 2022-05-13 22:16             ` Alexei Starovoitov
  2022-05-16 20:29               ` Joanne Koong
  1 sibling, 1 reply; 27+ messages in thread
From: Alexei Starovoitov @ 2022-05-13 22:16 UTC (permalink / raw)
  To: Daniel Borkmann; +Cc: Joanne Koong, bpf, Andrii Nakryiko, Alexei Starovoitov

On Fri, May 13, 2022 at 09:28:03PM +0200, Daniel Borkmann wrote:
> On 5/13/22 6:39 PM, Alexei Starovoitov wrote:
> > On Fri, May 13, 2022 at 03:12:06PM +0200, Daniel Borkmann wrote:
> > > 
> > > Thinking more about it, is there even any value for BPF_FUNC_dynptr_* for
> > > fully unpriv BPF if these are rejected anyway by the spectre mitigations
> > > from verifier?
> > ...
> > > So either for alloc, we always built-in __GFP_ZERO or bpf_dynptr_alloc()
> > > helper usage should go under perfmon_capable() where it's allowed to read
> > > kernel mem.
> > 
> > dynptr should probably be cap_bpf and cap_perfmon for now.
> > Otherwise we will start adding cap_perfmon checks in run-time to helpers
> > which is not easy to do. Some sort of prog or user context would need
> > to be passed as hidden arg into helper. That's too much hassle just
> > to enable dynptr for cap_bpf only.
> > 
> > Similar problem with gfp_account... remembering memcg and passing all
> > the way to bpf_dynptr_alloc helper is not easy. And it's not clear
> > which memcg to use. The one of the task that loaded that bpf prog?
> > That task could have been gone and cgroup is in dying stage.
> > bpf prog is executing some context and allocating memory for itself.
> > Like kernel allocates memory for its needs. It doesn't feel right to
> > charge prog's memcg in that case. It probably should be an explicit choice
> > by bpf program author. Maybe in the future we can introduce a fake map
> > for such accounting needs and bpf prog could pass a map pointer to
> > bpf_dynptr_alloc. When such fake and empty map is created the memcg
> > would be recorded the same way we do for existing normal maps.
> > Then the helper will look like:
> > bpf_dynptr_alloc(struct bpf_map *map, u32 size, u64 flags, struct bpf_dynptr *ptr)
> > {
> >    set_active_memcg(map->memcg);
> >    kmalloc into dynptr;
> > }
> > 
> > Should we do this change now and allow NULL to be passed as a map ?
> 
> Hm, this looks a bit too much like a hack, I wouldn't do that, fwiw.
> 
> > This way the bpf prog will have a choice whether to account into memcg or not.
> > Maybe it's all overkill and none of this needed?
> > 
> > On the other side maybe map should be a mandatory argument and dynptr_alloc
> > can do its own memory accounting for stats ? atomic inc and dec is probably
> > an acceptable overhead? bpftool will print the dynptr allocation stats.
> > All sounds nice and extra visibility is great, but the kernel code that
> > allocates for the kernel doesn't use memcg. bpf progs semantically are part of
> > the kernel whereas memcg is a mechanism to restrict memory that kernel
> > allocated on behalf of user tasks. We abused memcg for bpf progs/maps
> > to have a limit. Not clear whether we should continue doing so for dynptr_alloc
> > and in the future for kptr_alloc. gfp_account adds overhead too. It's not free.
> > Thoughts?
> 
> Great question, I think the memcg is useful, just that the ownership for bpf
> progs/maps has been relying on current whereas current is not a real 'owner',
> just the entity which did the loading.
> 
> Maybe we need some sort of memcg object for bpf where we can "bind" the prog
> and map to it at load time, which is then different from current and can be
> flexibly set, e.g. fd = open(/sys/fs/cgroup/memory/<foo>) and pass that fd to
> BPF_PROG_LOAD and BPF_MAP_CREATE via bpf_attr (otherwise, if not set, then
> no accounting)?

Agree. Explicitly specifying memcg by FD would be nice.
It will be useful for normal maps and progs.
This is a bit orthogonal to having a map argument to bpf_dynptr/kptr_alloc.

Here is the main reason why we probably should have it mandatory:
kmalloc cannot be called from nmi and in general cannot be called from tracing.
kprobe/fentry could be inside slab or page alloc path and it might blow up.
That's the reason why hashmap defaults to pre-alloc.
In order to do pre-alloc in bpf_dynptr/kptr_alloc() it has to have a map-like
argument that will keep the info about preallocated memory.

How about the following api:
mem = bpf_map_create(BPF_MAP_TYPE_MEMORY); // from user space
bpf_mem_prealloc(mem, size); // preallocate memory. from sleepable or irqwork
bpf_dynptr_alloc(mem, size, flags, &dynptr); // non-sleepable
// returns 'size' bytes if they were available in preallocated memory

Right now bpf maps are either full prealloc or full kmalloc.
This approach will be a hybrid.
The bpf progs will be using it roughly like this:

// init from user space
mem = bpf_map_create(BPF_MAP_TYPE_MEMORY);
sys_bpf(mem_prealloc, 1Mbyte); // prealloc largest possible single dynptr_alloc

// from bpf prog
bpf_dynptr_alloc(mem, size, flags, &dynptr); // if (size < 1M) all good
bpf_irq_work_queue(replenish_prealloc, size); // refill mem's prealloc

void replenish_prealloc(sz) { bpf_mem_prealloc(mem, sz); }

bpf_dynptr_alloc would need to implement a memory allocator out of
reserved memory. We can probably reuse some of sl[oua]b code.
slob_alloc may fit the best (without dynamic slob_new_pages).
Song's pack_alloc is probably good enough to start.

The gfp_account flag moves into bpf_mem_prealloc() helper.
It doesn't make sense in bpf_dynptr_alloc.
While gfp_zero makes sense only in bpf_dynptr_alloc.

Thoughts?


* Re: [PATCH bpf-next v4 3/6] bpf: Dynptr support for ring buffers
  2022-05-09 22:42 ` [PATCH bpf-next v4 3/6] bpf: Dynptr support for ring buffers Joanne Koong
  2022-05-13 21:02   ` Andrii Nakryiko
@ 2022-05-16 16:09   ` David Vernet
  1 sibling, 0 replies; 27+ messages in thread
From: David Vernet @ 2022-05-16 16:09 UTC (permalink / raw)
  To: Joanne Koong; +Cc: bpf, andrii, ast, daniel

On Mon, May 09, 2022 at 03:42:54PM -0700, Joanne Koong wrote:
> Currently, our only way of writing dynamically-sized data into a ring
> buffer is through bpf_ringbuf_output but this incurs an extra memcpy
> cost. bpf_ringbuf_reserve + bpf_ringbuf_commit avoids this extra
> memcpy, but it can only safely support reservation sizes that are
> statically known since the verifier cannot guarantee that the bpf
> program won’t access memory outside the reserved space.
> 
> The bpf_dynptr abstraction allows for dynamically-sized ring buffer
> reservations without the extra memcpy.
> 
> There are 3 new APIs:
> 
> long bpf_ringbuf_reserve_dynptr(void *ringbuf, u32 size, u64 flags, struct bpf_dynptr *ptr);
> void bpf_ringbuf_submit_dynptr(struct bpf_dynptr *ptr, u64 flags);
> void bpf_ringbuf_discard_dynptr(struct bpf_dynptr *ptr, u64 flags);
> 
> These closely follow the functionalities of the original ringbuf APIs.
> For example, all ringbuffer dynptrs that have been reserved must be
> either submitted or discarded before the program exits.
> 
> Signed-off-by: Joanne Koong <joannelkoong@gmail.com>

Looks good!

Acked-by: David Vernet <void@manifault.com>


* Re: [PATCH bpf-next v4 4/6] bpf: Add bpf_dynptr_read and bpf_dynptr_write
  2022-05-09 22:42 ` [PATCH bpf-next v4 4/6] bpf: Add bpf_dynptr_read and bpf_dynptr_write Joanne Koong
  2022-05-13 21:06   ` Andrii Nakryiko
@ 2022-05-16 16:56   ` David Vernet
  2022-05-16 17:23     ` Joanne Koong
  1 sibling, 1 reply; 27+ messages in thread
From: David Vernet @ 2022-05-16 16:56 UTC (permalink / raw)
  To: Joanne Koong; +Cc: bpf, andrii, ast, daniel

On Mon, May 09, 2022 at 03:42:55PM -0700, Joanne Koong wrote:
> This patch adds two helper functions, bpf_dynptr_read and
> bpf_dynptr_write:
> 
> long bpf_dynptr_read(void *dst, u32 len, struct bpf_dynptr *src, u32 offset);
> 
> long bpf_dynptr_write(struct bpf_dynptr *dst, u32 offset, void *src, u32 len);
> 
> The dynptr passed into these functions must be valid dynptrs that have
> been initialized.
> 
> Signed-off-by: Joanne Koong <joannelkoong@gmail.com>
> ---
>  include/linux/bpf.h            | 16 ++++++++++
>  include/uapi/linux/bpf.h       | 19 ++++++++++++
>  kernel/bpf/helpers.c           | 56 ++++++++++++++++++++++++++++++++++
>  tools/include/uapi/linux/bpf.h | 19 ++++++++++++
>  4 files changed, 110 insertions(+)
> 
> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> index 8fbe739b0dec..6f4fa0627620 100644
> --- a/include/linux/bpf.h
> +++ b/include/linux/bpf.h
> @@ -2391,6 +2391,12 @@ enum bpf_dynptr_type {
>  #define DYNPTR_SIZE_MASK	0xFFFFFF
>  #define DYNPTR_TYPE_SHIFT	28
>  #define DYNPTR_TYPE_MASK	0x7
> +#define DYNPTR_RDONLY_BIT	BIT(31)
> +
> +static inline bool bpf_dynptr_is_rdonly(struct bpf_dynptr_kern *ptr)
> +{
> +	return ptr->size & DYNPTR_RDONLY_BIT;
> +}
>  
>  static inline enum bpf_dynptr_type bpf_dynptr_get_type(struct bpf_dynptr_kern *ptr)
>  {
> @@ -2412,6 +2418,16 @@ static inline int bpf_dynptr_check_size(u32 size)
>  	return size > DYNPTR_MAX_SIZE ? -E2BIG : 0;
>  }
>  
> +static inline int bpf_dynptr_check_off_len(struct bpf_dynptr_kern *ptr, u32 offset, u32 len)
> +{
> +	u32 size = bpf_dynptr_get_size(ptr);
> +
> +	if (len > size || offset > size - len)
> +		return -E2BIG;
> +
> +	return 0;
> +}

Does this need to be in bpf.h? Or could it be brought into helpers.c as a
static function? I don't think there's any harm in leaving it here, but at
first glance it seems like a helper function that doesn't really need to be
exported.

> +
>  void bpf_dynptr_init(struct bpf_dynptr_kern *ptr, void *data, enum bpf_dynptr_type type,
>  		     u32 offset, u32 size);
>  
> diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> index 679f960d2514..f0c5ca220d8e 100644
> --- a/include/uapi/linux/bpf.h
> +++ b/include/uapi/linux/bpf.h
> @@ -5209,6 +5209,23 @@ union bpf_attr {
>   *		'bpf_ringbuf_discard'.
>   *	Return
>   *		Nothing. Always succeeds.
> + *
> + * long bpf_dynptr_read(void *dst, u32 len, struct bpf_dynptr *src, u32 offset)
> + *	Description
> + *		Read *len* bytes from *src* into *dst*, starting from *offset*
> + *		into *src*.
> + *	Return
> + *		0 on success, -E2BIG if *offset* + *len* exceeds the length
> + *		of *src*'s data, -EINVAL if *src* is an invalid dynptr.
> + *
> + * long bpf_dynptr_write(struct bpf_dynptr *dst, u32 offset, void *src, u32 len)
> + *	Description
> + *		Write *len* bytes from *src* into *dst*, starting from *offset*
> + *		into *dst*.
> + *	Return
> + *		0 on success, -E2BIG if *offset* + *len* exceeds the length
> + *		of *dst*'s data, -EINVAL if *dst* is an invalid dynptr or if *dst*
> + *		is a read-only dynptr.
>   */
>  #define __BPF_FUNC_MAPPER(FN)		\
>  	FN(unspec),			\
> @@ -5411,6 +5428,8 @@ union bpf_attr {
>  	FN(ringbuf_reserve_dynptr),	\
>  	FN(ringbuf_submit_dynptr),	\
>  	FN(ringbuf_discard_dynptr),	\
> +	FN(dynptr_read),		\
> +	FN(dynptr_write),		\
>  	/* */
>  
>  /* integer value in 'imm' field of BPF_CALL instruction selects which helper
> diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
> index 2d6f2e28b580..7206b9e5322f 100644
> --- a/kernel/bpf/helpers.c
> +++ b/kernel/bpf/helpers.c
> @@ -1467,6 +1467,58 @@ const struct bpf_func_proto bpf_dynptr_put_proto = {
>  	.arg1_type	= ARG_PTR_TO_DYNPTR | DYNPTR_TYPE_MALLOC | OBJ_RELEASE,
>  };
>  
> +BPF_CALL_4(bpf_dynptr_read, void *, dst, u32, len, struct bpf_dynptr_kern *, src, u32, offset)
> +{
> +	int err;
> +
> +	if (!src->data)
> +		return -EINVAL;
> +
> +	err = bpf_dynptr_check_off_len(src, offset, len);
> +	if (err)
> +		return err;
> +
> +	memcpy(dst, src->data + src->offset + offset, len);
> +
> +	return 0;
> +}
> +
> +const struct bpf_func_proto bpf_dynptr_read_proto = {
> +	.func		= bpf_dynptr_read,
> +	.gpl_only	= false,
> +	.ret_type	= RET_INTEGER,
> +	.arg1_type	= ARG_PTR_TO_UNINIT_MEM,
> +	.arg2_type	= ARG_CONST_SIZE_OR_ZERO,
> +	.arg3_type	= ARG_PTR_TO_DYNPTR,
> +	.arg4_type	= ARG_ANYTHING,

I think what you have now is safe / correct, but is there a reason that we
don't use ARG_CONST_SIZE_OR_ZERO for both the len and the offset, given
that they're both bound by the size of a memory region? Same question
applies to the function proto for bpf_dynptr_write() as well.

> +};
> +
> +BPF_CALL_4(bpf_dynptr_write, struct bpf_dynptr_kern *, dst, u32, offset, void *, src, u32, len)
> +{
> +	int err;
> +
> +	if (!dst->data || bpf_dynptr_is_rdonly(dst))
> +		return -EINVAL;
> +
> +	err = bpf_dynptr_check_off_len(dst, offset, len);
> +	if (err)
> +		return err;
> +
> +	memcpy(dst->data + dst->offset + offset, src, len);
> +
> +	return 0;
> +}
> +
> +const struct bpf_func_proto bpf_dynptr_write_proto = {
> +	.func		= bpf_dynptr_write,
> +	.gpl_only	= false,
> +	.ret_type	= RET_INTEGER,
> +	.arg1_type	= ARG_PTR_TO_DYNPTR,
> +	.arg2_type	= ARG_ANYTHING,
> +	.arg3_type	= ARG_PTR_TO_MEM | MEM_RDONLY,
> +	.arg4_type	= ARG_CONST_SIZE_OR_ZERO,
> +};
> +

[...]

Overall looks great.


* Re: [PATCH bpf-next v4 5/6] bpf: Add dynptr data slices
  2022-05-13 21:37   ` Alexei Starovoitov
@ 2022-05-16 17:13     ` Joanne Koong
  0 siblings, 0 replies; 27+ messages in thread
From: Joanne Koong @ 2022-05-16 17:13 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: bpf, Andrii Nakryiko, Alexei Starovoitov, Daniel Borkmann

On Fri, May 13, 2022 at 2:37 PM Alexei Starovoitov
<alexei.starovoitov@gmail.com> wrote:
>
> On Mon, May 09, 2022 at 03:42:56PM -0700, Joanne Koong wrote:
> >       } else if (is_acquire_function(func_id, meta.map_ptr)) {
> > -             int id = acquire_reference_state(env, insn_idx);
> > +             int id = 0;
> > +
> > +             if (is_dynptr_ref_function(func_id)) {
> > +                     int i;
> > +
> > +                     /* Find the id of the dynptr we're acquiring a reference to */
> > +                     for (i = 0; i < MAX_BPF_FUNC_REG_ARGS; i++) {
> > +                             if (arg_type_is_dynptr(fn->arg_type[i])) {
> > +                                     if (id) {
> > +                                             verbose(env, "verifier internal error: more than one dynptr arg in a dynptr ref func\n");
> > +                                             return -EFAULT;
> > +                                     }
> > +                                     id = stack_slot_get_id(env, &regs[BPF_REG_1 + i]);
>
> I'm afraid this approach doesn't work.
> Consider:
>   struct bpf_dynptr ptr;
>   u32 *data1, *data2;
>
>   bpf_dynptr_alloc(8, 0, &ptr);
>   data1 = bpf_dynptr_data(&ptr, 0, 8);
>   data2 = bpf_dynptr_data(&ptr, 8, 8);
>   if (data1)
>      *data2 = 0; /* this will succeed, but shouldn't */
>
> The same 'id' is being reused for data1 and data2 to make sure
> that bpf_dynptr_put(&ptr); will clear data1/data2,
> but data1 and data2 will look the same in mark_ptr_or_null_reg().
>
> > +                             }
> > +                     }
> > +                     if (!id) {
> > +                             verbose(env, "verifier internal error: no dynptr args to a dynptr ref func\n");
> > +                             return -EFAULT;
> > +                     }
> > +             } else {
> > +                     id = acquire_reference_state(env, insn_idx);
> > +                     if (id < 0)
> > +                             return id;
> > +             }
> >
> > -             if (id < 0)
> > -                     return id;
> >               /* For mark_ptr_or_null_reg() */
> >               regs[BPF_REG_0].id = id;
> >               /* For release_reference() */
> > @@ -9810,7 +9864,8 @@ static void mark_ptr_or_null_regs(struct bpf_verifier_state *vstate, u32 regno,
> >       u32 id = regs[regno].id;
> >       int i;
> >
> > -     if (ref_obj_id && ref_obj_id == id && is_null)
> > +     if (ref_obj_id && ref_obj_id == id && is_null &&
> > +         !is_ref_obj_id_dynptr(state, id))
>
> This bit is avoiding doing release of dynptr's id,
> because id is shared between dynptr and slice's id.
>
> In this patch I'm not sure what is the purpose of bpf_dynptr_data()
> being an acquire function. data1 and data2 are not acquiring.
> They're not incrementing refcnt of dynptr.
>
> I think normal logic of check_helper_call() that does:
>         if (type_may_be_null(regs[BPF_REG_0].type))
>                 regs[BPF_REG_0].id = ++env->id_gen;
>
> should be preserved.
> It will give different id-s to data1 and data2 and the problem
> described earlier will not exist.
>
> The transfer of ref_obj_id from dynptr into data1 and data2 needs to happen,
> but this part:
>         u32 ref_obj_id = regs[regno].ref_obj_id;
>         u32 id = regs[regno].id;
>         int i;
>
>         if (ref_obj_id && ref_obj_id == id && is_null)
>                 /* regs[regno] is in the " == NULL" branch.
>                  * No one could have freed the reference state before
>                  * doing the NULL check.
>                  */
>                 WARN_ON_ONCE(release_reference_state(state, id));
>
> should be left alone.
> bpf_dynptr_put(&ptr); will release dynptr and will clear data1 and data2.
> if (!data1)
>    will not release dynptr, because data1->id != data1->ref_obj_id.
>
> In other words bpf_dynptr_data() should behave like is_ptr_cast_function().
> It should transfer ref_obj_id to R0, but should give a new R0->id.
> See big comment in bpf_verifier.h next to ref_obj_id.
Great, thanks for your feedback. I agree with everything you wrote. I
will make these changes for v5 and add your data1 data2 example as a
test case.


* Re: [PATCH bpf-next v4 4/6] bpf: Add bpf_dynptr_read and bpf_dynptr_write
  2022-05-16 16:56   ` David Vernet
@ 2022-05-16 17:23     ` Joanne Koong
  0 siblings, 0 replies; 27+ messages in thread
From: Joanne Koong @ 2022-05-16 17:23 UTC (permalink / raw)
  To: David Vernet; +Cc: bpf, Andrii Nakryiko, Alexei Starovoitov, Daniel Borkmann

On Mon, May 16, 2022 at 9:56 AM David Vernet <void@manifault.com> wrote:
>
> On Mon, May 09, 2022 at 03:42:55PM -0700, Joanne Koong wrote:
> > This patch adds two helper functions, bpf_dynptr_read and
> > bpf_dynptr_write:
> >
> > long bpf_dynptr_read(void *dst, u32 len, struct bpf_dynptr *src, u32 offset);
> >
> > long bpf_dynptr_write(struct bpf_dynptr *dst, u32 offset, void *src, u32 len);
> >
> > The dynptr passed into these functions must be valid dynptrs that have
> > been initialized.
> >
> > Signed-off-by: Joanne Koong <joannelkoong@gmail.com>
> > ---
> >  include/linux/bpf.h            | 16 ++++++++++
> >  include/uapi/linux/bpf.h       | 19 ++++++++++++
> >  kernel/bpf/helpers.c           | 56 ++++++++++++++++++++++++++++++++++
> >  tools/include/uapi/linux/bpf.h | 19 ++++++++++++
> >  4 files changed, 110 insertions(+)
> >
> > diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> > index 8fbe739b0dec..6f4fa0627620 100644
> > --- a/include/linux/bpf.h
> > +++ b/include/linux/bpf.h
> > @@ -2391,6 +2391,12 @@ enum bpf_dynptr_type {
> >  #define DYNPTR_SIZE_MASK     0xFFFFFF
> >  #define DYNPTR_TYPE_SHIFT    28
> >  #define DYNPTR_TYPE_MASK     0x7
> > +#define DYNPTR_RDONLY_BIT    BIT(31)
> > +
> > +static inline bool bpf_dynptr_is_rdonly(struct bpf_dynptr_kern *ptr)
> > +{
> > +     return ptr->size & DYNPTR_RDONLY_BIT;
> > +}
> >
> >  static inline enum bpf_dynptr_type bpf_dynptr_get_type(struct bpf_dynptr_kern *ptr)
> >  {
> > @@ -2412,6 +2418,16 @@ static inline int bpf_dynptr_check_size(u32 size)
> >       return size > DYNPTR_MAX_SIZE ? -E2BIG : 0;
> >  }
> >
> > +static inline int bpf_dynptr_check_off_len(struct bpf_dynptr_kern *ptr, u32 offset, u32 len)
> > +{
> > +     u32 size = bpf_dynptr_get_size(ptr);
> > +
> > +     if (len > size || offset > size - len)
> > +             return -E2BIG;
> > +
> > +     return 0;
> > +}
>
> Does this need to be in bpf.h? Or could it be brought into helpers.c as a
> static function? I don't think there's any harm in leaving it here, but at
> first glance it seems like a helper function that doesn't really need to be
> exported.
I will move this function back to helpers.c
>
> > +
> >  void bpf_dynptr_init(struct bpf_dynptr_kern *ptr, void *data, enum bpf_dynptr_type type,
> >                    u32 offset, u32 size);
> >
> > diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> > index 679f960d2514..f0c5ca220d8e 100644
> > --- a/include/uapi/linux/bpf.h
> > +++ b/include/uapi/linux/bpf.h
> > @@ -5209,6 +5209,23 @@ union bpf_attr {
> >   *           'bpf_ringbuf_discard'.
> >   *   Return
> >   *           Nothing. Always succeeds.
> > + *
> > + * long bpf_dynptr_read(void *dst, u32 len, struct bpf_dynptr *src, u32 offset)
> > + *   Description
> > + *           Read *len* bytes from *src* into *dst*, starting from *offset*
> > + *           into *src*.
> > + *   Return
> > + *           0 on success, -E2BIG if *offset* + *len* exceeds the length
> > + *           of *src*'s data, -EINVAL if *src* is an invalid dynptr.
> > + *
> > + * long bpf_dynptr_write(struct bpf_dynptr *dst, u32 offset, void *src, u32 len)
> > + *   Description
> > + *           Write *len* bytes from *src* into *dst*, starting from *offset*
> > + *           into *dst*.
> > + *   Return
> > + *           0 on success, -E2BIG if *offset* + *len* exceeds the length
> > + *           of *dst*'s data, -EINVAL if *dst* is an invalid dynptr or if *dst*
> > + *           is a read-only dynptr.
> >   */
> >  #define __BPF_FUNC_MAPPER(FN)                \
> >       FN(unspec),                     \
> > @@ -5411,6 +5428,8 @@ union bpf_attr {
> >       FN(ringbuf_reserve_dynptr),     \
> >       FN(ringbuf_submit_dynptr),      \
> >       FN(ringbuf_discard_dynptr),     \
> > +     FN(dynptr_read),                \
> > +     FN(dynptr_write),               \
> >       /* */
> >
> >  /* integer value in 'imm' field of BPF_CALL instruction selects which helper
> > diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
> > index 2d6f2e28b580..7206b9e5322f 100644
> > --- a/kernel/bpf/helpers.c
> > +++ b/kernel/bpf/helpers.c
> > @@ -1467,6 +1467,58 @@ const struct bpf_func_proto bpf_dynptr_put_proto = {
> >       .arg1_type      = ARG_PTR_TO_DYNPTR | DYNPTR_TYPE_MALLOC | OBJ_RELEASE,
> >  };
> >
> > +BPF_CALL_4(bpf_dynptr_read, void *, dst, u32, len, struct bpf_dynptr_kern *, src, u32, offset)
> > +{
> > +     int err;
> > +
> > +     if (!src->data)
> > +             return -EINVAL;
> > +
> > +     err = bpf_dynptr_check_off_len(src, offset, len);
> > +     if (err)
> > +             return err;
> > +
> > +     memcpy(dst, src->data + src->offset + offset, len);
> > +
> > +     return 0;
> > +}
> > +
> > +const struct bpf_func_proto bpf_dynptr_read_proto = {
> > +     .func           = bpf_dynptr_read,
> > +     .gpl_only       = false,
> > +     .ret_type       = RET_INTEGER,
> > +     .arg1_type      = ARG_PTR_TO_UNINIT_MEM,
> > +     .arg2_type      = ARG_CONST_SIZE_OR_ZERO,
> > +     .arg3_type      = ARG_PTR_TO_DYNPTR,
> > +     .arg4_type      = ARG_ANYTHING,
>
> I think what you have now is safe / correct, but is there a reason that we
> don't use ARG_CONST_SIZE_OR_ZERO for both the len and the offset, given
> that they're both bound by the size of a memory region? Same question
> applies to the function proto for bpf_dynptr_write() as well.
I think it offers more flexibility as an API if the offset doesn't
have to be statically known (e.g. the program can use an offset that is
set by the userspace application).
>
> > +};
> > +
> > +BPF_CALL_4(bpf_dynptr_write, struct bpf_dynptr_kern *, dst, u32, offset, void *, src, u32, len)
> > +{
> > +     int err;
> > +
> > +     if (!dst->data || bpf_dynptr_is_rdonly(dst))
> > +             return -EINVAL;
> > +
> > +     err = bpf_dynptr_check_off_len(dst, offset, len);
> > +     if (err)
> > +             return err;
> > +
> > +     memcpy(dst->data + dst->offset + offset, src, len);
> > +
> > +     return 0;
> > +}
> > +
> > +const struct bpf_func_proto bpf_dynptr_write_proto = {
> > +     .func           = bpf_dynptr_write,
> > +     .gpl_only       = false,
> > +     .ret_type       = RET_INTEGER,
> > +     .arg1_type      = ARG_PTR_TO_DYNPTR,
> > +     .arg2_type      = ARG_ANYTHING,
> > +     .arg3_type      = ARG_PTR_TO_MEM | MEM_RDONLY,
> > +     .arg4_type      = ARG_CONST_SIZE_OR_ZERO,
> > +};
> > +
>
> [...]
>
> Overall looks great.


* Re: [PATCH bpf-next v4 2/6] bpf: Add verifier support for dynptrs and implement malloc dynptrs
  2022-05-13 22:16             ` Alexei Starovoitov
@ 2022-05-16 20:29               ` Joanne Koong
  0 siblings, 0 replies; 27+ messages in thread
From: Joanne Koong @ 2022-05-16 20:29 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: Daniel Borkmann, bpf, Andrii Nakryiko, Alexei Starovoitov

On Fri, May 13, 2022 at 3:16 PM Alexei Starovoitov
<alexei.starovoitov@gmail.com> wrote:
>
> On Fri, May 13, 2022 at 09:28:03PM +0200, Daniel Borkmann wrote:
> > On 5/13/22 6:39 PM, Alexei Starovoitov wrote:
> > > On Fri, May 13, 2022 at 03:12:06PM +0200, Daniel Borkmann wrote:
> > > >
> > > > Thinking more about it, is there even any value for BPF_FUNC_dynptr_* for
> > > > fully unpriv BPF if these are rejected anyway by the spectre mitigations
> > > > from verifier?
> > > ...
> > > > So either for alloc, we always built-in __GFP_ZERO or bpf_dynptr_alloc()
> > > > helper usage should go under perfmon_capable() where it's allowed to read
> > > > kernel mem.
> > >
> > > dynptr should probably be cap_bpf and cap_perfmon for now.
> > > Otherwise we will start adding cap_perfmon checks in run-time to helpers
> > > which is not easy to do. Some sort of prog or user context would need
> > > to be passed as hidden arg into helper. That's too much hassle just
> > > to enable dynptr for cap_bpf only.
> > >
> > > Similar problem with gfp_account... remembering memcg and passing all
> > > the way to bpf_dynptr_alloc helper is not easy. And it's not clear
> > > which memcg to use. The one of the task that loaded that bpf prog?
> > > That task could have been gone and cgroup is in dying stage.
> > > bpf prog is executing some context and allocating memory for itself.
> > > Like kernel allocates memory for its needs. It doesn't feel right to
> > > charge prog's memcg in that case. It probably should be an explicit choice
> > > by bpf program author. Maybe in the future we can introduce a fake map
> > > for such accounting needs and bpf prog could pass a map pointer to
> > > bpf_dynptr_alloc. When such fake and empty map is created the memcg
> > > would be recorded the same way we do for existing normal maps.
> > > Then the helper will look like:
> > > bpf_dynptr_alloc(struct bpf_map *map, u32 size, u64 flags, struct bpf_dynptr *ptr)
> > > {
> > >    set_active_memcg(map->memcg);
> > >    kmalloc into dynptr;
> > > }
> > >
> > > Should we do this change now and allow NULL to be passed as a map ?
> >
> > Hm, this looks a bit too much like a hack, I wouldn't do that, fwiw.
> >
> > > This way the bpf prog will have a choice whether to account into the memcg or not.
> > > Maybe it's all overkill and none of this is needed?
> > >
> > > On the other hand, maybe the map should be a mandatory argument, and dynptr_alloc
> > > can do its own memory accounting for stats? Atomic inc and dec are probably
> > > an acceptable overhead, and bpftool could print the dynptr allocation stats.
> > > All of that sounds nice and the extra visibility is great, but kernel code that
> > > allocates for the kernel doesn't use memcg. bpf progs are semantically part of
> > > the kernel, whereas memcg is a mechanism to restrict memory that the kernel
> > > allocated on behalf of user tasks. We abused memcg for bpf progs/maps
> > > to have a limit. It's not clear whether we should continue doing so for dynptr_alloc
> > > and in the future for kptr_alloc. gfp_account adds overhead too. It's not free.
> > > Thoughts?
> >
> > Great question. I think memcg is useful; it's just that ownership for bpf
> > progs/maps has been relying on current, whereas current is not a real 'owner',
> > just the entity that did the loading.
> >
> > Maybe we need some sort of memcg object for bpf where we can "bind" the prog
> > and map to it at load time, which is then different from current and can be
> > flexibly set, e.g. fd = open(/sys/fs/cgroup/memory/<foo>) and pass that fd to
> > BPF_PROG_LOAD and BPF_MAP_CREATE via bpf_attr (otherwise, if not set, then
> > no accounting)?
>
> Agree. Explicitly specifying memcg by FD would be nice.
> It will be useful for normal maps and progs.
> This is a bit orthogonal to having a map argument to bpf_dynptr/kptr_alloc.
>
> Here is the main reason why we probably should make it mandatory:
> kmalloc cannot be called from NMI context and in general cannot be called from tracing.
> A kprobe/fentry could be inside the slab or page alloc path, and it might blow up.
> That's the reason why hashmap defaults to pre-alloc.
> In order to do pre-alloc in bpf_dynptr/kptr_alloc(), it has to have a map-like
> argument that keeps the info about the preallocated memory.
>
> How about the following api:
> mem = bpf_map_create(BPF_MAP_TYPE_MEMORY); // from user space
> bpf_mem_prealloc(mem, size); // preallocate memory. from sleepable or irqwork
> bpf_dynptr_alloc(mem, size, flags, &dynptr); // non-sleepable
> // returns 'size' bytes if they were available in preallocated memory
>
> Right now bpf maps are either full prealloc or full kmalloc.
> This approach will be a hybrid.
> The bpf progs will be using it roughly like this:
>
> // init from user space
> mem = bpf_map_create(BPF_MAP_TYPE_MEMORY);
> sys_bpf(mem_prealloc, 1Mbyte); // prealloc largest possible single dynptr_alloc
>
> // from bpf prog
> bpf_dynptr_alloc(mem, size, flags, &dynptr); // if (size < 1M) all good
> bpf_irq_work_queue(replenish_prealloc, size); // refill mem's prealloc
>
> void replenish_prealloc(sz) { bpf_mem_prealloc(mem, sz); }
>
> bpf_dynptr_alloc would need to implement a memory allocator out of the
> reserved memory. We can probably reuse some of the sl[oua]b code.
> slob_alloc may fit best (without dynamic slob_new_pages).
> Song's pack_alloc is probably good enough to start.
>
> The gfp_account flag moves into the bpf_mem_prealloc() helper;
> it doesn't make sense in bpf_dynptr_alloc.
> gfp_zero, on the other hand, makes sense only in bpf_dynptr_alloc.
>
> Thoughts?

Do you envision this also being used to account for kfunc memory
allocations?

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH bpf-next v4 2/6] bpf: Add verifier support for dynptrs and implement malloc dynptrs
  2022-05-12  0:05   ` Daniel Borkmann
  2022-05-12 20:03     ` Joanne Koong
@ 2022-05-16 20:52     ` Joanne Koong
  1 sibling, 0 replies; 27+ messages in thread
From: Joanne Koong @ 2022-05-16 20:52 UTC (permalink / raw)
  To: Daniel Borkmann; +Cc: bpf, Andrii Nakryiko, Alexei Starovoitov

On Wed, May 11, 2022 at 5:05 PM Daniel Borkmann <daniel@iogearbox.net> wrote:
>
> On 5/10/22 12:42 AM, Joanne Koong wrote:
> [...]
> > @@ -6498,6 +6523,11 @@ struct bpf_timer {
> >       __u64 :64;
> >   } __attribute__((aligned(8)));
> >
> > +struct bpf_dynptr {
> > +     __u64 :64;
> > +     __u64 :64;
> > +} __attribute__((aligned(8)));
> > +
> >   struct bpf_sysctl {
> >       __u32   write;          /* Sysctl is being read (= 0) or written (= 1).
> >                                * Allows 1,2,4-byte read, but no write.
> > diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
> > index 8a2398ac14c2..a4272e9239ea 100644
> > --- a/kernel/bpf/helpers.c
> > +++ b/kernel/bpf/helpers.c
> > @@ -1396,6 +1396,77 @@ const struct bpf_func_proto bpf_kptr_xchg_proto = {
> >       .arg2_btf_id  = BPF_PTR_POISON,
> >   };
> >
> > +void bpf_dynptr_init(struct bpf_dynptr_kern *ptr, void *data, enum bpf_dynptr_type type,
> > +                  u32 offset, u32 size)
> > +{
> > +     ptr->data = data;
> > +     ptr->offset = offset;
> > +     ptr->size = size;
> > +     bpf_dynptr_set_type(ptr, type);
> > +}
> > +
> > +void bpf_dynptr_set_null(struct bpf_dynptr_kern *ptr)
> > +{
> > +     memset(ptr, 0, sizeof(*ptr));
> > +}
> > +
> > +BPF_CALL_3(bpf_dynptr_alloc, u32, size, u64, flags, struct bpf_dynptr_kern *, ptr)
> > +{
> > +     gfp_t gfp_flags = GFP_ATOMIC;
>
> nit: should also have __GFP_NOWARN
>
> I presume mem accounting cannot be done on this one given there is no real "ownership"
> of this piece of mem?

While we figure out the details of memory accounting for allocations,
I will defer the malloc parts of this patchset to the 2nd dynptr
patchset. I will resubmit v5 without malloc-type dynptrs.
>
> Was planning to run some more local tests tomorrow, but from a glance at the
> selftest side I haven't seen sanity checks like these:
>
> bpf_dynptr_alloc(8, 0, &ptr);
> data = bpf_dynptr_data(&ptr, 0, 0);
> bpf_dynptr_put(&ptr);
> *(__u8 *)data = 23;
>
> How is this prevented? I think you do a ptr id check in the is_dynptr_ref_function
> check on the acquire function, but with the above use, would our data pointer
> escape, or get invalidated via the last put?
>
> Thanks,
> Daniel

^ permalink raw reply	[flat|nested] 27+ messages in thread

end of thread, other threads:[~2022-05-16 21:09 UTC | newest]

Thread overview: 27+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-05-09 22:42 [PATCH bpf-next v4 0/6] Dynamic pointers Joanne Koong
2022-05-09 22:42 ` [PATCH bpf-next v4 1/6] bpf: Add MEM_UNINIT as a bpf_type_flag Joanne Koong
2022-05-13 14:11   ` David Vernet
2022-05-09 22:42 ` [PATCH bpf-next v4 2/6] bpf: Add verifier support for dynptrs and implement malloc dynptrs Joanne Koong
2022-05-12  0:05   ` Daniel Borkmann
2022-05-12 20:03     ` Joanne Koong
2022-05-13 13:12       ` Daniel Borkmann
2022-05-13 16:39         ` Alexei Starovoitov
2022-05-13 19:28           ` Daniel Borkmann
2022-05-13 21:04             ` Andrii Nakryiko
2022-05-13 22:16             ` Alexei Starovoitov
2022-05-16 20:29               ` Joanne Koong
2022-05-16 20:52     ` Joanne Koong
2022-05-13 20:59   ` Andrii Nakryiko
2022-05-13 21:36   ` David Vernet
2022-05-09 22:42 ` [PATCH bpf-next v4 3/6] bpf: Dynptr support for ring buffers Joanne Koong
2022-05-13 21:02   ` Andrii Nakryiko
2022-05-16 16:09   ` David Vernet
2022-05-09 22:42 ` [PATCH bpf-next v4 4/6] bpf: Add bpf_dynptr_read and bpf_dynptr_write Joanne Koong
2022-05-13 21:06   ` Andrii Nakryiko
2022-05-16 16:56   ` David Vernet
2022-05-16 17:23     ` Joanne Koong
2022-05-09 22:42 ` [PATCH bpf-next v4 5/6] bpf: Add dynptr data slices Joanne Koong
2022-05-13 21:11   ` Andrii Nakryiko
2022-05-13 21:37   ` Alexei Starovoitov
2022-05-16 17:13     ` Joanne Koong
2022-05-09 22:42 ` [PATCH bpf-next v4 6/6] bpf: Dynptr tests Joanne Koong
