From: Alexei Starovoitov <ast@fb.com>
To: Dave Marchevsky <davemarchevsky@fb.com>, bpf@vger.kernel.org
Cc: Alexei Starovoitov <ast@kernel.org>,
Daniel Borkmann <daniel@iogearbox.net>,
Andrii Nakryiko <andrii@kernel.org>,
Kernel Team <kernel-team@fb.com>, Tejun Heo <tj@kernel.org>
Subject: Re: [RFC PATCH bpf-next 05/11] bpf: Add bpf_spin_lock member to rbtree
Date: Mon, 1 Aug 2022 15:17:29 -0700 [thread overview]
Message-ID: <61209d3a-bc15-e4f2-9079-7bdcfdd13cd0@fb.com> (raw)
In-Reply-To: <20220722183438.3319790-6-davemarchevsky@fb.com>
On 7/22/22 11:34 AM, Dave Marchevsky wrote:
> This patch adds a struct bpf_spin_lock *lock member to bpf_rbtree, as
> well as a bpf_rbtree_get_lock helper which allows bpf programs to access
> the lock.
>
> Ideally the bpf_spin_lock would be created independently outside of the
> tree and associated with it before the tree is used, either as part of
> map definition or via some call like rbtree_init(&rbtree, &lock). Doing
> this in an ergonomic way is proving harder than expected, so for now use
> this workaround.
>
> Why is creating the bpf_spin_lock independently and associating it with
> the tree preferable? Because we want to be able to transfer nodes
> between trees atomically, and for this to work need same lock associated
> with 2 trees.
Right. We need one lock to protect multiple rbtrees.
Since the add/find/remove helpers will look at rbtree->lock,
the two different rbtrees (== maps) have to point to the same lock.
Other than rbtree_init(&rbtree, &lock), what would be an alternative?
>
> Further locking-related patches will make it possible for the lock to be
> used in BPF programs and add code which enforces that the lock is held
> when doing any operation on the tree.
>
> Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com>
> ---
> include/uapi/linux/bpf.h | 7 +++++++
> kernel/bpf/helpers.c | 3 +++
> kernel/bpf/rbtree.c | 24 ++++++++++++++++++++++++
> tools/include/uapi/linux/bpf.h | 7 +++++++
> 4 files changed, 41 insertions(+)
>
> diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> index 4688ce88caf4..c677d92de3bc 100644
> --- a/include/uapi/linux/bpf.h
> +++ b/include/uapi/linux/bpf.h
> @@ -5385,6 +5385,12 @@ union bpf_attr {
> * Return
> * 0
> *
> + * void *bpf_rbtree_get_lock(struct bpf_map *map)
> + * Description
> + * Return the bpf_spin_lock associated with the rbtree
> + *
> + * Return
> + * Ptr to lock
> */
> #define __BPF_FUNC_MAPPER(FN) \
> FN(unspec), \
> @@ -5600,6 +5606,7 @@ union bpf_attr {
> FN(rbtree_find), \
> FN(rbtree_remove), \
> FN(rbtree_free_node), \
> + FN(rbtree_get_lock), \
> /* */
>
> /* integer value in 'imm' field of BPF_CALL instruction selects which helper
> diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
> index 35eb66d11bf6..257a808bb767 100644
> --- a/kernel/bpf/helpers.c
> +++ b/kernel/bpf/helpers.c
> @@ -1587,6 +1587,7 @@ const struct bpf_func_proto bpf_rbtree_add_proto __weak;
> const struct bpf_func_proto bpf_rbtree_find_proto __weak;
> const struct bpf_func_proto bpf_rbtree_remove_proto __weak;
> const struct bpf_func_proto bpf_rbtree_free_node_proto __weak;
> +const struct bpf_func_proto bpf_rbtree_get_lock_proto __weak;
>
> const struct bpf_func_proto *
> bpf_base_func_proto(enum bpf_func_id func_id)
> @@ -1686,6 +1687,8 @@ bpf_base_func_proto(enum bpf_func_id func_id)
> return &bpf_rbtree_remove_proto;
> case BPF_FUNC_rbtree_free_node:
> return &bpf_rbtree_free_node_proto;
> + case BPF_FUNC_rbtree_get_lock:
> + return &bpf_rbtree_get_lock_proto;
> default:
> break;
> }
> diff --git a/kernel/bpf/rbtree.c b/kernel/bpf/rbtree.c
> index 250d62210804..c6f0a2a083f6 100644
> --- a/kernel/bpf/rbtree.c
> +++ b/kernel/bpf/rbtree.c
> @@ -9,6 +9,7 @@
> struct bpf_rbtree {
> struct bpf_map map;
> struct rb_root_cached root;
> + struct bpf_spin_lock *lock;
> };
>
> BTF_ID_LIST_SINGLE(bpf_rbtree_btf_ids, struct, rb_node);
> @@ -39,6 +40,14 @@ static struct bpf_map *rbtree_map_alloc(union bpf_attr *attr)
>
> tree->root = RB_ROOT_CACHED;
> bpf_map_init_from_attr(&tree->map, attr);
> +
> + tree->lock = bpf_map_kzalloc(&tree->map, sizeof(struct bpf_spin_lock),
> + GFP_KERNEL | __GFP_NOWARN);
> + if (!tree->lock) {
> + bpf_map_area_free(tree);
> + return ERR_PTR(-ENOMEM);
> + }
> +
> return &tree->map;
> }
>
> @@ -139,6 +148,7 @@ static void rbtree_map_free(struct bpf_map *map)
>
> bpf_rbtree_postorder_for_each_entry_safe(pos, n, &tree->root.rb_root)
> kfree(pos);
> + kfree(tree->lock);
> bpf_map_area_free(tree);
> }
>
> @@ -238,6 +248,20 @@ static int rbtree_map_get_next_key(struct bpf_map *map, void *key,
> return -ENOTSUPP;
> }
>
> +BPF_CALL_1(bpf_rbtree_get_lock, struct bpf_map *, map)
> +{
> + struct bpf_rbtree *tree = container_of(map, struct bpf_rbtree, map);
> +
> + return (u64)tree->lock;
> +}
> +
> +const struct bpf_func_proto bpf_rbtree_get_lock_proto = {
> + .func = bpf_rbtree_get_lock,
> + .gpl_only = true,
> + .ret_type = RET_PTR_TO_MAP_VALUE,
This hack and
+const struct bpf_func_proto bpf_rbtree_unlock_proto = {
+ .func = bpf_rbtree_unlock,
+ .gpl_only = true,
+ .ret_type = RET_INTEGER,
+ .arg1_type = ARG_PTR_TO_SPIN_LOCK,
may be too much arm twisting to reuse bpf_spin_lock.
Maybe return a ptr_to_btf_id here, and have bpf_rbtree_lock
match that type?
It could be a new 'struct bpf_lock' in the kernel's BTF.
Let's figure out how to associate locks with rbtrees.
Reiterating my proposal made earlier in the context
of Delyan's irq_work, but for a different type, how about:
struct bpf_lock *l;
l = bpf_mem_alloc(allocator, bpf_core_type_id_kernel(*l));
That would allocate a ptr_to_btf_id object from the kernel's BTF.
The bpf_lock would have a constructor and destructor provided
by the kernel code:
the constructor sets bpf_lock's refcnt to 1,
then each bpf_rbtree_init(&map, lock) bumps the refcnt,
and the dtor eventually frees it when all rbtrees are freed.
That would be similar to kptr's logic, with kptr_get and dtors
associated with the kernel's btf_ids.
Thread overview: 37+ messages
2022-07-22 18:34 [RFC PATCH bpf-next 00/11] bpf: Introduce rbtree map Dave Marchevsky
2022-07-22 18:34 ` [RFC PATCH bpf-next 01/11] bpf: Pull repeated reg access bounds check into helper fn Dave Marchevsky
2022-07-22 18:34 ` [RFC PATCH bpf-next 02/11] bpf: Add verifier support for custom callback return range Dave Marchevsky
2022-07-22 18:34 ` [RFC PATCH bpf-next 03/11] bpf: Add rb_node_off to bpf_map Dave Marchevsky
2022-08-01 22:19 ` Alexei Starovoitov
2022-07-22 18:34 ` [RFC PATCH bpf-next 04/11] bpf: Add rbtree map Dave Marchevsky
2022-08-01 21:49 ` Alexei Starovoitov
2022-07-22 18:34 ` [RFC PATCH bpf-next 05/11] bpf: Add bpf_spin_lock member to rbtree Dave Marchevsky
2022-08-01 22:17 ` Alexei Starovoitov [this message]
2022-08-02 13:59 ` Kumar Kartikeya Dwivedi
2022-08-02 15:30 ` Alexei Starovoitov
2022-08-10 21:46 ` Kumar Kartikeya Dwivedi
2022-08-10 22:06 ` Alexei Starovoitov
2022-08-10 23:16 ` Kumar Kartikeya Dwivedi
2022-08-15 5:33 ` Yonghong Song
2022-08-15 5:37 ` Kumar Kartikeya Dwivedi
2022-07-22 18:34 ` [RFC PATCH bpf-next 06/11] bpf: Add bpf_rbtree_{lock,unlock} helpers Dave Marchevsky
2022-08-01 21:58 ` Alexei Starovoitov
2022-07-22 18:34 ` [RFC PATCH bpf-next 07/11] bpf: Enforce spinlock hold for bpf_rbtree_{add,remove,find} Dave Marchevsky
2022-07-22 18:34 ` [RFC PATCH bpf-next 08/11] bpf: Add OBJ_NON_OWNING_REF type flag Dave Marchevsky
2022-08-01 22:41 ` Alexei Starovoitov
2022-07-22 18:34 ` [RFC PATCH bpf-next 09/11] bpf: Add CONDITIONAL_RELEASE " Dave Marchevsky
2022-08-01 22:23 ` Alexei Starovoitov
2022-07-22 18:34 ` [RFC PATCH bpf-next 10/11] bpf: Introduce PTR_ITER and PTR_ITER_END type flags Dave Marchevsky
2022-07-29 16:31 ` Tejun Heo
2022-08-01 22:44 ` Alexei Starovoitov
2022-08-02 13:05 ` Kumar Kartikeya Dwivedi
2022-08-02 15:10 ` Alexei Starovoitov
2022-08-10 17:56 ` Dave Marchevsky
2022-07-22 18:34 ` [RFC PATCH bpf-next 11/11] selftests/bpf: Add rbtree map tests Dave Marchevsky
2022-07-28 7:18 ` Yonghong Song
2022-08-10 17:48 ` Dave Marchevsky
2022-07-28 7:04 ` [RFC PATCH bpf-next 00/11] bpf: Introduce rbtree map Yonghong Song
2022-08-10 17:54 ` Dave Marchevsky
2022-08-01 21:27 ` Alexei Starovoitov
2022-08-10 18:11 ` Dave Marchevsky
2022-08-02 22:02 ` Andrii Nakryiko