bpf.vger.kernel.org archive mirror
From: Alexei Starovoitov <ast@fb.com>
To: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Cc: Dave Marchevsky <davemarchevsky@fb.com>,
	bpf@vger.kernel.org, Alexei Starovoitov <ast@kernel.org>,
	Daniel Borkmann <daniel@iogearbox.net>,
	Andrii Nakryiko <andrii@kernel.org>,
	Kernel Team <kernel-team@fb.com>, Tejun Heo <tj@kernel.org>,
	"toke@redhat.com" <toke@redhat.com>
Subject: Re: [RFC PATCH bpf-next 05/11] bpf: Add bpf_spin_lock member to rbtree
Date: Tue, 2 Aug 2022 08:30:46 -0700	[thread overview]
Message-ID: <f49fdce5-5098-878d-092f-cc55ce28b644@fb.com> (raw)
In-Reply-To: <CAP01T74O+CS0VCXq9U7wyPvxTDdBr7ev0Oo-79ZcnkH6hagMcA@mail.gmail.com>

On 8/2/22 6:59 AM, Kumar Kartikeya Dwivedi wrote:
> On Tue, 2 Aug 2022 at 00:23, Alexei Starovoitov <ast@fb.com> wrote:
>>
>> On 7/22/22 11:34 AM, Dave Marchevsky wrote:
>>> This patch adds a struct bpf_spin_lock *lock member to bpf_rbtree, as
>>> well as a bpf_rbtree_get_lock helper which allows bpf programs to access
>>> the lock.
>>>
>>> Ideally the bpf_spin_lock would be created independently outside of the
>>> tree and associated with it before the tree is used, either as part of
>>> map definition or via some call like rbtree_init(&rbtree, &lock). Doing
>>> this in an ergonomic way is proving harder than expected, so for now use
>>> this workaround.
>>>
>>> Why is creating the bpf_spin_lock independently and associating it with
>>> the tree preferable? Because we want to be able to transfer nodes
>>> between trees atomically, and for this to work we need the same
>>> lock associated with both trees.
>>
>> Right. We need one lock to protect multiple rbtrees.
>> Since add/find/remove helpers will look into rbtree->lock
>> the two different rbtrees (== maps) have to point to the same lock.
>> Other than rbtree_init(&rbtree, &lock) what would be an alternative ?
>>
> 
> Instead of dynamically associating locks with the rbtree map, why not
> have the lock embedded in the rbtree map, and construct a global
> locking order among rbtree maps at prog load time that the verifier
> maintains?
> 
> Prog A always takes rb_mapA.lock, then rb_mapB.lock. If we see Prog B
> being loaded and it takes them in the opposite order, we reject the
> load because it can lead to an ABBA deadlock. We also know the map
> pointer statically, so we can ensure rb_mapA.lock cannot be taken
> recursively.
> 
> Every time a prog is loaded, it is validated against this single list
> of lock orders among maps. Some of them may not have interdependencies
> at all. There is a single total order, and any cycle leads to
> verification failure. This might also work out for normal
> bpf_spin_locks, allowing us to take more than one at a time.
> 
> Then you can do moves atomically across two maps, by acquiring the
> locks for both. Maybe we limit this to two locks for now only. There
> could also be a C++ style std::lock{lock1, lock2} helper that takes
> multiple locks to acquire in order, if you want to prevent anything
> from happening between those two calls; just an idea.

Consider the case of N rbtrees. One for each cpu.
In run-time the bpf prog might grab lock of X's rbtree and
then decide to transfer the node to a higher or lower cpu.
The order cannot be determined statically by the verifier.
The simple fix for this would be to share one lock across all rbtrees.
Such a 'global' lock won't necessarily scale; it's just an example
of where sharing the lock is useful.

> Probably need to make the rb_node add/find helpers notrace so that the
> bpf program does not take the lock again recursively when invoked from
> the helper.
> 
> Then you can embed the locked state statically in the map pointer reg,
> and you don't need to dynamically check whether the map is locked. It
> will be passed into the helpers, and from there a member offset can be
> fixed which indicates whether the map is 'locked'.

Considered that. So far it doesn't look like we can avoid run-time
checks. And if we do run-time checks, there is no need to complicate
the verifier.

> If you have already explored this, then please share what the
> problems/limitations were that you ran into, or if this is impossible.
> It does sound too good to be true, so maybe I missed something
> important here.
> 
> I was prototyping something similar for the pifomap in the XDP
> queueing series, where we were exploring exposing the underlying
> locking of the map to the user (to be able to batch insertions to the
> map). The major concern was leaking too many implementation details to
> the UAPI and locking (pun intended) ourselves out of improvements to
> the map implementation later, so we held off on that for now (and also
> wanted to evaluate other alternatives before doubling down on it).

That's probably not a concern. The spin_lock concept is natural.
Its implementation could be different, but we already hide it in
bpf_spin_lock.

As far as locking...
Beyond rbtree we will likely add linked lists. They would need to be
protected by locks as well.
The same bpf struct element might have two fields: bpf_rb_node and
bpf_link_list_node. It could be part of one rbtree while at the same
time being linked into some other list.
Sharing one lock between rbtree operations and linked list
manipulations would be necessary.

In the past we hid locks inside maps. Exposing 'bpf_lock' as a
standalone object and associating the lock with rbtrees, lists, etc.
would allow for greater flexibility and let bpf progs create
more advanced data structures.

Thread overview: 37+ messages
2022-07-22 18:34 [RFC PATCH bpf-next 00/11] bpf: Introduce rbtree map Dave Marchevsky
2022-07-22 18:34 ` [RFC PATCH bpf-next 01/11] bpf: Pull repeated reg access bounds check into helper fn Dave Marchevsky
2022-07-22 18:34 ` [RFC PATCH bpf-next 02/11] bpf: Add verifier support for custom callback return range Dave Marchevsky
2022-07-22 18:34 ` [RFC PATCH bpf-next 03/11] bpf: Add rb_node_off to bpf_map Dave Marchevsky
2022-08-01 22:19   ` Alexei Starovoitov
2022-07-22 18:34 ` [RFC PATCH bpf-next 04/11] bpf: Add rbtree map Dave Marchevsky
2022-08-01 21:49   ` Alexei Starovoitov
2022-07-22 18:34 ` [RFC PATCH bpf-next 05/11] bpf: Add bpf_spin_lock member to rbtree Dave Marchevsky
2022-08-01 22:17   ` Alexei Starovoitov
2022-08-02 13:59     ` Kumar Kartikeya Dwivedi
2022-08-02 15:30       ` Alexei Starovoitov [this message]
2022-08-10 21:46     ` Kumar Kartikeya Dwivedi
2022-08-10 22:06       ` Alexei Starovoitov
2022-08-10 23:16         ` Kumar Kartikeya Dwivedi
2022-08-15  5:33       ` Yonghong Song
2022-08-15  5:37         ` Kumar Kartikeya Dwivedi
2022-07-22 18:34 ` [RFC PATCH bpf-next 06/11] bpf: Add bpf_rbtree_{lock,unlock} helpers Dave Marchevsky
2022-08-01 21:58   ` Alexei Starovoitov
2022-07-22 18:34 ` [RFC PATCH bpf-next 07/11] bpf: Enforce spinlock hold for bpf_rbtree_{add,remove,find} Dave Marchevsky
2022-07-22 18:34 ` [RFC PATCH bpf-next 08/11] bpf: Add OBJ_NON_OWNING_REF type flag Dave Marchevsky
2022-08-01 22:41   ` Alexei Starovoitov
2022-07-22 18:34 ` [RFC PATCH bpf-next 09/11] bpf: Add CONDITIONAL_RELEASE " Dave Marchevsky
2022-08-01 22:23   ` Alexei Starovoitov
2022-07-22 18:34 ` [RFC PATCH bpf-next 10/11] bpf: Introduce PTR_ITER and PTR_ITER_END type flags Dave Marchevsky
2022-07-29 16:31   ` Tejun Heo
2022-08-01 22:44   ` Alexei Starovoitov
2022-08-02 13:05     ` Kumar Kartikeya Dwivedi
2022-08-02 15:10       ` Alexei Starovoitov
2022-08-10 17:56     ` Dave Marchevsky
2022-07-22 18:34 ` [RFC PATCH bpf-next 11/11] selftests/bpf: Add rbtree map tests Dave Marchevsky
2022-07-28  7:18   ` Yonghong Song
2022-08-10 17:48     ` Dave Marchevsky
2022-07-28  7:04 ` [RFC PATCH bpf-next 00/11] bpf: Introduce rbtree map Yonghong Song
2022-08-10 17:54   ` Dave Marchevsky
2022-08-01 21:27 ` Alexei Starovoitov
2022-08-10 18:11   ` Dave Marchevsky
2022-08-02 22:02 ` Andrii Nakryiko
