bpf.vger.kernel.org archive mirror
* [PATCH bpf 0/2] bpf: do not use bucket_lock for hashmap iterator
@ 2020-09-02 23:53 Yonghong Song
  2020-09-02 23:53 ` [PATCH bpf 1/2] " Yonghong Song
                   ` (2 more replies)
  0 siblings, 3 replies; 7+ messages in thread
From: Yonghong Song @ 2020-09-02 23:53 UTC (permalink / raw)
  To: bpf, Lorenz Bauer, Martin KaFai Lau, netdev
  Cc: Alexei Starovoitov, Daniel Borkmann, kernel-team

Currently, the bpf hashmap iterator takes a bucket_lock, a spin_lock,
before visiting each element in the bucket. This will cause a deadlock
if the iterator program does a map update/delete on an element that
falls into the same bucket of the map being visited.

To avoid the deadlock, let us just use rcu_read_lock instead of the
bucket_lock. This may result in visiting stale elements, missing some
elements, or repeating some elements if a concurrent map delete/update
happens on the same map. I think using rcu_read_lock is a reasonable
compromise. Users who care about stale/missing/repeated elements can use
the bpf map batch access syscall interface instead.
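
For reference, a rough user-space sketch of draining a hashmap via the
batch interface could look like the following. The key/value types, the
batch size, and the dump_map_batched() helper here are made up for
illustration; only bpf_map_lookup_batch() itself is the actual libbpf API:

 #include <errno.h>
 #include <stdbool.h>
 #include <bpf/bpf.h>

 /* drain a <__u32 key, __u64 value> hashmap in chunks of up to 64 elements */
 static int dump_map_batched(int map_fd)
 {
         __u32 batch, count, keys[64];
         __u64 vals[64];
         bool first = true;
         int err;

         do {
                 count = 64;
                 err = bpf_map_lookup_batch(map_fd, first ? NULL : &batch,
                                            &batch, keys, vals, &count, NULL);
                 if (err && errno != ENOENT)
                         return -errno;
                 first = false;
                 /* consume 'count' <key, value> pairs from keys[]/vals[] */
         } while (!err);

         return 0;
 }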

Note that another approach would be to check, during the bpf_iter link
stage, whether the iter program might update/delete elements of the
visited map and, if so, reject the link_create. For that approach, the
verifier would need to record whether an update/delete operation happens
for each map. I just feel such checking is too specialized, hence I still
prefer the rcu_read_lock approach.

Patch #1 contains the kernel implementation and Patch #2 adds a selftest
which can trigger the deadlock without Patch #1.

Yonghong Song (2):
  bpf: do not use bucket_lock for hashmap iterator
  selftests/bpf: add bpf_{update,delete}_map_elem in hashmap iter
    program

 kernel/bpf/hashtab.c                              | 15 ++++-----------
 .../selftests/bpf/progs/bpf_iter_bpf_hash_map.c   | 15 +++++++++++++++
 2 files changed, 19 insertions(+), 11 deletions(-)

-- 
2.24.1



* [PATCH bpf 1/2] bpf: do not use bucket_lock for hashmap iterator
  2020-09-02 23:53 [PATCH bpf 0/2] bpf: do not use bucket_lock for hashmap iterator Yonghong Song
@ 2020-09-02 23:53 ` Yonghong Song
  2020-09-03  1:25   ` Andrii Nakryiko
  2020-09-02 23:53 ` [PATCH bpf 2/2] selftests/bpf: add bpf_{update,delete}_map_elem in hashmap iter program Yonghong Song
  2020-09-04  0:44 ` [PATCH bpf 0/2] bpf: do not use bucket_lock for hashmap iterator Alexei Starovoitov
  2 siblings, 1 reply; 7+ messages in thread
From: Yonghong Song @ 2020-09-02 23:53 UTC (permalink / raw)
  To: bpf, Lorenz Bauer, Martin KaFai Lau, netdev
  Cc: Alexei Starovoitov, Daniel Borkmann, kernel-team

Currently, for a hashmap, the bpf iterator will grab a bucket lock, a
spinlock, before traversing the elements in the bucket. This can ensure
all bpf visited elements are valid. But this mechanism may cause a
deadlock if an update/deletion happens to the same bucket of the
visited map in the program. For example, if we add a bpf_map_update_elem()
call on the visited element in the selftest bpf_iter_bpf_hash_map.c,
we will have the following deadlock:

  ============================================
  WARNING: possible recursive locking detected
  5.9.0-rc1+ #841 Not tainted
  --------------------------------------------
  test_progs/1750 is trying to acquire lock:
  ffff9a5bb73c5e70 (&htab->buckets[i].raw_lock){....}-{2:2}, at: htab_map_update_elem+0x1cf/0x410

  but task is already holding lock:
  ffff9a5bb73c5e20 (&htab->buckets[i].raw_lock){....}-{2:2}, at: bpf_hash_map_seq_find_next+0x94/0x120

  other info that might help us debug this:
   Possible unsafe locking scenario:

         CPU0
         ----
    lock(&htab->buckets[i].raw_lock);
    lock(&htab->buckets[i].raw_lock);

   *** DEADLOCK ***
   ...
  Call Trace:
   dump_stack+0x78/0xa0
   __lock_acquire.cold.74+0x209/0x2e3
   lock_acquire+0xba/0x380
   ? htab_map_update_elem+0x1cf/0x410
   ? __lock_acquire+0x639/0x20c0
   _raw_spin_lock_irqsave+0x3b/0x80
   ? htab_map_update_elem+0x1cf/0x410
   htab_map_update_elem+0x1cf/0x410
   ? lock_acquire+0xba/0x380
   bpf_prog_ad6dab10433b135d_dump_bpf_hash_map+0x88/0xa9c
   ? find_held_lock+0x34/0xa0
   bpf_iter_run_prog+0x81/0x16e
   __bpf_hash_map_seq_show+0x145/0x180
   bpf_seq_read+0xff/0x3d0
   vfs_read+0xad/0x1c0
   ksys_read+0x5f/0xe0
   do_syscall_64+0x33/0x40
   entry_SYSCALL_64_after_hwframe+0x44/0xa9
  ...

The bucket_lock is first grabbed in seq_ops->next(), called by bpf_seq_read(),
and then grabbed again in htab_map_update_elem() in the bpf program, causing
the deadlock.

Actually, we do not need the bucket_lock here; we can just use rcu_read_lock(),
similar to the netlink iterator, where rcu_read_{lock,unlock} is used as below:
 seq_ops->start():
     rcu_read_lock();
 seq_ops->next():
     rcu_read_unlock();
     /* next element */
     rcu_read_lock();
 seq_ops->stop():
     rcu_read_unlock();

Compared to the old bucket_lock mechanism, if a concurrent update/delete happens,
we may visit stale elements, miss some elements, or repeat some elements.
I think this is a reasonable compromise. For users wanting to avoid
stale, missing/repeated accesses, bpf_map batch access syscall interface
can be used.

Signed-off-by: Yonghong Song <yhs@fb.com>
---
 kernel/bpf/hashtab.c | 15 ++++-----------
 1 file changed, 4 insertions(+), 11 deletions(-)

diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
index 78dfff6a501b..7df28a45c66b 100644
--- a/kernel/bpf/hashtab.c
+++ b/kernel/bpf/hashtab.c
@@ -1622,7 +1622,6 @@ struct bpf_iter_seq_hash_map_info {
 	struct bpf_map *map;
 	struct bpf_htab *htab;
 	void *percpu_value_buf; // non-zero means percpu hash
-	unsigned long flags;
 	u32 bucket_id;
 	u32 skip_elems;
 };
@@ -1632,7 +1631,6 @@ bpf_hash_map_seq_find_next(struct bpf_iter_seq_hash_map_info *info,
 			   struct htab_elem *prev_elem)
 {
 	const struct bpf_htab *htab = info->htab;
-	unsigned long flags = info->flags;
 	u32 skip_elems = info->skip_elems;
 	u32 bucket_id = info->bucket_id;
 	struct hlist_nulls_head *head;
@@ -1656,19 +1654,18 @@ bpf_hash_map_seq_find_next(struct bpf_iter_seq_hash_map_info *info,
 
 		/* not found, unlock and go to the next bucket */
 		b = &htab->buckets[bucket_id++];
-		htab_unlock_bucket(htab, b, flags);
+		rcu_read_unlock();
 		skip_elems = 0;
 	}
 
 	for (i = bucket_id; i < htab->n_buckets; i++) {
 		b = &htab->buckets[i];
-		flags = htab_lock_bucket(htab, b);
+		rcu_read_lock();
 
 		count = 0;
 		head = &b->head;
 		hlist_nulls_for_each_entry_rcu(elem, n, head, hash_node) {
 			if (count >= skip_elems) {
-				info->flags = flags;
 				info->bucket_id = i;
 				info->skip_elems = count;
 				return elem;
@@ -1676,7 +1673,7 @@ bpf_hash_map_seq_find_next(struct bpf_iter_seq_hash_map_info *info,
 			count++;
 		}
 
-		htab_unlock_bucket(htab, b, flags);
+		rcu_read_unlock();
 		skip_elems = 0;
 	}
 
@@ -1754,14 +1751,10 @@ static int bpf_hash_map_seq_show(struct seq_file *seq, void *v)
 
 static void bpf_hash_map_seq_stop(struct seq_file *seq, void *v)
 {
-	struct bpf_iter_seq_hash_map_info *info = seq->private;
-
 	if (!v)
 		(void)__bpf_hash_map_seq_show(seq, NULL);
 	else
-		htab_unlock_bucket(info->htab,
-				   &info->htab->buckets[info->bucket_id],
-				   info->flags);
+		rcu_read_unlock();
 }
 
 static int bpf_iter_init_hash_map(void *priv_data,
-- 
2.24.1



* [PATCH bpf 2/2] selftests/bpf: add bpf_{update,delete}_map_elem in hashmap iter program
  2020-09-02 23:53 [PATCH bpf 0/2] bpf: do not use bucket_lock for hashmap iterator Yonghong Song
  2020-09-02 23:53 ` [PATCH bpf 1/2] " Yonghong Song
@ 2020-09-02 23:53 ` Yonghong Song
  2020-09-04  0:44 ` [PATCH bpf 0/2] bpf: do not use bucket_lock for hashmap iterator Alexei Starovoitov
  2 siblings, 0 replies; 7+ messages in thread
From: Yonghong Song @ 2020-09-02 23:53 UTC (permalink / raw)
  To: bpf, Lorenz Bauer, Martin KaFai Lau, netdev
  Cc: Alexei Starovoitov, Daniel Borkmann, kernel-team

Add bpf_{update,delete}_map_elem calls on the very map element the
iter program is visiting. Due to rcu protection, the visited map
elements, although stale, should still contain correct values.

  $ ./test_progs -n 4/18
  #4/18 bpf_hash_map:OK
  #4 bpf_iter:OK
  Summary: 1/1 PASSED, 0 SKIPPED, 0 FAILED

Signed-off-by: Yonghong Song <yhs@fb.com>
---
 .../selftests/bpf/progs/bpf_iter_bpf_hash_map.c   | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/tools/testing/selftests/bpf/progs/bpf_iter_bpf_hash_map.c b/tools/testing/selftests/bpf/progs/bpf_iter_bpf_hash_map.c
index 07ddbfdbcab7..6dfce3fd68bc 100644
--- a/tools/testing/selftests/bpf/progs/bpf_iter_bpf_hash_map.c
+++ b/tools/testing/selftests/bpf/progs/bpf_iter_bpf_hash_map.c
@@ -47,7 +47,10 @@ int dump_bpf_hash_map(struct bpf_iter__bpf_map_elem *ctx)
 	__u32 seq_num = ctx->meta->seq_num;
 	struct bpf_map *map = ctx->map;
 	struct key_t *key = ctx->key;
+	struct key_t tmp_key;
 	__u64 *val = ctx->value;
+	__u64 tmp_val = 0;
+	int ret;
 
 	if (in_test_mode) {
 		/* test mode is used by selftests to
@@ -61,6 +64,18 @@ int dump_bpf_hash_map(struct bpf_iter__bpf_map_elem *ctx)
 		if (key == (void *)0 || val == (void *)0)
 			return 0;
 
+		/* update the value and then delete the <key, value> pair.
+		 * it should not impact the existing 'val' which is still
+		 * accessible under rcu.
+		 */
+		__builtin_memcpy(&tmp_key, key, sizeof(struct key_t));
+		ret = bpf_map_update_elem(&hashmap1, &tmp_key, &tmp_val, 0);
+		if (ret)
+			return 0;
+		ret = bpf_map_delete_elem(&hashmap1, &tmp_key);
+		if (ret)
+			return 0;
+
 		key_sum_a += key->a;
 		key_sum_b += key->b;
 		key_sum_c += key->c;
-- 
2.24.1



* Re: [PATCH bpf 1/2] bpf: do not use bucket_lock for hashmap iterator
  2020-09-02 23:53 ` [PATCH bpf 1/2] " Yonghong Song
@ 2020-09-03  1:25   ` Andrii Nakryiko
  2020-09-03  2:44     ` Yonghong Song
  0 siblings, 1 reply; 7+ messages in thread
From: Andrii Nakryiko @ 2020-09-03  1:25 UTC (permalink / raw)
  To: Yonghong Song
  Cc: bpf, Lorenz Bauer, Martin KaFai Lau, Networking,
	Alexei Starovoitov, Daniel Borkmann, Kernel Team

On Wed, Sep 2, 2020 at 4:56 PM Yonghong Song <yhs@fb.com> wrote:
>
> Currently, for a hashmap, the bpf iterator will grab a bucket lock, a
> spinlock, before traversing the elements in the bucket. This can ensure
> all bpf visited elements are valid. But this mechanism may cause a
> deadlock if an update/deletion happens to the same bucket of the
> visited map in the program. For example, if we add a bpf_map_update_elem()
> call on the visited element in the selftest bpf_iter_bpf_hash_map.c,
> we will have the following deadlock:
>

[...]

>
> Compared to the old bucket_lock mechanism, if a concurrent update/delete happens,
> we may visit stale elements, miss some elements, or repeat some elements.
> I think this is a reasonable compromise. For users wanting to avoid

I agree, the only reliable way to iterate map without duplicates and
missed elements is to not update that map during iteration (unless we
start supporting point-in-time snapshots, which is a very different
matter).


> stale, missing/repeated accesses, bpf_map batch access syscall interface
> can be used.
>
> Signed-off-by: Yonghong Song <yhs@fb.com>
> ---
>  kernel/bpf/hashtab.c | 15 ++++-----------
>  1 file changed, 4 insertions(+), 11 deletions(-)
>
> diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
> index 78dfff6a501b..7df28a45c66b 100644
> --- a/kernel/bpf/hashtab.c
> +++ b/kernel/bpf/hashtab.c
> @@ -1622,7 +1622,6 @@ struct bpf_iter_seq_hash_map_info {
>         struct bpf_map *map;
>         struct bpf_htab *htab;
>         void *percpu_value_buf; // non-zero means percpu hash
> -       unsigned long flags;
>         u32 bucket_id;
>         u32 skip_elems;
>  };
> @@ -1632,7 +1631,6 @@ bpf_hash_map_seq_find_next(struct bpf_iter_seq_hash_map_info *info,
>                            struct htab_elem *prev_elem)
>  {
>         const struct bpf_htab *htab = info->htab;
> -       unsigned long flags = info->flags;
>         u32 skip_elems = info->skip_elems;
>         u32 bucket_id = info->bucket_id;
>         struct hlist_nulls_head *head;
> @@ -1656,19 +1654,18 @@ bpf_hash_map_seq_find_next(struct bpf_iter_seq_hash_map_info *info,
>
>                 /* not found, unlock and go to the next bucket */
>                 b = &htab->buckets[bucket_id++];
> -               htab_unlock_bucket(htab, b, flags);
> +               rcu_read_unlock();

Just double checking as I don't yet completely understand all the
sleepable BPF implications. If the map is used from a sleepable BPF
program, we are still ok doing just rcu_read_lock/rcu_read_unlock when
accessing BPF map elements, right? No need for extra
rcu_read_lock_trace/rcu_read_unlock_trace?

>                 skip_elems = 0;
>         }
>

[...]


* Re: [PATCH bpf 1/2] bpf: do not use bucket_lock for hashmap iterator
  2020-09-03  1:25   ` Andrii Nakryiko
@ 2020-09-03  2:44     ` Yonghong Song
  2020-09-04  0:03       ` Alexei Starovoitov
  0 siblings, 1 reply; 7+ messages in thread
From: Yonghong Song @ 2020-09-03  2:44 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: bpf, Lorenz Bauer, Martin KaFai Lau, Networking,
	Alexei Starovoitov, Daniel Borkmann, Kernel Team



On 9/2/20 6:25 PM, Andrii Nakryiko wrote:
> On Wed, Sep 2, 2020 at 4:56 PM Yonghong Song <yhs@fb.com> wrote:
>>
>> Currently, for a hashmap, the bpf iterator will grab a bucket lock, a
>> spinlock, before traversing the elements in the bucket. This can ensure
>> all bpf visited elements are valid. But this mechanism may cause a
>> deadlock if an update/deletion happens to the same bucket of the
>> visited map in the program. For example, if we add a bpf_map_update_elem()
>> call on the visited element in the selftest bpf_iter_bpf_hash_map.c,
>> we will have the following deadlock:
>>
> 
> [...]
> 
>>
>> Compared to the old bucket_lock mechanism, if a concurrent update/delete happens,
>> we may visit stale elements, miss some elements, or repeat some elements.
>> I think this is a reasonable compromise. For users wanting to avoid
> 
> I agree, the only reliable way to iterate map without duplicates and
> missed elements is to not update that map during iteration (unless we
> start supporting point-in-time snapshots, which is a very different
> matter).
> 
> 
>> stale, missing/repeated accesses, bpf_map batch access syscall interface
>> can be used.
>>
>> Signed-off-by: Yonghong Song <yhs@fb.com>
>> ---
>>   kernel/bpf/hashtab.c | 15 ++++-----------
>>   1 file changed, 4 insertions(+), 11 deletions(-)
>>
>> diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
>> index 78dfff6a501b..7df28a45c66b 100644
>> --- a/kernel/bpf/hashtab.c
>> +++ b/kernel/bpf/hashtab.c
>> @@ -1622,7 +1622,6 @@ struct bpf_iter_seq_hash_map_info {
>>          struct bpf_map *map;
>>          struct bpf_htab *htab;
>>          void *percpu_value_buf; // non-zero means percpu hash
>> -       unsigned long flags;
>>          u32 bucket_id;
>>          u32 skip_elems;
>>   };
>> @@ -1632,7 +1631,6 @@ bpf_hash_map_seq_find_next(struct bpf_iter_seq_hash_map_info *info,
>>                             struct htab_elem *prev_elem)
>>   {
>>          const struct bpf_htab *htab = info->htab;
>> -       unsigned long flags = info->flags;
>>          u32 skip_elems = info->skip_elems;
>>          u32 bucket_id = info->bucket_id;
>>          struct hlist_nulls_head *head;
>> @@ -1656,19 +1654,18 @@ bpf_hash_map_seq_find_next(struct bpf_iter_seq_hash_map_info *info,
>>
>>                  /* not found, unlock and go to the next bucket */
>>                  b = &htab->buckets[bucket_id++];
>> -               htab_unlock_bucket(htab, b, flags);
>> +               rcu_read_unlock();
> 
> Just double checking as I don't yet completely understand all the
> sleepable BPF implications. If the map is used from a sleepable BPF
> program, we are still ok doing just rcu_read_lock/rcu_read_unlock when
> accessing BPF map elements, right? No need for extra
> rcu_read_lock_trace/rcu_read_unlock_trace?
I think it is fine now since currently bpf_iter program cannot be 
sleepable and the current sleepable program framework already allows the 
following scenario.
   - map1 is a preallocated hashmap shared by two programs,
     prog1_nosleep and prog2_sleepable

...				  ...
rcu_read_lock()			  rcu_read_lock_trace()
run prog1_nosleep                 run prog2_sleepable
   lookup/update/delete map1 elem    lookup/update/delete map1 elem
rcu_read_unlock()		  rcu_read_unlock_trace()
...				  ...

The prog1_nosleep could be a bpf_iter program or a networking program.

Alexei, could you confirm the above scenario is properly supported now?

> 
>>                  skip_elems = 0;
>>          }
>>
> 
> [...]
> 


* Re: [PATCH bpf 1/2] bpf: do not use bucket_lock for hashmap iterator
  2020-09-03  2:44     ` Yonghong Song
@ 2020-09-04  0:03       ` Alexei Starovoitov
  0 siblings, 0 replies; 7+ messages in thread
From: Alexei Starovoitov @ 2020-09-04  0:03 UTC (permalink / raw)
  To: Yonghong Song
  Cc: Andrii Nakryiko, bpf, Lorenz Bauer, Martin KaFai Lau, Networking,
	Alexei Starovoitov, Daniel Borkmann, Kernel Team

On Wed, Sep 02, 2020 at 07:44:34PM -0700, Yonghong Song wrote:
> 
> 
> On 9/2/20 6:25 PM, Andrii Nakryiko wrote:
> > On Wed, Sep 2, 2020 at 4:56 PM Yonghong Song <yhs@fb.com> wrote:
> > > 
> > > Currently, for a hashmap, the bpf iterator will grab a bucket lock, a
> > > spinlock, before traversing the elements in the bucket. This can ensure
> > > all bpf visited elements are valid. But this mechanism may cause a
> > > deadlock if an update/deletion happens to the same bucket of the
> > > visited map in the program. For example, if we add a bpf_map_update_elem()
> > > call on the visited element in the selftest bpf_iter_bpf_hash_map.c,
> > > we will have the following deadlock:
> > > 
> > 
> > [...]
> > 
> > > 
> > > Compared to the old bucket_lock mechanism, if a concurrent update/delete happens,
> > > we may visit stale elements, miss some elements, or repeat some elements.
> > > I think this is a reasonable compromise. For users wanting to avoid
> > 
> > I agree, the only reliable way to iterate map without duplicates and
> > missed elements is to not update that map during iteration (unless we
> > start supporting point-in-time snapshots, which is a very different
> > matter).
> > 
> > 
> > > stale, missing/repeated accesses, bpf_map batch access syscall interface
> > > can be used.
> > > 
> > > Signed-off-by: Yonghong Song <yhs@fb.com>
> > > ---
> > >   kernel/bpf/hashtab.c | 15 ++++-----------
> > >   1 file changed, 4 insertions(+), 11 deletions(-)
> > > 
> > > diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
> > > index 78dfff6a501b..7df28a45c66b 100644
> > > --- a/kernel/bpf/hashtab.c
> > > +++ b/kernel/bpf/hashtab.c
> > > @@ -1622,7 +1622,6 @@ struct bpf_iter_seq_hash_map_info {
> > >          struct bpf_map *map;
> > >          struct bpf_htab *htab;
> > >          void *percpu_value_buf; // non-zero means percpu hash
> > > -       unsigned long flags;
> > >          u32 bucket_id;
> > >          u32 skip_elems;
> > >   };
> > > @@ -1632,7 +1631,6 @@ bpf_hash_map_seq_find_next(struct bpf_iter_seq_hash_map_info *info,
> > >                             struct htab_elem *prev_elem)
> > >   {
> > >          const struct bpf_htab *htab = info->htab;
> > > -       unsigned long flags = info->flags;
> > >          u32 skip_elems = info->skip_elems;
> > >          u32 bucket_id = info->bucket_id;
> > >          struct hlist_nulls_head *head;
> > > @@ -1656,19 +1654,18 @@ bpf_hash_map_seq_find_next(struct bpf_iter_seq_hash_map_info *info,
> > > 
> > >                  /* not found, unlock and go to the next bucket */
> > >                  b = &htab->buckets[bucket_id++];
> > > -               htab_unlock_bucket(htab, b, flags);
> > > +               rcu_read_unlock();
> > 
> > Just double checking as I don't yet completely understand all the
> > sleepable BPF implications. If the map is used from a sleepable BPF
> > program, we are still ok doing just rcu_read_lock/rcu_read_unlock when
> > accessing BPF map elements, right? No need for extra
> > rcu_read_lock_trace/rcu_read_unlock_trace?
> I think it is fine now since currently bpf_iter program cannot be sleepable
> and the current sleepable program framework already allows the following
> scenario.
>   - map1 is a preallocated hashmap shared by two programs,
>     prog1_nosleep and prog2_sleepable
> 
> ...				  ...
> rcu_read_lock()			  rcu_read_lock_trace()
> run prog1_nosleep                 run prog2_sleepable
>   lookup/update/delete map1 elem    lookup/update/delete map1 elem
> rcu_read_unlock()		  rcu_read_unlock_trace()
> ...				  ...

rcu_trace doesn't protect the map; it protects the program. Even for
prog2_sleepable the map is protected by rcu. The whole map, including all
elements, will be freed after both sleepable and non-sleepable progs stop
executing. This rcu_read_lock is needed for non-preallocated hash maps, where
individual elements are rcu protected. See free_htab_elem() doing call_rcu().
When the combination of sleepable progs and non-prealloc hashmaps is enabled,
we would need to revisit this iterator assumption.
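
As a rough illustration (simplified struct layout and helper names, not the
exact hashtab.c code), that per-element deferral works like:

 #include <linux/kernel.h>
 #include <linux/list_nulls.h>
 #include <linux/rcupdate.h>
 #include <linux/slab.h>

 struct htab_elem {
         struct hlist_nulls_node hash_node;
         struct rcu_head rcu;
         /* key/value storage follows in the real struct */
 };

 static void htab_elem_free_rcu_cb(struct rcu_head *head)
 {
         struct htab_elem *l = container_of(head, struct htab_elem, rcu);

         kfree(l);
 }

 static void free_elem_deferred(struct htab_elem *l)
 {
         /* an iterator inside rcu_read_lock() may still hold a pointer to l,
          * so defer the actual kfree() until an rcu grace period has elapsed
          */
         call_rcu(&l->rcu, htab_elem_free_rcu_cb);
 }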


* Re: [PATCH bpf 0/2] bpf: do not use bucket_lock for hashmap iterator
  2020-09-02 23:53 [PATCH bpf 0/2] bpf: do not use bucket_lock for hashmap iterator Yonghong Song
  2020-09-02 23:53 ` [PATCH bpf 1/2] " Yonghong Song
  2020-09-02 23:53 ` [PATCH bpf 2/2] selftests/bpf: add bpf_{update,delete}_map_elem in hashmap iter program Yonghong Song
@ 2020-09-04  0:44 ` Alexei Starovoitov
  2 siblings, 0 replies; 7+ messages in thread
From: Alexei Starovoitov @ 2020-09-04  0:44 UTC (permalink / raw)
  To: Yonghong Song
  Cc: bpf, Lorenz Bauer, Martin KaFai Lau, Network Development,
	Alexei Starovoitov, Daniel Borkmann, Kernel Team

On Wed, Sep 2, 2020 at 4:54 PM Yonghong Song <yhs@fb.com> wrote:
>
> Currently, the bpf hashmap iterator takes a bucket_lock, a spin_lock,
> before visiting each element in the bucket. This will cause a deadlock
> if the iterator program does a map update/delete on an element that
> falls into the same bucket of the map being visited.
>
> To avoid the deadlock, let us just use rcu_read_lock instead of the
> bucket_lock. This may result in visiting stale elements, missing some
> elements, or repeating some elements if a concurrent map delete/update
> happens on the same map. I think using rcu_read_lock is a reasonable
> compromise. Users who care about stale/missing/repeated elements can use
> the bpf map batch access syscall interface instead.
>
> Note that another approach would be to check, during the bpf_iter link
> stage, whether the iter program might update/delete elements of the
> visited map and, if so, reject the link_create. For that approach, the
> verifier would need to record whether an update/delete operation happens
> for each map. I just feel such checking is too specialized, hence I still
> prefer the rcu_read_lock approach.
>
> Patch #1 contains the kernel implementation and Patch #2 adds a selftest
> which can trigger the deadlock without Patch #1.

Applied. Thanks


end of thread, other threads:[~2020-09-04  0:44 UTC | newest]

Thread overview: 7+ messages
-- links below jump to the message on this page --
2020-09-02 23:53 [PATCH bpf 0/2] bpf: do not use bucket_lock for hashmap iterator Yonghong Song
2020-09-02 23:53 ` [PATCH bpf 1/2] " Yonghong Song
2020-09-03  1:25   ` Andrii Nakryiko
2020-09-03  2:44     ` Yonghong Song
2020-09-04  0:03       ` Alexei Starovoitov
2020-09-02 23:53 ` [PATCH bpf 2/2] selftests/bpf: add bpf_{update,delete}_map_elem in hashmap iter program Yonghong Song
2020-09-04  0:44 ` [PATCH bpf 0/2] bpf: do not use bucket_lock for hashmap iterator Alexei Starovoitov
