bpf.vger.kernel.org archive mirror
* [PATCH v3 bpf-next] bpf: Avoid unnecessary -EBUSY from htab_lock_bucket
@ 2023-10-05  3:23 Song Liu
  2023-10-05  6:31 ` Song Liu
  0 siblings, 1 reply; 4+ messages in thread
From: Song Liu @ 2023-10-05  3:23 UTC (permalink / raw)
  To: bpf; +Cc: ast, daniel, andrii, martin.lau, kernel-team, Song Liu, Tejun Heo

htab_lock_bucket uses the following logic to avoid recursion:

1. preempt_disable();
2. check percpu counter htab->map_locked[hash] for recursion;
   2.1. if map_locked[hash] is already taken, return -EBUSY;
3. raw_spin_lock_irqsave();

However, if an IRQ hits between 2 and 3, BPF programs attached to the IRQ
logic will not be able to access the same hash of the hashtab and will get
-EBUSY. This -EBUSY is unnecessary. Fix it by disabling IRQs before
checking map_locked:

1. preempt_disable();
2. local_irq_save();
3. check percpu counter htab->map_locked[hash] for recursion;
   3.1. if map_locked[hash] is already taken, return -EBUSY;
4. raw_spin_lock().

Similarly, use raw_spin_unlock() and local_irq_restore() in
htab_unlock_bucket().
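
For illustration, a simplified sketch of the pre-patch ordering (the
code this patch removes, trimmed for brevity), with the problematic
window marked:

	preempt_disable();
	if (unlikely(__this_cpu_inc_return(*(htab->map_locked[hash])) != 1)) {
		__this_cpu_dec(*(htab->map_locked[hash]));
		preempt_enable();
		return -EBUSY;
	}
	/*
	 * Window: an IRQ firing here runs with map_locked[hash] == 1,
	 * but before b->raw_lock is taken. A BPF program attached in
	 * that IRQ path fails the counter check above and gets a
	 * spurious -EBUSY, even though nothing is locked yet.
	 */
	raw_spin_lock_irqsave(&b->raw_lock, flags);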

Suggested-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Song Liu <song@kernel.org>

---
Changes in v3:
1. Use raw_local_irq_* APIs instead.

Changes in v2:
1. Use raw_spin_unlock() and local_irq_restore() in htab_unlock_bucket().
   (Andrii)
---
 kernel/bpf/hashtab.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
index a8c7e1c5abfa..74c8d1b41dd5 100644
--- a/kernel/bpf/hashtab.c
+++ b/kernel/bpf/hashtab.c
@@ -155,13 +155,15 @@ static inline int htab_lock_bucket(const struct bpf_htab *htab,
 	hash = hash & min_t(u32, HASHTAB_MAP_LOCK_MASK, htab->n_buckets - 1);
 
 	preempt_disable();
+	raw_local_irq_save(flags);
 	if (unlikely(__this_cpu_inc_return(*(htab->map_locked[hash])) != 1)) {
 		__this_cpu_dec(*(htab->map_locked[hash]));
+		raw_local_irq_restore(flags);
 		preempt_enable();
 		return -EBUSY;
 	}
 
-	raw_spin_lock_irqsave(&b->raw_lock, flags);
+	raw_spin_lock(&b->raw_lock);
 	*pflags = flags;
 
 	return 0;
@@ -172,8 +174,9 @@ static inline void htab_unlock_bucket(const struct bpf_htab *htab,
 				      unsigned long flags)
 {
 	hash = hash & min_t(u32, HASHTAB_MAP_LOCK_MASK, htab->n_buckets - 1);
-	raw_spin_unlock_irqrestore(&b->raw_lock, flags);
+	raw_spin_unlock(&b->raw_lock);
 	__this_cpu_dec(*(htab->map_locked[hash]));
+	raw_local_irq_restore(flags);
 	preempt_enable();
 }
 
-- 
2.34.1



* Re: [PATCH v3 bpf-next] bpf: Avoid unnecessary -EBUSY from htab_lock_bucket
  2023-10-05  3:23 [PATCH v3 bpf-next] bpf: Avoid unnecessary -EBUSY from htab_lock_bucket Song Liu
@ 2023-10-05  6:31 ` Song Liu
  0 siblings, 0 replies; 4+ messages in thread
From: Song Liu @ 2023-10-05  6:31 UTC (permalink / raw)
  To: Song Liu
  Cc: bpf, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, Kernel Team, Tejun Heo



> On Oct 4, 2023, at 8:23 PM, Song Liu <song@kernel.org> wrote:
> 
> htab_lock_bucket uses the following logic to avoid recursion:
> 
> 1. preempt_disable();
> 2. check percpu counter htab->map_locked[hash] for recursion;
>   2.1. if map_locked[hash] is already taken, return -EBUSY;
> 3. raw_spin_lock_irqsave();
> 
> However, if an IRQ hits between 2 and 3, BPF programs attached to the IRQ
> logic will not be able to access the same hash of the hashtab and will get
> -EBUSY. This -EBUSY is unnecessary. Fix it by disabling IRQs before
> checking map_locked:
> 
> 1. preempt_disable();
> 2. local_irq_save();
> 3. check percpu counter htab->map_locked[hash] for recursion;
>   3.1. if map_locked[hash] is already taken, return -EBUSY;
> 4. raw_spin_lock().
> 
> Similarly, use raw_spin_unlock() and local_irq_restore() in
> htab_unlock_bucket().
> 
> Suggested-by: Tejun Heo <tj@kernel.org>
> Signed-off-by: Song Liu <song@kernel.org>

Somehow this didn't make it to lore, and thus not to patchwork. Let
me resend; sorry for the noise.

Song



* Re: [PATCH v3 bpf-next] bpf: Avoid unnecessary -EBUSY from htab_lock_bucket
  2023-10-05  6:31 Song Liu
@ 2023-10-05 17:52 ` Song Liu
  0 siblings, 0 replies; 4+ messages in thread
From: Song Liu @ 2023-10-05 17:52 UTC (permalink / raw)
  To: bpf; +Cc: ast, daniel, andrii, martin.lau, kernel-team, Tejun Heo

On Wed, Oct 4, 2023 at 11:32 PM Song Liu <song@kernel.org> wrote:
>
> htab_lock_bucket uses the following logic to avoid recursion:
>
> 1. preempt_disable();
> 2. check percpu counter htab->map_locked[hash] for recursion;
>    2.1. if map_locked[hash] is already taken, return -EBUSY;
> 3. raw_spin_lock_irqsave();
>
> However, if an IRQ hits between 2 and 3, BPF programs attached to the IRQ
> logic will not be able to access the same hash of the hashtab and will get
> -EBUSY. This -EBUSY is unnecessary. Fix it by disabling IRQs before
> checking map_locked:
>
> 1. preempt_disable();
> 2. local_irq_save();
> 3. check percpu counter htab->map_locked[hash] for recursion;
>    3.1. if map_locked[hash] is already taken, return -EBUSY;
> 4. raw_spin_lock().
>
> Similarly, use raw_spin_unlock() and local_irq_restore() in
> htab_unlock_bucket().
>
> Suggested-by: Tejun Heo <tj@kernel.org>
> Signed-off-by: Song Liu <song@kernel.org>

This still doesn't look right. Let me try more...

Thanks,
Song

>
> ---
> Changes in v3:
> 1. Use raw_local_irq_* APIs instead.
>
> Changes in v2:
> 1. Use raw_spin_unlock() and local_irq_restore() in htab_unlock_bucket().
>    (Andrii)
> ---
>  kernel/bpf/hashtab.c | 7 +++++--
>  1 file changed, 5 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
> index a8c7e1c5abfa..74c8d1b41dd5 100644
> --- a/kernel/bpf/hashtab.c
> +++ b/kernel/bpf/hashtab.c
> @@ -155,13 +155,15 @@ static inline int htab_lock_bucket(const struct bpf_htab *htab,
>         hash = hash & min_t(u32, HASHTAB_MAP_LOCK_MASK, htab->n_buckets - 1);
>
>         preempt_disable();
> +       raw_local_irq_save(flags);
>         if (unlikely(__this_cpu_inc_return(*(htab->map_locked[hash])) != 1)) {
>                 __this_cpu_dec(*(htab->map_locked[hash]));
> +               raw_local_irq_restore(flags);
>                 preempt_enable();
>                 return -EBUSY;
>         }
>
> -       raw_spin_lock_irqsave(&b->raw_lock, flags);
> +       raw_spin_lock(&b->raw_lock);
>         *pflags = flags;
>
>         return 0;
> @@ -172,8 +174,9 @@ static inline void htab_unlock_bucket(const struct bpf_htab *htab,
>                                       unsigned long flags)
>  {
>         hash = hash & min_t(u32, HASHTAB_MAP_LOCK_MASK, htab->n_buckets - 1);
> -       raw_spin_unlock_irqrestore(&b->raw_lock, flags);
> +       raw_spin_unlock(&b->raw_lock);
>         __this_cpu_dec(*(htab->map_locked[hash]));
> +       raw_local_irq_restore(flags);
>         preempt_enable();
>  }
>
> --
> 2.34.1
>


* [PATCH v3 bpf-next] bpf: Avoid unnecessary -EBUSY from htab_lock_bucket
@ 2023-10-05  6:31 Song Liu
  2023-10-05 17:52 ` Song Liu
  0 siblings, 1 reply; 4+ messages in thread
From: Song Liu @ 2023-10-05  6:31 UTC (permalink / raw)
  To: bpf; +Cc: ast, daniel, andrii, martin.lau, kernel-team, Song Liu, Tejun Heo

htab_lock_bucket uses the following logic to avoid recursion:

1. preempt_disable();
2. check percpu counter htab->map_locked[hash] for recursion;
   2.1. if map_locked[hash] is already taken, return -EBUSY;
3. raw_spin_lock_irqsave();

However, if an IRQ hits between 2 and 3, BPF programs attached to the IRQ
logic will not be able to access the same hash of the hashtab and will get
-EBUSY. This -EBUSY is unnecessary. Fix it by disabling IRQs before
checking map_locked:

1. preempt_disable();
2. local_irq_save();
3. check percpu counter htab->map_locked[hash] for recursion;
   3.1. if map_locked[hash] is already taken, return -EBUSY;
4. raw_spin_lock().

Similarly, use raw_spin_unlock() and local_irq_restore() in
htab_unlock_bucket().

Suggested-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Song Liu <song@kernel.org>

---
Changes in v3:
1. Use raw_local_irq_* APIs instead.

Changes in v2:
1. Use raw_spin_unlock() and local_irq_restore() in htab_unlock_bucket().
   (Andrii)
---
 kernel/bpf/hashtab.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
index a8c7e1c5abfa..74c8d1b41dd5 100644
--- a/kernel/bpf/hashtab.c
+++ b/kernel/bpf/hashtab.c
@@ -155,13 +155,15 @@ static inline int htab_lock_bucket(const struct bpf_htab *htab,
 	hash = hash & min_t(u32, HASHTAB_MAP_LOCK_MASK, htab->n_buckets - 1);
 
 	preempt_disable();
+	raw_local_irq_save(flags);
 	if (unlikely(__this_cpu_inc_return(*(htab->map_locked[hash])) != 1)) {
 		__this_cpu_dec(*(htab->map_locked[hash]));
+		raw_local_irq_restore(flags);
 		preempt_enable();
 		return -EBUSY;
 	}
 
-	raw_spin_lock_irqsave(&b->raw_lock, flags);
+	raw_spin_lock(&b->raw_lock);
 	*pflags = flags;
 
 	return 0;
@@ -172,8 +174,9 @@ static inline void htab_unlock_bucket(const struct bpf_htab *htab,
 				      unsigned long flags)
 {
 	hash = hash & min_t(u32, HASHTAB_MAP_LOCK_MASK, htab->n_buckets - 1);
-	raw_spin_unlock_irqrestore(&b->raw_lock, flags);
+	raw_spin_unlock(&b->raw_lock);
 	__this_cpu_dec(*(htab->map_locked[hash]));
+	raw_local_irq_restore(flags);
 	preempt_enable();
 }
 
-- 
2.34.1



end of thread, other threads:[~2023-10-05 17:52 UTC | newest]

Thread overview: 4+ messages
2023-10-05  3:23 [PATCH v3 bpf-next] bpf: Avoid unnecessary -EBUSY from htab_lock_bucket Song Liu
2023-10-05  6:31 ` Song Liu
2023-10-05  6:31 Song Liu
2023-10-05 17:52 ` Song Liu
