* [PATCH v2] bpf: Simplify code by using for_each_cpu_wrap()
@ 2022-09-07 15:57 Punit Agrawal
  2022-09-08  0:55 ` Song Liu
  2022-09-10 23:30 ` patchwork-bot+netdevbpf
  0 siblings, 2 replies; 6+ messages in thread
From: Punit Agrawal @ 2022-09-07 15:57 UTC (permalink / raw)
  To: ast
  Cc: Punit Agrawal, bpf, linux-kernel, zhoufeng.zf, daniel, andrii,
	martin.lau, song, yhs, john.fastabend, kpsingh, jolsa

In the percpu freelist code, it is a common pattern to iterate over
the possible CPUs mask starting with the current CPU. The pattern is
implemented using a hand-rolled while loop with the loop variable
increment being open-coded.

Simplify the code by using the for_each_cpu_wrap() helper to iterate
over the possible CPUs starting with the current CPU. As a result,
some of the special-casing in the loop also gets simplified.

No functional change intended.

Signed-off-by: Punit Agrawal <punit.agrawal@bytedance.com>
---
v1 -> v2:
* Fixed an incorrect transformation that changed the semantics of __pcpu_freelist_push_nmi()

Previous version -
v1: https://lore.kernel.org/all/20220817130807.68279-1-punit.agrawal@bytedance.com/
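
For reference, the transformation at the heart of the patch looks
roughly like this (a minimal standalone sketch of the pattern, not the
actual freelist code):

	/* Before: open-coded walk over the possible CPUs, wrapping around */
	int cpu, orig_cpu;

	orig_cpu = cpu = raw_smp_processor_id();
	while (1) {
		/* ... operate on per-CPU state for @cpu ... */
		cpu = cpumask_next(cpu, cpu_possible_mask);
		if (cpu >= nr_cpu_ids)
			cpu = 0;	/* wrap to the first CPU */
		if (cpu == orig_cpu)
			break;		/* every CPU visited once */
	}

	/* After: the helper hides the wrap-around and termination logic */
	int cpu;

	for_each_cpu_wrap(cpu, cpu_possible_mask, raw_smp_processor_id()) {
		/* ... operate on per-CPU state for @cpu ... */
	}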

 kernel/bpf/percpu_freelist.c | 48 ++++++++++++------------------------
 1 file changed, 16 insertions(+), 32 deletions(-)

diff --git a/kernel/bpf/percpu_freelist.c b/kernel/bpf/percpu_freelist.c
index 00b874c8e889..b6e7f5c5b9ab 100644
--- a/kernel/bpf/percpu_freelist.c
+++ b/kernel/bpf/percpu_freelist.c
@@ -58,23 +58,21 @@ static inline void ___pcpu_freelist_push_nmi(struct pcpu_freelist *s,
 {
 	int cpu, orig_cpu;
 
-	orig_cpu = cpu = raw_smp_processor_id();
+	orig_cpu = raw_smp_processor_id();
 	while (1) {
-		struct pcpu_freelist_head *head;
+		for_each_cpu_wrap(cpu, cpu_possible_mask, orig_cpu) {
+			struct pcpu_freelist_head *head;
 
-		head = per_cpu_ptr(s->freelist, cpu);
-		if (raw_spin_trylock(&head->lock)) {
-			pcpu_freelist_push_node(head, node);
-			raw_spin_unlock(&head->lock);
-			return;
+			head = per_cpu_ptr(s->freelist, cpu);
+			if (raw_spin_trylock(&head->lock)) {
+				pcpu_freelist_push_node(head, node);
+				raw_spin_unlock(&head->lock);
+				return;
+			}
 		}
-		cpu = cpumask_next(cpu, cpu_possible_mask);
-		if (cpu >= nr_cpu_ids)
-			cpu = 0;
 
 		/* cannot lock any per cpu lock, try extralist */
-		if (cpu == orig_cpu &&
-		    pcpu_freelist_try_push_extra(s, node))
+		if (pcpu_freelist_try_push_extra(s, node))
 			return;
 	}
 }
@@ -125,13 +123,12 @@ static struct pcpu_freelist_node *___pcpu_freelist_pop(struct pcpu_freelist *s)
 {
 	struct pcpu_freelist_head *head;
 	struct pcpu_freelist_node *node;
-	int orig_cpu, cpu;
+	int cpu;
 
-	orig_cpu = cpu = raw_smp_processor_id();
-	while (1) {
+	for_each_cpu_wrap(cpu, cpu_possible_mask, raw_smp_processor_id()) {
 		head = per_cpu_ptr(s->freelist, cpu);
 		if (!READ_ONCE(head->first))
-			goto next_cpu;
+			continue;
 		raw_spin_lock(&head->lock);
 		node = head->first;
 		if (node) {
@@ -140,12 +137,6 @@ static struct pcpu_freelist_node *___pcpu_freelist_pop(struct pcpu_freelist *s)
 			return node;
 		}
 		raw_spin_unlock(&head->lock);
-next_cpu:
-		cpu = cpumask_next(cpu, cpu_possible_mask);
-		if (cpu >= nr_cpu_ids)
-			cpu = 0;
-		if (cpu == orig_cpu)
-			break;
 	}
 
 	/* per cpu lists are all empty, try extralist */
@@ -164,13 +155,12 @@ ___pcpu_freelist_pop_nmi(struct pcpu_freelist *s)
 {
 	struct pcpu_freelist_head *head;
 	struct pcpu_freelist_node *node;
-	int orig_cpu, cpu;
+	int cpu;
 
-	orig_cpu = cpu = raw_smp_processor_id();
-	while (1) {
+	for_each_cpu_wrap(cpu, cpu_possible_mask, raw_smp_processor_id()) {
 		head = per_cpu_ptr(s->freelist, cpu);
 		if (!READ_ONCE(head->first))
-			goto next_cpu;
+			continue;
 		if (raw_spin_trylock(&head->lock)) {
 			node = head->first;
 			if (node) {
@@ -180,12 +170,6 @@ ___pcpu_freelist_pop_nmi(struct pcpu_freelist *s)
 			}
 			raw_spin_unlock(&head->lock);
 		}
-next_cpu:
-		cpu = cpumask_next(cpu, cpu_possible_mask);
-		if (cpu >= nr_cpu_ids)
-			cpu = 0;
-		if (cpu == orig_cpu)
-			break;
 	}
 
 	/* cannot pop from per cpu lists, try extralist */
-- 
2.35.1



* Re: [PATCH v2] bpf: Simplify code by using for_each_cpu_wrap()
  2022-09-07 15:57 [PATCH v2] bpf: Simplify code by using for_each_cpu_wrap() Punit Agrawal
@ 2022-09-08  0:55 ` Song Liu
  2022-09-08 10:45   ` Punit Agrawal
  2022-09-10 23:30 ` patchwork-bot+netdevbpf
  1 sibling, 1 reply; 6+ messages in thread
From: Song Liu @ 2022-09-08  0:55 UTC (permalink / raw)
  To: Punit Agrawal
  Cc: Alexei Starovoitov, bpf, open list, zhoufeng.zf, Daniel Borkmann,
	Andrii Nakryiko, Martin KaFai Lau, Yonghong Song, John Fastabend,
	KP Singh, Jiri Olsa

On Wed, Sep 7, 2022 at 8:58 AM Punit Agrawal
<punit.agrawal@bytedance.com> wrote:
>
> [...]
> @@ -58,23 +58,21 @@ static inline void ___pcpu_freelist_push_nmi(struct pcpu_freelist *s,
>  {
>         int cpu, orig_cpu;
>
> -       orig_cpu = cpu = raw_smp_processor_id();
> +       orig_cpu = raw_smp_processor_id();
>         while (1) {
> -               struct pcpu_freelist_head *head;
> +               for_each_cpu_wrap(cpu, cpu_possible_mask, orig_cpu) {
> +                       struct pcpu_freelist_head *head;
>
> -               head = per_cpu_ptr(s->freelist, cpu);
> -               if (raw_spin_trylock(&head->lock)) {
> -                       pcpu_freelist_push_node(head, node);
> -                       raw_spin_unlock(&head->lock);
> -                       return;
> +                       head = per_cpu_ptr(s->freelist, cpu);
> +                       if (raw_spin_trylock(&head->lock)) {
> +                               pcpu_freelist_push_node(head, node);
> +                               raw_spin_unlock(&head->lock);
> +                               return;
> +                       }
>                 }
> -               cpu = cpumask_next(cpu, cpu_possible_mask);
> -               if (cpu >= nr_cpu_ids)
> -                       cpu = 0;

I personally don't like nested loops here. Maybe we can keep
the original while loop and use cpumask_next_wrap()?

Thanks,
Song



* Re: Re: [PATCH v2] bpf: Simplify code by using for_each_cpu_wrap()
  2022-09-08  0:55 ` Song Liu
@ 2022-09-08 10:45   ` Punit Agrawal
  2022-09-08 20:21     ` Song Liu
  0 siblings, 1 reply; 6+ messages in thread
From: Punit Agrawal @ 2022-09-08 10:45 UTC (permalink / raw)
  To: Song Liu
  Cc: Punit Agrawal, Alexei Starovoitov, bpf, open list, zhoufeng.zf,
	Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau,
	Yonghong Song, John Fastabend, KP Singh, Jiri Olsa

Hi Song,

Thanks for taking a look.

Song Liu <song@kernel.org> writes:

> On Wed, Sep 7, 2022 at 8:58 AM Punit Agrawal
> <punit.agrawal@bytedance.com> wrote:
>>
>> [...]
>> -               cpu = cpumask_next(cpu, cpu_possible_mask);
>> -               if (cpu >= nr_cpu_ids)
>> -                       cpu = 0;
>
> I personally don't like nested loops here. Maybe we can keep
> the original while loop and use cpumask_next_wrap()?

Out of curiosity, is there a reason to avoid nesting here? The nested
loop avoids the unnecessary "cpu == orig_cpu" check on every iteration.

As suggested, it's possible to use cpumask_next_wrap() like below -

diff --git a/kernel/bpf/percpu_freelist.c b/kernel/bpf/percpu_freelist.c
index 00b874c8e889..19e8eab70c40 100644
--- a/kernel/bpf/percpu_freelist.c
+++ b/kernel/bpf/percpu_freelist.c
@@ -68,9 +68,7 @@ static inline void ___pcpu_freelist_push_nmi(struct pcpu_freelist *s,
                        raw_spin_unlock(&head->lock);
                        return;
                }
-               cpu = cpumask_next(cpu, cpu_possible_mask);
-               if (cpu >= nr_cpu_ids)
-                       cpu = 0;
+               cpu = cpumask_next_wrap(cpu, cpu_possible_mask, orig_cpu, false);

                /* cannot lock any per cpu lock, try extralist */
                if (cpu == orig_cpu &&


I can send an updated patch if this is preferred.

Thanks,
Punit


* Re: Re: [PATCH v2] bpf: Simplify code by using for_each_cpu_wrap()
  2022-09-08 10:45   ` Punit Agrawal
@ 2022-09-08 20:21     ` Song Liu
  2022-09-09  8:59       ` [External] " Punit Agrawal
  0 siblings, 1 reply; 6+ messages in thread
From: Song Liu @ 2022-09-08 20:21 UTC (permalink / raw)
  To: Punit Agrawal
  Cc: Alexei Starovoitov, bpf, open list, zhoufeng.zf, Daniel Borkmann,
	Andrii Nakryiko, Martin KaFai Lau, Yonghong Song, John Fastabend,
	KP Singh, Jiri Olsa

On Thu, Sep 8, 2022 at 3:45 AM Punit Agrawal
<punit.agrawal@bytedance.com> wrote:
>
> Hi Song,
>
> Thanks for taking a look.
>
> Song Liu <song@kernel.org> writes:
>
> > On Wed, Sep 7, 2022 at 8:58 AM Punit Agrawal
> > <punit.agrawal@bytedance.com> wrote:
> >>
> >> [...]
> >
> > I personally don't like nested loops here. Maybe we can keep
> > the original while loop and use cpumask_next_wrap()?
>
> Out of curiosity, is there a reason to avoid nesting here? The nested
> loop avoids the unnecessary "cpu == orig_cpu" check on every iteration.

for_each_cpu_wrap() is a more complex loop, so we end up with some
checks either way.
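
For reference, at the time of this thread for_each_cpu_wrap() expanded
to roughly the following in include/linux/cpumask.h, so each step of
the iteration goes through cpumask_next_wrap() and its start/wrap
bookkeeping:

	#define for_each_cpu_wrap(cpu, mask, start)				\
		for ((cpu) = cpumask_next_wrap((start)-1, (mask), (start), false);	\
		     (cpu) < nr_cpumask_bits;					\
		     (cpu) = cpumask_next_wrap((cpu), (mask), (start), true))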

OTOH, the nesting is not too deep (two loops then one if), so I guess
the current version is fine.

Acked-by: Song Liu <song@kernel.org>




* Re: [External] Re: Re: [PATCH v2] bpf: Simplify code by using for_each_cpu_wrap()
  2022-09-08 20:21     ` Song Liu
@ 2022-09-09  8:59       ` Punit Agrawal
  0 siblings, 0 replies; 6+ messages in thread
From: Punit Agrawal @ 2022-09-09  8:59 UTC (permalink / raw)
  To: Song Liu
  Cc: Punit Agrawal, Alexei Starovoitov, bpf, open list, zhoufeng.zf,
	Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau,
	Yonghong Song, John Fastabend, KP Singh, Jiri Olsa

Song Liu <song@kernel.org> writes:

> On Thu, Sep 8, 2022 at 3:45 AM Punit Agrawal
> <punit.agrawal@bytedance.com> wrote:
>>
>> Hi Song,
>>
>> Thanks for taking a look.
>>
>> Song Liu <song@kernel.org> writes:
>>
>> > On Wed, Sep 7, 2022 at 8:58 AM Punit Agrawal
>> > <punit.agrawal@bytedance.com> wrote:
>> >>
>> >> [...]
>> >
>> > I personally don't like nested loops here. Maybe we can keep
>> > the original while loop and use cpumask_next_wrap()?
>>
>> Out of curiosity, is there a reason to avoid nesting here? The nested
>> loop avoids the unnecessary "cpu == orig_cpu" check on every iteration.
>
> for_each_cpu_wrap() is a more complex loop, so we end up with some
> checks either way.

That's true, indeed. While putting the patch together I was wondering
about the need for a simpler / optimized version of for_each_cpu_wrap().
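
As an illustration of that thought (purely hypothetical, no such helper
exists in the kernel), a leaner variant could skip the initial
cpumask_next_wrap() call when the starting CPU is known to be set in
the mask, as raw_smp_processor_id() always is in cpu_possible_mask:

	/* Hypothetical: visit every CPU in @mask once, starting at @start;
	 * assumes @start is set in @mask.
	 */
	#define for_each_cpu_wrap_simple(cpu, mask, start)			\
		for ((cpu) = (start);						\
		     (cpu) < nr_cpumask_bits;					\
		     (cpu) = cpumask_next_wrap((cpu), (mask), (start), true))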

> OTOH, the nesting is not too deep (two loops then one if), so I guess
> the current version is fine.
>
> Acked-by: Song Liu <song@kernel.org>
>

Thanks!

[...]



* Re: [PATCH v2] bpf: Simplify code by using for_each_cpu_wrap()
  2022-09-07 15:57 [PATCH v2] bpf: Simplify code by using for_each_cpu_wrap() Punit Agrawal
  2022-09-08  0:55 ` Song Liu
@ 2022-09-10 23:30 ` patchwork-bot+netdevbpf
  1 sibling, 0 replies; 6+ messages in thread
From: patchwork-bot+netdevbpf @ 2022-09-10 23:30 UTC (permalink / raw)
  To: Punit Agrawal
  Cc: ast, bpf, linux-kernel, zhoufeng.zf, daniel, andrii, martin.lau,
	song, yhs, john.fastabend, kpsingh, jolsa

Hello:

This patch was applied to bpf/bpf-next.git (master)
by Alexei Starovoitov <ast@kernel.org>:

On Wed,  7 Sep 2022 16:57:46 +0100 you wrote:
> In the percpu freelist code, it is a common pattern to iterate over
> the possible CPUs mask starting with the current CPU. The pattern is
> implemented using a hand-rolled while loop with the loop variable
> increment being open-coded.
> 
> Simplify the code by using the for_each_cpu_wrap() helper to iterate
> over the possible CPUs starting with the current CPU. As a result,
> some of the special-casing in the loop also gets simplified.
> 
> [...]

Here is the summary with links:
  - [v2] bpf: Simplify code by using for_each_cpu_wrap()
    https://git.kernel.org/bpf/bpf-next/c/57c92f11a215

You are awesome, thank you!
-- 
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html



