[v2] sched: Optimize __calc_delta.

Message ID: 20210303224653.2579656-1-joshdon@google.com
State: Accepted
Commit: 1e17fb8edc5ad6587e9303ccdebce853bc8cf30c
Series: [v2] sched: Optimize __calc_delta.

Commit Message

Josh Don March 3, 2021, 10:46 p.m. UTC
From: Clement Courbet <courbet@google.com>

A significant portion of __calc_delta() time is spent in the loop that
shifts a u64 right one bit at a time until its upper 32 bits are clear.
Use `fls` to compute the required shift in a single step instead of
iterating.

This is ~7x faster on benchmarks.

Even the generic `fls` implementation (`generic_fls`) is ~4x faster
than the loop. Architectures that provide their own implementation
benefit further; on x86, for example, the dedicated version gains an
additional factor of ~2 over the generic one.

On gcc, the asm versions of `fls` are about the same speed as the
builtin. On clang, the versions that use `fls` are more than twice as
slow as the builtin: because of the way the `fls` function is written,
clang spills the operand to memory (https://godbolt.org/z/EfMbYe). This
bug is filed at https://bugs.llvm.org/show_bug.cgi?id=49406.

```
name                                   cpu/op
BM_Calc<__calc_delta_loop>             9.57ms ±12%
BM_Calc<__calc_delta_generic_fls>      2.36ms ±13%
BM_Calc<__calc_delta_asm_fls>          2.45ms ±13%
BM_Calc<__calc_delta_asm_fls_nomem>    1.66ms ±12%
BM_Calc<__calc_delta_asm_fls64>        2.46ms ±13%
BM_Calc<__calc_delta_asm_fls64_nomem>  1.34ms ±15%
BM_Calc<__calc_delta_builtin>          1.32ms ±11%
```

Signed-off-by: Clement Courbet <courbet@google.com>
Signed-off-by: Josh Don <joshdon@google.com>
---
 kernel/sched/fair.c  | 19 +++++++++++--------
 kernel/sched/sched.h |  1 +
 2 files changed, 12 insertions(+), 8 deletions(-)
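
[Editor's note] For intuition, the transformation hinges on the fact that shifting right by fls() of the upper half clears the upper 32 bits in one step. A minimal userspace sketch checking that equivalence (not part of the patch; `fls_u32` is a hypothetical stand-in for the kernel's `fls()`):

```
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-in for the kernel's fls(): position of the most
 * significant set bit, 1-based; 0 when no bit is set. */
static int fls_u32(uint32_t x)
{
	return x ? 32 - __builtin_clz(x) : 0;
}

/* Old approach: shift one bit at a time until the upper 32 bits clear. */
static void normalize_loop(uint64_t *fact, int *shift)
{
	while (*fact >> 32) {
		*fact >>= 1;
		(*shift)--;
	}
}

/* New approach: compute the same shift in one step with fls. */
static void normalize_fls(uint64_t *fact, int *shift)
{
	uint32_t hi = (uint32_t)(*fact >> 32);

	if (hi) {
		int fs = fls_u32(hi);

		*shift -= fs;
		*fact >>= fs;
	}
}

int main(void)
{
	uint64_t a = 0x1234567890abcdefULL, b = a;
	int sa = 32, sb = 32;

	normalize_loop(&a, &sa);
	normalize_fls(&b, &sb);
	assert(a == b && sa == sb);
	printf("fact=%#llx shift=%d\n", (unsigned long long)a, sa);
	return 0;
}
```

Compiled with `gcc -O2`, the assertion passes; the kernel change applies the same rewrite at both normalization sites in __calc_delta().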

Comments

Peter Zijlstra March 4, 2021, 8:31 a.m. UTC | #1
On Wed, Mar 03, 2021 at 02:46:53PM -0800, Josh Don wrote:
> From: Clement Courbet <courbet@google.com>
> 
> [snip]
> 
> Signed-off-by: Clement Courbet <courbet@google.com>
> Signed-off-by: Josh Don <joshdon@google.com>

Thanks!
Nick Desaulniers March 4, 2021, 5:34 p.m. UTC | #2
On Wed, Mar 3, 2021 at 2:48 PM Josh Don <joshdon@google.com> wrote:
>
> From: Clement Courbet <courbet@google.com>
>
> [snip]
>
> On gcc, the asm versions of `fls` are about the same speed as the
> builtin. On clang, the versions that use `fls` are more than twice as
> slow as the builtin: because of the way the `fls` function is written,
> clang spills the operand to memory (https://godbolt.org/z/EfMbYe). This
> bug is filed at https://bugs.llvm.org/show_bug.cgi?id=49406.

Hi Josh, Thanks for helping get this patch across the finish line.
Would you mind updating the commit message to point to
https://bugs.llvm.org/show_bug.cgi?id=20197?

>
> [snip: quoted patch]
>
> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> index 10a1522b1e30..714af71cf983 100644
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -36,6 +36,7 @@
>  #include <uapi/linux/sched/types.h>
>
>  #include <linux/binfmts.h>
> +#include <linux/bitops.h>

This hunk of the patch is curious.  I assume that bitops.h is needed
for fls(); if so, why not #include it in kernel/sched/fair.c?
Otherwise this potentially hurts compile time for all TUs that include
kernel/sched/sched.h.

Sedat Dilek March 4, 2021, 6:24 p.m. UTC | #3
On Thu, Mar 4, 2021 at 6:34 PM 'Nick Desaulniers' via Clang Built
Linux <clang-built-linux@googlegroups.com> wrote:
>
> [snip: quoted patch and earlier discussion]
>
> This hunk of the patch is curious.  I assume that bitops.h is needed
> for fls(); if so, why not #include it in kernel/sched/fair.c?
> Otherwise this potentially hurts compile time for all TUs that include
> kernel/sched/sched.h.
>

I have v2 as-is in my custom patchset and have just booted it on bare metal.

As Nick points out, moving the include makes sense to me.
We have a lot of includes in the wrong places, which increases build time.

- Sedat -

Sedat Dilek March 4, 2021, 7:21 p.m. UTC | #4
On Thu, Mar 4, 2021 at 7:24 PM Sedat Dilek <sedat.dilek@gmail.com> wrote:
>
> On Thu, Mar 4, 2021 at 6:34 PM 'Nick Desaulniers' via Clang Built
> Linux <clang-built-linux@googlegroups.com> wrote:
> >
> > [snip: quoted patch and earlier discussion]
> >
> > This hunk of the patch is curious.  I assume that bitops.h is needed
> > for fls(); if so, why not #include it in kernel/sched/fair.c?
> > Otherwise this potentially hurts compile time for all TUs that include
> > kernel/sched/sched.h.
> >
>
> I have v2 as-is in my custom patchset and have just booted it on bare metal.
>
> As Nick points out, moving the include makes sense to me.
> We have a lot of includes in the wrong places, which increases build time.
>

I tried with the attached patch.

$ LC_ALL=C ll kernel/sched/fair.o
-rw-r--r-- 1 dileks dileks 1.2M Mar  4 20:11 kernel/sched/fair.o

- Sedat -
Josh Don March 5, 2021, 1:04 a.m. UTC | #5
On Thu, Mar 4, 2021 at 9:34 AM Nick Desaulniers <ndesaulniers@google.com> wrote:
>
>
> Hi Josh, Thanks for helping get this patch across the finish line.
> Would you mind updating the commit message to point to
> https://bugs.llvm.org/show_bug.cgi?id=20197?

Sure thing, just saw that it got marked as a dup.

Peter, since you've already pulled the patch, can you modify the
commit message directly? Nick also recommended dropping the
punctuation in the commit oneline.

> >  #include <linux/binfmts.h>
> > +#include <linux/bitops.h>
>
> This hunk of the patch is curious.  I assume that bitops.h is needed
> for fls(); if so, why not #include it in kernel/sched/fair.c?
> Otherwise this potentially hurts compile time for all TUs that include
> kernel/sched/sched.h.

bitops.h is already included in sched.h via another include, so this
was just meant to make it more explicit. Motivation for putting it
here vs. fair.c was commit 325ea10c080940.
David Laight March 5, 2021, 5:13 p.m. UTC | #6
> Hi Josh, Thanks for helping get this patch across the finish line.
> Would you mind updating the commit message to point to
> https://bugs.llvm.org/show_bug.cgi?id=20197?

Is it worth an audit of all the asm() constraints
and potentially changing all the "mr" to "r" for clang?

The explicit 'load into a register' won't make much
difference even if a direct "m" operand could be used on x86.

	David
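
[Editor's note] For reference, a simplified sketch of the x86-64 `fls()` under discussion, alongside the register-only variant David suggests (adapted from arch/x86/include/asm/bitops.h; the real function also handles 32-bit kernels):

```
#include <stdio.h>

/* Roughly the x86-64 fls() as currently written: the "rm" constraint
 * lets the compiler pick a memory or register operand for %1, and
 * clang picks memory, forcing the value through the stack. The
 * "0" (-1) tie preloads the result with -1 so fls(0) == 0, relying on
 * BSR leaving the destination unmodified for a zero input (documented
 * by AMD; Intel lists it as undefined, but the kernel relies on it). */
static inline int fls_rm(unsigned int x)
{
	int r;

	asm("bsrl %1,%0"
	    : "=r" (r)
	    : "rm" (x), "0" (-1));
	return r + 1;
}

/* The same with "r" only, as suggested: the operand must be in a
 * register, so clang cannot spill it, at the cost of an explicit load
 * when the value happens to live in memory. */
static inline int fls_r(unsigned int x)
{
	int r;

	asm("bsrl %1,%0"
	    : "=r" (r)
	    : "r" (x), "0" (-1));
	return r + 1;
}

int main(void)
{
	printf("%d %d\n", fls_rm(0x12345678), fls_r(0));	/* 29 0 */
	return 0;
}
```

The trade-off David alludes to: with "r" only, gcc loses the option of folding a load directly into the bsrl, which in practice makes little difference.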


Patch

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 8a8bd7b13634..a691371960ae 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -229,22 +229,25 @@  static void __update_inv_weight(struct load_weight *lw)
 static u64 __calc_delta(u64 delta_exec, unsigned long weight, struct load_weight *lw)
 {
 	u64 fact = scale_load_down(weight);
+	u32 fact_hi = (u32)(fact >> 32);
 	int shift = WMULT_SHIFT;
+	int fs;
 
 	__update_inv_weight(lw);
 
-	if (unlikely(fact >> 32)) {
-		while (fact >> 32) {
-			fact >>= 1;
-			shift--;
-		}
+	if (unlikely(fact_hi)) {
+		fs = fls(fact_hi);
+		shift -= fs;
+		fact >>= fs;
 	}
 
 	fact = mul_u32_u32(fact, lw->inv_weight);
 
-	while (fact >> 32) {
-		fact >>= 1;
-		shift--;
+	fact_hi = (u32)(fact >> 32);
+	if (fact_hi) {
+		fs = fls(fact_hi);
+		shift -= fs;
+		fact >>= fs;
 	}
 
 	return mul_u64_u32_shr(delta_exec, fact, shift);
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 10a1522b1e30..714af71cf983 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -36,6 +36,7 @@ 
 #include <uapi/linux/sched/types.h>
 
 #include <linux/binfmts.h>
+#include <linux/bitops.h>
 #include <linux/blkdev.h>
 #include <linux/compat.h>
 #include <linux/context_tracking.h>
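
[Editor's note] For readers unfamiliar with the function: __calc_delta() computes delta_exec * weight / lw->weight in fixed point, as (delta_exec * (weight * inv_weight)) >> shift, where inv_weight ~= 2^32 / lw->weight is precomputed by __update_inv_weight(); the fls-based steps shrink `fact` back into 32 bits so the multiplies cannot overflow. A rough userspace model (assumptions: `fls_u32` stands in for the kernel's fls(), `unsigned __int128` models mul_u64_u32_shr(), and the inverse is computed inline with the w == 0 and oversized-w edge cases omitted):

```
#include <stdint.h>
#include <stdio.h>

#define WMULT_SHIFT 32

static int fls_u32(uint32_t x)
{
	return x ? 32 - __builtin_clz(x) : 0;
}

/* Models mul_u64_u32_shr(): (a * b) >> shift via a 128-bit intermediate. */
static uint64_t mul_shr(uint64_t a, uint32_t b, unsigned int shift)
{
	return (uint64_t)(((unsigned __int128)a * b) >> shift);
}

/* Rough model of __calc_delta(): delta_exec * weight / w, with
 * inv ~= (2^32 - 1) / w as __update_inv_weight() computes it. */
static uint64_t calc_delta_model(uint64_t delta_exec, uint64_t weight, uint32_t w)
{
	uint64_t fact = weight;
	uint32_t inv = 0xffffffffU / w;
	uint32_t fact_hi = (uint32_t)(fact >> 32);
	int shift = WMULT_SHIFT;
	int fs;

	if (fact_hi) {
		fs = fls_u32(fact_hi);
		shift -= fs;
		fact >>= fs;
	}

	fact = (uint64_t)(uint32_t)fact * inv;	/* mul_u32_u32() */

	fact_hi = (uint32_t)(fact >> 32);
	if (fact_hi) {
		fs = fls_u32(fact_hi);
		shift -= fs;
		fact >>= fs;
	}

	return mul_shr(delta_exec, fact, shift);
}

int main(void)
{
	/* 1000000 units of runtime at weight 1024 against a total queue
	 * weight of 3072 should yield roughly a third. */
	printf("%llu\n",
	       (unsigned long long)calc_delta_model(1000000, 1024, 3072));
	return 0;
}
```

With these example inputs the model prints 333333, matching the exact quotient 1000000 * 1024 / 3072 up to rounding.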