On Thu, Mar 4, 2021 at 7:24 PM Sedat Dilek wrote:
>
> On Thu, Mar 4, 2021 at 6:34 PM 'Nick Desaulniers' via Clang Built Linux wrote:
> >
> > On Wed, Mar 3, 2021 at 2:48 PM Josh Don wrote:
> > >
> > > From: Clement Courbet
> > >
> > > A significant portion of __calc_delta() time is spent in the loop
> > > shifting a u64 by 32 bits. Use `fls` instead of iterating.
> > >
> > > This is ~7x faster on benchmarks.
> > >
> > > The generic `fls` implementation (`generic_fls`) is still ~4x faster
> > > than the loop.
> > > Architectures that have a better implementation will make use of it. For
> > > example, on x86 we get an additional factor of 2 in speed without a
> > > dedicated implementation.
> > >
> > > On gcc, the asm versions of `fls` are about the same speed as the
> > > builtin. On clang, the versions that use fls are more than twice as
> > > slow as the builtin. This is because of the way the `fls` function is
> > > written; clang puts the value in memory:
> > > https://godbolt.org/z/EfMbYe. This bug is filed at
> > > https://bugs.llvm.org/show_bug.cgi?id=49406.
> >
> > Hi Josh, Thanks for helping get this patch across the finish line.
> > Would you mind updating the commit message to point to
> > https://bugs.llvm.org/show_bug.cgi?id=20197?
> > >
> > > ```
> > > name                                   cpu/op
> > > BM_Calc<__calc_delta_loop>             9.57ms ±12%
> > > BM_Calc<__calc_delta_generic_fls>      2.36ms ±13%
> > > BM_Calc<__calc_delta_asm_fls>          2.45ms ±13%
> > > BM_Calc<__calc_delta_asm_fls_nomem>    1.66ms ±12%
> > > BM_Calc<__calc_delta_asm_fls64>        2.46ms ±13%
> > > BM_Calc<__calc_delta_asm_fls64_nomem>  1.34ms ±15%
> > > BM_Calc<__calc_delta_builtin>          1.32ms ±11%
> > > ```
> > >
> > > Signed-off-by: Clement Courbet
> > > Signed-off-by: Josh Don
> > > ---
> > >  kernel/sched/fair.c  | 19 +++++++++++--------
> > >  kernel/sched/sched.h |  1 +
> > >  2 files changed, 12 insertions(+), 8 deletions(-)
> > >
> > > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > > index 8a8bd7b13634..a691371960ae 100644
> > > --- a/kernel/sched/fair.c
> > > +++ b/kernel/sched/fair.c
> > > @@ -229,22 +229,25 @@ static void __update_inv_weight(struct load_weight *lw)
> > >  static u64 __calc_delta(u64 delta_exec, unsigned long weight, struct load_weight *lw)
> > >  {
> > >          u64 fact = scale_load_down(weight);
> > > +        u32 fact_hi = (u32)(fact >> 32);
> > >          int shift = WMULT_SHIFT;
> > > +        int fs;
> > >
> > >          __update_inv_weight(lw);
> > >
> > > -        if (unlikely(fact >> 32)) {
> > > -                while (fact >> 32) {
> > > -                        fact >>= 1;
> > > -                        shift--;
> > > -                }
> > > +        if (unlikely(fact_hi)) {
> > > +                fs = fls(fact_hi);
> > > +                shift -= fs;
> > > +                fact >>= fs;
> > >          }
> > >
> > >          fact = mul_u32_u32(fact, lw->inv_weight);
> > >
> > > -        while (fact >> 32) {
> > > -                fact >>= 1;
> > > -                shift--;
> > > +        fact_hi = (u32)(fact >> 32);
> > > +        if (fact_hi) {
> > > +                fs = fls(fact_hi);
> > > +                shift -= fs;
> > > +                fact >>= fs;
> > >          }
> > >
> > >          return mul_u64_u32_shr(delta_exec, fact, shift);
> > > diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> > > index 10a1522b1e30..714af71cf983 100644
> > > --- a/kernel/sched/sched.h
> > > +++ b/kernel/sched/sched.h
> > > @@ -36,6 +36,7 @@
> > >  #include
> > >
> > >  #include
> > > +#include <linux/bitops.h>
> >
> > This hunk of the patch is curious. I assume that bitops.h is needed
> > for fls(); if so, why not #include it in kernel/sched/fair.c?
> > Otherwise this potentially hurts compile time for all TUs that include
> > kernel/sched/sched.h.
> >
>
> I have v2 as-is in my custom patchset and booted right now on bare metal.
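To make the quoted change concrete, here is a small standalone userspace sketch (an illustration added here, not kernel code and not part of the original thread): it checks that the fls()-based reduction produces exactly the same fact/shift pair as the old one-bit-at-a-time loop, with demo_fls() standing in for the kernel's fls() via __builtin_clz().

```
/*
 * Standalone illustration only -- not kernel code. demo_fls() emulates
 * the kernel's fls(): 1-based index of the most significant set bit,
 * 0 for a zero input.
 */
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

static int demo_fls(uint32_t x)
{
        return x ? 32 - __builtin_clz(x) : 0;
}

/* Old __calc_delta() reduction: shift one bit at a time. */
static void reduce_loop(uint64_t *fact, int *shift)
{
        while (*fact >> 32) {
                *fact >>= 1;
                (*shift)--;
        }
}

/* New reduction: derive the whole shift count from the high 32 bits. */
static void reduce_fls(uint64_t *fact, int *shift)
{
        uint32_t fact_hi = (uint32_t)(*fact >> 32);

        if (fact_hi) {
                int fs = demo_fls(fact_hi);

                *shift -= fs;
                *fact >>= fs;
        }
}

int main(void)
{
        const uint64_t samples[] = {
                1, 0xffffffffULL, 0x100000000ULL, 0xdeadbeefcafeULL, UINT64_MAX
        };

        for (size_t i = 0; i < sizeof(samples) / sizeof(samples[0]); i++) {
                uint64_t a = samples[i], b = samples[i];
                int sa = 32, sb = 32;   /* stand-in for WMULT_SHIFT */

                reduce_loop(&a, &sa);
                reduce_fls(&b, &sb);
                assert(a == b && sa == sb);
                printf("%#llx -> fact=%#llx shift=%d\n",
                       (unsigned long long)samples[i],
                       (unsigned long long)a, sa);
        }
        return 0;
}
```

That equivalence is the whole point of the patch: fls(fact_hi) is precisely the number of single-bit shifts the old loop would have executed, so the per-bit loop collapses into a single bit scan.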
>
> As Nick points out, moving the include makes sense to me.
> We have a lot of includes in the wrong places, increasing build time.
> I tried with the attached patch.

$ LC_ALL=C ll kernel/sched/fair.o
-rw-r--r-- 1 dileks dileks 1.2M Mar 4 20:11 kernel/sched/fair.o

- Sedat -
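Sedat's attached patch is not reproduced in the archive. Purely as a hypothetical sketch of the move Nick is asking for (hunk headers and context lines elided; not the actual attachment), the change would have roughly this shape:

```
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ ... @@
-#include <linux/bitops.h>

--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ ... @@
+#include <linux/bitops.h>       /* fls(), used by __calc_delta() */
```

The rationale is the one Nick gives above: an include added to sched.h is re-parsed by every translation unit that includes sched.h, while fair.c is the only file that needs fls() for this change.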
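On the clang slowdown the commit message mentions: x86's fls() is an inline-asm wrapper around bsr whose input operand uses an "rm" constraint, and the cited bug reports concern clang picking the memory alternative of such constraints, so the value is spilled to the stack before bsr reads it back. A rough sketch of that pattern, assuming an x86 target (modeled loosely on the kernel helper, not copied from it):

```
/*
 * Illustration only (x86). The "rm" constraint allows the compiler to pass
 * `x` either in a register or in memory; the slowdown discussed in the
 * thread comes from clang choosing the memory form, so `x` is stored to
 * the stack and bsr reads it from there. The -1 preload relies on bsr
 * leaving the destination unchanged for a zero input, the same assumption
 * the kernel's 64-bit x86 fls() documents.
 */
static inline int bsr_rm(unsigned int x)
{
        int r = -1;

        asm("bsrl %1,%0" : "+r" (r) : "rm" (x));
        return r + 1;   /* 0 when x == 0, 1-based index of the top set bit otherwise */
}
```

With gcc the register alternative is normally chosen, which matches the commit message's observation that the asm versions are about as fast as the builtin there.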