* [PATCH] sched/cpuacct: Use __this_cpu_add() instead of this_cpu_ptr()
@ 2020-04-16 6:53 Muchun Song
From: Muchun Song @ 2020-04-16 6:53 UTC (permalink / raw)
To: mingo, peterz, juri.lelli, vincent.guittot, dietmar.eggemann,
rostedt, bsegall, mgorman, mingo
Cc: linux-kernel, Muchun Song
The two look equivalent, but on some architectures (e.g. x86_64)
__this_cpu_add() allows better code generation. Disassembling the
code on x86_64 shows the difference between them.
1) this_cpu_ptr(ca->cpuusage)->usages[index] += cputime;
ffffffff810d7227: add %gs:0x7ef37fa9(%rip),%rax # f1d8 <this_cpu_off>
ffffffff810d722f: add %rsi,(%rax) # %rsi is @cputime
This results in two add instructions being emitted by the compiler.
2) __this_cpu_add(ca->cpuusage->usages[index], cputime);
ffffffff810d7227: add %rsi,%gs:(%rax) # %rsi is @cputime
This results in only one add instruction being emitted by the compiler.
So there is good reason to use __this_cpu_add().
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
kernel/sched/cpuacct.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/kernel/sched/cpuacct.c b/kernel/sched/cpuacct.c
index 9fbb103834345..6448b0438ffb2 100644
--- a/kernel/sched/cpuacct.c
+++ b/kernel/sched/cpuacct.c
@@ -347,7 +347,7 @@ void cpuacct_charge(struct task_struct *tsk, u64 cputime)
rcu_read_lock();
for (ca = task_ca(tsk); ca; ca = parent_ca(ca))
- this_cpu_ptr(ca->cpuusage)->usages[index] += cputime;
+ __this_cpu_add(ca->cpuusage->usages[index], cputime);
rcu_read_unlock();
}
@@ -363,7 +363,7 @@ void cpuacct_account_field(struct task_struct *tsk, int index, u64 val)
rcu_read_lock();
for (ca = task_ca(tsk); ca != &root_cpuacct; ca = parent_ca(ca))
- this_cpu_ptr(ca->cpustat)->cpustat[index] += val;
+ __this_cpu_add(ca->cpustat->cpustat[index], val);
rcu_read_unlock();
}
--
2.11.0
* Re: [PATCH] sched/cpuacct: Use __this_cpu_add() instead of this_cpu_ptr()
From: Peter Zijlstra @ 2020-04-16 7:27 UTC (permalink / raw)
To: Muchun Song
Cc: mingo, juri.lelli, vincent.guittot, dietmar.eggemann, rostedt,
bsegall, mgorman, mingo, linux-kernel
On Thu, Apr 16, 2020 at 02:53:10PM +0800, Muchun Song wrote:
> The two look equivalent, but on some architectures (e.g. x86_64)
> __this_cpu_add() allows better code generation. Disassembling the
> code on x86_64 shows the difference between them.
>
> 1) this_cpu_ptr(ca->cpuusage)->usages[index] += cputime;
>
> ffffffff810d7227: add %gs:0x7ef37fa9(%rip),%rax # f1d8 <this_cpu_off>
> ffffffff810d722f: add %rsi,(%rax) # %rsi is @cputime
>
> This results in two add instructions being emitted by the compiler.
>
> 2) __this_cpu_add(ca->cpuusage->usages[index], cputime);
>
> ffffffff810d7227: add %rsi,%gs:(%rax) # %rsi is @cputime
>
> This results in only one add instruction being emitted by the compiler.
>
> So there is good reason to use __this_cpu_add().
The patch is OK, but I can't take it with such complete nonsense for a
Changelog.
The reason this_cpu_add() and __this_cpu_add() exist and are different
is for different calling contexts. this_cpu_*() is always safe and
correct but, as you noticed, not always optimal. __this_cpu_*() relies
on the caller already having preemption (and/or IRQs) disabled to allow
for better code-gen.
Now, the below call-sites have rq->lock taken, and this means preemption
(and IRQs) are indeed disabled, so it is safe to use __this_cpu_*().
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> ---
> kernel/sched/cpuacct.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/sched/cpuacct.c b/kernel/sched/cpuacct.c
> index 9fbb103834345..6448b0438ffb2 100644
> --- a/kernel/sched/cpuacct.c
> +++ b/kernel/sched/cpuacct.c
> @@ -347,7 +347,7 @@ void cpuacct_charge(struct task_struct *tsk, u64 cputime)
> rcu_read_lock();
>
> for (ca = task_ca(tsk); ca; ca = parent_ca(ca))
> - this_cpu_ptr(ca->cpuusage)->usages[index] += cputime;
> + __this_cpu_add(ca->cpuusage->usages[index], cputime);
>
> rcu_read_unlock();
> }
> @@ -363,7 +363,7 @@ void cpuacct_account_field(struct task_struct *tsk, int index, u64 val)
>
> rcu_read_lock();
> for (ca = task_ca(tsk); ca != &root_cpuacct; ca = parent_ca(ca))
> - this_cpu_ptr(ca->cpustat)->cpustat[index] += val;
> + __this_cpu_add(ca->cpustat->cpustat[index], val);
> rcu_read_unlock();
> }
>
> --
> 2.11.0
>
* Re: [External] Re: [PATCH] sched/cpuacct: Use __this_cpu_add() instead of this_cpu_ptr()
From: Muchun Song @ 2020-04-16 8:17 UTC (permalink / raw)
To: Peter Zijlstra
Cc: mingo, juri.lelli, Vincent Guittot, dietmar.eggemann, rostedt,
Benjamin Segall, mgorman, mingo, linux-kernel
On Thu, Apr 16, 2020 at 3:27 PM Peter Zijlstra <peterz@infradead.org> wrote:
>
> On Thu, Apr 16, 2020 at 02:53:10PM +0800, Muchun Song wrote:
> > The two look equivalent, but on some architectures (e.g. x86_64)
> > __this_cpu_add() allows better code generation. Disassembling the
> > code on x86_64 shows the difference between them.
> >
> > 1) this_cpu_ptr(ca->cpuusage)->usages[index] += cputime;
> >
> > ffffffff810d7227: add %gs:0x7ef37fa9(%rip),%rax # f1d8 <this_cpu_off>
> > ffffffff810d722f: add %rsi,(%rax) # %rsi is @cputime
> >
> > This results in two add instructions being emitted by the compiler.
> >
> > 2) __this_cpu_add(ca->cpuusage->usages[index], cputime);
> >
> > ffffffff810d7227: add %rsi,%gs:(%rax) # %rsi is @cputime
> >
> > This results in only one add instruction being emitted by the compiler.
> >
> > So there is good reason to use __this_cpu_add().
>
> The patch is OK, but I can't take it with such complete nonsense for a
> Changelog.
>
> The reason this_cpu_add() and __this_cpu_add() exist and are different
> is for different calling contexts. this_cpu_*() is always safe and
> correct but, as you noticed, not always optimal. __this_cpu_*() relies
> on the caller already having preemption (and/or IRQs) disabled to allow
> for better code-gen.
>
> Now, the below call-sites have rq->lock taken, and this means preemption
> (and IRQs) are indeed disabled, so it is safe to use __this_cpu_*().
Thanks Peter. I will update the changelog.
>
> > Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> > ---
> > kernel/sched/cpuacct.c | 4 ++--
> > 1 file changed, 2 insertions(+), 2 deletions(-)
> >
> > diff --git a/kernel/sched/cpuacct.c b/kernel/sched/cpuacct.c
> > index 9fbb103834345..6448b0438ffb2 100644
> > --- a/kernel/sched/cpuacct.c
> > +++ b/kernel/sched/cpuacct.c
> > @@ -347,7 +347,7 @@ void cpuacct_charge(struct task_struct *tsk, u64 cputime)
> > rcu_read_lock();
> >
> > for (ca = task_ca(tsk); ca; ca = parent_ca(ca))
> > - this_cpu_ptr(ca->cpuusage)->usages[index] += cputime;
> > + __this_cpu_add(ca->cpuusage->usages[index], cputime);
> >
> > rcu_read_unlock();
> > }
> > @@ -363,7 +363,7 @@ void cpuacct_account_field(struct task_struct *tsk, int index, u64 val)
> >
> > rcu_read_lock();
> > for (ca = task_ca(tsk); ca != &root_cpuacct; ca = parent_ca(ca))
> > - this_cpu_ptr(ca->cpustat)->cpustat[index] += val;
> > + __this_cpu_add(ca->cpustat->cpustat[index], val);
> > rcu_read_unlock();
> > }
> >
> > --
> > 2.11.0
> >
--
Yours,
Muchun