* [PATCH 0/5] sched/cputime: irqtime cleanups
@ 2016-09-02 14:03 Frederic Weisbecker
2016-09-02 14:03 ` [PATCH 1/5] irqtime: No need for preempt-safe accessors Frederic Weisbecker
` (4 more replies)
0 siblings, 5 replies; 18+ messages in thread
From: Frederic Weisbecker @ 2016-09-02 14:03 UTC (permalink / raw)
To: LKML
Cc: Frederic Weisbecker, Paolo Bonzini, Peter Zijlstra, Wanpeng Li,
Eric Dumazet, Ingo Molnar, Mike Galbraith, Rik van Riel
This series contains a few optimizations that avoid needless IRQ disabling;
the rest is consolidation.
git://git.kernel.org/pub/scm/linux/kernel/git/frederic/linux-dynticks.git
sched/irqtime
HEAD: 63024a0947091e0a20bafa33ccccb685ae33c275
Thanks,
Frederic
---
Frederic Weisbecker (5):
irqtime: No need for preempt-safe accessors
irqtime: Remove needless IRQs disablement on kcpustat update
u64_stats: Introduce IRQs disabled helpers
irqtime: Consolidate accounting synchronization with u64_stats API
irqtime: Consolidate irqtime flushing code
include/linux/u64_stats_sync.h | 49 +++++++++++++++++++++-----------------
kernel/sched/cputime.c | 51 ++++++++++++++++------------------------
kernel/sched/sched.h | 53 ++++++++++++------------------------------
3 files changed, 63 insertions(+), 90 deletions(-)
^ permalink raw reply [flat|nested] 18+ messages in thread
* [PATCH 1/5] irqtime: No need for preempt-safe accessors
2016-09-02 14:03 [PATCH 0/5] sched/cputime: irqtime cleanups Frederic Weisbecker
@ 2016-09-02 14:03 ` Frederic Weisbecker
2016-09-06 16:59 ` Rik van Riel
2016-09-02 14:03 ` [PATCH 2/5] irqtime: Remove needless IRQs disablement on kcpustat update Frederic Weisbecker
` (3 subsequent siblings)
4 siblings, 1 reply; 18+ messages in thread
From: Frederic Weisbecker @ 2016-09-02 14:03 UTC (permalink / raw)
To: LKML
Cc: Frederic Weisbecker, Paolo Bonzini, Peter Zijlstra, Wanpeng Li,
Eric Dumazet, Ingo Molnar, Mike Galbraith, Rik van Riel
We can safely use the preempt-unsafe accessors for irqtime when we
flush its counters to kcpustat, as IRQs are disabled at that time.
Cc: Rik van Riel <riel@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Wanpeng Li <wanpeng.li@hotmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
---
kernel/sched/cputime.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
index b93c72d..f111076 100644
--- a/kernel/sched/cputime.c
+++ b/kernel/sched/cputime.c
@@ -82,7 +82,7 @@ static cputime_t irqtime_account_hi_update(cputime_t maxtime)
cputime_t irq_cputime;
local_irq_save(flags);
- irq_cputime = nsecs_to_cputime64(this_cpu_read(cpu_hardirq_time)) -
+ irq_cputime = nsecs_to_cputime64(__this_cpu_read(cpu_hardirq_time)) -
cpustat[CPUTIME_IRQ];
irq_cputime = min(irq_cputime, maxtime);
cpustat[CPUTIME_IRQ] += irq_cputime;
@@ -97,7 +97,7 @@ static cputime_t irqtime_account_si_update(cputime_t maxtime)
cputime_t softirq_cputime;
local_irq_save(flags);
- softirq_cputime = nsecs_to_cputime64(this_cpu_read(cpu_softirq_time)) -
+ softirq_cputime = nsecs_to_cputime64(__this_cpu_read(cpu_softirq_time)) -
cpustat[CPUTIME_SOFTIRQ];
softirq_cputime = min(softirq_cputime, maxtime);
cpustat[CPUTIME_SOFTIRQ] += softirq_cputime;
--
2.7.0
* [PATCH 2/5] irqtime: Remove needless IRQs disablement on kcpustat update
2016-09-02 14:03 [PATCH 0/5] sched/cputime: irqtime cleanups Frederic Weisbecker
2016-09-02 14:03 ` [PATCH 1/5] irqtime: No need for preempt-safe accessors Frederic Weisbecker
@ 2016-09-02 14:03 ` Frederic Weisbecker
2016-09-02 14:53 ` Paolo Bonzini
` (2 more replies)
2016-09-02 14:03 ` [PATCH 3/5] u64_stats: Introduce IRQs disabled helpers Frederic Weisbecker
` (2 subsequent siblings)
4 siblings, 3 replies; 18+ messages in thread
From: Frederic Weisbecker @ 2016-09-02 14:03 UTC (permalink / raw)
To: LKML
Cc: Frederic Weisbecker, Paolo Bonzini, Peter Zijlstra, Wanpeng Li,
Eric Dumazet, Ingo Molnar, Mike Galbraith, Rik van Riel
The callers of the functions performing irqtime kcpustat updates have
IRQs disabled, so there is no need to disable them again.
Cc: Rik van Riel <riel@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Wanpeng Li <wanpeng.li@hotmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
---
kernel/sched/cputime.c | 8 ++------
1 file changed, 2 insertions(+), 6 deletions(-)
diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
index f111076..d4d12a9 100644
--- a/kernel/sched/cputime.c
+++ b/kernel/sched/cputime.c
@@ -78,30 +78,26 @@ EXPORT_SYMBOL_GPL(irqtime_account_irq);
static cputime_t irqtime_account_hi_update(cputime_t maxtime)
{
u64 *cpustat = kcpustat_this_cpu->cpustat;
- unsigned long flags;
cputime_t irq_cputime;
- local_irq_save(flags);
irq_cputime = nsecs_to_cputime64(__this_cpu_read(cpu_hardirq_time)) -
cpustat[CPUTIME_IRQ];
irq_cputime = min(irq_cputime, maxtime);
cpustat[CPUTIME_IRQ] += irq_cputime;
- local_irq_restore(flags);
+
return irq_cputime;
}
static cputime_t irqtime_account_si_update(cputime_t maxtime)
{
u64 *cpustat = kcpustat_this_cpu->cpustat;
- unsigned long flags;
cputime_t softirq_cputime;
- local_irq_save(flags);
softirq_cputime = nsecs_to_cputime64(__this_cpu_read(cpu_softirq_time)) -
cpustat[CPUTIME_SOFTIRQ];
softirq_cputime = min(softirq_cputime, maxtime);
cpustat[CPUTIME_SOFTIRQ] += softirq_cputime;
- local_irq_restore(flags);
+
return softirq_cputime;
}
--
2.7.0
* [PATCH 3/5] u64_stats: Introduce IRQs disabled helpers
2016-09-02 14:03 [PATCH 0/5] sched/cputime: irqtime cleanups Frederic Weisbecker
2016-09-02 14:03 ` [PATCH 1/5] irqtime: No need for preempt-safe accessors Frederic Weisbecker
2016-09-02 14:03 ` [PATCH 2/5] irqtime: Remove needless IRQs disablement on kcpustat update Frederic Weisbecker
@ 2016-09-02 14:03 ` Frederic Weisbecker
2016-09-02 14:35 ` Paolo Bonzini
2016-09-02 14:03 ` [PATCH 4/5] irqtime: Consolidate accounting synchronization with u64_stats API Frederic Weisbecker
2016-09-02 14:03 ` [PATCH 5/5] irqtime: Consolidate irqtime flushing code Frederic Weisbecker
4 siblings, 1 reply; 18+ messages in thread
From: Frederic Weisbecker @ 2016-09-02 14:03 UTC (permalink / raw)
To: LKML
Cc: Frederic Weisbecker, Paolo Bonzini, Peter Zijlstra, Wanpeng Li,
Eric Dumazet, Ingo Molnar, Mike Galbraith, Rik van Riel
Introduce light versions of the u64_stats helpers for contexts where
either preemption or IRQs are disabled. This makes the library
usable by the scheduler's irqtime accounting, which currently
implements its own ad hoc version.
Cc: Rik van Riel <riel@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Wanpeng Li <wanpeng.li@hotmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
---
include/linux/u64_stats_sync.h | 55 ++++++++++++++++++++++++------------------
1 file changed, 31 insertions(+), 24 deletions(-)
diff --git a/include/linux/u64_stats_sync.h b/include/linux/u64_stats_sync.h
index d3a2bb7..9101d23 100644
--- a/include/linux/u64_stats_sync.h
+++ b/include/linux/u64_stats_sync.h
@@ -103,28 +103,41 @@ static inline void u64_stats_update_end_raw(struct u64_stats_sync *syncp)
#endif
}
+static inline unsigned int __u64_stats_fetch_begin(const struct u64_stats_sync *syncp)
+{
+#if BITS_PER_LONG==32 && defined(CONFIG_SMP)
+ return read_seqcount_begin(&syncp->seq);
+#else
+ return 0;
+#endif
+}
+
static inline unsigned int u64_stats_fetch_begin(const struct u64_stats_sync *syncp)
{
-#if BITS_PER_LONG==32 && defined(CONFIG_SMP)
- return read_seqcount_begin(&syncp->seq);
-#else
-#if BITS_PER_LONG==32
+#if BITS_PER_LONG==32 && !defined(CONFIG_SMP)
preempt_disable();
+#else
+ return __u64_stats_fetch_begin(syncp);
#endif
- return 0;
+}
+
+static inline bool __u64_stats_fetch_retry(const struct u64_stats_sync *syncp,
+ unsigned int start)
+{
+#if BITS_PER_LONG==32 && defined(CONFIG_SMP)
+ return read_seqcount_retry(&syncp->seq, start);
+#else
+ return false;
#endif
}
static inline bool u64_stats_fetch_retry(const struct u64_stats_sync *syncp,
unsigned int start)
{
-#if BITS_PER_LONG==32 && defined(CONFIG_SMP)
- return read_seqcount_retry(&syncp->seq, start);
-#else
-#if BITS_PER_LONG==32
+#if BITS_PER_LONG==32 && !defined(CONFIG_SMP)
preempt_enable();
-#endif
- return false;
+#else
+ return __u64_stats_fetch_retry(syncp, start);
#endif
}
@@ -136,26 +149,20 @@ static inline bool u64_stats_fetch_retry(const struct u64_stats_sync *syncp,
*/
static inline unsigned int u64_stats_fetch_begin_irq(const struct u64_stats_sync *syncp)
{
-#if BITS_PER_LONG==32 && defined(CONFIG_SMP)
- return read_seqcount_begin(&syncp->seq);
-#else
-#if BITS_PER_LONG==32
+#if BITS_PER_LONG==32 && !defined(CONFIG_SMP)
local_irq_disable();
-#endif
- return 0;
+#else
+ return __u64_stats_fetch_begin(syncp);
#endif
}
static inline bool u64_stats_fetch_retry_irq(const struct u64_stats_sync *syncp,
- unsigned int start)
+ unsigned int start)
{
-#if BITS_PER_LONG==32 && defined(CONFIG_SMP)
- return read_seqcount_retry(&syncp->seq, start);
-#else
-#if BITS_PER_LONG==32
+#if BITS_PER_LONG==32 && !defined(CONFIG_SMP)
local_irq_enable();
-#endif
- return false;
+#else
+ return __u64_stats_fetch_retry(syncp, start);
#endif
}
--
2.7.0
* [PATCH 4/5] irqtime: Consolidate accounting synchronization with u64_stats API
2016-09-02 14:03 [PATCH 0/5] sched/cputime: irqtime cleanups Frederic Weisbecker
` (2 preceding siblings ...)
2016-09-02 14:03 ` [PATCH 3/5] u64_stats: Introduce IRQs disabled helpers Frederic Weisbecker
@ 2016-09-02 14:03 ` Frederic Weisbecker
2016-09-06 17:03 ` Rik van Riel
2016-09-02 14:03 ` [PATCH 5/5] irqtime: Consolidate irqtime flushing code Frederic Weisbecker
4 siblings, 1 reply; 18+ messages in thread
From: Frederic Weisbecker @ 2016-09-02 14:03 UTC (permalink / raw)
To: LKML
Cc: Frederic Weisbecker, Paolo Bonzini, Peter Zijlstra, Wanpeng Li,
Eric Dumazet, Ingo Molnar, Mike Galbraith, Rik van Riel
The irqtime accounting currently implements its own ad hoc version
of the u64_stats API. Let's consolidate it with the proper
library.
Cc: Rik van Riel <riel@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Wanpeng Li <wanpeng.li@hotmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
---
kernel/sched/cputime.c | 31 +++++++++++++---------------
kernel/sched/sched.h | 55 +++++++++++++++-----------------------------------
2 files changed, 30 insertions(+), 56 deletions(-)
diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
index d4d12a9..f51c9a9 100644
--- a/kernel/sched/cputime.c
+++ b/kernel/sched/cputime.c
@@ -23,10 +23,8 @@
* task when irq is in progress while we read rq->clock. That is a worthy
* compromise in place of having locks on each irq in account_system_time.
*/
-DEFINE_PER_CPU(u64, cpu_hardirq_time);
-DEFINE_PER_CPU(u64, cpu_softirq_time);
+DEFINE_PER_CPU(struct irqtime, cpu_irqtime);
-static DEFINE_PER_CPU(u64, irq_start_time);
static int sched_clock_irqtime;
void enable_sched_clock_irqtime(void)
@@ -39,16 +37,13 @@ void disable_sched_clock_irqtime(void)
sched_clock_irqtime = 0;
}
-#ifndef CONFIG_64BIT
-DEFINE_PER_CPU(seqcount_t, irq_time_seq);
-#endif /* CONFIG_64BIT */
-
/*
* Called before incrementing preempt_count on {soft,}irq_enter
* and before decrementing preempt_count on {soft,}irq_exit.
*/
void irqtime_account_irq(struct task_struct *curr)
{
+ struct irqtime *irqtime = this_cpu_ptr(&cpu_irqtime);
s64 delta;
int cpu;
@@ -56,10 +51,10 @@ void irqtime_account_irq(struct task_struct *curr)
return;
cpu = smp_processor_id();
- delta = sched_clock_cpu(cpu) - __this_cpu_read(irq_start_time);
- __this_cpu_add(irq_start_time, delta);
+ delta = sched_clock_cpu(cpu) - irqtime->irq_start_time;
+ irqtime->irq_start_time += delta;
- irq_time_write_begin();
+ u64_stats_update_begin(&irqtime->sync);
/*
* We do not account for softirq time from ksoftirqd here.
* We want to continue accounting softirq time to ksoftirqd thread
@@ -67,11 +62,11 @@ void irqtime_account_irq(struct task_struct *curr)
* that do not consume any time, but still wants to run.
*/
if (hardirq_count())
- __this_cpu_add(cpu_hardirq_time, delta);
+ irqtime->hardirq_time += delta;
else if (in_serving_softirq() && curr != this_cpu_ksoftirqd())
- __this_cpu_add(cpu_softirq_time, delta);
+ irqtime->softirq_time += delta;
- irq_time_write_end();
+ u64_stats_update_end(&irqtime->sync);
}
EXPORT_SYMBOL_GPL(irqtime_account_irq);
@@ -79,9 +74,10 @@ static cputime_t irqtime_account_hi_update(cputime_t maxtime)
{
u64 *cpustat = kcpustat_this_cpu->cpustat;
cputime_t irq_cputime;
+ u64 nsecs;
- irq_cputime = nsecs_to_cputime64(__this_cpu_read(cpu_hardirq_time)) -
- cpustat[CPUTIME_IRQ];
+ nsecs = __this_cpu_read(cpu_irqtime.hardirq_time);
+ irq_cputime = nsecs_to_cputime64(nsecs) - cpustat[CPUTIME_IRQ];
irq_cputime = min(irq_cputime, maxtime);
cpustat[CPUTIME_IRQ] += irq_cputime;
@@ -92,9 +88,10 @@ static cputime_t irqtime_account_si_update(cputime_t maxtime)
{
u64 *cpustat = kcpustat_this_cpu->cpustat;
cputime_t softirq_cputime;
+ u64 nsecs;
- softirq_cputime = nsecs_to_cputime64(__this_cpu_read(cpu_softirq_time)) -
- cpustat[CPUTIME_SOFTIRQ];
+ nsecs = __this_cpu_read(cpu_irqtime.softirq_time);
+ softirq_cputime = nsecs_to_cputime64(nsecs) - cpustat[CPUTIME_SOFTIRQ];
softirq_cputime = min(softirq_cputime, maxtime);
cpustat[CPUTIME_SOFTIRQ] += softirq_cputime;
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 420c05d..daf9193 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2,6 +2,7 @@
#include <linux/sched.h>
#include <linux/sched/sysctl.h>
#include <linux/sched/rt.h>
+#include <linux/u64_stats_sync.h>
#include <linux/sched/deadline.h>
#include <linux/binfmts.h>
#include <linux/mutex.h>
@@ -1711,52 +1712,28 @@ static inline void nohz_balance_exit_idle(unsigned int cpu) { }
#endif
#ifdef CONFIG_IRQ_TIME_ACCOUNTING
-
-DECLARE_PER_CPU(u64, cpu_hardirq_time);
-DECLARE_PER_CPU(u64, cpu_softirq_time);
-
-#ifndef CONFIG_64BIT
-DECLARE_PER_CPU(seqcount_t, irq_time_seq);
-
-static inline void irq_time_write_begin(void)
-{
- __this_cpu_inc(irq_time_seq.sequence);
- smp_wmb();
-}
-
-static inline void irq_time_write_end(void)
-{
- smp_wmb();
- __this_cpu_inc(irq_time_seq.sequence);
-}
+struct irqtime {
+ u64 hardirq_time;
+ u64 softirq_time;
+ u64 irq_start_time;
+ struct u64_stats_sync sync;
+};
+
+DECLARE_PER_CPU(struct irqtime, cpu_irqtime);
static inline u64 irq_time_read(int cpu)
{
- u64 irq_time;
- unsigned seq;
+ struct irqtime *irqtime = &per_cpu(cpu_irqtime, cpu);
+ unsigned int seq;
+ u64 total;
do {
- seq = read_seqcount_begin(&per_cpu(irq_time_seq, cpu));
- irq_time = per_cpu(cpu_softirq_time, cpu) +
- per_cpu(cpu_hardirq_time, cpu);
- } while (read_seqcount_retry(&per_cpu(irq_time_seq, cpu), seq));
+ seq = __u64_stats_fetch_begin(&irqtime->sync);
+ total = irqtime->softirq_time + irqtime->hardirq_time;
+ } while (__u64_stats_fetch_retry(&irqtime->sync, seq));
- return irq_time;
+ return total;
}
-#else /* CONFIG_64BIT */
-static inline void irq_time_write_begin(void)
-{
-}
-
-static inline void irq_time_write_end(void)
-{
-}
-
-static inline u64 irq_time_read(int cpu)
-{
- return per_cpu(cpu_softirq_time, cpu) + per_cpu(cpu_hardirq_time, cpu);
-}
-#endif /* CONFIG_64BIT */
#endif /* CONFIG_IRQ_TIME_ACCOUNTING */
#ifdef CONFIG_CPU_FREQ
--
2.7.0
* [PATCH 5/5] irqtime: Consolidate irqtime flushing code
2016-09-02 14:03 [PATCH 0/5] sched/cputime: irqtime cleanups Frederic Weisbecker
` (3 preceding siblings ...)
2016-09-02 14:03 ` [PATCH 4/5] irqtime: Consolidate accounting synchronization with u64_stats API Frederic Weisbecker
@ 2016-09-02 14:03 ` Frederic Weisbecker
2016-09-06 16:52 ` Rik van Riel
4 siblings, 1 reply; 18+ messages in thread
From: Frederic Weisbecker @ 2016-09-02 14:03 UTC (permalink / raw)
To: LKML
Cc: Frederic Weisbecker, Paolo Bonzini, Peter Zijlstra, Wanpeng Li,
Eric Dumazet, Ingo Molnar, Mike Galbraith, Rik van Riel
The code flushing irqtime nsecs stats to kcpustat is roughly
the same for hardirq and softirq. So let's consolidate that common code.
Cc: Rik van Riel <riel@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Wanpeng Li <wanpeng.li@hotmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
---
kernel/sched/cputime.c | 26 +++++++++++---------------
1 file changed, 11 insertions(+), 15 deletions(-)
diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
index f51c9a9..f8de5d3 100644
--- a/kernel/sched/cputime.c
+++ b/kernel/sched/cputime.c
@@ -70,32 +70,28 @@ void irqtime_account_irq(struct task_struct *curr)
}
EXPORT_SYMBOL_GPL(irqtime_account_irq);
-static cputime_t irqtime_account_hi_update(cputime_t maxtime)
+static cputime_t irqtime_account_update(u64 irqtime, int idx, cputime_t maxtime)
{
u64 *cpustat = kcpustat_this_cpu->cpustat;
cputime_t irq_cputime;
- u64 nsecs;
- nsecs = __this_cpu_read(cpu_irqtime.hardirq_time);
- irq_cputime = nsecs_to_cputime64(nsecs) - cpustat[CPUTIME_IRQ];
+ irq_cputime = nsecs_to_cputime64(irqtime) - cpustat[idx];
irq_cputime = min(irq_cputime, maxtime);
- cpustat[CPUTIME_IRQ] += irq_cputime;
+ cpustat[idx] += irq_cputime;
return irq_cputime;
}
+static cputime_t irqtime_account_hi_update(cputime_t maxtime)
+{
+ return irqtime_account_update(__this_cpu_read(cpu_irqtime.hardirq_time),
+ CPUTIME_IRQ, maxtime);
+}
+
static cputime_t irqtime_account_si_update(cputime_t maxtime)
{
- u64 *cpustat = kcpustat_this_cpu->cpustat;
- cputime_t softirq_cputime;
- u64 nsecs;
-
- nsecs = __this_cpu_read(cpu_irqtime.softirq_time);
- softirq_cputime = nsecs_to_cputime64(nsecs) - cpustat[CPUTIME_SOFTIRQ];
- softirq_cputime = min(softirq_cputime, maxtime);
- cpustat[CPUTIME_SOFTIRQ] += softirq_cputime;
-
- return softirq_cputime;
+ return irqtime_account_update(__this_cpu_read(cpu_irqtime.softirq_time),
+ CPUTIME_SOFTIRQ, maxtime);
}
#else /* CONFIG_IRQ_TIME_ACCOUNTING */
--
2.7.0
* Re: [PATCH 3/5] u64_stats: Introduce IRQs disabled helpers
2016-09-02 14:03 ` [PATCH 3/5] u64_stats: Introduce IRQs disabled helpers Frederic Weisbecker
@ 2016-09-02 14:35 ` Paolo Bonzini
2016-09-18 13:35 ` Frederic Weisbecker
0 siblings, 1 reply; 18+ messages in thread
From: Paolo Bonzini @ 2016-09-02 14:35 UTC (permalink / raw)
To: Frederic Weisbecker, LKML
Cc: Peter Zijlstra, Wanpeng Li, Eric Dumazet, Ingo Molnar,
Mike Galbraith, Rik van Riel
On 02/09/2016 16:03, Frederic Weisbecker wrote:
> static inline unsigned int u64_stats_fetch_begin(const struct u64_stats_sync *syncp)
> {
> -#if BITS_PER_LONG==32 && defined(CONFIG_SMP)
> - return read_seqcount_begin(&syncp->seq);
> -#else
> -#if BITS_PER_LONG==32
> +#if BITS_PER_LONG==32 && !defined(CONFIG_SMP)
> preempt_disable();
> +#else
This should be #endif, or this side ends without a "return" statement.
> + return __u64_stats_fetch_begin(syncp);
> #endif
> - return 0;
> +}
...
>
> static inline bool u64_stats_fetch_retry(const struct u64_stats_sync *syncp,
> unsigned int start)
> {
> -#if BITS_PER_LONG==32 && defined(CONFIG_SMP)
> - return read_seqcount_retry(&syncp->seq, start);
> -#else
> -#if BITS_PER_LONG==32
> +#if BITS_PER_LONG==32 && !defined(CONFIG_SMP)
> preempt_enable();
> -#endif
> - return false;
> +#else
Same here.
> + return __u64_stats_fetch_retry(syncp, start);
> #endif
> }
...
>
> - return read_seqcount_begin(&syncp->seq);
> -#else
> -#if BITS_PER_LONG==32
> +#if BITS_PER_LONG==32 && !defined(CONFIG_SMP)
> local_irq_disable();
> -#endif
> - return 0;
> +#else
Same here.
> + return __u64_stats_fetch_begin(syncp);
> #endif
>
> -#if BITS_PER_LONG==32 && defined(CONFIG_SMP)
> - return read_seqcount_retry(&syncp->seq, start);
> -#else
> -#if BITS_PER_LONG==32
> +#if BITS_PER_LONG==32 && !defined(CONFIG_SMP)
> local_irq_enable();
> -#endif
> - return false;
> +#else
Same here.
> + return __u64_stats_fetch_retry(syncp, start);
> #endif
Thanks,
Paolo
* Re: [PATCH 2/5] irqtime: Remove needless IRQs disablement on kcpustat update
2016-09-02 14:03 ` [PATCH 2/5] irqtime: Remove needless IRQs disablement on kcpustat update Frederic Weisbecker
@ 2016-09-02 14:53 ` Paolo Bonzini
2016-09-02 16:30 ` Peter Zijlstra
2016-09-06 17:00 ` Rik van Riel
2016-09-07 7:59 ` Peter Zijlstra
2 siblings, 1 reply; 18+ messages in thread
From: Paolo Bonzini @ 2016-09-02 14:53 UTC (permalink / raw)
To: Frederic Weisbecker, LKML
Cc: Peter Zijlstra, Wanpeng Li, Eric Dumazet, Ingo Molnar,
Mike Galbraith, Rik van Riel
On 02/09/2016 16:03, Frederic Weisbecker wrote:
> The callers of the functions performing irqtime kcpustat updates have
> IRQs disabled, so there is no need to disable them again.
They do, but perhaps this should be annotated through some sparse magic.
It's starting to be hairy, with the requirement spanning many separate
files.
Something like
#define __irq_disabled __must_hold(IRQ)
together with __acquire and __release annotations in
include/linux/irqflags.h would do. I'm not sure how to handle
local_irq_save/local_irq_restore, but I guess sparse would be fine with
((void)({
raw_local_irq_save(flags);
if (flags) __acquire(IRQ);
}))
and
((void)({
if (flags) __release(IRQ);
raw_local_irq_restore(flags);
}))
since below that it's assembly.
Starting from irqtime_account_hi_update, irqtime_account_si_update and
irqtime_account_irq you'd get quite a few functions annotated.
Paolo
> Cc: Rik van Riel <riel@redhat.com>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: Wanpeng Li <wanpeng.li@hotmail.com>
> Cc: Mike Galbraith <efault@gmx.de>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Ingo Molnar <mingo@kernel.org>
> Cc: Eric Dumazet <eric.dumazet@gmail.com>
> Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
> ---
> kernel/sched/cputime.c | 8 ++------
> 1 file changed, 2 insertions(+), 6 deletions(-)
>
> diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
> index f111076..d4d12a9 100644
> --- a/kernel/sched/cputime.c
> +++ b/kernel/sched/cputime.c
> @@ -78,30 +78,26 @@ EXPORT_SYMBOL_GPL(irqtime_account_irq);
> static cputime_t irqtime_account_hi_update(cputime_t maxtime)
> {
> u64 *cpustat = kcpustat_this_cpu->cpustat;
> - unsigned long flags;
> cputime_t irq_cputime;
>
> - local_irq_save(flags);
> irq_cputime = nsecs_to_cputime64(__this_cpu_read(cpu_hardirq_time)) -
> cpustat[CPUTIME_IRQ];
> irq_cputime = min(irq_cputime, maxtime);
> cpustat[CPUTIME_IRQ] += irq_cputime;
> - local_irq_restore(flags);
> +
> return irq_cputime;
> }
>
> static cputime_t irqtime_account_si_update(cputime_t maxtime)
> {
> u64 *cpustat = kcpustat_this_cpu->cpustat;
> - unsigned long flags;
> cputime_t softirq_cputime;
>
> - local_irq_save(flags);
> softirq_cputime = nsecs_to_cputime64(__this_cpu_read(cpu_softirq_time)) -
> cpustat[CPUTIME_SOFTIRQ];
> softirq_cputime = min(softirq_cputime, maxtime);
> cpustat[CPUTIME_SOFTIRQ] += softirq_cputime;
> - local_irq_restore(flags);
> +
> return softirq_cputime;
> }
>
>
* Re: [PATCH 2/5] irqtime: Remove needless IRQs disablement on kcpustat update
2016-09-02 14:53 ` Paolo Bonzini
@ 2016-09-02 16:30 ` Peter Zijlstra
0 siblings, 0 replies; 18+ messages in thread
From: Peter Zijlstra @ 2016-09-02 16:30 UTC (permalink / raw)
To: Paolo Bonzini
Cc: Frederic Weisbecker, LKML, Wanpeng Li, Eric Dumazet, Ingo Molnar,
Mike Galbraith, Rik van Riel
On Fri, Sep 02, 2016 at 04:53:47PM +0200, Paolo Bonzini wrote:
>
>
> On 02/09/2016 16:03, Frederic Weisbecker wrote:
> > The callers of the functions performing irqtime kcpustat updates have
> > IRQs disabled, so there is no need to disable them again.
>
> They do, but perhaps this should be annotated through some sparse magic.
> It's starting to be hairy, with the requirement spanning many separate
> files.
Sparse sucks for those things...
maybe just add something like lockdep_assert_irqsoff(). Such a call both
documents the requirement and validates at runtime when CONFIG_LOCKDEP.
* Re: [PATCH 5/5] irqtime: Consolidate irqtime flushing code
2016-09-02 14:03 ` [PATCH 5/5] irqtime: Consolidate irqtime flushing code Frederic Weisbecker
@ 2016-09-06 16:52 ` Rik van Riel
0 siblings, 0 replies; 18+ messages in thread
From: Rik van Riel @ 2016-09-06 16:52 UTC (permalink / raw)
To: Frederic Weisbecker, LKML
Cc: Paolo Bonzini, Peter Zijlstra, Wanpeng Li, Eric Dumazet,
Ingo Molnar, Mike Galbraith
On Fri, 2016-09-02 at 16:03 +0200, Frederic Weisbecker wrote:
> The code flushing irqtime nsecs stats to kcpustat is roughly
> the same for hardirq and softirq. So let's consolidate that common
> code.
>
> Cc: Rik van Riel <riel@redhat.com>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: Wanpeng Li <wanpeng.li@hotmail.com>
> Cc: Mike Galbraith <efault@gmx.de>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Ingo Molnar <mingo@kernel.org>
> Cc: Eric Dumazet <eric.dumazet@gmail.com>
> Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
>
Reviewed-by: Rik van Riel <riel@redhat.com>
--
All Rights Reversed.
* Re: [PATCH 1/5] irqtime: No need for preempt-safe accessors
2016-09-02 14:03 ` [PATCH 1/5] irqtime: No need for preempt-safe accessors Frederic Weisbecker
@ 2016-09-06 16:59 ` Rik van Riel
0 siblings, 0 replies; 18+ messages in thread
From: Rik van Riel @ 2016-09-06 16:59 UTC (permalink / raw)
To: Frederic Weisbecker, LKML
Cc: Paolo Bonzini, Peter Zijlstra, Wanpeng Li, Eric Dumazet,
Ingo Molnar, Mike Galbraith
On Fri, 2016-09-02 at 16:03 +0200, Frederic Weisbecker wrote:
> We can safely use the preempt-unsafe accessors for irqtime when we
> flush its counters to kcpustat, as IRQs are disabled at that time.
>
> Cc: Rik van Riel <riel@redhat.com>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: Wanpeng Li <wanpeng.li@hotmail.com>
> Cc: Mike Galbraith <efault@gmx.de>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Ingo Molnar <mingo@kernel.org>
> Cc: Eric Dumazet <eric.dumazet@gmail.com>
> Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
>
Reviewed-by: Rik van Riel <riel@redhat.com>
--
All Rights Reversed.
* Re: [PATCH 2/5] irqtime: Remove needless IRQs disablement on kcpustat update
2016-09-02 14:03 ` [PATCH 2/5] irqtime: Remove needless IRQs disablement on kcpustat update Frederic Weisbecker
2016-09-02 14:53 ` Paolo Bonzini
@ 2016-09-06 17:00 ` Rik van Riel
2016-09-07 7:59 ` Peter Zijlstra
2 siblings, 0 replies; 18+ messages in thread
From: Rik van Riel @ 2016-09-06 17:00 UTC (permalink / raw)
To: Frederic Weisbecker, LKML
Cc: Paolo Bonzini, Peter Zijlstra, Wanpeng Li, Eric Dumazet,
Ingo Molnar, Mike Galbraith
On Fri, 2016-09-02 at 16:03 +0200, Frederic Weisbecker wrote:
> The callers of the functions performing irqtime kcpustat updates have
> IRQs disabled, so there is no need to disable them again.
>
> Cc: Rik van Riel <riel@redhat.com>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: Wanpeng Li <wanpeng.li@hotmail.com>
> Cc: Mike Galbraith <efault@gmx.de>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Ingo Molnar <mingo@kernel.org>
> Cc: Eric Dumazet <eric.dumazet@gmail.com>
> Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Reviewed-by: Rik van Riel <riel@redhat.com>
--
All Rights Reversed.
* Re: [PATCH 4/5] irqtime: Consolidate accounting synchronization with u64_stats API
2016-09-02 14:03 ` [PATCH 4/5] irqtime: Consolidate accounting synchronization with u64_stats API Frederic Weisbecker
@ 2016-09-06 17:03 ` Rik van Riel
0 siblings, 0 replies; 18+ messages in thread
From: Rik van Riel @ 2016-09-06 17:03 UTC (permalink / raw)
To: Frederic Weisbecker, LKML
Cc: Paolo Bonzini, Peter Zijlstra, Wanpeng Li, Eric Dumazet,
Ingo Molnar, Mike Galbraith
On Fri, 2016-09-02 at 16:03 +0200, Frederic Weisbecker wrote:
> The irqtime accounting currently implements its own ad hoc version
> of the u64_stats API. Let's consolidate it with the proper
> library.
>
> Cc: Rik van Riel <riel@redhat.com>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: Wanpeng Li <wanpeng.li@hotmail.com>
> Cc: Mike Galbraith <efault@gmx.de>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Ingo Molnar <mingo@kernel.org>
> Cc: Eric Dumazet <eric.dumazet@gmail.com>
> Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Reviewed-by: Rik van Riel <riel@redhat.com>
--
All Rights Reversed.
* Re: [PATCH 2/5] irqtime: Remove needless IRQs disablement on kcpustat update
2016-09-02 14:03 ` [PATCH 2/5] irqtime: Remove needless IRQs disablement on kcpustat update Frederic Weisbecker
2016-09-02 14:53 ` Paolo Bonzini
2016-09-06 17:00 ` Rik van Riel
@ 2016-09-07 7:59 ` Peter Zijlstra
2016-09-18 13:47 ` Frederic Weisbecker
2 siblings, 1 reply; 18+ messages in thread
From: Peter Zijlstra @ 2016-09-07 7:59 UTC (permalink / raw)
To: Frederic Weisbecker
Cc: LKML, Paolo Bonzini, Wanpeng Li, Eric Dumazet, Ingo Molnar,
Mike Galbraith, Rik van Riel
On Fri, Sep 02, 2016 at 04:03:02PM +0200, Frederic Weisbecker wrote:
> diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
> index f111076..d4d12a9 100644
> --- a/kernel/sched/cputime.c
> +++ b/kernel/sched/cputime.c
> @@ -78,30 +78,26 @@ EXPORT_SYMBOL_GPL(irqtime_account_irq);
> static cputime_t irqtime_account_hi_update(cputime_t maxtime)
> {
> u64 *cpustat = kcpustat_this_cpu->cpustat;
> - unsigned long flags;
> cputime_t irq_cputime;
>
+ lockdep_assert_irqs_disabled();
> - local_irq_save(flags);
> irq_cputime = nsecs_to_cputime64(__this_cpu_read(cpu_hardirq_time)) -
> cpustat[CPUTIME_IRQ];
> irq_cputime = min(irq_cputime, maxtime);
> cpustat[CPUTIME_IRQ] += irq_cputime;
> - local_irq_restore(flags);
> +
> return irq_cputime;
> }
>
> static cputime_t irqtime_account_si_update(cputime_t maxtime)
> {
> u64 *cpustat = kcpustat_this_cpu->cpustat;
> - unsigned long flags;
> cputime_t softirq_cputime;
>
+ lockdep_assert_irqs_disabled();
> - local_irq_save(flags);
> softirq_cputime = nsecs_to_cputime64(__this_cpu_read(cpu_softirq_time)) -
> cpustat[CPUTIME_SOFTIRQ];
> softirq_cputime = min(softirq_cputime, maxtime);
> cpustat[CPUTIME_SOFTIRQ] += softirq_cputime;
> - local_irq_restore(flags);
> +
> return softirq_cputime;
> }
---
Subject: locking/lockdep: Provide IRQ state assertion helpers
Provide a cheap alternative to:
WARN_ON(!irqs_disabled());
This patch provides:
lockdep_assert_irqs_disabled();
lockdep_assert_softirqs_disabled();
Which compile away for CONFIG_LOCKDEP=n kernels.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -526,6 +526,23 @@ static inline void print_irqtrace_events
}
#endif
+#if defined(CONFIG_LOCKDEP) && defined(CONFIG_TRACE_IRQFLAGS)
+
+#define lockdep_assert_irqs_disabled() do { \
+ WARN_ON(debug_locks && current->hardirqs_enabled); \
+ } while (0)
+
+#define lockdep_assert_softirqs_disabled() do { \
+ WARN_ON(debug_locks && current->softirqs_enabled); \
+ } while (0)
+
+#else
+
+#define lockdep_assert_irqs_disabled() do { } while (0)
+#define lockdep_assert_softirqs_disabled() do { } while (0)
+
+#endif
+
/*
* For trivial one-depth nesting of a lock-class, the following
* global define can be used. (Subsystems with multiple levels
* Re: [PATCH 3/5] u64_stats: Introduce IRQs disabled helpers
2016-09-02 14:35 ` Paolo Bonzini
@ 2016-09-18 13:35 ` Frederic Weisbecker
0 siblings, 0 replies; 18+ messages in thread
From: Frederic Weisbecker @ 2016-09-18 13:35 UTC (permalink / raw)
To: Paolo Bonzini
Cc: LKML, Peter Zijlstra, Wanpeng Li, Eric Dumazet, Ingo Molnar,
Mike Galbraith, Rik van Riel
On Fri, Sep 02, 2016 at 04:35:54PM +0200, Paolo Bonzini wrote:
>
>
> On 02/09/2016 16:03, Frederic Weisbecker wrote:
> > static inline unsigned int u64_stats_fetch_begin(const struct u64_stats_sync *syncp)
> > {
> > -#if BITS_PER_LONG==32 && defined(CONFIG_SMP)
> > - return read_seqcount_begin(&syncp->seq);
> > -#else
> > -#if BITS_PER_LONG==32
> > +#if BITS_PER_LONG==32 && !defined(CONFIG_SMP)
> > preempt_disable();
> > +#else
>
> This should be #endif, or this side ends without a "return" statement.
Good catch!
Thanks Paolo!
* Re: [PATCH 2/5] irqtime: Remove needless IRQs disablement on kcpustat update
2016-09-07 7:59 ` Peter Zijlstra
@ 2016-09-18 13:47 ` Frederic Weisbecker
2016-09-18 15:54 ` Peter Zijlstra
0 siblings, 1 reply; 18+ messages in thread
From: Frederic Weisbecker @ 2016-09-18 13:47 UTC (permalink / raw)
To: Peter Zijlstra
Cc: LKML, Paolo Bonzini, Wanpeng Li, Eric Dumazet, Ingo Molnar,
Mike Galbraith, Rik van Riel
On Wed, Sep 07, 2016 at 09:59:13AM +0200, Peter Zijlstra wrote:
> On Fri, Sep 02, 2016 at 04:03:02PM +0200, Frederic Weisbecker wrote:
>
> > diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
> > index f111076..d4d12a9 100644
> > --- a/kernel/sched/cputime.c
> > +++ b/kernel/sched/cputime.c
> > @@ -78,30 +78,26 @@ EXPORT_SYMBOL_GPL(irqtime_account_irq);
> > static cputime_t irqtime_account_hi_update(cputime_t maxtime)
> > {
> > u64 *cpustat = kcpustat_this_cpu->cpustat;
> > - unsigned long flags;
> > cputime_t irq_cputime;
> >
> + lockdep_assert_irqs_disabled();
>
> > - local_irq_save(flags);
> > irq_cputime = nsecs_to_cputime64(__this_cpu_read(cpu_hardirq_time)) -
> > cpustat[CPUTIME_IRQ];
> > irq_cputime = min(irq_cputime, maxtime);
> > cpustat[CPUTIME_IRQ] += irq_cputime;
> > - local_irq_restore(flags);
> > +
> > return irq_cputime;
> > }
> >
> > static cputime_t irqtime_account_si_update(cputime_t maxtime)
> > {
> > u64 *cpustat = kcpustat_this_cpu->cpustat;
> > - unsigned long flags;
> > cputime_t softirq_cputime;
> >
> + lockdep_assert_irqs_disabled();
>
> > - local_irq_save(flags);
> > softirq_cputime = nsecs_to_cputime64(__this_cpu_read(cpu_softirq_time)) -
> > cpustat[CPUTIME_SOFTIRQ];
> > softirq_cputime = min(softirq_cputime, maxtime);
> > cpustat[CPUTIME_SOFTIRQ] += softirq_cputime;
> > - local_irq_restore(flags);
> > +
> > return softirq_cputime;
> > }
>
> ---
> Subject: locking/lockdep: Provide IRQ state assertion helpers
>
> Provide a cheap alternative to:
>
> WARN_ON(!irqs_disabled());
>
> This patch provides:
>
> lockdep_assert_irqs_disabled();
> lockdep_assert_softirqs_disabled();
>
> Which compile away for CONFIG_LOCKDEP=n kernels.
>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> ---
> --- a/include/linux/lockdep.h
> +++ b/include/linux/lockdep.h
> @@ -526,6 +526,23 @@ static inline void print_irqtrace_events
> }
> #endif
>
> +#if defined(CONFIG_LOCKDEP) && defined(CONFIG_TRACE_IRQFLAGS)
> +
> +#define lockdep_assert_irqs_disabled() do { \
> + WARN_ON(debug_locks && current->hardirqs_enabled); \
> + } while (0)
> +
> +#define lockdep_assert_softirqs_disabled() do { \
> + WARN_ON(debug_locks && current->softirqs_enabled); \
> + } while (0)
> +
> +#else
> +
> +#define lockdep_assert_irqs_disabled() do { } while (0)
> +#define lockdep_assert_softirqs_disabled() do { } while (0)
> +
> +#endif
> +
> /*
> * For trivial one-depth nesting of a lock-class, the following
> * global define can be used. (Subsystems with multiple levels
I love that! I've seen so many cases where I wanted this runtime check without
its overhead on production kernels.
I can take that patch. Now, since this is lockdep code and my series is scheduler
code that depends on it, how can we manage the dependency between the two branches?
Perhaps I can add the lockdep patch to the series; there will likely be no
wicked conflicts against potential changes in the lockdep tree.
Another alternative is to use WARN_ON(!irqs_disabled()) in my series, then
on the lockdep branch we can convert all the potential users, including this
one. The lockdep branch would then depend on the others. That way looks better,
as I can think of several sites to convert.
Thanks.
* Re: [PATCH 2/5] irqtime: Remove needless IRQs disablement on kcpustat update
2016-09-18 13:47 ` Frederic Weisbecker
@ 2016-09-18 15:54 ` Peter Zijlstra
0 siblings, 0 replies; 18+ messages in thread
From: Peter Zijlstra @ 2016-09-18 15:54 UTC (permalink / raw)
To: Frederic Weisbecker
Cc: LKML, Paolo Bonzini, Wanpeng Li, Eric Dumazet, Ingo Molnar,
Mike Galbraith, Rik van Riel
On Sun, Sep 18, 2016 at 03:47:43PM +0200, Frederic Weisbecker wrote:
> I love that! I've seen so many cases where I wanted this runtime check without
> its overhead on production kernels.
>
> I can take that patch. Now, since this is lockdep code and my series is scheduler
> code that depends on it, how can we manage the dependency between the two branches?
>
> Perhaps I can add the lockdep patch to the series; there will likely be no
> wicked conflicts against potential changes in the lockdep tree.
Yeah, just take it along with the rest of the patches.
* [PATCH 1/5] irqtime: No need for preempt-safe accessors
2016-09-26 0:29 [PATCH 0/5] sched/cputime: irqtime cleanups v2 Frederic Weisbecker
@ 2016-09-26 0:29 ` Frederic Weisbecker
0 siblings, 0 replies; 18+ messages in thread
From: Frederic Weisbecker @ 2016-09-26 0:29 UTC (permalink / raw)
To: LKML
Cc: Frederic Weisbecker, Paolo Bonzini, Peter Zijlstra, Wanpeng Li,
Eric Dumazet, Ingo Molnar, Mike Galbraith, Rik van Riel
We can safely use the preempt-unsafe accessors for irqtime when we
flush its counters to kcpustat as IRQs are disabled at this time.
Reviewed-by: Rik van Riel <riel@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Wanpeng Li <wanpeng.li@hotmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Rik van Riel <riel@redhat.com>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
---
kernel/sched/cputime.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
index b93c72d..f111076 100644
--- a/kernel/sched/cputime.c
+++ b/kernel/sched/cputime.c
@@ -82,7 +82,7 @@ static cputime_t irqtime_account_hi_update(cputime_t maxtime)
cputime_t irq_cputime;
local_irq_save(flags);
- irq_cputime = nsecs_to_cputime64(this_cpu_read(cpu_hardirq_time)) -
+ irq_cputime = nsecs_to_cputime64(__this_cpu_read(cpu_hardirq_time)) -
cpustat[CPUTIME_IRQ];
irq_cputime = min(irq_cputime, maxtime);
cpustat[CPUTIME_IRQ] += irq_cputime;
@@ -97,7 +97,7 @@ static cputime_t irqtime_account_si_update(cputime_t maxtime)
cputime_t softirq_cputime;
local_irq_save(flags);
- softirq_cputime = nsecs_to_cputime64(this_cpu_read(cpu_softirq_time)) -
+ softirq_cputime = nsecs_to_cputime64(__this_cpu_read(cpu_softirq_time)) -
cpustat[CPUTIME_SOFTIRQ];
softirq_cputime = min(softirq_cputime, maxtime);
cpustat[CPUTIME_SOFTIRQ] += softirq_cputime;
--
2.7.4
end of thread, other threads:[~2016-09-26  0:29 UTC | newest]
Thread overview: 18+ messages
2016-09-02 14:03 [PATCH 0/5] sched/cputime: irqtime cleanups Frederic Weisbecker
2016-09-02 14:03 ` [PATCH 1/5] irqtime: No need for preempt-safe accessors Frederic Weisbecker
2016-09-06 16:59 ` Rik van Riel
2016-09-02 14:03 ` [PATCH 2/5] irqtime: Remove needless IRQs disablement on kcpustat update Frederic Weisbecker
2016-09-02 14:53 ` Paolo Bonzini
2016-09-02 16:30 ` Peter Zijlstra
2016-09-06 17:00 ` Rik van Riel
2016-09-07 7:59 ` Peter Zijlstra
2016-09-18 13:47 ` Frederic Weisbecker
2016-09-18 15:54 ` Peter Zijlstra
2016-09-02 14:03 ` [PATCH 3/5] u64_stats: Introduce IRQs disabled helpers Frederic Weisbecker
2016-09-02 14:35 ` Paolo Bonzini
2016-09-18 13:35 ` Frederic Weisbecker
2016-09-02 14:03 ` [PATCH 4/5] irqtime: Consolidate accounting synchronization with u64_stats API Frederic Weisbecker
2016-09-06 17:03 ` Rik van Riel
2016-09-02 14:03 ` [PATCH 5/5] irqtime: Consolidate irqtime flushing code Frederic Weisbecker
2016-09-06 16:52 ` Rik van Riel
2016-09-26 0:29 [PATCH 0/5] sched/cputime: irqtime cleanups v2 Frederic Weisbecker
2016-09-26 0:29 ` [PATCH 1/5] irqtime: No need for preempt-safe accessors Frederic Weisbecker