* [PATCH 0/3] perf: Fixes on event accounting
@ 2013-08-02 16:29 Frederic Weisbecker
  2013-08-02 16:29 ` [PATCH 1/3] perf: Rollback callchain buffer refcount under the callchain mutex Frederic Weisbecker
                   ` (2 more replies)
  0 siblings, 3 replies; 9+ messages in thread
From: Frederic Weisbecker @ 2013-08-02 16:29 UTC (permalink / raw)
  To: Peter Zijlstra, Jiri Olsa
  Cc: LKML, Frederic Weisbecker, Steven Rostedt, Paul E. McKenney,
	Ingo Molnar, Thomas Gleixner, Borislav Petkov, Li Zhong,
	Mike Galbraith, Kevin Hilman, Namhyung Kim,
	Arnaldo Carvalho de Melo, Stephane Eranian

Peter, Jiri,

Here is a proposed set of fixes after the discussion we had about the
last changes in the event accounting code.

Thanks,
	Frederic
---

Frederic Weisbecker (3):
      perf: Rollback callchain buffer refcount under the callchain mutex
      perf: Account freq events globally
      nohz: Include local CPU in full dynticks global kick


 kernel/events/callchain.c |    3 ++-
 kernel/events/core.c      |   19 ++++++++-----------
 kernel/time/tick-sched.c  |    1 +
 3 files changed, 11 insertions(+), 12 deletions(-)

^ permalink raw reply	[flat|nested] 9+ messages in thread

* [PATCH 1/3] perf: Rollback callchain buffer refcount under the callchain mutex
  2013-08-02 16:29 [PATCH 0/3] perf: Fixes on event accounting Frederic Weisbecker
@ 2013-08-02 16:29 ` Frederic Weisbecker
  2013-08-09 10:28   ` Jiri Olsa
  2013-08-16 18:47   ` [tip:perf/core] perf: Roll back " tip-bot for Frederic Weisbecker
  2013-08-02 16:29 ` [PATCH 2/3] perf: Account freq events globally Frederic Weisbecker
  2013-08-02 16:29 ` [PATCH 3/3] nohz: Include local CPU in full dynticks global kick Frederic Weisbecker
  2 siblings, 2 replies; 9+ messages in thread
From: Frederic Weisbecker @ 2013-08-02 16:29 UTC (permalink / raw)
  To: Peter Zijlstra, Jiri Olsa
  Cc: LKML, Frederic Weisbecker, Ingo Molnar, Namhyung Kim,
	Arnaldo Carvalho de Melo, Stephane Eranian

When we fail to allocate the callchain buffers, we roll back the refcount
we took and return from get_callchain_buffers().

However, we take the refcount and allocate under the callchain lock,
but the rollback is done outside the lock.

As a result, while we roll back, a concurrent callchain user may
call get_callchain_buffers(), see the non-zero refcount and give up
because the buffers are NULL, without retrying the allocation itself.

The consequences aren't severe, but the behaviour is surprising enough,
and it's better to give subsequent callchain users a chance to retry
the allocation where we failed.

Reported-by: Jiri Olsa <jolsa@redhat.com>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Stephane Eranian <eranian@google.com>
---
 kernel/events/callchain.c |    3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/kernel/events/callchain.c b/kernel/events/callchain.c
index 76a8bc5..97b67df 100644
--- a/kernel/events/callchain.c
+++ b/kernel/events/callchain.c
@@ -116,10 +116,11 @@ int get_callchain_buffers(void)
 
 	err = alloc_callchain_buffers();
 exit:
-	mutex_unlock(&callchain_mutex);
 	if (err)
 		atomic_dec(&nr_callchain_events);
 
+	mutex_unlock(&callchain_mutex);
+
 	return err;
 }
 
-- 
1.7.5.4



* [PATCH 2/3] perf: Account freq events globally
  2013-08-02 16:29 [PATCH 0/3] perf: Fixes on event accounting Frederic Weisbecker
  2013-08-02 16:29 ` [PATCH 1/3] perf: Rollback callchain buffer refcount under the callchain mutex Frederic Weisbecker
@ 2013-08-02 16:29 ` Frederic Weisbecker
  2013-08-09 10:33   ` Jiri Olsa
  2013-08-16 18:47   ` [tip:perf/core] " tip-bot for Frederic Weisbecker
  2013-08-02 16:29 ` [PATCH 3/3] nohz: Include local CPU in full dynticks global kick Frederic Weisbecker
  2 siblings, 2 replies; 9+ messages in thread
From: Frederic Weisbecker @ 2013-08-02 16:29 UTC (permalink / raw)
  To: Peter Zijlstra, Jiri Olsa
  Cc: LKML, Frederic Weisbecker, Ingo Molnar, Namhyung Kim,
	Arnaldo Carvalho de Melo, Stephane Eranian

Freq events may not always be affine to a particular CPU. As such,
account_event_cpu() may crash if we account per CPU a freq event
that has event->cpu == -1.

To solve this, let's account freq events globally. In practice
this doesn't change the picture much, because perf tools create
per-task perf events with one event per CPU by default. Profiling a
single CPU is usually a corner case, so there is not much point in
optimizing for that case.

Reported-by: Jiri Olsa <jolsa@redhat.com>
Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Stephane Eranian <eranian@google.com>
---
 kernel/events/core.c |   19 ++++++++-----------
 1 file changed, 8 insertions(+), 11 deletions(-)

diff --git a/kernel/events/core.c b/kernel/events/core.c
index 916cf1f..617f980 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -141,11 +141,11 @@ enum event_type_t {
 struct static_key_deferred perf_sched_events __read_mostly;
 static DEFINE_PER_CPU(atomic_t, perf_cgroup_events);
 static DEFINE_PER_CPU(atomic_t, perf_branch_stack_events);
-static DEFINE_PER_CPU(atomic_t, perf_freq_events);
 
 static atomic_t nr_mmap_events __read_mostly;
 static atomic_t nr_comm_events __read_mostly;
 static atomic_t nr_task_events __read_mostly;
+static atomic_t nr_freq_events __read_mostly;
 
 static LIST_HEAD(pmus);
 static DEFINE_MUTEX(pmus_lock);
@@ -1871,9 +1871,6 @@ static int  __perf_install_in_context(void *info)
 	perf_pmu_enable(cpuctx->ctx.pmu);
 	perf_ctx_unlock(cpuctx, task_ctx);
 
-	if (atomic_read(&__get_cpu_var(perf_freq_events)))
-		tick_nohz_full_kick();
-
 	return 0;
 }
 
@@ -2811,7 +2808,7 @@ done:
 #ifdef CONFIG_NO_HZ_FULL
 bool perf_event_can_stop_tick(void)
 {
-	if (atomic_read(&__get_cpu_var(perf_freq_events)) ||
+	if (atomic_read(&nr_freq_events) ||
 	    __this_cpu_read(perf_throttled_count))
 		return false;
 	else
@@ -3140,9 +3137,6 @@ static void unaccount_event_cpu(struct perf_event *event, int cpu)
 	}
 	if (is_cgroup_event(event))
 		atomic_dec(&per_cpu(perf_cgroup_events, cpu));
-
-	if (event->attr.freq)
-		atomic_dec(&per_cpu(perf_freq_events, cpu));
 }
 
 static void unaccount_event(struct perf_event *event)
@@ -3158,6 +3152,8 @@ static void unaccount_event(struct perf_event *event)
 		atomic_dec(&nr_comm_events);
 	if (event->attr.task)
 		atomic_dec(&nr_task_events);
+	if (event->attr.freq)
+		atomic_dec(&nr_freq_events);
 	if (is_cgroup_event(event))
 		static_key_slow_dec_deferred(&perf_sched_events);
 	if (has_branch_stack(event))
@@ -6479,9 +6475,6 @@ static void account_event_cpu(struct perf_event *event, int cpu)
 	}
 	if (is_cgroup_event(event))
 		atomic_inc(&per_cpu(perf_cgroup_events, cpu));
-
-	if (event->attr.freq)
-		atomic_inc(&per_cpu(perf_freq_events, cpu));
 }
 
 static void account_event(struct perf_event *event)
@@ -6497,6 +6490,10 @@ static void account_event(struct perf_event *event)
 		atomic_inc(&nr_comm_events);
 	if (event->attr.task)
 		atomic_inc(&nr_task_events);
+	if (event->attr.freq) {
+		if (atomic_inc_return(&nr_freq_events) == 1)
+			tick_nohz_full_kick_all();
+	}
 	if (has_branch_stack(event))
 		static_key_slow_inc(&perf_sched_events.key);
 	if (is_cgroup_event(event))
-- 
1.7.5.4



* [PATCH 3/3] nohz: Include local CPU in full dynticks global kick
  2013-08-02 16:29 [PATCH 0/3] perf: Fixes on event accounting Frederic Weisbecker
  2013-08-02 16:29 ` [PATCH 1/3] perf: Rollback callchain buffer refcount under the callchain mutex Frederic Weisbecker
  2013-08-02 16:29 ` [PATCH 2/3] perf: Account freq events globally Frederic Weisbecker
@ 2013-08-02 16:29 ` Frederic Weisbecker
  2013-08-16 18:46   ` [tip:timers/nohz] " tip-bot for Frederic Weisbecker
  2 siblings, 1 reply; 9+ messages in thread
From: Frederic Weisbecker @ 2013-08-02 16:29 UTC (permalink / raw)
  To: Peter Zijlstra, Jiri Olsa
  Cc: LKML, Frederic Weisbecker, Steven Rostedt, Paul E. McKenney,
	Ingo Molnar, Thomas Gleixner, Borislav Petkov, Li Zhong,
	Mike Galbraith, Kevin Hilman

tick_nohz_full_kick_all() is useful to notify all full dynticks
CPUs that there is a system state change to check out before
re-evaluating the need for the tick.

Unfortunately this is implemented using smp_call_function_many(),
which ignores the local CPU. This CPU also needs to re-evaluate
the tick.

on_each_cpu_mask() is not useful either, because we don't want to
re-evaluate the tick state in place but asynchronously from an IPI,
to avoid messing with any random locking scenario.

So let's call tick_nohz_full_kick() from tick_nohz_full_kick_all()
so that the usual irq work takes care of it.

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Li Zhong <zhong@linux.vnet.ibm.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Kevin Hilman <khilman@linaro.org>
---
 kernel/time/tick-sched.c |    1 +
 1 file changed, 1 insertion(+)

diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c
index e80183f..30849d4 100644
--- a/kernel/time/tick-sched.c
+++ b/kernel/time/tick-sched.c
@@ -244,6 +244,7 @@ void tick_nohz_full_kick_all(void)
 	preempt_disable();
 	smp_call_function_many(nohz_full_mask,
 			       nohz_full_kick_ipi, NULL, false);
+	tick_nohz_full_kick();
 	preempt_enable();
 }
 
-- 
1.7.5.4



* Re: [PATCH 1/3] perf: Rollback callchain buffer refcount under the callchain mutex
  2013-08-02 16:29 ` [PATCH 1/3] perf: Rollback callchain buffer refcount under the callchain mutex Frederic Weisbecker
@ 2013-08-09 10:28   ` Jiri Olsa
  2013-08-16 18:47   ` [tip:perf/core] perf: Roll back " tip-bot for Frederic Weisbecker
  1 sibling, 0 replies; 9+ messages in thread
From: Jiri Olsa @ 2013-08-09 10:28 UTC (permalink / raw)
  To: Frederic Weisbecker
  Cc: Peter Zijlstra, LKML, Ingo Molnar, Namhyung Kim,
	Arnaldo Carvalho de Melo, Stephane Eranian

On Fri, Aug 02, 2013 at 06:29:54PM +0200, Frederic Weisbecker wrote:
> When we fail to allocate the callchain buffers, we roll back the refcount
> we took and return from get_callchain_buffers().
> 
> However, we take the refcount and allocate under the callchain lock,
> but the rollback is done outside the lock.
> 
> As a result, while we roll back, a concurrent callchain user may
> call get_callchain_buffers(), see the non-zero refcount and give up
> because the buffers are NULL, without retrying the allocation itself.
> 
> The consequences aren't severe, but the behaviour is surprising enough,
> and it's better to give subsequent callchain users a chance to retry
> the allocation where we failed.
> 
> Reported-by: Jiri Olsa <jolsa@redhat.com>
> Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
> Cc: Ingo Molnar <mingo@kernel.org>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Jiri Olsa <jolsa@redhat.com>

Acked-by: Jiri Olsa <jolsa@redhat.com>


* Re: [PATCH 2/3] perf: Account freq events globally
  2013-08-02 16:29 ` [PATCH 2/3] perf: Account freq events globally Frederic Weisbecker
@ 2013-08-09 10:33   ` Jiri Olsa
  2013-08-16 18:47   ` [tip:perf/core] " tip-bot for Frederic Weisbecker
  1 sibling, 0 replies; 9+ messages in thread
From: Jiri Olsa @ 2013-08-09 10:33 UTC (permalink / raw)
  To: Frederic Weisbecker
  Cc: Peter Zijlstra, LKML, Ingo Molnar, Namhyung Kim,
	Arnaldo Carvalho de Melo, Stephane Eranian

On Fri, Aug 02, 2013 at 06:29:55PM +0200, Frederic Weisbecker wrote:
> Freq events may not always be affine to a particular CPU. As such,
> account_event_cpu() may crash if we account per CPU a freq event
> that has event->cpu == -1.
> 
> To solve this, let's account freq events globally. In practice
> this doesn't change the picture much, because perf tools create
> per-task perf events with one event per CPU by default. Profiling a
> single CPU is usually a corner case, so there is not much point in
> optimizing for that case.
> 
> Reported-by: Jiri Olsa <jolsa@redhat.com>
> Suggested-by: Peter Zijlstra <peterz@infradead.org>
> Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
> Cc: Ingo Molnar <mingo@kernel.org>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Jiri Olsa <jolsa@redhat.com>
> Cc: Namhyung Kim <namhyung@kernel.org>
> Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
> Cc: Stephane Eranian <eranian@google.com>

no more OOPSes ;-)

Tested-by: Jiri Olsa <jolsa@redhat.com>


* [tip:timers/nohz] nohz: Include local CPU in full dynticks global kick
  2013-08-02 16:29 ` [PATCH 3/3] nohz: Include local CPU in full dynticks global kick Frederic Weisbecker
@ 2013-08-16 18:46   ` tip-bot for Frederic Weisbecker
  0 siblings, 0 replies; 9+ messages in thread
From: tip-bot for Frederic Weisbecker @ 2013-08-16 18:46 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: linux-kernel, hpa, mingo, peterz, efault, bp, paulmck, zhong,
	fweisbec, rostedt, khilman, tglx

Commit-ID:  c2e7fcf53c3cb02b4ada1c66a9bc8a4d97d58aba
Gitweb:     http://git.kernel.org/tip/c2e7fcf53c3cb02b4ada1c66a9bc8a4d97d58aba
Author:     Frederic Weisbecker <fweisbec@gmail.com>
AuthorDate: Fri, 2 Aug 2013 18:29:56 +0200
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Fri, 16 Aug 2013 17:55:33 +0200

nohz: Include local CPU in full dynticks global kick

tick_nohz_full_kick_all() is useful to notify all full dynticks
CPUs that there is a system state change to check out before
re-evaluating the need for the tick.

Unfortunately this is implemented using smp_call_function_many(),
which ignores the local CPU. This CPU also needs to re-evaluate
the tick.

on_each_cpu_mask() is not useful either, because we don't want to
re-evaluate the tick state in place but asynchronously from an IPI,
to avoid messing with any random locking scenario.

So let's call tick_nohz_full_kick() from tick_nohz_full_kick_all()
so that the usual irq work takes care of it.

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Li Zhong <zhong@linux.vnet.ibm.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Kevin Hilman <khilman@linaro.org>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1375460996-16329-4-git-send-email-fweisbec@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 kernel/time/tick-sched.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c
index adea6fc3..3612fc7 100644
--- a/kernel/time/tick-sched.c
+++ b/kernel/time/tick-sched.c
@@ -246,6 +246,7 @@ void tick_nohz_full_kick_all(void)
 	preempt_disable();
 	smp_call_function_many(tick_nohz_full_mask,
 			       nohz_full_kick_ipi, NULL, false);
+	tick_nohz_full_kick();
 	preempt_enable();
 }
 


* [tip:perf/core] perf: Roll back callchain buffer refcount under the callchain mutex
  2013-08-02 16:29 ` [PATCH 1/3] perf: Rollback callchain buffer refcount under the callchain mutex Frederic Weisbecker
  2013-08-09 10:28   ` Jiri Olsa
@ 2013-08-16 18:47   ` tip-bot for Frederic Weisbecker
  1 sibling, 0 replies; 9+ messages in thread
From: tip-bot for Frederic Weisbecker @ 2013-08-16 18:47 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: linux-kernel, eranian, acme, hpa, mingo, peterz, namhyung, jolsa,
	fweisbec, tglx

Commit-ID:  fc3b86d673e41ac66b4ba5b75a90c2fcafb90089
Gitweb:     http://git.kernel.org/tip/fc3b86d673e41ac66b4ba5b75a90c2fcafb90089
Author:     Frederic Weisbecker <fweisbec@gmail.com>
AuthorDate: Fri, 2 Aug 2013 18:29:54 +0200
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Fri, 16 Aug 2013 17:55:50 +0200

perf: Roll back callchain buffer refcount under the callchain mutex

When we fail to allocate the callchain buffers, we roll back the refcount
we took and return from get_callchain_buffers().

However, we take the refcount and allocate under the callchain lock,
but the rollback is done outside the lock.

As a result, while we roll back, a concurrent callchain user may
call get_callchain_buffers(), see the non-zero refcount and give up
because the buffers are NULL, without retrying the allocation itself.

The consequences aren't severe, but the behaviour is surprising enough,
and it's better to give subsequent callchain users a chance to retry
the allocation where we failed.

Reported-by: Jiri Olsa <jolsa@redhat.com>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Acked-by: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Stephane Eranian <eranian@google.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1375460996-16329-2-git-send-email-fweisbec@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 kernel/events/callchain.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/kernel/events/callchain.c b/kernel/events/callchain.c
index 76a8bc5..97b67df 100644
--- a/kernel/events/callchain.c
+++ b/kernel/events/callchain.c
@@ -116,10 +116,11 @@ int get_callchain_buffers(void)
 
 	err = alloc_callchain_buffers();
 exit:
-	mutex_unlock(&callchain_mutex);
 	if (err)
 		atomic_dec(&nr_callchain_events);
 
+	mutex_unlock(&callchain_mutex);
+
 	return err;
 }
 


* [tip:perf/core] perf: Account freq events globally
  2013-08-02 16:29 ` [PATCH 2/3] perf: Account freq events globally Frederic Weisbecker
  2013-08-09 10:33   ` Jiri Olsa
@ 2013-08-16 18:47   ` tip-bot for Frederic Weisbecker
  1 sibling, 0 replies; 9+ messages in thread
From: tip-bot for Frederic Weisbecker @ 2013-08-16 18:47 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: linux-kernel, eranian, acme, hpa, mingo, peterz, namhyung, jolsa,
	fweisbec, tglx

Commit-ID:  948b26b6ddd08a57cb95ebb0dc96fde2edd5c383
Gitweb:     http://git.kernel.org/tip/948b26b6ddd08a57cb95ebb0dc96fde2edd5c383
Author:     Frederic Weisbecker <fweisbec@gmail.com>
AuthorDate: Fri, 2 Aug 2013 18:29:55 +0200
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Fri, 16 Aug 2013 17:55:51 +0200

perf: Account freq events globally

Freq events may not always be affine to a particular CPU. As such,
account_event_cpu() may crash if we account per CPU a freq event
that has event->cpu == -1.

To solve this, let's account freq events globally. In practice
this doesn't change the picture much, because perf tools create
per-task perf events with one event per CPU by default. Profiling a
single CPU is usually a corner case, so there is not much point in
optimizing for that case.

Reported-by: Jiri Olsa <jolsa@redhat.com>
Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Tested-by: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Stephane Eranian <eranian@google.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1375460996-16329-3-git-send-email-fweisbec@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 kernel/events/core.c | 19 ++++++++-----------
 1 file changed, 8 insertions(+), 11 deletions(-)

diff --git a/kernel/events/core.c b/kernel/events/core.c
index e82e700..2e675e8 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -141,11 +141,11 @@ enum event_type_t {
 struct static_key_deferred perf_sched_events __read_mostly;
 static DEFINE_PER_CPU(atomic_t, perf_cgroup_events);
 static DEFINE_PER_CPU(atomic_t, perf_branch_stack_events);
-static DEFINE_PER_CPU(atomic_t, perf_freq_events);
 
 static atomic_t nr_mmap_events __read_mostly;
 static atomic_t nr_comm_events __read_mostly;
 static atomic_t nr_task_events __read_mostly;
+static atomic_t nr_freq_events __read_mostly;
 
 static LIST_HEAD(pmus);
 static DEFINE_MUTEX(pmus_lock);
@@ -1871,9 +1871,6 @@ static int  __perf_install_in_context(void *info)
 	perf_pmu_enable(cpuctx->ctx.pmu);
 	perf_ctx_unlock(cpuctx, task_ctx);
 
-	if (atomic_read(&__get_cpu_var(perf_freq_events)))
-		tick_nohz_full_kick();
-
 	return 0;
 }
 
@@ -2811,7 +2808,7 @@ done:
 #ifdef CONFIG_NO_HZ_FULL
 bool perf_event_can_stop_tick(void)
 {
-	if (atomic_read(&__get_cpu_var(perf_freq_events)) ||
+	if (atomic_read(&nr_freq_events) ||
 	    __this_cpu_read(perf_throttled_count))
 		return false;
 	else
@@ -3140,9 +3137,6 @@ static void unaccount_event_cpu(struct perf_event *event, int cpu)
 	}
 	if (is_cgroup_event(event))
 		atomic_dec(&per_cpu(perf_cgroup_events, cpu));
-
-	if (event->attr.freq)
-		atomic_dec(&per_cpu(perf_freq_events, cpu));
 }
 
 static void unaccount_event(struct perf_event *event)
@@ -3158,6 +3152,8 @@ static void unaccount_event(struct perf_event *event)
 		atomic_dec(&nr_comm_events);
 	if (event->attr.task)
 		atomic_dec(&nr_task_events);
+	if (event->attr.freq)
+		atomic_dec(&nr_freq_events);
 	if (is_cgroup_event(event))
 		static_key_slow_dec_deferred(&perf_sched_events);
 	if (has_branch_stack(event))
@@ -6489,9 +6485,6 @@ static void account_event_cpu(struct perf_event *event, int cpu)
 	}
 	if (is_cgroup_event(event))
 		atomic_inc(&per_cpu(perf_cgroup_events, cpu));
-
-	if (event->attr.freq)
-		atomic_inc(&per_cpu(perf_freq_events, cpu));
 }
 
 static void account_event(struct perf_event *event)
@@ -6507,6 +6500,10 @@ static void account_event(struct perf_event *event)
 		atomic_inc(&nr_comm_events);
 	if (event->attr.task)
 		atomic_inc(&nr_task_events);
+	if (event->attr.freq) {
+		if (atomic_inc_return(&nr_freq_events) == 1)
+			tick_nohz_full_kick_all();
+	}
 	if (has_branch_stack(event))
 		static_key_slow_inc(&perf_sched_events.key);
 	if (is_cgroup_event(event))

