* [PATCH 0/7] perf counter patches
@ 2009-05-23 16:28 Peter Zijlstra
2009-05-23 16:28 ` [PATCH 1/7] perf_counter: fix dynamic irq_period logging Peter Zijlstra
` (6 more replies)
0 siblings, 7 replies; 18+ messages in thread
From: Peter Zijlstra @ 2009-05-23 16:28 UTC (permalink / raw)
To: Ingo Molnar
Cc: Paul Mackerras, Corey Ashford, linux-kernel, Peter Zijlstra,
Arnaldo Carvalho de Melo, John Kacur
Not quite tested: since my upgrade to F11, the userspace bits seem
to segfault within the dynamic linker :/
--
^ permalink raw reply [flat|nested] 18+ messages in thread
* [PATCH 1/7] perf_counter: fix dynamic irq_period logging
2009-05-23 16:28 [PATCH 0/7] perf counter patches Peter Zijlstra
@ 2009-05-23 16:28 ` Peter Zijlstra
2009-05-24 7:01 ` [tip:perfcounters/core] perf_counter: Fix " tip-bot for Peter Zijlstra
2009-05-25 0:08 ` [PATCH 1/7] perf_counter: fix " Paul Mackerras
2009-05-23 16:28 ` [PATCH 2/7] perf_counter: sanitize counter->mutex Peter Zijlstra
` (5 subsequent siblings)
6 siblings, 2 replies; 18+ messages in thread
From: Peter Zijlstra @ 2009-05-23 16:28 UTC (permalink / raw)
To: Ingo Molnar
Cc: Paul Mackerras, Corey Ashford, linux-kernel, Peter Zijlstra,
Arnaldo Carvalho de Melo, John Kacur
[-- Attachment #1: perf_counter-fix-dyn-period.patch --]
[-- Type: text/plain, Size: 1806 bytes --]
We call perf_adjust_freq() from perf_counter_task_tick(), which is called
under the rq->lock, causing lock recursion. However, it is no longer required
to be called under the rq->lock, so remove it from under it.
Also, fix up some related comments.
LKML-Reference: <new-submission>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
---
include/linux/perf_counter.h | 1 +
kernel/perf_counter.c | 3 ++-
kernel/sched.c | 3 ++-
3 files changed, 5 insertions(+), 2 deletions(-)
Index: linux-2.6/include/linux/perf_counter.h
===================================================================
--- linux-2.6.orig/include/linux/perf_counter.h
+++ linux-2.6/include/linux/perf_counter.h
@@ -260,6 +260,7 @@ enum perf_event_type {
/*
* struct {
* struct perf_event_header header;
+ * u64 time;
* u64 irq_period;
* };
*/
Index: linux-2.6/kernel/perf_counter.c
===================================================================
--- linux-2.6.orig/kernel/perf_counter.c
+++ linux-2.6/kernel/perf_counter.c
@@ -2559,7 +2559,8 @@ void perf_counter_munmap(unsigned long a
}
/*
- *
+ * Log irq_period changes so that analyzing tools can re-normalize the
+ * event flow.
*/
static void perf_log_period(struct perf_counter *counter, u64 period)
Index: linux-2.6/kernel/sched.c
===================================================================
--- linux-2.6.orig/kernel/sched.c
+++ linux-2.6/kernel/sched.c
@@ -5113,9 +5113,10 @@ void scheduler_tick(void)
update_rq_clock(rq);
update_cpu_load(rq);
curr->sched_class->task_tick(rq, curr, 0);
- perf_counter_task_tick(curr, cpu);
spin_unlock(&rq->lock);
+ perf_counter_task_tick(curr, cpu);
+
#ifdef CONFIG_SMP
rq->idle_at_tick = idle_cpu(cpu);
trigger_load_balance(rq, cpu);
--
* [PATCH 2/7] perf_counter: sanitize counter->mutex
2009-05-23 16:28 [PATCH 0/7] perf counter patches Peter Zijlstra
2009-05-23 16:28 ` [PATCH 1/7] perf_counter: fix dynamic irq_period logging Peter Zijlstra
@ 2009-05-23 16:28 ` Peter Zijlstra
2009-05-24 7:01 ` [tip:perfcounters/core] perf_counter: Sanitize counter->mutex tip-bot for Peter Zijlstra
2009-05-23 16:28 ` [PATCH 3/7] perf_counter: sanitize context locking Peter Zijlstra
` (4 subsequent siblings)
6 siblings, 1 reply; 18+ messages in thread
From: Peter Zijlstra @ 2009-05-23 16:28 UTC (permalink / raw)
To: Ingo Molnar
Cc: Paul Mackerras, Corey Ashford, linux-kernel, Peter Zijlstra,
Arnaldo Carvalho de Melo, John Kacur
[-- Attachment #1: perf_counter-inherit-mutex.patch --]
[-- Type: text/plain, Size: 6137 bytes --]
s/counter->mutex/counter->child_mutex/ and make sure it's only used to
protect child_list.
The usage in __perf_counter_exit_task() doesn't appear to be problematic
since ctx->mutex also covers anything related to fd tear-down.
LKML-Reference: <new-submission>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
---
include/linux/perf_counter.h | 7 ++----
kernel/perf_counter.c | 47 +++++++++++++++++--------------------------
2 files changed, 22 insertions(+), 32 deletions(-)
Index: linux-2.6/include/linux/perf_counter.h
===================================================================
--- linux-2.6.orig/include/linux/perf_counter.h
+++ linux-2.6/include/linux/perf_counter.h
@@ -452,9 +452,6 @@ struct perf_counter {
struct perf_counter_context *ctx;
struct file *filp;
- struct perf_counter *parent;
- struct list_head child_list;
-
/*
* These accumulate total time (in nanoseconds) that children
* counters have been enabled and running, respectively.
@@ -465,7 +462,9 @@ struct perf_counter {
/*
* Protect attach/detach and child_list:
*/
- struct mutex mutex;
+ struct mutex child_mutex;
+ struct list_head child_list;
+ struct perf_counter *parent;
int oncpu;
int cpu;
Index: linux-2.6/kernel/perf_counter.c
===================================================================
--- linux-2.6.orig/kernel/perf_counter.c
+++ linux-2.6/kernel/perf_counter.c
@@ -111,6 +111,10 @@ static void put_ctx(struct perf_counter_
}
}
+/*
+ * Add a counter to the lists for its context.
+ * Must be called with ctx->mutex and ctx->lock held.
+ */
static void
list_add_counter(struct perf_counter *counter, struct perf_counter_context *ctx)
{
@@ -136,7 +140,7 @@ list_add_counter(struct perf_counter *co
/*
* Remove a counter from the lists for its context.
- * Must be called with counter->mutex and ctx->mutex held.
+ * Must be called with ctx->mutex and ctx->lock held.
*/
static void
list_del_counter(struct perf_counter *counter, struct perf_counter_context *ctx)
@@ -276,7 +280,7 @@ static void __perf_counter_remove_from_c
/*
* Remove the counter from a task's (or a CPU's) list of counters.
*
- * Must be called with counter->mutex and ctx->mutex held.
+ * Must be called with ctx->mutex held.
*
* CPU counters are removed with a smp call. For task counters we only
* call when the task is on a CPU.
@@ -1407,11 +1411,7 @@ static int perf_release(struct inode *in
file->private_data = NULL;
mutex_lock(&ctx->mutex);
- mutex_lock(&counter->mutex);
-
perf_counter_remove_from_context(counter);
-
- mutex_unlock(&counter->mutex);
mutex_unlock(&ctx->mutex);
free_counter(counter);
@@ -1437,7 +1437,7 @@ perf_read_hw(struct perf_counter *counte
if (counter->state == PERF_COUNTER_STATE_ERROR)
return 0;
- mutex_lock(&counter->mutex);
+ mutex_lock(&counter->child_mutex);
values[0] = perf_counter_read(counter);
n = 1;
if (counter->hw_event.read_format & PERF_FORMAT_TOTAL_TIME_ENABLED)
@@ -1446,7 +1446,7 @@ perf_read_hw(struct perf_counter *counte
if (counter->hw_event.read_format & PERF_FORMAT_TOTAL_TIME_RUNNING)
values[n++] = counter->total_time_running +
atomic64_read(&counter->child_total_time_running);
- mutex_unlock(&counter->mutex);
+ mutex_unlock(&counter->child_mutex);
if (count < n * sizeof(u64))
return -EINVAL;
@@ -1510,11 +1510,11 @@ static void perf_counter_for_each_child(
{
struct perf_counter *child;
- mutex_lock(&counter->mutex);
+ mutex_lock(&counter->child_mutex);
func(counter);
list_for_each_entry(child, &counter->child_list, child_list)
func(child);
- mutex_unlock(&counter->mutex);
+ mutex_unlock(&counter->child_mutex);
}
static void perf_counter_for_each(struct perf_counter *counter,
@@ -1522,11 +1522,11 @@ static void perf_counter_for_each(struct
{
struct perf_counter *child;
- mutex_lock(&counter->mutex);
+ mutex_lock(&counter->child_mutex);
perf_counter_for_each_sibling(counter, func);
list_for_each_entry(child, &counter->child_list, child_list)
perf_counter_for_each_sibling(child, func);
- mutex_unlock(&counter->mutex);
+ mutex_unlock(&counter->child_mutex);
}
static long perf_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
@@ -3106,7 +3106,9 @@ perf_counter_alloc(struct perf_counter_h
if (!group_leader)
group_leader = counter;
- mutex_init(&counter->mutex);
+ mutex_init(&counter->child_mutex);
+ INIT_LIST_HEAD(&counter->child_list);
+
INIT_LIST_HEAD(&counter->list_entry);
INIT_LIST_HEAD(&counter->event_entry);
INIT_LIST_HEAD(&counter->sibling_list);
@@ -3114,8 +3116,6 @@ perf_counter_alloc(struct perf_counter_h
mutex_init(&counter->mmap_mutex);
- INIT_LIST_HEAD(&counter->child_list);
-
counter->cpu = cpu;
counter->hw_event = *hw_event;
counter->group_leader = group_leader;
@@ -3346,10 +3346,9 @@ inherit_counter(struct perf_counter *par
/*
* Link this into the parent counter's child list
*/
- mutex_lock(&parent_counter->mutex);
+ mutex_lock(&parent_counter->child_mutex);
list_add_tail(&child_counter->child_list, &parent_counter->child_list);
-
- mutex_unlock(&parent_counter->mutex);
+ mutex_unlock(&parent_counter->child_mutex);
return child_counter;
}
@@ -3396,9 +3395,9 @@ static void sync_child_counter(struct pe
/*
* Remove this counter from the parent's list
*/
- mutex_lock(&parent_counter->mutex);
+ mutex_lock(&parent_counter->child_mutex);
list_del_init(&child_counter->child_list);
- mutex_unlock(&parent_counter->mutex);
+ mutex_unlock(&parent_counter->child_mutex);
/*
* Release the parent counter, if this was the last
@@ -3414,17 +3413,9 @@ __perf_counter_exit_task(struct task_str
{
struct perf_counter *parent_counter;
- /*
- * Protect against concurrent operations on child_counter
- * due its fd getting closed, etc.
- */
- mutex_lock(&child_counter->mutex);
-
update_counter_times(child_counter);
list_del_counter(child_counter, child_ctx);
- mutex_unlock(&child_counter->mutex);
-
parent_counter = child_counter->parent;
/*
* It can happen that parent exits first, and has counters
--
* [PATCH 3/7] perf_counter: sanitize context locking
2009-05-23 16:28 [PATCH 0/7] perf counter patches Peter Zijlstra
2009-05-23 16:28 ` [PATCH 1/7] perf_counter: fix dynamic irq_period logging Peter Zijlstra
2009-05-23 16:28 ` [PATCH 2/7] perf_counter: sanitize counter->mutex Peter Zijlstra
@ 2009-05-23 16:28 ` Peter Zijlstra
2009-05-24 7:01 ` [tip:perfcounters/core] perf_counter: Sanitize " tip-bot for Peter Zijlstra
2009-05-23 16:28 ` [PATCH 4/7] perf_counter: fix userspace build Peter Zijlstra
` (3 subsequent siblings)
6 siblings, 1 reply; 18+ messages in thread
From: Peter Zijlstra @ 2009-05-23 16:28 UTC (permalink / raw)
To: Ingo Molnar
Cc: Paul Mackerras, Corey Ashford, linux-kernel, Peter Zijlstra,
Arnaldo Carvalho de Melo, John Kacur
[-- Attachment #1: perf_counter-ctx-lock.patch --]
[-- Type: text/plain, Size: 1574 bytes --]
Ensure we're consistent with the context locks:

  context->mutex
    context->lock
      list_{add,del}_counter();

so that either lock is sufficient to stabilize the context.
LKML-Reference: <new-submission>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
---
kernel/perf_counter.c | 9 +++++++--
1 file changed, 7 insertions(+), 2 deletions(-)
Index: linux-2.6/kernel/perf_counter.c
===================================================================
--- linux-2.6.orig/kernel/perf_counter.c
+++ linux-2.6/kernel/perf_counter.c
@@ -597,6 +597,8 @@ static void add_counter_to_ctx(struct pe
/*
* Cross CPU call to install and enable a performance counter
+ *
+ * Must be called with ctx->mutex held
*/
static void __perf_install_in_context(void *info)
{
@@ -1496,13 +1498,13 @@ static void perf_counter_for_each_siblin
struct perf_counter_context *ctx = counter->ctx;
struct perf_counter *sibling;
- spin_lock_irq(&ctx->lock);
+ mutex_lock(&ctx->mutex);
counter = counter->group_leader;
func(counter);
list_for_each_entry(sibling, &counter->sibling_list, list_entry)
func(sibling);
- spin_unlock_irq(&ctx->lock);
+ mutex_unlock(&ctx->mutex);
}
static void perf_counter_for_each_child(struct perf_counter *counter,
@@ -3414,7 +3416,10 @@ __perf_counter_exit_task(struct task_str
struct perf_counter *parent_counter;
update_counter_times(child_counter);
+
+ spin_lock_irq(&child_ctx->lock);
list_del_counter(child_counter, child_ctx);
+ spin_unlock_irq(&child_ctx->lock);
parent_counter = child_counter->parent;
/*
--
* [PATCH 4/7] perf_counter: fix userspace build
2009-05-23 16:28 [PATCH 0/7] perf counter patches Peter Zijlstra
` (2 preceding siblings ...)
2009-05-23 16:28 ` [PATCH 3/7] perf_counter: sanitize context locking Peter Zijlstra
@ 2009-05-23 16:28 ` Peter Zijlstra
2009-05-24 7:01 ` [tip:perfcounters/core] perf_counter: Fix " tip-bot for Peter Zijlstra
2009-05-23 16:28 ` [PATCH 5/7] perf_counter: simplify context cleanup Peter Zijlstra
` (2 subsequent siblings)
6 siblings, 1 reply; 18+ messages in thread
From: Peter Zijlstra @ 2009-05-23 16:28 UTC (permalink / raw)
To: Ingo Molnar
Cc: Paul Mackerras, Corey Ashford, linux-kernel, Peter Zijlstra,
Arnaldo Carvalho de Melo, John Kacur
[-- Attachment #1: perf_counter-userspace-fix-build.patch --]
[-- Type: text/plain, Size: 3975 bytes --]
Recent userspace (F11) seems to already include the linux/unistd.h bits,
which means we cannot include the version in the kernel sources, because
the header guards are the same.
Ensure we include the kernel version first.
LKML-Reference: <new-submission>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
---
Documentation/perf_counter/builtin-record.c | 3 --
Documentation/perf_counter/builtin-stat.c | 5 ----
Documentation/perf_counter/builtin-top.c | 5 ----
Documentation/perf_counter/perf.h | 31 +++++++++++++++++-----------
4 files changed, 22 insertions(+), 22 deletions(-)
Index: linux-2.6/Documentation/perf_counter/builtin-record.c
===================================================================
--- linux-2.6.orig/Documentation/perf_counter/builtin-record.c
+++ linux-2.6/Documentation/perf_counter/builtin-record.c
@@ -1,5 +1,6 @@
+#include "perf.h"
#include "util/util.h"
#include <sys/types.h>
@@ -30,9 +31,7 @@
#include <linux/unistd.h>
#include <linux/types.h>
-#include "../../include/linux/perf_counter.h"
-#include "perf.h"
#define ALIGN(x,a) __ALIGN_MASK(x,(typeof(x))(a)-1)
#define __ALIGN_MASK(x,mask) (((x)+(mask))&~(mask))
Index: linux-2.6/Documentation/perf_counter/builtin-stat.c
===================================================================
--- linux-2.6.orig/Documentation/perf_counter/builtin-stat.c
+++ linux-2.6/Documentation/perf_counter/builtin-stat.c
@@ -61,6 +61,7 @@
* Released under the GPL v2. (and only v2, not any later version)
*/
+#include "perf.h"
#include "util/util.h"
#include <getopt.h>
@@ -83,10 +84,6 @@
#include <linux/unistd.h>
#include <linux/types.h>
-#include "../../include/linux/perf_counter.h"
-
-#include "perf.h"
-
#define EVENT_MASK_KERNEL 1
#define EVENT_MASK_USER 2
Index: linux-2.6/Documentation/perf_counter/builtin-top.c
===================================================================
--- linux-2.6.orig/Documentation/perf_counter/builtin-top.c
+++ linux-2.6/Documentation/perf_counter/builtin-top.c
@@ -42,6 +42,7 @@
* Released under the GPL v2. (and only v2, not any later version)
*/
+#include "perf.h"
#include "util/util.h"
#include <getopt.h>
@@ -64,10 +65,6 @@
#include <linux/unistd.h>
#include <linux/types.h>
-#include "../../include/linux/perf_counter.h"
-
-#include "perf.h"
-
static int system_wide = 0;
static int nr_counters = 0;
Index: linux-2.6/Documentation/perf_counter/perf.h
===================================================================
--- linux-2.6.orig/Documentation/perf_counter/perf.h
+++ linux-2.6/Documentation/perf_counter/perf.h
@@ -1,6 +1,25 @@
#ifndef _PERF_PERF_H
#define _PERF_PERF_H
+#if defined(__x86_64__) || defined(__i386__)
+#include "../../arch/x86/include/asm/unistd.h"
+#define rmb() asm volatile("lfence" ::: "memory")
+#define cpu_relax() asm volatile("rep; nop" ::: "memory");
+#endif
+
+#ifdef __powerpc__
+#include "../../arch/powerpc/include/asm/unistd.h"
+#define rmb() asm volatile ("sync" ::: "memory")
+#define cpu_relax() asm volatile ("" ::: "memory");
+#endif
+
+#include <time.h>
+#include <unistd.h>
+#include <sys/types.h>
+#include <sys/syscall.h>
+
+#include "../../include/linux/perf_counter.h"
+
/*
* prctl(PR_TASK_PERF_COUNTERS_DISABLE) will (cheaply) disable all
* counters in the current task.
@@ -26,18 +45,6 @@ static inline unsigned long long rdclock
#define __user
#define asmlinkage
-#if defined(__x86_64__) || defined(__i386__)
-#include "../../arch/x86/include/asm/unistd.h"
-#define rmb() asm volatile("lfence" ::: "memory")
-#define cpu_relax() asm volatile("rep; nop" ::: "memory");
-#endif
-
-#ifdef __powerpc__
-#include "../../arch/powerpc/include/asm/unistd.h"
-#define rmb() asm volatile ("sync" ::: "memory")
-#define cpu_relax() asm volatile ("" ::: "memory");
-#endif
-
#define unlikely(x) __builtin_expect(!!(x), 0)
#define min(x, y) ({ \
typeof(x) _min1 = (x); \
--
* [PATCH 5/7] perf_counter: simplify context cleanup
2009-05-23 16:28 [PATCH 0/7] perf counter patches Peter Zijlstra
` (3 preceding siblings ...)
2009-05-23 16:28 ` [PATCH 4/7] perf_counter: fix userspace build Peter Zijlstra
@ 2009-05-23 16:28 ` Peter Zijlstra
2009-05-24 7:01 ` [tip:perfcounters/core] perf_counter: Simplify " tip-bot for Peter Zijlstra
2009-05-23 16:29 ` [PATCH 6/7] perf_counter: change pctrl() behaviour Peter Zijlstra
2009-05-23 16:29 ` [PATCH 7/7] perf_counter: remove perf_counter_context::nr_enabled Peter Zijlstra
6 siblings, 1 reply; 18+ messages in thread
From: Peter Zijlstra @ 2009-05-23 16:28 UTC (permalink / raw)
To: Ingo Molnar
Cc: Paul Mackerras, Corey Ashford, linux-kernel, Peter Zijlstra,
Arnaldo Carvalho de Melo, John Kacur
[-- Attachment #1: perf_counter-cleanup-exit.patch --]
[-- Type: text/plain, Size: 775 bytes --]
Use perf_counter_remove_from_context() to remove counters from the context.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
---
kernel/perf_counter.c | 5 +----
1 file changed, 1 insertion(+), 4 deletions(-)
Index: linux-2.6/kernel/perf_counter.c
===================================================================
--- linux-2.6.orig/kernel/perf_counter.c
+++ linux-2.6/kernel/perf_counter.c
@@ -3416,10 +3416,7 @@ __perf_counter_exit_task(struct task_str
struct perf_counter *parent_counter;
update_counter_times(child_counter);
-
- spin_lock_irq(&child_ctx->lock);
- list_del_counter(child_counter, child_ctx);
- spin_unlock_irq(&child_ctx->lock);
+ perf_counter_remove_from_context(child_counter);
parent_counter = child_counter->parent;
/*
--
* [PATCH 6/7] perf_counter: change pctrl() behaviour
2009-05-23 16:28 [PATCH 0/7] perf counter patches Peter Zijlstra
` (4 preceding siblings ...)
2009-05-23 16:28 ` [PATCH 5/7] perf_counter: simplify context cleanup Peter Zijlstra
@ 2009-05-23 16:29 ` Peter Zijlstra
2009-05-23 19:04 ` Peter Zijlstra
2009-05-24 7:02 ` [tip:perfcounters/core] perf_counter: Change " tip-bot for Peter Zijlstra
2009-05-23 16:29 ` [PATCH 7/7] perf_counter: remove perf_counter_context::nr_enabled Peter Zijlstra
6 siblings, 2 replies; 18+ messages in thread
From: Peter Zijlstra @ 2009-05-23 16:29 UTC (permalink / raw)
To: Ingo Molnar
Cc: Paul Mackerras, Corey Ashford, linux-kernel, Peter Zijlstra,
Arnaldo Carvalho de Melo, John Kacur
[-- Attachment #1: perf_counter-ctrl-all.patch --]
[-- Type: text/plain, Size: 5335 bytes --]
Instead of enabling/disabling all counters acting on a particular task,
enable/disable all the counters we created.
LKML-Reference: <new-submission>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
---
include/linux/init_task.h | 10 +++++
include/linux/perf_counter.h | 3 +
include/linux/sched.h | 2 +
kernel/perf_counter.c | 84 ++++++++++---------------------------------
4 files changed, 36 insertions(+), 63 deletions(-)
Index: linux-2.6/include/linux/init_task.h
===================================================================
--- linux-2.6.orig/include/linux/init_task.h
+++ linux-2.6/include/linux/init_task.h
@@ -108,6 +108,15 @@ extern struct group_info init_groups;
extern struct cred init_cred;
+#ifdef CONFIG_PERF_COUNTERS
+# define INIT_PERF_COUNTERS(tsk) \
+ .perf_counter_mutex = \
+ __MUTEX_INITIALIZER(tsk.perf_counter_mutex), \
+ .perf_counter_list = LIST_HEAD_INIT(tsk.perf_counter_list),
+#else
+# define INIT_PERF_COUNTERS(tsk)
+#endif
+
/*
* INIT_TASK is used to set up the first task table, touch at
* your own risk!. Base=0, limit=0x1fffff (=2MB)
@@ -171,6 +180,7 @@ extern struct cred init_cred;
}, \
.dirties = INIT_PROP_LOCAL_SINGLE(dirties), \
INIT_IDS \
+ INIT_PERF_COUNTERS(tsk) \
INIT_TRACE_IRQFLAGS \
INIT_LOCKDEP \
INIT_FTRACE_GRAPH \
Index: linux-2.6/include/linux/perf_counter.h
===================================================================
--- linux-2.6.orig/include/linux/perf_counter.h
+++ linux-2.6/include/linux/perf_counter.h
@@ -469,6 +469,9 @@ struct perf_counter {
int oncpu;
int cpu;
+ struct list_head owner_entry;
+ struct task_struct *owner;
+
/* mmap bits */
struct mutex mmap_mutex;
atomic_t mmap_count;
Index: linux-2.6/include/linux/sched.h
===================================================================
--- linux-2.6.orig/include/linux/sched.h
+++ linux-2.6/include/linux/sched.h
@@ -1403,6 +1403,8 @@ struct task_struct {
#endif
#ifdef CONFIG_PERF_COUNTERS
struct perf_counter_context *perf_counter_ctxp;
+ struct mutex perf_counter_mutex;
+ struct list_head perf_counter_list;
#endif
#ifdef CONFIG_NUMA
struct mempolicy *mempolicy;
Index: linux-2.6/kernel/perf_counter.c
===================================================================
--- linux-2.6.orig/kernel/perf_counter.c
+++ linux-2.6/kernel/perf_counter.c
@@ -1076,79 +1076,26 @@ static void perf_counter_cpu_sched_in(st
__perf_counter_sched_in(ctx, cpuctx, cpu);
}
-int perf_counter_task_disable(void)
+int perf_counter_task_enable(void)
{
- struct task_struct *curr = current;
- struct perf_counter_context *ctx = curr->perf_counter_ctxp;
struct perf_counter *counter;
- unsigned long flags;
-
- if (!ctx || !ctx->nr_counters)
- return 0;
-
- local_irq_save(flags);
-
- __perf_counter_task_sched_out(ctx);
-
- spin_lock(&ctx->lock);
-
- /*
- * Disable all the counters:
- */
- perf_disable();
-
- list_for_each_entry(counter, &ctx->counter_list, list_entry) {
- if (counter->state != PERF_COUNTER_STATE_ERROR) {
- update_group_times(counter);
- counter->state = PERF_COUNTER_STATE_OFF;
- }
- }
- perf_enable();
-
- spin_unlock_irqrestore(&ctx->lock, flags);
+ mutex_lock(¤t->perf_counter_mutex);
+ list_for_each_entry(counter, ¤t->perf_counter_list, owner_entry)
+ perf_counter_enable(counter);
+ mutex_unlock(¤t->perf_counter_mutex);
return 0;
}
-int perf_counter_task_enable(void)
+int perf_counter_task_disable(void)
{
- struct task_struct *curr = current;
- struct perf_counter_context *ctx = curr->perf_counter_ctxp;
struct perf_counter *counter;
- unsigned long flags;
- int cpu;
- if (!ctx || !ctx->nr_counters)
- return 0;
-
- local_irq_save(flags);
- cpu = smp_processor_id();
-
- __perf_counter_task_sched_out(ctx);
-
- spin_lock(&ctx->lock);
-
- /*
- * Disable all the counters:
- */
- perf_disable();
-
- list_for_each_entry(counter, &ctx->counter_list, list_entry) {
- if (counter->state > PERF_COUNTER_STATE_OFF)
- continue;
- counter->state = PERF_COUNTER_STATE_INACTIVE;
- counter->tstamp_enabled =
- ctx->time - counter->total_time_enabled;
- counter->hw_event.disabled = 0;
- }
- perf_enable();
-
- spin_unlock(&ctx->lock);
-
- perf_counter_task_sched_in(curr, cpu);
-
- local_irq_restore(flags);
+ mutex_lock(¤t->perf_counter_mutex);
+ list_for_each_entry(counter, ¤t->perf_counter_list, owner_entry)
+ perf_counter_disable(counter);
+ mutex_unlock(¤t->perf_counter_mutex);
return 0;
}
@@ -1416,6 +1363,11 @@ static int perf_release(struct inode *in
perf_counter_remove_from_context(counter);
mutex_unlock(&ctx->mutex);
+ mutex_lock(&counter->owner->perf_counter_mutex);
+ list_del_init(&counter->owner_entry);
+ mutex_unlock(&counter->owner->perf_counter_mutex);
+ put_task_struct(counter->owner);
+
free_counter(counter);
put_context(ctx);
@@ -3272,6 +3224,12 @@ SYSCALL_DEFINE5(perf_counter_open,
perf_install_in_context(ctx, counter, cpu);
mutex_unlock(&ctx->mutex);
+ counter->owner = current;
+ get_task_struct(current);
+ mutex_lock(¤t->perf_counter_mutex);
+ list_add_tail(&counter->owner_entry, ¤t->perf_counter_list);
+ mutex_unlock(¤t->perf_counter_mutex);
+
fput_light(counter_file, fput_needed2);
out_fput:
--
* [PATCH 7/7] perf_counter: remove perf_counter_context::nr_enabled
2009-05-23 16:28 [PATCH 0/7] perf counter patches Peter Zijlstra
` (5 preceding siblings ...)
2009-05-23 16:29 ` [PATCH 6/7] perf_counter: change pctrl() behaviour Peter Zijlstra
@ 2009-05-23 16:29 ` Peter Zijlstra
2009-05-24 7:02 ` [tip:perfcounters/core] perf_counter: Remove perf_counter_context::nr_enabled tip-bot for Peter Zijlstra
6 siblings, 1 reply; 18+ messages in thread
From: Peter Zijlstra @ 2009-05-23 16:29 UTC (permalink / raw)
To: Ingo Molnar
Cc: Paul Mackerras, Corey Ashford, linux-kernel, Peter Zijlstra,
Arnaldo Carvalho de Melo, John Kacur
[-- Attachment #1: perf_counter-ctrl-all-cleanup.patch --]
[-- Type: text/plain, Size: 2787 bytes --]
Now that pctrl() no longer disables other people's counters, remove the
PMU cache code that deals with that.
LKML-Reference: <new-submission>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
---
include/linux/perf_counter.h | 1 -
kernel/perf_counter.c | 11 +----------
2 files changed, 1 insertion(+), 11 deletions(-)
Index: linux-2.6/include/linux/perf_counter.h
===================================================================
--- linux-2.6.orig/include/linux/perf_counter.h
+++ linux-2.6/include/linux/perf_counter.h
@@ -516,7 +516,6 @@ struct perf_counter_context {
struct list_head event_list;
int nr_counters;
int nr_active;
- int nr_enabled;
int is_active;
atomic_t refcount;
struct task_struct *task;
Index: linux-2.6/kernel/perf_counter.c
===================================================================
--- linux-2.6.orig/kernel/perf_counter.c
+++ linux-2.6/kernel/perf_counter.c
@@ -134,8 +134,6 @@ list_add_counter(struct perf_counter *co
list_add_rcu(&counter->event_entry, &ctx->event_list);
ctx->nr_counters++;
- if (counter->state >= PERF_COUNTER_STATE_INACTIVE)
- ctx->nr_enabled++;
}
/*
@@ -150,8 +148,6 @@ list_del_counter(struct perf_counter *co
if (list_empty(&counter->list_entry))
return;
ctx->nr_counters--;
- if (counter->state >= PERF_COUNTER_STATE_INACTIVE)
- ctx->nr_enabled--;
list_del_init(&counter->list_entry);
list_del_rcu(&counter->event_entry);
@@ -406,7 +402,6 @@ static void __perf_counter_disable(void
else
counter_sched_out(counter, cpuctx, ctx);
counter->state = PERF_COUNTER_STATE_OFF;
- ctx->nr_enabled--;
}
spin_unlock_irqrestore(&ctx->lock, flags);
@@ -448,7 +443,6 @@ static void perf_counter_disable(struct
if (counter->state == PERF_COUNTER_STATE_INACTIVE) {
update_counter_times(counter);
counter->state = PERF_COUNTER_STATE_OFF;
- ctx->nr_enabled--;
}
spin_unlock_irq(&ctx->lock);
@@ -759,7 +753,6 @@ static void __perf_counter_enable(void *
goto unlock;
counter->state = PERF_COUNTER_STATE_INACTIVE;
counter->tstamp_enabled = ctx->time - counter->total_time_enabled;
- ctx->nr_enabled++;
/*
* If the counter is in a group and isn't the group leader,
@@ -850,7 +843,6 @@ static void perf_counter_enable(struct p
counter->state = PERF_COUNTER_STATE_INACTIVE;
counter->tstamp_enabled =
ctx->time - counter->total_time_enabled;
- ctx->nr_enabled++;
}
out:
spin_unlock_irq(&ctx->lock);
@@ -910,8 +902,7 @@ static int context_equiv(struct perf_cou
struct perf_counter_context *ctx2)
{
return ctx1->parent_ctx && ctx1->parent_ctx == ctx2->parent_ctx
- && ctx1->parent_gen == ctx2->parent_gen
- && ctx1->nr_enabled == ctx2->nr_enabled;
+ && ctx1->parent_gen == ctx2->parent_gen;
}
/*
--
* Re: [PATCH 6/7] perf_counter: change pctrl() behaviour
2009-05-23 16:29 ` [PATCH 6/7] perf_counter: change pctrl() behaviour Peter Zijlstra
@ 2009-05-23 19:04 ` Peter Zijlstra
2009-05-24 7:02 ` [tip:perfcounters/core] perf_counter: Change " tip-bot for Peter Zijlstra
1 sibling, 0 replies; 18+ messages in thread
From: Peter Zijlstra @ 2009-05-23 19:04 UTC (permalink / raw)
To: Ingo Molnar
Cc: Paul Mackerras, Corey Ashford, linux-kernel,
Arnaldo Carvalho de Melo, John Kacur
On Sat, 2009-05-23 at 18:29 +0200, Peter Zijlstra wrote:
> plain text document attachment (perf_counter-ctrl-all.patch)
> Instead of enabling/disabling all counters acting on a particular task,
> enable/disable all the counters we created.
>
> LKML-Reference: <new-submission>
> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
> ---
Please fold in the below so as to avoid an explosion on the first
sys_perf_counter_open().
Index: linux-2.6/kernel/perf_counter.c
===================================================================
--- linux-2.6.orig/kernel/perf_counter.c
+++ linux-2.6/kernel/perf_counter.c
@@ -3446,6 +3446,9 @@ void perf_counter_init_task(struct task_
child->perf_counter_ctxp = NULL;
+ mutex_init(&child->perf_counter_mutex);
+ INIT_LIST_HEAD(&child->perf_counter_list);
+
/*
* This is executed from the parent task context, so inherit
* counters that have been marked for cloning.
* [tip:perfcounters/core] perf_counter: Fix dynamic irq_period logging
2009-05-23 16:28 ` [PATCH 1/7] perf_counter: fix dynamic irq_period logging Peter Zijlstra
@ 2009-05-24 7:01 ` tip-bot for Peter Zijlstra
2009-05-25 0:08 ` [PATCH 1/7] perf_counter: fix " Paul Mackerras
1 sibling, 0 replies; 18+ messages in thread
From: tip-bot for Peter Zijlstra @ 2009-05-24 7:01 UTC (permalink / raw)
To: linux-tip-commits
Cc: linux-kernel, acme, paulus, hpa, mingo, jkacur, a.p.zijlstra,
tglx, cjashfor, mingo
Commit-ID: e220d2dcb944c5c488b6855d15ec66d76900514f
Gitweb: http://git.kernel.org/tip/e220d2dcb944c5c488b6855d15ec66d76900514f
Author: Peter Zijlstra <a.p.zijlstra@chello.nl>
AuthorDate: Sat, 23 May 2009 18:28:55 +0200
Committer: Ingo Molnar <mingo@elte.hu>
CommitDate: Sat, 23 May 2009 19:37:44 +0200
perf_counter: Fix dynamic irq_period logging
We call perf_adjust_freq() from perf_counter_task_tick(), which
is called under the rq->lock, causing lock recursion.
However, it's no longer required to be called under the
rq->lock, so remove it from under it.
Also, fix up some related comments.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: John Kacur <jkacur@redhat.com>
LKML-Reference: <20090523163012.476197912@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
---
include/linux/perf_counter.h | 1 +
kernel/perf_counter.c | 3 ++-
kernel/sched.c | 3 ++-
3 files changed, 5 insertions(+), 2 deletions(-)
diff --git a/include/linux/perf_counter.h b/include/linux/perf_counter.h
index 2eedae8..23ddd29 100644
--- a/include/linux/perf_counter.h
+++ b/include/linux/perf_counter.h
@@ -260,6 +260,7 @@ enum perf_event_type {
/*
* struct {
* struct perf_event_header header;
+ * u64 time;
* u64 irq_period;
* };
*/
diff --git a/kernel/perf_counter.c b/kernel/perf_counter.c
index c100554..2f410ea 100644
--- a/kernel/perf_counter.c
+++ b/kernel/perf_counter.c
@@ -2559,7 +2559,8 @@ void perf_counter_munmap(unsigned long addr, unsigned long len,
}
/*
- *
+ * Log irq_period changes so that analyzing tools can re-normalize the
+ * event flow.
*/
static void perf_log_period(struct perf_counter *counter, u64 period)
diff --git a/kernel/sched.c b/kernel/sched.c
index 4c0d58b..ad079f0 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -4875,9 +4875,10 @@ void scheduler_tick(void)
update_rq_clock(rq);
update_cpu_load(rq);
curr->sched_class->task_tick(rq, curr, 0);
- perf_counter_task_tick(curr, cpu);
spin_unlock(&rq->lock);
+ perf_counter_task_tick(curr, cpu);
+
#ifdef CONFIG_SMP
rq->idle_at_tick = idle_cpu(cpu);
trigger_load_balance(rq, cpu);
* [tip:perfcounters/core] perf_counter: Sanitize counter->mutex
2009-05-23 16:28 ` [PATCH 2/7] perf_counter: sanitize counter->mutex Peter Zijlstra
@ 2009-05-24 7:01 ` tip-bot for Peter Zijlstra
0 siblings, 0 replies; 18+ messages in thread
From: tip-bot for Peter Zijlstra @ 2009-05-24 7:01 UTC (permalink / raw)
To: linux-tip-commits
Cc: linux-kernel, acme, paulus, hpa, mingo, jkacur, a.p.zijlstra,
tglx, cjashfor, mingo
Commit-ID: fccc714b3148ab9741fafc1e90c3876d50df6093
Gitweb: http://git.kernel.org/tip/fccc714b3148ab9741fafc1e90c3876d50df6093
Author: Peter Zijlstra <a.p.zijlstra@chello.nl>
AuthorDate: Sat, 23 May 2009 18:28:56 +0200
Committer: Ingo Molnar <mingo@elte.hu>
CommitDate: Sat, 23 May 2009 19:37:45 +0200
perf_counter: Sanitize counter->mutex
s/counter->mutex/counter->child_mutex/ and make sure it's only
used to protect child_list.
The usage in __perf_counter_exit_task() doesn't appear to be
problematic since ctx->mutex also covers anything related to fd
tear-down.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: John Kacur <jkacur@redhat.com>
LKML-Reference: <20090523163012.533186528@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
---
include/linux/perf_counter.h | 7 ++---
kernel/perf_counter.c | 47 +++++++++++++++++-------------------------
2 files changed, 22 insertions(+), 32 deletions(-)
diff --git a/include/linux/perf_counter.h b/include/linux/perf_counter.h
index 23ddd29..4ab8050 100644
--- a/include/linux/perf_counter.h
+++ b/include/linux/perf_counter.h
@@ -452,9 +452,6 @@ struct perf_counter {
struct perf_counter_context *ctx;
struct file *filp;
- struct perf_counter *parent;
- struct list_head child_list;
-
/*
* These accumulate total time (in nanoseconds) that children
* counters have been enabled and running, respectively.
@@ -465,7 +462,9 @@ struct perf_counter {
/*
* Protect attach/detach and child_list:
*/
- struct mutex mutex;
+ struct mutex child_mutex;
+ struct list_head child_list;
+ struct perf_counter *parent;
int oncpu;
int cpu;
diff --git a/kernel/perf_counter.c b/kernel/perf_counter.c
index 2f410ea..679c3b5 100644
--- a/kernel/perf_counter.c
+++ b/kernel/perf_counter.c
@@ -111,6 +111,10 @@ static void put_ctx(struct perf_counter_context *ctx)
}
}
+/*
+ * Add a counter from the lists for its context.
+ * Must be called with ctx->mutex and ctx->lock held.
+ */
static void
list_add_counter(struct perf_counter *counter, struct perf_counter_context *ctx)
{
@@ -136,7 +140,7 @@ list_add_counter(struct perf_counter *counter, struct perf_counter_context *ctx)
/*
* Remove a counter from the lists for its context.
- * Must be called with counter->mutex and ctx->mutex held.
+ * Must be called with ctx->mutex and ctx->lock held.
*/
static void
list_del_counter(struct perf_counter *counter, struct perf_counter_context *ctx)
@@ -276,7 +280,7 @@ static void __perf_counter_remove_from_context(void *info)
/*
* Remove the counter from a task's (or a CPU's) list of counters.
*
- * Must be called with counter->mutex and ctx->mutex held.
+ * Must be called with ctx->mutex held.
*
* CPU counters are removed with a smp call. For task counters we only
* call when the task is on a CPU.
@@ -1407,11 +1411,7 @@ static int perf_release(struct inode *inode, struct file *file)
file->private_data = NULL;
mutex_lock(&ctx->mutex);
- mutex_lock(&counter->mutex);
-
perf_counter_remove_from_context(counter);
-
- mutex_unlock(&counter->mutex);
mutex_unlock(&ctx->mutex);
free_counter(counter);
@@ -1437,7 +1437,7 @@ perf_read_hw(struct perf_counter *counter, char __user *buf, size_t count)
if (counter->state == PERF_COUNTER_STATE_ERROR)
return 0;
- mutex_lock(&counter->mutex);
+ mutex_lock(&counter->child_mutex);
values[0] = perf_counter_read(counter);
n = 1;
if (counter->hw_event.read_format & PERF_FORMAT_TOTAL_TIME_ENABLED)
@@ -1446,7 +1446,7 @@ perf_read_hw(struct perf_counter *counter, char __user *buf, size_t count)
if (counter->hw_event.read_format & PERF_FORMAT_TOTAL_TIME_RUNNING)
values[n++] = counter->total_time_running +
atomic64_read(&counter->child_total_time_running);
- mutex_unlock(&counter->mutex);
+ mutex_unlock(&counter->child_mutex);
if (count < n * sizeof(u64))
return -EINVAL;
@@ -1510,11 +1510,11 @@ static void perf_counter_for_each_child(struct perf_counter *counter,
{
struct perf_counter *child;
- mutex_lock(&counter->mutex);
+ mutex_lock(&counter->child_mutex);
func(counter);
list_for_each_entry(child, &counter->child_list, child_list)
func(child);
- mutex_unlock(&counter->mutex);
+ mutex_unlock(&counter->child_mutex);
}
static void perf_counter_for_each(struct perf_counter *counter,
@@ -1522,11 +1522,11 @@ static void perf_counter_for_each(struct perf_counter *counter,
{
struct perf_counter *child;
- mutex_lock(&counter->mutex);
+ mutex_lock(&counter->child_mutex);
perf_counter_for_each_sibling(counter, func);
list_for_each_entry(child, &counter->child_list, child_list)
perf_counter_for_each_sibling(child, func);
- mutex_unlock(&counter->mutex);
+ mutex_unlock(&counter->child_mutex);
}
static long perf_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
@@ -3106,7 +3106,9 @@ perf_counter_alloc(struct perf_counter_hw_event *hw_event,
if (!group_leader)
group_leader = counter;
- mutex_init(&counter->mutex);
+ mutex_init(&counter->child_mutex);
+ INIT_LIST_HEAD(&counter->child_list);
+
INIT_LIST_HEAD(&counter->list_entry);
INIT_LIST_HEAD(&counter->event_entry);
INIT_LIST_HEAD(&counter->sibling_list);
@@ -3114,8 +3116,6 @@ perf_counter_alloc(struct perf_counter_hw_event *hw_event,
mutex_init(&counter->mmap_mutex);
- INIT_LIST_HEAD(&counter->child_list);
-
counter->cpu = cpu;
counter->hw_event = *hw_event;
counter->group_leader = group_leader;
@@ -3346,10 +3346,9 @@ inherit_counter(struct perf_counter *parent_counter,
/*
* Link this into the parent counter's child list
*/
- mutex_lock(&parent_counter->mutex);
+ mutex_lock(&parent_counter->child_mutex);
list_add_tail(&child_counter->child_list, &parent_counter->child_list);
-
- mutex_unlock(&parent_counter->mutex);
+ mutex_unlock(&parent_counter->child_mutex);
return child_counter;
}
@@ -3396,9 +3395,9 @@ static void sync_child_counter(struct perf_counter *child_counter,
/*
* Remove this counter from the parent's list
*/
- mutex_lock(&parent_counter->mutex);
+ mutex_lock(&parent_counter->child_mutex);
list_del_init(&child_counter->child_list);
- mutex_unlock(&parent_counter->mutex);
+ mutex_unlock(&parent_counter->child_mutex);
/*
* Release the parent counter, if this was the last
@@ -3414,17 +3413,9 @@ __perf_counter_exit_task(struct task_struct *child,
{
struct perf_counter *parent_counter;
- /*
- * Protect against concurrent operations on child_counter
- * due its fd getting closed, etc.
- */
- mutex_lock(&child_counter->mutex);
-
update_counter_times(child_counter);
list_del_counter(child_counter, child_ctx);
- mutex_unlock(&child_counter->mutex);
-
parent_counter = child_counter->parent;
/*
* It can happen that parent exits first, and has counters
* [tip:perfcounters/core] perf_counter: Sanitize context locking
2009-05-23 16:28 ` [PATCH 3/7] perf_counter: sanitize context locking Peter Zijlstra
@ 2009-05-24 7:01 ` tip-bot for Peter Zijlstra
0 siblings, 0 replies; 18+ messages in thread
From: tip-bot for Peter Zijlstra @ 2009-05-24 7:01 UTC (permalink / raw)
To: linux-tip-commits
Cc: linux-kernel, acme, paulus, hpa, mingo, jkacur, a.p.zijlstra,
tglx, cjashfor, mingo
Commit-ID: 682076ae1de0aba9c2da509f7b19dc03e30a6e1f
Gitweb: http://git.kernel.org/tip/682076ae1de0aba9c2da509f7b19dc03e30a6e1f
Author: Peter Zijlstra <a.p.zijlstra@chello.nl>
AuthorDate: Sat, 23 May 2009 18:28:57 +0200
Committer: Ingo Molnar <mingo@elte.hu>
CommitDate: Sat, 23 May 2009 19:37:46 +0200
perf_counter: Sanitize context locking
Ensure we're consistent with the context locks:

  context->mutex
    context->lock
      list_{add,del}_counter();

so that either lock is sufficient to stabilize the context.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: John Kacur <jkacur@redhat.com>
LKML-Reference: <20090523163012.618790733@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
---
kernel/perf_counter.c | 9 +++++++--
1 files changed, 7 insertions(+), 2 deletions(-)
diff --git a/kernel/perf_counter.c b/kernel/perf_counter.c
index 679c3b5..d162d2f 100644
--- a/kernel/perf_counter.c
+++ b/kernel/perf_counter.c
@@ -597,6 +597,8 @@ static void add_counter_to_ctx(struct perf_counter *counter,
/*
* Cross CPU call to install and enable a performance counter
+ *
+ * Must be called with ctx->mutex held
*/
static void __perf_install_in_context(void *info)
{
@@ -1496,13 +1498,13 @@ static void perf_counter_for_each_sibling(struct perf_counter *counter,
struct perf_counter_context *ctx = counter->ctx;
struct perf_counter *sibling;
- spin_lock_irq(&ctx->lock);
+ mutex_lock(&ctx->mutex);
counter = counter->group_leader;
func(counter);
list_for_each_entry(sibling, &counter->sibling_list, list_entry)
func(sibling);
- spin_unlock_irq(&ctx->lock);
+ mutex_unlock(&ctx->mutex);
}
static void perf_counter_for_each_child(struct perf_counter *counter,
@@ -3414,7 +3416,10 @@ __perf_counter_exit_task(struct task_struct *child,
struct perf_counter *parent_counter;
update_counter_times(child_counter);
+
+ spin_lock_irq(&child_ctx->lock);
list_del_counter(child_counter, child_ctx);
+ spin_unlock_irq(&child_ctx->lock);
parent_counter = child_counter->parent;
/*
* [tip:perfcounters/core] perf_counter: Fix userspace build
2009-05-23 16:28 ` [PATCH 4/7] perf_counter: fix userspace build Peter Zijlstra
@ 2009-05-24 7:01 ` tip-bot for Peter Zijlstra
0 siblings, 0 replies; 18+ messages in thread
From: tip-bot for Peter Zijlstra @ 2009-05-24 7:01 UTC (permalink / raw)
To: linux-tip-commits
Cc: linux-kernel, acme, paulus, hpa, mingo, jkacur, a.p.zijlstra,
tglx, cjashfor, mingo
Commit-ID: 1a482f38c5aafeb3576079a38a5b21b46619f3d2
Gitweb: http://git.kernel.org/tip/1a482f38c5aafeb3576079a38a5b21b46619f3d2
Author: Peter Zijlstra <a.p.zijlstra@chello.nl>
AuthorDate: Sat, 23 May 2009 18:28:58 +0200
Committer: Ingo Molnar <mingo@elte.hu>
CommitDate: Sat, 23 May 2009 19:37:46 +0200
perf_counter: Fix userspace build
Recent userspace (F11) seems to already include the
linux/unistd.h bits, which means we cannot include the version
in the kernel sources because the header guards are the same.
Ensure we include the kernel version first.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: John Kacur <jkacur@redhat.com>
LKML-Reference: <20090523163012.739756497@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
---
Documentation/perf_counter/builtin-record.c | 3 +-
Documentation/perf_counter/builtin-stat.c | 5 +---
Documentation/perf_counter/builtin-top.c | 5 +---
Documentation/perf_counter/perf.h | 31 ++++++++++++++++----------
4 files changed, 22 insertions(+), 22 deletions(-)
diff --git a/Documentation/perf_counter/builtin-record.c b/Documentation/perf_counter/builtin-record.c
index efb8759..1b19f18 100644
--- a/Documentation/perf_counter/builtin-record.c
+++ b/Documentation/perf_counter/builtin-record.c
@@ -1,5 +1,6 @@
+#include "perf.h"
#include "util/util.h"
#include <sys/types.h>
@@ -30,9 +31,7 @@
#include <linux/unistd.h>
#include <linux/types.h>
-#include "../../include/linux/perf_counter.h"
-#include "perf.h"
#define ALIGN(x,a) __ALIGN_MASK(x,(typeof(x))(a)-1)
#define __ALIGN_MASK(x,mask) (((x)+(mask))&~(mask))
diff --git a/Documentation/perf_counter/builtin-stat.c b/Documentation/perf_counter/builtin-stat.c
index 03518d7..8ae01d5 100644
--- a/Documentation/perf_counter/builtin-stat.c
+++ b/Documentation/perf_counter/builtin-stat.c
@@ -61,6 +61,7 @@
* Released under the GPL v2. (and only v2, not any later version)
*/
+#include "perf.h"
#include "util/util.h"
#include <getopt.h>
@@ -83,10 +84,6 @@
#include <linux/unistd.h>
#include <linux/types.h>
-#include "../../include/linux/perf_counter.h"
-
-#include "perf.h"
-
#define EVENT_MASK_KERNEL 1
#define EVENT_MASK_USER 2
diff --git a/Documentation/perf_counter/builtin-top.c b/Documentation/perf_counter/builtin-top.c
index 814b2e4..a3216a6 100644
--- a/Documentation/perf_counter/builtin-top.c
+++ b/Documentation/perf_counter/builtin-top.c
@@ -42,6 +42,7 @@
* Released under the GPL v2. (and only v2, not any later version)
*/
+#include "perf.h"
#include "util/util.h"
#include <getopt.h>
@@ -64,10 +65,6 @@
#include <linux/unistd.h>
#include <linux/types.h>
-#include "../../include/linux/perf_counter.h"
-
-#include "perf.h"
-
static int system_wide = 0;
static int nr_counters = 0;
diff --git a/Documentation/perf_counter/perf.h b/Documentation/perf_counter/perf.h
index 81a7374..a517683 100644
--- a/Documentation/perf_counter/perf.h
+++ b/Documentation/perf_counter/perf.h
@@ -1,6 +1,25 @@
#ifndef _PERF_PERF_H
#define _PERF_PERF_H
+#if defined(__x86_64__) || defined(__i386__)
+#include "../../arch/x86/include/asm/unistd.h"
+#define rmb() asm volatile("lfence" ::: "memory")
+#define cpu_relax() asm volatile("rep; nop" ::: "memory");
+#endif
+
+#ifdef __powerpc__
+#include "../../arch/powerpc/include/asm/unistd.h"
+#define rmb() asm volatile ("sync" ::: "memory")
+#define cpu_relax() asm volatile ("" ::: "memory");
+#endif
+
+#include <time.h>
+#include <unistd.h>
+#include <sys/types.h>
+#include <sys/syscall.h>
+
+#include "../../include/linux/perf_counter.h"
+
/*
* prctl(PR_TASK_PERF_COUNTERS_DISABLE) will (cheaply) disable all
* counters in the current task.
@@ -26,18 +45,6 @@ static inline unsigned long long rdclock(void)
#define __user
#define asmlinkage
-#if defined(__x86_64__) || defined(__i386__)
-#include "../../arch/x86/include/asm/unistd.h"
-#define rmb() asm volatile("lfence" ::: "memory")
-#define cpu_relax() asm volatile("rep; nop" ::: "memory");
-#endif
-
-#ifdef __powerpc__
-#include "../../arch/powerpc/include/asm/unistd.h"
-#define rmb() asm volatile ("sync" ::: "memory")
-#define cpu_relax() asm volatile ("" ::: "memory");
-#endif
-
#define unlikely(x) __builtin_expect(!!(x), 0)
#define min(x, y) ({ \
typeof(x) _min1 = (x); \
* [tip:perfcounters/core] perf_counter: Simplify context cleanup
2009-05-23 16:28 ` [PATCH 5/7] perf_counter: simplify context cleanup Peter Zijlstra
@ 2009-05-24 7:01 ` tip-bot for Peter Zijlstra
0 siblings, 0 replies; 18+ messages in thread
From: tip-bot for Peter Zijlstra @ 2009-05-24 7:01 UTC (permalink / raw)
To: linux-tip-commits
Cc: linux-kernel, acme, paulus, hpa, mingo, jkacur, a.p.zijlstra,
tglx, cjashfor, mingo
Commit-ID: aa9c67f53d1969cf1db4c9c2db3a78c4ceb96469
Gitweb: http://git.kernel.org/tip/aa9c67f53d1969cf1db4c9c2db3a78c4ceb96469
Author: Peter Zijlstra <a.p.zijlstra@chello.nl>
AuthorDate: Sat, 23 May 2009 18:28:59 +0200
Committer: Ingo Molnar <mingo@elte.hu>
CommitDate: Sat, 23 May 2009 19:37:47 +0200
perf_counter: Simplify context cleanup
Use perf_counter_remove_from_context() to remove counters from
the context.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: John Kacur <jkacur@redhat.com>
LKML-Reference: <20090523163012.796275849@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
---
kernel/perf_counter.c | 5 +----
1 files changed, 1 insertions(+), 4 deletions(-)
diff --git a/kernel/perf_counter.c b/kernel/perf_counter.c
index d162d2f..0e97f89 100644
--- a/kernel/perf_counter.c
+++ b/kernel/perf_counter.c
@@ -3416,10 +3416,7 @@ __perf_counter_exit_task(struct task_struct *child,
struct perf_counter *parent_counter;
update_counter_times(child_counter);
-
- spin_lock_irq(&child_ctx->lock);
- list_del_counter(child_counter, child_ctx);
- spin_unlock_irq(&child_ctx->lock);
+ perf_counter_remove_from_context(child_counter);
parent_counter = child_counter->parent;
/*
* [tip:perfcounters/core] perf_counter: Change pctrl() behaviour
2009-05-23 16:29 ` [PATCH 6/7] perf_counter: change pctrl() behaviour Peter Zijlstra
2009-05-23 19:04 ` Peter Zijlstra
@ 2009-05-24 7:02 ` tip-bot for Peter Zijlstra
1 sibling, 0 replies; 18+ messages in thread
From: tip-bot for Peter Zijlstra @ 2009-05-24 7:02 UTC (permalink / raw)
To: linux-tip-commits
Cc: linux-kernel, acme, paulus, hpa, mingo, jkacur, a.p.zijlstra,
tglx, cjashfor, mingo
Commit-ID: 082ff5a2767a0679ee543f14883adbafb631ffbe
Gitweb: http://git.kernel.org/tip/082ff5a2767a0679ee543f14883adbafb631ffbe
Author: Peter Zijlstra <a.p.zijlstra@chello.nl>
AuthorDate: Sat, 23 May 2009 18:29:00 +0200
Committer: Ingo Molnar <mingo@elte.hu>
CommitDate: Sun, 24 May 2009 08:24:08 +0200
perf_counter: Change pctrl() behaviour
Instead of enabling/disabling all counters acting on a particular
task, enable/disable all counters we created.
[ v2: fix crash on first counter enable ]
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: John Kacur <jkacur@redhat.com>
LKML-Reference: <20090523163012.916937244@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
---
include/linux/init_task.h | 10 +++++
include/linux/perf_counter.h | 3 +
include/linux/sched.h | 2 +
kernel/perf_counter.c | 87 +++++++++++------------------------------
4 files changed, 39 insertions(+), 63 deletions(-)
diff --git a/include/linux/init_task.h b/include/linux/init_task.h
index d87247d..353c0ac 100644
--- a/include/linux/init_task.h
+++ b/include/linux/init_task.h
@@ -108,6 +108,15 @@ extern struct group_info init_groups;
extern struct cred init_cred;
+#ifdef CONFIG_PERF_COUNTERS
+# define INIT_PERF_COUNTERS(tsk) \
+ .perf_counter_mutex = \
+ __MUTEX_INITIALIZER(tsk.perf_counter_mutex), \
+ .perf_counter_list = LIST_HEAD_INIT(tsk.perf_counter_list),
+#else
+# define INIT_PERF_COUNTERS(tsk)
+#endif
+
/*
* INIT_TASK is used to set up the first task table, touch at
* your own risk!. Base=0, limit=0x1fffff (=2MB)
@@ -171,6 +180,7 @@ extern struct cred init_cred;
}, \
.dirties = INIT_PROP_LOCAL_SINGLE(dirties), \
INIT_IDS \
+ INIT_PERF_COUNTERS(tsk) \
INIT_TRACE_IRQFLAGS \
INIT_LOCKDEP \
INIT_FTRACE_GRAPH \
diff --git a/include/linux/perf_counter.h b/include/linux/perf_counter.h
index 4ab8050..4159ee5 100644
--- a/include/linux/perf_counter.h
+++ b/include/linux/perf_counter.h
@@ -469,6 +469,9 @@ struct perf_counter {
int oncpu;
int cpu;
+ struct list_head owner_entry;
+ struct task_struct *owner;
+
/* mmap bits */
struct mutex mmap_mutex;
atomic_t mmap_count;
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 9714d45..bc9326d 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1389,6 +1389,8 @@ struct task_struct {
#endif
#ifdef CONFIG_PERF_COUNTERS
struct perf_counter_context *perf_counter_ctxp;
+ struct mutex perf_counter_mutex;
+ struct list_head perf_counter_list;
#endif
#ifdef CONFIG_NUMA
struct mempolicy *mempolicy;
diff --git a/kernel/perf_counter.c b/kernel/perf_counter.c
index 0e97f89..4c86a63 100644
--- a/kernel/perf_counter.c
+++ b/kernel/perf_counter.c
@@ -1076,79 +1076,26 @@ static void perf_counter_cpu_sched_in(struct perf_cpu_context *cpuctx, int cpu)
__perf_counter_sched_in(ctx, cpuctx, cpu);
}
-int perf_counter_task_disable(void)
+int perf_counter_task_enable(void)
{
- struct task_struct *curr = current;
- struct perf_counter_context *ctx = curr->perf_counter_ctxp;
struct perf_counter *counter;
- unsigned long flags;
-
- if (!ctx || !ctx->nr_counters)
- return 0;
-
- local_irq_save(flags);
- __perf_counter_task_sched_out(ctx);
-
- spin_lock(&ctx->lock);
-
- /*
- * Disable all the counters:
- */
- perf_disable();
-
- list_for_each_entry(counter, &ctx->counter_list, list_entry) {
- if (counter->state != PERF_COUNTER_STATE_ERROR) {
- update_group_times(counter);
- counter->state = PERF_COUNTER_STATE_OFF;
- }
- }
-
- perf_enable();
-
- spin_unlock_irqrestore(&ctx->lock, flags);
+ mutex_lock(&current->perf_counter_mutex);
+ list_for_each_entry(counter, &current->perf_counter_list, owner_entry)
+ perf_counter_enable(counter);
+ mutex_unlock(&current->perf_counter_mutex);
return 0;
}
-int perf_counter_task_enable(void)
+int perf_counter_task_disable(void)
{
- struct task_struct *curr = current;
- struct perf_counter_context *ctx = curr->perf_counter_ctxp;
struct perf_counter *counter;
- unsigned long flags;
- int cpu;
-
- if (!ctx || !ctx->nr_counters)
- return 0;
-
- local_irq_save(flags);
- cpu = smp_processor_id();
-
- __perf_counter_task_sched_out(ctx);
-
- spin_lock(&ctx->lock);
- /*
- * Disable all the counters:
- */
- perf_disable();
-
- list_for_each_entry(counter, &ctx->counter_list, list_entry) {
- if (counter->state > PERF_COUNTER_STATE_OFF)
- continue;
- counter->state = PERF_COUNTER_STATE_INACTIVE;
- counter->tstamp_enabled =
- ctx->time - counter->total_time_enabled;
- counter->hw_event.disabled = 0;
- }
- perf_enable();
-
- spin_unlock(&ctx->lock);
-
- perf_counter_task_sched_in(curr, cpu);
-
- local_irq_restore(flags);
+ mutex_lock(&current->perf_counter_mutex);
+ list_for_each_entry(counter, &current->perf_counter_list, owner_entry)
+ perf_counter_disable(counter);
+ mutex_unlock(&current->perf_counter_mutex);
return 0;
}
@@ -1416,6 +1363,11 @@ static int perf_release(struct inode *inode, struct file *file)
perf_counter_remove_from_context(counter);
mutex_unlock(&ctx->mutex);
+ mutex_lock(&counter->owner->perf_counter_mutex);
+ list_del_init(&counter->owner_entry);
+ mutex_unlock(&counter->owner->perf_counter_mutex);
+ put_task_struct(counter->owner);
+
free_counter(counter);
put_context(ctx);
@@ -3272,6 +3224,12 @@ SYSCALL_DEFINE5(perf_counter_open,
perf_install_in_context(ctx, counter, cpu);
mutex_unlock(&ctx->mutex);
+ counter->owner = current;
+ get_task_struct(current);
+ mutex_lock(&current->perf_counter_mutex);
+ list_add_tail(&counter->owner_entry, &current->perf_counter_list);
+ mutex_unlock(&current->perf_counter_mutex);
+
fput_light(counter_file, fput_needed2);
out_fput:
@@ -3488,6 +3446,9 @@ void perf_counter_init_task(struct task_struct *child)
child->perf_counter_ctxp = NULL;
+ mutex_init(&child->perf_counter_mutex);
+ INIT_LIST_HEAD(&child->perf_counter_list);
+
/*
* This is executed from the parent task context, so inherit
* counters that have been marked for cloning.
* [tip:perfcounters/core] perf_counter: Remove perf_counter_context::nr_enabled
2009-05-23 16:29 ` [PATCH 7/7] perf_counter: remove perf_counter_context::nr_enabled Peter Zijlstra
@ 2009-05-24 7:02 ` tip-bot for Peter Zijlstra
0 siblings, 0 replies; 18+ messages in thread
From: tip-bot for Peter Zijlstra @ 2009-05-24 7:02 UTC (permalink / raw)
To: linux-tip-commits
Cc: linux-kernel, acme, paulus, hpa, mingo, jkacur, a.p.zijlstra,
tglx, cjashfor, mingo
Commit-ID: 475c55797323b67435083f6e2eb8ee670f6410ec
Gitweb: http://git.kernel.org/tip/475c55797323b67435083f6e2eb8ee670f6410ec
Author: Peter Zijlstra <a.p.zijlstra@chello.nl>
AuthorDate: Sat, 23 May 2009 18:29:01 +0200
Committer: Ingo Molnar <mingo@elte.hu>
CommitDate: Sun, 24 May 2009 08:24:30 +0200
perf_counter: Remove perf_counter_context::nr_enabled
Now that pctrl() no longer disables other people's counters,
remove the PMU cache code that deals with that.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: John Kacur <jkacur@redhat.com>
LKML-Reference: <20090523163013.032998331@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
---
include/linux/perf_counter.h | 1 -
kernel/perf_counter.c | 11 +----------
2 files changed, 1 insertions(+), 11 deletions(-)
diff --git a/include/linux/perf_counter.h b/include/linux/perf_counter.h
index 4159ee5..2ddf5e3 100644
--- a/include/linux/perf_counter.h
+++ b/include/linux/perf_counter.h
@@ -516,7 +516,6 @@ struct perf_counter_context {
struct list_head event_list;
int nr_counters;
int nr_active;
- int nr_enabled;
int is_active;
atomic_t refcount;
struct task_struct *task;
diff --git a/kernel/perf_counter.c b/kernel/perf_counter.c
index 4c86a63..cb40625 100644
--- a/kernel/perf_counter.c
+++ b/kernel/perf_counter.c
@@ -134,8 +134,6 @@ list_add_counter(struct perf_counter *counter, struct perf_counter_context *ctx)
list_add_rcu(&counter->event_entry, &ctx->event_list);
ctx->nr_counters++;
- if (counter->state >= PERF_COUNTER_STATE_INACTIVE)
- ctx->nr_enabled++;
}
/*
@@ -150,8 +148,6 @@ list_del_counter(struct perf_counter *counter, struct perf_counter_context *ctx)
if (list_empty(&counter->list_entry))
return;
ctx->nr_counters--;
- if (counter->state >= PERF_COUNTER_STATE_INACTIVE)
- ctx->nr_enabled--;
list_del_init(&counter->list_entry);
list_del_rcu(&counter->event_entry);
@@ -406,7 +402,6 @@ static void __perf_counter_disable(void *info)
else
counter_sched_out(counter, cpuctx, ctx);
counter->state = PERF_COUNTER_STATE_OFF;
- ctx->nr_enabled--;
}
spin_unlock_irqrestore(&ctx->lock, flags);
@@ -448,7 +443,6 @@ static void perf_counter_disable(struct perf_counter *counter)
if (counter->state == PERF_COUNTER_STATE_INACTIVE) {
update_counter_times(counter);
counter->state = PERF_COUNTER_STATE_OFF;
- ctx->nr_enabled--;
}
spin_unlock_irq(&ctx->lock);
@@ -759,7 +753,6 @@ static void __perf_counter_enable(void *info)
goto unlock;
counter->state = PERF_COUNTER_STATE_INACTIVE;
counter->tstamp_enabled = ctx->time - counter->total_time_enabled;
- ctx->nr_enabled++;
/*
* If the counter is in a group and isn't the group leader,
@@ -850,7 +843,6 @@ static void perf_counter_enable(struct perf_counter *counter)
counter->state = PERF_COUNTER_STATE_INACTIVE;
counter->tstamp_enabled =
ctx->time - counter->total_time_enabled;
- ctx->nr_enabled++;
}
out:
spin_unlock_irq(&ctx->lock);
@@ -910,8 +902,7 @@ static int context_equiv(struct perf_counter_context *ctx1,
struct perf_counter_context *ctx2)
{
return ctx1->parent_ctx && ctx1->parent_ctx == ctx2->parent_ctx
- && ctx1->parent_gen == ctx2->parent_gen
- && ctx1->nr_enabled == ctx2->nr_enabled;
+ && ctx1->parent_gen == ctx2->parent_gen;
}
/*
* Re: [PATCH 1/7] perf_counter: fix dynamic irq_period logging
2009-05-23 16:28 ` [PATCH 1/7] perf_counter: fix dynamic irq_period logging Peter Zijlstra
2009-05-24 7:01 ` [tip:perfcounters/core] perf_counter: Fix " tip-bot for Peter Zijlstra
@ 2009-05-25 0:08 ` Paul Mackerras
2009-05-25 7:19 ` Peter Zijlstra
1 sibling, 1 reply; 18+ messages in thread
From: Paul Mackerras @ 2009-05-25 0:08 UTC (permalink / raw)
To: Peter Zijlstra
Cc: Ingo Molnar, Corey Ashford, linux-kernel,
Arnaldo Carvalho de Melo, John Kacur
Peter Zijlstra writes:
> We call perf_adjust_freq() from perf_counter_task_tick() which is called
> under the rq->lock causing lock recursion.
What was the lock recursion? I see perf_adjust_freq taking ctx->lock,
but we were careful not to take any rq->lock within a ctx->lock, at
least in the past.
Paul.
* Re: [PATCH 1/7] perf_counter: fix dynamic irq_period logging
2009-05-25 0:08 ` [PATCH 1/7] perf_counter: fix " Paul Mackerras
@ 2009-05-25 7:19 ` Peter Zijlstra
0 siblings, 0 replies; 18+ messages in thread
From: Peter Zijlstra @ 2009-05-25 7:19 UTC (permalink / raw)
To: Paul Mackerras
Cc: Ingo Molnar, Corey Ashford, linux-kernel,
Arnaldo Carvalho de Melo, John Kacur
On Mon, 2009-05-25 at 10:08 +1000, Paul Mackerras wrote:
> Peter Zijlstra writes:
>
> > We call perf_adjust_freq() from perf_counter_task_tick() which is called
> > under the rq->lock causing lock recursion.
>
> What was the lock recursion? I see perf_adjust_freq taking ctx->lock,
> but we were careful not to take any rq->lock within a ctx->lock, at
> least in the past.
perf_output_end() can end up calling a wake_up().
perf_output_end()
  perf_output_unlock()
    perf_output_wakeup()
      perf_counter_wakeup()
        wake_up_all();
end of thread, other threads: [~2009-05-25 7:20 UTC | newest]
Thread overview: 18+ messages
2009-05-23 16:28 [PATCH 0/7] perf counter patches Peter Zijlstra
2009-05-23 16:28 ` [PATCH 1/7] perf_counter: fix dynamic irq_period logging Peter Zijlstra
2009-05-24 7:01 ` [tip:perfcounters/core] perf_counter: Fix " tip-bot for Peter Zijlstra
2009-05-25 0:08 ` [PATCH 1/7] perf_counter: fix " Paul Mackerras
2009-05-25 7:19 ` Peter Zijlstra
2009-05-23 16:28 ` [PATCH 2/7] perf_counter: sanitize counter->mutex Peter Zijlstra
2009-05-24 7:01 ` [tip:perfcounters/core] perf_counter: Sanitize counter->mutex tip-bot for Peter Zijlstra
2009-05-23 16:28 ` [PATCH 3/7] perf_counter: sanitize context locking Peter Zijlstra
2009-05-24 7:01 ` [tip:perfcounters/core] perf_counter: Sanitize " tip-bot for Peter Zijlstra
2009-05-23 16:28 ` [PATCH 4/7] perf_counter: fix userspace build Peter Zijlstra
2009-05-24 7:01 ` [tip:perfcounters/core] perf_counter: Fix " tip-bot for Peter Zijlstra
2009-05-23 16:28 ` [PATCH 5/7] perf_counter: simplify context cleanup Peter Zijlstra
2009-05-24 7:01 ` [tip:perfcounters/core] perf_counter: Simplify " tip-bot for Peter Zijlstra
2009-05-23 16:29 ` [PATCH 6/7] perf_counter: change pctrl() behaviour Peter Zijlstra
2009-05-23 19:04 ` Peter Zijlstra
2009-05-24 7:02 ` [tip:perfcounters/core] perf_counter: Change " tip-bot for Peter Zijlstra
2009-05-23 16:29 ` [PATCH 7/7] perf_counter: remove perf_counter_context::nr_enabled Peter Zijlstra
2009-05-24 7:02 ` [tip:perfcounters/core] perf_counter: Remove perf_counter_context::nr_enabled tip-bot for Peter Zijlstra