* [RFC][PATCH 0/9] perf: Rework event scheduling
@ 2011-04-09 19:17 Peter Zijlstra
  2011-04-09 19:17 ` [RFC][PATCH 1/9] perf: Optimize ctx_sched_out Peter Zijlstra
                   ` (8 more replies)
  0 siblings, 9 replies; 25+ messages in thread
From: Peter Zijlstra @ 2011-04-09 19:17 UTC (permalink / raw)
  To: Oleg Nesterov, Jiri Olsa, Ingo Molnar; +Cc: linux-kernel, Stephane Eranian

This series is a broken-out and tested version of a patch I proposed in
an earlier thread with Jiri and Oleg:

  https://lkml.org/lkml/2011/3/31/232

I've not yet gone over the final patch enough to convince myself we can indeed
re-enable the jump_label optimization of the sched_out hook, but at least the
patches are reasonably readable and result in a working kernel (as opposed to
the earlier posting).

Still to do is a look at the cgroup event scheduling code, which
currently also violates our event scheduling rules.



* [RFC][PATCH 1/9] perf: Optimize ctx_sched_out
  2011-04-09 19:17 [RFC][PATCH 0/9] perf: Rework event scheduling Peter Zijlstra
@ 2011-04-09 19:17 ` Peter Zijlstra
  2011-05-28 16:38   ` [tip:perf/core] perf: Optimize ctx_sched_out() tip-bot for Peter Zijlstra
  2011-04-09 19:17 ` [RFC][PATCH 2/9] perf: Clean up ctx reference counting Peter Zijlstra
                   ` (7 subsequent siblings)
  8 siblings, 1 reply; 25+ messages in thread
From: Peter Zijlstra @ 2011-04-09 19:17 UTC (permalink / raw)
  To: Oleg Nesterov, Jiri Olsa, Ingo Molnar
  Cc: linux-kernel, Stephane Eranian, Peter Zijlstra

[-- Attachment #1: perf-cleanup-ctx_sched_out.patch --]
[-- Type: text/plain, Size: 1235 bytes --]

Oleg noted that ctx_sched_out() disables the PMU even though it might
not actually do anything; avoid needless PMU-disabling.
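
For readability, a condensed sketch of what ctx_sched_out() looks like after
this change (paraphrased from the diff below, not a standalone build); the PMU
is now only disabled once we know there are active groups to unschedule:

	static void ctx_sched_out(struct perf_event_context *ctx, ...)
	{
		raw_spin_lock(&ctx->lock);
		ctx->is_active = 0;
		if (likely(!ctx->nr_events))
			goto out;		/* nothing to do, PMU untouched */

		update_context_time(ctx);
		update_cgrp_time_from_cpuctx(cpuctx);
		if (!ctx->nr_active)
			goto out;		/* nothing scheduled, PMU untouched */

		perf_pmu_disable(ctx->pmu);	/* only now, when we will unschedule groups */
		/* ... group_sched_out() loops over pinned and flexible groups ... */
		perf_pmu_enable(ctx->pmu);
	out:
		raw_spin_unlock(&ctx->lock);
	}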

Reported-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
---
 kernel/perf_event.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

Index: linux-2.6/kernel/perf_event.c
===================================================================
--- linux-2.6.orig/kernel/perf_event.c
+++ linux-2.6/kernel/perf_event.c
@@ -1767,7 +1767,6 @@ static void ctx_sched_out(struct perf_ev
 	struct perf_event *event;
 
 	raw_spin_lock(&ctx->lock);
-	perf_pmu_disable(ctx->pmu);
 	ctx->is_active = 0;
 	if (likely(!ctx->nr_events))
 		goto out;
@@ -1777,6 +1776,7 @@ static void ctx_sched_out(struct perf_ev
 	if (!ctx->nr_active)
 		goto out;
 
+	perf_pmu_disable(ctx->pmu);
 	if (event_type & EVENT_PINNED) {
 		list_for_each_entry(event, &ctx->pinned_groups, group_entry)
 			group_sched_out(event, cpuctx, ctx);
@@ -1786,8 +1786,8 @@ static void ctx_sched_out(struct perf_ev
 		list_for_each_entry(event, &ctx->flexible_groups, group_entry)
 			group_sched_out(event, cpuctx, ctx);
 	}
-out:
 	perf_pmu_enable(ctx->pmu);
+out:
 	raw_spin_unlock(&ctx->lock);
 }
 




* [RFC][PATCH 2/9] perf: Clean up ctx reference counting
  2011-04-09 19:17 [RFC][PATCH 0/9] perf: Rework event scheduling Peter Zijlstra
  2011-04-09 19:17 ` [RFC][PATCH 1/9] perf: Optimize ctx_sched_out Peter Zijlstra
@ 2011-04-09 19:17 ` Peter Zijlstra
  2011-04-11  6:05   ` Lin Ming
  2011-05-28 16:39   ` [tip:perf/core] perf: Clean up 'ctx' " tip-bot for Peter Zijlstra
  2011-04-09 19:17 ` [RFC][PATCH 3/9] perf: Change event scheduling locking Peter Zijlstra
                   ` (6 subsequent siblings)
  8 siblings, 2 replies; 25+ messages in thread
From: Peter Zijlstra @ 2011-04-09 19:17 UTC (permalink / raw)
  To: Oleg Nesterov, Jiri Olsa, Ingo Molnar
  Cc: linux-kernel, Stephane Eranian, Peter Zijlstra

[-- Attachment #1: perf-cleanup-ctx-ref.patch --]
[-- Type: text/plain, Size: 1228 bytes --]

Small cleanup to how we refcount in find_get_context(), this also
allows us to use put_ctx() to free things instead of using kfree().
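
In other words (a simplified sketch of the resulting find_get_context() slow
path, condensed from the diff below, with the error-code plumbing elided): the
context starts life with a reference count of 1 from alloc_perf_context(),
get_ctx() is only taken when the context is actually published in
task->perf_event_ctxp[], and the error path simply drops the initial reference
with put_ctx():

	ctx = alloc_perf_context(pmu, task);	/* refcount starts at 1 */
	if (!ctx)
		goto errout;

	mutex_lock(&task->perf_event_mutex);
	if (!err) {				/* task not exiting, no racing ctx */
		get_ctx(ctx);			/* reference held via task->perf_event_ctxp[] */
		++ctx->pin_count;
		rcu_assign_pointer(task->perf_event_ctxp[ctxn], ctx);
	}
	mutex_unlock(&task->perf_event_mutex);

	if (unlikely(err)) {
		put_ctx(ctx);			/* drops the initial reference */
		/* ... retry / errout handling ... */
	}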

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
---
 kernel/perf_event.c |   10 +++-------
 1 file changed, 3 insertions(+), 7 deletions(-)

Index: linux-2.6/kernel/perf_event.c
===================================================================
--- linux-2.6.orig/kernel/perf_event.c
+++ linux-2.6/kernel/perf_event.c
@@ -2831,16 +2831,12 @@ find_get_context(struct pmu *pmu, struct
 		unclone_ctx(ctx);
 		++ctx->pin_count;
 		raw_spin_unlock_irqrestore(&ctx->lock, flags);
-	}
-
-	if (!ctx) {
+	} else {
 		ctx = alloc_perf_context(pmu, task);
 		err = -ENOMEM;
 		if (!ctx)
 			goto errout;
 
-		get_ctx(ctx);
-
 		err = 0;
 		mutex_lock(&task->perf_event_mutex);
 		/*
@@ -2852,14 +2848,14 @@ find_get_context(struct pmu *pmu, struct
 		else if (task->perf_event_ctxp[ctxn])
 			err = -EAGAIN;
 		else {
+			get_ctx(ctx);
 			++ctx->pin_count;
 			rcu_assign_pointer(task->perf_event_ctxp[ctxn], ctx);
 		}
 		mutex_unlock(&task->perf_event_mutex);
 
 		if (unlikely(err)) {
-			put_task_struct(task);
-			kfree(ctx);
+			put_ctx(ctx);
 
 			if (err == -EAGAIN)
 				goto retry;




* [RFC][PATCH 3/9] perf: Change event scheduling locking
  2011-04-09 19:17 [RFC][PATCH 0/9] perf: Rework event scheduling Peter Zijlstra
  2011-04-09 19:17 ` [RFC][PATCH 1/9] perf: Optimize ctx_sched_out Peter Zijlstra
  2011-04-09 19:17 ` [RFC][PATCH 2/9] perf: Clean up ctx reference counting Peter Zijlstra
@ 2011-04-09 19:17 ` Peter Zijlstra
  2011-05-28 16:39   ` [tip:perf/core] perf: Optimize " tip-bot for Peter Zijlstra
  2011-04-09 19:17 ` [RFC][PATCH 4/9] perf: Remove task_ctx_sched_in Peter Zijlstra
                   ` (5 subsequent siblings)
  8 siblings, 1 reply; 25+ messages in thread
From: Peter Zijlstra @ 2011-04-09 19:17 UTC (permalink / raw)
  To: Oleg Nesterov, Jiri Olsa, Ingo Molnar
  Cc: linux-kernel, Stephane Eranian, Peter Zijlstra

[-- Attachment #1: perf-change-locking.patch --]
[-- Type: text/plain, Size: 7071 bytes --]

Currently we only hold one ctx->lock at a time, which results in us
flipping back and forth between cpuctx->ctx.lock and task_ctx->lock.

Avoid this and gain large atomic regions by holding both locks. We
nest the task lock inside the cpu lock, since with task scheduling we
might have to change task ctx while holding the cpu ctx lock.
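
The resulting usage pattern (condensed from the diff below), shared by
perf_cgroup_switch(), perf_rotate_context() and the sched in/out paths, keeps
the whole reschedule sequence in one atomic region under both locks:

	perf_ctx_lock(cpuctx, cpuctx->task_ctx);	/* cpu ctx lock first, task ctx lock nested inside */
	perf_pmu_disable(cpuctx->ctx.pmu);

	/* ... sched out / rotate / sched in, all without dropping the locks ... */

	perf_pmu_enable(cpuctx->ctx.pmu);
	perf_ctx_unlock(cpuctx, cpuctx->task_ctx);	/* task ctx lock released first */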

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
---
 kernel/perf_event.c |   63 +++++++++++++++++++++++++++++-----------------------
 1 file changed, 36 insertions(+), 27 deletions(-)

Index: linux-2.6/kernel/perf_event.c
===================================================================
--- linux-2.6.orig/kernel/perf_event.c
+++ linux-2.6/kernel/perf_event.c
@@ -200,6 +200,22 @@ __get_cpu_context(struct perf_event_cont
 	return this_cpu_ptr(ctx->pmu->pmu_cpu_context);
 }
 
+static void perf_ctx_lock(struct perf_cpu_context *cpuctx,
+			  struct perf_event_context *ctx)
+{
+	raw_spin_lock(&cpuctx->ctx.lock);
+	if (ctx)
+		raw_spin_lock(&ctx->lock);
+}
+
+static void perf_ctx_unlock(struct perf_cpu_context *cpuctx,
+			    struct perf_event_context *ctx)
+{
+	if (ctx)
+		raw_spin_unlock(&ctx->lock);
+	raw_spin_unlock(&cpuctx->ctx.lock);
+}
+
 #ifdef CONFIG_CGROUP_PERF
 
 /*
@@ -340,11 +356,8 @@ void perf_cgroup_switch(struct task_stru
 	rcu_read_lock();
 
 	list_for_each_entry_rcu(pmu, &pmus, entry) {
-
 		cpuctx = this_cpu_ptr(pmu->pmu_cpu_context);
 
-		perf_pmu_disable(cpuctx->ctx.pmu);
-
 		/*
 		 * perf_cgroup_events says at least one
 		 * context on this CPU has cgroup events.
@@ -353,6 +366,8 @@ void perf_cgroup_switch(struct task_stru
 		 * events for a context.
 		 */
 		if (cpuctx->ctx.nr_cgroups > 0) {
+			perf_ctx_lock(cpuctx, cpuctx->task_ctx);
+			perf_pmu_disable(cpuctx->ctx.pmu);
 
 			if (mode & PERF_CGROUP_SWOUT) {
 				cpu_ctx_sched_out(cpuctx, EVENT_ALL);
@@ -371,9 +386,9 @@ void perf_cgroup_switch(struct task_stru
 				cpuctx->cgrp = perf_cgroup_from_task(task);
 				cpu_ctx_sched_in(cpuctx, EVENT_ALL, task);
 			}
+			perf_pmu_enable(cpuctx->ctx.pmu);
+			perf_ctx_unlock(cpuctx, cpuctx->task_ctx);
 		}
-
-		perf_pmu_enable(cpuctx->ctx.pmu);
 	}
 
 	rcu_read_unlock();
@@ -1766,15 +1781,14 @@ static void ctx_sched_out(struct perf_ev
 {
 	struct perf_event *event;
 
-	raw_spin_lock(&ctx->lock);
 	ctx->is_active = 0;
 	if (likely(!ctx->nr_events))
-		goto out;
+		return;
+
 	update_context_time(ctx);
 	update_cgrp_time_from_cpuctx(cpuctx);
-
 	if (!ctx->nr_active)
-		goto out;
+		return;
 
 	perf_pmu_disable(ctx->pmu);
 	if (event_type & EVENT_PINNED) {
@@ -1787,8 +1801,6 @@ static void ctx_sched_out(struct perf_ev
 			group_sched_out(event, cpuctx, ctx);
 	}
 	perf_pmu_enable(ctx->pmu);
-out:
-	raw_spin_unlock(&ctx->lock);
 }
 
 /*
@@ -1936,8 +1948,10 @@ static void perf_event_context_sched_out
 	rcu_read_unlock();
 
 	if (do_switch) {
+		raw_spin_lock(&ctx->lock);
 		ctx_sched_out(ctx, cpuctx, EVENT_ALL);
 		cpuctx->task_ctx = NULL;
+		raw_spin_unlock(&ctx->lock);
 	}
 }
 
@@ -2063,10 +2077,9 @@ ctx_sched_in(struct perf_event_context *
 {
 	u64 now;
 
-	raw_spin_lock(&ctx->lock);
 	ctx->is_active = 1;
 	if (likely(!ctx->nr_events))
-		goto out;
+		return;
 
 	now = perf_clock();
 	ctx->timestamp = now;
@@ -2081,9 +2094,6 @@ ctx_sched_in(struct perf_event_context *
 	/* Then walk through the lower prio flexible groups */
 	if (event_type & EVENT_FLEXIBLE)
 		ctx_flexible_sched_in(ctx, cpuctx);
-
-out:
-	raw_spin_unlock(&ctx->lock);
 }
 
 static void cpu_ctx_sched_in(struct perf_cpu_context *cpuctx,
@@ -2117,6 +2127,7 @@ static void perf_event_context_sched_in(
 	if (cpuctx->task_ctx == ctx)
 		return;
 
+	perf_ctx_lock(cpuctx, ctx);
 	perf_pmu_disable(ctx->pmu);
 	/*
 	 * We want to keep the following priority order:
@@ -2131,12 +2142,14 @@ static void perf_event_context_sched_in(
 
 	cpuctx->task_ctx = ctx;
 
+	perf_pmu_enable(ctx->pmu);
+	perf_ctx_unlock(cpuctx, ctx);
+
 	/*
 	 * Since these rotations are per-cpu, we need to ensure the
 	 * cpu-context we got scheduled on is actually rotating.
 	 */
 	perf_pmu_rotate_start(ctx->pmu);
-	perf_pmu_enable(ctx->pmu);
 }
 
 /*
@@ -2276,7 +2289,6 @@ static void perf_ctx_adjust_freq(struct 
 	u64 interrupts, now;
 	s64 delta;
 
-	raw_spin_lock(&ctx->lock);
 	list_for_each_entry_rcu(event, &ctx->event_list, event_entry) {
 		if (event->state != PERF_EVENT_STATE_ACTIVE)
 			continue;
@@ -2308,7 +2320,6 @@ static void perf_ctx_adjust_freq(struct 
 		if (delta > 0)
 			perf_adjust_period(event, period, delta);
 	}
-	raw_spin_unlock(&ctx->lock);
 }
 
 /*
@@ -2316,16 +2327,12 @@ static void perf_ctx_adjust_freq(struct 
  */
 static void rotate_ctx(struct perf_event_context *ctx)
 {
-	raw_spin_lock(&ctx->lock);
-
 	/*
 	 * Rotate the first entry last of non-pinned groups. Rotation might be
 	 * disabled by the inheritance code.
 	 */
 	if (!ctx->rotate_disable)
 		list_rotate_left(&ctx->flexible_groups);
-
-	raw_spin_unlock(&ctx->lock);
 }
 
 /*
@@ -2352,6 +2359,7 @@ static void perf_rotate_context(struct p
 			rotate = 1;
 	}
 
+	perf_ctx_lock(cpuctx, cpuctx->task_ctx);
 	perf_pmu_disable(cpuctx->ctx.pmu);
 	perf_ctx_adjust_freq(&cpuctx->ctx, interval);
 	if (ctx)
@@ -2377,6 +2385,7 @@ static void perf_rotate_context(struct p
 		list_del_init(&cpuctx->rotation_list);
 
 	perf_pmu_enable(cpuctx->ctx.pmu);
+	perf_ctx_unlock(cpuctx, cpuctx->task_ctx);
 }
 
 void perf_event_task_tick(void)
@@ -2423,9 +2432,8 @@ static void perf_event_enable_on_exec(st
 	if (!ctx || !ctx->nr_events)
 		goto out;
 
-	task_ctx_sched_out(ctx, EVENT_ALL);
-
 	raw_spin_lock(&ctx->lock);
+	task_ctx_sched_out(ctx, EVENT_ALL);
 
 	list_for_each_entry(event, &ctx->pinned_groups, group_entry) {
 		ret = event_enable_on_exec(event, ctx);
@@ -2444,7 +2452,6 @@ static void perf_event_enable_on_exec(st
 	 */
 	if (enabled)
 		unclone_ctx(ctx);
-
 	raw_spin_unlock(&ctx->lock);
 
 	perf_event_context_sched_in(ctx, ctx->task);
@@ -5978,6 +5985,7 @@ static int pmu_dev_alloc(struct pmu *pmu
 }
 
 static struct lock_class_key cpuctx_mutex;
+static struct lock_class_key cpuctx_lock;
 
 int perf_pmu_register(struct pmu *pmu, char *name, int type)
 {
@@ -6028,6 +6036,7 @@ int perf_pmu_register(struct pmu *pmu, c
 		cpuctx = per_cpu_ptr(pmu->pmu_cpu_context, cpu);
 		__perf_event_init_context(&cpuctx->ctx);
 		lockdep_set_class(&cpuctx->ctx.mutex, &cpuctx_mutex);
+		lockdep_set_class(&cpuctx->ctx.lock, &cpuctx_lock);
 		cpuctx->ctx.type = cpu_context;
 		cpuctx->ctx.pmu = pmu;
 		cpuctx->jiffies_interval = 1;
@@ -6772,7 +6781,6 @@ static void perf_event_exit_task_context
 	 * our context.
 	 */
 	child_ctx = rcu_dereference_raw(child->perf_event_ctxp[ctxn]);
-	task_ctx_sched_out(child_ctx, EVENT_ALL);
 
 	/*
 	 * Take the context lock here so that if find_get_context is
@@ -6780,6 +6788,7 @@ static void perf_event_exit_task_context
 	 * incremented the context's refcount before we do put_ctx below.
 	 */
 	raw_spin_lock(&child_ctx->lock);
+	task_ctx_sched_out(child_ctx, EVENT_ALL);
 	child->perf_event_ctxp[ctxn] = NULL;
 	/*
 	 * If this context is a clone; unclone it so it can't get




* [RFC][PATCH 4/9] perf: Remove task_ctx_sched_in
  2011-04-09 19:17 [RFC][PATCH 0/9] perf: Rework event scheduling Peter Zijlstra
                   ` (2 preceding siblings ...)
  2011-04-09 19:17 ` [RFC][PATCH 3/9] perf: Change event scheduling locking Peter Zijlstra
@ 2011-04-09 19:17 ` Peter Zijlstra
  2011-05-28 16:39   ` [tip:perf/core] perf: Remove task_ctx_sched_in() tip-bot for Peter Zijlstra
  2011-04-09 19:17 ` [RFC][PATCH 5/9] perf: Simplify and fix __perf_install_in_context Peter Zijlstra
                   ` (4 subsequent siblings)
  8 siblings, 1 reply; 25+ messages in thread
From: Peter Zijlstra @ 2011-04-09 19:17 UTC (permalink / raw)
  To: Oleg Nesterov, Jiri Olsa, Ingo Molnar
  Cc: linux-kernel, Stephane Eranian, Peter Zijlstra

[-- Attachment #1: perf-task_ctx_sched.patch --]
[-- Type: text/plain, Size: 2825 bytes --]

Make task_ctx_sched_*() imply EVENT_ALL, since anything less will not
actually have scheduled the task in/out at all.

Since there's no site that schedules all of a task in (due to the
interleave with flexible cpuctx) we can remove this function.
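
The rotation path, which deliberately touches only the flexible groups, now
uses the ctx_sched_out()/ctx_sched_in() primitives directly instead (condensed
from the diff below):

	cpu_ctx_sched_out(cpuctx, EVENT_FLEXIBLE);
	if (ctx)
		ctx_sched_out(ctx, cpuctx, EVENT_FLEXIBLE);

	rotate_ctx(&cpuctx->ctx);
	if (ctx)
		rotate_ctx(ctx);

	cpu_ctx_sched_in(cpuctx, EVENT_FLEXIBLE, current);
	if (ctx)
		ctx_sched_in(ctx, cpuctx, EVENT_FLEXIBLE, current);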

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
---
 kernel/perf_event.c |   26 ++++++--------------------
 1 file changed, 6 insertions(+), 20 deletions(-)

Index: linux-2.6/kernel/perf_event.c
===================================================================
--- linux-2.6.orig/kernel/perf_event.c
+++ linux-2.6/kernel/perf_event.c
@@ -1986,8 +1986,7 @@ void __perf_event_task_sched_out(struct 
 		perf_cgroup_sched_out(task);
 }
 
-static void task_ctx_sched_out(struct perf_event_context *ctx,
-			       enum event_type_t event_type)
+static void task_ctx_sched_out(struct perf_event_context *ctx)
 {
 	struct perf_cpu_context *cpuctx = __get_cpu_context(ctx);
 
@@ -1997,7 +1996,7 @@ static void task_ctx_sched_out(struct pe
 	if (WARN_ON_ONCE(ctx != cpuctx->task_ctx))
 		return;
 
-	ctx_sched_out(ctx, cpuctx, event_type);
+	ctx_sched_out(ctx, cpuctx, EVENT_ALL);
 	cpuctx->task_ctx = NULL;
 }
 
@@ -2105,19 +2104,6 @@ static void cpu_ctx_sched_in(struct perf
 	ctx_sched_in(ctx, cpuctx, event_type, task);
 }
 
-static void task_ctx_sched_in(struct perf_event_context *ctx,
-			      enum event_type_t event_type)
-{
-	struct perf_cpu_context *cpuctx;
-
-	cpuctx = __get_cpu_context(ctx);
-	if (cpuctx->task_ctx == ctx)
-		return;
-
-	ctx_sched_in(ctx, cpuctx, event_type, NULL);
-	cpuctx->task_ctx = ctx;
-}
-
 static void perf_event_context_sched_in(struct perf_event_context *ctx,
 					struct task_struct *task)
 {
@@ -2370,7 +2356,7 @@ static void perf_rotate_context(struct p
 
 	cpu_ctx_sched_out(cpuctx, EVENT_FLEXIBLE);
 	if (ctx)
-		task_ctx_sched_out(ctx, EVENT_FLEXIBLE);
+		ctx_sched_out(ctx, cpuctx, EVENT_FLEXIBLE);
 
 	rotate_ctx(&cpuctx->ctx);
 	if (ctx)
@@ -2378,7 +2364,7 @@ static void perf_rotate_context(struct p
 
 	cpu_ctx_sched_in(cpuctx, EVENT_FLEXIBLE, current);
 	if (ctx)
-		task_ctx_sched_in(ctx, EVENT_FLEXIBLE);
+		ctx_sched_in(ctx, cpuctx, EVENT_FLEXIBLE, current);
 
 done:
 	if (remove)
@@ -2433,7 +2419,7 @@ static void perf_event_enable_on_exec(st
 		goto out;
 
 	raw_spin_lock(&ctx->lock);
-	task_ctx_sched_out(ctx, EVENT_ALL);
+	task_ctx_sched_out(ctx);
 
 	list_for_each_entry(event, &ctx->pinned_groups, group_entry) {
 		ret = event_enable_on_exec(event, ctx);
@@ -6788,7 +6774,7 @@ static void perf_event_exit_task_context
 	 * incremented the context's refcount before we do put_ctx below.
 	 */
 	raw_spin_lock(&child_ctx->lock);
-	task_ctx_sched_out(child_ctx, EVENT_ALL);
+	task_ctx_sched_out(child_ctx);
 	child->perf_event_ctxp[ctxn] = NULL;
 	/*
 	 * If this context is a clone; unclone it so it can't get




* [RFC][PATCH 5/9] perf: Simplify and fix __perf_install_in_context
  2011-04-09 19:17 [RFC][PATCH 0/9] perf: Rework event scheduling Peter Zijlstra
                   ` (3 preceding siblings ...)
  2011-04-09 19:17 ` [RFC][PATCH 4/9] perf: Remove task_ctx_sched_in Peter Zijlstra
@ 2011-04-09 19:17 ` Peter Zijlstra
  2011-04-10  8:13   ` Peter Zijlstra
                     ` (2 more replies)
  2011-04-09 19:17 ` [RFC][PATCH 6/9] perf: Change ctx::is_active semantics Peter Zijlstra
                   ` (3 subsequent siblings)
  8 siblings, 3 replies; 25+ messages in thread
From: Peter Zijlstra @ 2011-04-09 19:17 UTC (permalink / raw)
  To: Oleg Nesterov, Jiri Olsa, Ingo Molnar
  Cc: linux-kernel, Stephane Eranian, Peter Zijlstra

[-- Attachment #1: perf_install_in_context.patch --]
[-- Type: text/plain, Size: 4328 bytes --]

Currently __perf_install_in_context() will try and schedule in the
event irrespective of our event scheduling rules; that is, we try to
schedule CPU-pinned, TASK-pinned, CPU-flexible, TASK-flexible, but
when creating a new event we simply try and schedule it on top of
whatever is already on the PMU. This can lead to errors for pinned
events.

Therefore, simplify things and simply schedule everything out, add the
event to the corresponding context and schedule everything back in.

This also nicely handles the case where with
__ARCH_WANT_INTERRUPTS_ON_CTXSW the IPI can come right in the middle
of schedule, before we managed to call perf_event_task_sched_in().
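
The scheduling rules referred to above are the priority order CPU-pinned,
TASK-pinned, CPU-flexible, TASK-flexible; after scheduling everything out and
adding the new event, __perf_install_in_context() re-establishes exactly that
order (condensed from the diff below):

	cpu_ctx_sched_in(cpuctx, EVENT_PINNED, task);			/* CPU-pinned */
	if (task_ctx)
		ctx_sched_in(task_ctx, cpuctx, EVENT_PINNED, task);	/* TASK-pinned */
	cpu_ctx_sched_in(cpuctx, EVENT_FLEXIBLE, task);			/* CPU-flexible */
	if (task_ctx)
		ctx_sched_in(task_ctx, cpuctx, EVENT_FLEXIBLE, task);	/* TASK-flexible */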

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
---
 kernel/perf_event.c |   80 ++++++++++++++++++++++------------------------------
 1 file changed, 35 insertions(+), 45 deletions(-)

Index: linux-2.6/kernel/perf_event.c
===================================================================
--- linux-2.6.orig/kernel/perf_event.c
+++ linux-2.6/kernel/perf_event.c
@@ -1476,8 +1476,12 @@ static void add_event_to_ctx(struct perf
 	event->tstamp_stopped = tstamp;
 }
 
-static void perf_event_context_sched_in(struct perf_event_context *ctx,
-					struct task_struct *tsk);
+static void task_ctx_sched_out(struct perf_event_context *ctx);
+static void
+ctx_sched_in(struct perf_event_context *ctx,
+	     struct perf_cpu_context *cpuctx,
+	     enum event_type_t event_type,
+	     struct task_struct *task);
 
 /*
  * Cross CPU call to install and enable a performance event
@@ -1488,20 +1492,31 @@ static int  __perf_install_in_context(vo
 {
 	struct perf_event *event = info;
 	struct perf_event_context *ctx = event->ctx;
-	struct perf_event *leader = event->group_leader;
 	struct perf_cpu_context *cpuctx = __get_cpu_context(ctx);
-	int err;
+	struct perf_event_context *task_ctx = cpuctx->task_ctx;
+	struct task_struct *task = current;
+
+	perf_ctx_lock(cpuctx, cpuctx->task_ctx);
+	perf_pmu_disable(cpuctx->ctx.pmu);
 
 	/*
-	 * In case we're installing a new context to an already running task,
-	 * could also happen before perf_event_task_sched_in() on architectures
-	 * which do context switches with IRQs enabled.
+	 * If there was an active task_ctx schedule it out.
 	 */
-	if (ctx->task && !cpuctx->task_ctx)
-		perf_event_context_sched_in(ctx, ctx->task);
+	if (task_ctx) {
+		task_ctx_sched_out(task_ctx);
+		/*
+		 * If the context we're installing events in is not the
+		 * active task_ctx, flip them.
+		 */
+		if (ctx->task && task_ctx != ctx) {
+			raw_spin_unlock(&cpuctx->ctx.lock);
+			raw_spin_lock(&ctx->lock);
+			cpuctx->task_ctx = task_ctx = ctx;
+		}
+		task = task_ctx->task;
+	}
+	cpu_ctx_sched_out(cpuctx, EVENT_ALL);
 
-	raw_spin_lock(&ctx->lock);
-	ctx->is_active = 1;
 	update_context_time(ctx);
 	/*
 	 * update cgrp time only if current cgrp
@@ -1512,43 +1527,18 @@ static int  __perf_install_in_context(vo
 
 	add_event_to_ctx(event, ctx);
 
-	if (!event_filter_match(event))
-		goto unlock;
-
-	/*
-	 * Don't put the event on if it is disabled or if
-	 * it is in a group and the group isn't on.
-	 */
-	if (event->state != PERF_EVENT_STATE_INACTIVE ||
-	    (leader != event && leader->state != PERF_EVENT_STATE_ACTIVE))
-		goto unlock;
-
 	/*
-	 * An exclusive event can't go on if there are already active
-	 * hardware events, and no hardware event can go on if there
-	 * is already an exclusive event on.
+	 * Schedule everything back in
 	 */
-	if (!group_can_go_on(event, cpuctx, 1))
-		err = -EEXIST;
-	else
-		err = event_sched_in(event, cpuctx, ctx);
-
-	if (err) {
-		/*
-		 * This event couldn't go on.  If it is in a group
-		 * then we have to pull the whole group off.
-		 * If the event group is pinned then put it in error state.
-		 */
-		if (leader != event)
-			group_sched_out(leader, cpuctx, ctx);
-		if (leader->attr.pinned) {
-			update_group_times(leader);
-			leader->state = PERF_EVENT_STATE_ERROR;
-		}
-	}
+	cpu_ctx_sched_in(cpuctx, EVENT_PINNED, task);
+	if (task_ctx)
+		ctx_sched_in(task_ctx, cpuctx, EVENT_PINNED, task);
+	cpu_ctx_sched_in(cpuctx, EVENT_FLEXIBLE, task);
+	if (task_ctx)
+		ctx_sched_in(task_ctx, cpuctx, EVENT_FLEXIBLE, task);
 
-unlock:
-	raw_spin_unlock(&ctx->lock);
+	perf_pmu_enable(cpuctx->ctx.pmu);
+	perf_ctx_unlock(cpuctx, task_ctx);
 
 	return 0;
 }




* [RFC][PATCH 6/9] perf: Change ctx::is_active semantics
  2011-04-09 19:17 [RFC][PATCH 0/9] perf: Rework event scheduling Peter Zijlstra
                   ` (4 preceding siblings ...)
  2011-04-09 19:17 ` [RFC][PATCH 5/9] perf: Simplify and fix __perf_install_in_context Peter Zijlstra
@ 2011-04-09 19:17 ` Peter Zijlstra
  2011-05-28 16:40   ` [tip:perf/core] perf: Change and simplify " tip-bot for Peter Zijlstra
  2011-04-09 19:17 ` [RFC][PATCH 7/9] perf: Collect the schedule in rules in one function Peter Zijlstra
                   ` (2 subsequent siblings)
  8 siblings, 1 reply; 25+ messages in thread
From: Peter Zijlstra @ 2011-04-09 19:17 UTC (permalink / raw)
  To: Oleg Nesterov, Jiri Olsa, Ingo Molnar
  Cc: linux-kernel, Stephane Eranian, Peter Zijlstra

[-- Attachment #1: perf-is_active.patch --]
[-- Type: text/plain, Size: 2210 bytes --]

Instead of tracking whether a context is active or not, track which
event classes of the context are active. By making ctx->is_active a
bitmask of EVENT_PINNED|EVENT_FLEXIBLE we can simplify some of the
scheduling routines, since they can avoid adding events that are
already active.
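
A condensed sketch of the resulting gating (from the diff below); ctx->is_active
becomes a mask of the event classes currently scheduled, and both directions
only act on classes whose state actually changes:

	/* ctx_sched_out(): only unschedule classes that are both requested and active */
	int is_active = ctx->is_active;
	ctx->is_active &= ~event_type;
	if ((is_active & EVENT_PINNED) && (event_type & EVENT_PINNED))
		/* group_sched_out() the pinned groups */;

	/* ctx_sched_in(): skip classes that are already active */
	int is_active = ctx->is_active;
	ctx->is_active |= event_type;
	if (!(is_active & EVENT_PINNED) && (event_type & EVENT_PINNED))
		ctx_pinned_sched_in(ctx, cpuctx);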

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
---
 kernel/perf_event.c |   99 +++++++++++++++++++++++++---------------------------
 1 file changed, 48 insertions(+), 51 deletions(-)

Index: linux-2.6/kernel/perf_event.c
===================================================================
--- linux-2.6.orig/kernel/perf_event.c
+++ linux-2.6/kernel/perf_event.c
@@ -1780,8 +1775,9 @@ static void ctx_sched_out(struct perf_ev
 			  enum event_type_t event_type)
 {
 	struct perf_event *event;
+	int is_active = ctx->is_active;
 
-	ctx->is_active = 0;
+	ctx->is_active &= ~event_type;
 	if (likely(!ctx->nr_events))
 		return;
 
@@ -1791,12 +1787,12 @@ static void ctx_sched_out(struct perf_ev
 		return;
 
 	perf_pmu_disable(ctx->pmu);
-	if (event_type & EVENT_PINNED) {
+	if ((is_active & EVENT_PINNED) && (event_type & EVENT_PINNED)) {
 		list_for_each_entry(event, &ctx->pinned_groups, group_entry)
 			group_sched_out(event, cpuctx, ctx);
 	}
 
-	if (event_type & EVENT_FLEXIBLE) {
+	if ((is_active & EVENT_FLEXIBLE) && (event_type & EVENT_FLEXIBLE)) {
 		list_for_each_entry(event, &ctx->flexible_groups, group_entry)
 			group_sched_out(event, cpuctx, ctx);
 	}
@@ -2075,8 +2071,9 @@ ctx_sched_in(struct perf_event_context *
 	     struct task_struct *task)
 {
 	u64 now;
+	int is_active = ctx->is_active;
 
-	ctx->is_active = 1;
+	ctx->is_active |= event_type;
 	if (likely(!ctx->nr_events))
 		return;
 
@@ -2087,11 +2084,11 @@ ctx_sched_in(struct perf_event_context *
 	 * First go through the list and put on any pinned groups
 	 * in order to give them the best chance of going on.
 	 */
-	if (event_type & EVENT_PINNED)
+	if (!(is_active & EVENT_PINNED) && (event_type & EVENT_PINNED))
 		ctx_pinned_sched_in(ctx, cpuctx);
 
 	/* Then walk through the lower prio flexible groups */
-	if (event_type & EVENT_FLEXIBLE)
+	if (!(is_active & EVENT_FLEXIBLE) && (event_type & EVENT_FLEXIBLE))
 		ctx_flexible_sched_in(ctx, cpuctx);
 }
 




* [RFC][PATCH 7/9] perf: Collect the schedule in rules in one function
  2011-04-09 19:17 [RFC][PATCH 0/9] perf: Rework event scheduling Peter Zijlstra
                   ` (5 preceding siblings ...)
  2011-04-09 19:17 ` [RFC][PATCH 6/9] perf: Change ctx::is_active semantics Peter Zijlstra
@ 2011-04-09 19:17 ` Peter Zijlstra
  2011-05-28 16:41   ` [tip:perf/core] perf: Collect the schedule-in " tip-bot for Peter Zijlstra
  2011-04-09 19:17 ` [RFC][PATCH 8/9] perf: Change close() semantics for group events Peter Zijlstra
  2011-04-09 19:17 ` [RFC][PATCH 9/9] perf: De-schedule a task context when removing the last event Peter Zijlstra
  8 siblings, 1 reply; 25+ messages in thread
From: Peter Zijlstra @ 2011-04-09 19:17 UTC (permalink / raw)
  To: Oleg Nesterov, Jiri Olsa, Ingo Molnar
  Cc: linux-kernel, Stephane Eranian, Peter Zijlstra

[-- Attachment #1: perf-sched_in.patch --]
[-- Type: text/plain, Size: 2011 bytes --]


Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
---
 kernel/perf_event.c |   27 +++++++++++++++------------
 1 file changed, 15 insertions(+), 12 deletions(-)

Index: linux-2.6/kernel/perf_event.c
===================================================================
--- linux-2.6.orig/kernel/perf_event.c
+++ linux-2.6/kernel/perf_event.c
@@ -1483,6 +1483,18 @@ ctx_sched_in(struct perf_event_context *
 	     enum event_type_t event_type,
 	     struct task_struct *task);
 
+static void perf_event_sched_in(struct perf_cpu_context *cpuctx,
+				struct perf_event_context *ctx,
+				struct task_struct *task)
+{
+	cpu_ctx_sched_in(cpuctx, EVENT_PINNED, task);
+	if (ctx)
+		ctx_sched_in(ctx, cpuctx, EVENT_PINNED, task);
+	cpu_ctx_sched_in(cpuctx, EVENT_FLEXIBLE, task);
+	if (ctx)
+		ctx_sched_in(ctx, cpuctx, EVENT_FLEXIBLE, task);
+}
+
 /*
  * Cross CPU call to install and enable a performance event
  *
@@ -1530,12 +1542,7 @@ static int  __perf_install_in_context(vo
 	/*
 	 * Schedule everything back in
 	 */
-	cpu_ctx_sched_in(cpuctx, EVENT_PINNED, task);
-	if (task_ctx)
-		ctx_sched_in(task_ctx, cpuctx, EVENT_PINNED, task);
-	cpu_ctx_sched_in(cpuctx, EVENT_FLEXIBLE, task);
-	if (task_ctx)
-		ctx_sched_in(task_ctx, cpuctx, EVENT_FLEXIBLE, task);
+	perf_event_sched_in(cpuctx, task_ctx, task);
 
 	perf_pmu_enable(cpuctx->ctx.pmu);
 	perf_ctx_unlock(cpuctx, task_ctx);
@@ -2114,9 +2121,7 @@ static void perf_event_context_sched_in(
 	 */
 	cpu_ctx_sched_out(cpuctx, EVENT_FLEXIBLE);
 
-	ctx_sched_in(ctx, cpuctx, EVENT_PINNED, task);
-	cpu_ctx_sched_in(cpuctx, EVENT_FLEXIBLE, task);
-	ctx_sched_in(ctx, cpuctx, EVENT_FLEXIBLE, task);
+	perf_event_sched_in(cpuctx, ctx, task);
 
 	cpuctx->task_ctx = ctx;
 
@@ -2354,9 +2359,7 @@ static void perf_rotate_context(struct p
 	if (ctx)
 		rotate_ctx(ctx);
 
-	cpu_ctx_sched_in(cpuctx, EVENT_FLEXIBLE, current);
-	if (ctx)
-		ctx_sched_in(ctx, cpuctx, EVENT_FLEXIBLE, current);
+	perf_event_sched_in(cpuctx, ctx, current);
 
 done:
 	if (remove)




* [RFC][PATCH 8/9] perf: Change close() semantics for group events
  2011-04-09 19:17 [RFC][PATCH 0/9] perf: Rework event scheduling Peter Zijlstra
                   ` (6 preceding siblings ...)
  2011-04-09 19:17 ` [RFC][PATCH 7/9] perf: Collect the schedule in rules in one function Peter Zijlstra
@ 2011-04-09 19:17 ` Peter Zijlstra
  2011-05-28 16:41   ` [tip:perf/core] " tip-bot for Peter Zijlstra
  2011-04-09 19:17 ` [RFC][PATCH 9/9] perf: De-schedule a task context when removing the last event Peter Zijlstra
  8 siblings, 1 reply; 25+ messages in thread
From: Peter Zijlstra @ 2011-04-09 19:17 UTC (permalink / raw)
  To: Oleg Nesterov, Jiri Olsa, Ingo Molnar
  Cc: linux-kernel, Stephane Eranian, Peter Zijlstra

[-- Attachment #1: perf-remove-on-close.patch --]
[-- Type: text/plain, Size: 1507 bytes --]

In order to always call list_del_event() on the correct cpu if the
event is part of an active context and avoid having to do two IPIs,
change the close() semantics slightly.

The current perf_event_disable() call would disable a whole group if
the event that's being closed is the group leader, whereas the new
code keeps the group siblings enabled.

People should not rely on this behaviour and I don't think they do,
but in case we find they do, the fix is easy and we have to take the
double IPI cost.
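
The resulting perf_event_release_kernel() path (condensed from the diff below)
detaches the event from its group under ctx->lock and then makes a single
cross-CPU call to pull it out of the context on the right cpu:

	mutex_lock_nested(&ctx->mutex, SINGLE_DEPTH_NESTING);
	raw_spin_lock_irq(&ctx->lock);
	perf_group_detach(event);		/* group siblings stay enabled */
	raw_spin_unlock_irq(&ctx->lock);
	perf_remove_from_context(event);	/* one IPI; list_del_event() runs on the right cpu */
	mutex_unlock(&ctx->mutex);

	free_event(event);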

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
---
 kernel/perf_event.c |    8 +-------
 1 file changed, 1 insertion(+), 7 deletions(-)

Index: linux-2.6/kernel/perf_event.c
===================================================================
--- linux-2.6.orig/kernel/perf_event.c
+++ linux-2.6/kernel/perf_event.c
@@ -2920,12 +2920,6 @@ int perf_event_release_kernel(struct per
 {
 	struct perf_event_context *ctx = event->ctx;
 
-	/*
-	 * Remove from the PMU, can't get re-enabled since we got
-	 * here because the last ref went.
-	 */
-	perf_event_disable(event);
-
 	WARN_ON_ONCE(ctx->parent_ctx);
 	/*
 	 * There are two ways this annotation is useful:
@@ -2942,8 +2936,8 @@ int perf_event_release_kernel(struct per
 	mutex_lock_nested(&ctx->mutex, SINGLE_DEPTH_NESTING);
 	raw_spin_lock_irq(&ctx->lock);
 	perf_group_detach(event);
-	list_del_event(event, ctx);
 	raw_spin_unlock_irq(&ctx->lock);
+	perf_remove_from_context(event);
 	mutex_unlock(&ctx->mutex);
 
 	free_event(event);




* [RFC][PATCH 9/9] perf: De-schedule a task context when removing the last event
  2011-04-09 19:17 [RFC][PATCH 0/9] perf: Rework event scheduling Peter Zijlstra
                   ` (7 preceding siblings ...)
  2011-04-09 19:17 ` [RFC][PATCH 8/9] perf: Change close() semantics for group events Peter Zijlstra
@ 2011-04-09 19:17 ` Peter Zijlstra
  2011-05-28 16:42   ` [tip:perf/core] " tip-bot for Peter Zijlstra
  8 siblings, 1 reply; 25+ messages in thread
From: Peter Zijlstra @ 2011-04-09 19:17 UTC (permalink / raw)
  To: Oleg Nesterov, Jiri Olsa, Ingo Molnar
  Cc: linux-kernel, Stephane Eranian, Peter Zijlstra

[-- Attachment #1: perf-deactivate-on-close.patch --]
[-- Type: text/plain, Size: 621 bytes --]


Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
---
 kernel/perf_event.c |    4 ++++
 1 file changed, 4 insertions(+)

Index: linux-2.6/kernel/perf_event.c
===================================================================
--- linux-2.6.orig/kernel/perf_event.c
+++ linux-2.6/kernel/perf_event.c
@@ -1114,6 +1114,10 @@ static int __perf_remove_from_context(vo
 	raw_spin_lock(&ctx->lock);
 	event_sched_out(event, cpuctx, ctx);
 	list_del_event(event, ctx);
+	if (!ctx->nr_events && cpuctx->task_ctx == ctx) {
+		ctx->is_active = 0;
+		cpuctx->task_ctx = NULL;
+	}
 	raw_spin_unlock(&ctx->lock);
 
 	return 0;




* Re: [RFC][PATCH 5/9] perf: Simplify and fix __perf_install_in_context
  2011-04-09 19:17 ` [RFC][PATCH 5/9] perf: Simplify and fix __perf_install_in_context Peter Zijlstra
@ 2011-04-10  8:13   ` Peter Zijlstra
  2011-04-11  8:44     ` Lin Ming
  2011-04-11  8:12   ` Lin Ming
  2011-05-28 16:40   ` [tip:perf/core] perf: Simplify and fix __perf_install_in_context() tip-bot for Peter Zijlstra
  2 siblings, 1 reply; 25+ messages in thread
From: Peter Zijlstra @ 2011-04-10  8:13 UTC (permalink / raw)
  To: Oleg Nesterov; +Cc: Jiri Olsa, Ingo Molnar, linux-kernel, Stephane Eranian

On Sat, 2011-04-09 at 21:17 +0200, Peter Zijlstra wrote:
> +       if (task_ctx) {
> +               task_ctx_sched_out(task_ctx);
> +               /*
> +                * If the context we're installing events in is not the
> +                * active task_ctx, flip them.
> +                */
> +               if (ctx->task && task_ctx != ctx) {
> +                       raw_spin_unlock(&cpuctx->ctx.lock);
> +                       raw_spin_lock(&ctx->lock);
> +                       cpuctx->task_ctx = task_ctx = ctx;
> +               }
> +               task = task_ctx->task;
> +       } 

That is actually buggy, it should read something like:

	if (task_ctx)
		task_ctx_sched_out(task_ctx);

	if (ctx->task && task_ctx != ctx) {
		raw_spin_unlock(&task_ctx->lock);
		raw_spin_lock(&ctx->lock);
		cpuctx->task_ctx = task_ctx = ctx;
	}

	if (task_ctx)
		task = task_ctx->task;

Aside from the trivial locking bug fixed, the previous version wouldn't
actually deal with installing a task_ctx where there was none before.


* Re: [RFC][PATCH 2/9] perf: Clean up ctx reference counting
  2011-04-09 19:17 ` [RFC][PATCH 2/9] perf: Clean up ctx reference counting Peter Zijlstra
@ 2011-04-11  6:05   ` Lin Ming
  2011-04-11  8:35     ` Peter Zijlstra
  2011-05-28 16:39   ` [tip:perf/core] perf: Clean up 'ctx' " tip-bot for Peter Zijlstra
  1 sibling, 1 reply; 25+ messages in thread
From: Lin Ming @ 2011-04-11  6:05 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Oleg Nesterov, Jiri Olsa, Ingo Molnar, linux-kernel, Stephane Eranian

On Sun, 2011-04-10 at 03:17 +0800, Peter Zijlstra wrote:
> Small cleanup to how we refcount in find_get_context(), this also
> allows us to use put_ctx() to free things instead of using kfree().
> 
> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
> ---
>  kernel/perf_event.c |   10 +++-------
>  1 file changed, 3 insertions(+), 7 deletions(-)
> 
> Index: linux-2.6/kernel/perf_event.c
> ===================================================================
> --- linux-2.6.orig/kernel/perf_event.c
> +++ linux-2.6/kernel/perf_event.c
> @@ -2831,16 +2831,12 @@ find_get_context(struct pmu *pmu, struct
>  		unclone_ctx(ctx);
>  		++ctx->pin_count;
>  		raw_spin_unlock_irqrestore(&ctx->lock, flags);
> -	}
> -
> -	if (!ctx) {
> +	} else {
>  		ctx = alloc_perf_context(pmu, task);
>  		err = -ENOMEM;
>  		if (!ctx)
>  			goto errout;
>  
> -		get_ctx(ctx);
> -
>  		err = 0;
>  		mutex_lock(&task->perf_event_mutex);
>  		/*
> @@ -2852,14 +2848,14 @@ find_get_context(struct pmu *pmu, struct
>  		else if (task->perf_event_ctxp[ctxn])
>  			err = -EAGAIN;
>  		else {
> +			get_ctx(ctx);
>  			++ctx->pin_count;
>  			rcu_assign_pointer(task->perf_event_ctxp[ctxn], ctx);
>  		}
>  		mutex_unlock(&task->perf_event_mutex);
>  
>  		if (unlikely(err)) {
> -			put_task_struct(task);
> -			kfree(ctx);
> +			put_ctx(ctx);

You moved the get_ctx(), so it seems that this put_ctx is missing its
relevant get_ctx.

Lin Ming

>  
>  			if (err == -EAGAIN)
>  				goto retry;
> 
> 
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at  http://www.tux.org/lkml/




* Re: [RFC][PATCH 5/9] perf: Simplify and fix __perf_install_in_context
  2011-04-09 19:17 ` [RFC][PATCH 5/9] perf: Simplify and fix __perf_install_in_context Peter Zijlstra
  2011-04-10  8:13   ` Peter Zijlstra
@ 2011-04-11  8:12   ` Lin Ming
  2011-05-28 16:40   ` [tip:perf/core] perf: Simplify and fix __perf_install_in_context() tip-bot for Peter Zijlstra
  2 siblings, 0 replies; 25+ messages in thread
From: Lin Ming @ 2011-04-11  8:12 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Oleg Nesterov, Jiri Olsa, Ingo Molnar, linux-kernel, Stephane Eranian

On Sat, 2011-04-09 at 21:17 +0200, Peter Zijlstra wrote:
> plain text document attachment (perf_install_in_context.patch)
> Currently __perf_install_in_context() will try and schedule in the
> event irrespective of our event scheduling rules; that is, we try to
> schedule CPU-pinned, TASK-pinned, CPU-flexible, TASK-flexible, but
> when creating a new event we simply try and schedule it on top of
> whatever is already on the PMU. This can lead to errors for pinned
> events.
> 
> Therefore, simplify things and simply schedule everything out, add the
> event to the corresponding context and schedule everything back in.
> 
> This also nicely handles the case where with
> __ARCH_WANT_INTERRUPTS_ON_CTXSW the IPI can come right in the middle
> of schedule, before we managed to call perf_event_task_sched_in().
> 
> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
> ---
>  kernel/perf_event.c |   80 ++++++++++++++++++++++------------------------------
>  1 file changed, 35 insertions(+), 45 deletions(-)
> 
> Index: linux-2.6/kernel/perf_event.c
> ===================================================================
> --- linux-2.6.orig/kernel/perf_event.c
> +++ linux-2.6/kernel/perf_event.c
> @@ -1476,8 +1476,12 @@ static void add_event_to_ctx(struct perf
>  	event->tstamp_stopped = tstamp;
>  }
>  
> -static void perf_event_context_sched_in(struct perf_event_context *ctx,
> -					struct task_struct *tsk);
> +static void task_ctx_sched_out(struct perf_event_context *ctx);
> +static void
> +ctx_sched_in(struct perf_event_context *ctx,
> +	     struct perf_cpu_context *cpuctx,
> +	     enum event_type_t event_type,
> +	     struct task_struct *task);
>  
>  /*
>   * Cross CPU call to install and enable a performance event
> @@ -1488,20 +1492,31 @@ static int  __perf_install_in_context(vo
>  {
>  	struct perf_event *event = info;
>  	struct perf_event_context *ctx = event->ctx;
> -	struct perf_event *leader = event->group_leader;
>  	struct perf_cpu_context *cpuctx = __get_cpu_context(ctx);
> -	int err;
> +	struct perf_event_context *task_ctx = cpuctx->task_ctx;
> +	struct task_struct *task = current;
> +
> +	perf_ctx_lock(cpuctx, cpuctx->task_ctx);

perf_ctx_lock(cpuctx, task_ctx)

since task_ctx is assigned from cpuctx->task_ctx.

Lin Ming



* Re: [RFC][PATCH 2/9] perf: Clean up ctx reference counting
  2011-04-11  6:05   ` Lin Ming
@ 2011-04-11  8:35     ` Peter Zijlstra
  0 siblings, 0 replies; 25+ messages in thread
From: Peter Zijlstra @ 2011-04-11  8:35 UTC (permalink / raw)
  To: Lin Ming
  Cc: Oleg Nesterov, Jiri Olsa, Ingo Molnar, linux-kernel, Stephane Eranian

On Mon, 2011-04-11 at 14:05 +0800, Lin Ming wrote:

> > +			put_ctx(ctx);
> 
> You moved the get_ctx(), so it seems that this put_ctx is missing its
> relevant get_ctx.

Yeah, it's cheating since we initialize the refcount to 1.


* Re: [RFC][PATCH 5/9] perf: Simplify and fix __perf_install_in_context
  2011-04-10  8:13   ` Peter Zijlstra
@ 2011-04-11  8:44     ` Lin Ming
  2011-04-11  8:50       ` Peter Zijlstra
  0 siblings, 1 reply; 25+ messages in thread
From: Lin Ming @ 2011-04-11  8:44 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Oleg Nesterov, Jiri Olsa, Ingo Molnar, linux-kernel, Stephane Eranian

On Sun, 2011-04-10 at 10:13 +0200, Peter Zijlstra wrote:
> On Sat, 2011-04-09 at 21:17 +0200, Peter Zijlstra wrote:
> > +       if (task_ctx) {
> > +               task_ctx_sched_out(task_ctx);
> > +               /*
> > +                * If the context we're installing events in is not the
> > +                * active task_ctx, flip them.
> > +                */

In which case will this happen?

For task event, we have:

perf_install_in_context
   task_function_call(task, __perf_install_in_context, event)
      __perf_install_in_context

Doesn't this ensure that the context we're installing events in is the
same as the active task_ctx?

Lin Ming

> > +               if (ctx->task && task_ctx != ctx) {
> > +                       raw_spin_unlock(&cpuctx->ctx.lock);
> > +                       raw_spin_lock(&ctx->lock);
> > +                       cpuctx->task_ctx = task_ctx = ctx;
> > +               }
> > +               task = task_ctx->task;
> > +       } 
> 
> That is actually buggy, it should read something like:
> 
> 	if (task_ctx)
> 		task_ctx_sched_out(task_ctx);
> 
> 	if (ctx->task && task_ctx != ctx) {
> 		raw_spin_unlock(&task_ctx->lock);
> 		raw_spin_lock(&ctx->lock);
> 		cpuctx->task_ctx = task_ctx = ctx;
> 	}
> 
> 	if (task_ctx)
> 		task = task_ctx->task;
> 
> Aside from the trivial locking bug fixed, the previous version wouldn't
> actually deal with installing a task_ctx where there was none before.
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at  http://www.tux.org/lkml/




* Re: [RFC][PATCH 5/9] perf: Simplify and fix __perf_install_in_context
  2011-04-11  8:44     ` Lin Ming
@ 2011-04-11  8:50       ` Peter Zijlstra
  0 siblings, 0 replies; 25+ messages in thread
From: Peter Zijlstra @ 2011-04-11  8:50 UTC (permalink / raw)
  To: Lin Ming
  Cc: Oleg Nesterov, Jiri Olsa, Ingo Molnar, linux-kernel, Stephane Eranian

On Mon, 2011-04-11 at 16:44 +0800, Lin Ming wrote:
> On Sun, 2011-04-10 at 10:13 +0200, Peter Zijlstra wrote:
> > On Sat, 2011-04-09 at 21:17 +0200, Peter Zijlstra wrote:
> > > +       if (task_ctx) {
> > > +               task_ctx_sched_out(task_ctx);
> > > +               /*
> > > +                * If the context we're installing events in is not the
> > > +                * active task_ctx, flip them.
> > > +                */

> > > +               if (ctx->task && task_ctx != ctx) {
> > > +                       raw_spin_unlock(&cpuctx->ctx.lock);
> > > +                       raw_spin_lock(&ctx->lock);
> > > +                       cpuctx->task_ctx = task_ctx = ctx;
> > > +               }
> > > +               task = task_ctx->task;
> > > +       } 
> > 
> > That is actually buggy, it should read something like:
> > 
> > 	if (task_ctx)
> > 		task_ctx_sched_out(task_ctx);
> > 
> > 	if (ctx->task && task_ctx != ctx) {
		if (task_ctx)
> > 			raw_spin_unlock(&task_ctx->lock);
> > 		raw_spin_lock(&ctx->lock);
> > 		cpuctx->task_ctx = task_ctx = ctx;
> > 	}
> > 
> > 	if (task_ctx)
> > 		task = task_ctx->task;
> > 
> > Aside from the trivial locking bug fixed, the previous version wouldn't
> > actually deal with installing a task_ctx where there was none before.

Let me place your comment with the new version, as the old one is
borken ;-)

> In which case will this happen?
> 
> For task event, we have:
> 
> perf_install_in_context
>    task_function_call(task, __perf_install_in_context, event)
>       __perf_install_in_context
> 
> Doesn't this ensure that the context we're installing events in is the
> same as the active task_ctx?

With __ARCH_WANT_INTERRUPTS_ON_CTXSW the IPI might land before we did
perf_event_task_sched_in(), in which case we need to set the task_ctx
ourselves.
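
Roughly, the problematic interleaving looks like this (an illustrative
timeline, not part of the patch):

	perf_install_in_context(ctx, event, cpu)
	  task_function_call(task, __perf_install_in_context, event)
	    -> with __ARCH_WANT_INTERRUPTS_ON_CTXSW the IPI can land while
	       'task' is still in the middle of schedule(), i.e. before
	       perf_event_task_sched_in() has run, so cpuctx->task_ctx is
	       not (yet) the context we are installing into
	    -> __perf_install_in_context() therefore has to set
	       cpuctx->task_ctx = ctx itself before scheduling things back in.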


* [tip:perf/core] perf: Optimize ctx_sched_out()
  2011-04-09 19:17 ` [RFC][PATCH 1/9] perf: Optimize ctx_sched_out Peter Zijlstra
@ 2011-05-28 16:38   ` tip-bot for Peter Zijlstra
  0 siblings, 0 replies; 25+ messages in thread
From: tip-bot for Peter Zijlstra @ 2011-05-28 16:38 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: linux-kernel, hpa, mingo, a.p.zijlstra, oleg, tglx, mingo

Commit-ID:  075e0b00857e166dcc3e39037a1fc5a90acac709
Gitweb:     http://git.kernel.org/tip/075e0b00857e166dcc3e39037a1fc5a90acac709
Author:     Peter Zijlstra <a.p.zijlstra@chello.nl>
AuthorDate: Sat, 9 Apr 2011 21:17:40 +0200
Committer:  Ingo Molnar <mingo@elte.hu>
CommitDate: Sat, 28 May 2011 18:01:09 +0200

perf: Optimize ctx_sched_out()

Oleg noted that ctx_sched_out() disables the PMU even though it might
not actually do anything; avoid needless PMU-disabling.

Reported-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20110409192141.665385503@chello.nl
Signed-off-by: Ingo Molnar <mingo@elte.hu>
---
 kernel/events/core.c |    4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/kernel/events/core.c b/kernel/events/core.c
index d863b3c..4d9a1f01 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -1760,7 +1760,6 @@ static void ctx_sched_out(struct perf_event_context *ctx,
 	struct perf_event *event;
 
 	raw_spin_lock(&ctx->lock);
-	perf_pmu_disable(ctx->pmu);
 	ctx->is_active = 0;
 	if (likely(!ctx->nr_events))
 		goto out;
@@ -1770,6 +1769,7 @@ static void ctx_sched_out(struct perf_event_context *ctx,
 	if (!ctx->nr_active)
 		goto out;
 
+	perf_pmu_disable(ctx->pmu);
 	if (event_type & EVENT_PINNED) {
 		list_for_each_entry(event, &ctx->pinned_groups, group_entry)
 			group_sched_out(event, cpuctx, ctx);
@@ -1779,8 +1779,8 @@ static void ctx_sched_out(struct perf_event_context *ctx,
 		list_for_each_entry(event, &ctx->flexible_groups, group_entry)
 			group_sched_out(event, cpuctx, ctx);
 	}
-out:
 	perf_pmu_enable(ctx->pmu);
+out:
 	raw_spin_unlock(&ctx->lock);
 }
 


* [tip:perf/core] perf: Clean up 'ctx' reference counting
  2011-04-09 19:17 ` [RFC][PATCH 2/9] perf: Clean up ctx reference counting Peter Zijlstra
  2011-04-11  6:05   ` Lin Ming
@ 2011-05-28 16:39   ` tip-bot for Peter Zijlstra
  1 sibling, 0 replies; 25+ messages in thread
From: tip-bot for Peter Zijlstra @ 2011-05-28 16:39 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: linux-kernel, hpa, mingo, a.p.zijlstra, tglx, mingo

Commit-ID:  9137fb28ac74d05eb66d1d8e6778eaa14e6fed43
Gitweb:     http://git.kernel.org/tip/9137fb28ac74d05eb66d1d8e6778eaa14e6fed43
Author:     Peter Zijlstra <a.p.zijlstra@chello.nl>
AuthorDate: Sat, 9 Apr 2011 21:17:41 +0200
Committer:  Ingo Molnar <mingo@elte.hu>
CommitDate: Sat, 28 May 2011 18:01:10 +0200

perf: Clean up 'ctx' reference counting

Small cleanup to how we refcount in find_get_context(), this also
allows us to use put_ctx() to free things instead of using kfree().

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20110409192141.719340481@chello.nl
Signed-off-by: Ingo Molnar <mingo@elte.hu>
---
 kernel/events/core.c |   10 +++-------
 1 files changed, 3 insertions(+), 7 deletions(-)

diff --git a/kernel/events/core.c b/kernel/events/core.c
index 4d9a1f01..d665ac4 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -2835,16 +2835,12 @@ retry:
 		unclone_ctx(ctx);
 		++ctx->pin_count;
 		raw_spin_unlock_irqrestore(&ctx->lock, flags);
-	}
-
-	if (!ctx) {
+	} else {
 		ctx = alloc_perf_context(pmu, task);
 		err = -ENOMEM;
 		if (!ctx)
 			goto errout;
 
-		get_ctx(ctx);
-
 		err = 0;
 		mutex_lock(&task->perf_event_mutex);
 		/*
@@ -2856,14 +2852,14 @@ retry:
 		else if (task->perf_event_ctxp[ctxn])
 			err = -EAGAIN;
 		else {
+			get_ctx(ctx);
 			++ctx->pin_count;
 			rcu_assign_pointer(task->perf_event_ctxp[ctxn], ctx);
 		}
 		mutex_unlock(&task->perf_event_mutex);
 
 		if (unlikely(err)) {
-			put_task_struct(task);
-			kfree(ctx);
+			put_ctx(ctx);
 
 			if (err == -EAGAIN)
 				goto retry;


* [tip:perf/core] perf: Optimize event scheduling locking
  2011-04-09 19:17 ` [RFC][PATCH 3/9] perf: Change event scheduling locking Peter Zijlstra
@ 2011-05-28 16:39   ` tip-bot for Peter Zijlstra
  0 siblings, 0 replies; 25+ messages in thread
From: tip-bot for Peter Zijlstra @ 2011-05-28 16:39 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: linux-kernel, hpa, mingo, a.p.zijlstra, tglx, mingo

Commit-ID:  facc43071cc0d4821c176d7d34570714eb348df9
Gitweb:     http://git.kernel.org/tip/facc43071cc0d4821c176d7d34570714eb348df9
Author:     Peter Zijlstra <a.p.zijlstra@chello.nl>
AuthorDate: Sat, 9 Apr 2011 21:17:42 +0200
Committer:  Ingo Molnar <mingo@elte.hu>
CommitDate: Sat, 28 May 2011 18:01:12 +0200

perf: Optimize event scheduling locking

Currently we only hold one ctx->lock at a time, which results in us
flipping back and forth between cpuctx->ctx.lock and task_ctx->lock.

Avoid this and gain large atomic regions by holding both locks. We
nest the task lock inside the cpu lock, since with task scheduling we
might have to change task ctx while holding the cpu ctx lock.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20110409192141.769881865@chello.nl
Signed-off-by: Ingo Molnar <mingo@elte.hu>
---
 kernel/events/core.c |   61 +++++++++++++++++++++++++++++--------------------
 1 files changed, 36 insertions(+), 25 deletions(-)

diff --git a/kernel/events/core.c b/kernel/events/core.c
index d665ac4..d243af9 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -200,6 +200,22 @@ __get_cpu_context(struct perf_event_context *ctx)
 	return this_cpu_ptr(ctx->pmu->pmu_cpu_context);
 }
 
+static void perf_ctx_lock(struct perf_cpu_context *cpuctx,
+			  struct perf_event_context *ctx)
+{
+	raw_spin_lock(&cpuctx->ctx.lock);
+	if (ctx)
+		raw_spin_lock(&ctx->lock);
+}
+
+static void perf_ctx_unlock(struct perf_cpu_context *cpuctx,
+			    struct perf_event_context *ctx)
+{
+	if (ctx)
+		raw_spin_unlock(&ctx->lock);
+	raw_spin_unlock(&cpuctx->ctx.lock);
+}
+
 #ifdef CONFIG_CGROUP_PERF
 
 /*
@@ -340,11 +356,8 @@ void perf_cgroup_switch(struct task_struct *task, int mode)
 	rcu_read_lock();
 
 	list_for_each_entry_rcu(pmu, &pmus, entry) {
-
 		cpuctx = this_cpu_ptr(pmu->pmu_cpu_context);
 
-		perf_pmu_disable(cpuctx->ctx.pmu);
-
 		/*
 		 * perf_cgroup_events says at least one
 		 * context on this CPU has cgroup events.
@@ -353,6 +366,8 @@ void perf_cgroup_switch(struct task_struct *task, int mode)
 		 * events for a context.
 		 */
 		if (cpuctx->ctx.nr_cgroups > 0) {
+			perf_ctx_lock(cpuctx, cpuctx->task_ctx);
+			perf_pmu_disable(cpuctx->ctx.pmu);
 
 			if (mode & PERF_CGROUP_SWOUT) {
 				cpu_ctx_sched_out(cpuctx, EVENT_ALL);
@@ -372,9 +387,9 @@ void perf_cgroup_switch(struct task_struct *task, int mode)
 				cpuctx->cgrp = perf_cgroup_from_task(task);
 				cpu_ctx_sched_in(cpuctx, EVENT_ALL, task);
 			}
+			perf_pmu_enable(cpuctx->ctx.pmu);
+			perf_ctx_unlock(cpuctx, cpuctx->task_ctx);
 		}
-
-		perf_pmu_enable(cpuctx->ctx.pmu);
 	}
 
 	rcu_read_unlock();
@@ -1759,15 +1774,14 @@ static void ctx_sched_out(struct perf_event_context *ctx,
 {
 	struct perf_event *event;
 
-	raw_spin_lock(&ctx->lock);
 	ctx->is_active = 0;
 	if (likely(!ctx->nr_events))
-		goto out;
+		return;
+
 	update_context_time(ctx);
 	update_cgrp_time_from_cpuctx(cpuctx);
-
 	if (!ctx->nr_active)
-		goto out;
+		return;
 
 	perf_pmu_disable(ctx->pmu);
 	if (event_type & EVENT_PINNED) {
@@ -1780,8 +1794,6 @@ static void ctx_sched_out(struct perf_event_context *ctx,
 			group_sched_out(event, cpuctx, ctx);
 	}
 	perf_pmu_enable(ctx->pmu);
-out:
-	raw_spin_unlock(&ctx->lock);
 }
 
 /*
@@ -1929,8 +1941,10 @@ static void perf_event_context_sched_out(struct task_struct *task, int ctxn,
 	rcu_read_unlock();
 
 	if (do_switch) {
+		raw_spin_lock(&ctx->lock);
 		ctx_sched_out(ctx, cpuctx, EVENT_ALL);
 		cpuctx->task_ctx = NULL;
+		raw_spin_unlock(&ctx->lock);
 	}
 }
 
@@ -2056,10 +2070,9 @@ ctx_sched_in(struct perf_event_context *ctx,
 {
 	u64 now;
 
-	raw_spin_lock(&ctx->lock);
 	ctx->is_active = 1;
 	if (likely(!ctx->nr_events))
-		goto out;
+		return;
 
 	now = perf_clock();
 	ctx->timestamp = now;
@@ -2074,9 +2087,6 @@ ctx_sched_in(struct perf_event_context *ctx,
 	/* Then walk through the lower prio flexible groups */
 	if (event_type & EVENT_FLEXIBLE)
 		ctx_flexible_sched_in(ctx, cpuctx);
-
-out:
-	raw_spin_unlock(&ctx->lock);
 }
 
 static void cpu_ctx_sched_in(struct perf_cpu_context *cpuctx,
@@ -2110,6 +2120,7 @@ static void perf_event_context_sched_in(struct perf_event_context *ctx,
 	if (cpuctx->task_ctx == ctx)
 		return;
 
+	perf_ctx_lock(cpuctx, ctx);
 	perf_pmu_disable(ctx->pmu);
 	/*
 	 * We want to keep the following priority order:
@@ -2124,12 +2135,14 @@ static void perf_event_context_sched_in(struct perf_event_context *ctx,
 
 	cpuctx->task_ctx = ctx;
 
+	perf_pmu_enable(ctx->pmu);
+	perf_ctx_unlock(cpuctx, ctx);
+
 	/*
 	 * Since these rotations are per-cpu, we need to ensure the
 	 * cpu-context we got scheduled on is actually rotating.
 	 */
 	perf_pmu_rotate_start(ctx->pmu);
-	perf_pmu_enable(ctx->pmu);
 }
 
 /*
@@ -2269,7 +2282,6 @@ static void perf_ctx_adjust_freq(struct perf_event_context *ctx, u64 period)
 	u64 interrupts, now;
 	s64 delta;
 
-	raw_spin_lock(&ctx->lock);
 	list_for_each_entry_rcu(event, &ctx->event_list, event_entry) {
 		if (event->state != PERF_EVENT_STATE_ACTIVE)
 			continue;
@@ -2301,7 +2313,6 @@ static void perf_ctx_adjust_freq(struct perf_event_context *ctx, u64 period)
 		if (delta > 0)
 			perf_adjust_period(event, period, delta);
 	}
-	raw_spin_unlock(&ctx->lock);
 }
 
 /*
@@ -2309,16 +2320,12 @@ static void perf_ctx_adjust_freq(struct perf_event_context *ctx, u64 period)
  */
 static void rotate_ctx(struct perf_event_context *ctx)
 {
-	raw_spin_lock(&ctx->lock);
-
 	/*
 	 * Rotate the first entry last of non-pinned groups. Rotation might be
 	 * disabled by the inheritance code.
 	 */
 	if (!ctx->rotate_disable)
 		list_rotate_left(&ctx->flexible_groups);
-
-	raw_spin_unlock(&ctx->lock);
 }
 
 /*
@@ -2345,6 +2352,7 @@ static void perf_rotate_context(struct perf_cpu_context *cpuctx)
 			rotate = 1;
 	}
 
+	perf_ctx_lock(cpuctx, cpuctx->task_ctx);
 	perf_pmu_disable(cpuctx->ctx.pmu);
 	perf_ctx_adjust_freq(&cpuctx->ctx, interval);
 	if (ctx)
@@ -2370,6 +2378,7 @@ done:
 		list_del_init(&cpuctx->rotation_list);
 
 	perf_pmu_enable(cpuctx->ctx.pmu);
+	perf_ctx_unlock(cpuctx, cpuctx->task_ctx);
 }
 
 void perf_event_task_tick(void)
@@ -2424,9 +2433,9 @@ static void perf_event_enable_on_exec(struct perf_event_context *ctx)
 	 * in.
 	 */
 	perf_cgroup_sched_out(current);
-	task_ctx_sched_out(ctx, EVENT_ALL);
 
 	raw_spin_lock(&ctx->lock);
+	task_ctx_sched_out(ctx, EVENT_ALL);
 
 	list_for_each_entry(event, &ctx->pinned_groups, group_entry) {
 		ret = event_enable_on_exec(event, ctx);
@@ -5982,6 +5991,7 @@ free_dev:
 }
 
 static struct lock_class_key cpuctx_mutex;
+static struct lock_class_key cpuctx_lock;
 
 int perf_pmu_register(struct pmu *pmu, char *name, int type)
 {
@@ -6032,6 +6042,7 @@ skip_type:
 		cpuctx = per_cpu_ptr(pmu->pmu_cpu_context, cpu);
 		__perf_event_init_context(&cpuctx->ctx);
 		lockdep_set_class(&cpuctx->ctx.mutex, &cpuctx_mutex);
+		lockdep_set_class(&cpuctx->ctx.lock, &cpuctx_lock);
 		cpuctx->ctx.type = cpu_context;
 		cpuctx->ctx.pmu = pmu;
 		cpuctx->jiffies_interval = 1;
@@ -6776,7 +6787,6 @@ static void perf_event_exit_task_context(struct task_struct *child, int ctxn)
 	 * our context.
 	 */
 	child_ctx = rcu_dereference_raw(child->perf_event_ctxp[ctxn]);
-	task_ctx_sched_out(child_ctx, EVENT_ALL);
 
 	/*
 	 * Take the context lock here so that if find_get_context is
@@ -6784,6 +6794,7 @@ static void perf_event_exit_task_context(struct task_struct *child, int ctxn)
 	 * incremented the context's refcount before we do put_ctx below.
 	 */
 	raw_spin_lock(&child_ctx->lock);
+	task_ctx_sched_out(child_ctx, EVENT_ALL);
 	child->perf_event_ctxp[ctxn] = NULL;
 	/*
 	 * If this context is a clone; unclone it so it can't get


* [tip:perf/core] perf: Remove task_ctx_sched_in()
  2011-04-09 19:17 ` [RFC][PATCH 4/9] perf: Remove task_ctx_sched_in Peter Zijlstra
@ 2011-05-28 16:39   ` tip-bot for Peter Zijlstra
  0 siblings, 0 replies; 25+ messages in thread
From: tip-bot for Peter Zijlstra @ 2011-05-28 16:39 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: linux-kernel, hpa, mingo, a.p.zijlstra, tglx, mingo

Commit-ID:  04dc2dbbfe1c6f81b996d4dab255da75f9efbb4a
Gitweb:     http://git.kernel.org/tip/04dc2dbbfe1c6f81b996d4dab255da75f9efbb4a
Author:     Peter Zijlstra <a.p.zijlstra@chello.nl>
AuthorDate: Sat, 9 Apr 2011 21:17:43 +0200
Committer:  Ingo Molnar <mingo@elte.hu>
CommitDate: Sat, 28 May 2011 18:01:14 +0200

perf: Remove task_ctx_sched_in()

Make task_ctx_sched_*() imply EVENT_ALL, since anything less will not
actually have scheduled the task in/out at all.

Since there is no site that schedules all of a task's events in at
once (its flexible groups interleave with the CPU context's flexible
groups), we can remove this function.
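
For reference, a full sched-in always follows the CPU-pinned,
task-pinned, CPU-flexible, task-flexible order; a rough sketch using
the helpers as they exist in this series (later collected into
perf_event_sched_in()):

	cpu_ctx_sched_in(cpuctx, EVENT_PINNED, task);	 /* CPU-pinned    */
	ctx_sched_in(ctx, cpuctx, EVENT_PINNED, task);	 /* task-pinned   */
	cpu_ctx_sched_in(cpuctx, EVENT_FLEXIBLE, task);	 /* CPU-flexible  */
	ctx_sched_in(ctx, cpuctx, EVENT_FLEXIBLE, task); /* task-flexible */

Because the task's flexible groups interleave with the CPU context's,
there is never a point at which the task context goes on in one piece.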

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20110409192141.817893268@chello.nl
Signed-off-by: Ingo Molnar <mingo@elte.hu>
---
 kernel/events/core.c |   26 ++++++--------------------
 1 files changed, 6 insertions(+), 20 deletions(-)

diff --git a/kernel/events/core.c b/kernel/events/core.c
index d243af9..66b3dd8 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -1979,8 +1979,7 @@ void __perf_event_task_sched_out(struct task_struct *task,
 		perf_cgroup_sched_out(task);
 }
 
-static void task_ctx_sched_out(struct perf_event_context *ctx,
-			       enum event_type_t event_type)
+static void task_ctx_sched_out(struct perf_event_context *ctx)
 {
 	struct perf_cpu_context *cpuctx = __get_cpu_context(ctx);
 
@@ -1990,7 +1989,7 @@ static void task_ctx_sched_out(struct perf_event_context *ctx,
 	if (WARN_ON_ONCE(ctx != cpuctx->task_ctx))
 		return;
 
-	ctx_sched_out(ctx, cpuctx, event_type);
+	ctx_sched_out(ctx, cpuctx, EVENT_ALL);
 	cpuctx->task_ctx = NULL;
 }
 
@@ -2098,19 +2097,6 @@ static void cpu_ctx_sched_in(struct perf_cpu_context *cpuctx,
 	ctx_sched_in(ctx, cpuctx, event_type, task);
 }
 
-static void task_ctx_sched_in(struct perf_event_context *ctx,
-			      enum event_type_t event_type)
-{
-	struct perf_cpu_context *cpuctx;
-
-	cpuctx = __get_cpu_context(ctx);
-	if (cpuctx->task_ctx == ctx)
-		return;
-
-	ctx_sched_in(ctx, cpuctx, event_type, NULL);
-	cpuctx->task_ctx = ctx;
-}
-
 static void perf_event_context_sched_in(struct perf_event_context *ctx,
 					struct task_struct *task)
 {
@@ -2363,7 +2349,7 @@ static void perf_rotate_context(struct perf_cpu_context *cpuctx)
 
 	cpu_ctx_sched_out(cpuctx, EVENT_FLEXIBLE);
 	if (ctx)
-		task_ctx_sched_out(ctx, EVENT_FLEXIBLE);
+		ctx_sched_out(ctx, cpuctx, EVENT_FLEXIBLE);
 
 	rotate_ctx(&cpuctx->ctx);
 	if (ctx)
@@ -2371,7 +2357,7 @@ static void perf_rotate_context(struct perf_cpu_context *cpuctx)
 
 	cpu_ctx_sched_in(cpuctx, EVENT_FLEXIBLE, current);
 	if (ctx)
-		task_ctx_sched_in(ctx, EVENT_FLEXIBLE);
+		ctx_sched_in(ctx, cpuctx, EVENT_FLEXIBLE, current);
 
 done:
 	if (remove)
@@ -2435,7 +2421,7 @@ static void perf_event_enable_on_exec(struct perf_event_context *ctx)
 	perf_cgroup_sched_out(current);
 
 	raw_spin_lock(&ctx->lock);
-	task_ctx_sched_out(ctx, EVENT_ALL);
+	task_ctx_sched_out(ctx);
 
 	list_for_each_entry(event, &ctx->pinned_groups, group_entry) {
 		ret = event_enable_on_exec(event, ctx);
@@ -6794,7 +6780,7 @@ static void perf_event_exit_task_context(struct task_struct *child, int ctxn)
 	 * incremented the context's refcount before we do put_ctx below.
 	 */
 	raw_spin_lock(&child_ctx->lock);
-	task_ctx_sched_out(child_ctx, EVENT_ALL);
+	task_ctx_sched_out(child_ctx);
 	child->perf_event_ctxp[ctxn] = NULL;
 	/*
 	 * If this context is a clone; unclone it so it can't get

^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [tip:perf/core] perf: Simplify and fix __perf_install_in_context()
  2011-04-09 19:17 ` [RFC][PATCH 5/9] perf: Simplify and fix __perf_install_in_context Peter Zijlstra
  2011-04-10  8:13   ` Peter Zijlstra
  2011-04-11  8:12   ` Lin Ming
@ 2011-05-28 16:40   ` tip-bot for Peter Zijlstra
  2 siblings, 0 replies; 25+ messages in thread
From: tip-bot for Peter Zijlstra @ 2011-05-28 16:40 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: linux-kernel, hpa, mingo, a.p.zijlstra, tglx, mingo

Commit-ID:  2c29ef0fef8aaff1f91263fc75c749d659da6972
Gitweb:     http://git.kernel.org/tip/2c29ef0fef8aaff1f91263fc75c749d659da6972
Author:     Peter Zijlstra <a.p.zijlstra@chello.nl>
AuthorDate: Sat, 9 Apr 2011 21:17:44 +0200
Committer:  Ingo Molnar <mingo@elte.hu>
CommitDate: Sat, 28 May 2011 18:01:16 +0200

perf: Simplify and fix __perf_install_in_context()

Currently __perf_install_in_context() tries to schedule in the event
irrespective of our event scheduling rules; that is, we normally
schedule CPU-pinned, TASK-pinned, CPU-flexible, TASK-flexible, but
when creating a new event we simply try to schedule it on top of
whatever is already on the PMU, which can lead to errors for pinned
events.

Therefore, simplify things and simply schedule everything out, add the
event to the corresponding context and schedule everything back in.

This also nicely handles the case where, with
__ARCH_WANT_INTERRUPTS_ON_CTXSW, the IPI can arrive right in the
middle of schedule(), before we have managed to call
perf_event_task_sched_in().
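
In outline, the new cross-CPU handler becomes (a condensed sketch of
the hunk below, not the literal code):

	perf_ctx_lock(cpuctx, cpuctx->task_ctx);
	perf_pmu_disable(cpuctx->ctx.pmu);

	if (task_ctx)
		task_ctx_sched_out(task_ctx);	/* task events off */
	cpu_ctx_sched_out(cpuctx, EVENT_ALL);	/* CPU events off  */

	add_event_to_ctx(event, ctx);		/* add the new one */

	/* ... and schedule everything back in: CPU-pinned, task-pinned,
	 * CPU-flexible, task-flexible (folded into perf_event_sched_in()
	 * by a later patch in the series). */

	perf_pmu_enable(cpuctx->ctx.pmu);
	perf_ctx_unlock(cpuctx, task_ctx);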

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20110409192141.870894224@chello.nl
Signed-off-by: Ingo Molnar <mingo@elte.hu>
---
 kernel/events/core.c |   82 ++++++++++++++++++++++----------------------------
 1 files changed, 36 insertions(+), 46 deletions(-)

diff --git a/kernel/events/core.c b/kernel/events/core.c
index 66b3dd8..60b333a 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -1469,8 +1469,12 @@ static void add_event_to_ctx(struct perf_event *event,
 	event->tstamp_stopped = tstamp;
 }
 
-static void perf_event_context_sched_in(struct perf_event_context *ctx,
-					struct task_struct *tsk);
+static void task_ctx_sched_out(struct perf_event_context *ctx);
+static void
+ctx_sched_in(struct perf_event_context *ctx,
+	     struct perf_cpu_context *cpuctx,
+	     enum event_type_t event_type,
+	     struct task_struct *task);
 
 /*
  * Cross CPU call to install and enable a performance event
@@ -1481,20 +1485,31 @@ static int  __perf_install_in_context(void *info)
 {
 	struct perf_event *event = info;
 	struct perf_event_context *ctx = event->ctx;
-	struct perf_event *leader = event->group_leader;
 	struct perf_cpu_context *cpuctx = __get_cpu_context(ctx);
-	int err;
+	struct perf_event_context *task_ctx = cpuctx->task_ctx;
+	struct task_struct *task = current;
+
+	perf_ctx_lock(cpuctx, cpuctx->task_ctx);
+	perf_pmu_disable(cpuctx->ctx.pmu);
 
 	/*
-	 * In case we're installing a new context to an already running task,
-	 * could also happen before perf_event_task_sched_in() on architectures
-	 * which do context switches with IRQs enabled.
+	 * If there was an active task_ctx schedule it out.
 	 */
-	if (ctx->task && !cpuctx->task_ctx)
-		perf_event_context_sched_in(ctx, ctx->task);
+	if (task_ctx) {
+		task_ctx_sched_out(task_ctx);
+		/*
+		 * If the context we're installing events in is not the
+		 * active task_ctx, flip them.
+		 */
+		if (ctx->task && task_ctx != ctx) {
+			raw_spin_unlock(&cpuctx->ctx.lock);
+			raw_spin_lock(&ctx->lock);
+			cpuctx->task_ctx = task_ctx = ctx;
+		}
+		task = task_ctx->task;
+	}
+	cpu_ctx_sched_out(cpuctx, EVENT_ALL);
 
-	raw_spin_lock(&ctx->lock);
-	ctx->is_active = 1;
 	update_context_time(ctx);
 	/*
 	 * update cgrp time only if current cgrp
@@ -1505,43 +1520,18 @@ static int  __perf_install_in_context(void *info)
 
 	add_event_to_ctx(event, ctx);
 
-	if (!event_filter_match(event))
-		goto unlock;
-
-	/*
-	 * Don't put the event on if it is disabled or if
-	 * it is in a group and the group isn't on.
-	 */
-	if (event->state != PERF_EVENT_STATE_INACTIVE ||
-	    (leader != event && leader->state != PERF_EVENT_STATE_ACTIVE))
-		goto unlock;
-
 	/*
-	 * An exclusive event can't go on if there are already active
-	 * hardware events, and no hardware event can go on if there
-	 * is already an exclusive event on.
+	 * Schedule everything back in
 	 */
-	if (!group_can_go_on(event, cpuctx, 1))
-		err = -EEXIST;
-	else
-		err = event_sched_in(event, cpuctx, ctx);
-
-	if (err) {
-		/*
-		 * This event couldn't go on.  If it is in a group
-		 * then we have to pull the whole group off.
-		 * If the event group is pinned then put it in error state.
-		 */
-		if (leader != event)
-			group_sched_out(leader, cpuctx, ctx);
-		if (leader->attr.pinned) {
-			update_group_times(leader);
-			leader->state = PERF_EVENT_STATE_ERROR;
-		}
-	}
+	cpu_ctx_sched_in(cpuctx, EVENT_PINNED, task);
+	if (task_ctx)
+		ctx_sched_in(task_ctx, cpuctx, EVENT_PINNED, task);
+	cpu_ctx_sched_in(cpuctx, EVENT_FLEXIBLE, task);
+	if (task_ctx)
+		ctx_sched_in(task_ctx, cpuctx, EVENT_FLEXIBLE, task);
 
-unlock:
-	raw_spin_unlock(&ctx->lock);
+	perf_pmu_enable(cpuctx->ctx.pmu);
+	perf_ctx_unlock(cpuctx, task_ctx);
 
 	return 0;
 }

^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [tip:perf/core] perf: Change and simplify ctx::is_active semantics
  2011-04-09 19:17 ` [RFC][PATCH 6/9] perf: Change ctx::is_active semantics Peter Zijlstra
@ 2011-05-28 16:40   ` tip-bot for Peter Zijlstra
  0 siblings, 0 replies; 25+ messages in thread
From: tip-bot for Peter Zijlstra @ 2011-05-28 16:40 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: linux-kernel, hpa, mingo, a.p.zijlstra, tglx, mingo

Commit-ID:  db24d33e08b88e990991760a44d72006a5dc6102
Gitweb:     http://git.kernel.org/tip/db24d33e08b88e990991760a44d72006a5dc6102
Author:     Peter Zijlstra <a.p.zijlstra@chello.nl>
AuthorDate: Sat, 9 Apr 2011 21:17:45 +0200
Committer:  Ingo Molnar <mingo@elte.hu>
CommitDate: Sat, 28 May 2011 18:01:17 +0200

perf: Change and simplify ctx::is_active semantics

Instead of tracking whether a context is active or not, track which
event types of the context are active. By making ctx->is_active a
bitmask of EVENT_PINNED|EVENT_FLEXIBLE we can simplify some of the
scheduling routines, since they can avoid scheduling in event types
that are already active.
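
For reference, event_type_t is a small bitmask, so ctx->is_active now
records which classes of events are currently scheduled; the values
are roughly as follows (quoted from memory, see the enum near the top
of core.c for the authoritative definition):

	enum event_type_t {
		EVENT_FLEXIBLE	= 0x1,
		EVENT_PINNED	= 0x2,
		EVENT_ALL	= EVENT_FLEXIBLE | EVENT_PINNED,
	};

	/*
	 * E.g. after scheduling in only the pinned groups:
	 *
	 *	ctx->is_active == EVENT_PINNED
	 *
	 * and ctx_sched_in()/ctx_sched_out() skip any class whose bit
	 * is already set/cleared, which is what avoids putting already
	 * active events on twice.
	 */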

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20110409192141.930282378@chello.nl
Signed-off-by: Ingo Molnar <mingo@elte.hu>
---
 kernel/events/core.c |   14 ++++++++------
 1 files changed, 8 insertions(+), 6 deletions(-)

diff --git a/kernel/events/core.c b/kernel/events/core.c
index 60b333a..71c2d44 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -1763,8 +1763,9 @@ static void ctx_sched_out(struct perf_event_context *ctx,
 			  enum event_type_t event_type)
 {
 	struct perf_event *event;
+	int is_active = ctx->is_active;
 
-	ctx->is_active = 0;
+	ctx->is_active &= ~event_type;
 	if (likely(!ctx->nr_events))
 		return;
 
@@ -1774,12 +1775,12 @@ static void ctx_sched_out(struct perf_event_context *ctx,
 		return;
 
 	perf_pmu_disable(ctx->pmu);
-	if (event_type & EVENT_PINNED) {
+	if ((is_active & EVENT_PINNED) && (event_type & EVENT_PINNED)) {
 		list_for_each_entry(event, &ctx->pinned_groups, group_entry)
 			group_sched_out(event, cpuctx, ctx);
 	}
 
-	if (event_type & EVENT_FLEXIBLE) {
+	if ((is_active & EVENT_FLEXIBLE) && (event_type & EVENT_FLEXIBLE)) {
 		list_for_each_entry(event, &ctx->flexible_groups, group_entry)
 			group_sched_out(event, cpuctx, ctx);
 	}
@@ -2058,8 +2059,9 @@ ctx_sched_in(struct perf_event_context *ctx,
 	     struct task_struct *task)
 {
 	u64 now;
+	int is_active = ctx->is_active;
 
-	ctx->is_active = 1;
+	ctx->is_active |= event_type;
 	if (likely(!ctx->nr_events))
 		return;
 
@@ -2070,11 +2072,11 @@ ctx_sched_in(struct perf_event_context *ctx,
 	 * First go through the list and put on any pinned groups
 	 * in order to give them the best chance of going on.
 	 */
-	if (event_type & EVENT_PINNED)
+	if (!(is_active & EVENT_PINNED) && (event_type & EVENT_PINNED))
 		ctx_pinned_sched_in(ctx, cpuctx);
 
 	/* Then walk through the lower prio flexible groups */
-	if (event_type & EVENT_FLEXIBLE)
+	if (!(is_active & EVENT_FLEXIBLE) && (event_type & EVENT_FLEXIBLE))
 		ctx_flexible_sched_in(ctx, cpuctx);
 }
 

^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [tip:perf/core] perf: Collect the schedule-in rules in one function
  2011-04-09 19:17 ` [RFC][PATCH 7/9] perf: Collect the schedule in rules in one function Peter Zijlstra
@ 2011-05-28 16:41   ` tip-bot for Peter Zijlstra
  0 siblings, 0 replies; 25+ messages in thread
From: tip-bot for Peter Zijlstra @ 2011-05-28 16:41 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: linux-kernel, hpa, mingo, a.p.zijlstra, tglx, mingo

Commit-ID:  dce5855bba5df9e87bb04584d505c1f1b103c652
Gitweb:     http://git.kernel.org/tip/dce5855bba5df9e87bb04584d505c1f1b103c652
Author:     Peter Zijlstra <a.p.zijlstra@chello.nl>
AuthorDate: Sat, 9 Apr 2011 21:17:46 +0200
Committer:  Ingo Molnar <mingo@elte.hu>
CommitDate: Sat, 28 May 2011 18:01:19 +0200

perf: Collect the schedule-in rules in one function

The schedule-in rules were scattered about; collect them into a
single function. No change in functionality.
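
After this, all three call-sites (__perf_install_in_context(),
perf_event_context_sched_in() and perf_rotate_context()) go through
the one helper; a typical call looks like:

	perf_event_sched_in(cpuctx, cpuctx->task_ctx, current);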

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20110409192141.979862055@chello.nl
Signed-off-by: Ingo Molnar <mingo@elte.hu>
---
 kernel/events/core.c |   27 +++++++++++++++------------
 1 files changed, 15 insertions(+), 12 deletions(-)

diff --git a/kernel/events/core.c b/kernel/events/core.c
index 71c2d44..802f3b2 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -1476,6 +1476,18 @@ ctx_sched_in(struct perf_event_context *ctx,
 	     enum event_type_t event_type,
 	     struct task_struct *task);
 
+static void perf_event_sched_in(struct perf_cpu_context *cpuctx,
+				struct perf_event_context *ctx,
+				struct task_struct *task)
+{
+	cpu_ctx_sched_in(cpuctx, EVENT_PINNED, task);
+	if (ctx)
+		ctx_sched_in(ctx, cpuctx, EVENT_PINNED, task);
+	cpu_ctx_sched_in(cpuctx, EVENT_FLEXIBLE, task);
+	if (ctx)
+		ctx_sched_in(ctx, cpuctx, EVENT_FLEXIBLE, task);
+}
+
 /*
  * Cross CPU call to install and enable a performance event
  *
@@ -1523,12 +1535,7 @@ static int  __perf_install_in_context(void *info)
 	/*
 	 * Schedule everything back in
 	 */
-	cpu_ctx_sched_in(cpuctx, EVENT_PINNED, task);
-	if (task_ctx)
-		ctx_sched_in(task_ctx, cpuctx, EVENT_PINNED, task);
-	cpu_ctx_sched_in(cpuctx, EVENT_FLEXIBLE, task);
-	if (task_ctx)
-		ctx_sched_in(task_ctx, cpuctx, EVENT_FLEXIBLE, task);
+	perf_event_sched_in(cpuctx, task_ctx, task);
 
 	perf_pmu_enable(cpuctx->ctx.pmu);
 	perf_ctx_unlock(cpuctx, task_ctx);
@@ -2107,9 +2114,7 @@ static void perf_event_context_sched_in(struct perf_event_context *ctx,
 	 */
 	cpu_ctx_sched_out(cpuctx, EVENT_FLEXIBLE);
 
-	ctx_sched_in(ctx, cpuctx, EVENT_PINNED, task);
-	cpu_ctx_sched_in(cpuctx, EVENT_FLEXIBLE, task);
-	ctx_sched_in(ctx, cpuctx, EVENT_FLEXIBLE, task);
+	perf_event_sched_in(cpuctx, ctx, task);
 
 	cpuctx->task_ctx = ctx;
 
@@ -2347,9 +2352,7 @@ static void perf_rotate_context(struct perf_cpu_context *cpuctx)
 	if (ctx)
 		rotate_ctx(ctx);
 
-	cpu_ctx_sched_in(cpuctx, EVENT_FLEXIBLE, current);
-	if (ctx)
-		ctx_sched_in(ctx, cpuctx, EVENT_FLEXIBLE, current);
+	perf_event_sched_in(cpuctx, ctx, current);
 
 done:
 	if (remove)

^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [tip:perf/core] perf: Change close() semantics for group events
  2011-04-09 19:17 ` [RFC][PATCH 8/9] perf: Change close() semantics for group events Peter Zijlstra
@ 2011-05-28 16:41   ` tip-bot for Peter Zijlstra
  0 siblings, 0 replies; 25+ messages in thread
From: tip-bot for Peter Zijlstra @ 2011-05-28 16:41 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: linux-kernel, hpa, mingo, a.p.zijlstra, tglx, vweaver1, mingo

Commit-ID:  e03a9a55b4e45377af9ca3d464135f9ea280b8f8
Gitweb:     http://git.kernel.org/tip/e03a9a55b4e45377af9ca3d464135f9ea280b8f8
Author:     Peter Zijlstra <a.p.zijlstra@chello.nl>
AuthorDate: Sat, 9 Apr 2011 21:17:47 +0200
Committer:  Ingo Molnar <mingo@elte.hu>
CommitDate: Sat, 28 May 2011 18:01:21 +0200

perf: Change close() semantics for group events

In order to always call list_del_event() on the correct CPU when the
event is part of an active context, and to avoid having to do two
IPIs, change the close() semantics slightly.

The current perf_event_disable() call would disable a whole group if
the event that's being closed is the group leader, whereas the new
code keeps the group siblings enabled.

People should not rely on this behaviour, and I don't think they do;
but should we find that they do, the fix is easy and we simply take
the double-IPI cost.
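
By way of illustration, a minimal userspace sketch of the affected
case: a group leader with one sibling, where the leader's fd is closed
while the sibling stays open. After this change the sibling keeps
counting; previously the implicit perf_event_disable() on the leader
stopped the whole group. (Counter choices are illustrative and error
handling is elided.)

	#include <linux/perf_event.h>
	#include <sys/syscall.h>
	#include <sys/types.h>
	#include <stdint.h>
	#include <string.h>
	#include <stdio.h>
	#include <unistd.h>

	static int perf_event_open(struct perf_event_attr *attr, pid_t pid,
				   int cpu, int group_fd, unsigned long flags)
	{
		return syscall(__NR_perf_event_open, attr, pid, cpu,
			       group_fd, flags);
	}

	int main(void)
	{
		struct perf_event_attr attr;
		int leader, sibling;
		uint64_t count;

		memset(&attr, 0, sizeof(attr));
		attr.type = PERF_TYPE_HARDWARE;
		attr.size = sizeof(attr);
		attr.config = PERF_COUNT_HW_CPU_CYCLES;
		leader = perf_event_open(&attr, 0, -1, -1, 0);	  /* group leader */

		attr.config = PERF_COUNT_HW_INSTRUCTIONS;
		sibling = perf_event_open(&attr, 0, -1, leader, 0); /* sibling */

		close(leader);		/* leader goes away ...		*/

		/* ... but the sibling keeps counting on its own */
		if (read(sibling, &count, sizeof(count)) == sizeof(count))
			printf("instructions: %llu\n", (unsigned long long)count);

		close(sibling);
		return 0;
	}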

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Vince Weaver <vweaver1@eecs.utk.edu>
Link: http://lkml.kernel.org/r/20110409192142.038377551@chello.nl
Signed-off-by: Ingo Molnar <mingo@elte.hu>
---
 kernel/events/core.c |    8 +-------
 1 files changed, 1 insertions(+), 7 deletions(-)

diff --git a/kernel/events/core.c b/kernel/events/core.c
index 802f3b2..c378062 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -2920,12 +2920,6 @@ int perf_event_release_kernel(struct perf_event *event)
 {
 	struct perf_event_context *ctx = event->ctx;
 
-	/*
-	 * Remove from the PMU, can't get re-enabled since we got
-	 * here because the last ref went.
-	 */
-	perf_event_disable(event);
-
 	WARN_ON_ONCE(ctx->parent_ctx);
 	/*
 	 * There are two ways this annotation is useful:
@@ -2942,8 +2936,8 @@ int perf_event_release_kernel(struct perf_event *event)
 	mutex_lock_nested(&ctx->mutex, SINGLE_DEPTH_NESTING);
 	raw_spin_lock_irq(&ctx->lock);
 	perf_group_detach(event);
-	list_del_event(event, ctx);
 	raw_spin_unlock_irq(&ctx->lock);
+	perf_remove_from_context(event);
 	mutex_unlock(&ctx->mutex);
 
 	free_event(event);

^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [tip:perf/core] perf: De-schedule a task context when removing the last event
  2011-04-09 19:17 ` [RFC][PATCH 9/9] perf: De-schedule a task context when removing the last event Peter Zijlstra
@ 2011-05-28 16:42   ` tip-bot for Peter Zijlstra
  0 siblings, 0 replies; 25+ messages in thread
From: tip-bot for Peter Zijlstra @ 2011-05-28 16:42 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: linux-kernel, hpa, mingo, a.p.zijlstra, tglx, mingo

Commit-ID:  64ce312618ef0e11d88def80effcefd1b59fdb1e
Gitweb:     http://git.kernel.org/tip/64ce312618ef0e11d88def80effcefd1b59fdb1e
Author:     Peter Zijlstra <a.p.zijlstra@chello.nl>
AuthorDate: Sat, 9 Apr 2011 21:17:48 +0200
Committer:  Ingo Molnar <mingo@elte.hu>
CommitDate: Sat, 28 May 2011 18:01:23 +0200

perf: De-schedule a task context when removing the last event

Since perf_install_in_context() will now install a context when we
add the first event, we can de-schedule the context when the last
event is removed.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20110409192142.090431763@chello.nl
Signed-off-by: Ingo Molnar <mingo@elte.hu>
---
 kernel/events/core.c |    4 ++++
 1 files changed, 4 insertions(+), 0 deletions(-)

diff --git a/kernel/events/core.c b/kernel/events/core.c
index c378062..cc5d57d 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -1120,6 +1120,10 @@ static int __perf_remove_from_context(void *info)
 	raw_spin_lock(&ctx->lock);
 	event_sched_out(event, cpuctx, ctx);
 	list_del_event(event, ctx);
+	if (!ctx->nr_events && cpuctx->task_ctx == ctx) {
+		ctx->is_active = 0;
+		cpuctx->task_ctx = NULL;
+	}
 	raw_spin_unlock(&ctx->lock);
 
 	return 0;

^ permalink raw reply related	[flat|nested] 25+ messages in thread

end of thread, other threads:[~2011-05-28 16:42 UTC | newest]

Thread overview: 25+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2011-04-09 19:17 [RFC][PATCH 0/9] perf: Rework event scheduling Peter Zijlstra
2011-04-09 19:17 ` [RFC][PATCH 1/9] perf: Optimize ctx_sched_out Peter Zijlstra
2011-05-28 16:38   ` [tip:perf/core] perf: Optimize ctx_sched_out() tip-bot for Peter Zijlstra
2011-04-09 19:17 ` [RFC][PATCH 2/9] perf: Clean up ctx reference counting Peter Zijlstra
2011-04-11  6:05   ` Lin Ming
2011-04-11  8:35     ` Peter Zijlstra
2011-05-28 16:39   ` [tip:perf/core] perf: Clean up 'ctx' " tip-bot for Peter Zijlstra
2011-04-09 19:17 ` [RFC][PATCH 3/9] perf: Change event scheduling locking Peter Zijlstra
2011-05-28 16:39   ` [tip:perf/core] perf: Optimize " tip-bot for Peter Zijlstra
2011-04-09 19:17 ` [RFC][PATCH 4/9] perf: Remove task_ctx_sched_in Peter Zijlstra
2011-05-28 16:39   ` [tip:perf/core] perf: Remove task_ctx_sched_in() tip-bot for Peter Zijlstra
2011-04-09 19:17 ` [RFC][PATCH 5/9] perf: Simplify and fix __perf_install_in_context Peter Zijlstra
2011-04-10  8:13   ` Peter Zijlstra
2011-04-11  8:44     ` Lin Ming
2011-04-11  8:50       ` Peter Zijlstra
2011-04-11  8:12   ` Lin Ming
2011-05-28 16:40   ` [tip:perf/core] perf: Simplify and fix __perf_install_in_context() tip-bot for Peter Zijlstra
2011-04-09 19:17 ` [RFC][PATCH 6/9] perf: Change ctx::is_active semantics Peter Zijlstra
2011-05-28 16:40   ` [tip:perf/core] perf: Change and simplify " tip-bot for Peter Zijlstra
2011-04-09 19:17 ` [RFC][PATCH 7/9] perf: Collect the schedule in rules in one function Peter Zijlstra
2011-05-28 16:41   ` [tip:perf/core] perf: Collect the schedule-in " tip-bot for Peter Zijlstra
2011-04-09 19:17 ` [RFC][PATCH 8/9] perf: Change close() semantics for group events Peter Zijlstra
2011-05-28 16:41   ` [tip:perf/core] " tip-bot for Peter Zijlstra
2011-04-09 19:17 ` [RFC][PATCH 9/9] perf: De-schedule a task context when removing the last event Peter Zijlstra
2011-05-28 16:42   ` [tip:perf/core] " tip-bot for Peter Zijlstra
