* [GIT PULL] perf scheduling updates: flexible/pinned groups reordering, optimizations
From: Frederic Weisbecker @ 2010-01-17 12:22 UTC
  To: Ingo Molnar
  Cc: LKML, Frederic Weisbecker, Peter Zijlstra, Paul Mackerras,
	Arnaldo Carvalho de Melo

Ingo,

Please pull the perf/scheduling branch that can be found at:

git://git.kernel.org/pub/scm/linux/kernel/git/frederic/random-tracing.git
	perf/scheduling

It brings more progress wrt the scheduling order between pinned
and flexible events.

There is still some work to do; for example, the context spinlocks
could be acquired once instead of multiple times in some paths.

But this set is already much more optimized than what we had before,
especially thanks to the third patch.

Thanks,
	Frederic
---

Frederic Weisbecker (4):
      perf: Make __perf_event_sched_out static
      perf: Allow pinned and flexible groups to be scheduled separately
      perf: Don't schedule out/in pinned events on task tick
      perf: Better order flexible and pinned scheduling


 kernel/perf_event.c |  149 +++++++++++++++++++++++++++++++++++++--------------
 1 files changed, 109 insertions(+), 40 deletions(-)

* [PATCH 1/4] perf: Make __perf_event_sched_out static
From: Frederic Weisbecker @ 2010-01-17 12:22 UTC
  To: Ingo Molnar
  Cc: LKML, Frederic Weisbecker, Peter Zijlstra, Paul Mackerras,
	Arnaldo Carvalho de Melo, Ingo Molnar

__perf_event_sched_out() doesn't need to be globally available, so
make it static.

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Arnaldo Carvalho de Melo <acme@infradead.org>
---
 kernel/perf_event.c |    4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/kernel/perf_event.c b/kernel/perf_event.c
index eae6ff6..c4e90b8 100644
--- a/kernel/perf_event.c
+++ b/kernel/perf_event.c
@@ -1049,8 +1049,8 @@ static int perf_event_refresh(struct perf_event *event, int refresh)
 	return 0;
 }
 
-void __perf_event_sched_out(struct perf_event_context *ctx,
-			      struct perf_cpu_context *cpuctx)
+static void __perf_event_sched_out(struct perf_event_context *ctx,
+				   struct perf_cpu_context *cpuctx)
 {
 	struct perf_event *event;
 
-- 
1.6.2.3


* [PATCH 2/4] perf: Allow pinned and flexible groups to be scheduled separately
From: Frederic Weisbecker @ 2010-01-17 12:22 UTC
  To: Ingo Molnar
  Cc: LKML, Frederic Weisbecker, Peter Zijlstra, Paul Mackerras,
	Arnaldo Carvalho de Melo, Ingo Molnar

Tune the scheduling helpers so that we can choose to schedule the
pinned groups, the flexible groups, or both from a context.

And while at it, refactor the naming of these helpers a bit to make
them more consistent and flexible.

There is no (intended) change in scheduling behaviour in this
patch.
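
For example (illustrative call sites only; the helpers and flags come
from the diff below), callers can now pick which group lists get
walked:

	ctx_sched_out(ctx, cpuctx, EVENT_PINNED);	/* pinned groups only */
	ctx_sched_out(ctx, cpuctx, EVENT_FLEXIBLE);	/* flexible groups only */
	ctx_sched_out(ctx, cpuctx, EVENT_ALL);		/* both, as before */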

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Arnaldo Carvalho de Melo <acme@infradead.org>
---
 kernel/perf_event.c |  137 ++++++++++++++++++++++++++++++++++----------------
 1 files changed, 93 insertions(+), 44 deletions(-)

diff --git a/kernel/perf_event.c b/kernel/perf_event.c
index c4e90b8..bfc4ee0 100644
--- a/kernel/perf_event.c
+++ b/kernel/perf_event.c
@@ -1049,8 +1049,15 @@ static int perf_event_refresh(struct perf_event *event, int refresh)
 	return 0;
 }
 
-static void __perf_event_sched_out(struct perf_event_context *ctx,
-				   struct perf_cpu_context *cpuctx)
+enum event_type_t {
+	EVENT_FLEXIBLE = 0x1,
+	EVENT_PINNED = 0x2,
+	EVENT_ALL = EVENT_FLEXIBLE | EVENT_PINNED,
+};
+
+static void ctx_sched_out(struct perf_event_context *ctx,
+			  struct perf_cpu_context *cpuctx,
+			  enum event_type_t event_type)
 {
 	struct perf_event *event;
 
@@ -1061,13 +1068,18 @@ static void __perf_event_sched_out(struct perf_event_context *ctx,
 	update_context_time(ctx);
 
 	perf_disable();
-	if (ctx->nr_active) {
+	if (!ctx->nr_active)
+		goto out_enable;
+
+	if (event_type & EVENT_PINNED)
 		list_for_each_entry(event, &ctx->pinned_groups, group_entry)
 			group_sched_out(event, cpuctx, ctx);
 
+	if (event_type & EVENT_FLEXIBLE)
 		list_for_each_entry(event, &ctx->flexible_groups, group_entry)
 			group_sched_out(event, cpuctx, ctx);
-	}
+
+ out_enable:
 	perf_enable();
  out:
 	raw_spin_unlock(&ctx->lock);
@@ -1229,15 +1241,13 @@ void perf_event_task_sched_out(struct task_struct *task,
 	rcu_read_unlock();
 
 	if (do_switch) {
-		__perf_event_sched_out(ctx, cpuctx);
+		ctx_sched_out(ctx, cpuctx, EVENT_ALL);
 		cpuctx->task_ctx = NULL;
 	}
 }
 
-/*
- * Called with IRQs disabled
- */
-static void __perf_event_task_sched_out(struct perf_event_context *ctx)
+static void task_ctx_sched_out(struct perf_event_context *ctx,
+			       enum event_type_t event_type)
 {
 	struct perf_cpu_context *cpuctx = &__get_cpu_var(perf_cpu_context);
 
@@ -1247,39 +1257,34 @@ static void __perf_event_task_sched_out(struct perf_event_context *ctx)
 	if (WARN_ON_ONCE(ctx != cpuctx->task_ctx))
 		return;
 
-	__perf_event_sched_out(ctx, cpuctx);
+	ctx_sched_out(ctx, cpuctx, event_type);
 	cpuctx->task_ctx = NULL;
 }
 
 /*
  * Called with IRQs disabled
  */
-static void perf_event_cpu_sched_out(struct perf_cpu_context *cpuctx)
+static void __perf_event_task_sched_out(struct perf_event_context *ctx)
 {
-	__perf_event_sched_out(&cpuctx->ctx, cpuctx);
+	task_ctx_sched_out(ctx, EVENT_ALL);
+}
+
+/*
+ * Called with IRQs disabled
+ */
+static void cpu_ctx_sched_out(struct perf_cpu_context *cpuctx,
+			      enum event_type_t event_type)
+{
+	ctx_sched_out(&cpuctx->ctx, cpuctx, event_type);
 }
 
 static void
-__perf_event_sched_in(struct perf_event_context *ctx,
-			struct perf_cpu_context *cpuctx)
+ctx_pinned_sched_in(struct perf_event_context *ctx,
+		    struct perf_cpu_context *cpuctx,
+		    int cpu)
 {
-	int cpu = smp_processor_id();
 	struct perf_event *event;
-	int can_add_hw = 1;
-
-	raw_spin_lock(&ctx->lock);
-	ctx->is_active = 1;
-	if (likely(!ctx->nr_events))
-		goto out;
-
-	ctx->timestamp = perf_clock();
-
-	perf_disable();
 
-	/*
-	 * First go through the list and put on any pinned groups
-	 * in order to give them the best chance of going on.
-	 */
 	list_for_each_entry(event, &ctx->pinned_groups, group_entry) {
 		if (event->state <= PERF_EVENT_STATE_OFF)
 			continue;
@@ -1298,6 +1303,15 @@ __perf_event_sched_in(struct perf_event_context *ctx,
 			event->state = PERF_EVENT_STATE_ERROR;
 		}
 	}
+}
+
+static void
+ctx_flexible_sched_in(struct perf_event_context *ctx,
+		      struct perf_cpu_context *cpuctx,
+		      int cpu)
+{
+	struct perf_event *event;
+	int can_add_hw = 1;
 
 	list_for_each_entry(event, &ctx->flexible_groups, group_entry) {
 		/* Ignore events in OFF or ERROR state */
@@ -1314,11 +1328,53 @@ __perf_event_sched_in(struct perf_event_context *ctx,
 			if (group_sched_in(event, cpuctx, ctx, cpu))
 				can_add_hw = 0;
 	}
+}
+
+static void
+ctx_sched_in(struct perf_event_context *ctx,
+	     struct perf_cpu_context *cpuctx,
+	     enum event_type_t event_type)
+{
+	int cpu = smp_processor_id();
+
+	raw_spin_lock(&ctx->lock);
+	ctx->is_active = 1;
+	if (likely(!ctx->nr_events))
+		goto out;
+
+	ctx->timestamp = perf_clock();
+
+	perf_disable();
+
+	/*
+	 * First go through the list and put on any pinned groups
+	 * in order to give them the best chance of going on.
+	 */
+	if (event_type & EVENT_PINNED)
+		ctx_pinned_sched_in(ctx, cpuctx, cpu);
+
+	/* Then walk through the lower prio flexible groups */
+	if (event_type & EVENT_FLEXIBLE)
+		ctx_flexible_sched_in(ctx, cpuctx, cpu);
+
 	perf_enable();
  out:
 	raw_spin_unlock(&ctx->lock);
 }
 
+static void task_ctx_sched_in(struct task_struct *task,
+			      enum event_type_t event_type)
+{
+	struct perf_cpu_context *cpuctx = &__get_cpu_var(perf_cpu_context);
+	struct perf_event_context *ctx = task->perf_event_ctxp;
+
+	if (likely(!ctx))
+		return;
+	if (cpuctx->task_ctx == ctx)
+		return;
+	ctx_sched_in(ctx, cpuctx, event_type);
+	cpuctx->task_ctx = ctx;
+}
 /*
  * Called from scheduler to add the events of the current task
  * with interrupts disabled.
@@ -1332,22 +1388,15 @@ __perf_event_sched_in(struct perf_event_context *ctx,
  */
 void perf_event_task_sched_in(struct task_struct *task)
 {
-	struct perf_cpu_context *cpuctx = &__get_cpu_var(perf_cpu_context);
-	struct perf_event_context *ctx = task->perf_event_ctxp;
-
-	if (likely(!ctx))
-		return;
-	if (cpuctx->task_ctx == ctx)
-		return;
-	__perf_event_sched_in(ctx, cpuctx);
-	cpuctx->task_ctx = ctx;
+	task_ctx_sched_in(task, EVENT_ALL);
 }
 
-static void perf_event_cpu_sched_in(struct perf_cpu_context *cpuctx)
+static void cpu_ctx_sched_in(struct perf_cpu_context *cpuctx,
+			     enum event_type_t event_type)
 {
 	struct perf_event_context *ctx = &cpuctx->ctx;
 
-	__perf_event_sched_in(ctx, cpuctx);
+	ctx_sched_in(ctx, cpuctx, event_type);
 }
 
 #define MAX_INTERRUPTS (~0ULL)
@@ -1476,17 +1525,17 @@ void perf_event_task_tick(struct task_struct *curr)
 	if (ctx)
 		perf_ctx_adjust_freq(ctx);
 
-	perf_event_cpu_sched_out(cpuctx);
+	cpu_ctx_sched_out(cpuctx, EVENT_ALL);
 	if (ctx)
-		__perf_event_task_sched_out(ctx);
+		task_ctx_sched_out(ctx, EVENT_ALL);
 
 	rotate_ctx(&cpuctx->ctx);
 	if (ctx)
 		rotate_ctx(ctx);
 
-	perf_event_cpu_sched_in(cpuctx);
+	cpu_ctx_sched_in(cpuctx, EVENT_ALL);
 	if (ctx)
-		perf_event_task_sched_in(curr);
+		task_ctx_sched_in(curr, EVENT_ALL);
 }
 
 static int event_enable_on_exec(struct perf_event *event,
-- 
1.6.2.3


* [PATCH 3/4] perf: Don't schedule out/in pinned events on task tick
From: Frederic Weisbecker @ 2010-01-17 12:22 UTC
  To: Ingo Molnar
  Cc: LKML, Frederic Weisbecker, Peter Zijlstra, Paul Mackerras,
	Arnaldo Carvalho de Melo, Ingo Molnar

We don't need to schedule pinned events out and back in on each task
tick, now that pinned and flexible groups can be scheduled separately.
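
Pinned events are meant to stay on the PMU permanently, so the tick
rotation only ever changes which flexible groups are scheduled. The
resulting perf_event_task_tick() flow looks like this (a sketch
matching the hunk below, NULL checks on ctx omitted):

	cpu_ctx_sched_out(cpuctx, EVENT_FLEXIBLE);
	task_ctx_sched_out(ctx, EVENT_FLEXIBLE);

	rotate_ctx(&cpuctx->ctx);
	rotate_ctx(ctx);

	cpu_ctx_sched_in(cpuctx, EVENT_FLEXIBLE);
	task_ctx_sched_in(curr, EVENT_FLEXIBLE);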

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Arnaldo Carvalho de Melo <acme@infradead.org>
---
 kernel/perf_event.c |    8 ++++----
 1 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/kernel/perf_event.c b/kernel/perf_event.c
index bfc4ee0..a90ae69 100644
--- a/kernel/perf_event.c
+++ b/kernel/perf_event.c
@@ -1525,17 +1525,17 @@ void perf_event_task_tick(struct task_struct *curr)
 	if (ctx)
 		perf_ctx_adjust_freq(ctx);
 
-	cpu_ctx_sched_out(cpuctx, EVENT_ALL);
+	cpu_ctx_sched_out(cpuctx, EVENT_FLEXIBLE);
 	if (ctx)
-		task_ctx_sched_out(ctx, EVENT_ALL);
+		task_ctx_sched_out(ctx, EVENT_FLEXIBLE);
 
 	rotate_ctx(&cpuctx->ctx);
 	if (ctx)
 		rotate_ctx(ctx);
 
-	cpu_ctx_sched_in(cpuctx, EVENT_ALL);
+	cpu_ctx_sched_in(cpuctx, EVENT_FLEXIBLE);
 	if (ctx)
-		task_ctx_sched_in(curr, EVENT_ALL);
+		task_ctx_sched_in(curr, EVENT_FLEXIBLE);
 }
 
 static int event_enable_on_exec(struct perf_event *event,
-- 
1.6.2.3


* [PATCH 4/4] perf: Better order flexible and pinned scheduling
From: Frederic Weisbecker @ 2010-01-17 12:22 UTC
  To: Ingo Molnar
  Cc: LKML, Frederic Weisbecker, Peter Zijlstra, Paul Mackerras,
	Arnaldo Carvalho de Melo, Ingo Molnar

When a task gets scheduled in, we don't touch the cpu-bound events,
so the priority order becomes:

	cpu pinned, cpu flexible, task pinned, task flexible.

So schedule out the cpu flexible groups when a new task context comes
in, and schedule the groups back in the correct order:

	task pinned, cpu flexible, task flexible.

Cpu pinned groups don't need to be touched at this time.
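
Concretely, the sched in sequence becomes (a sketch of the new
perf_event_task_sched_in() from the diff below):

	cpu_ctx_sched_out(cpuctx, EVENT_FLEXIBLE);	/* make room for task pinned */

	ctx_sched_in(ctx, cpuctx, EVENT_PINNED);	/* task pinned */
	cpu_ctx_sched_in(cpuctx, EVENT_FLEXIBLE);	/* cpu flexible back in */
	ctx_sched_in(ctx, cpuctx, EVENT_FLEXIBLE);	/* task flexible last */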

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Arnaldo Carvalho de Melo <acme@infradead.org>
---
 kernel/perf_event.c |   34 +++++++++++++++++++++++++++-------
 1 files changed, 27 insertions(+), 7 deletions(-)

diff --git a/kernel/perf_event.c b/kernel/perf_event.c
index a90ae69..edc46b9 100644
--- a/kernel/perf_event.c
+++ b/kernel/perf_event.c
@@ -1362,6 +1362,14 @@ ctx_sched_in(struct perf_event_context *ctx,
 	raw_spin_unlock(&ctx->lock);
 }
 
+static void cpu_ctx_sched_in(struct perf_cpu_context *cpuctx,
+			     enum event_type_t event_type)
+{
+	struct perf_event_context *ctx = &cpuctx->ctx;
+
+	ctx_sched_in(ctx, cpuctx, event_type);
+}
+
 static void task_ctx_sched_in(struct task_struct *task,
 			      enum event_type_t event_type)
 {
@@ -1388,15 +1396,27 @@ static void task_ctx_sched_in(struct task_struct *task,
  */
 void perf_event_task_sched_in(struct task_struct *task)
 {
-	task_ctx_sched_in(task, EVENT_ALL);
-}
+	struct perf_cpu_context *cpuctx = &__get_cpu_var(perf_cpu_context);
+	struct perf_event_context *ctx = task->perf_event_ctxp;
 
-static void cpu_ctx_sched_in(struct perf_cpu_context *cpuctx,
-			     enum event_type_t event_type)
-{
-	struct perf_event_context *ctx = &cpuctx->ctx;
+	if (likely(!ctx))
+		return;
 
-	ctx_sched_in(ctx, cpuctx, event_type);
+	if (cpuctx->task_ctx == ctx)
+		return;
+
+	/*
+	 * We want to keep the following priority order:
+	 * cpu pinned (that don't need to move), task pinned,
+	 * cpu flexible, task flexible.
+	 */
+	cpu_ctx_sched_out(cpuctx, EVENT_FLEXIBLE);
+
+	ctx_sched_in(ctx, cpuctx, EVENT_PINNED);
+	cpu_ctx_sched_in(cpuctx, EVENT_FLEXIBLE);
+	ctx_sched_in(ctx, cpuctx, EVENT_FLEXIBLE);
+
+	cpuctx->task_ctx = ctx;
 }
 
 #define MAX_INTERRUPTS (~0ULL)
-- 
1.6.2.3


* Re: [GIT PULL] perf scheduling updates: flexible/pinned groups reordering, optimizations
From: Peter Zijlstra @ 2010-01-18  7:38 UTC
  To: Frederic Weisbecker
  Cc: Ingo Molnar, LKML, Paul Mackerras, Arnaldo Carvalho de Melo

On Sun, 2010-01-17 at 13:22 +0100, Frederic Weisbecker wrote:
> Ingo,
> 
> Please pull the perf/scheduling branch that can be found at:
> 
> git://git.kernel.org/pub/scm/linux/kernel/git/frederic/random-tracing.git
> 	perf/scheduling
> 
> It brings more progress wrt the scheduling order between pinned
> and flexible events.
> 
> There is still some work to do; for example, the context spinlocks
> could be acquired once instead of multiple times in some paths.
> 
> But this set is already much more optimized than what we had before,
> especially thanks to the third patch.

Looks very nice, thanks Frederic!

Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>


* Re: [GIT PULL] perf scheduling updates: flexible/pinned groups reordering, optimizations
From: Ingo Molnar @ 2010-01-18  7:58 UTC
  To: Peter Zijlstra
  Cc: Frederic Weisbecker, LKML, Paul Mackerras, Arnaldo Carvalho de Melo


* Peter Zijlstra <peterz@infradead.org> wrote:

> On Sun, 2010-01-17 at 13:22 +0100, Frederic Weisbecker wrote:
> > Ingo,
> > 
> > Please pull the perf/scheduling branch that can be found at:
> > 
> > git://git.kernel.org/pub/scm/linux/kernel/git/frederic/random-tracing.git
> > 	perf/scheduling
> > 
> > It brings more progress wrt the scheduling order between pinned
> > and flexible events.
> > 
> > There is still some work to do; for example, the context spinlocks
> > could be acquired once instead of multiple times in some paths.
> > 
> > But this set is already much more optimized than what we had before,
> > especially thanks to the third patch.
> 
> Looks very nice, thanks Frederic!
> 
> Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>

Pulled, thanks guys!

	Ingo

* Re: [GIT PULL] perf scheduling updates: flexible/pinned groups reordering, optimizations
From: Frederic Weisbecker @ 2010-01-18 13:52 UTC
  To: Ingo Molnar
  Cc: Peter Zijlstra, LKML, Paul Mackerras, Arnaldo Carvalho de Melo

On Mon, Jan 18, 2010 at 08:58:26AM +0100, Ingo Molnar wrote:
> 
> * Peter Zijlstra <peterz@infradead.org> wrote:
> 
> > On Sun, 2010-01-17 at 13:22 +0100, Frederic Weisbecker wrote:
> > > Ingo,
> > > 
> > > Please pull the perf/scheduling branch that can be found at:
> > > 
> > > git://git.kernel.org/pub/scm/linux/kernel/git/frederic/random-tracing.git
> > > 	perf/scheduling
> > > 
> > > It brings more progress wrt the scheduling order between pinned
> > > and flexible events.
> > > 
> > > There is still some work to do; for example, the context spinlocks
> > > could be acquired once instead of multiple times in some paths.
> > > 
> > > But this set is already much more optimized than what we had before,
> > > especially thanks to the third patch.
> > 
> > Looks very nice, thanks Frederic!
> > 
> > Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
> 
> Pulled, thanks guys!


In case you haven't pushed it out yet, I've pushed a
perf/scheduling-2 branch that is exactly the same as
perf/scheduling but with Peter's ack added to these
4 patches.

Thanks.

