* [PATCH] perf_counter: cleanup for __perf_event_sched_in()
@ 2009-09-22  8:47 Xiao Guangrong
  2009-09-22  9:20 ` Paul Mackerras
                   ` (2 more replies)
  0 siblings, 3 replies; 22+ messages in thread
From: Xiao Guangrong @ 2009-09-22  8:47 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: Peter Zijlstra, Paul Mackerras, LKML

It must be a group leader if event->attr.pinned is "1"

Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
---
 kernel/perf_event.c |   11 +++++------
 1 files changed, 5 insertions(+), 6 deletions(-)

diff --git a/kernel/perf_event.c b/kernel/perf_event.c
index 76ac4db..fdd9c94 100644
--- a/kernel/perf_event.c
+++ b/kernel/perf_event.c
@@ -1258,12 +1258,11 @@ __perf_event_sched_in(struct perf_event_context *ctx,
 		if (event->cpu != -1 && event->cpu != cpu)
 			continue;
 
-		if (event != event->group_leader)
-			event_sched_in(event, cpuctx, ctx, cpu);
-		else {
-			if (group_can_go_on(event, cpuctx, 1))
-				group_sched_in(event, cpuctx, ctx, cpu);
-		}
+		/* Only a group leader can be pinned */
+		BUG_ON(event != event->group_leader);
+
+		if (group_can_go_on(event, cpuctx, 1))
+			group_sched_in(event, cpuctx, ctx, cpu);
 
 		/*
 		 * If this pinned group hasn't been scheduled,
-- 
1.6.1.2



* Re: [PATCH] perf_counter: cleanup for __perf_event_sched_in()
  2009-09-22  8:47 [PATCH] perf_counter: cleanup for __perf_event_sched_in() Xiao Guangrong
@ 2009-09-22  9:20 ` Paul Mackerras
  2009-09-22  9:27   ` Xiao Guangrong
  2009-09-22  9:39 ` [PATCH] " Paul Mackerras
  2009-09-22 11:12 ` [PATCH] perf_counter: cleanup for __perf_event_sched_in() Peter Zijlstra
  2 siblings, 1 reply; 22+ messages in thread
From: Paul Mackerras @ 2009-09-22  9:20 UTC (permalink / raw)
  To: Xiao Guangrong; +Cc: Ingo Molnar, Peter Zijlstra, LKML

Xiao Guangrong writes:

> It must be a group leader if event->attr.pinned is "1"

True, but you shouldn't use BUG_ON unless there is no sensible way for
the kernel to continue executing, and that's not the case here.  Make
it WARN_ON, or better still, WARN_ON_ONCE.

Paul.
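
For context, a toy userspace model of the distinction Paul draws above;
it only illustrates the semantics, and the real macros in the kernel's
bug.h headers are more involved:

#include <stdio.h>
#include <stdlib.h>

/* BUG_ON(): no sensible way to continue, so kill the caller. */
#define BUG_ON(cond) do { \
        if (cond) { \
                fprintf(stderr, "BUG at %s:%d\n", __FILE__, __LINE__); \
                abort(); \
        } \
} while (0)

/* WARN_ON_ONCE(): complain once per call site, then keep running. */
#define WARN_ON_ONCE(cond) ({ \
        static int __warned; \
        int __ret = !!(cond); \
        if (__ret && !__warned) { \
                __warned = 1; \
                fprintf(stderr, "WARNING at %s:%d\n", __FILE__, __LINE__); \
        } \
        __ret; /* usable as: if (WARN_ON_ONCE(x)) ... */ \
})

int main(void)
{
        for (int i = 0; i < 3; i++)
                if (WARN_ON_ONCE(i != 0))       /* warns once, keeps going */
                        continue;

        BUG_ON(1);                              /* terminates the process */
        return 0;
}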


* Re: [PATCH] perf_counter: cleanup for __perf_event_sched_in()
  2009-09-22  9:20 ` Paul Mackerras
@ 2009-09-22  9:27   ` Xiao Guangrong
  2009-09-22  9:32     ` [PATCH v2] " Xiao Guangrong
  0 siblings, 1 reply; 22+ messages in thread
From: Xiao Guangrong @ 2009-09-22  9:27 UTC (permalink / raw)
  To: Paul Mackerras; +Cc: Ingo Molnar, Peter Zijlstra, LKML



Paul Mackerras wrote:
> Xiao Guangrong writes:
> 
>> It must be a group leader if event->attr.pinned is "1"
> 
> True, but you shouldn't use BUG_ON unless there is no sensible way for
> the kernel to continue executing, and that's not the case here.  Make
> it WARN_ON, or better still, WARN_ON_ONCE.
> 

Yeah, thanks for pointing that out, I'll fix it soon.

Thanks,
Xiao



* [PATCH v2] perf_counter: cleanup for __perf_event_sched_in()
  2009-09-22  9:27   ` Xiao Guangrong
@ 2009-09-22  9:32     ` Xiao Guangrong
  0 siblings, 0 replies; 22+ messages in thread
From: Xiao Guangrong @ 2009-09-22  9:32 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: Paul Mackerras, Peter Zijlstra, LKML

It must be a group leader if event->attr.pinned is "1"

Changelog:
Use WARN_ON_ONCE() instead of BUG_ON(), as Paul Mackerras suggested

Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
---
 kernel/perf_event.c |   11 +++++------
 1 files changed, 5 insertions(+), 6 deletions(-)

diff --git a/kernel/perf_event.c b/kernel/perf_event.c
index 76ac4db..dc3221b 100644
--- a/kernel/perf_event.c
+++ b/kernel/perf_event.c
@@ -1258,12 +1258,11 @@ __perf_event_sched_in(struct perf_event_context *ctx,
 		if (event->cpu != -1 && event->cpu != cpu)
 			continue;
 
-		if (event != event->group_leader)
-			event_sched_in(event, cpuctx, ctx, cpu);
-		else {
-			if (group_can_go_on(event, cpuctx, 1))
-				group_sched_in(event, cpuctx, ctx, cpu);
-		}
+		/* Only a group leader can be pinned */
+		WARN_ON_ONCE(event != event->group_leader);
+
+		if (group_can_go_on(event, cpuctx, 1))
+			group_sched_in(event, cpuctx, ctx, cpu);
 
 		/*
 		 * If this pinned group hasn't been scheduled,
-- 
1.6.1.2



* Re: [PATCH] perf_counter: cleanup for __perf_event_sched_in()
  2009-09-22  8:47 [PATCH] perf_counter: cleanup for __perf_event_sched_in() Xiao Guangrong
  2009-09-22  9:20 ` Paul Mackerras
@ 2009-09-22  9:39 ` Paul Mackerras
  2009-09-22 11:17   ` Peter Zijlstra
  2009-09-23  2:45   ` Xiao Guangrong
  2009-09-22 11:12 ` [PATCH] perf_counter: cleanup for __perf_event_sched_in() Peter Zijlstra
  2 siblings, 2 replies; 22+ messages in thread
From: Paul Mackerras @ 2009-09-22  9:39 UTC (permalink / raw)
  To: Xiao Guangrong; +Cc: Ingo Molnar, Peter Zijlstra, LKML

Xiao Guangrong writes:

> It must be a group leader if event->attr.pinned is "1"

Actually, looking at this more closely, it has to be a group leader
anyway since it's at the top level of ctx->group_list.  In fact I see
four places where we do:

		list_for_each_entry(event, &ctx->group_list, group_entry) {
			if (event == event->group_leader)
				...

or the equivalent, three of which appear to have been introduced by
afedadf2 ("perf_counter: Optimize sched in/out of counters") back in
May by Peter Z.

As far as I can see the if () is superfluous in each case (a singleton
event will be a group of 1 and will have its group_leader pointing to
itself).  Peter, do you agree or have I missed something?

Paul.
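
The invariant Paul describes can be written down as a small helper; this
is a sketch only, against the group_leader and sibling_list fields
already used in this thread, and the helper name is made up:

/*
 * Every entry on ctx->group_list leads its own group; a "singleton" is
 * just a leader whose sibling list is empty.
 */
static inline int event_is_singleton_group(struct perf_event *event)
{
        /* top-level entries of ctx->group_list are always leaders */
        WARN_ON_ONCE(event != event->group_leader);

        /* a group of one simply has no siblings */
        return list_empty(&event->sibling_list);
}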


* Re: [PATCH] perf_counter: cleanup for __perf_event_sched_in()
  2009-09-22  8:47 [PATCH] perf_counter: cleanup for __perf_event_sched_in() Xiao Guangrong
  2009-09-22  9:20 ` Paul Mackerras
  2009-09-22  9:39 ` [PATCH] " Paul Mackerras
@ 2009-09-22 11:12 ` Peter Zijlstra
  2 siblings, 0 replies; 22+ messages in thread
From: Peter Zijlstra @ 2009-09-22 11:12 UTC (permalink / raw)
  To: Xiao Guangrong; +Cc: Ingo Molnar, Paul Mackerras, LKML

On Tue, 2009-09-22 at 16:47 +0800, Xiao Guangrong wrote:
> It must be a group leader if event->attr.pinned is "1"

Since we already enforce that on counter creation, this seems OK.

Thanks
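
One plausible shape of that creation-time rule, as a hypothetical sketch;
the helper name and the exact check in the syscall path are assumptions,
not a quote of the real code:

static int validate_pinned_event(struct perf_event_attr *attr,
                                 struct perf_event *group_leader)
{
        /*
         * A non-NULL group_leader means the new event joins an existing
         * group as a sibling; rejecting a pinned sibling here is what
         * guarantees that only leaders reach the pinned loop in
         * __perf_event_sched_in().
         */
        if (attr->pinned && group_leader)
                return -EINVAL;

        return 0;
}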

> Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
> ---
>  kernel/perf_event.c |   11 +++++------
>  1 files changed, 5 insertions(+), 6 deletions(-)
> 
> diff --git a/kernel/perf_event.c b/kernel/perf_event.c
> index 76ac4db..fdd9c94 100644
> --- a/kernel/perf_event.c
> +++ b/kernel/perf_event.c
> @@ -1258,12 +1258,11 @@ __perf_event_sched_in(struct perf_event_context *ctx,
>  		if (event->cpu != -1 && event->cpu != cpu)
>  			continue;
>  
> -		if (event != event->group_leader)
> -			event_sched_in(event, cpuctx, ctx, cpu);
> -		else {
> -			if (group_can_go_on(event, cpuctx, 1))
> -				group_sched_in(event, cpuctx, ctx, cpu);
> -		}
> +		/* Only a group leader can be pinned */
> +		BUG_ON(event != event->group_leader);
> +
> +		if (group_can_go_on(event, cpuctx, 1))
> +			group_sched_in(event, cpuctx, ctx, cpu);
>  
>  		/*
>  		 * If this pinned group hasn't been scheduled,


* Re: [PATCH] perf_counter: cleanup for __perf_event_sched_in()
  2009-09-22  9:39 ` [PATCH] " Paul Mackerras
@ 2009-09-22 11:17   ` Peter Zijlstra
  2009-09-23  2:45   ` Xiao Guangrong
  1 sibling, 0 replies; 22+ messages in thread
From: Peter Zijlstra @ 2009-09-22 11:17 UTC (permalink / raw)
  To: Paul Mackerras; +Cc: Xiao Guangrong, Ingo Molnar, LKML

On Tue, 2009-09-22 at 19:39 +1000, Paul Mackerras wrote:
> Xiao Guangrong writes:
> 
> > It must be a group leader if event->attr.pinned is "1"
> 
> Actually, looking at this more closely, it has to be a group leader
> anyway since it's at the top level of ctx->group_list.  In fact I see
> four places where we do:
> 
> 		list_for_each_entry(event, &ctx->group_list, group_entry) {
> 			if (event == event->group_leader)
> 				...
> 
> or the equivalent, three of which appear to have been introduced by
> afedadf2 ("perf_counter: Optimize sched in/out of counters") back in
> May by Peter Z.
> 
> As far as I can see the if () is superfluous in each case (a singleton
> event will be a group of 1 and will have its group_leader pointing to
> itself).  Peter, do you agree or have I missed something?

/me kicks those neurons back to work..

Ah, yes, I think you're right, the second hunk of afedadf2 is a
pessimisation due to the extra branch.




* Re: [PATCH] perf_counter: cleanup for __perf_event_sched_in()
  2009-09-22  9:39 ` [PATCH] " Paul Mackerras
  2009-09-22 11:17   ` Peter Zijlstra
@ 2009-09-23  2:45   ` Xiao Guangrong
  2009-09-23  3:32     ` Paul Mackerras
  1 sibling, 1 reply; 22+ messages in thread
From: Xiao Guangrong @ 2009-09-23  2:45 UTC (permalink / raw)
  To: Paul Mackerras; +Cc: Ingo Molnar, Peter Zijlstra, LKML



Paul Mackerras wrote:
> Xiao Guangrong writes:
> 
>> It must be a group leader if event->attr.pinned is "1"
> 
> Actually, looking at this more closely, it has to be a group leader
> anyway since it's at the top level of ctx->group_list.  In fact I see
> four places where we do:
> 
> 		list_for_each_entry(event, &ctx->group_list, group_entry) {
> 			if (event == event->group_leader)
> 				...
> 
> or the equivalent, three of which appear to have been introduced by
> afedadf2 ("perf_counter: Optimize sched in/out of counters") back in
> May by Peter Z.
> 

I can only find three places, in __perf_event_sched_in/out; could you tell me
where the fourth place is?

I also noticed that all group leaders are at the top level of ctx->group_list,
so unless I have missed something, the perf_event_init_task() function can be
optimized like this:

int perf_event_init_task(struct task_struct *child)
{
	......
	/* We can only look at parent_ctx->group_list to get group leader */
	list_for_each_entry_rcu(event, &parent_ctx->event_list, event_entry) {
		if (event != event->group_leader)
			continue;
	......
	}
	......
}

I'll fix those if you don't mind :-)

Thanks,
Xiao


* Re: [PATCH] perf_counter: cleanup for __perf_event_sched_in()
  2009-09-23  2:45   ` Xiao Guangrong
@ 2009-09-23  3:32     ` Paul Mackerras
  2009-09-23  8:10       ` [PATCH 1/2] perf_counter: cleanup for __perf_event_sched_*() Xiao Guangrong
  0 siblings, 1 reply; 22+ messages in thread
From: Paul Mackerras @ 2009-09-23  3:32 UTC (permalink / raw)
  To: Xiao Guangrong; +Cc: Ingo Molnar, Peter Zijlstra, LKML

Xiao Guangrong writes:

> I can only find three places, in __perf_event_sched_in/out; could you tell me
> where the fourth place is?

My mistake, it is just those three.  I saw the list_for_each_entry_rcu
followed by if (event != event->group_leader) in perf_event_init_task
and missed the fact that it is iterating parent_ctx->event_list rather
than parent_ctx->group_list.  But as you point out:

> I also noticed that all group leaders are at the top level of ctx->group_list,
> so unless I have missed something, the perf_event_init_task() function can be
> optimized like this:
> 
> int perf_event_init_task(struct task_struct *child)
> {
> 	......
> 	/* We can only look at parent_ctx->group_list to get group leader */
> 	list_for_each_entry_rcu(event, &parent_ctx->event_list, event_entry) {
> 		if (event != event->group_leader)
> 			continue;
> 	......
> 	}
> 	......
> }

we would in fact be better off using group_list rather than event_list
anyway.  That should be safe since we hold parent_ctx->mutex.

> I'll fix those if you don't mind :-)

Please do. :)

Paul.


* [PATCH 1/2] perf_counter: cleanup for __perf_event_sched_*()
  2009-09-23  3:32     ` Paul Mackerras
@ 2009-09-23  8:10       ` Xiao Guangrong
  2009-09-23  8:13         ` [PATCH 2/2] perf_counter: optimize for perf_event_init_task() Xiao Guangrong
  2009-09-23  8:40         ` [PATCH 1/2] perf_counter: cleanup for __perf_event_sched_*() Peter Zijlstra
  0 siblings, 2 replies; 22+ messages in thread
From: Xiao Guangrong @ 2009-09-23  8:10 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: Paul Mackerras, Peter Zijlstra, LKML

Paul Mackerras says:

"Actually, looking at this more closely, it has to be a group leader
anyway since it's at the top level of ctx->group_list.  In fact I see
four places where we do:

  list_for_each_entry(event, &ctx->group_list, group_entry) {
	if (event == event->group_leader)
		...

or the equivalent, three of which appear to have been introduced by
afedadf2 ("perf_counter: Optimize sched in/out of counters") back in
May by Peter Z.

As far as I can see the if () is superfluous in each case (a singleton
event will be a group of 1 and will have its group_leader pointing to
itself)."

[Can be found at http://marc.info/?l=linux-kernel&m=125361238901442&w=2]

So, this patch fixes it.

Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
---
 kernel/perf_event.c |   41 +++++++++++++++++++++++------------------
 1 files changed, 23 insertions(+), 18 deletions(-)

diff --git a/kernel/perf_event.c b/kernel/perf_event.c
index 76ac4db..9ca975a 100644
--- a/kernel/perf_event.c
+++ b/kernel/perf_event.c
@@ -1032,10 +1032,13 @@ void __perf_event_sched_out(struct perf_event_context *ctx,
 	perf_disable();
 	if (ctx->nr_active) {
 		list_for_each_entry(event, &ctx->group_list, group_entry) {
-			if (event != event->group_leader)
-				event_sched_out(event, cpuctx, ctx);
-			else
-				group_sched_out(event, cpuctx, ctx);
+
+			/*
+			 * It has to be a group leader since it's at the top
+			 * level of ctx->group_list
+			 */
+			WARN_ON_ONCE(event != event->group_leader);
+			group_sched_out(event, cpuctx, ctx);
 		}
 	}
 	perf_enable();
@@ -1258,12 +1261,14 @@ __perf_event_sched_in(struct perf_event_context *ctx,
 		if (event->cpu != -1 && event->cpu != cpu)
 			continue;
 
-		if (event != event->group_leader)
-			event_sched_in(event, cpuctx, ctx, cpu);
-		else {
-			if (group_can_go_on(event, cpuctx, 1))
-				group_sched_in(event, cpuctx, ctx, cpu);
-		}
+		/*
+		 * It has to be a group leader since it's at the top
+		 * level of ctx->group_list
+		 */
+		WARN_ON_ONCE(event != event->group_leader);
+
+		if (group_can_go_on(event, cpuctx, 1))
+			group_sched_in(event, cpuctx, ctx, cpu);
 
 		/*
 		 * If this pinned group hasn't been scheduled,
@@ -1291,15 +1296,15 @@ __perf_event_sched_in(struct perf_event_context *ctx,
 		if (event->cpu != -1 && event->cpu != cpu)
 			continue;
 
-		if (event != event->group_leader) {
-			if (event_sched_in(event, cpuctx, ctx, cpu))
+		/*
+		 * It has to be a group leader since it's at the top
+		 * level of ctx->group_list
+		 */
+		WARN_ON_ONCE(event != event->group_leader);
+
+		if (group_can_go_on(event, cpuctx, can_add_hw))
+			if (group_sched_in(event, cpuctx, ctx, cpu))
 				can_add_hw = 0;
-		} else {
-			if (group_can_go_on(event, cpuctx, can_add_hw)) {
-				if (group_sched_in(event, cpuctx, ctx, cpu))
-					can_add_hw = 0;
-			}
-		}
 	}
 	perf_enable();
  out:
-- 
1.6.1.2




* [PATCH 2/2] perf_counter: optimize for perf_event_init_task()
  2009-09-23  8:10       ` [PATCH 1/2] perf_counter: cleanup for __perf_event_sched_*() Xiao Guangrong
@ 2009-09-23  8:13         ` Xiao Guangrong
  2009-09-23  8:43           ` Peter Zijlstra
  2009-09-23  8:40         ` [PATCH 1/2] perf_counter: cleanup for __perf_event_sched_*() Peter Zijlstra
  1 sibling, 1 reply; 22+ messages in thread
From: Xiao Guangrong @ 2009-09-23  8:13 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: Paul Mackerras, Peter Zijlstra, LKML

We can traverse ctx->group_list to get all group leaders; this should be safe
since we hold ctx->mutex

Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
---
 kernel/perf_event.c |    5 ++---
 1 files changed, 2 insertions(+), 3 deletions(-)

diff --git a/kernel/perf_event.c b/kernel/perf_event.c
index 9ca975a..4e6e822 100644
--- a/kernel/perf_event.c
+++ b/kernel/perf_event.c
@@ -4786,9 +4786,8 @@ int perf_event_init_task(struct task_struct *child)
 	 * We dont have to disable NMIs - we are only looking at
 	 * the list, not manipulating it:
 	 */
-	list_for_each_entry_rcu(event, &parent_ctx->event_list, event_entry) {
-		if (event != event->group_leader)
-			continue;
+	list_for_each_entry(event, &parent_ctx->group_list, group_entry) {
+		WARN_ON_ONCE(event != event->group_leader);
 
 		if (!event->attr.inherit) {
 			inherited_all = 0;
-- 
1.6.1.2




* Re: [PATCH 1/2] perf_counter: cleanup for __perf_event_sched_*()
  2009-09-23  8:10       ` [PATCH 1/2] perf_counter: cleanup for __perf_event_sched_*() Xiao Guangrong
  2009-09-23  8:13         ` [PATCH 2/2] perf_counter: optimize for perf_event_init_task() Xiao Guangrong
@ 2009-09-23  8:40         ` Peter Zijlstra
  2009-09-25  1:22           ` Xiao Guangrong
  2009-09-25  5:51           ` [PATCH 1/2 v2] perf_counter: fix " Xiao Guangrong
  1 sibling, 2 replies; 22+ messages in thread
From: Peter Zijlstra @ 2009-09-23  8:40 UTC (permalink / raw)
  To: Xiao Guangrong; +Cc: Ingo Molnar, Paul Mackerras, LKML

On Wed, 2009-09-23 at 16:10 +0800, Xiao Guangrong wrote:
> Paul Mackerras says:
> 
> "Actually, looking at this more closely, it has to be a group leader
> anyway since it's at the top level of ctx->group_list.  In fact I see
> four places where we do:
> 
>   list_for_each_entry(event, &ctx->group_list, group_entry) {
> 	if (event == event->group_leader)
> 		...
> 
> or the equivalent, three of which appear to have been introduced by
> afedadf2 ("perf_counter: Optimize sched in/out of counters") back in
> May by Peter Z.
> 
> As far as I can see the if () is superfluous in each case (a singleton
> event will be a group of 1 and will have its group_leader pointing to
> itself)."
> 
> [Can be found at http://marc.info/?l=linux-kernel&m=125361238901442&w=2]
> 
> So, this patch fixes it.

Hrm.. I think it's not just a cleanup, but an actual bugfix.

The intent was to call event_sched_{in,out}() for single counter groups
because that's cheaper than group_sched_{in,out}(), however..

 - as you noticed, I got the condition wrong, it should have read:

     list_empty(&event->sibling_list)

 - it failed to call group_can_go_on() which deals with ->exclusive.

 - it also doesn't call hw_perf_group_sched_in() which might break
   power.

Also, I'm not sure I like the comments and WARN_ON bits, the changelog
should be sufficient.
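
For reference, the afedadf2-style fast path with those three points
marked inline; an annotated sketch of the code being removed in the
patch quoted below, not a new proposal:

list_for_each_entry(event, &ctx->group_list, group_entry) {
        if (event != event->group_leader)
                /*
                 * (1) Wrong test: every entry on ctx->group_list is a
                 *     leader, so this branch never runs; the intended
                 *     condition was list_empty(&event->sibling_list).
                 * (2) Even as intended, this path skips
                 *     group_can_go_on(), bypassing ->exclusive.
                 * (3) It also never reaches hw_perf_group_sched_in(),
                 *     which might break power.
                 */
                event_sched_in(event, cpuctx, ctx, cpu);
        else {
                if (group_can_go_on(event, cpuctx, 1))
                        group_sched_in(event, cpuctx, ctx, cpu);
        }
}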

> Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
> ---
>  kernel/perf_event.c |   41 +++++++++++++++++++++++------------------
>  1 files changed, 23 insertions(+), 18 deletions(-)
> 
> diff --git a/kernel/perf_event.c b/kernel/perf_event.c
> index 76ac4db..9ca975a 100644
> --- a/kernel/perf_event.c
> +++ b/kernel/perf_event.c
> @@ -1032,10 +1032,13 @@ void __perf_event_sched_out(struct perf_event_context *ctx,
>  	perf_disable();
>  	if (ctx->nr_active) {
>  		list_for_each_entry(event, &ctx->group_list, group_entry) {
> -			if (event != event->group_leader)
> -				event_sched_out(event, cpuctx, ctx);
> -			else
> -				group_sched_out(event, cpuctx, ctx);
> +
> +			/*
> +			 * It has to be a group leader since it's at the top
> +			 * level of ctx->group_list
> +			 */
> +			WARN_ON_ONCE(event != event->group_leader);
> +			group_sched_out(event, cpuctx, ctx);
>  		}
>  	}
>  	perf_enable();
> @@ -1258,12 +1261,14 @@ __perf_event_sched_in(struct perf_event_context *ctx,
>  		if (event->cpu != -1 && event->cpu != cpu)
>  			continue;
>  
> -		if (event != event->group_leader)
> -			event_sched_in(event, cpuctx, ctx, cpu);
> -		else {
> -			if (group_can_go_on(event, cpuctx, 1))
> -				group_sched_in(event, cpuctx, ctx, cpu);
> -		}
> +		/*
> +		 * It has to be a group leader since it's at the top
> +		 * level of ctx->group_list
> +		 */
> +		WARN_ON_ONCE(event != event->group_leader);
> +
> +		if (group_can_go_on(event, cpuctx, 1))
> +			group_sched_in(event, cpuctx, ctx, cpu);
>  
>  		/*
>  		 * If this pinned group hasn't been scheduled,
> @@ -1291,15 +1296,15 @@ __perf_event_sched_in(struct perf_event_context *ctx,
>  		if (event->cpu != -1 && event->cpu != cpu)
>  			continue;
>  
> -		if (event != event->group_leader) {
> -			if (event_sched_in(event, cpuctx, ctx, cpu))
> +		/*
> +		 * It has to be a group leader since it's at the top
> +		 * level of ctx->group_list
> +		 */
> +		WARN_ON_ONCE(event != event->group_leader);
> +
> +		if (group_can_go_on(event, cpuctx, can_add_hw))
> +			if (group_sched_in(event, cpuctx, ctx, cpu))
>  				can_add_hw = 0;
> -		} else {
> -			if (group_can_go_on(event, cpuctx, can_add_hw)) {
> -				if (group_sched_in(event, cpuctx, ctx, cpu))
> -					can_add_hw = 0;
> -			}
> -		}
>  	}
>  	perf_enable();
>   out:


* Re: [PATCH 2/2] perf_counter: optimize for perf_event_init_task()
  2009-09-23  8:13         ` [PATCH 2/2] perf_counter: optimize for perf_event_init_task() Xiao Guangrong
@ 2009-09-23  8:43           ` Peter Zijlstra
  2009-09-25  1:23             ` Xiao Guangrong
  0 siblings, 1 reply; 22+ messages in thread
From: Peter Zijlstra @ 2009-09-23  8:43 UTC (permalink / raw)
  To: Xiao Guangrong; +Cc: Ingo Molnar, Paul Mackerras, LKML

On Wed, 2009-09-23 at 16:13 +0800, Xiao Guangrong wrote:
> We can traverse ctx->group_list to get all group leaders; this should be safe
> since we hold ctx->mutex

I don't think we need that WARN_ON_ONCE there.

> Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
> ---
>  kernel/perf_event.c |    5 ++---
>  1 files changed, 2 insertions(+), 3 deletions(-)
> 
> diff --git a/kernel/perf_event.c b/kernel/perf_event.c
> index 9ca975a..4e6e822 100644
> --- a/kernel/perf_event.c
> +++ b/kernel/perf_event.c
> @@ -4786,9 +4786,8 @@ int perf_event_init_task(struct task_struct *child)
>  	 * We dont have to disable NMIs - we are only looking at
>  	 * the list, not manipulating it:
>  	 */
> -	list_for_each_entry_rcu(event, &parent_ctx->event_list, event_entry) {
> -		if (event != event->group_leader)
> -			continue;
> +	list_for_each_entry(event, &parent_ctx->group_list, group_entry) {
> +		WARN_ON_ONCE(event != event->group_leader);
>  
>  		if (!event->attr.inherit) {
>  			inherited_all = 0;


* Re: [PATCH 1/2] perf_counter: cleanup for __perf_event_sched_*()
  2009-09-23  8:40         ` [PATCH 1/2] perf_counter: cleanup for __perf_event_sched_*() Peter Zijlstra
@ 2009-09-25  1:22           ` Xiao Guangrong
  2009-09-25  5:51           ` [PATCH 1/2 v2] perf_counter: fix " Xiao Guangrong
  1 sibling, 0 replies; 22+ messages in thread
From: Xiao Guangrong @ 2009-09-25  1:22 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: Ingo Molnar, Paul Mackerras, LKML



Peter Zijlstra wrote:

> 
> Hrm.. I think it's not just a cleanup, but an actual bugfix.
> 
> The intent was to call event_sched_{in,out}() for single counter groups
> because that's cheaper than group_sched_{in,out}(), however..
> 
>  - as you noticed, I got the condition wrong, it should have read:
> 
>      list_empty(&event->sibling_list)
> 
>  - it failed to call group_can_go_on() which deals with ->exclusive.
> 
>  - it also doesn't call hw_perf_group_sched_in() which might break
>    power.
> 

Yeah, I'll fix the title

> Also, I'm not sure I like the comments and WARN_ON bits, the changelog
> should be sufficient.
> 

Um, I'll remove the comments and WARN_ON_ONCE()

Thanks,
Xiao


* Re: [PATCH 2/2] perf_counter: optimize for perf_event_init_task()
  2009-09-23  8:43           ` Peter Zijlstra
@ 2009-09-25  1:23             ` Xiao Guangrong
  0 siblings, 0 replies; 22+ messages in thread
From: Xiao Guangrong @ 2009-09-25  1:23 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: Ingo Molnar, Paul Mackerras, LKML



Peter Zijlstra wrote:
> On Wed, 2009-09-23 at 16:13 +0800, Xiao Guangrong wrote:
>> We can traverse ctx->group_list to get all group leaders; this should be safe
>> since we hold ctx->mutex
> 
> I don't think we need that WARN_ON_ONCE there.
> 

I'll remove it

Thanks,
Xiao

>> Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
>> ---
>>  kernel/perf_event.c |    5 ++---
>>  1 files changed, 2 insertions(+), 3 deletions(-)
>>
>> diff --git a/kernel/perf_event.c b/kernel/perf_event.c
>> index 9ca975a..4e6e822 100644
>> --- a/kernel/perf_event.c
>> +++ b/kernel/perf_event.c
>> @@ -4786,9 +4786,8 @@ int perf_event_init_task(struct task_struct *child)
>>  	 * We dont have to disable NMIs - we are only looking at
>>  	 * the list, not manipulating it:
>>  	 */
>> -	list_for_each_entry_rcu(event, &parent_ctx->event_list, event_entry) {
>> -		if (event != event->group_leader)
>> -			continue;
>> +	list_for_each_entry(event, &parent_ctx->group_list, group_entry) {
>> +		WARN_ON_ONCE(event != event->group_leader);
>>  
>>  		if (!event->attr.inherit) {
>>  			inherited_all = 0;
> 
> 


* [PATCH 1/2 v2] perf_counter: fix for __perf_event_sched_*()
  2009-09-23  8:40         ` [PATCH 1/2] perf_counter: cleanup for __perf_event_sched_*() Peter Zijlstra
  2009-09-25  1:22           ` Xiao Guangrong
@ 2009-09-25  5:51           ` Xiao Guangrong
  2009-09-25  5:54             ` [PATCH 2/2 v2] optimize for perf_event_init_task() Xiao Guangrong
                               ` (2 more replies)
  1 sibling, 3 replies; 22+ messages in thread
From: Xiao Guangrong @ 2009-09-25  5:51 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: Peter Zijlstra, Paul Mackerras, LKML

Paul Mackerras says:
"Actually, looking at this more closely, it has to be a group leader
anyway since it's at the top level of ctx->group_list.  In fact I see
four places where we do:

  list_for_each_entry(event, &ctx->group_list, group_entry) {
	if (event == event->group_leader)
		...

or the equivalent, three of which appear to have been introduced by
afedadf2 ("perf_counter: Optimize sched in/out of counters") back in
May by Peter Z.

As far as I can see the if () is superfluous in each case (a singleton
event will be a group of 1 and will have its group_leader pointing to
itself)."
[Can be found at http://marc.info/?l=linux-kernel&m=125361238901442&w=2]

And Peter Zijlstra points out this is a bugfix:
"The intent was to call event_sched_{in,out}() for single counter groups
because that's cheaper than group_sched_{in,out}(), however..

 - as you noticed, I got the condition wrong, it should have read:

     list_empty(&event->sibling_list)

 - it failed to call group_can_go_on() which deals with ->exclusive.

 - it also doesn't call hw_perf_group_sched_in() which might break
   power."
[Can be found at http://marc.info/?l=linux-kernel&m=125369523318583&w=2]

Changelog v1->v2:
- fix the title, as Peter Zijlstra suggested
- remove the comments and WARN_ON_ONCE(), as Peter Zijlstra suggested

Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
---
 kernel/perf_event.c |   30 ++++++++----------------------
 1 files changed, 8 insertions(+), 22 deletions(-)

diff --git a/kernel/perf_event.c b/kernel/perf_event.c
index 76ac4db..2a15cd6 100644
--- a/kernel/perf_event.c
+++ b/kernel/perf_event.c
@@ -1030,14 +1030,10 @@ void __perf_event_sched_out(struct perf_event_context *ctx,
 	update_context_time(ctx);
 
 	perf_disable();
-	if (ctx->nr_active) {
-		list_for_each_entry(event, &ctx->group_list, group_entry) {
-			if (event != event->group_leader)
-				event_sched_out(event, cpuctx, ctx);
-			else
-				group_sched_out(event, cpuctx, ctx);
-		}
-	}
+	if (ctx->nr_active)
+		list_for_each_entry(event, &ctx->group_list, group_entry)
+			group_sched_out(event, cpuctx, ctx);
+
 	perf_enable();
  out:
 	spin_unlock(&ctx->lock);
@@ -1258,12 +1254,8 @@ __perf_event_sched_in(struct perf_event_context *ctx,
 		if (event->cpu != -1 && event->cpu != cpu)
 			continue;
 
-		if (event != event->group_leader)
-			event_sched_in(event, cpuctx, ctx, cpu);
-		else {
-			if (group_can_go_on(event, cpuctx, 1))
-				group_sched_in(event, cpuctx, ctx, cpu);
-		}
+		if (group_can_go_on(event, cpuctx, 1))
+			group_sched_in(event, cpuctx, ctx, cpu);
 
 		/*
 		 * If this pinned group hasn't been scheduled,
@@ -1291,15 +1283,9 @@ __perf_event_sched_in(struct perf_event_context *ctx,
 		if (event->cpu != -1 && event->cpu != cpu)
 			continue;
 
-		if (event != event->group_leader) {
-			if (event_sched_in(event, cpuctx, ctx, cpu))
+		if (group_can_go_on(event, cpuctx, can_add_hw))
+			if (group_sched_in(event, cpuctx, ctx, cpu))
 				can_add_hw = 0;
-		} else {
-			if (group_can_go_on(event, cpuctx, can_add_hw)) {
-				if (group_sched_in(event, cpuctx, ctx, cpu))
-					can_add_hw = 0;
-			}
-		}
 	}
 	perf_enable();
  out:
-- 
1.6.1.2



* [PATCH 2/2 v2] optimize for perf_event_init_task()
  2009-09-25  5:51           ` [PATCH 1/2 v2] perf_counter: fix " Xiao Guangrong
@ 2009-09-25  5:54             ` Xiao Guangrong
  2009-09-25  8:55               ` Peter Zijlstra
  2009-10-01  7:47               ` [tip:perf/urgent] perf_event: Clean up perf_event_init_task() tip-bot for Xiao Guangrong
  2009-09-25  8:55             ` [PATCH 1/2 v2] perf_counter: fix for __perf_event_sched_*() Peter Zijlstra
  2009-10-01  7:46             ` [tip:perf/urgent] perf_event: Fix event group handling in __perf_event_sched_*() tip-bot for Xiao Guangrong
  2 siblings, 2 replies; 22+ messages in thread
From: Xiao Guangrong @ 2009-09-25  5:54 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: Peter Zijlstra, Paul Mackerras, LKML

We can traverse ctx->group_list to get all group leaders; this should be safe
since we hold ctx->mutex

Changelog v1->v2:
- remove WARN_ON_ONCE(), as Peter Zijlstra suggested

Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
---
 kernel/perf_event.c |    4 +---
 1 files changed, 1 insertions(+), 3 deletions(-)

diff --git a/kernel/perf_event.c b/kernel/perf_event.c
index 2a15cd6..0276fb4 100644
--- a/kernel/perf_event.c
+++ b/kernel/perf_event.c
@@ -4767,9 +4767,7 @@ int perf_event_init_task(struct task_struct *child)
 	 * We dont have to disable NMIs - we are only looking at
 	 * the list, not manipulating it:
 	 */
-	list_for_each_entry_rcu(event, &parent_ctx->event_list, event_entry) {
-		if (event != event->group_leader)
-			continue;
+	list_for_each_entry(event, &parent_ctx->group_list, group_entry) {
 
 		if (!event->attr.inherit) {
 			inherited_all = 0;
-- 
1.6.1.2




* Re: [PATCH 1/2 v2] perf_counter: fix for __perf_event_sched_*()
  2009-09-25  5:51           ` [PATCH 1/2 v2] perf_counter: fix " Xiao Guangrong
  2009-09-25  5:54             ` [PATCH 2/2 v2] optimize for perf_event_init_task() Xiao Guangrong
@ 2009-09-25  8:55             ` Peter Zijlstra
  2009-10-01  7:46             ` [tip:perf/urgent] perf_event: Fix event group handling in __perf_event_sched_*() tip-bot for Xiao Guangrong
  2 siblings, 0 replies; 22+ messages in thread
From: Peter Zijlstra @ 2009-09-25  8:55 UTC (permalink / raw)
  To: Xiao Guangrong; +Cc: Ingo Molnar, Paul Mackerras, LKML

On Fri, 2009-09-25 at 13:51 +0800, Xiao Guangrong wrote:
> Paul Mackerras says:
> "Actually, looking at this more closely, it has to be a group leader
> anyway since it's at the top level of ctx->group_list.  In fact I see
> four places where we do:
> 
>   list_for_each_entry(event, &ctx->group_list, group_entry) {
> 	if (event == event->group_leader)
> 		...
> 
> or the equivalent, three of which appear to have been introduced by
> afedadf2 ("perf_counter: Optimize sched in/out of counters") back in
> May by Peter Z.
> 
> As far as I can see the if () is superfluous in each case (a singleton
> event will be a group of 1 and will have its group_leader pointing to
> itself)."
> [Can be found at http://marc.info/?l=linux-kernel&m=125361238901442&w=2]
> 
> And Peter Zijlstra points out this is a bugfix:
> "The intent was to call event_sched_{in,out}() for single counter groups
> because that's cheaper than group_sched_{in,out}(), however..
> 
>  - as you noticed, I got the condition wrong, it should have read:
> 
>      list_empty(&event->sibling_list)
> 
>  - it failed to call group_can_go_on() which deals with ->exclusive.
> 
>  - it also doesn't call hw_perf_group_sched_in() which might break
>    power."
> [Can be found at http://marc.info/?l=linux-kernel&m=125369523318583&w=2]
> 
> Changelog v1->v2:
> - fix the title, as Peter Zijlstra suggested
> - remove the comments and WARN_ON_ONCE(), as Peter Zijlstra suggested
> 
> Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>

Thanks,

Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>


* Re: [PATCH 2/2 v2] optimize for perf_event_init_task()
  2009-09-25  5:54             ` [PATCH 2/2 v2] optimize for perf_event_init_task() Xiao Guangrong
@ 2009-09-25  8:55               ` Peter Zijlstra
  2009-10-01  7:31                 ` Ingo Molnar
  2009-10-01  7:47               ` [tip:perf/urgent] perf_event: Clean up perf_event_init_task() tip-bot for Xiao Guangrong
  1 sibling, 1 reply; 22+ messages in thread
From: Peter Zijlstra @ 2009-09-25  8:55 UTC (permalink / raw)
  To: Xiao Guangrong; +Cc: Ingo Molnar, Paul Mackerras, LKML

On Fri, 2009-09-25 at 13:54 +0800, Xiao Guangrong wrote:
> We can traverse ctx->group_list to get all group leaders; this should be safe
> since we hold ctx->mutex
> 
> Changelog v1->v2:
> - remove WARN_ON_ONCE(), as Peter Zijlstra suggested
> 
> Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>

Thanks!

Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>


* Re: [PATCH 2/2 v2] optimize for perf_event_init_task()
  2009-09-25  8:55               ` Peter Zijlstra
@ 2009-10-01  7:31                 ` Ingo Molnar
  0 siblings, 0 replies; 22+ messages in thread
From: Ingo Molnar @ 2009-10-01  7:31 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: Xiao Guangrong, Paul Mackerras, LKML


* Peter Zijlstra <peterz@infradead.org> wrote:

> On Fri, 2009-09-25 at 13:54 +0800, Xiao Guangrong wrote:
> > We can traverse ctx->group_list to get all group leaders; this should be safe
> > since we hold ctx->mutex
> > 
> > Changelog v1->v2:
> > - remove WARN_ON_ONCE(), as Peter Zijlstra suggested
> > 
> > Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
> 
> Thanks!
> 
> Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>

Thanks Xiao and Peter, i've queued up both patches for v2.6.32.

	Ingo


* [tip:perf/urgent] perf_event: Fix event group handling in __perf_event_sched_*()
  2009-09-25  5:51           ` [PATCH 1/2 v2] perf_counter: fix " Xiao Guangrong
  2009-09-25  5:54             ` [PATCH 2/2 v2] optimize for perf_event_init_task() Xiao Guangrong
  2009-09-25  8:55             ` [PATCH 1/2 v2] perf_counter: fix for __perf_event_sched_*() Peter Zijlstra
@ 2009-10-01  7:46             ` tip-bot for Xiao Guangrong
  2 siblings, 0 replies; 22+ messages in thread
From: tip-bot for Xiao Guangrong @ 2009-10-01  7:46 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: linux-kernel, paulus, hpa, mingo, peterz, xiaoguangrong, tglx, mingo

Commit-ID:  8c9ed8e14c342ec5e7f27e7e498f62409a10eb29
Gitweb:     http://git.kernel.org/tip/8c9ed8e14c342ec5e7f27e7e498f62409a10eb29
Author:     Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
AuthorDate: Fri, 25 Sep 2009 13:51:17 +0800
Committer:  Ingo Molnar <mingo@elte.hu>
CommitDate: Thu, 1 Oct 2009 09:30:44 +0200

perf_event: Fix event group handling in __perf_event_sched_*()

Paul Mackerras says:

 "Actually, looking at this more closely, it has to be a group
 leader anyway since it's at the top level of ctx->group_list.  In
 fact I see four places where we do:

  list_for_each_entry(event, &ctx->group_list, group_entry) {
	if (event == event->group_leader)
		...

 or the equivalent, three of which appear to have been introduced
 by afedadf2 ("perf_counter: Optimize sched in/out of counters")
 back in May by Peter Z.

 As far as I can see the if () is superfluous in each case (a
 singleton event will be a group of 1 and will have its
 group_leader pointing to itself)."

 [ See: http://marc.info/?l=linux-kernel&m=125361238901442&w=2 ]

And Peter Zijlstra points out this is a bugfix:

 "The intent was to call event_sched_{in,out}() for single event
  groups because that's cheaper than group_sched_{in,out}(),
  however..

  - as you noticed, I got the condition wrong, it should have read:

      list_empty(&event->sibling_list)

  - it failed to call group_can_go_on() which deals with ->exclusive.

  - it also doesn't call hw_perf_group_sched_in() which might break
    power."

 [ See: http://marc.info/?l=linux-kernel&m=125369523318583&w=2 ]

Changelog v1->v2:

 - Fix the title according to Peter Zijlstra's suggestion

 - Remove the comments and WARN_ON_ONCE(), as Peter Zijlstra
   suggested

Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Cc: Paul Mackerras <paulus@samba.org>
LKML-Reference: <4ABC5A55.7000208@cn.fujitsu.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>


---
 kernel/perf_event.c |   30 ++++++++----------------------
 1 files changed, 8 insertions(+), 22 deletions(-)

diff --git a/kernel/perf_event.c b/kernel/perf_event.c
index 0f86feb..e50543d 100644
--- a/kernel/perf_event.c
+++ b/kernel/perf_event.c
@@ -1030,14 +1030,10 @@ void __perf_event_sched_out(struct perf_event_context *ctx,
 	update_context_time(ctx);
 
 	perf_disable();
-	if (ctx->nr_active) {
-		list_for_each_entry(event, &ctx->group_list, group_entry) {
-			if (event != event->group_leader)
-				event_sched_out(event, cpuctx, ctx);
-			else
-				group_sched_out(event, cpuctx, ctx);
-		}
-	}
+	if (ctx->nr_active)
+		list_for_each_entry(event, &ctx->group_list, group_entry)
+			group_sched_out(event, cpuctx, ctx);
+
 	perf_enable();
  out:
 	spin_unlock(&ctx->lock);
@@ -1258,12 +1254,8 @@ __perf_event_sched_in(struct perf_event_context *ctx,
 		if (event->cpu != -1 && event->cpu != cpu)
 			continue;
 
-		if (event != event->group_leader)
-			event_sched_in(event, cpuctx, ctx, cpu);
-		else {
-			if (group_can_go_on(event, cpuctx, 1))
-				group_sched_in(event, cpuctx, ctx, cpu);
-		}
+		if (group_can_go_on(event, cpuctx, 1))
+			group_sched_in(event, cpuctx, ctx, cpu);
 
 		/*
 		 * If this pinned group hasn't been scheduled,
@@ -1291,15 +1283,9 @@ __perf_event_sched_in(struct perf_event_context *ctx,
 		if (event->cpu != -1 && event->cpu != cpu)
 			continue;
 
-		if (event != event->group_leader) {
-			if (event_sched_in(event, cpuctx, ctx, cpu))
+		if (group_can_go_on(event, cpuctx, can_add_hw))
+			if (group_sched_in(event, cpuctx, ctx, cpu))
 				can_add_hw = 0;
-		} else {
-			if (group_can_go_on(event, cpuctx, can_add_hw)) {
-				if (group_sched_in(event, cpuctx, ctx, cpu))
-					can_add_hw = 0;
-			}
-		}
 	}
 	perf_enable();
  out:


* [tip:perf/urgent] perf_event: Clean up perf_event_init_task()
  2009-09-25  5:54             ` [PATCH 2/2 v2] optimize for perf_event_init_task() Xiao Guangrong
  2009-09-25  8:55               ` Peter Zijlstra
@ 2009-10-01  7:47               ` tip-bot for Xiao Guangrong
  1 sibling, 0 replies; 22+ messages in thread
From: tip-bot for Xiao Guangrong @ 2009-10-01  7:47 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: linux-kernel, paulus, hpa, mingo, peterz, xiaoguangrong, tglx, mingo

Commit-ID:  27f9994c50e95f3a5a81fe4c7491a9f9cffe6ec0
Gitweb:     http://git.kernel.org/tip/27f9994c50e95f3a5a81fe4c7491a9f9cffe6ec0
Author:     Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
AuthorDate: Fri, 25 Sep 2009 13:54:01 +0800
Committer:  Ingo Molnar <mingo@elte.hu>
CommitDate: Thu, 1 Oct 2009 09:30:44 +0200

perf_event: Clean up perf_event_init_task()

While at it: we can traverse ctx->group_list to get all
group leaders; this should be safe since we hold ctx->mutex.

Changelog v1->v2:

  - remove WARN_ON_ONCE() according to Peter Zijlstra's suggestion

Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Cc: Paul Mackerras <paulus@samba.org>
LKML-Reference: <4ABC5AF9.6060808@cn.fujitsu.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>


---
 kernel/perf_event.c |    4 +---
 1 files changed, 1 insertions(+), 3 deletions(-)

diff --git a/kernel/perf_event.c b/kernel/perf_event.c
index e50543d..e491fb0 100644
--- a/kernel/perf_event.c
+++ b/kernel/perf_event.c
@@ -4767,9 +4767,7 @@ int perf_event_init_task(struct task_struct *child)
 	 * We dont have to disable NMIs - we are only looking at
 	 * the list, not manipulating it:
 	 */
-	list_for_each_entry_rcu(event, &parent_ctx->event_list, event_entry) {
-		if (event != event->group_leader)
-			continue;
+	list_for_each_entry(event, &parent_ctx->group_list, group_entry) {
 
 		if (!event->attr.inherit) {
 			inherited_all = 0;

