* [PATCH] sched: put rq's sched_avg under CONFIG_FAIR_GROUP_SCHED
@ 2014-02-25 11:47 Dietmar Eggemann
  2014-02-25 13:16 ` Srikar Dronamraju
  2014-02-25 20:52 ` Peter Zijlstra
  0 siblings, 2 replies; 9+ messages in thread
From: Dietmar Eggemann @ 2014-02-25 11:47 UTC (permalink / raw)
  To: Peter Zijlstra, Ben Segall; +Cc: linux-kernel, Dietmar Eggemann

From: Dietmar Eggemann <Dietmar.Eggemann@arm.com>

The struct sched_avg of struct rq is only used when group scheduling
is enabled, inside __update_tg_runnable_avg(), to update the per-cpu
representation of a task group.  I.e., there is no need to maintain
the runnable avg of a rq in the !CONFIG_FAIR_GROUP_SCHED case.

This patch guards struct sched_avg of struct rq and
update_rq_runnable_avg() with CONFIG_FAIR_GROUP_SCHED.

There is an extra empty definition for update_rq_runnable_avg()
necessary for the !CONFIG_FAIR_GROUP_SCHED && CONFIG_SMP case.

The function print_cfs_group_stats(), which prints out struct sched_avg
of struct rq, is already guarded with CONFIG_FAIR_GROUP_SCHED.

Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
---

Hi,

I was just wondering what the overall policy is when it comes to
guarding specific functionality in the scheduler code.  Do we want to
guard something like the fair group scheduling support completely?

The patch is against tip/master.
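
As background, rq->avg is the per-rq decayed runnable average
maintained by __update_entity_runnable_avg(): each ~1ms segment of
time is weighted by y^age with y^32 == 1/2, so runnable_avg_sum over
runnable_avg_period is the fraction of recent time the rq had runnable
tasks.  A toy sketch of that update (not the kernel's code, which works
in 1024us segments with precomputed decay tables):

#include <stdint.h>

#define PERIOD_US	1024		/* one accumulation segment, ~1ms */

struct toy_avg {
	uint64_t last_update;		/* in us */
	uint32_t runnable_sum;		/* decayed us spent runnable */
	uint32_t runnable_period;	/* decayed us of elapsed time */
};

static void toy_update(struct toy_avg *a, uint64_t now, int runnable)
{
	uint64_t delta = now - a->last_update;

	a->last_update = now;

	while (delta) {
		uint64_t d = delta < PERIOD_US ? delta : PERIOD_US;

		if (runnable)
			a->runnable_sum += d;
		a->runnable_period += d;

		if (d == PERIOD_US) {
			/* decay by y = 0.5^(1/32) ~= 0.978 per full segment */
			a->runnable_sum -= a->runnable_sum / 46;
			a->runnable_period -= a->runnable_period / 46;
		}
		delta -= d;
	}
}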

 kernel/sched/fair.c  |   13 +++++++------
 kernel/sched/sched.h |    2 ++
 2 files changed, 9 insertions(+), 6 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 5f6ddbef80af..76c6513b6889 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2376,12 +2376,19 @@ static inline void __update_group_entity_contrib(struct sched_entity *se)
 		se->avg.load_avg_contrib >>= NICE_0_SHIFT;
 	}
 }
+
+static inline void update_rq_runnable_avg(struct rq *rq, int runnable)
+{
+	__update_entity_runnable_avg(rq_clock_task(rq), &rq->avg, runnable);
+	__update_tg_runnable_avg(&rq->avg, &rq->cfs);
+}
 #else /* CONFIG_FAIR_GROUP_SCHED */
 static inline void __update_cfs_rq_tg_load_contrib(struct cfs_rq *cfs_rq,
 						 int force_update) {}
 static inline void __update_tg_runnable_avg(struct sched_avg *sa,
 						  struct cfs_rq *cfs_rq) {}
 static inline void __update_group_entity_contrib(struct sched_entity *se) {}
+static inline void update_rq_runnable_avg(struct rq *rq, int runnable) {}
 #endif /* CONFIG_FAIR_GROUP_SCHED */
 
 static inline void __update_task_entity_contrib(struct sched_entity *se)
@@ -2480,12 +2487,6 @@ static void update_cfs_rq_blocked_load(struct cfs_rq *cfs_rq, int force_update)
 	__update_cfs_rq_tg_load_contrib(cfs_rq, force_update);
 }
 
-static inline void update_rq_runnable_avg(struct rq *rq, int runnable)
-{
-	__update_entity_runnable_avg(rq_clock_task(rq), &rq->avg, runnable);
-	__update_tg_runnable_avg(&rq->avg, &rq->cfs);
-}
-
 /* Add the load generated by se into cfs_rq's child load-average */
 static inline void enqueue_entity_load_avg(struct cfs_rq *cfs_rq,
 						  struct sched_entity *se,
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 4be68da1fe00..63beab7512e7 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -630,7 +630,9 @@ struct rq {
 	struct llist_head wake_list;
 #endif
 
+#ifdef CONFIG_FAIR_GROUP_SCHED
 	struct sched_avg avg;
+#endif
 };
 
 static inline int cpu_of(struct rq *rq)
-- 
1.7.9.5




* Re: [PATCH] sched: put rq's sched_avg under CONFIG_FAIR_GROUP_SCHED
  2014-02-25 11:47 [PATCH] sched: put rq's sched_avg under CONFIG_FAIR_GROUP_SCHED Dietmar Eggemann
@ 2014-02-25 13:16 ` Srikar Dronamraju
  2014-02-25 17:47   ` Dietmar Eggemann
  2014-02-25 20:52 ` Peter Zijlstra
  1 sibling, 1 reply; 9+ messages in thread
From: Srikar Dronamraju @ 2014-02-25 13:16 UTC (permalink / raw)
  To: Dietmar Eggemann; +Cc: Peter Zijlstra, Ben Segall, linux-kernel

> The struct sched_avg of struct rq is only used when group scheduling
> is enabled, inside __update_tg_runnable_avg(), to update the per-cpu
> representation of a task group.  I.e., there is no need to maintain
> the runnable avg of a rq in the !CONFIG_FAIR_GROUP_SCHED case.
> 
> This patch guards struct sched_avg of struct rq and
> update_rq_runnable_avg() with CONFIG_FAIR_GROUP_SCHED.
> 

While this patch looks good, I see fields in sched_avg, viz. decay_count,
last_runnable_update and load_avg_contrib, that are only relevant to
sched_entity.  I.e., they don't seem to be updated or used for rq->avg.
Should we look at splitting sched_avg so that rq->avg doesn't have
unwanted fields?

--
Thanks and Regards
Srikar Dronamraju



* Re: [PATCH] sched: put rq's sched_avg under CONFIG_FAIR_GROUP_SCHED
  2014-02-25 13:16 ` Srikar Dronamraju
@ 2014-02-25 17:47   ` Dietmar Eggemann
  2014-02-25 19:55     ` bsegall
  0 siblings, 1 reply; 9+ messages in thread
From: Dietmar Eggemann @ 2014-02-25 17:47 UTC (permalink / raw)
  To: Srikar Dronamraju; +Cc: Peter Zijlstra, Ben Segall, linux-kernel

On 25/02/14 13:16, Srikar Dronamraju wrote:
>> The struct sched_avg of struct rq is only used when group scheduling
>> is enabled, inside __update_tg_runnable_avg(), to update the per-cpu
>> representation of a task group.  I.e., there is no need to maintain
>> the runnable avg of a rq in the !CONFIG_FAIR_GROUP_SCHED case.
>>
>> This patch guards struct sched_avg of struct rq and
>> update_rq_runnable_avg() with CONFIG_FAIR_GROUP_SCHED.
>>
> 
> While this patch looks good, I see fields in sched_avg, viz. decay_count,
> last_runnable_update and load_avg_contrib, that are only relevant to
> sched_entity.  I.e., they don't seem to be updated or used for rq->avg.
> Should we look at splitting sched_avg so that rq->avg doesn't have
> unwanted fields?

Yes, AFAICS at least load_avg_contrib and decay_count are only relevant
for struct sched_entity, whereas last_runnable_update is used in
__update_entity_runnable_avg() itself.

By having struct sched_avg embedded into struct sched_entity and struct
rq, __update_entity_runnable_avg() and __update_tg_runnable_avg() can be
used on both; moreover, all current members of struct sched_avg belong
logically together.

With this patch I was more interested in the fact that we do not have to
maintain the load avg for struct rq in a !CONFIG_FAIR_GROUP_SCHED system.
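
For reference, this is struct sched_avg as it currently stands in
include/linux/sched.h (the comment above the sums trimmed):

struct sched_avg {
	u32 runnable_avg_sum, runnable_avg_period;
	u64 last_runnable_update;
	s64 decay_count;
	unsigned long load_avg_contrib;
};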

-- Dietmar


* Re: [PATCH] sched: put rq's sched_avg under CONFIG_FAIR_GROUP_SCHED
  2014-02-25 17:47   ` Dietmar Eggemann
@ 2014-02-25 19:55     ` bsegall
  0 siblings, 0 replies; 9+ messages in thread
From: bsegall @ 2014-02-25 19:55 UTC (permalink / raw)
  To: Dietmar Eggemann; +Cc: Srikar Dronamraju, Peter Zijlstra, linux-kernel, pjt

Dietmar Eggemann <dietmar.eggemann@arm.com> writes:

> On 25/02/14 13:16, Srikar Dronamraju wrote:
>>> The struct sched_avg of struct rq is only used when group scheduling
>>> is enabled, inside __update_tg_runnable_avg(), to update the per-cpu
>>> representation of a task group.  I.e., there is no need to maintain
>>> the runnable avg of a rq in the !CONFIG_FAIR_GROUP_SCHED case.
>>>
>>> This patch guards struct sched_avg of struct rq and
>>> update_rq_runnable_avg() with CONFIG_FAIR_GROUP_SCHED.
>>>

Reviewed-by: Ben Segall <bsegall@google.com>

If we ever decide to use runnable_avg_sum/period in balance or
something, it's trivial enough to move it back, and there is no point in
calculating unused stats until then.
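
If we did, the consumer would presumably be something along these lines
(a hypothetical sketch, nothing like it exists today), following the
same sum/(period + 1) pattern that __update_tg_runnable_avg() already
uses:

static inline u32 rq_runnable_fraction(struct rq *rq)
{
	/* fraction of recent time this rq had runnable tasks, scaled to 1024 */
	return div_u64((u64)rq->avg.runnable_avg_sum << 10,
		       rq->avg.runnable_avg_period + 1);
}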

>> 
>> While this patch looks good, I see fields in sched_avg, viz. decay_count,
>> last_runnable_update and load_avg_contrib, that are only relevant to
>> sched_entity.  I.e., they don't seem to be updated or used for rq->avg.
>> Should we look at splitting sched_avg so that rq->avg doesn't have
>> unwanted fields?
>
> Yes, AFAICS at least load_avg_contrib and decay_count are only relevant
> for struct sched_entity, whereas last_runnable_update is used in
> __update_entity_runnable_avg() itself.
>
> By having struct sched_avg embedded into struct sched_entity and struct
> rq, __update_entity_runnable_avg() and __update_tg_runnable_avg() can be
> used on both; moreover, all current members of struct sched_avg belong
> logically together.
>
> With this patch I was more interested in the fact that we do not have to
> maintain the load avg for struct rq in a !CONFIG_FAIR_GROUP_SCHED system.
>

Yeah, last_runnable_update and runnable_avg_sum/period are used;
decay_count and load_avg_contrib could be moved to the se just fine, and
won't cause any problems.
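
A split along those lines might look like this (a sketch only, the
exact field placement would need checking against all users):

/* shared accumulation state, embedded in both struct rq and sched_entity */
struct sched_avg {
	u32 runnable_avg_sum, runnable_avg_period;
	u64 last_runnable_update;
};

/* ...while the entity-only state moves into struct sched_entity itself */
struct sched_entity {
	/* existing members ... */
	struct sched_avg avg;
	s64 decay_count;
	unsigned long load_avg_contrib;
};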


* Re: [PATCH] sched: put rq's sched_avg under CONFIG_FAIR_GROUP_SCHED
  2014-02-25 11:47 [PATCH] sched: put rq's sched_avg under CONFIG_FAIR_GROUP_SCHED Dietmar Eggemann
  2014-02-25 13:16 ` Srikar Dronamraju
@ 2014-02-25 20:52 ` Peter Zijlstra
  2014-02-26 11:19   ` Dietmar Eggemann
  1 sibling, 1 reply; 9+ messages in thread
From: Peter Zijlstra @ 2014-02-25 20:52 UTC (permalink / raw)
  To: Dietmar Eggemann; +Cc: Ben Segall, linux-kernel

On Tue, Feb 25, 2014 at 11:47:42AM +0000, Dietmar Eggemann wrote:
> +++ b/kernel/sched/sched.h
> @@ -630,7 +630,9 @@ struct rq {
>  	struct llist_head wake_list;
>  #endif
>  
> +#ifdef CONFIG_FAIR_GROUP_SCHED
>  	struct sched_avg avg;
> +#endif
>  };

There is already a CONFIG_FAIR_GROUP_SCHED #ifdef in that structure;
does it make sense to move this variable in there instead of adding yet
another #ifdef?


* Re: [PATCH] sched: put rq's sched_avg under CONFIG_FAIR_GROUP_SCHED
  2014-02-25 20:52 ` Peter Zijlstra
@ 2014-02-26 11:19   ` Dietmar Eggemann
  2014-02-26 13:16     ` Peter Zijlstra
                       ` (2 more replies)
  0 siblings, 3 replies; 9+ messages in thread
From: Dietmar Eggemann @ 2014-02-26 11:19 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: Ben Segall, linux-kernel

On 25/02/14 20:52, Peter Zijlstra wrote:
> On Tue, Feb 25, 2014 at 11:47:42AM +0000, Dietmar Eggemann wrote:
>> +++ b/kernel/sched/sched.h
>> @@ -630,7 +630,9 @@ struct rq {
>>  	struct llist_head wake_list;
>>  #endif
>>  
>> +#ifdef CONFIG_FAIR_GROUP_SCHED
>>  	struct sched_avg avg;
>> +#endif
>>  };
> 
> There is already a CONFIG_FAIR_GROUP_SCHED #ifdef in that structure;
> does it make sense to move this variable in there instead of adding yet
> another #ifdef?
> 

I changed the patch accordingly.

-- >8 --
Subject: [PATCH] sched: put rq's sched_avg under CONFIG_FAIR_GROUP_SCHED

The struct sched_avg of struct rq is only used when group scheduling
is enabled, inside __update_tg_runnable_avg(), to update the per-cpu
representation of a task group.  I.e., there is no need to maintain
the runnable avg of a rq in the !CONFIG_FAIR_GROUP_SCHED case.

This patch guards struct sched_avg of struct rq and
update_rq_runnable_avg() with CONFIG_FAIR_GROUP_SCHED.

There is an extra empty definition for update_rq_runnable_avg()
necessary for the !CONFIG_FAIR_GROUP_SCHED && CONFIG_SMP case.

The function print_cfs_group_stats(), which prints out struct sched_avg
of struct rq, is already guarded with CONFIG_FAIR_GROUP_SCHED.

Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
---
 kernel/sched/fair.c  |   13 +++++++------
 kernel/sched/sched.h |    4 ++--
 2 files changed, 9 insertions(+), 8 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 5f6ddbef80af..76c6513b6889 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2376,12 +2376,19 @@ static inline void __update_group_entity_contrib(struct sched_entity *se)
 		se->avg.load_avg_contrib >>= NICE_0_SHIFT;
 	}
 }
+
+static inline void update_rq_runnable_avg(struct rq *rq, int runnable)
+{
+	__update_entity_runnable_avg(rq_clock_task(rq), &rq->avg, runnable);
+	__update_tg_runnable_avg(&rq->avg, &rq->cfs);
+}
 #else /* CONFIG_FAIR_GROUP_SCHED */
 static inline void __update_cfs_rq_tg_load_contrib(struct cfs_rq *cfs_rq,
 						 int force_update) {}
 static inline void __update_tg_runnable_avg(struct sched_avg *sa,
 						  struct cfs_rq *cfs_rq) {}
 static inline void __update_group_entity_contrib(struct sched_entity *se) {}
+static inline void update_rq_runnable_avg(struct rq *rq, int runnable) {}
 #endif /* CONFIG_FAIR_GROUP_SCHED */
 
 static inline void __update_task_entity_contrib(struct sched_entity *se)
@@ -2480,12 +2487,6 @@ static void update_cfs_rq_blocked_load(struct cfs_rq *cfs_rq, int force_update)
 	__update_cfs_rq_tg_load_contrib(cfs_rq, force_update);
 }
 
-static inline void update_rq_runnable_avg(struct rq *rq, int runnable)
-{
-	__update_entity_runnable_avg(rq_clock_task(rq), &rq->avg, runnable);
-	__update_tg_runnable_avg(&rq->avg, &rq->cfs);
-}
-
 /* Add the load generated by se into cfs_rq's child load-average */
 static inline void enqueue_entity_load_avg(struct cfs_rq *cfs_rq,
 						  struct sched_entity *se,
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 4be68da1fe00..b1fb1a62c52d 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -540,6 +540,8 @@ struct rq {
 #ifdef CONFIG_FAIR_GROUP_SCHED
 	/* list of leaf cfs_rq on this cpu: */
 	struct list_head leaf_cfs_rq_list;
+
+	struct sched_avg avg;
 #endif /* CONFIG_FAIR_GROUP_SCHED */
 
 	/*
@@ -629,8 +631,6 @@ struct rq {
 #ifdef CONFIG_SMP
 	struct llist_head wake_list;
 #endif
-
-	struct sched_avg avg;
 };
 
 static inline int cpu_of(struct rq *rq)
-- 
1.7.9.5





* Re: [PATCH] sched: put rq's sched_avg under CONFIG_FAIR_GROUP_SCHED
  2014-02-26 11:19   ` Dietmar Eggemann
@ 2014-02-26 13:16     ` Peter Zijlstra
  2014-02-26 17:53     ` bsegall
  2014-02-27 13:32     ` [tip:sched/core] sched: Put rq's sched_avg under CONFIG_FAIR_GROUP_SCHED tip-bot for Dietmar Eggemann
  2 siblings, 0 replies; 9+ messages in thread
From: Peter Zijlstra @ 2014-02-26 13:16 UTC (permalink / raw)
  To: Dietmar Eggemann; +Cc: Ben Segall, linux-kernel

On Wed, Feb 26, 2014 at 11:19:33AM +0000, Dietmar Eggemann wrote:
> I changed the patch accordingly.

Thanks!


* Re: [PATCH] sched: put rq's sched_avg under CONFIG_FAIR_GROUP_SCHED
  2014-02-26 11:19   ` Dietmar Eggemann
  2014-02-26 13:16     ` Peter Zijlstra
@ 2014-02-26 17:53     ` bsegall
  2014-02-27 13:32     ` [tip:sched/core] sched: Put rq's sched_avg under CONFIG_FAIR_GROUP_SCHED tip-bot for Dietmar Eggemann
  2 siblings, 0 replies; 9+ messages in thread
From: bsegall @ 2014-02-26 17:53 UTC (permalink / raw)
  To: Dietmar Eggemann; +Cc: Peter Zijlstra, linux-kernel

Dietmar Eggemann <dietmar.eggemann@arm.com> writes:

> On 25/02/14 20:52, Peter Zijlstra wrote:
>> On Tue, Feb 25, 2014 at 11:47:42AM +0000, Dietmar Eggemann wrote:
>>> +++ b/kernel/sched/sched.h
>>> @@ -630,7 +630,9 @@ struct rq {
>>>  	struct llist_head wake_list;
>>>  #endif
>>>  
>>> +#ifdef CONFIG_FAIR_GROUP_SCHED
>>>  	struct sched_avg avg;
>>> +#endif
>>>  };
>> 
>> There is already a CONFIG_FAIR_GROUP_SCHED #ifdef in that structure;
>> does it make sense to move this variable in there instead of adding yet
>> another #ifdef?
>> 
>
> I changed the patch accordingly.
>
> -- >8 --
> Subject: [PATCH] sched: put rq's sched_avg under CONFIG_FAIR_GROUP_SCHED
>
> The struct sched_avg of struct rq is only used when group scheduling
> is enabled, inside __update_tg_runnable_avg(), to update the per-cpu
> representation of a task group.  I.e., there is no need to maintain
> the runnable avg of a rq in the !CONFIG_FAIR_GROUP_SCHED case.
>
> This patch guards struct sched_avg of struct rq and
> update_rq_runnable_avg() with CONFIG_FAIR_GROUP_SCHED.
>
> There is an extra empty definition for update_rq_runnable_avg()
> necessary for the !CONFIG_FAIR_GROUP_SCHED && CONFIG_SMP case.
>
> The function print_cfs_group_stats(), which prints out struct sched_avg
> of struct rq, is already guarded with CONFIG_FAIR_GROUP_SCHED.
>
> Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
Reviewed-by: Ben Segall <bsegall@google.com>


* [tip:sched/core] sched: Put rq's sched_avg under CONFIG_FAIR_GROUP_SCHED
  2014-02-26 11:19   ` Dietmar Eggemann
  2014-02-26 13:16     ` Peter Zijlstra
  2014-02-26 17:53     ` bsegall
@ 2014-02-27 13:32     ` tip-bot for Dietmar Eggemann
  2 siblings, 0 replies; 9+ messages in thread
From: tip-bot for Dietmar Eggemann @ 2014-02-27 13:32 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: linux-kernel, bsegall, hpa, mingo, peterz, dietmar.eggemann, tglx

Commit-ID:  f5f9739d7a0ccbdcf913a0b3604b134129d14f7e
Gitweb:     http://git.kernel.org/tip/f5f9739d7a0ccbdcf913a0b3604b134129d14f7e
Author:     Dietmar Eggemann <dietmar.eggemann@arm.com>
AuthorDate: Wed, 26 Feb 2014 11:19:33 +0000
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Thu, 27 Feb 2014 12:41:00 +0100

sched: Put rq's sched_avg under CONFIG_FAIR_GROUP_SCHED

The struct sched_avg of struct rq is only used when group scheduling
is enabled, inside __update_tg_runnable_avg(), to update the per-cpu
representation of a task group.  I.e., there is no need to maintain
the runnable avg of a rq in the !CONFIG_FAIR_GROUP_SCHED case.

This patch guards struct sched_avg of struct rq and
update_rq_runnable_avg() with CONFIG_FAIR_GROUP_SCHED.

There is an extra empty definition for update_rq_runnable_avg()
necessary for the !CONFIG_FAIR_GROUP_SCHED && CONFIG_SMP case.

The function print_cfs_group_stats(), which prints out struct sched_avg
of struct rq, is already guarded with CONFIG_FAIR_GROUP_SCHED.

Reviewed-by: Ben Segall <bsegall@google.com>
Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/530DCDC5.1060406@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 kernel/sched/fair.c  | 13 +++++++------
 kernel/sched/sched.h |  4 ++--
 2 files changed, 9 insertions(+), 8 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index a3a41c61..be4f7d9 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2374,12 +2374,19 @@ static inline void __update_group_entity_contrib(struct sched_entity *se)
 		se->avg.load_avg_contrib >>= NICE_0_SHIFT;
 	}
 }
+
+static inline void update_rq_runnable_avg(struct rq *rq, int runnable)
+{
+	__update_entity_runnable_avg(rq_clock_task(rq), &rq->avg, runnable);
+	__update_tg_runnable_avg(&rq->avg, &rq->cfs);
+}
 #else /* CONFIG_FAIR_GROUP_SCHED */
 static inline void __update_cfs_rq_tg_load_contrib(struct cfs_rq *cfs_rq,
 						 int force_update) {}
 static inline void __update_tg_runnable_avg(struct sched_avg *sa,
 						  struct cfs_rq *cfs_rq) {}
 static inline void __update_group_entity_contrib(struct sched_entity *se) {}
+static inline void update_rq_runnable_avg(struct rq *rq, int runnable) {}
 #endif /* CONFIG_FAIR_GROUP_SCHED */
 
 static inline void __update_task_entity_contrib(struct sched_entity *se)
@@ -2478,12 +2485,6 @@ static void update_cfs_rq_blocked_load(struct cfs_rq *cfs_rq, int force_update)
 	__update_cfs_rq_tg_load_contrib(cfs_rq, force_update);
 }
 
-static inline void update_rq_runnable_avg(struct rq *rq, int runnable)
-{
-	__update_entity_runnable_avg(rq_clock_task(rq), &rq->avg, runnable);
-	__update_tg_runnable_avg(&rq->avg, &rq->cfs);
-}
-
 /* Add the load generated by se into cfs_rq's child load-average */
 static inline void enqueue_entity_load_avg(struct cfs_rq *cfs_rq,
 						  struct sched_entity *se,
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index d608125..046084e 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -541,6 +541,8 @@ struct rq {
 #ifdef CONFIG_FAIR_GROUP_SCHED
 	/* list of leaf cfs_rq on this cpu: */
 	struct list_head leaf_cfs_rq_list;
+
+	struct sched_avg avg;
 #endif /* CONFIG_FAIR_GROUP_SCHED */
 
 	/*
@@ -630,8 +632,6 @@ struct rq {
 #ifdef CONFIG_SMP
 	struct llist_head wake_list;
 #endif
-
-	struct sched_avg avg;
 };
 
 static inline int cpu_of(struct rq *rq)
