linux-kernel.vger.kernel.org archive mirror
* [PATCH v2] sched: Reduce contention in update_cfs_rq_blocked_load
@ 2014-08-26 23:11 Jason Low
  2014-08-26 23:24 ` Paul Turner
  0 siblings, 1 reply; 10+ messages in thread
From: Jason Low @ 2014-08-26 23:11 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Ben Segall; +Cc: linux-kernel, Jason Low

Based on perf profiles, the update_cfs_rq_blocked_load function consistently
shows up as consuming a noticeable percentage of system run time. This is
especially apparent on larger NUMA systems.

Much of the contention is in __update_cfs_rq_tg_load_contrib when we're
updating the tg load contribution stats. However, the values often change
very little, and much of the time they don't change at all. Despite that,
the update is always attempted whenever force_update is set.

This patch removes force_update from __update_cfs_rq_tg_load_contrib only.
The tg load contrib stats are then modified only when the delta is large
enough (in the current code, when it exceeds 12.5% of the existing
contribution). This rate-limits the updates while keeping the values
largely accurate.

When testing this change with AIM7 workloads, we found that it reduced the
overhead of the function by up to a factor of 20.
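The threshold test above can be sketched as a small stand-alone user-space
program. This is only an illustration of the rate-limiting idea, not kernel
code: the names maybe_publish and published are hypothetical, and the
atomic_long_add() on the shared task_group counter is replaced by a plain
addition.

```c
#include <assert.h>
#include <stdlib.h>

/* Last value published to the (notionally shared) counter. In the kernel
 * this corresponds to cfs_rq->tg_load_contrib; names here are illustrative. */
static long published;

/* Publish an update only when the pending delta exceeds 1/8 (12.5%) of the
 * previously published value, mirroring the patched condition
 * abs(tg_contrib) > cfs_rq->tg_load_contrib / 8. Returns 1 if the update
 * was applied, 0 if it was skipped. */
static int maybe_publish(long current)
{
	long delta = current - published;

	if (labs(delta) > published / 8) {
		published += delta;  /* atomic_long_add() in the kernel */
		return 1;
	}
	return 0;  /* delta too small; skip the contended update */
}
```

With the rate limit, a small fluctuation (e.g. 1000 to 1050, a 5% delta)
is skipped, avoiding a cache-line bounce on the shared counter, while a
larger move (1000 to 1200, a 20% delta) is still published.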

Cc: Yuyang Du <yuyang.du@intel.com>
Cc: Waiman Long <Waiman.Long@hp.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Mike Galbraith <umgwanakikbuti@gmail.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Aswin Chandramouleeswaran <aswin@hp.com>
Cc: Chegu Vinod <chegu_vinod@hp.com>
Cc: Scott J Norton <scott.norton@hp.com>
Signed-off-by: Jason Low <jason.low2@hp.com>
---
 kernel/sched/fair.c |   10 ++++------
 1 files changed, 4 insertions(+), 6 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index fea7d33..7a6e18b 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2352,8 +2352,7 @@ static inline u64 __synchronize_entity_decay(struct sched_entity *se)
 }
 
 #ifdef CONFIG_FAIR_GROUP_SCHED
-static inline void __update_cfs_rq_tg_load_contrib(struct cfs_rq *cfs_rq,
-						 int force_update)
+static inline void __update_cfs_rq_tg_load_contrib(struct cfs_rq *cfs_rq)
 {
 	struct task_group *tg = cfs_rq->tg;
 	long tg_contrib;
@@ -2361,7 +2360,7 @@ static inline void __update_cfs_rq_tg_load_contrib(struct cfs_rq *cfs_rq,
 	tg_contrib = cfs_rq->runnable_load_avg + cfs_rq->blocked_load_avg;
 	tg_contrib -= cfs_rq->tg_load_contrib;
 
-	if (force_update || abs(tg_contrib) > cfs_rq->tg_load_contrib / 8) {
+	if (abs(tg_contrib) > cfs_rq->tg_load_contrib / 8) {
 		atomic_long_add(tg_contrib, &tg->load_avg);
 		cfs_rq->tg_load_contrib += tg_contrib;
 	}
@@ -2436,8 +2435,7 @@ static inline void update_rq_runnable_avg(struct rq *rq, int runnable)
 	__update_tg_runnable_avg(&rq->avg, &rq->cfs);
 }
 #else /* CONFIG_FAIR_GROUP_SCHED */
-static inline void __update_cfs_rq_tg_load_contrib(struct cfs_rq *cfs_rq,
-						 int force_update) {}
+static inline void __update_cfs_rq_tg_load_contrib(struct cfs_rq *cfs_rq) {}
 static inline void __update_tg_runnable_avg(struct sched_avg *sa,
 						  struct cfs_rq *cfs_rq) {}
 static inline void __update_group_entity_contrib(struct sched_entity *se) {}
@@ -2537,7 +2535,7 @@ static void update_cfs_rq_blocked_load(struct cfs_rq *cfs_rq, int force_update)
 		cfs_rq->last_decay = now;
 	}
 
-	__update_cfs_rq_tg_load_contrib(cfs_rq, force_update);
+	__update_cfs_rq_tg_load_contrib(cfs_rq);
 }
 
 /* Add the load generated by se into cfs_rq's child load-average */
-- 
1.7.1


end of thread, other threads:[~2014-09-09 15:00 UTC | newest]

Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
2014-08-26 23:11 [PATCH v2] sched: Reduce contention in update_cfs_rq_blocked_load Jason Low
2014-08-26 23:24 ` Paul Turner
2014-08-27 17:34   ` Jason Low
2014-08-27 23:32     ` Tim Chen
2014-08-28 19:46       ` Jason Low
2014-09-01 12:55         ` Peter Zijlstra
2014-09-02  7:41           ` Jason Low
2014-09-03 11:32             ` Peter Zijlstra
2014-09-09 14:52             ` [tip:sched/core] sched: Reduce contention in update_cfs_rq_blocked_load() tip-bot for Jason Low
2014-08-27 19:18   ` [PATCH RESULT] sched: Rewrite per entity runnable load average tracking v5 Yuyang Du
