From: Peng Liu <iwtbavbm@gmail.com>
To: linux-kernel@vger.kernel.org
Cc: mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com,
	vincent.guittot@linaro.org, dietmar.eggemann@arm.com,
	rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de,
	qais.yousef@arm.com, morten.rasmussen@arm.com
Subject: [PATCH] sched/fair: fix sgc->{min,max}_capacity miscalculation
Date: Tue, 31 Dec 2019 11:51:22 +0800
Message-ID: <20191231035122.GA10020@iZj6chx1xj0e0buvshuecpZ>

commit bf475ce0a3dd ("sched/fair: Add per-CPU min capacity to
sched_group_capacity") introduced per-cpu min_capacity.

commit e3d6d0cb66f2 ("sched/fair: Add sched_group per-CPU max capacity")
introduced per-cpu max_capacity.

sgc->capacity is the *SUM* of the capacities of all CPUs in the group,
whereas sgc->{min,max}_capacity each track a single CPU's capacity.
Comparing the accumulated sum against {min,max}_capacity therefore
makes no sense. Instead, compare the per-CPU capacity (or the child
group's {min,max}_capacity) in each iteration to obtain the group's
sgc->{min,max}_capacity.
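
To make the arithmetic concrete, here is a minimal userspace sketch of
the aggregation (not the kernel code; cpu_caps[] and its values are
made up for illustration, standing in for a group of one LITTLE CPU
among two big ones). It contrasts the pre-patch comparison against the
running sum with the post-patch per-CPU comparison:

#include <stdio.h>

#define min(a, b) ((a) < (b) ? (a) : (b))
#define max(a, b) ((a) > (b) ? (a) : (b))

int main(void)
{
	/* Hypothetical per-CPU capacities; not taken from any real system. */
	unsigned long cpu_caps[] = { 1024, 430, 1024 };
	unsigned long capacity = 0;
	unsigned long buggy_min = ~0UL, buggy_max = 0;
	unsigned long fixed_min = ~0UL, fixed_max = 0;

	for (int i = 0; i < 3; i++) {
		unsigned long cap = cpu_caps[i];

		capacity += cap;
		/* Pre-patch: the running *sum* is compared. */
		buggy_min = min(capacity, buggy_min);
		buggy_max = max(capacity, buggy_max);
		/* Post-patch: the per-CPU capacity is compared. */
		fixed_min = min(cap, fixed_min);
		fixed_max = max(cap, fixed_max);
	}

	/* Prints: sum=2478 buggy(min=1024 max=2478) fixed(min=430 max=1024) */
	printf("sum=%lu buggy(min=%lu max=%lu) fixed(min=%lu max=%lu)\n",
	       capacity, buggy_min, buggy_max, fixed_min, fixed_max);
	return 0;
}

With the buggy code, max_capacity ends up equal to the whole group's
summed capacity (2478) and min_capacity to whatever the first partial
sum happened to be (1024), neither of which is any CPU's capacity.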

Signed-off-by: Peng Liu <iwtbavbm@gmail.com>
---
 kernel/sched/fair.c | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 2d170b5da0e3..97b164fcda93 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7795,6 +7795,7 @@ void update_group_capacity(struct sched_domain *sd, int cpu)
 		for_each_cpu(cpu, sched_group_span(sdg)) {
 			struct sched_group_capacity *sgc;
 			struct rq *rq = cpu_rq(cpu);
+			unsigned long cap;
 
 			/*
 			 * build_sched_domains() -> init_sched_groups_capacity()
@@ -7808,14 +7809,16 @@ void update_group_capacity(struct sched_domain *sd, int cpu)
 			 * causing divide-by-zero issues on boot.
 			 */
 			if (unlikely(!rq->sd)) {
-				capacity += capacity_of(cpu);
+				cap = capacity_of(cpu);
+				capacity += cap;
+				min_capacity = min(cap, min_capacity);
+				max_capacity = max(cap, max_capacity);
 			} else {
 				sgc = rq->sd->groups->sgc;
 				capacity += sgc->capacity;
+				min_capacity = min(sgc->min_capacity, min_capacity);
+				max_capacity = max(sgc->max_capacity, max_capacity);
 			}
-
-			min_capacity = min(capacity, min_capacity);
-			max_capacity = max(capacity, max_capacity);
 		}
 	} else  {
 		/*
-- 
2.17.1


Thread overview: 6+ messages
2019-12-31  3:51 Peng Liu [this message]
2020-01-01  5:56 ` [PATCH] sched/fair: fix sgc->{min,max}_capacity miscalculation Valentin Schneider
2020-01-01 14:13   ` Peng Liu
2020-01-01 18:55     ` Valentin Schneider
2020-01-03 14:21       ` Peng Liu
2020-01-03 14:44         ` Valentin Schneider
