From: Johannes Weiner <hannes@cmpxchg.org>
To: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	Tejun Heo <tj@kernel.org>, Suren Baghdasaryan <surenb@google.com>,
	Daniel Drake <drake@endlessm.com>,
	Vinayak Menon <vinmenon@codeaurora.org>,
	Christopher Lameter <cl@linux.com>,
	Mike Galbraith <efault@gmx.de>,
	Shakeel Butt <shakeelb@google.com>,
	Peter Enderborg <peter.enderborg@sony.com>,
	linux-mm@kvack.org, cgroups@vger.kernel.org,
	linux-kernel@vger.kernel.org, kernel-team@fb.com
Subject: Re: [PATCH 8/9] psi: pressure stall information for CPU, memory, and IO
Date: Mon, 6 Aug 2018 11:19:28 -0400	[thread overview]
Message-ID: <20180806151928.GB9888@cmpxchg.org> (raw)
In-Reply-To: <20180803165641.GA2476@hirez.programming.kicks-ass.net>

On Fri, Aug 03, 2018 at 06:56:41PM +0200, Peter Zijlstra wrote:
> On Wed, Aug 01, 2018 at 11:19:57AM -0400, Johannes Weiner wrote:
> > +static bool psi_update_stats(struct psi_group *group)
> > +{
> > +	u64 deltas[NR_PSI_STATES - 1] = { 0, };
> > +	unsigned long missed_periods = 0;
> > +	unsigned long nonidle_total = 0;
> > +	u64 now, expires, period;
> > +	int cpu;
> > +	int s;
> > +
> > +	mutex_lock(&group->stat_lock);
> > +
> > +	/*
> > +	 * Collect the per-cpu time buckets and average them into a
> > +	 * single time sample that is normalized to wallclock time.
> > +	 *
> > +	 * For averaging, each CPU is weighted by its non-idle time in
> > +	 * the sampling period. This eliminates artifacts from uneven
> > +	 * loading, or even entirely idle CPUs.
> > +	 *
> > +	 * We don't need to synchronize against CPU hotplugging. If we
> > +	 * see a CPU that's online and has samples, we incorporate it.
> > +	 */
> > +	for_each_online_cpu(cpu) {
> > +		struct psi_group_cpu *groupc = per_cpu_ptr(group->pcpu, cpu);
> > +		u32 uninitialized_var(nonidle);
> 
> urgh.. I can see why the compiler got confused. Dodgy :-)

:-) I think we can make this cleaner. Something like this (modulo the
READ_ONCE/WRITE_ONCE you pointed out in the other email)?

diff --git a/kernel/sched/psi.c b/kernel/sched/psi.c
index abccfddba5d5..ce6f02ada1cd 100644
--- a/kernel/sched/psi.c
+++ b/kernel/sched/psi.c
@@ -220,6 +220,49 @@ static bool test_state(unsigned int *tasks, enum psi_states state)
 	}
 }
 
+static u32 read_update_delta(struct psi_group_cpu *groupc,
+			     enum psi_states state, int cpu)
+{
+	u32 time, delta;
+
+	time = READ_ONCE(groupc->times[state]);
+	/*
+	 * In addition to already concluded states, we also
+	 * incorporate currently active states on the CPU, since
+	 * states may last for many sampling periods.
+	 *
+	 * This way we keep our delta sampling buckets small (u32) and
+	 * our reported pressure close to what's actually happening.
+	 */
+	if (test_state(groupc->tasks, state)) {
+		/*
+		 * We can race with a state change and need to make
+		 * sure the state_start update is ordered against the
+		 * updates to the live state and the time buckets
+		 * (groupc->times).
+		 *
+		 * 1. If we observe task state that needs to be
+		 * recorded, make sure we see state_start from when
+		 * that state went into effect or we'll count time
+		 * from the previous state.
+		 *
+		 * 2. If the time delta has already been added to the
+		 * bucket, make sure we don't see it in state_start or
+		 * we'll count it twice.
+		 *
+		 * If the time delta is out of state_start but not in
+		 * the time bucket yet, we'll miss it entirely and
+		 * handle it in the next period.
+		 */
+		smp_rmb();
+		time += cpu_clock(cpu) - groupc->state_start;
+	}
+	delta = time - groupc->times_prev[state];
+	groupc->times_prev[state] = time;
+
+	return delta;
+}
+
 static bool psi_update_stats(struct psi_group *group)
 {
 	u64 deltas[NR_PSI_STATES - 1] = { 0, };
@@ -244,60 +287,17 @@ static bool psi_update_stats(struct psi_group *group)
 	 */
 	for_each_online_cpu(cpu) {
 		struct psi_group_cpu *groupc = per_cpu_ptr(group->pcpu, cpu);
-		u32 uninitialized_var(nonidle);
-
-		BUILD_BUG_ON(PSI_NONIDLE != NR_PSI_STATES - 1);
-
-		for (s = PSI_NONIDLE; s >= 0; s--) {
-			u32 time, delta;
-
-			time = READ_ONCE(groupc->times[s]);
-			/*
-			 * In addition to already concluded states, we
-			 * also incorporate currently active states on
-			 * the CPU, since states may last for many
-			 * sampling periods.
-			 *
-			 * This way we keep our delta sampling buckets
-			 * small (u32) and our reported pressure close
-			 * to what's actually happening.
-			 */
-			if (test_state(groupc->tasks, s)) {
-				/*
-				 * We can race with a state change and
-				 * need to make sure the state_start
-				 * update is ordered against the
-				 * updates to the live state and the
-				 * time buckets (groupc->times).
-				 *
-				 * 1. If we observe task state that
-				 * needs to be recorded, make sure we
-				 * see state_start from when that
-				 * state went into effect or we'll
-				 * count time from the previous state.
-				 *
-				 * 2. If the time delta has already
-				 * been added to the bucket, make sure
-				 * we don't see it in state_start or
-				 * we'll count it twice.
-				 *
-				 * If the time delta is out of
-				 * state_start but not in the time
-				 * bucket yet, we'll miss it entirely
-				 * and handle it in the next period.
-				 */
-				smp_rmb();
-				time += cpu_clock(cpu) - groupc->state_start;
-			}
-			delta = time - groupc->times_prev[s];
-			groupc->times_prev[s] = time;
-
-			if (s == PSI_NONIDLE) {
-				nonidle = nsecs_to_jiffies(delta);
-				nonidle_total += nonidle;
-			} else {
-				deltas[s] += (u64)delta * nonidle;
-			}
+		u32 nonidle;
+
+		nonidle = read_update_delta(groupc, PSI_NONIDLE, cpu);
+		nonidle = nsecs_to_jiffies(nonidle);
+		nonidle_total += nonidle;
+
+		for (s = 0; s < PSI_NONIDLE; s++) {
+			u32 delta;
+
+			delta = read_update_delta(groupc, s, cpu);
+			deltas[s] += (u64)delta * nonidle;
 		}
 	}
 
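As an aside, the u32 buckets in read_update_delta() stay correct across
wrap-around purely through unsigned arithmetic: time - times_prev yields
the amount accrued since the last sample even after the raw bucket has
wrapped, as long as less than 2^32 ns of that state accumulate between
two samples. A minimal standalone sketch of that property (plain
userspace C, not kernel code; the names and numbers are made up for
illustration):

#include <assert.h>
#include <stdint.h>

/* Same idea as the time/times_prev pair above, in isolation. */
static uint32_t sample_delta(uint32_t time, uint32_t *prev)
{
	uint32_t delta = time - *prev;	/* well-defined modulo 2^32 */

	*prev = time;
	return delta;
}

int main(void)
{
	uint32_t prev = UINT32_MAX - 50;	/* bucket about to wrap */
	/* 200 more ns accrue; the raw bucket value wraps around to 149 */
	uint32_t time = UINT32_MAX - 50 + 200;

	assert(sample_delta(time, &prev) == 200);
	return 0;
}

The only requirement is that samples are taken often enough that a
single CPU can't accrue 2^32 ns (about 4.3 seconds) of one state in
between.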

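And to spell out the arithmetic behind the weighting: each per-cpu
delta is multiplied by that CPU's non-idle time, and the tail of
psi_update_stats() (not part of this hunk) then divides the accumulated
sums by nonidle_total to finish the weighted average the comment at the
top of the loop describes. A made-up two-CPU example in plain userspace
C (not kernel code, all numbers invented for illustration):

#include <stdint.h>
#include <stdio.h>

struct cpu_sample {
	uint32_t stall;		/* stall time this period, in ms */
	uint32_t nonidle;	/* non-idle time this period, in ms */
};

int main(void)
{
	struct cpu_sample cpu[] = {
		{ 50, 100 },	/* busy CPU, stalled half its non-idle time */
		{  0,  10 },	/* mostly idle CPU, no stalls */
	};
	uint64_t weighted = 0, nonidle_total = 0;

	for (size_t i = 0; i < sizeof(cpu) / sizeof(cpu[0]); i++) {
		weighted += (uint64_t)cpu[i].stall * cpu[i].nonidle;
		nonidle_total += cpu[i].nonidle;
	}

	/* 50*100 / (100+10) ~= 45ms of stall per sampling period */
	printf("weighted stall: %llu ms\n",
	       (unsigned long long)(weighted / (nonidle_total ? nonidle_total : 1)));
	return 0;
}

The mostly idle CPU carries almost no weight, so it can't drag the
reported pressure down, which is exactly the artifact the weighting is
meant to avoid.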
