From: Johannes Weiner <hannes@cmpxchg.org>
To: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>,
Andrew Morton <akpm@linux-foundation.org>,
Linus Torvalds <torvalds@linux-foundation.org>,
Tejun Heo <tj@kernel.org>, Suren Baghdasaryan <surenb@google.com>,
Daniel Drake <drake@endlessm.com>,
Vinayak Menon <vinmenon@codeaurora.org>,
Christoph Lameter <cl@linux.com>,
Peter Enderborg <peter.enderborg@sony.com>,
Shakeel Butt <shakeelb@google.com>,
Mike Galbraith <efault@gmx.de>,
linux-mm@kvack.org, cgroups@vger.kernel.org,
linux-kernel@vger.kernel.org, kernel-team@fb.com
Subject: Re: [PATCH 8/9] psi: pressure stall information for CPU, memory, and IO
Date: Fri, 7 Sep 2018 13:50:15 -0400
Message-ID: <20180907175015.GA8479@cmpxchg.org>
In-Reply-To: <20180907145858.GK24106@hirez.programming.kicks-ass.net>
On Fri, Sep 07, 2018 at 04:58:58PM +0200, Peter Zijlstra wrote:
> On Fri, Sep 07, 2018 at 10:44:22AM -0400, Johannes Weiner wrote:
>
> > > This does the whole seqcount thing 6x, which is a bit of a waste.
> >
> > [...]
> >
> > > It's a bit cumbersome, but that's because of C.
> >
> > I was actually debating exactly this with Suren before, but since this
> > is a super cold path I went with readability. I was also thinking that
> > restarts could happen quite regularly under heavy scheduler load, and
> > so keeping the individual retry sections small could be helpful - but
> > I didn't instrument this in any way.
>
> I was hoping going over the whole thing once would reduce the time we
> need to keep that line in shared mode and reduce traffic. And yes, this
> path is cold, but I was thinking about reducing the interference on the
> remote CPU.
>
> Alternatively, we memcpy the whole line under the seqlock and then do
> everything later.
>
> Also, this only has a single cpu_clock() invocation.
Good points.
How about the below? It's still pretty readable, and generates compact
code inside the now single retry section:
ffffffff81ed464f:       44 89 ff                mov    %r15d,%edi
ffffffff81ed4652:       e8 00 00 00 00          callq  ffffffff81ed4657 <update_stats+0xca>
                        ffffffff81ed4653: R_X86_64_PLT32        sched_clock_cpu-0x4
                memcpy(times, groupc->times, sizeof(groupc->times));
ffffffff81ed4657:       49 8b 14 24             mov    (%r12),%rdx
                state_start = groupc->state_start;
ffffffff81ed465b:       48 8b 4b 50             mov    0x50(%rbx),%rcx
                memcpy(times, groupc->times, sizeof(groupc->times));
ffffffff81ed465f:       48 89 54 24 30          mov    %rdx,0x30(%rsp)
ffffffff81ed4664:       49 8b 54 24 08          mov    0x8(%r12),%rdx
ffffffff81ed4669:       48 89 54 24 38          mov    %rdx,0x38(%rsp)
ffffffff81ed466e:       49 8b 54 24 10          mov    0x10(%r12),%rdx
ffffffff81ed4673:       48 89 54 24 40          mov    %rdx,0x40(%rsp)
                memcpy(tasks, groupc->tasks, sizeof(groupc->tasks));
ffffffff81ed4678:       49 8b 55 00             mov    0x0(%r13),%rdx
ffffffff81ed467c:       48 89 54 24 24          mov    %rdx,0x24(%rsp)
ffffffff81ed4681:       41 8b 55 08             mov    0x8(%r13),%edx
ffffffff81ed4685:       89 54 24 2c             mov    %edx,0x2c(%rsp)
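[ For illustration only, not part of the patch: the single-retry-section
  snapshot pattern being discussed can be sketched in plain userspace C.
  Everything here (my_groupc, read_begin, read_retry, NSTATES, the
  numbers) is invented stand-in code, not the kernel's seqcount API --
  it just shows one clock read plus memcpys inside one retry section,
  with all per-state math done afterwards outside of it. ]

/*
 * Hypothetical userspace sketch of the snapshot-under-seqcount idea:
 * take one timestamp and copy a coherent view of the shared state in a
 * single retry section, then process the copy at leisure.
 */
#include <stdatomic.h>
#include <stdio.h>
#include <string.h>

#define NSTATES 3

struct my_groupc {
	atomic_uint seq;		/* even = stable, odd = writer active */
	unsigned long long times[NSTATES];
	unsigned long long state_start;
};

static unsigned int read_begin(struct my_groupc *g)
{
	unsigned int s;

	/* Spin until no writer is in progress (sequence is even) */
	while ((s = atomic_load_explicit(&g->seq, memory_order_acquire)) & 1)
		;
	return s;
}

static int read_retry(struct my_groupc *g, unsigned int s)
{
	atomic_thread_fence(memory_order_acquire);
	return atomic_load_explicit(&g->seq, memory_order_relaxed) != s;
}

static void get_recent_times(struct my_groupc *g,
			     unsigned long long *times,
			     unsigned long long *state_start)
{
	unsigned int seq;

	/* One retry section covering the whole snapshot */
	do {
		seq = read_begin(g);
		memcpy(times, g->times, sizeof(g->times));
		*state_start = g->state_start;
	} while (read_retry(g, seq));
}

int main(void)
{
	struct my_groupc g = {
		.times = { 100, 200, 300 },
		.state_start = 50,
	};
	unsigned long long times[NSTATES], start;

	atomic_init(&g.seq, 0);
	get_recent_times(&g, times, &start);
	printf("%llu %llu %llu %llu\n", times[0], times[1], times[2], start);
	return 0;
}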
---
diff --git a/kernel/sched/psi.c b/kernel/sched/psi.c
index 0f07749b60a4..595414599b98 100644
--- a/kernel/sched/psi.c
+++ b/kernel/sched/psi.c
@@ -197,17 +197,26 @@ static bool test_state(unsigned int *tasks, enum psi_states state)
 	}
 }
 
-static u32 get_recent_time(struct psi_group *group, int cpu,
-			   enum psi_states state)
+static void get_recent_times(struct psi_group *group, int cpu, u32 *times)
 {
 	struct psi_group_cpu *groupc = per_cpu_ptr(group->pcpu, cpu);
+	unsigned int tasks[NR_PSI_TASK_COUNTS];
+	u64 now, state_start;
 	unsigned int seq;
-	u32 time, delta;
+	int s;
 
+	/* Snapshot a coherent view of the CPU state */
 	do {
 		seq = read_seqcount_begin(&groupc->seq);
+		now = cpu_clock(cpu);
+		memcpy(times, groupc->times, sizeof(groupc->times));
+		memcpy(tasks, groupc->tasks, sizeof(groupc->tasks));
+		state_start = groupc->state_start;
+	} while (read_seqcount_retry(&groupc->seq, seq));
 
-		time = groupc->times[state];
+	/* Calculate state time deltas against the previous snapshot */
+	for (s = 0; s < NR_PSI_STATES; s++) {
+		u32 delta;
 		/*
 		 * In addition to already concluded states, we also
 		 * incorporate currently active states on the CPU,
@@ -217,14 +226,14 @@ static u32 get_recent_time(struct psi_group *group, int cpu,
 		 * (u32) and our reported pressure close to what's
 		 * actually happening.
 		 */
-		if (test_state(groupc->tasks, state))
-			time += cpu_clock(cpu) - groupc->state_start;
-	} while (read_seqcount_retry(&groupc->seq, seq));
+		if (test_state(tasks, s))
+			times[s] += now - state_start;
 
-	delta = time - groupc->times_prev[state];
-	groupc->times_prev[state] = time;
+		delta = times[s] - groupc->times_prev[s];
+		groupc->times_prev[s] = times[s];
 
-	return delta;
+		times[s] = delta;
+	}
 }
 
 static void calc_avgs(unsigned long avg[3], int missed_periods,
@@ -267,18 +276,16 @@ static bool update_stats(struct psi_group *group)
 	 * loading, or even entirely idle CPUs.
 	 */
 	for_each_possible_cpu(cpu) {
+		u32 times[NR_PSI_STATES];
 		u32 nonidle;
 
-		nonidle = get_recent_time(group, cpu, PSI_NONIDLE);
-		nonidle = nsecs_to_jiffies(nonidle);
-		nonidle_total += nonidle;
+		get_recent_times(group, cpu, times);
 
-		for (s = 0; s < PSI_NONIDLE; s++) {
-			u32 delta;
+		nonidle = nsecs_to_jiffies(times[PSI_NONIDLE]);
+		nonidle_total += nonidle;
 
-			delta = get_recent_time(group, cpu, s);
-			deltas[s] += (u64)delta * nonidle;
-		}
+		for (s = 0; s < PSI_NONIDLE; s++)
+			deltas[s] += (u64)times[s] * nonidle;
 	}
 
 	/*
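[ Again purely illustrative, not part of the patch: the nonidle
  weighting in the update_stats() hunk above can be demonstrated with
  made-up numbers. The array layout and all values below are invented
  for the example; only the weighting arithmetic mirrors the patch. ]

/*
 * Sketch of the per-CPU aggregation: each CPU's state deltas are
 * weighted by that CPU's non-idle time, so a fully idle CPU does not
 * dilute the reported pressure.
 */
#include <stdio.h>

#define NSTATES 2	/* stand-in for the states below PSI_NONIDLE */

int main(void)
{
	/* per-CPU state deltas plus nonidle time, invented numbers */
	unsigned int times[2][NSTATES + 1] = {
		{ 30, 10, 100 },	/* cpu0: active, nonidle = 100 */
		{  0,  0,   0 },	/* cpu1: idle, contributes nothing */
	};
	unsigned long long deltas[NSTATES] = { 0, 0 };
	unsigned long long nonidle_total = 0;
	int cpu, s;

	for (cpu = 0; cpu < 2; cpu++) {
		unsigned int nonidle = times[cpu][NSTATES];

		nonidle_total += nonidle;
		for (s = 0; s < NSTATES; s++)
			deltas[s] += (unsigned long long)times[cpu][s] * nonidle;
	}

	/* Divide out the weights to get the nonidle-weighted aggregate */
	for (s = 0; s < NSTATES; s++)
		printf("%llu ", nonidle_total ? deltas[s] / nonidle_total : 0);
	printf("\n");
	return 0;
}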