From: peterz@infradead.org
To: Chris Wilson <chris@chris-wilson.co.uk>
Cc: mingo@kernel.org, tglx@linutronix.de,
linux-kernel@vger.kernel.org, juri.lelli@redhat.com,
vincent.guittot@linaro.org, dietmar.eggemann@arm.com,
rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de,
paulmck@kernel.org, frederic@kernel.org,
torvalds@linux-foundation.org, hch@lst.de
Subject: Re: [PATCH -v2 1/5] sched: Fix ttwu() race
Date: Tue, 21 Jul 2020 13:37:19 +0200 [thread overview]
Message-ID: <20200721113719.GI119549@hirez.programming.kicks-ass.net> (raw)
In-Reply-To: <159532854586.15672.5123219635720172265@build.alporthouse.com>

On Tue, Jul 21, 2020 at 11:49:05AM +0100, Chris Wilson wrote:
> Quoting Peter Zijlstra (2020-06-22 11:01:23)
> > @@ -2378,6 +2385,9 @@ static inline bool ttwu_queue_cond(int c
> > static bool ttwu_queue_wakelist(struct task_struct *p, int cpu, int wake_flags)
> > {
> > if (sched_feat(TTWU_QUEUE) && ttwu_queue_cond(cpu, wake_flags)) {
> > + if (WARN_ON_ONCE(cpu == smp_processor_id()))
> > + return false;
> > +
> > sched_clock_cpu(cpu); /* Sync clocks across CPUs */
> > __ttwu_queue_wakelist(p, cpu, wake_flags);
> > return true;
>
> We've been hitting this warning frequently, but have never seen the
> rcu-torture-esque oops ourselves.
How easy is it to hit this? What, if anything, can I do to make my own
computer go bang?
> <4> [181.766705] RIP: 0010:ttwu_queue_wakelist+0xbc/0xd0
> <4> [181.766710] Code: 00 00 00 5b 5d 41 5c 41 5d c3 31 c0 5b 5d 41 5c 41 5d c3 31 c0 f6 c3 08 74 f2 48 c7 c2 00 ad 03 00 83 7c 11 40 01 77 e4 eb 80 <0f> 0b 31 c0 eb dc 0f 1f 40 00 66 2e 0f 1f 84 00 00 00 00 00 bf 17
> <4> [181.766726] RSP: 0018:ffffc90000003e08 EFLAGS: 00010046
> <4> [181.766733] RAX: 0000000000000000 RBX: 00000000ffffffff RCX: ffff888276a00000
> <4> [181.766740] RDX: 000000000003ad00 RSI: ffffffff8232045b RDI: ffffffff8233103e
> <4> [181.766747] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000001
> <4> [181.766754] R10: 00000000d3fa25c3 R11: 0000000053712267 R12: ffff88825b912940
> <4> [181.766761] R13: 0000000000000000 R14: 0000000000000087 R15: 000000000003ad00
> <4> [181.766769] FS: 0000000000000000(0000) GS:ffff888276a00000(0000) knlGS:0000000000000000
> <4> [181.766777] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> <4> [181.766783] CR2: 000055b8245814e0 CR3: 0000000005610003 CR4: 00000000003606f0
> <4> [181.766790] Call Trace:
> <4> [181.766794] <IRQ>
> <4> [181.766798] try_to_wake_up+0x21b/0x690
> <4> [181.766805] autoremove_wake_function+0xc/0x50
> <4> [181.766858] __i915_sw_fence_complete+0x1ee/0x250 [i915]
> <4> [181.766912] dma_i915_sw_fence_wake+0x2d/0x40 [i915]
Please don't trim oopses.
> We are seeing this on the ttwu_queue() path, so with p->on_cpu=0, and the
> warning is cleared up by
>
> - if (WARN_ON_ONCE(cpu == smp_processor_id()))
> + if (WARN_ON_ONCE(p->on_cpu && cpu == smp_processor_id()))
>
> which would appear to restore the old behaviour for ttwu_queue() and
> seem to be consistent with the intent of this patch. Hopefully this
> helps identify the problem correctly.
Hurmph, that's actively wrong. We should never queue to self, as that
would result in self-IPI, which is not possible on a bunch of archs. It
works for you because x86 can in fact do that.
So ttwu_queue_cond() will only return true when:

 - target-cpu and current-cpu do not share cache; it cannot be this
   condition, because you _always_ share cache with yourself.

 - WF_ON_CPU is set and target-cpu has nr_running <= 1; which means
   p->on_cpu == true.
So now you have cpu == smp_processor_id() && p->on_cpu == 1; however,
your modified WARN contradicts that.
*puzzle*