From: Kurt Kanzenbach <kurt.kanzenbach@linutronix.de>
To: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will.deacon@arm.com>,
Sebastian Andrzej Siewior <bigeasy@linutronix.de>,
linux-kernel@vger.kernel.org,
Daniel Wagner <daniel.wagner@siemens.com>,
Peter Zijlstra <peterz@infradead.org>,
x86@kernel.org, Linus Torvalds <torvalds@linux-foundation.org>,
"H. Peter Anvin" <hpa@zytor.com>,
Boqun Feng <boqun.feng@gmail.com>,
"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
Mark Rutland <mark.rutland@arm.com>
Subject: Re: [Problem] Cache line starvation
Date: Fri, 28 Sep 2018 11:05:22 +0200 [thread overview]
Message-ID: <20180928090521.mj5elgqnla6qcz2r@linutronix.de> (raw)
In-Reply-To: <alpine.DEB.2.21.1809271644120.8118@nanos.tec.linutronix.de>
Hi Thomas,
On Thu, Sep 27, 2018 at 04:47:47PM +0200, Thomas Gleixner wrote:
> On Thu, 27 Sep 2018, Kurt Kanzenbach wrote:
> > On Thu, Sep 27, 2018 at 04:25:47PM +0200, Kurt Kanzenbach wrote:
> > > However, the issue still triggers fine. With stress-ng we're able to
> > > generate latency in millisecond range. The only workaround we've found
> > > so far is to add a "delay" in cpu_relax().
> >
> > It might be interesting for you how we added the delay. We've used:
> >
> > static inline void cpu_relax(void)
> > {
> > 	volatile int i = 0;
> >
> > 	asm volatile("yield" ::: "memory");
> > 	while (i++ <= 1000);
> > }
> >
> > Of course it's not efficient, but it works.
>
> I wonder if it's just the store on the stack which makes it work. I've seen
> that when instrumenting x86. When the careful instrumentation just stayed
> in registers it failed. Once it was too much and stack got involved it
> vanished away.
I've performed more tests: Adding a store to a global variable just
before calling cpu_relax() doesn't help. Furthermore, adding up to 20
yield instructions (just like you did on x86) didn't work either.
Thanks,
Kurt
Thread overview: 21+ messages
2018-09-21 12:02 [Problem] Cache line starvation Sebastian Andrzej Siewior
2018-09-21 12:13 ` Thomas Gleixner
2018-09-21 12:50 ` Sebastian Andrzej Siewior
2018-09-21 12:20 ` Peter Zijlstra
2018-09-21 12:54 ` Thomas Gleixner
2018-10-03 7:51 ` Catalin Marinas
2018-10-03 8:07 ` Thomas Gleixner
2018-10-03 8:28 ` Peter Zijlstra
2018-10-03 10:43 ` Thomas Gleixner
2018-10-03 8:23 ` Peter Zijlstra
2018-09-26 7:34 ` Peter Zijlstra
2018-09-26 8:04 ` Thomas Gleixner
2018-09-26 12:53 ` Will Deacon
2018-09-27 14:25 ` Kurt Kanzenbach
2018-09-27 14:41 ` Kurt Kanzenbach
2018-09-27 14:47 ` Thomas Gleixner
2018-09-28 9:05 ` Kurt Kanzenbach [this message]
2018-09-28 15:26 ` Kurt Kanzenbach
2018-09-28 19:26 ` Sebastian Andrzej Siewior
2018-09-28 19:34 ` Thomas Gleixner
2018-10-02 6:31 ` Daniel Wagner