Linux-ARM-Kernel Archive on lore.kernel.org
From: Jayachandran Chandrasekharan Nair <jnair@marvell.com>
To: Linus Torvalds <torvalds@linux-foundation.org>
Cc: "catalin.marinas@arm.com" <catalin.marinas@arm.com>,
	Jan Glauber <jglauber@marvell.com>,
	"will.deacon@arm.com" <will.deacon@arm.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [RFC] Disable lockref on arm64
Date: Thu, 2 May 2019 23:19:02 +0000
Message-ID: <20190502231858.GB13168@dc5-eodlnx05.marvell.com> (raw)
In-Reply-To: <CAHk-=wjmtMrxC1nSEHarBn8bW+hNXGv=2YeAWmTw1o54V8GKWA@mail.gmail.com>

On Thu, May 02, 2019 at 09:12:18AM -0700, Linus Torvalds wrote:
> On Thu, May 2, 2019 at 1:27 AM Jan Glauber <jglauber@marvell.com> wrote:
> >
> > I'll see how x86 runs the same testcase, I thought that playing
> > cacheline ping-pong is not the optimal use case for any CPU.
> 
> Oh, ping-pong is always bad.
> 
> But from past experience, x86 tends to be able to always do tight a
> cmpxchg loop without failing more than a once or twice, which is all
> you need for things like this.

I don't really see the point you are making about hardware. If you
look at the test case, you have about 64 cores doing CAS to the same
location. At any point one of them will succeed and the other 63 will
fail - and in our case, since cpu_relax() is a no-op on arm64, they
sit in a tight loop, mostly failing.

And further due to the nature of the test case, the successful thread
will come back almost immediately with another CAS.

> And it's "easy" to do in hardware on a CPU: all you need to do is
> guarantee that when you have a cmpxchg loop, the cacheline is sticky
> enough that it stays around at the local CPU for the duration of one
> loop entry (ie from one cmpxchg to the next).
> 
> Obviously you can do that wrong too, and make cachelines *too* sticky,
> and then you get fairness issues.

This is certainly not the case: we are not bouncing cachelines around
without making any progress. We have all 64 cores hitting the same
location in a very tight loop, slowing the system down. And you will
get fairness issues anyway about which of the failing cores succeeds
next.

The testcase does not hang indefinitely, it eventually completes. The
scaling loss is, in my opinion, due to the naive lockref implementation,
rather than due to a hardware limitation.

Are you expecting the hardware cache coherency implementation to have
the equivalent of queued locks and block potentially failing CAS?

After speaking to the folks doing performance comparisons here, x86
suffers in the same test case as well, when there are enough cores.

Your patch that switches to a spinlock (in this case a queued spinlock)
works nicely in the high-contention case. Is this something that will be
merged to mainline? We can provide some testing results if that would help.
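For illustration, the general shape of that approach - try the lockless cmpxchg fast path a bounded number of times, then fall back to the locked slow path - can be sketched as below. This is a hedged sketch, not the actual patch: `MAX_RETRIES` is an arbitrary cut-off, a pthread mutex stands in for the kernel's queued spinlock, and the real lockref packs the count and lock into one word:

```c
#include <pthread.h>
#include <stdatomic.h>
#include <assert.h>

#define MAX_RETRIES 100     /* illustrative cut-off, not the patch's value */

struct ref {
    _Atomic long count;
    pthread_mutex_t lock;   /* stand-in for the kernel's queued spinlock */
};

/* Attempt the lockless cmpxchg fast path a bounded number of times;
 * under heavy contention, stop spinning on a cacheline that 63 other
 * cores are fighting over and take the lock instead. */
static void ref_get(struct ref *r)
{
    long old = atomic_load(&r->count);
    for (int retry = 0; retry < MAX_RETRIES; retry++) {
        /* on failure, 'old' is reloaded with the current value */
        if (atomic_compare_exchange_weak(&r->count, &old, old + 1))
            return;          /* fast path won */
    }
    /* slow path: serialized, but fair and guaranteed to make progress */
    pthread_mutex_lock(&r->lock);
    atomic_fetch_add(&r->count, 1);
    pthread_mutex_unlock(&r->lock);
}
```

The design point is that the queued lock hands out the contended word fairly, so throughput degrades gracefully instead of collapsing into a storm of failing CAS operations.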
 
> But it really sounds like what happens for your ThunderX2 case, the
> different CPU's steal each others cachelines so quickly that even when
> you get the cacheline, you don't then get to update it.
> 
> Does ThunderX2 do LSE atomics? Are the acquire/release versions really
> slow, perhaps, and more or less serializing (maybe it does the
> "release" logic even when the store _fails_?), so that doing two
> back-to-back cmpxchg ends up taking the core a "long" time, so that
> the cache subsystem then steals it easily in between cmpxchg's in a
> loop? Does the L1 cache maybe have no way to keep a line around from
> one cmpxchg to the next?

ThunderX2 has LSE atomics. It also has full out-of-order execution with
weak ordering for loads/stores, so a barrier will slow down execution.

Also, to address some points in the earlier rant: ThunderX2 is a fairly
beefy processor (based on Broadcom Vulcan); it compares well on per-core
performance with x86 (and with the A76 from what I hear, even though the
A76 came out a few years later). There are more cores per socket because
there is no ISA baggage, not because the core is weaker.
 
> This is (one example) where having a CPU and an interconnect that
> works together matters.  And yes, it probably needs a few generations
> of hardware tuning where people see problems and fix them.

The next generation ThunderX3 is in the works, and it will have even
more cores, it is going to be fun :)

JC


