asahi.lists.linux.dev archive mirror
From: Jon Nettleton <jon@solid-run.com>
To: Will Deacon <will@kernel.org>
Cc: Hector Martin <marcan@marcan.st>,
	Peter Zijlstra <peterz@infradead.org>,
	 Arnd Bergmann <arnd@arndb.de>, Ingo Molnar <mingo@kernel.org>,
	Alan Stern <stern@rowland.harvard.edu>,
	 Andrea Parri <parri.andrea@gmail.com>,
	Boqun Feng <boqun.feng@gmail.com>,
	 Nicholas Piggin <npiggin@gmail.com>,
	David Howells <dhowells@redhat.com>,
	 Jade Alglave <j.alglave@ucl.ac.uk>,
	Luc Maranget <luc.maranget@inria.fr>,
	 "Paul E. McKenney" <paulmck@kernel.org>,
	Akira Yokosawa <akiyks@gmail.com>,
	 Daniel Lustig <dlustig@nvidia.com>,
	Joel Fernandes <joel@joelfernandes.org>,
	 Mark Rutland <mark.rutland@arm.com>,
	Jonathan Corbet <corbet@lwn.net>, Tejun Heo <tj@kernel.org>,
	 jirislaby@kernel.org, Marc Zyngier <maz@kernel.org>,
	 Catalin Marinas <catalin.marinas@arm.com>,
	Oliver Neukum <oneukum@suse.com>,
	 Linus Torvalds <torvalds@linux-foundation.org>,
	linux-kernel@vger.kernel.org,  linux-arch@vger.kernel.org,
	linux-doc@vger.kernel.org,  linux-arm-kernel@lists.infradead.org,
	Asahi Linux <asahi@lists.linux.dev>,
	 stable@vger.kernel.org
Subject: Re: [PATCH] locking/atomic: Make test_and_*_bit() ordered on failure
Date: Tue, 16 Aug 2022 19:49:16 +0200
Message-ID: <CABdtJHt_3TKJVLhLiYMcBtvyA_DwaNapv1xHVeDdQH7cAC6YWw@mail.gmail.com>
In-Reply-To: <20220816173654.GA11766@willie-the-truck>

On Tue, Aug 16, 2022 at 7:38 PM Will Deacon <will@kernel.org> wrote:
>
> On Tue, Aug 16, 2022 at 11:30:45PM +0900, Hector Martin wrote:
> > On 16/08/2022 23.04, Will Deacon wrote:
> > >> diff --git a/include/asm-generic/bitops/atomic.h b/include/asm-generic/bitops/atomic.h
> > >> index 3096f086b5a3..71ab4ba9c25d 100644
> > >> --- a/include/asm-generic/bitops/atomic.h
> > >> +++ b/include/asm-generic/bitops/atomic.h
> > >> @@ -39,9 +39,6 @@ arch_test_and_set_bit(unsigned int nr, volatile unsigned long *p)
> > >>    unsigned long mask = BIT_MASK(nr);
> > >>
> > >>    p += BIT_WORD(nr);
> > >> -  if (READ_ONCE(*p) & mask)
> > >> -          return 1;
> > >> -
> > >>    old = arch_atomic_long_fetch_or(mask, (atomic_long_t *)p);
> > >>    return !!(old & mask);
> > >>  }
> > >> @@ -53,9 +50,6 @@ arch_test_and_clear_bit(unsigned int nr, volatile unsigned long *p)
> > >>    unsigned long mask = BIT_MASK(nr);
> > >>
> > >>    p += BIT_WORD(nr);
> > >> -  if (!(READ_ONCE(*p) & mask))
> > >> -          return 0;
> > >> -
> > >>    old = arch_atomic_long_fetch_andnot(mask, (atomic_long_t *)p);
> > >>    return !!(old & mask);
> > >
> > > I suppose one sad thing about this is that, on arm64, we could reasonably
> > > keep the READ_ONCE() path with a DMB LD (R->RW) barrier before the return
> > > but I don't think we can express that in the Linux memory model so we
> > > end up in RmW territory every time.
> >
> > You'd need a barrier *before* the READ_ONCE(), since what we're trying
> > to prevent is a consumer from writing to the value without being able to
> > observe the writes that happened prior, while this side read the old
> > value. A barrier after the READ_ONCE() doesn't do anything, as that read
> > is the last memory operation in this thread (of the problematic sequence).
>
> Right, having gone back to your litmus test, I now realise it's the "SB"
> shape from the memory ordering terminology. It's funny because the arm64
> acquire/release instructions are RCsc and so upgrading the READ_ONCE()
> to an *arm64* acquire instruction would work for your specific case, but
> only because the preceding store is a release.
>
> > At that point, I'm not sure DMB LD / early read / LSE atomic would be
> > any faster than just always doing the LSE atomic?
>
> It depends a lot on the configuration of the system and the state of the
> relevant cacheline, but generally avoiding an RmW by introducing a barrier
> is likely to be a win. It just gets ugly here as we'd want to avoid the
> DMB in the case where we end up doing the RmW. Possibly we could do
> something funky like a test-and-test-and-test-and-set (!) where we do
> the DMB+READ_ONCE() only if the first READ_ONCE() has the bit set, but
> even just typing that is horrible and I'd _absolutely_ want to see perf
> numbers to show that it's a benefit once you start taking into account
> things like branch prediction.
>
> Anywho, since Linus has applied the patch and it should work, this is
> just an interesting aside.
>
> Will
>

It is moot since Linus has already taken the patch, but with a stock
kernel config I am still seeing a slight performance dip, though only
~1-2% in the specific tests I was running. Sorry about the noise; I
will need to look at my kernel builder and see what went wrong when I
have more time.

Cheers,
Jon


Thread overview: 15+ messages
2022-08-16  7:03 [PATCH] locking/atomic: Make test_and_*_bit() ordered on failure Hector Martin
2022-08-16  8:16 ` Arnd Bergmann
2022-08-16 12:29   ` Jon Nettleton
2022-08-16 13:00     ` Will Deacon
2022-08-16 13:05       ` Jon Nettleton
2022-08-16 13:23         ` Marc Zyngier
2022-08-16 14:06   ` Will Deacon
2022-08-16 18:14     ` Matthew Wilcox
2022-08-16 14:04 ` Will Deacon
2022-08-16 14:30   ` Hector Martin
2022-08-16 17:36     ` Will Deacon
2022-08-16 17:49       ` Jon Nettleton [this message]
2022-08-16 18:02         ` Linus Torvalds
2022-08-17  5:40           ` Jon Nettleton
2022-08-17  8:20 ` David Laight
