From mboxrd@z Thu Jan 1 00:00:00 1970
Subject: Re: [PATCH] locking/atomic: Make test_and_*_bit() ordered on failure
From: Jon Nettleton
Date: Tue, 16 Aug 2022 19:49:16 +0200
To: Will Deacon
Cc: Hector Martin, Peter Zijlstra, Arnd Bergmann, Ingo Molnar, Alan Stern,
 Andrea Parri, Boqun Feng, Nicholas Piggin, David Howells, Jade Alglave,
 Luc Maranget, "Paul E. McKenney", Akira Yokosawa, Daniel Lustig,
 Joel Fernandes, Mark Rutland, Jonathan Corbet, Tejun Heo,
 jirislaby@kernel.org, Marc Zyngier, Catalin Marinas, Oliver Neukum,
 Linus Torvalds, linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org,
 linux-doc@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 Asahi Linux, stable@vger.kernel.org
X-Mailing-List: asahi@lists.linux.dev
In-Reply-To: <20220816173654.GA11766@willie-the-truck>
References: <20220816070311.89186-1-marcan@marcan.st>
 <20220816140423.GC11202@willie-the-truck>
 <20220816173654.GA11766@willie-the-truck>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"

On Tue, Aug 16, 2022 at 7:38 PM Will Deacon wrote:
>
> On Tue, Aug 16, 2022 at 11:30:45PM +0900, Hector Martin wrote:
> > On 16/08/2022 23.04, Will Deacon wrote:
> > >> diff --git a/include/asm-generic/bitops/atomic.h b/include/asm-generic/bitops/atomic.h
> > >> index 3096f086b5a3..71ab4ba9c25d 100644
> > >> --- a/include/asm-generic/bitops/atomic.h
> > >> +++ b/include/asm-generic/bitops/atomic.h
> > >> @@ -39,9 +39,6 @@ arch_test_and_set_bit(unsigned int nr, volatile unsigned long *p)
> > >>  	unsigned long mask = BIT_MASK(nr);
> > >>
> > >>  	p += BIT_WORD(nr);
> > >> -	if (READ_ONCE(*p) & mask)
> > >> -		return 1;
> > >> -
> > >>  	old = arch_atomic_long_fetch_or(mask, (atomic_long_t *)p);
> > >>  	return !!(old & mask);
> > >>  }
> > >> @@ -53,9 +50,6 @@ arch_test_and_clear_bit(unsigned int nr, volatile unsigned long *p)
> > >>  	unsigned long mask = BIT_MASK(nr);
> > >>
> > >>  	p += BIT_WORD(nr);
> > >> -	if (!(READ_ONCE(*p) & mask))
> > >> -		return 0;
> > >> -
> > >>  	old = arch_atomic_long_fetch_andnot(mask, (atomic_long_t *)p);
> > >>  	return !!(old & mask);
> > >
> > > I suppose one sad thing about this is that, on arm64, we could reasonably
> > > keep the READ_ONCE() path with a DMB LD (R->RW) barrier before the return,
> > > but I don't think we can express that in the Linux memory model, so we
> > > end up in RmW territory every time.
> >
> > You'd need a barrier *before* the READ_ONCE(), since what we're trying
> > to prevent is a consumer from writing to the value without being able to
> > observe the writes that happened prior, while this side read the old
> > value. A barrier after the READ_ONCE() doesn't do anything, as that read
> > is the last memory operation in this thread (of the problematic sequence).
>
> Right, having gone back to your litmus test, I now realise it's the "SB"
> shape from the memory ordering terminology. It's funny because the arm64
> acquire/release instructions are RCsc, and so upgrading the READ_ONCE()
> to an *arm64* acquire instruction would work for your specific case, but
> only because the preceding store is a release.
>
> > At that point, I'm not sure DMB LD / early read / LSE atomic would be
> > any faster than just always doing the LSE atomic?
>
> It depends a lot on the configuration of the system and the state of the
> relevant cacheline, but generally avoiding an RmW by introducing a barrier
> is likely to be a win. It just gets ugly here, as we'd want to avoid the
> DMB in the case where we end up doing the RmW. Possibly we could do
> something funky like a test-and-test-and-test-and-set (!) where we do
> the DMB+READ_ONCE() only if the first READ_ONCE() has the bit set, but
> even just typing that is horrible and I'd _absolutely_ want to see perf
> numbers to show that it's a benefit once you start taking into account
> things like branch prediction.
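[For reference, the "SB" (store buffering) shape Will mentions is the classic two-thread pattern below, written as a generic LKMM litmus test. This is an illustrative sketch of the shape, not Hector's exact test from the thread:

```
C SB

{}

P0(int *x, int *y)
{
	int r0;

	WRITE_ONCE(*x, 1);
	r0 = READ_ONCE(*y);
}

P1(int *x, int *y)
{
	int r1;

	WRITE_ONCE(*y, 1);
	r1 = READ_ONCE(*x);
}

exists (0:r0=0 /\ 1:r1=0)
```

Without full ordering between each thread's store and subsequent load (a full barrier or an ordered RmW), the exists clause is reachable: both threads can read the stale value, which is how an unordered failure path can miss writes made before the bit was set.]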
>
> Anywho, since Linus has applied the patch and it should work, this is
> just an interesting aside.
>
> Will
>

It is moot if Linus has already taken the patch, but with a stock kernel
config I am still seeing a slight performance dip, though only ~1-2% in
the specific tests I was running. Sorry about the noise; I will need to
look at my kernel builder and see what went wrong when I have more time.

Cheers,
Jon