Date: Tue, 19 Mar 2019 08:45:56 -0700
From: "Paul E. McKenney"
To: James Bottomley
Cc: Al Viro, Eric Biggers, "Tobin C. Harding", linux-fsdevel@vger.kernel.org
Harding" , linux-fsdevel@vger.kernel.org Subject: Re: dcache locking question Reply-To: paulmck@linux.ibm.com References: <20190315185455.GA2217@ZenIV.linux.org.uk> <20190316223128.GV4102@linux.ibm.com> <20190317001840.GF2217@ZenIV.linux.org.uk> <20190317005005.GY4102@linux.ibm.com> <1552789220.6551.13.camel@HansenPartnership.com> <20190317030634.GG2217@ZenIV.linux.org.uk> <1552796596.6551.17.camel@HansenPartnership.com> <20190318003514.GD4102@linux.ibm.com> <1552926378.3203.13.camel@HansenPartnership.com> <20190318171106.GK4102@linux.ibm.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20190318171106.GK4102@linux.ibm.com> User-Agent: Mutt/1.5.21 (2010-09-15) X-TM-AS-GCONF: 00 x-cbid: 19031915-0072-0000-0000-0000040D5F39 X-IBM-SpamModules-Scores: X-IBM-SpamModules-Versions: BY=3.00010786; HX=3.00000242; KW=3.00000007; PH=3.00000004; SC=3.00000281; SDB=6.01176660; UDB=6.00615471; IPR=6.00957348; MB=3.00026056; MTD=3.00000008; XFM=3.00000015; UTC=2019-03-19 15:45:09 X-IBM-AV-DETECTION: SAVI=unused REMOTE=unused XFE=unused x-cbparentid: 19031915-0073-0000-0000-00004B8A2410 Message-Id: <20190319154556.GA31740@linux.ibm.com> X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:,, definitions=2019-03-19_07:,, signatures=0 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 priorityscore=1501 malwarescore=0 suspectscore=0 phishscore=0 bulkscore=0 spamscore=0 clxscore=1015 lowpriorityscore=0 mlxscore=0 impostorscore=0 mlxlogscore=999 adultscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.0.1-1810050000 definitions=main-1903190115 Sender: linux-fsdevel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org On Mon, Mar 18, 2019 at 10:11:06AM -0700, Paul E. McKenney wrote: > On Mon, Mar 18, 2019 at 09:26:18AM -0700, James Bottomley wrote: > > On Sun, 2019-03-17 at 17:35 -0700, Paul E. McKenney wrote: > > > On Sat, Mar 16, 2019 at 09:23:16PM -0700, James Bottomley wrote: > > > > On Sun, 2019-03-17 at 03:06 +0000, Al Viro wrote: > > > > > On Sat, Mar 16, 2019 at 07:20:20PM -0700, James Bottomley wrote: > > > > > > On Sat, 2019-03-16 at 17:50 -0700, Paul E. McKenney wrote: > > > > > > [...] > > > > > > > I -have- seen stores of constant values be torn, but not > > > > > > > stores of runtime-variable values and not loads. Still, such > > > > > > > tearing is permitted, and including the READ_ONCE() is making > > > > > > > it easier for things like thread sanitizers. In addition, > > > > > > > the READ_ONCE() makes it clear that the value being loaded is > > > > > > > unstable, which can be useful documentation. > > > > > > > > > > > > Um, just so I'm clear, because this assumption permeates all > > > > > > our code: load or store tearing can never occur if we're doing > > > > > > load or store of a 32 bit value which is naturally > > > > > > aligned. Where naturally aligned is within the gift of the CPU > > > > > > to determine but which the compiler or kernel will always > > > > > > ensure for us unless we pack the structure or deliberately > > > > > > misalign the allocation. > > > > > > A non-volatile store of certain 32-bit constants can and does tear > > > on some architectures. These architectures would be the ones with a > > > store-immediate instruction with a small immediate field, and where > > > the 32-bit constant is such that a pair of 16-bit immediate store > > > instructions can store that value. 
> >
> > Understood: PA-RISC is one such architecture: our ldil (load immediate
> > long) can only take 21 bits of immediate data and you have to use a
> > second instruction (ldo) to get the remaining 11 bits.  However, the
> > compiler guarantees no tearing in memory visibility for PA by doing the
> > ldil/ldo sequence on a register and then writing the register to memory,
> > which I believe is an architectural guarantee.
>
> Good to know, thank you!
>
> > > There was a bug in an old version of GCC where even volatile 32-bit
> > > stores of these constants would tear.  They did fix the bug, but it
> > > took some time to find a GCC person who understood that this was in
> > > fact a bug.
> > >
> > > Hence my preference for READ_ONCE() and WRITE_ONCE() for data-racing
> > > loads and stores.
> >
> > OK, but didn't everyone eventually agree this was a compiler bug?
>
> They did agree, but only in the case where the store was volatile,
> as in WRITE_ONCE(), and -not- in the case of a plain store.
>
> At least the kernel doesn't make general use of vector instructions.
> If it did, I would not be surprised to see compilers use three 32-bit
> vector stores to store to a 32-bit int adjacent to a 64-bit pointer.  :-/

And it turns out that the CPU architecture in question was x86-64, for
whatever that is worth.

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=55981

(There is also a later bug report dealing strictly with volatile, but
my search-engine skills are failing me this morning.)

                                                        Thanx, Paul

> > > > > Wait a sec; are there any 64bit architectures where the same is
> > > > > not guaranteed for dereferencing properly aligned void **?
> > > >
> > > > Yes, naturally aligned void * dereference shouldn't tear
> > > > either.  I was just using 32 bit as my example because 64 bit
> > > > accesses will tear on 32 bit architectures but 64 bit naturally
> > > > aligned accesses shouldn't tear on 64 bit architectures.  However,
> > > > since we can't guarantee the 64 bitness of the architecture, 32 bit
> > > > or void * is our gold standard for not tearing.
> > >
> > > For stores of quantities not known at compile time, agreed.  But
> > > that same store-immediate situation could happen on 64-bit systems.
> > >
> > > > James
> > >
> > > > > If that's the case, I can think of quite a few places that are
> > > > > rather dubious, and I don't see how READ_ONCE() could help in
> > > > > those - e.g. if an architecture only has 32bit loads, rcu list
> > > > > traversals are not going to be doable without one hell of an
> > > > > extra headache.
> > >
> > > All the 64-bit systems that run the Linux kernel do have 64-bit load
> > > instructions and rcu_dereference() uses READ_ONCE() internally, so we
> > > should be fine with RCU list traversals.
> >
> > I really don't think it's possible to get the same immediate constant
> > tearing bug on 64 bit.  If you look at PA, we have no 64 bit
> > equivalent of the ldil/ldo pair so all 64 bit immediate stores come
> > straight from the global data table via a register, so no tearing.  I
> > bet every 64 bit architecture has a similar approach because 64 bit
> > immediate data just requires too many bits to stuff into an instruction
> > pair.
> >
> > James
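
As an aside not in the thread above, the rcu_dereference() point can be
illustrated with a small sketch (the struct, list, and lookup function
are made up): the pointer fetch inside the RCU list primitives boils
down to a READ_ONCE() of a naturally aligned pointer, which a 64-bit
architecture satisfies with a single load instruction:

#include <linux/rculist.h>
#include <linux/rcupdate.h>

/*
 * Illustrative sketch only -- "struct myentry" and "mylist" are made up.
 * list_for_each_entry_rcu() uses rcu_dereference(), and therefore
 * READ_ONCE(), for each ->next pointer, so the traversal relies on
 * untorn, naturally aligned pointer loads.
 */
struct myentry {
        int key;
        struct list_head node;
};

static LIST_HEAD(mylist);

static bool key_present(int key)
{
        struct myentry *p;
        bool found = false;

        rcu_read_lock();
        list_for_each_entry_rcu(p, &mylist, node) {
                if (p->key == key) {
                        found = true;
                        break;
                }
        }
        rcu_read_unlock();

        return found;
}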