From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Sun, 29 Jan 2023 20:36:45 -0800
From: "Paul E. McKenney"
To: Alan Stern
Cc: Jonas Oberhauser, Andrea Parri, will@kernel.org, peterz@infradead.org,
	boqun.feng@gmail.com, npiggin@gmail.com, dhowells@redhat.com,
	j.alglave@ucl.ac.uk, luc.maranget@inria.fr, akiyks@gmail.com,
	dlustig@nvidia.com, joel@joelfernandes.org, urezki@gmail.com,
	quic_neeraju@quicinc.com, frederic@kernel.org,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2 2/2] tools/memory-model: Make ppo a subrelation of po
Message-ID: <20230130043645.GN2948950@paulmck-ThinkPad-P17-Gen-1>
Reply-To: paulmck@kernel.org
References: <20230126134604.2160-1-jonas.oberhauser@huaweicloud.com>
	<20230126134604.2160-3-jonas.oberhauser@huaweicloud.com>
	<47acbaa7-8280-48f2-678f-53762cf3fe9d@huaweicloud.com>
	<0da94668-c041-1d59-a46d-bd13562e385e@huaweicloud.com>

On Sun, Jan 29, 2023 at 09:39:17PM -0500, Alan Stern wrote:
> On Sun, Jan 29, 2023 at 11:19:32PM +0100, Jonas Oberhauser wrote:
> > I see now. Somehow I thought stores must execute in program order, but I
> > guess it doesn't make sense.
> > In that sense, W ->xbstar&int X always means W propagates to X's CPU before
> > X executes.
>
> It also means any write that propagates to W's CPU before W executes
> also propagates to X's CPU before X executes (because it's the same CPU
> and W executes before X).
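For reference (quoting from memory rather than from the tree, so the exact
text may differ slightly), the relevant definitions in
tools/memory-model/linux-kernel.cat are roughly:

	(* Executes-before. *)
	let xbstar = (hb | pb | rb)*

	(* Visibility: W ->vis X means that W propagates to X's CPU
	 * before X executes.  The (xbstar & int) alternative is the
	 * same-CPU case discussed above, which needs no strong fence. *)
	let vis = cumul-fence* ; rfe? ; [Marked] ;
		((strong-fence ; [Marked] ; xbstar) | (xbstar & int))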
> > > Ideally we would fix this by changing the definition of po-rel to:
> > >
> > > 	[M] ; (xbstar & int) ; [Release]
> > >
> > > (This is closely related to the use of (xbstar & int) in the definition
> > > of vis that you asked about.)
> >
> > This misses the property of release stores that any po-earlier store must
> > also execute before the release store.
>
> I should have written:
>
> 	[M] ; (po | (xbstar & int)) ; [Release]
>
> > Perhaps it could be changed to the old po-rel | [M] ; (xbstar & int) ;
> > [Release] but then one could instead move this into the definition of
> > cumul-fence.
> > In fact you'd probably want this for all the propagation fences, so
> > cumul-fence and pb should be the right place.
> >
> > > Unfortunately we can't do this, because
> > > po-rel has to be defined long before xbstar.
> >
> > You could do it, by turning the relation into one massive recursive
> > definition.
>
> Which would make pretty much the entire memory model one big recursion.
> I do not want to do that.
>
> > Thinking about what the options are:
> > 1) accept the difference and run with it by making it consistent inside the
> > axiomatic model
> > 2) fix it through the recursive definition, which seems to be quite ugly but
> > also consistent with the power operational model as far as I can tell
> > 3) weaken the operational model... somehow
> > 4) just ignore the anomaly
> > 5) ???
> >
> > Currently my least favorite option is 4) since it seems a bit off that the
> > reasoning applies in one specific case of LKMM, more specifically the data
> > race definition which should be equivalent to "the order of the two races
> > isn't fixed", but here the order isn't fixed but it's a data race.
> > I think the patch happens to almost do 1) because the xbstar&int at the end
> > should already imply ordering through the prop&int <= hb rule.
> > What would remain is to also exclude rcu-fence somehow.
>
> IMO 1) is the best choice.
>
> Alan
>
> PS: For the record, here's a simpler litmus test to illustrate the
> failing.  The idea is that Wz=1 is reordered before the store-release,
> so it ought to propagate before Wy=1.  The LKMM does not require this.

In PowerPC terms, would this be like having the Wz=1 be reordered before
the Wy=1, but not before the lwsync instruction preceding the Wy=1 that
made it a release store?

If so, we might have to keep this quirk.

							Thanx, Paul

> C before-release
>
> {}
>
> P0(int *x, int *y, int *z)
> {
> 	int r1;
>
> 	r1 = READ_ONCE(*x);
> 	smp_store_release(y, 1);
> 	WRITE_ONCE(*z, 1);
> }
>
> P1(int *x, int *y, int *z)
> {
> 	int r2;
>
> 	r2 = READ_ONCE(*z);
> 	WRITE_ONCE(*x, r2);
> }
>
> P2(int *x, int *y, int *z)
> {
> 	int r3;
> 	int r4;
>
> 	r3 = READ_ONCE(*y);
> 	smp_rmb();
> 	r4 = READ_ONCE(*z);
> }
>
> exists (0:r1=1 /\ 2:r3=1 /\ 2:r4=0)
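For concreteness, the strengthened release ordering discussed above would
look something like the following in cat syntax.  This is only a sketch
for discussion: the name po-rel-strong is made up, and as noted it cannot
simply be dropped into linux-kernel.cat because po-rel is defined long
before xbstar.

	(* Hypothetical, not the actual model: order a release store
	 * after any po-earlier access and also after any same-CPU
	 * access that executes before it.  The existing definition is
	 * (if memory serves) just "let po-rel = [M] ; po ; [Release]". *)
	let po-rel-strong = [M] ; (po | (xbstar & int)) ; [Release]

Feeding something like po-rel-strong into cumul-fence should force the
Wz=1 above to propagate to P2 before Wy=1 and thus forbid the exists
clause; but since xbstar is built from hb, pb, and rb, which in turn
depend on cumul-fence, doing so really would turn most of the model into
one big recursion.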