From mboxrd@z Thu Jan 1 00:00:00 1970
References: <20210604205600.GB4397@paulmck-ThinkPad-P17-Gen-1>
 <20210604214010.GD4397@paulmck-ThinkPad-P17-Gen-1>
 <20210605145739.GB1712909@rowland.harvard.edu>
 <20210606001418.GH4397@paulmck-ThinkPad-P17-Gen-1>
 <20210606012903.GA1723421@rowland.harvard.edu>
 <20210606185922.GF7746@tucnak>
From: Linus Torvalds
Date: Sun, 6 Jun 2021 15:38:06 -0700
Subject: Re: [RFC] LKMM: Add volatile_if()
To: Alexander Monakov
Cc: Jakub Jelinek, Alan Stern, Segher Boessenkool, "Paul E. McKenney",
 Peter Zijlstra, Will Deacon, Andrea Parri, Boqun Feng, Nick Piggin,
 David Howells, Jade Alglave, Luc Maranget, Akira Yokosawa,
 Linux Kernel Mailing List, linux-toolchains@vger.kernel.org, linux-arch

On Sun, Jun 6, 2021 at 2:19 PM Alexander Monakov wrote:
>
> > So yeah, that seems like a nice solution to the issue, and should make
> > the barriers all unique to the compiler.
>
> It also plants a nice LTO time-bomb (__COUNTER__ values will be unique
> only within each LTO input unit, not across all of them).

That could be an issue in other circumstances, but for at least
volatile_if() that doesn't much matter. The decision there is purely
local, and it's literally about the two sides of the conditional not
being merged.

Now, an optimizing linker or assembler can of course do anything at
all in theory: and if that ends up being an issue, we'd have to have
some way to actually propagate the barrier beyond being just a
compiler thing. Right now gcc doesn't even output the barrier in the
assembly code, so it's invisible to any optimizing assembler/linker
thing.

But I don't think that's an issue with what _currently_ goes on in an
assembler or linker - not even a smart one like LTO.

And such things really are independent of "volatile_if()". We use
barriers for other things where we need to force some kind of
operation ordering, and right now the only thing that re-orders
accesses etc. is the compiler.
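[Editor's note: the "purely local" point above can be sketched in a few lines. The macro and function names here are hypothetical illustrations, not kernel code; the idea is only that giving each expansion a distinct "i" (__COUNTER__) operand makes the asm statements on the two arms of the conditional textually different, so the compiler cannot tail-merge them.]

```c
/* Hypothetical sketch: each expansion gets a distinct immediate operand,
 * so the two asm statements below are different statements to gcc and
 * the two arms of the if() cannot be merged back into one. */
#define barrier_unique() \
	__asm__ __volatile__("" : : "i" (__COUNTER__) : "memory")

void consumer(volatile int *flag, int *data)
{
	if (*flag) {
		barrier_unique();	/* expansion with one __COUNTER__ value */
		*data = 1;
	} else {
		barrier_unique();	/* different value: arms stay distinct */
		*data = 1;
	}
}
```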
Btw, since we have compiler people on line, the suggested 'barrier()'
isn't actually perfect for this particular use:

  #define barrier() __asm__ __volatile__("" : : "i" (__COUNTER__) : "memory")

In the general barrier case, we very much want to have that "memory"
clobber, because the whole point of the general barrier case is that
we want to make sure that the compiler doesn't cache memory state
across it (ie the traditional use was basically what we now use
"cpu_relax()" for, and you would use it for busy-looping on some
condition).

In the case of "volatile_if()", we actually would like to have not a
memory clobber, but a "memory read". IOW, it would be a barrier for
any writes taking place, but reads can move around it.

I don't know of any way to express that to the compiler.

We've used hacks for it before (in gcc, BLKmode reads turn into that
kind of barrier in practice, so you can do something like make the
memory input to the asm be a big array). But that turned out to be
fairly unreliable, so now we use memory clobbers even if we just mean
"reads random memory".

Example: variable_test_bit(), which generates a "bt" instruction, does

        : "m" (*(unsigned long *)addr), "Ir" (nr)
        : "memory");

and the memory clobber is obviously wrong: 'bt' only *reads* memory,
but since the whole reason we use it is that it's not just that word
at address 'addr', in order to make sure that any previous writes are
actually stable in memory, we use that "memory" clobber.

It would be much nicer to have a "memory read" marker instead, to let
the compiler know "I need to have done all pending writes to memory,
but I can still cache read values over this op because it doesn't
_change_ memory".

Anybody have ideas or suggestions for something like that?

             Linus
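[Editor's note: the "big array" input-operand hack mentioned above can be sketched as follows. The macro name is hypothetical, and this is the idiom Linus describes as unreliable, not a recommendation: an array-typed "m" input declares that the asm may read the whole object, forcing pending stores to it to complete, while the absence of a "memory" clobber leaves the compiler free to keep unrelated reads cached across the asm.]

```c
/* Hypothetical sketch of a "memory read" barrier for one object:
 * the cast makes the "m" operand cover the whole [size]-byte object,
 * so stores to it cannot be delayed past the asm. No "memory" clobber,
 * so this does not invalidate other cached values. */
#define read_barrier_for(p, size) \
	__asm__ __volatile__("" : : "m" (*(const char (*)[size])(p)))

unsigned long shared[4];

void publish(void)
{
	shared[0] = 1;				/* store that must be complete... */
	read_barrier_for(shared, sizeof(shared)); /* ...before this point */
}
```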