From: Marco Elver
Date: Mon, 7 Jun 2021 10:27:10 +0200
Subject: Re: [RFC] LKMM: Add volatile_if()
To: Alexander Monakov
Cc: Linus Torvalds, Jakub Jelinek, Alan Stern, Segher Boessenkool,
 "Paul E. McKenney", Peter Zijlstra, Will Deacon, Andrea Parri,
 Boqun Feng, Nick Piggin, David Howells, Jade Alglave, Luc Maranget,
 Akira Yokosawa, Linux Kernel Mailing List,
 linux-toolchains@vger.kernel.org, linux-arch
List-ID: linux-toolchains@vger.kernel.org

On Mon, 7 Jun 2021 at 10:02, Alexander Monakov wrote:
> On Sun, 6 Jun 2021, Linus Torvalds wrote: [...]
> > On Sun, Jun 6, 2021 at 2:19 PM Alexander Monakov wrote: [...]
> > Btw, since we have compiler people on line, the suggested 'barrier()'
> > isn't actually perfect for this particular use:
> >
> >    #define barrier() __asm__ __volatile__("" : : "i" (__COUNTER__) : "memory")
> >
> > In the general barrier case, we very much want to have that "memory"
> > clobber, because the whole point of the general barrier case is that
> > we want to make sure that the compiler doesn't cache memory state
> > across it (i.e. the traditional use was basically what we now use
> > "cpu_relax()" for, and you would use it for busy-looping on some
> > condition).
> >
> > In the case of "volatile_if()", we actually would like to have not a
> > memory clobber, but a "memory read". IOW, it would be a barrier for
> > any writes taking place, but reads could move around it.
> >
> > I don't know of any way to express that to the compiler. We've used
> > hacks for it before (in gcc, BLKmode reads turn into that kind of
> > barrier in practice, so you can do something like make the memory
> > input to the asm be a big array). But that turned out to be fairly
> > unreliable, so now we use memory clobbers even if we just mean "reads
> > random memory".
>
> So the barrier which is a compiler barrier but not a machine barrier is
> __atomic_signal_fence(model), but internally GCC will not treat it any
> smarter than an asm-with-memory-clobber today.

FWIW, Clang seems to be cleverer about it, and does the optimal thing if
I use __atomic_signal_fence(__ATOMIC_RELEASE):
https://godbolt.org/z/4v5xojqaY

Thanks,
-- Marco