Date: Wed, 24 May 2023 22:33:36 +0200
From: Peter Zijlstra
To: Sean Christopherson
Cc: Kautuk Consul, Chao Peng, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-api@vger.kernel.org,
	linux-doc@vger.kernel.org, qemu-devel@nongnu.org, linux-kselftest@vger.kernel.org,
	Paolo Bonzini, Jonathan Corbet, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, Thomas Gleixner, Ingo Molnar, Borislav Petkov, x86@kernel.org,
	"H. Peter Anvin", Hugh Dickins, Jeff Layton, "J. Bruce Fields", Andrew Morton,
	Shuah Khan, Mike Rapoport, Steven Price, "Maciej S. Szmigiero", Vlastimil Babka,
	Vishal Annapurve, Yu Zhang, "Kirill A. Shutemov", luto@kernel.org,
	jun.nakajima@intel.com, dave.hansen@intel.com, ak@linux.intel.com,
	david@redhat.com, aarcange@redhat.com, ddutile@redhat.com, dhildenb@redhat.com,
	Quentin Perret, Michael Roth, mhocko@suse.com, Muchun Song
Subject: Re: [PATCH v7 08/14] KVM: Rename mmu_notifier_*
Message-ID: <20230524203336.GC3447678@hirez.programming.kicks-ass.net>
References: <20220706082016.2603916-1-chao.p.peng@linux.intel.com>
	<20220706082016.2603916-9-chao.p.peng@linux.intel.com>
Shutemov" , luto@kernel.org, jun.nakajima@intel.com, dave.hansen@intel.com, ak@linux.intel.com, david@redhat.com, aarcange@redhat.com, ddutile@redhat.com, dhildenb@redhat.com, Quentin Perret , Michael Roth , mhocko@suse.com, Muchun Song Subject: Re: [PATCH v7 08/14] KVM: Rename mmu_notifier_* Message-ID: <20230524203336.GC3447678@hirez.programming.kicks-ass.net> References: <20220706082016.2603916-1-chao.p.peng@linux.intel.com> <20220706082016.2603916-9-chao.p.peng@linux.intel.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: Precedence: bulk List-ID: X-Mailing-List: linux-api@vger.kernel.org On Wed, May 24, 2023 at 01:16:03PM -0700, Sean Christopherson wrote: > Atomics aren't memory barriers on all architectures, e.g. see the various > definitions of smp_mb__after_atomic(). > > Even if atomic operations did provide barriers, using an atomic would be overkill > and a net negative. On strongly ordered architectures like x86, memory barriers are > just compiler barriers, whereas atomics may be more expensive. Not quite, smp_{r,w}mb() and smp_mb__{before,after}_atomic() are compiler barriers on the TSO archs, but smp_mb() very much isn't. TSO still allows stores to be delayed vs later loads (iow it doesn't pretend to hide the store buffer). > Of course, the only > accesses outside of mmu_lock are reads, so on x86 that "atomic" access is just a > READ_ONCE() load, but that's not the case for all architectures. This is true on *all* archs. atomic_set() and atomic_read() are no more and no less than WRITE_ONCE() / READ_ONCE(). > Anyways, the point is that atomics and memory barriers are different things that > serve different purposes. This is true; esp. on the weakly ordered architectures where atomics do not naturally imply any ordering.