From: Will Deacon <will@kernel.org>
To: linux-kernel@vger.kernel.org
Cc: linux-arch@vger.kernel.org, kernel-team@android.com, Will Deacon,
    Michael Ellerman, Peter Zijlstra, Linus Torvalds, Segher Boessenkool,
    Christian Borntraeger, Luc Van Oostenryck, Arnd Bergmann,
    Peter Oberparleiter, Masahiro Yamada, Nick Desaulniers
Subject: [PATCH v2 05/10] READ_ONCE: Enforce atomicity for {READ,WRITE}_ONCE() memory accesses
Date: Thu, 23 Jan 2020 15:33:36 +0000
Message-Id: <20200123153341.19947-6-will@kernel.org>
In-Reply-To: <20200123153341.19947-1-will@kernel.org>
References: <20200123153341.19947-1-will@kernel.org>

{READ,WRITE}_ONCE() cannot guarantee atomicity for arbitrary data sizes.
This can be surprising to callers that might incorrectly be expecting
atomicity for accesses to aggregate structures, although there are other
callers where tearing is actually permissible (e.g. if they are using
something akin to sequence locking to protect the access).

Linus sayeth:

 | We could also look at being stricter for the normal READ/WRITE_ONCE(),
 | and require that they are
 |
 |  (a) regular integer types
 |
 |  (b) fit in an atomic word
 |
 | We actually did (b) for a while, until we noticed that we do it on
 | loff_t's etc and relaxed the rules. But maybe we could have a
 | "non-atomic" version of READ/WRITE_ONCE() that is used for the
 | questionable cases?

The slight snag is that we also have to support 64-bit accesses on
32-bit architectures, as these appear to be widespread and tend to work
out ok if either the architecture supports atomic 64-bit accesses (x86,
armv7) or if the variable being accessed represents a virtual address
and therefore only requires 32-bit atomicity in practice.

Take a step in that direction by introducing a variant of
'compiletime_assert_atomic_type()' and using it to check the pointer
argument to {READ,WRITE}_ONCE(). Expose __{READ,WRITE}_ONCE() variants
which are allowed to tear, and convert the two broken callers over to
the new macros.
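
As an illustration of the resulting rules (a hypothetical example using
an invented 'struct snapshot'; this is not code from the tree):

  struct snapshot {		/* 16 bytes: no arch can load this atomically */
  	u64 begin;
  	u64 end;
  };
  struct snapshot snap;

  struct snapshot a = READ_ONCE(snap);	/* now a build-time error */
  struct snapshot b = __READ_ONCE(snap);	/* allowed, but may tear */
  u64 t = READ_ONCE(snap.begin);		/* ok: 64-bit scalar passes */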

Suggested-by: Linus Torvalds
Cc: Peter Zijlstra
Cc: Michael Ellerman
Cc: Arnd Bergmann
Signed-off-by: Will Deacon
---
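A note for reviewers on the two converted callers (the sketch below uses
invented names and is not code from either file): both sites tolerate a
torn copy because a version check around the access detects any racing
update and retries, in the style of a sequence lock:

  do {
  	ver = READ_ONCE(vs->version);	/* word-sized, stays atomic */
  	smp_rmb();			/* pairs with the writer's barriers */
  	copy = __READ_ONCE(vs->data);	/* aggregate copy: may tear */
  	smp_rmb();
  } while (READ_ONCE(vs->version) != ver);

A tear in 'copy' can only occur concurrently with a writer, which also
bumps 'vs->version', so the loop simply goes round again.
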
 drivers/xen/time.c       |  2 +-
 include/linux/compiler.h | 37 +++++++++++++++++++++++++++++++++----
 net/xdp/xsk_queue.h      |  2 +-
 3 files changed, 35 insertions(+), 6 deletions(-)

diff --git a/drivers/xen/time.c b/drivers/xen/time.c
index 0968859c29d0..108edbcbc040 100644
--- a/drivers/xen/time.c
+++ b/drivers/xen/time.c
@@ -64,7 +64,7 @@ static void xen_get_runstate_snapshot_cpu_delta(
 	do {
 		state_time = get64(&state->state_entry_time);
 		rmb();	/* Hypervisor might update data. */
-		*res = READ_ONCE(*state);
+		*res = __READ_ONCE(*state);
 		rmb();	/* Hypervisor might update data. */
 	} while (get64(&state->state_entry_time) != state_time ||
 		 (state_time & XEN_RUNSTATE_UPDATE));
diff --git a/include/linux/compiler.h b/include/linux/compiler.h
index 44974d658f30..a7b2195f2655 100644
--- a/include/linux/compiler.h
+++ b/include/linux/compiler.h
@@ -198,24 +198,43 @@ void ftrace_likely_update(struct ftrace_likely_data *f, int val,
 #include <asm/barrier.h>
 #include <linux/kasan-checks.h>
 
+/*
+ * Use __READ_ONCE() instead of READ_ONCE() if you do not require any
+ * atomicity or dependency ordering guarantees. Note that this may result
+ * in tears!
+ */
+#define __READ_ONCE(x)	(*(const volatile typeof(x) *)&(x))
+
+#define __READ_ONCE_SCALAR(x)						\
+({									\
+	typeof(x) __x = __READ_ONCE(x);					\
+	smp_read_barrier_depends();					\
+	__x;								\
+})
+
 /*
  * Use READ_ONCE_NOCHECK() instead of READ_ONCE() if you need
  * to hide memory access from KASAN.
  */
 #define READ_ONCE_NOCHECK(x)						\
 ({									\
-	typeof(x) __x = *(volatile typeof(x) *)&(x);			\
-	smp_read_barrier_depends();					\
-	__x;								\
+	compiletime_assert_rwonce_type(x);				\
+	__READ_ONCE_SCALAR(x);						\
 })
 
 #define READ_ONCE(x)	READ_ONCE_NOCHECK(x)
 
-#define WRITE_ONCE(x, val)						\
+#define __WRITE_ONCE(x, val)						\
 do {									\
 	*(volatile typeof(x) *)&(x) = (val);				\
 } while (0)
 
+#define WRITE_ONCE(x, val)						\
+do {									\
+	compiletime_assert_rwonce_type(x);				\
+	__WRITE_ONCE(x, val);						\
+} while (0)
+
 #ifdef CONFIG_KASAN
 /*
  * We can't declare function 'inline' because __no_sanitize_address conflicts
@@ -299,6 +318,16 @@ static inline void *offset_to_ptr(const int *off)
 	compiletime_assert(__native_word(t),				\
 		"Need native word sized stores/loads for atomicity.")
 
+/*
+ * Yes, this permits 64-bit accesses on 32-bit architectures. These will
+ * actually be atomic in many cases (namely x86), but for others we rely on
+ * the access being split into 2x32-bit accesses for a 32-bit quantity (e.g.
+ * a virtual address) and a strong prevailing wind.
+ */
+#define compiletime_assert_rwonce_type(t)					\
+	compiletime_assert(__native_word(t) || sizeof(t) == sizeof(long long),	\
+		"Unsupported access size for {READ,WRITE}_ONCE().")
+
 /* &a[0] degrades to a pointer: a different type from an array */
 #define __must_be_array(a)	BUILD_BUG_ON_ZERO(__same_type((a), &(a)[0]))
 
diff --git a/net/xdp/xsk_queue.h b/net/xdp/xsk_queue.h
index eddae4688862..2b55c1c7b2b6 100644
--- a/net/xdp/xsk_queue.h
+++ b/net/xdp/xsk_queue.h
@@ -304,7 +304,7 @@ static inline struct xdp_desc *xskq_validate_desc(struct xsk_queue *q,
 	struct xdp_rxtx_ring *ring = (struct xdp_rxtx_ring *)q->ring;
 	unsigned int idx = q->cons_tail & q->ring_mask;
 
-	*desc = READ_ONCE(ring->desc[idx]);
+	*desc = __READ_ONCE(ring->desc[idx]);
 	if (xskq_is_valid_desc(q, desc, umem))
 		return desc;
 
-- 
2.25.0.341.g760bfbb309-goog