Date: Fri, 15 May 2020 17:03:31 +0200
In-Reply-To: <20200515150338.190344-1-elver@google.com>
Message-Id: <20200515150338.190344-4-elver@google.com>
Mime-Version: 1.0
References: <20200515150338.190344-1-elver@google.com>
X-Mailer: git-send-email 2.26.2.761.g0e0b3e54be-goog
Subject: [PATCH -tip 03/10] kcsan: Support distinguishing volatile accesses
From: Marco Elver <elver@google.com>
To: elver@google.com
Cc: paulmck@kernel.org, dvyukov@google.com, glider@google.com,
        andreyknvl@google.com, kasan-dev@googlegroups.com,
        linux-kernel@vger.kernel.org, tglx@linutronix.de, mingo@kernel.org,
        peterz@infradead.org, will@kernel.org, clang-built-linux@googlegroups.com
Content-Type: text/plain; charset="UTF-8"

In the kernel, volatile is used in various concurrent contexts, whether in
low-level synchronization primitives or for legacy reasons. If supported by
the compiler, we will assume that aligned volatile accesses up to
sizeof(long long) (matching compiletime_assert_rwonce_type()) are atomic.

Recent versions of Clang [1] (GCC tentatively [2]) can instrument volatile
accesses differently. Add the option (required) to enable the
instrumentation, and provide the necessary runtime functions. None of the
updated compilers are widely available yet (Clang 11 will be the first
release to support the feature).

[1] https://github.com/llvm/llvm-project/commit/5a2c31116f412c3b6888be361137efd705e05814
[2] https://gcc.gnu.org/pipermail/gcc-patches/2020-April/544452.html

This patch allows removing any explicit checks in primitives such as
READ_ONCE() and WRITE_ONCE().

Signed-off-by: Marco Elver <elver@google.com>
---
 kernel/kcsan/core.c    | 43 ++++++++++++++++++++++++++++++++++++++++++
 scripts/Makefile.kcsan |  5 ++++-
 2 files changed, 47 insertions(+), 1 deletion(-)

diff --git a/kernel/kcsan/core.c b/kernel/kcsan/core.c
index a73a66cf79df..15f67949d11e 100644
--- a/kernel/kcsan/core.c
+++ b/kernel/kcsan/core.c
@@ -789,6 +789,49 @@ void __tsan_write_range(void *ptr, size_t size)
 }
 EXPORT_SYMBOL(__tsan_write_range);
 
+/*
+ * Use of explicit volatile is generally disallowed [1], however, volatile is
+ * still used in various concurrent contexts, whether in low-level
+ * synchronization primitives or for legacy reasons.
+ * [1] https://lwn.net/Articles/233479/
+ *
+ * We only consider volatile accesses atomic if they are aligned and would pass
+ * the size-check of compiletime_assert_rwonce_type().
+ */
+#define DEFINE_TSAN_VOLATILE_READ_WRITE(size)                                  \
+	void __tsan_volatile_read##size(void *ptr)                             \
+	{                                                                      \
+		const bool is_atomic = size <= sizeof(long long) &&            \
+				       IS_ALIGNED((unsigned long)ptr, size);   \
+		if (IS_ENABLED(CONFIG_KCSAN_IGNORE_ATOMICS) && is_atomic)      \
+			return;                                                \
+		check_access(ptr, size, is_atomic ? KCSAN_ACCESS_ATOMIC : 0);  \
+	}                                                                      \
+	EXPORT_SYMBOL(__tsan_volatile_read##size);                             \
+	void __tsan_unaligned_volatile_read##size(void *ptr)                   \
+		__alias(__tsan_volatile_read##size);                           \
+	EXPORT_SYMBOL(__tsan_unaligned_volatile_read##size);                   \
+	void __tsan_volatile_write##size(void *ptr)                            \
+	{                                                                      \
+		const bool is_atomic = size <= sizeof(long long) &&            \
+				       IS_ALIGNED((unsigned long)ptr, size);   \
+		if (IS_ENABLED(CONFIG_KCSAN_IGNORE_ATOMICS) && is_atomic)      \
+			return;                                                \
+		check_access(ptr, size,                                        \
+			     KCSAN_ACCESS_WRITE |                              \
+				     (is_atomic ? KCSAN_ACCESS_ATOMIC : 0));   \
+	}                                                                      \
+	EXPORT_SYMBOL(__tsan_volatile_write##size);                            \
+	void __tsan_unaligned_volatile_write##size(void *ptr)                  \
+		__alias(__tsan_volatile_write##size);                          \
+	EXPORT_SYMBOL(__tsan_unaligned_volatile_write##size)
+
+DEFINE_TSAN_VOLATILE_READ_WRITE(1);
+DEFINE_TSAN_VOLATILE_READ_WRITE(2);
+DEFINE_TSAN_VOLATILE_READ_WRITE(4);
+DEFINE_TSAN_VOLATILE_READ_WRITE(8);
+DEFINE_TSAN_VOLATILE_READ_WRITE(16);
+
 /*
  * The below are not required by KCSAN, but can still be emitted by the
  * compiler.
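As an aside, and not part of the patch itself: the decision encoded in
DEFINE_TSAN_VOLATILE_READ_WRITE() above can be modelled by the stand-alone
C sketch below. The helper volatile_access_is_atomic() and the local
IS_ALIGNED() definition are purely illustrative stand-ins; check_access()
and the CONFIG_KCSAN_IGNORE_ATOMICS handling are omitted.

  #include <stdbool.h>
  #include <stddef.h>
  #include <stdint.h>
  #include <stdio.h>

  /* Local stand-in for the kernel's IS_ALIGNED(); 'a' must be a power of 2. */
  #define IS_ALIGNED(x, a) (((x) & ((a) - 1)) == 0)

  /*
   * Mirrors the condition used by the new __tsan_volatile_*() runtime
   * functions: a volatile access is only treated as atomic if it is no wider
   * than sizeof(long long) (cf. compiletime_assert_rwonce_type()) and is
   * naturally aligned.
   */
  static bool volatile_access_is_atomic(uintptr_t addr, size_t size)
  {
          return size <= sizeof(long long) && IS_ALIGNED(addr, size);
  }

  int main(void)
  {
          printf("%d\n", volatile_access_is_atomic(0x1000, 4));  /* 1: aligned 4-byte access */
          printf("%d\n", volatile_access_is_atomic(0x1001, 4));  /* 0: misaligned access */
          printf("%d\n", volatile_access_is_atomic(0x1000, 16)); /* 0: wider than long long */
          return 0;
  }

Compiled with any hosted C compiler, this prints 1, 0, 0 for an aligned
4-byte access, a misaligned 4-byte access, and a 16-byte access respectively.
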
diff --git a/scripts/Makefile.kcsan b/scripts/Makefile.kcsan
index 20337a7ecf54..c02662b30a7c 100644
--- a/scripts/Makefile.kcsan
+++ b/scripts/Makefile.kcsan
@@ -9,7 +9,10 @@ else
 cc-param = --param -$(1)
 endif
 
+# Most options here should be kept optional, to allow enabling more compilers
+# if the absence of some options still allows us to use KCSAN in most cases.
 CFLAGS_KCSAN := -fsanitize=thread \
-	$(call cc-option,$(call cc-param,tsan-instrument-func-entry-exit=0) -fno-optimize-sibling-calls)
+	$(call cc-option,$(call cc-param,tsan-instrument-func-entry-exit=0) -fno-optimize-sibling-calls) \
+	$(call cc-param,tsan-distinguish-volatile=1)
 
 endif # CONFIG_KCSAN
-- 
2.26.2.761.g0e0b3e54be-goog
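
To make the effect of the new tsan-distinguish-volatile=1 parameter concrete,
consider the hypothetical kernel-style snippet below (sample_counter() is made
up for illustration and is not part of the patch). Built with Clang 11,
-fsanitize=thread and the parameter added to CFLAGS_KCSAN above, the volatile
load is expected to be instrumented with the new __tsan_volatile_read8()
callback on a 64-bit kernel, while the plain store keeps the existing
__tsan_write8() instrumentation.

  /* Hypothetical snippet; function and parameter names are illustrative only. */
  void sample_counter(const volatile unsigned long *counter, unsigned long *out)
  {
          /*
           * Volatile read: expected to be instrumented as
           * __tsan_volatile_read8(counter), which KCSAN treats as atomic
           * when 'counter' is naturally (8-byte) aligned.
           */
          unsigned long val = *counter;

          /*
           * Plain write: still instrumented as __tsan_write8(out), i.e. a
           * normal access that participates in data-race detection as before.
           */
          *out = val;
  }

When the pointer is naturally aligned, KCSAN then treats the volatile read as
atomic (or skips checking it entirely with CONFIG_KCSAN_IGNORE_ATOMICS=y),
which is what allows dropping the explicit checks from primitives such as
READ_ONCE() and WRITE_ONCE().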