Date: Tue, 5 Oct 2021 12:58:56 +0200
In-Reply-To: <20211005105905.1994700-1-elver@google.com>
Message-Id: <20211005105905.1994700-15-elver@google.com>
References: <20211005105905.1994700-1-elver@google.com>
Subject: [PATCH -rcu/kcsan 14/23] locking/barriers, kcsan: Add instrumentation for barriers
From: Marco Elver <elver@google.com>
To: elver@google.com, "Paul E. McKenney"
Cc: Alexander Potapenko, Boqun Feng, Borislav Petkov, Dmitry Vyukov,
    Ingo Molnar, Josh Poimboeuf, Mark Rutland, Peter Zijlstra,
    Thomas Gleixner, Waiman Long, Will Deacon, kasan-dev@googlegroups.com,
    linux-arch@vger.kernel.org, linux-doc@vger.kernel.org,
    linux-kbuild@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org, x86@kernel.org

Adds the required KCSAN instrumentation for barriers if CONFIG_SMP.
KCSAN supports modeling the effects of:

	smp_mb()
	smp_rmb()
	smp_wmb()
	smp_store_release()

Signed-off-by: Marco Elver <elver@google.com>
---
 include/asm-generic/barrier.h | 29 +++++++++++++++--------------
 include/linux/spinlock.h      |  2 +-
 2 files changed, 16 insertions(+), 15 deletions(-)
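[Not part of the patch: a minimal, hypothetical sketch of the kind of
ordering-dependent code that the instrumented barriers make visible to
KCSAN's weak-memory modeling. smp_wmb()/smp_rmb() and
READ_ONCE()/WRITE_ONCE() are real kernel APIs; the msg_* names are made
up for this illustration.]

#include <linux/compiler.h>	/* READ_ONCE()/WRITE_ONCE() */
#include <asm/barrier.h>	/* smp_wmb()/smp_rmb() */

static int msg_data;
static int msg_ready;

static void msg_send(int v)
{
	msg_data = v;
	smp_wmb();	/* order the payload store before the flag store */
	WRITE_ONCE(msg_ready, 1);
}

static int msg_recv(void)
{
	if (!READ_ONCE(msg_ready))
		return -1;
	smp_rmb();	/* order the flag load before the payload load */
	return msg_data;
}

[With this patch applied, smp_wmb()/smp_rmb() above additionally invoke
kcsan_wmb()/kcsan_rmb() under CONFIG_SMP, so KCSAN can tell the plain
accesses to msg_data are ordered; without the barriers, its weak-memory
modeling could flag those accesses as a data race.]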
diff --git a/include/asm-generic/barrier.h b/include/asm-generic/barrier.h
index 640f09479bdf..27a9c9edfef6 100644
--- a/include/asm-generic/barrier.h
+++ b/include/asm-generic/barrier.h
@@ -14,6 +14,7 @@
 #ifndef __ASSEMBLY__
 
 #include <linux/compiler.h>
+#include <linux/kcsan-checks.h>
 #include <asm/rwonce.h>
 
 #ifndef nop
@@ -62,15 +63,15 @@
 #ifdef CONFIG_SMP
 
 #ifndef smp_mb
-#define smp_mb()	__smp_mb()
+#define smp_mb()	do { kcsan_mb(); __smp_mb(); } while (0)
 #endif
 
 #ifndef smp_rmb
-#define smp_rmb()	__smp_rmb()
+#define smp_rmb()	do { kcsan_rmb(); __smp_rmb(); } while (0)
 #endif
 
 #ifndef smp_wmb
-#define smp_wmb()	__smp_wmb()
+#define smp_wmb()	do { kcsan_wmb(); __smp_wmb(); } while (0)
 #endif
 
 #else	/* !CONFIG_SMP */
@@ -123,19 +124,19 @@ do { \
 #ifdef CONFIG_SMP
 
 #ifndef smp_store_mb
-#define smp_store_mb(var, value)  __smp_store_mb(var, value)
+#define smp_store_mb(var, value) do { kcsan_mb(); __smp_store_mb(var, value); } while (0)
 #endif
 
 #ifndef smp_mb__before_atomic
-#define smp_mb__before_atomic()	__smp_mb__before_atomic()
+#define smp_mb__before_atomic() do { kcsan_mb(); __smp_mb__before_atomic(); } while (0)
 #endif
 
 #ifndef smp_mb__after_atomic
-#define smp_mb__after_atomic()	__smp_mb__after_atomic()
+#define smp_mb__after_atomic() do { kcsan_mb(); __smp_mb__after_atomic(); } while (0)
 #endif
 
 #ifndef smp_store_release
-#define smp_store_release(p, v) __smp_store_release(p, v)
+#define smp_store_release(p, v) do { kcsan_release(); __smp_store_release(p, v); } while (0)
 #endif
 
 #ifndef smp_load_acquire
@@ -178,13 +179,13 @@ do { \
 #endif	/* CONFIG_SMP */
 
 /* Barriers for virtual machine guests when talking to an SMP host */
-#define virt_mb() __smp_mb()
-#define virt_rmb() __smp_rmb()
-#define virt_wmb() __smp_wmb()
-#define virt_store_mb(var, value) __smp_store_mb(var, value)
-#define virt_mb__before_atomic() __smp_mb__before_atomic()
-#define virt_mb__after_atomic() __smp_mb__after_atomic()
-#define virt_store_release(p, v) __smp_store_release(p, v)
+#define virt_mb() do { kcsan_mb(); __smp_mb(); } while (0)
+#define virt_rmb() do { kcsan_rmb(); __smp_rmb(); } while (0)
+#define virt_wmb() do { kcsan_wmb(); __smp_wmb(); } while (0)
+#define virt_store_mb(var, value) do { kcsan_mb(); __smp_store_mb(var, value); } while (0)
+#define virt_mb__before_atomic() do { kcsan_mb(); __smp_mb__before_atomic(); } while (0)
+#define virt_mb__after_atomic() do { kcsan_mb(); __smp_mb__after_atomic(); } while (0)
+#define virt_store_release(p, v) do { kcsan_release(); __smp_store_release(p, v); } while (0)
 #define virt_load_acquire(p) __smp_load_acquire(p)
 
 /**
diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h
index 45310ea1b1d7..f6d69808b929 100644
--- a/include/linux/spinlock.h
+++ b/include/linux/spinlock.h
@@ -172,7 +172,7 @@ do { \
  * Architectures that can implement ACQUIRE better need to take care.
  */
 #ifndef smp_mb__after_spinlock
-#define smp_mb__after_spinlock()	do { } while (0)
+#define smp_mb__after_spinlock()	kcsan_mb()
 #endif
 
 #ifdef CONFIG_DEBUG_SPINLOCK
-- 
2.33.0.800.g4c38ced690-goog