From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Fri, 1 Jul 2022 16:23:01 +0200
In-Reply-To: <20220701142310.2188015-1-glider@google.com>
Message-Id: <20220701142310.2188015-37-glider@google.com>
References: <20220701142310.2188015-1-glider@google.com>
X-Mailer: git-send-email 2.37.0.rc0.161.g10f37bed90-goog
Subject: [PATCH v4 36/45] x86: kmsan: use __msan_ string functions where possible
From: Alexander Potapenko <glider@google.com>
To: glider@google.com
Cc: Alexander Viro, Alexei Starovoitov, Andrew Morton, Andrey Konovalov,
	Andy Lutomirski, Arnd Bergmann, Borislav Petkov, Christoph Hellwig,
	Christoph Lameter, David Rientjes, Dmitry Vyukov, Eric Dumazet,
	Greg Kroah-Hartman, Herbert Xu, Ilya Leoshkevich, Ingo Molnar,
	Jens Axboe, Joonsoo Kim, Kees Cook, Marco Elver, Mark Rutland,
	Matthew Wilcox, "Michael S. Tsirkin", Pekka Enberg, Peter Zijlstra,
	Petr Mladek, Steven Rostedt, Thomas Gleixner, Vasily Gorbik,
	Vegard Nossum, Vlastimil Babka, kasan-dev@googlegroups.com,
	linux-mm@kvack.org, linux-arch@vger.kernel.org,
	linux-kernel@vger.kernel.org

Unless stated otherwise (by explicitly calling __memcpy(), __memset()
or __memmove()), we want all string functions to call their __msan_
versions (e.g. __msan_memcpy() instead of memcpy()), so that shadow
and origin values are updated accordingly.

The bootloader must still use the default string functions to avoid
crashes.

Signed-off-by: Alexander Potapenko <glider@google.com>
---
Link: https://linux-review.googlesource.com/id/I7ca9bd6b4f5c9b9816404862ae87ca7984395f33
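
Note for reviewers who have not worked with KMSAN: below is a minimal
standalone sketch of the redirection pattern this patch sets up. It is
illustrative only -- the my_* names, the SANITIZE macro and the file
name are invented for this note; in the kernel the instrumented
versions are the __msan_*() hooks provided by the KMSAN runtime, and
the explicit opt-outs are the arch's __mem*() functions. Build and run
with "cc -DSANITIZE sketch.c && ./a.out".

/* sketch.c - illustrative only, not part of this patch. */
#include <stdio.h>
#include <string.h>

/* Stand-in for the kernel's __memcpy(): always uninstrumented. */
static void *my_memcpy(void *dst, const void *src, size_t n)
{
	return memcpy(dst, src, n);
}

/*
 * Stand-in for __msan_memcpy(): besides copying the bytes, KMSAN would
 * also copy their shadow (initializedness) and origin metadata.
 */
static void *my_msan_memcpy(void *dst, const void *src, size_t n)
{
	fprintf(stderr, "shadow/origin updated for %zu bytes\n", n);
	return my_memcpy(dst, src, n);
}

#ifdef SANITIZE			/* stands in for __SANITIZE_MEMORY__ */
#undef memcpy			/* libc may define memcpy as a macro */
#define memcpy my_msan_memcpy
#endif

int main(void)
{
	char src[6] = "hello", dst[6];

	memcpy(dst, src, sizeof(src));	  /* instrumented when -DSANITIZE */
	my_memcpy(dst, src, sizeof(src)); /* explicit opt-out, like __memcpy() */
	puts(dst);
	return 0;
}

The fortify-string.h hunk follows from the same scheme: if
fortify-string.h kept defining its own memset() macro under
CONFIG_KMSAN, that macro would override the memset -> __msan_memset
redirection set up in string_64.h, so the fortified definition is
skipped when KMSAN is enabled.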
---
 arch/x86/include/asm/string_64.h | 23 +++++++++++++++++++++--
 include/linux/fortify-string.h   |  2 ++
 2 files changed, 23 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/string_64.h b/arch/x86/include/asm/string_64.h
index 6e450827f677a..3b87d889b6e16 100644
--- a/arch/x86/include/asm/string_64.h
+++ b/arch/x86/include/asm/string_64.h
@@ -11,11 +11,23 @@
    function. */
 
 #define __HAVE_ARCH_MEMCPY 1
+#if defined(__SANITIZE_MEMORY__)
+#undef memcpy
+void *__msan_memcpy(void *dst, const void *src, size_t size);
+#define memcpy __msan_memcpy
+#else
 extern void *memcpy(void *to, const void *from, size_t len);
+#endif
 extern void *__memcpy(void *to, const void *from, size_t len);
 
 #define __HAVE_ARCH_MEMSET
+#if defined(__SANITIZE_MEMORY__)
+extern void *__msan_memset(void *s, int c, size_t n);
+#undef memset
+#define memset __msan_memset
+#else
 void *memset(void *s, int c, size_t n);
+#endif
 void *__memset(void *s, int c, size_t n);
 
 #define __HAVE_ARCH_MEMSET16
@@ -55,7 +67,13 @@ static inline void *memset64(uint64_t *s, uint64_t v, size_t n)
 }
 
 #define __HAVE_ARCH_MEMMOVE
+#if defined(__SANITIZE_MEMORY__)
+#undef memmove
+void *__msan_memmove(void *dest, const void *src, size_t len);
+#define memmove __msan_memmove
+#else
 void *memmove(void *dest, const void *src, size_t count);
+#endif
 void *__memmove(void *dest, const void *src, size_t count);
 
 int memcmp(const void *cs, const void *ct, size_t count);
@@ -64,8 +82,7 @@ char *strcpy(char *dest, const char *src);
 char *strcat(char *dest, const char *src);
 int strcmp(const char *cs, const char *ct);
 
-#if defined(CONFIG_KASAN) && !defined(__SANITIZE_ADDRESS__)
-
+#if (defined(CONFIG_KASAN) && !defined(__SANITIZE_ADDRESS__))
 /*
  * For files that not instrumented (e.g. mm/slub.c) we
  * should use not instrumented version of mem* functions.
@@ -73,7 +90,9 @@ int strcmp(const char *cs, const char *ct);
 
 #undef memcpy
 #define memcpy(dst, src, len) __memcpy(dst, src, len)
+#undef memmove
 #define memmove(dst, src, len) __memmove(dst, src, len)
+#undef memset
 #define memset(s, c, n) __memset(s, c, n)
 
 #ifndef __NO_FORTIFY
diff --git a/include/linux/fortify-string.h b/include/linux/fortify-string.h
index 3b401fa0f3746..6c8a1a29d0b63 100644
--- a/include/linux/fortify-string.h
+++ b/include/linux/fortify-string.h
@@ -285,8 +285,10 @@ __FORTIFY_INLINE void fortify_memset_chk(__kernel_size_t size,
  * __builtin_object_size() must be captured here to avoid evaluating argument
  * side-effects further into the macro layers.
  */
+#ifndef CONFIG_KMSAN
 #define memset(p, c, s) __fortify_memset_chk(p, c, s,			\
 		__builtin_object_size(p, 0), __builtin_object_size(p, 1))
+#endif
 
 /*
  * To make sure the compiler can enforce protection against buffer overflows,
-- 
2.37.0.rc0.161.g10f37bed90-goog