From: Alexander Potapenko
Date: Thu, 5 Dec 2019 17:00:27 +0100
Subject: Re: [PATCH RFC v3 27/36] kmsan: hooks for copy_to_user() and friends
To: Andrey Konovalov
Cc: Alexander Viro, Vegard Nossum, Dmitry Vyukov, Linux Memory Management
 List, Andreas Dilger, Andrew Morton, Andrey Ryabinin, Andy Lutomirski,
 Ard Biesheuvel, Arnd Bergmann, Christoph Hellwig, Christoph Hellwig,
 "Darrick J. Wong", "David S. Miller", Dmitry Torokhov, Eric Biggers,
 Eric Dumazet, Eric Van Hensbergen, Greg Kroah-Hartman, Harry Wentland,
 Herbert Xu, Ilya Leoshkevich, Ingo Molnar, Jason Wang, Jens Axboe,
 Marek Szyprowski, Marco Elver, Mark Rutland, "Martin K. Petersen",
 Martin Schwidefsky, Matthew Wilcox, "Michael S. Tsirkin", Michal Simek,
 Petr Mladek, Qian Cai, Randy Dunlap, Robin Murphy, Sergey Senozhatsky,
 Steven Rostedt, Takashi Iwai, "Theodore Ts'o", Thomas Gleixner,
 Vasily Gorbik, Wolfram Sang
References: <20191122112621.204798-1-glider@google.com> <20191122112621.204798-28-glider@google.com>

On Fri, Nov 29, 2019 at 4:34 PM Andrey Konovalov wrote:
>
> On Fri, Nov 22, 2019 at 12:27 PM wrote:
> >
> > Memory that is copied from userspace must be unpoisoned.
> > Before copying memory to userspace, check it and report an error if it
> > contains uninitialized bits.
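(A minimal sketch of the two hook directions described above; the wrapper
names below are made up for illustration, and the actual patch uses
kmsan_copy_to_user()/kmsan_check_memory() at the call sites shown in the
diff that follows:)

	/* Kernel <- userspace: unpoison whatever was actually copied. */
	static unsigned long sketch_copy_from_user(void *to,
						   const void __user *from,
						   unsigned long n)
	{
		/* raw_copy_from_user() returns the number of bytes NOT copied. */
		unsigned long left = raw_copy_from_user(to, from, n);

		kmsan_unpoison_shadow(to, n - left);
		return left;
	}

	/* Userspace <- kernel: report uninitialized bits before they leave. */
	static unsigned long sketch_copy_to_user(void __user *to,
						 const void *from,
						 unsigned long n)
	{
		kmsan_check_memory(from, n);
		return raw_copy_to_user(to, from, n);
	}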
> >
> > Signed-off-by: Alexander Potapenko
> > To: Alexander Potapenko
> > Cc: Alexander Viro
> > Cc: Vegard Nossum
> > Cc: Dmitry Vyukov
> > Cc: linux-mm@kvack.org
> > ---
> > v3:
> >  - fixed compilation errors reported by kbuild test bot
> >
> > Change-Id: I38428b9c7d1909b8441dcec1749b080494a7af99
> > ---
> >  arch/x86/include/asm/uaccess.h   | 12 ++++++++++++
> >  include/asm-generic/cacheflush.h |  7 ++++++-
> >  include/asm-generic/uaccess.h    | 12 ++++++++++--
> >  include/linux/uaccess.h          | 32 +++++++++++++++++++++++++++-----
> >  lib/iov_iter.c                   |  6 ++++++
> >  lib/usercopy.c                   |  6 +++++-
> >  6 files changed, 66 insertions(+), 9 deletions(-)
> >
> > diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
> > index 61d93f062a36..ac4b26583f7c 100644
> > --- a/arch/x86/include/asm/uaccess.h
> > +++ b/arch/x86/include/asm/uaccess.h
> > @@ -6,6 +6,7 @@
> >   */
> >  #include
> >  #include
> > +#include
> >  #include
> >  #include
> >  #include
> > @@ -174,6 +175,7 @@ __typeof__(__builtin_choose_expr(sizeof(x) > sizeof(0UL), 0ULL, 0UL))
> >                      ASM_CALL_CONSTRAINT                        \
> >                      : "0" (ptr), "i" (sizeof(*(ptr))));        \
> >         (x) = (__force __typeof__(*(ptr))) __val_gu;            \
> > +       kmsan_unpoison_shadow(&(x), sizeof(*(ptr)));            \
> >         __builtin_expect(__ret_gu, 0);                          \
> >  })
> >
> > @@ -248,6 +250,7 @@ extern void __put_user_8(void);
> >         __chk_user_ptr(ptr);                                    \
> >         might_fault();                                          \
> >         __pu_val = x;                                           \
> > +       kmsan_check_memory(&(__pu_val), sizeof(*(ptr)));        \
> >         switch (sizeof(*(ptr))) {                               \
> >         case 1:                                                 \
> >                 __put_user_x(1, __pu_val, ptr, __ret_pu);       \
> > @@ -270,7 +273,9 @@ extern void __put_user_8(void);
> >
> >  #define __put_user_size(x, ptr, size, label)                   \
> >  do {                                                           \
> > +       __typeof__(*(ptr)) __pus_val = x;                       \
> >         __chk_user_ptr(ptr);                                    \
> > +       kmsan_check_memory(&(__pus_val), size);                 \
> >         switch (size) {                                         \
> >         case 1:                                                 \
> >                 __put_user_goto(x, ptr, "b", "b", "iq", label); \
> > @@ -295,7 +300,10 @@ do {                                       \
> >   */
> >  #define __put_user_size_ex(x, ptr, size)                       \
> >  do {                                                           \
> > +       __typeof__(*(ptr)) __puse_val;                          \
>
> Can we do = x here?

Yes. Fixed, thanks!

> >         __chk_user_ptr(ptr);                                    \
> > +       __puse_val = x;                                         \
> > +       kmsan_check_memory(&(__puse_val), size);                \
> >         switch (size) {                                         \
> >         case 1:                                                 \
> >                 __put_user_asm_ex(x, ptr, "b", "b", "iq");      \
> > @@ -363,6 +371,7 @@ do {                                        \
> >         default:                                                \
> >                 (x) = __get_user_bad();                         \
> >         }                                                       \
> > +       kmsan_unpoison_shadow(&(x), size);                      \
> >  } while (0)
> >
> >  #define __get_user_asm(x, addr, err, itype, rtype, ltype, errret) \
> > @@ -413,6 +422,7 @@ do {                                        \
> >         default:                                                \
> >                 (x) = __get_user_bad();                         \
> >         }                                                       \
> > +       kmsan_unpoison_shadow(&(x), size);                      \
> >  } while (0)
> >
> >  #define __get_user_asm_ex(x, addr, itype, rtype, ltype)        \
> > @@ -428,11 +438,13 @@ do {                                      \
> >  #define __put_user_nocheck(x, ptr, size)                       \
> >  ({                                                             \
> >         __label__ __pu_label;                                   \
> > +       __typeof__(*(ptr)) __pun_val = x;                       \
>
> Not sure if this matters, but two lines below do (x).

Right.

> Also, why can't we use __pu_val instead of defining __pun_val?

Will do.
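(Aside, to make the review point concrete: the typed local exists so that
the macro argument is evaluated exactly once and so that
kmsan_check_memory() gets an lvalue whose address can be taken; `&(x)`
would not compile when a caller passes an rvalue such as `a + b`. The
macro below is hypothetical, for illustration only:)

	#define put_user_checked(x, ptr)                                \
	({                                                              \
		__typeof__(*(ptr)) __val = (x); /* evaluate x once */   \
		kmsan_check_memory(&__val, sizeof(__val));              \
		put_user(__val, ptr);                                   \
	})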
> >
> >         int __pu_err = -EFAULT;                                 \
> >         __typeof__(*(ptr)) __pu_val = (x);                      \
> >         __typeof__(ptr) __pu_ptr = (ptr);                       \
> >         __typeof__(size) __pu_size = (size);                    \
> >         __uaccess_begin();                                      \
> > +       kmsan_check_memory(&(__pun_val), size);                 \
> >         __put_user_size(__pu_val, __pu_ptr, __pu_size, __pu_label); \
> >         __pu_err = 0;                                           \
> > __pu_label:                                                     \
> >
> > diff --git a/include/asm-generic/cacheflush.h b/include/asm-generic/cacheflush.h
> > index a950a22c4890..707531dccf5e 100644
> > --- a/include/asm-generic/cacheflush.h
> > +++ b/include/asm-generic/cacheflush.h
> > @@ -4,6 +4,7 @@
> >
> >  /* Keep includes the same across arches. */
> >  #include
> > +#include
> >
> >  #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 0
> >
> > @@ -72,10 +73,14 @@ static inline void flush_cache_vunmap(unsigned long start, unsigned long end)
> >
> >  #define copy_to_user_page(vma, page, vaddr, dst, src, len) \
> >         do { \
> > +               kmsan_check_memory(src, len); \
> >                 memcpy(dst, src, len); \
> >                 flush_icache_user_range(vma, page, vaddr, len); \
> >         } while (0)
> >  #define copy_from_user_page(vma, page, vaddr, dst, src, len) \
> > -       memcpy(dst, src, len)
> > +       do { \
> > +               memcpy(dst, src, len); \
> > +               kmsan_unpoison_shadow(dst, len); \
> > +       } while (0)
> >
> >  #endif /* __ASM_CACHEFLUSH_H */
> > diff --git a/include/asm-generic/uaccess.h b/include/asm-generic/uaccess.h
> > index e935318804f8..508ee649aeef 100644
> > --- a/include/asm-generic/uaccess.h
> > +++ b/include/asm-generic/uaccess.h
> > @@ -142,7 +142,11 @@ static inline int __access_ok(unsigned long addr, unsigned long size)
> >
> >  static inline int __put_user_fn(size_t size, void __user *ptr, void *x)
> >  {
> > -       return unlikely(raw_copy_to_user(ptr, x, size)) ? -EFAULT : 0;
> > +       int n;
> > +
> > +       n = raw_copy_to_user(ptr, x, size);
> > +       kmsan_copy_to_user(ptr, x, size, n);
> > +       return unlikely(n) ? -EFAULT : 0;
> >  }
> >
> >  #define __put_user_fn(sz, u, k) __put_user_fn(sz, u, k)
> > @@ -203,7 +207,11 @@ extern int __put_user_bad(void) __attribute__((noreturn));
> >  #ifndef __get_user_fn
> >  static inline int __get_user_fn(size_t size, const void __user *ptr, void *x)
> >  {
> > -       return unlikely(raw_copy_from_user(x, ptr, size)) ? -EFAULT : 0;
> > +       int copied, to_copy = size;
> > +
> > +       copied = raw_copy_from_user(x, ptr, size);
> > +       kmsan_unpoison_shadow(x, to_copy - copied);
> > +       return unlikely(copied) ? -EFAULT : 0;
> >  }
> >
> >  #define __get_user_fn(sz, u, k) __get_user_fn(sz, u, k)
> > diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h
> > index d4ee6e942562..7550d11a8077 100644
> > --- a/include/linux/uaccess.h
> > +++ b/include/linux/uaccess.h
> > @@ -5,6 +5,7 @@
> >  #include
> >  #include
> >  #include
> > +#include
> >
> >  #define uaccess_kernel() segment_eq(get_fs(), KERNEL_DS)
> >
> > @@ -58,18 +59,26 @@
> >  static __always_inline __must_check unsigned long
> >  __copy_from_user_inatomic(void *to, const void __user *from, unsigned long n)
> >  {
> > +       unsigned long to_copy = n;
> > +
> >         kasan_check_write(to, n);
> >         check_object_size(to, n, false);
> > -       return raw_copy_from_user(to, from, n);
> > +       n = raw_copy_from_user(to, from, n);
> > +       kmsan_unpoison_shadow(to, to_copy - n);
> > +       return n;
> >  }
> >
> >  static __always_inline __must_check unsigned long
> >  __copy_from_user(void *to, const void __user *from, unsigned long n)
> >  {
> > +       unsigned long to_copy = n;
>
> This is confusing. I think we need a var for raw_copy_from_user()
> return value instead. Same in functions above and below.
raw_copy_from_user() returns the number of bytes _not_ copied from
userspace. So in the case where it returns 0, we need to unpoison all
to_copy bytes.

> > +
> >         might_fault();
> >         kasan_check_write(to, n);
> >         check_object_size(to, n, false);
> > -       return raw_copy_from_user(to, from, n);
> > +       n = raw_copy_from_user(to, from, n);
> > +       kmsan_unpoison_shadow(to, to_copy - n);
> > +       return n;
> >  }
> >
> >  /**
> > @@ -88,29 +97,39 @@ __copy_from_user(void *to, const void __user *from, unsigned long n)
> >  static __always_inline __must_check unsigned long
> >  __copy_to_user_inatomic(void __user *to, const void *from, unsigned long n)
> >  {
> > +       unsigned long to_copy = n;
> > +
> >         kasan_check_read(from, n);
> >         check_object_size(from, n, true);
> > -       return raw_copy_to_user(to, from, n);
> > +       n = raw_copy_to_user(to, from, n);
> > +       kmsan_copy_to_user((const void *)to, from, to_copy, n);
> > +       return n;
> >  }
> >
> >  static __always_inline __must_check unsigned long
> >  __copy_to_user(void __user *to, const void *from, unsigned long n)
> >  {
> > +       unsigned long to_copy = n;
> > +
> >         might_fault();
> >         kasan_check_read(from, n);
> >         check_object_size(from, n, true);
> > -       return raw_copy_to_user(to, from, n);
> > +       n = raw_copy_to_user(to, from, n);
> > +       kmsan_copy_to_user((const void *)to, from, to_copy, n);
> > +       return n;
> >  }
> >
> >  #ifdef INLINE_COPY_FROM_USER
> >  static inline __must_check unsigned long
> >  _copy_from_user(void *to, const void __user *from, unsigned long n)
> >  {
> > -       unsigned long res = n;
> > +       unsigned long res = n, to_copy = n;
> > +
> >         might_fault();
> >         if (likely(access_ok(from, n))) {
> >                 kasan_check_write(to, n);
> >                 res = raw_copy_from_user(to, from, n);
> > +               kmsan_unpoison_shadow(to, to_copy - res);
> >         }
> >         if (unlikely(res))
> >                 memset(to + (n - res), 0, res);
> > @@ -125,10 +144,13 @@ _copy_from_user(void *, const void __user *, unsigned long);
> >  static inline __must_check unsigned long
> >  _copy_to_user(void __user *to, const void *from, unsigned long n)
> >  {
> > +       unsigned long to_copy = n;
> > +
> >         might_fault();
> >         if (access_ok(to, n)) {
> >                 kasan_check_read(from, n);
> >                 n = raw_copy_to_user(to, from, n);
> > +               kmsan_copy_to_user(to, from, to_copy, n);
> >         }
> >         return n;
> >  }
> > diff --git a/lib/iov_iter.c b/lib/iov_iter.c
> > index 639d5e7014c1..f038676068b2 100644
> > --- a/lib/iov_iter.c
> > +++ b/lib/iov_iter.c
> > @@ -137,18 +137,24 @@
> >
> >  static int copyout(void __user *to, const void *from, size_t n)
> >  {
> > +       size_t to_copy = n;
> > +
> >         if (access_ok(to, n)) {
> >                 kasan_check_read(from, n);
> >                 n = raw_copy_to_user(to, from, n);
> > +               kmsan_copy_to_user(to, from, to_copy, n);
> >         }
> >         return n;
> >  }
> >
> >  static int copyin(void *to, const void __user *from, size_t n)
> >  {
> > +       size_t to_copy = n;
> > +
> >         if (access_ok(from, n)) {
> >                 kasan_check_write(to, n);
> >                 n = raw_copy_from_user(to, from, n);
> > +               kmsan_unpoison_shadow(to, to_copy - n);
> >         }
> >         return n;
> >  }
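(To make the arithmetic above concrete, with hypothetical values not
taken from the patch: if n == 128 and raw_copy_from_user() leaves 16
bytes uncopied, the first 112 bytes of the kernel buffer were actually
written and are the ones that become initialized:)

	left = raw_copy_from_user(to, from, n);  /* n == 128, left == 16 */
	kmsan_unpoison_shadow(to, n - left);     /* unpoison the 112 copied bytes */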
> > diff --git a/lib/usercopy.c b/lib/usercopy.c
> > index cbb4d9ec00f2..abfd93edecba 100644
> > --- a/lib/usercopy.c
> > +++ b/lib/usercopy.c
> > @@ -1,4 +1,5 @@
> >  // SPDX-License-Identifier: GPL-2.0
> > +#include
> >  #include
> >  #include
> >
> > @@ -7,11 +8,12 @@
> >  #ifndef INLINE_COPY_FROM_USER
> >  unsigned long _copy_from_user(void *to, const void __user *from, unsigned long n)
> >  {
> > -       unsigned long res = n;
> > +       unsigned long res = n, to_copy = n;
> >         might_fault();
> >         if (likely(access_ok(from, n))) {
> >                 kasan_check_write(to, n);
> >                 res = raw_copy_from_user(to, from, n);
> > +               kmsan_unpoison_shadow(to, to_copy - res);
> >         }
> >         if (unlikely(res))
> >                 memset(to + (n - res), 0, res);
> > @@ -23,10 +25,12 @@ EXPORT_SYMBOL(_copy_from_user);
> >  #ifndef INLINE_COPY_TO_USER
> >  unsigned long _copy_to_user(void __user *to, const void *from, unsigned long n)
> >  {
> > +       unsigned long to_copy = n;
> >         might_fault();
> >         if (likely(access_ok(to, n))) {
> >                 kasan_check_read(from, n);
> >                 n = raw_copy_to_user(to, from, n);
> > +               kmsan_copy_to_user(to, from, to_copy, n);
> >         }
> >         return n;
> >  }
> > --
> > 2.24.0.432.g9d3f5f5b63-goog
> >

-- 
Alexander Potapenko
Software Engineer

Google Germany GmbH
Erika-Mann-Straße, 33
80636 München

Geschäftsführer: Paul Manicle, Halimah DeLaine Prado
Registergericht und -nummer: Hamburg, HRB 86891
Sitz der Gesellschaft: Hamburg