From: Marco Elver
Date: Tue, 12 Jul 2022 15:51:31 +0200
Subject: Re: [PATCH v4 18/45] instrumented.h: add KMSAN support
To: Alexander Potapenko
References: <20220701142310.2188015-1-glider@google.com> <20220701142310.2188015-19-glider@google.com>
In-Reply-To: <20220701142310.2188015-19-glider@google.com>
Cc: Alexander Viro, Alexei Starovoitov, Andrew Morton, Andrey Konovalov,
 Andy Lutomirski, Arnd Bergmann, Borislav Petkov, Christoph Hellwig,
 Christoph Lameter, David Rientjes, Dmitry Vyukov, Eric Dumazet,
 Greg Kroah-Hartman, Herbert Xu, Ilya Leoshkevich, Ingo Molnar, Jens Axboe,
 Joonsoo Kim, Kees Cook, Mark Rutland, Matthew Wilcox, "Michael S.
 Tsirkin", Pekka Enberg, Peter Zijlstra, Petr Mladek, Steven Rostedt,
 Thomas Gleixner, Vasily Gorbik, Vegard Nossum, Vlastimil Babka,
 kasan-dev@googlegroups.com, linux-mm@kvack.org, linux-arch@vger.kernel.org,
 linux-kernel@vger.kernel.org

On Fri, 1 Jul 2022 at 16:24, Alexander Potapenko wrote:
>
> To avoid false positives, KMSAN needs to unpoison the data copied from
> userspace. To detect infoleaks, check the memory buffer passed to
> copy_to_user().
>
> Signed-off-by: Alexander Potapenko

Reviewed-by: Marco Elver

With the code simplification below.

[...]

> --- a/mm/kmsan/hooks.c
> +++ b/mm/kmsan/hooks.c
> @@ -212,6 +212,44 @@ void kmsan_iounmap_page_range(unsigned long start, unsigned long end)
> }
> EXPORT_SYMBOL(kmsan_iounmap_page_range);
>
> +void kmsan_copy_to_user(void __user *to, const void *from, size_t to_copy,
> +			size_t left)
> +{
> +	unsigned long ua_flags;
> +
> +	if (!kmsan_enabled || kmsan_in_runtime())
> +		return;
> +	/*
> +	 * At this point we've copied the memory already. It's hard to check it
> +	 * before copying, as the size of the actually copied buffer is unknown.
> +	 */
> +
> +	/* copy_to_user() may copy zero bytes. No need to check. */
> +	if (!to_copy)
> +		return;
> +	/* Or maybe copy_to_user() failed to copy anything. */
> +	if (to_copy <= left)
> +		return;
> +
> +	ua_flags = user_access_save();
> +	if ((u64)to < TASK_SIZE) {
> +		/* This is a user memory access, check it. */
> +		kmsan_internal_check_memory((void *)from, to_copy - left, to,
> +					    REASON_COPY_TO_USER);

This could just do "} else {" and the stuff below, which would result in
simpler code with no explicit "return" and no duplicated
user_access_restore(). (Rough sketch at the end of this mail.)

> +		user_access_restore(ua_flags);
> +		return;
> +	}
> +	/* Otherwise this is a kernel memory access. This happens when a compat
> +	 * syscall passes an argument allocated on the kernel stack to a real
> +	 * syscall.
> +	 * Don't check anything, just copy the shadow of the copied bytes.
> +	 */
> +	kmsan_internal_memmove_metadata((void *)to, (void *)from,
> +					to_copy - left);
> +	user_access_restore(ua_flags);
> +}
> +EXPORT_SYMBOL(kmsan_copy_to_user);
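For reference, an untested sketch of the simplification I mean, using the
same calls as the quoted hunk, just restructured into an if/else:

	void kmsan_copy_to_user(void __user *to, const void *from, size_t to_copy,
				size_t left)
	{
		unsigned long ua_flags;

		if (!kmsan_enabled || kmsan_in_runtime())
			return;
		/* copy_to_user() may copy zero bytes. No need to check. */
		if (!to_copy)
			return;
		/* Or maybe copy_to_user() failed to copy anything. */
		if (to_copy <= left)
			return;

		ua_flags = user_access_save();
		if ((u64)to < TASK_SIZE) {
			/* This is a user memory access, check it. */
			kmsan_internal_check_memory((void *)from, to_copy - left, to,
						    REASON_COPY_TO_USER);
		} else {
			/*
			 * Kernel memory access (e.g. a compat syscall passing an
			 * argument allocated on the kernel stack to a real
			 * syscall): don't check anything, just copy the shadow
			 * of the copied bytes.
			 */
			kmsan_internal_memmove_metadata((void *)to, (void *)from,
							to_copy - left);
		}
		user_access_restore(ua_flags);
	}

That keeps a single exit path after user_access_save(), so the
save/restore pairing is easier to verify.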