From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 28 Apr 2020 20:40:01 +0100
From: Catalin Marinas
To: Kevin Brodsky
Cc: linux-arm-kernel@lists.infradead.org, Will Deacon, Vincenzo Frascino,
	Szabolcs Nagy, Richard Earnshaw, Andrey Konovalov, Peter Collingbourne,
	linux-mm@kvack.org, linux-arch@vger.kernel.org, Alexander Viro
Subject: Re: [PATCH v3 20/23] fs: Allow copy_mount_options() to access user-space in a single pass
Message-ID: <20200428194001.GB35158@C02TF0J2HF1T.local>
References: <20200421142603.3894-1-catalin.marinas@arm.com>
	<20200421142603.3894-21-catalin.marinas@arm.com>
	<9544d86b-d445-3497-fbbf-56c590400f83@arm.com>
In-Reply-To: <9544d86b-d445-3497-fbbf-56c590400f83@arm.com>

On Tue, Apr 28, 2020 at 07:16:29PM +0100, Kevin Brodsky wrote:
> On 21/04/2020 15:26, Catalin Marinas wrote:
> > diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
> > index 32fc8061aa76..566da441eba2 100644
> > --- a/arch/arm64/include/asm/uaccess.h
> > +++ b/arch/arm64/include/asm/uaccess.h
> > @@ -416,6 +416,17 @@ extern unsigned long __must_check __arch_copy_in_user(void __user *to, const void __user *from, unsigned long n);
> >  #define INLINE_COPY_TO_USER
> >  #define INLINE_COPY_FROM_USER
> > +static inline bool arch_has_exact_copy_from_user(unsigned long n)
> > +{
> > +	/*
> > +	 * copy_from_user() aligns the source pointer if the size is greater
> > +	 * than 15. Since all the loads are naturally aligned, they can only
> > +	 * fail on the first byte.
> > +	 */
> > +	return n > 15;
> > +}
> > +#define arch_has_exact_copy_from_user
> > +
> >  extern unsigned long __must_check __arch_clear_user(void __user *to, unsigned long n);
> >  static inline unsigned long __must_check __clear_user(void __user *to, unsigned long n)
> >  {
> > diff --git a/fs/namespace.c b/fs/namespace.c
> > index a28e4db075ed..8febc50dfc5d 100644
> > --- a/fs/namespace.c
> > +++ b/fs/namespace.c
> > @@ -3025,13 +3025,16 @@ void *copy_mount_options(const void __user * data)
> >  	if (!copy)
> >  		return ERR_PTR(-ENOMEM);
> > -	size = PAGE_SIZE - offset_in_page(data);
> > +	size = PAGE_SIZE;
> > +	if (!arch_has_exact_copy_from_user(size))
> > +		size -= offset_in_page(data);
> > -	if (copy_from_user(copy, data, size)) {
> > +	if (copy_from_user(copy, data, size) == size) {
> >  		kfree(copy);
> >  		return ERR_PTR(-EFAULT);
> >  	}
> >  	if (size != PAGE_SIZE) {
> > +		WARN_ON(1);
> 
> I'm not sure I understand the rationale here. If we don't have exact
> copy_from_user for size, then we will attempt to copy up to the end of the
> page. Assuming this doesn't fault, we then want to carry on copying from the
> start of the next page, until we reach a total size of up to 4K. Why would
> we warn in that case?

We shouldn't warn, thanks for spotting this. I added it for some testing
and it somehow ended up in the commit.

-- 
Catalin