Subject: Re: [BUG] arm64: an infinite loop in generic_perform_write()
From: Robin Murphy
Date: Thu, 24 Jun 2021 21:36:54 +0100
To: Catalin Marinas, Al Viro
Cc: Matthew Wilcox, Christoph Hellwig, Chen Huang, Mark Rutland,
 Andrew Morton, Stephen Rothwell, Randy Dunlap, Will Deacon,
 Linux ARM, linux-mm, open list
References: <20210623132223.GA96264@C02TD0UTHF1T.local>
 <1c635945-fb25-8871-7b34-f475f75b2caf@huawei.com>
 <27fbb8c1-2a65-738f-6bec-13f450395ab7@arm.com>
 <20210624185554.GC25097@arm.com>
In-Reply-To: <20210624185554.GC25097@arm.com>

On 2021-06-24 19:55, Catalin Marinas wrote:
> On Thu, Jun 24, 2021 at 04:27:17PM +0000, Al Viro wrote:
>> On Thu, Jun 24, 2021 at 02:22:27PM +0100, Robin Murphy wrote:
>>> FWIW I think the only way to make the kernel behaviour any more
>>> robust here would be to make the whole uaccess API more expressive,
>>> such that rather than simply saying "I only got this far" it could
>>> actually differentiate between stopping due to a fault which may be
>>> recoverable and worth retrying, and one which definitely isn't.
>>
>> ... and propagate that "more expressive" information through what, 3 or
>> 4 levels in the call chain?
>>
>> From include/linux/uaccess.h:
>>
>> * If raw_copy_{to,from}_user(to, from, size) returns N, size - N bytes starting
>> * at to must become equal to the bytes fetched from the corresponding area
>> * starting at from. All data past to + size - N must be left unmodified.
>> *
>> * If copying succeeds, the return value must be 0. If some data cannot be
>> * fetched, it is permitted to copy less than had been fetched; the only
>> * hard requirement is that not storing anything at all (i.e. returning size)
>> * should happen only when nothing could be copied. In other words, you don't
>> * have to squeeze as much as possible - it is allowed, but not necessary.
>>
>> arm64 instances violate the aforementioned hard requirement.
>
> After reading the above a few more times, I think I get it. The key
> sentence is: not storing anything at all should happen only when
> nothing could be copied. In the MTE case, something can still be
> copied.
>
>> Please, fix it there; it's not hard. All you need is an exception
>> handler in .Ltiny15 that falls back to a (short) byte-by-byte copy if
>> the faulting address happens to be unaligned. Or just do a one-byte
>> copy - not that a loop would be considerably cheaper. Either will be
>> cheaper than propagating that extra information up the call chain, let
>> alone paying for an extra ->write_begin() and ->write_end() for a
>> single byte in generic_perform_write().
>
> Yeah, it's definitely fixable in the arch code. I misread the above
> requirements and thought it could be fixed in the core code.
>
> Quick hack, though I think doing it in the actual exception handling
> path in .S makes more sense (and it needs the same on the copy_to_user
> side for symmetry):

Hmm, if anything the asm version might be even more straightforward; I
think it's pretty much just this (untested):

diff --git a/arch/arm64/lib/copy_to_user.S b/arch/arm64/lib/copy_to_user.S
index 043da90f5dd7..632bf1f9540d 100644
--- a/arch/arm64/lib/copy_to_user.S
+++ b/arch/arm64/lib/copy_to_user.S
@@ -62,6 +62,9 @@ EXPORT_SYMBOL(__arch_copy_to_user)
 	.section .fixup,"ax"
 	.align	2
-9998:	sub	x0, end, dst			// bytes not copied
+9998:	ldrb	w7, [x1]
+USER(9997f, sttrb w7, [x0])
+	add	x0, x0, #1
+9997:	sub	x0, end, dst			// bytes not copied
 	ret
 	.previous

If we can get away without trying to finish the whole copy bytewise
(i.e. we don't cause any faults of our own by knowingly over-reading in
the routine itself), I'm more than happy with that.

Robin.

> diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
> index b5f08621fa29..903f8a2a457b 100644
> --- a/arch/arm64/include/asm/uaccess.h
> +++ b/arch/arm64/include/asm/uaccess.h
> @@ -415,6 +415,15 @@ extern unsigned long __must_check __arch_copy_from_user(void *to, const void __u
>  		uaccess_ttbr0_enable();					\
>  		__acfu_ret = __arch_copy_from_user((to),		\
>  				__uaccess_mask_ptr(from), (n));		\
> +		if (__acfu_ret == n) {					\
> +			int __cfu_err = 0;				\
> +			char __cfu_val;					\
> +			__raw_get_mem("ldtr", __cfu_val, (char *)from, __cfu_err);\
> +			if (!__cfu_err) {				\
> +				*(char *)to = __cfu_val;		\
> +				__acfu_ret--;				\
> +			}						\
> +		}							\
>  		uaccess_ttbr0_disable();				\
>  		__acfu_ret;						\
>  	})
>
> Of course, it only fixes the MTE problem; I'll ignore the MMIO case
> (though it may work in certain configurations like synchronous faults).
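
For reference, the loop being livelocked here is the retry loop in
generic_perform_write(). Below is a stripped-down paraphrase of the
5.13-era code in mm/filemap.c - declarations, locking and error paths
are elided, so treat it as illustrative rather than the exact source:

do {
	struct page *page;
	unsigned long offset = pos & (PAGE_SIZE - 1);
	size_t bytes = min_t(size_t, PAGE_SIZE - offset, iov_iter_count(i));
	size_t copied;
again:
	/*
	 * Probes only the first byte of each page in the range, so it
	 * succeeds even when an MTE tag check fault is waiting mid-page.
	 */
	if (unlikely(iov_iter_fault_in_readable(i, bytes))) {
		status = -EFAULT;
		break;
	}

	status = a_ops->write_begin(file, mapping, pos, bytes, 0,
				    &page, &fsdata);
	if (unlikely(status < 0))
		break;

	copied = iov_iter_copy_from_user_atomic(page, i, offset, bytes);
	status = a_ops->write_end(file, mapping, pos, bytes, copied,
				  page, fsdata);

	iov_iter_advance(i, copied);
	if (unlikely(copied == 0)) {
		/*
		 * This narrowing retry assumes the copy stopped because
		 * bytes was too optimistic. If the arch routine reports
		 * zero progress even though the first byte was copyable,
		 * the fault-in above keeps succeeding and this loops
		 * forever.
		 */
		bytes = min_t(size_t, PAGE_SIZE - offset,
			      iov_iter_single_seg_count(i));
		goto again;
	}
	pos += copied;
	written += copied;
} while (iov_iter_count(i));

This is why the hard requirement quoted from uaccess.h matters: if the
arch routine always makes at least one byte of progress whenever the
first byte is accessible, then copied == 0 implies the next fault-in
must also fail, and the loop terminates with -EFAULT.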
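
For completeness, here is the rough shape of a userspace trigger,
pieced together from the reports earlier in the thread. This is a
sketch under stated assumptions, not a verified reproducer: it assumes
an MTE-capable CPU and kernel, the output path and the mid-page tag
boundary are arbitrary choices, and the fallback #define values are
copied from the 5.13-era uapi headers:

/* cc -O2 -march=armv8.5-a+memtag mte-write.c */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/prctl.h>
#include <unistd.h>

#ifndef PROT_MTE
#define PROT_MTE		0x20		/* arm64 asm/mman.h */
#endif
#ifndef PR_SET_TAGGED_ADDR_CTRL
#define PR_SET_TAGGED_ADDR_CTRL	55		/* linux/prctl.h */
#define PR_TAGGED_ADDR_ENABLE	(1UL << 0)
#endif
#ifndef PR_MTE_TCF_SYNC
#define PR_MTE_TCF_SYNC		(1UL << 1)	/* linux/prctl.h */
#endif

int main(void)
{
	long page = sysconf(_SC_PAGESIZE);

	/* Opt in to tagged addresses with synchronous tag check faults,
	 * so the kernel's uaccess to this buffer is tag-checked too. */
	if (prctl(PR_SET_TAGGED_ADDR_CTRL,
		  PR_TAGGED_ADDR_ENABLE | PR_MTE_TCF_SYNC, 0, 0, 0))
		return 1;

	char *buf = mmap(NULL, page, PROT_READ | PROT_WRITE | PROT_MTE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED)
		return 1;
	memset(buf, 'x', page);

	/* Retag the second half of the page with tag 1 while the pointer
	 * handed to write() keeps tag 0, so the kernel's copy_from_user()
	 * takes a tag check fault partway through the buffer. */
	char *t1 = (char *)((unsigned long)buf | (1UL << 56));
	for (long off = page / 2; off < page; off += 16)
		asm volatile("stg %0, [%0]" :: "r"(t1 + off) : "memory");

	int fd = open("/tmp/mte-test", O_CREAT | O_TRUNC | O_WRONLY, 0600);
	if (fd < 0)
		return 1;

	/* Misalign the copy so the faulting access straddles the tag
	 * boundary. On the broken kernels discussed above this write()
	 * was reported to spin forever; a fixed kernel returns a short
	 * count instead. */
	ssize_t n = write(fd, buf + 1, page - 1);
	printf("write returned %zd\n", n);
	return 0;
}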