Date: Wed, 27 Oct 2021 18:17:08 +0200
From: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
To: Chen Huang
Cc: Catalin Marinas, Will Deacon, Robin Murphy,
	linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	stable@vger.kernel.org, linux-mm@kvack.org, Al Viro
Subject: Re: [PATCH 5.10.y] arm64: Avoid premature usercopy failure
References: <20211027014047.2317325-1-chenhuang5@huawei.com>
In-Reply-To: <20211027014047.2317325-1-chenhuang5@huawei.com>

On Wed, Oct 27, 2021 at 01:40:47AM +0000, Chen Huang wrote:
> From: Robin Murphy
>
> commit 295cf156231ca3f9e3a66bde7fab5e09c41835e0 upstream.
>
> Al reminds us that the usercopy API must only return complete failure
> if absolutely nothing could be copied. Currently, if userspace does
> something silly like giving us an unaligned pointer to Device memory,
> or a size which overruns MTE tag bounds, we may fail to honour that
> requirement when faulting on a multi-byte access even though a smaller
> access could have succeeded.
>
> Add a mitigation to the fixup routines to fall back to a single-byte
> copy if we faulted on a larger access before anything has been written
> to the destination, to guarantee making *some* forward progress. We
> needn't be too concerned about the overall performance since this should
> only occur when callers are doing something a bit dodgy in the first
> place. Particularly broken userspace might still be able to trick
> generic_perform_write() into an infinite loop by targeting write() at
> an mmap() of some read-only device register where the fault-in load
> succeeds but any store synchronously aborts such that copy_to_user() is
> genuinely unable to make progress, but, well, don't do that...
>
> Cc: stable@vger.kernel.org
> Reported-by: Chen Huang
> Suggested-by: Al Viro
> Reviewed-by: Catalin Marinas
> Signed-off-by: Robin Murphy
> Link: https://lore.kernel.org/r/dc03d5c675731a1f24a62417dba5429ad744234e.1626098433.git.robin.murphy@arm.com
> Signed-off-by: Will Deacon
> Signed-off-by: Chen Huang
> ---
>  arch/arm64/lib/copy_from_user.S | 13 ++++++++++---
>  arch/arm64/lib/copy_in_user.S   | 21 ++++++++++++++-------
>  arch/arm64/lib/copy_to_user.S   | 14 +++++++++++---
>  3 files changed, 35 insertions(+), 13 deletions(-)

Both now queued up, thanks.

greg k-h
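For readers following along: the fallback behaviour the commit message describes can be modelled in plain C. This is an illustrative userspace sketch, not the kernel's arm64 assembly fixup code; copy_with_fallback(), fault_limit, and the fault model are all invented for demonstration. Like copy_from_user(), it returns the number of bytes it could NOT copy, and it only reports total failure when not even a single byte could be transferred:

```c
#include <stddef.h>
#include <string.h>

/*
 * Faults are modelled as: any access that touches a byte at or beyond
 * fault_limit aborts before writing anything. The real routines fault
 * via the MMU/MTE and recover through the exception fixup tables.
 */
static size_t copy_with_fallback(char *dst, const char *src, size_t n,
                                 const char *fault_limit)
{
    size_t done = 0;

    while (done < n) {
        /* Prefer a wide (8-byte) access, as the real copy routines do. */
        size_t chunk = (n - done >= 8) ? 8 : 1;

        if (src + done + chunk <= fault_limit) {
            memcpy(dst + done, src + done, chunk);
            done += chunk;
        } else if (src + done < fault_limit) {
            /*
             * The wide access would fault, but its first byte is still
             * readable: fall back to a single-byte copy so the caller
             * always sees *some* forward progress.
             */
            dst[done] = src[done];
            done += 1;
        } else {
            /* Even one byte faults: genuinely nothing more to copy. */
            break;
        }
    }
    return n - done;
}
```

With a fault injected 5 bytes into a 10-byte copy, this model still transfers the first 5 bytes and reports 5 bytes uncopied, whereas a wide-access-only path could fault on the first 8-byte load and wrongly report that nothing was copied at all.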