Date: Wed, 23 Jun 2021 10:32:21 +0100
From: Catalin Marinas
To: Al Viro
Cc: Xiaoming Ni, Chen Huang, Andrew Morton, Stephen Rothwell,
	"Matthew Wilcox (Oracle)", Randy Dunlap, Will Deacon, Linux ARM,
	linux-mm, open list
Subject: Re: [BUG] arm64: an infinite loop in generic_perform_write()
Message-ID: <20210623093220.GA3718@arm.com>
References: <92fa298d-9d88-0ca4-40d9-13690dcd42f9@huawei.com>
User-Agent: Mutt/1.10.1 (2018-07-13)

On Wed, Jun 23, 2021 at 04:27:37AM +0000, Al Viro wrote:
> On Wed, Jun 23, 2021 at 11:24:54AM +0800, Xiaoming Ni wrote:
> > On 2021/6/23 10:50, Al Viro wrote:
> > > On Wed, Jun 23, 2021 at 10:39:31AM +0800, Chen Huang wrote:
> > > >
> > > > Then when the kernel handles the alignment fault, it will not
> > > > panic. As the arm64 memory model spec says, when the address is
> > > > not a multiple of the element size, the access is unaligned.
> > > > Unaligned accesses are allowed to addresses marked as Normal,
> > > > but not to Device regions. An unaligned access to a Device
> > > > region will trigger an exception (alignment fault).
> > > >
> > > > do_alignment_fault
> > > >     do_bad_area
> > > >         __do_kernel_fault
> > > >             fixup_exception
> > > >
> > > > But that fixup can't handle the unaligned copy, so
> > > > copy_page_from_iter_atomic() returns 0 and the write path is
> > > > stuck in a loop.
> > >
> > > Looks like you need to fix your raw_copy_from_user(), then...
> >
> > Exit the loop when iov_iter_copy_from_user_atomic() returns 0.
> > This should solve the problem, too, and it's easier.
>
> It might be easier, but it's not going to work correctly.
> If the page gets evicted by memory pressure, you are going
> to get a spurious short write.
>
> Besides, it's simply wrong - write(2) does *NOT* require an
> aligned source. It (and raw_copy_from_user()) should act the
> same way memcpy(3) does.

On arm64, neither memcpy() nor raw_copy_from_user() is expected to work
on Device mappings; we have memcpy_fromio() for that, but only for
ioremap(). There's no (easy) way to distinguish in the write() syscall
how the source buffer is mapped. generic_perform_write() does an
iov_iter_fault_in_readable() check, but that's not sufficient, and it
also breaks the cases where you can get intra-page faults (arm64 MTE or
SPARC ADI).
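
To make the failure mode concrete, here is a much-simplified sketch of
the loop in question (modelled on generic_perform_write() in
mm/filemap.c around this kernel version; ->write_begin()/->write_end(),
locking, signal checks and error handling are elided, so this is an
illustration, not the actual code):

	ssize_t generic_perform_write_sketch(struct file *file,
					     struct iov_iter *i, loff_t pos)
	{
		ssize_t written = 0;
		long status = 0;

		do {
			struct page *page = NULL;	/* from ->write_begin(), elided */
			unsigned long offset = pos & (PAGE_SIZE - 1);
			size_t bytes = min_t(size_t, PAGE_SIZE - offset,
					     iov_iter_count(i));
			size_t copied;
	again:
			/*
			 * Probe the source buffer. A Device mapping is present
			 * and readable as far as the page tables are concerned,
			 * so this check passes...
			 */
			if (unlikely(iov_iter_fault_in_readable(i, bytes))) {
				status = -EFAULT;
				break;
			}

			/*
			 * ...but the copy itself takes an alignment fault that
			 * the exception fixup cannot repair, so nothing is
			 * copied.
			 */
			copied = iov_iter_copy_from_user_atomic(page, i, offset,
								bytes);
			iov_iter_advance(i, copied);

			if (unlikely(copied == 0)) {
				/*
				 * Fall back to a single-segment write and retry.
				 * With a copy that can never make progress, this
				 * retries forever - the loop reported in this
				 * thread.
				 */
				bytes = min_t(unsigned long, PAGE_SIZE - offset,
					      iov_iter_single_seg_count(i));
				goto again;
			}
			pos += copied;
			written += copied;
		} while (iov_iter_count(i));

		return written ? written : status;
	}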
I think in the general case it's racy anyway (another thread doing an
mprotect(PROT_NONE) after the readable check passed). So I think
generic_perform_write() returning -EFAULT if copied == 0 would make
sense (well, unless it breaks other cases I'm not aware of).
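
To make that concrete, one possible shape for the bail-out (purely
illustrative and untested; the retried_once flag is an invented local
for this sketch, not something in mm/filemap.c):

		if (unlikely(copied == 0)) {
			/*
			 * The source was just faulted in and we still copied
			 * nothing. If the single-segment retry below was
			 * already taken, the fault is not fixable (e.g. an
			 * unaligned access to a Device mapping), so fail with
			 * -EFAULT instead of retrying forever.
			 */
			if (retried_once) {
				status = -EFAULT;
				break;
			}
			retried_once = true;
			bytes = min_t(unsigned long, PAGE_SIZE - offset,
				      iov_iter_single_seg_count(i));
			goto again;
		}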
--
Catalin