From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Jeff Moyer, Andrew Morton,
    "Kirill A. Shutemov", Justin He, Dan Williams, Linus Torvalds,
    Sasha Levin
Subject: [PATCH 4.19 100/245] mm: avoid data corruption on CoW fault into PFN-mapped VMA
Date: Tue, 29 Sep 2020 12:59:11 +0200
Message-Id: <20200929105951.867784718@linuxfoundation.org>
In-Reply-To: <20200929105946.978650816@linuxfoundation.org>
References: <20200929105946.978650816@linuxfoundation.org>
X-Mailing-List: stable@vger.kernel.org

From: Kirill A. Shutemov

[ Upstream commit c3e5ea6ee574ae5e845a40ac8198de1fb63bb3ab ]

Jeff Moyer has reported that one of the xfstests triggers a warning when
run on a DAX-enabled filesystem:

 WARNING: CPU: 76 PID: 51024 at mm/memory.c:2317 wp_page_copy+0xc40/0xd50
 ...
 wp_page_copy+0x98c/0xd50 (unreliable)
 do_wp_page+0xd8/0xad0
 __handle_mm_fault+0x748/0x1b90
 handle_mm_fault+0x120/0x1f0
 __do_page_fault+0x240/0xd70
 do_page_fault+0x38/0xd0
 handle_page_fault+0x10/0x30

The warning happens on a failed __copy_from_user_inatomic(), which tries
to copy data into a CoW page.
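
For orientation, here is a condensed sketch of the pre-patch fallback in
cow_user_page() that fires this warning. It is kernel-internal code,
simplified for illustration: the fast path for sources that do have a
struct page and the software "accessed"-bit handling are elided, so refer
to the diff below for the real code.

/*
 * Condensed sketch (not the verbatim 4.19 source) of the fallback copy
 * in cow_user_page(): when the CoW source page is PFN-mapped and has no
 * struct page, the data is copied through the faulting user address.
 */
kaddr = kmap_atomic(dst);
uaddr = (void __user *)(addr & PAGE_MASK);

/*
 * __copy_from_user_inatomic() returns the number of bytes it could NOT
 * copy.  If the PTE behind uaddr was zapped in the meantime, the copy
 * fails, and before this patch the code only warned and zero-filled.
 */
if (__copy_from_user_inatomic(kaddr, uaddr, PAGE_SIZE)) {
	WARN_ON_ONCE(1);	/* the warning in the report above */
	clear_page(kaddr);
}

kunmap_atomic(kaddr);
flush_dcache_page(dst);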
This happens because of a race between MADV_DONTNEED and the CoW page
fault:

 CPU0                                CPU1
 handle_mm_fault()
   do_wp_page()
     wp_page_copy()
       do_wp_page()
                                     madvise(MADV_DONTNEED)
                                       zap_page_range()
                                         zap_pte_range()
                                           ptep_get_and_clear_full()

       __copy_from_user_inatomic()
       sees empty PTE and fails
       WARN_ON_ONCE(1)
       clear_page()

The solution is to re-try __copy_from_user_inatomic() under PTL after
checking that the PTE matches orig_pte. The second copy attempt can
still fail, for example due to a non-readable PTE, but there's nothing
reasonable we can do about it except clearing the CoW page.

Reported-by: Jeff Moyer
Signed-off-by: Andrew Morton
Signed-off-by: Kirill A. Shutemov
Tested-by: Jeff Moyer
Cc:
Cc: Justin He
Cc: Dan Williams
Link: http://lkml.kernel.org/r/20200218154151.13349-1-kirill.shutemov@linux.intel.com
Signed-off-by: Linus Torvalds
Signed-off-by: Sasha Levin
---
 mm/memory.c | 35 +++++++++++++++++++++++++++--------
 1 file changed, 27 insertions(+), 8 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index fcad8a0d943d3..eeae63bd95027 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2353,7 +2353,7 @@ static inline bool cow_user_page(struct page *dst, struct page *src,
 	bool ret;
 	void *kaddr;
 	void __user *uaddr;
-	bool force_mkyoung;
+	bool locked = false;
 	struct vm_area_struct *vma = vmf->vma;
 	struct mm_struct *mm = vma->vm_mm;
 	unsigned long addr = vmf->address;
@@ -2378,11 +2378,11 @@ static inline bool cow_user_page(struct page *dst, struct page *src,
 	 * On architectures with software "accessed" bits, we would
 	 * take a double page fault, so mark it accessed here.
 	 */
-	force_mkyoung = arch_faults_on_old_pte() && !pte_young(vmf->orig_pte);
-	if (force_mkyoung) {
+	if (arch_faults_on_old_pte() && !pte_young(vmf->orig_pte)) {
 		pte_t entry;
 
 		vmf->pte = pte_offset_map_lock(mm, vmf->pmd, addr, &vmf->ptl);
+		locked = true;
 		if (!likely(pte_same(*vmf->pte, vmf->orig_pte))) {
 			/*
 			 * Other thread has already handled the fault
@@ -2406,18 +2406,37 @@ static inline bool cow_user_page(struct page *dst, struct page *src,
 	 * zeroes.
 	 */
 	if (__copy_from_user_inatomic(kaddr, uaddr, PAGE_SIZE)) {
+		if (locked)
+			goto warn;
+
+		/* Re-validate under PTL if the page is still mapped */
+		vmf->pte = pte_offset_map_lock(mm, vmf->pmd, addr, &vmf->ptl);
+		locked = true;
+		if (!likely(pte_same(*vmf->pte, vmf->orig_pte))) {
+			/* The PTE changed under us. Retry page fault. */
+			ret = false;
+			goto pte_unlock;
+		}
+
 		/*
-		 * Give a warn in case there can be some obscure
-		 * use-case
+		 * The same page can be mapped back since the last copy attempt.
+		 * Try to copy again under PTL.
 		 */
-		WARN_ON_ONCE(1);
-		clear_page(kaddr);
+		if (__copy_from_user_inatomic(kaddr, uaddr, PAGE_SIZE)) {
+			/*
+			 * Give a warn in case there can be some obscure
+			 * use-case
+			 */
+warn:
+			WARN_ON_ONCE(1);
+			clear_page(kaddr);
+		}
 	}
 
 	ret = true;
 
 pte_unlock:
-	if (force_mkyoung)
+	if (locked)
 		pte_unmap_unlock(vmf->pte, vmf->ptl);
 	kunmap_atomic(kaddr);
 	flush_dcache_page(dst);
-- 
2.25.1
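
As a closing illustration, the sketch below shows the userspace access
pattern this race involves: one thread repeatedly read-then-write faults
a MAP_PRIVATE mapping of a DAX-backed file (so the CoW source is
PFN-mapped), while a second thread keeps zapping the range with
MADV_DONTNEED. The file path and file name are placeholders, and this is
only a hedged illustration of the pattern, not the xfstests reproducer;
whether the race window is actually hit depends on timing and the DAX
setup.

/*
 * cow-race.c: hedged illustration of the access pattern above; NOT the
 * original reproducer.  Assumes /mnt/dax is a DAX-mounted filesystem
 * (placeholder path) so that the CoW source is PFN-mapped.
 */
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define MAP_LEN	(1UL << 20)
#define PAGE_SZ	4096UL

static char *map;

static void *writer(void *arg)
{
	volatile char *p = (volatile char *)map;

	for (;;) {
		for (unsigned long off = 0; off < MAP_LEN; off += PAGE_SZ) {
			(void)p[off];	/* read: maps the DAX PFN read-only */
			p[off] = 1;	/* write: CoW fault -> wp_page_copy() */
		}
	}
	return NULL;
}

static void *zapper(void *arg)
{
	/* Concurrently zap the PTEs, racing with the CoW copy. */
	for (;;)
		madvise(map, MAP_LEN, MADV_DONTNEED);
	return NULL;
}

int main(void)
{
	pthread_t t1, t2;
	/* Placeholder path: must live on a filesystem mounted with -o dax. */
	int fd = open("/mnt/dax/cow-race-test", O_RDWR | O_CREAT, 0600);

	if (fd < 0 || ftruncate(fd, MAP_LEN) != 0) {
		perror("open/ftruncate");
		return 1;
	}

	map = mmap(NULL, MAP_LEN, PROT_READ | PROT_WRITE, MAP_PRIVATE, fd, 0);
	if (map == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	pthread_create(&t1, NULL, writer, NULL);
	pthread_create(&t2, NULL, zapper, NULL);
	pthread_join(t1, NULL);	/* loops until interrupted */
	return 0;
}

Build with "gcc -O2 -pthread cow-race.c". On an unpatched kernel the
WARN_ON_ONCE() shown above may eventually appear in dmesg; with this
patch applied, the copy is instead re-validated and retried under the
PTL, or the fault is retried if the PTE has changed.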