From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Mon, 28 Feb 2022 12:31:51 -0800
To: mm-commits@vger.kernel.org, zwisler@kernel.org, xiyuyang19@fudan.edu.cn,
 willy@infradead.org, viro@zeniv.linux.org.uk, shy828301@gmail.com,
 rcampbell@nvidia.com, kirill.shutemov@linux.intel.com, jack@suse.cz,
 hughd@google.com, hch@infradead.org, duanxiongchun@bytedance.com,
 dan.j.williams@intel.com, apopple@nvidia.com, songmuchun@bytedance.com,
 akpm@linux-foundation.org
From: Andrew Morton <akpm@linux-foundation.org>
Subject: + dax-fix-missing-writeprotect-the-pte-entry.patch added to -mm tree
Message-Id: <20220228203152.90EAFC340F1@smtp.kernel.org>
Precedence: bulk
Reply-To: linux-kernel@vger.kernel.org
X-Mailing-List: mm-commits@vger.kernel.org


The patch titled
     Subject: dax: fix missing writeprotect the pte entry
has been added to the -mm tree.  Its filename is
     dax-fix-missing-writeprotect-the-pte-entry.patch

This patch should soon appear at
    https://ozlabs.org/~akpm/mmots/broken-out/dax-fix-missing-writeprotect-the-pte-entry.patch
and later at
    https://ozlabs.org/~akpm/mmotm/broken-out/dax-fix-missing-writeprotect-the-pte-entry.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when
    testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Muchun Song <songmuchun@bytedance.com>
Subject: dax: fix missing writeprotect the pte entry

Currently dax_entry_mkclean() fails to clean and write protect the pte
entry within a DAX PMD entry during an *sync operation.  This can result
in data loss in the following sequence:

1) process A does an mmap write to a DAX PMD, dirtying the PMD radix
   tree entry and making the pmd entry dirty and writeable.
2) process B mmaps the same file with @offset (e.g. 4K) and @length
   (e.g. 4K) and writes to it, dirtying the PMD radix tree entry
   (already done in 1)) and making the pte entry dirty and writeable.
3) fsync, flushing out the PMD data and cleaning the radix tree entry.
   We currently fail to mark the pte entry as clean and write protected
   because the vma of process B is not covered in dax_entry_mkclean().
4) process B writes to the pte.  These writes don't cause any page
   faults since the pte entry is dirty and writeable.  The radix tree
   entry remains clean.
5) fsync, which fails to flush the dirty PMD data because the radix
   tree entry was clean.
6) crash - dirty data that should have been fsync'd as part of 5) could
   still have been in the processor cache, and is lost.

Use pfn_mkclean_range() to clean the PFNs and fix this issue.
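As an illustration only (not part of this patch), the sequence above can
be modelled by the userspace sketch below.  The mount point and file
name are hypothetical, the two processes are collapsed into two shared
mappings in one process for brevity, and error handling is omitted:

#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define PMD_LEN (2UL << 20)    /* 2M, so the file can be PMD-mapped */

int main(void)
{
        /* Assumes a DAX-capable file system mounted at /mnt/dax. */
        int fd = open("/mnt/dax/file", O_CREAT | O_RDWR, 0644);
        char *a, *b;

        ftruncate(fd, PMD_LEN);

        /* 1) "process A": dirty the whole PMD-sized extent. */
        a = mmap(NULL, PMD_LEN, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        memset(a, 1, PMD_LEN);

        /* 2) "process B": a second 4K shared mapping of the same file,
         *    dirtying a pte within the same radix tree entry. */
        b = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        b[0] = 1;

        /* 3) Cleans the radix tree entry, but (before this fix) leaves
         *    B's pte dirty and writeable. */
        fsync(fd);

        /* 4) No page fault is taken, so the entry stays clean. */
        b[0] = 2;

        /* 5) Sees a clean entry and flushes nothing; a crash now (6)
         *    can lose the second write. */
        fsync(fd);
        return 0;
}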
Link: https://lkml.kernel.org/r/20220228063536.24911-6-songmuchun@bytedance.com
Fixes: 4b4bb46d00b3 ("dax: clear dirty entry tags on cache flush")
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Jan Kara <jack@suse.cz>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Cc: Ross Zwisler <zwisler@kernel.org>
Cc: Xiongchun Duan <duanxiongchun@bytedance.com>
Cc: Xiyu Yang <xiyuyang19@fudan.edu.cn>
Cc: Yang Shi <shy828301@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 fs/dax.c |   83 ++++-------------------------------------------------
 1 file changed, 7 insertions(+), 76 deletions(-)

--- a/fs/dax.c~dax-fix-missing-writeprotect-the-pte-entry
+++ a/fs/dax.c
@@ -24,6 +24,7 @@
 #include <linux/sizes.h>
 #include <linux/mmu_notifier.h>
 #include <linux/iomap.h>
+#include <linux/rmap.h>
 #include <asm/pgalloc.h>
 
 #define CREATE_TRACE_POINTS
@@ -789,87 +790,17 @@ static void *dax_insert_entry(struct xa_
 	return entry;
 }
 
-static inline
-unsigned long pgoff_address(pgoff_t pgoff, struct vm_area_struct *vma)
-{
-	unsigned long address;
-
-	address = vma->vm_start + ((pgoff - vma->vm_pgoff) << PAGE_SHIFT);
-	VM_BUG_ON_VMA(address < vma->vm_start || address >= vma->vm_end, vma);
-	return address;
-}
-
 /* Walk all mappings of a given index of a file and writeprotect them */
-static void dax_entry_mkclean(struct address_space *mapping, pgoff_t index,
-		unsigned long pfn)
+static void dax_entry_mkclean(struct address_space *mapping, unsigned long pfn,
+		unsigned long npfn, pgoff_t start)
 {
 	struct vm_area_struct *vma;
-	pte_t pte, *ptep = NULL;
-	pmd_t *pmdp = NULL;
-	spinlock_t *ptl;
+	pgoff_t end = start + npfn - 1;
 
 	i_mmap_lock_read(mapping);
-	vma_interval_tree_foreach(vma, &mapping->i_mmap, index, index) {
-		struct mmu_notifier_range range;
-		unsigned long address;
-
+	vma_interval_tree_foreach(vma, &mapping->i_mmap, start, end) {
+		pfn_mkclean_range(pfn, npfn, start, vma);
 		cond_resched();
-
-		if (!(vma->vm_flags & VM_SHARED))
-			continue;
-
-		address = pgoff_address(index, vma);
-
-		/*
-		 * follow_invalidate_pte() will use the range to call
-		 * mmu_notifier_invalidate_range_start() on our behalf before
-		 * taking any lock.
-		 */
-		if (follow_invalidate_pte(vma->vm_mm, address, &range, &ptep,
-					  &pmdp, &ptl))
-			continue;
-
-		/*
-		 * No need to call mmu_notifier_invalidate_range() as we are
-		 * downgrading page table protection not changing it to point
-		 * to a new page.
-		 *
-		 * See Documentation/vm/mmu_notifier.rst
-		 */
-		if (pmdp) {
-#ifdef CONFIG_FS_DAX_PMD
-			pmd_t pmd;
-
-			if (pfn != pmd_pfn(*pmdp))
-				goto unlock_pmd;
-			if (!pmd_dirty(*pmdp) && !pmd_write(*pmdp))
-				goto unlock_pmd;
-
-			flush_cache_range(vma, address,
-					  address + HPAGE_PMD_SIZE);
-			pmd = pmdp_invalidate(vma, address, pmdp);
-			pmd = pmd_wrprotect(pmd);
-			pmd = pmd_mkclean(pmd);
-			set_pmd_at(vma->vm_mm, address, pmdp, pmd);
-unlock_pmd:
-#endif
-			spin_unlock(ptl);
-		} else {
-			if (pfn != pte_pfn(*ptep))
-				goto unlock_pte;
-			if (!pte_dirty(*ptep) && !pte_write(*ptep))
-				goto unlock_pte;
-
-			flush_cache_page(vma, address, pfn);
-			pte = ptep_clear_flush(vma, address, ptep);
-			pte = pte_wrprotect(pte);
-			pte = pte_mkclean(pte);
-			set_pte_at(vma->vm_mm, address, ptep, pte);
-unlock_pte:
-			pte_unmap_unlock(ptep, ptl);
-		}
-
-		mmu_notifier_invalidate_range_end(&range);
 	}
 	i_mmap_unlock_read(mapping);
 }
@@ -937,7 +868,7 @@ static int dax_writeback_one(struct xa_s
 	count = 1UL << dax_entry_order(entry);
 	index = xas->xa_index & ~(count - 1);
 
-	dax_entry_mkclean(mapping, index, pfn);
+	dax_entry_mkclean(mapping, pfn, count, index);
 	dax_flush(dax_dev, page_address(pfn_to_page(pfn)), count * PAGE_SIZE);
 	/*
 	 * After we have flushed the cache, we can clear the dirty tag.  There
_
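(Not part of the patch: pfn_mkclean_range() comes from
mm-rmap-introduce-pfn_mkclean_range-to-cleans-ptes.patch earlier in this
series.  For each pte it finds while walking the vma, it performs
roughly the work of the pte branch removed above; a simplified sketch,
omitting the pmd case, the page table locking and the mmu notifier calls
that the real helper handles:)

/* Simplified per-pte sketch, mirroring the removed code above. */
static void mkclean_one_pte(struct vm_area_struct *vma,
			    unsigned long address, pte_t *ptep)
{
	pte_t pte;

	/* Nothing to do if already clean and write protected. */
	if (!pte_dirty(*ptep) && !pte_write(*ptep))
		return;

	flush_cache_page(vma, address, pte_pfn(*ptep));
	pte = ptep_clear_flush(vma, address, ptep);
	pte = pte_wrprotect(pte);	/* the next write faults ... */
	pte = pte_mkclean(pte);		/* ... and re-dirties the entry */
	set_pte_at(vma->vm_mm, address, ptep, pte);
}

The key difference from the removed dax_entry_mkclean() body is that the
walk is now driven by the whole pfn range rather than a single page
offset, so pte mappings of a PMD-sized entry, like process B's in the
changelog above, are no longer skipped.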
Patches currently in -mm which might be from songmuchun@bytedance.com are

mm-list_lru-transpose-the-array-of-per-node-per-memcg-lru-lists.patch
mm-introduce-kmem_cache_alloc_lru.patch
fs-introduce-alloc_inode_sb-to-allocate-filesystems-specific-inode.patch
fs-allocate-inode-by-using-alloc_inode_sb.patch
f2fs-allocate-inode-by-using-alloc_inode_sb.patch
mm-dcache-use-kmem_cache_alloc_lru-to-allocate-dentry.patch
xarray-use-kmem_cache_alloc_lru-to-allocate-xa_node.patch
mm-memcontrol-move-memcg_online_kmem-to-mem_cgroup_css_online.patch
mm-list_lru-allocate-list_lru_one-only-when-needed.patch
mm-list_lru-rename-memcg_drain_all_list_lrus-to-memcg_reparent_list_lrus.patch
mm-list_lru-replace-linear-array-with-xarray.patch
mm-memcontrol-reuse-memory-cgroup-id-for-kmem-id.patch
mm-memcontrol-fix-cannot-alloc-the-maximum-memcg-id.patch
mm-list_lru-rename-list_lru_per_memcg-to-list_lru_memcg.patch
mm-memcontrol-rename-memcg_cache_id-to-memcg_kmem_id.patch
mm-thp-fix-wrong-cache-flush-in-remove_migration_pmd.patch
mm-fix-missing-cache-flush-for-all-tail-pages-of-compound-page.patch
mm-hugetlb-fix-missing-cache-flush-in-copy_huge_page_from_user.patch
mm-hugetlb-fix-missing-cache-flush-in-hugetlb_mcopy_atomic_pte.patch
mm-shmem-fix-missing-cache-flush-in-shmem_mfill_atomic_pte.patch
mm-userfaultfd-fix-missing-cache-flush-in-mcopy_atomic_pte-and-__mcopy_atomic.patch
mm-replace-multiple-dcache-flush-with-flush_dcache_folio.patch
mm-hugetlb-free-the-2nd-vmemmap-page-associated-with-each-hugetlb-page.patch
mm-hugetlb-replace-hugetlb_free_vmemmap_enabled-with-a-static_key.patch
mm-sparsemem-use-page-table-lock-to-protect-kernel-pmd-operations.patch
selftests-vm-add-a-hugetlb-test-case.patch
mm-sparsemem-move-vmemmap-related-to-hugetlb-to-config_hugetlb_page_free_vmemmap.patch
mm-rmap-fix-cache-flush-on-thp-pages.patch
dax-fix-cache-flush-on-pmd-mapped-pages.patch
mm-rmap-introduce-pfn_mkclean_range-to-cleans-ptes.patch
mm-pvmw-add-support-for-walking-devmap-pages.patch
dax-fix-missing-writeprotect-the-pte-entry.patch
mm-remove-range-parameter-from-follow_invalidate_pte.patch