linux-ext4.vger.kernel.org archive mirror
* [PATCH 0/12 v4] fs: Hole punch vs page cache filling races
@ 2021-04-23 17:29 Jan Kara
  2021-04-23 17:29 ` [PATCH 02/12] mm: Protect operations adding pages to page cache with invalidate_lock Jan Kara
                   ` (3 more replies)
  0 siblings, 4 replies; 10+ messages in thread
From: Jan Kara @ 2021-04-23 17:29 UTC (permalink / raw)
  To: linux-fsdevel
  Cc: Christoph Hellwig, Amir Goldstein, Dave Chinner, Ted Tso,
	Jan Kara, ceph-devel, Chao Yu, Damien Le Moal, Darrick J. Wong,
	Hugh Dickins, Jaegeuk Kim, Jeff Layton, Johannes Thumshirn,
	linux-cifs, linux-ext4, linux-f2fs-devel, linux-mm, linux-xfs,
	Miklos Szeredi, Steve French

Hello,

here is another version of my patches to address races between hole punching
and page cache filling functions for ext4 and other filesystems. I think
we are coming close to a complete solution so I've removed the RFC tag from
the subject. I went through all filesystems supporting hole punching and
converted them from their private locks to a generic one (usually fixing the
race ext4 had as a side effect). I also found out ceph & cifs didn't have
any protection from the hole punch vs page fault race either so I've added
appropriate protections there. Still open are the GFS2 and OCFS2 filesystems.
GFS2 actually avoids the race but is prone to deadlocks (it acquires the same
lock both above and below mmap_sem); OCFS2 locking seems kind of hosed and some
read, write, and hole punch paths are not properly serialized, possibly leading
to fs corruption. Both issues are non-trivial, so the respective fs maintainers
have to deal with them (I've informed them and the problems were generally
confirmed). Anyway, for all the other filesystems this kind of race should
be closed.

As a next step, I'd like to actually make sure all calls to
truncate_inode_pages() happen under mapping->invalidate_lock, add the assert
and then we can also get rid of i_size checks in some places (truncate can
use the same serialization scheme as hole punch). But that step is mostly
a cleanup so I'd like to get these functional fixes in first.
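
Just to illustrate the idea (a sketch only, not part of this series, and
assuming the lock would be held exclusively in the truncate path), the
assertion could look roughly like:

void truncate_inode_pages(struct address_space *mapping, loff_t lstart)
{
        /* sketch: every caller would have to hold invalidate_lock here */
        lockdep_assert_held_write(&mapping->invalidate_lock);
        truncate_inode_pages_range(mapping, lstart, (loff_t)-1);
}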

Changes since v3:
* Renamed and moved lock to struct address_space
* Added conversions of tmpfs, ceph, cifs, fuse, f2fs
* Fixed error handling path in filemap_read()
* Removed .page_mkwrite() cleanup from the series for now

Changes since v2:
* Added documentation and comments regarding lock ordering and how the lock is
  supposed to be used
* Added conversions of ext2, xfs, zonefs
* Added patch removing i_mapping_sem protection from .page_mkwrite handlers

Changes since v1:
* Moved to using inode->i_mapping_sem instead of aops handler to acquire
  appropriate lock

---
Motivation:

Amir has reported [1] that ext4 has a potential issue when reads race
with hole punching, possibly exposing stale data from freed blocks or even
corrupting the filesystem when stale mapping data gets used for writeout. The
problem is that during hole punching, new page cache pages can get instantiated
and their block mapping looked up in the punched range after
truncate_inode_pages() has run but before the filesystem removes blocks from
the file. In principle any filesystem implementing hole punching thus needs
a mechanism to block instantiation of page cache pages during hole
punching to avoid this race. This is further complicated by the fact that there
are multiple places that can instantiate pages in the page cache: a
regular read(2) or page fault can do this, but fadvise(2) or madvise(2) can also
result in page cache pages being read in through force_page_cache_readahead().

There are a couple of ways to fix this. The first way (currently implemented by
XFS) is to protect read(2) and *advise(2) calls with i_rwsem so that they are
serialized with hole punching. This is easy to do but as a result all reads
would then be serialized with writes and thus mixed read-write workloads suffer
heavily on ext4. Thus this series introduces mapping->invalidate_lock (called
inode->i_mapping_sem in previous versions) and uses it when creating new pages
in the page cache and looking up their corresponding block mapping. We also
replace EXT4_I(inode)->i_mmap_sem with this new rwsem, which provides the
necessary serialization with hole punching for ext4.
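
Schematically, the resulting serialization (a sketch only, error handling
omitted, using the mapping->invalidate_lock name the lock ends up with in this
version of the series) looks like:

        /* page cache filling paths - filemap_fault(), filemap_read(), readahead */
        down_read(&mapping->invalidate_lock);
        /* ... instantiate pages and look up their block mapping ... */
        up_read(&mapping->invalidate_lock);

        /* hole punch in the filesystem */
        down_write(&mapping->invalidate_lock);
        truncate_inode_pages_range(mapping, start, end);
        /* ... update the block mapping and free the blocks ... */
        up_write(&mapping->invalidate_lock);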

								Honza

[1] https://lore.kernel.org/linux-fsdevel/CAOQ4uxjQNmxqmtA_VbYW0Su9rKRk2zobJmahcyeaEVOFKVQ5dw@mail.gmail.com/

Previous versions:
Link: https://lore.kernel.org/linux-fsdevel/20210208163918.7871-1-jack@suse.cz/
Link: http://lore.kernel.org/r/20210413105205.3093-1-jack@suse.cz

CC: ceph-devel@vger.kernel.org
CC: Chao Yu <yuchao0@huawei.com>
CC: Damien Le Moal <damien.lemoal@wdc.com>
CC: "Darrick J. Wong" <darrick.wong@oracle.com>
CC: Hugh Dickins <hughd@google.com>
CC: Jaegeuk Kim <jaegeuk@kernel.org>
CC: Jeff Layton <jlayton@kernel.org>
CC: Johannes Thumshirn <jth@kernel.org>
CC: linux-cifs@vger.kernel.org
CC: <linux-ext4@vger.kernel.org>
CC: linux-f2fs-devel@lists.sourceforge.net
CC: <linux-fsdevel@vger.kernel.org>
CC: <linux-mm@kvack.org>
CC: <linux-xfs@vger.kernel.org>
CC: Miklos Szeredi <miklos@szeredi.hu>
CC: Steve French <sfrench@samba.org>
CC: Ted Tso <tytso@mit.edu>

^ permalink raw reply	[flat|nested] 10+ messages in thread

* [PATCH 02/12] mm: Protect operations adding pages to page cache with invalidate_lock
  2021-04-23 17:29 [PATCH 0/12 v4] fs: Hole punch vs page cache filling races Jan Kara
@ 2021-04-23 17:29 ` Jan Kara
  2021-04-23 18:30   ` Matthew Wilcox
  2021-04-23 23:04   ` Dave Chinner
  2021-04-23 17:29 ` [PATCH 03/12] ext4: Convert to use mapping->invalidate_lock Jan Kara
                   ` (2 subsequent siblings)
  3 siblings, 2 replies; 10+ messages in thread
From: Jan Kara @ 2021-04-23 17:29 UTC (permalink / raw)
  To: linux-fsdevel
  Cc: Christoph Hellwig, Amir Goldstein, Dave Chinner, Ted Tso,
	Jan Kara, ceph-devel, Chao Yu, Damien Le Moal, Darrick J. Wong,
	Hugh Dickins, Jaegeuk Kim, Jeff Layton, Johannes Thumshirn,
	linux-cifs, linux-ext4, linux-f2fs-devel, linux-mm, linux-xfs,
	Miklos Szeredi, Steve French

Currently, serializing operations such as page fault, read, or readahead
against hole punching is rather difficult. The basic race scheme is
like:

fallocate(FALLOC_FL_PUNCH_HOLE)			read / fault / ..
  truncate_inode_pages_range()
						  <create pages in page
						   cache here>
  <update fs block mapping and free blocks>

Now the problem is in this way read / page fault / readahead can
instantiate pages in page cache with potentially stale data (if blocks
get quickly reused). Avoiding this race is not simple - page locks do
not work because we want to make sure there are *no* pages in given
range. inode->i_rwsem does not work because page fault happens under
mmap_sem which ranks below inode->i_rwsem. Also using it for reads makes
the performance for mixed read-write workloads suffer.

So create a new rw_semaphore in the address_space - invalidate_lock -
that protects adding of pages to page cache for page faults / reads /
readahead.
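
For illustration, a minimal sketch of how a filesystem's hole punch path is
expected to use the new lock (the actual per-filesystem conversions follow in
later patches of this series; foofs_remove_blocks() is a made-up helper):

static long foofs_punch_hole(struct inode *inode, loff_t offset, loff_t len)
{
        struct address_space *mapping = inode->i_mapping;
        long ret;

        inode_lock(inode);
        /* block page faults / reads / readahead from recreating pages */
        down_write(&mapping->invalidate_lock);
        truncate_inode_pages_range(mapping, offset, offset + len - 1);
        /* update the fs block mapping and free blocks (hypothetical helper) */
        ret = foofs_remove_blocks(inode, offset, len);
        up_write(&mapping->invalidate_lock);
        inode_unlock(inode);
        return ret;
}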

Signed-off-by: Jan Kara <jack@suse.cz>
CC: ceph-devel@vger.kernel.org
CC: Chao Yu <yuchao0@huawei.com>
CC: Damien Le Moal <damien.lemoal@wdc.com>
CC: "Darrick J. Wong" <darrick.wong@oracle.com>
CC: Hugh Dickins <hughd@google.com>
CC: Jaegeuk Kim <jaegeuk@kernel.org>
CC: Jeff Layton <jlayton@kernel.org>
CC: Johannes Thumshirn <jth@kernel.org>
CC: linux-cifs@vger.kernel.org
CC: <linux-ext4@vger.kernel.org>
CC: linux-f2fs-devel@lists.sourceforge.net
CC: <linux-fsdevel@vger.kernel.org>
CC: <linux-mm@kvack.org>
CC: <linux-xfs@vger.kernel.org>
CC: Miklos Szeredi <miklos@szeredi.hu>
CC: Steve French <sfrench@samba.org>
CC: Ted Tso <tytso@mit.edu>
---
 Documentation/filesystems/locking.rst | 39 ++++++++++++-----
 fs/inode.c                            |  3 ++
 include/linux/fs.h                    |  4 ++
 mm/filemap.c                          | 61 +++++++++++++++++++++------
 mm/readahead.c                        |  2 +
 mm/rmap.c                             | 37 ++++++++--------
 mm/truncate.c                         |  2 +-
 7 files changed, 106 insertions(+), 42 deletions(-)

diff --git a/Documentation/filesystems/locking.rst b/Documentation/filesystems/locking.rst
index b7dcc86c92a4..7cbf72862832 100644
--- a/Documentation/filesystems/locking.rst
+++ b/Documentation/filesystems/locking.rst
@@ -266,19 +266,19 @@ prototypes::
 locking rules:
 	All except set_page_dirty and freepage may block
 
-======================	======================== =========
-ops			PageLocked(page)	 i_rwsem
-======================	======================== =========
+======================	======================== =========	===============
+ops			PageLocked(page)	 i_rwsem	invalidate_lock
+======================	======================== =========	===============
 writepage:		yes, unlocks (see below)
-readpage:		yes, unlocks
+readpage:		yes, unlocks				shared
 writepages:
 set_page_dirty		no
-readahead:		yes, unlocks
-readpages:		no
+readahead:		yes, unlocks				shared
+readpages:		no					shared
 write_begin:		locks the page		 exclusive
 write_end:		yes, unlocks		 exclusive
 bmap:
-invalidatepage:		yes
+invalidatepage:		yes					exclusive
 releasepage:		yes
 freepage:		yes
 direct_IO:
@@ -373,7 +373,10 @@ keep it that way and don't breed new callers.
 ->invalidatepage() is called when the filesystem must attempt to drop
 some or all of the buffers from the page when it is being truncated. It
 returns zero on success. If ->invalidatepage is zero, the kernel uses
-block_invalidatepage() instead.
+block_invalidatepage() instead. The filesystem should exclusively acquire
+invalidate_lock before invalidating page cache in the truncate / hole punch
+path (and thus calling into ->invalidatepage) to block races between page
+cache invalidation and page cache filling functions (fault, read, ...).
 
 ->releasepage() is called when the kernel is about to try to drop the
 buffers from the page in preparation for freeing it.  It returns zero to
@@ -567,6 +570,20 @@ in sys_read() and friends.
 the lease within the individual filesystem to record the result of the
 operation
 
+->fallocate implementation must be really careful to maintain page cache
+consistency when punching holes or performing other operations that invalidate
+page cache contents. Usually the filesystem needs to call
+truncate_inode_pages_range() to invalidate relevant range of the page cache.
+However the filesystem usually also needs to update its internal (and on disk)
+view of file offset -> disk block mapping. Until this update is finished, the
+filesystem needs to block page faults and reads from reloading now-stale page
+cache contents from the disk. VFS provides mapping->invalidate_lock for this
+and acquires it in shared mode in paths loading pages from disk
+(filemap_fault(), filemap_read(), readahead paths). The filesystem is
+responsible for taking this lock in its fallocate implementation and generally
+whenever the page cache contents need to be invalidated because a block is
+moving from under a page.
+
 dquot_operations
 ================
 
@@ -628,9 +645,9 @@ access:		yes
 to be faulted in. The filesystem must find and return the page associated
 with the passed in "pgoff" in the vm_fault structure. If it is possible that
 the page may be truncated and/or invalidated, then the filesystem must lock
-the page, then ensure it is not already truncated (the page lock will block
-subsequent truncate), and then return with VM_FAULT_LOCKED, and the page
-locked. The VM will unlock the page.
+invalidate_lock, then ensure the page is not already truncated (invalidate_lock
+will block subsequent truncate), and then return with VM_FAULT_LOCKED, and the
+page locked. The VM will unlock the page.
 
 ->map_pages() is called when VM asks to map easy accessible pages.
 Filesystem should find and map pages associated with offsets from "start_pgoff"
diff --git a/fs/inode.c b/fs/inode.c
index a047ab306f9a..43596dd8b61e 100644
--- a/fs/inode.c
+++ b/fs/inode.c
@@ -191,6 +191,9 @@ int inode_init_always(struct super_block *sb, struct inode *inode)
 	mapping_set_gfp_mask(mapping, GFP_HIGHUSER_MOVABLE);
 	mapping->private_data = NULL;
 	mapping->writeback_index = 0;
+	init_rwsem(&mapping->invalidate_lock);
+	lockdep_set_class(&mapping->invalidate_lock,
+			  &sb->s_type->invalidate_lock_key);
 	inode->i_private = NULL;
 	inode->i_mapping = mapping;
 	INIT_HLIST_HEAD(&inode->i_dentry);	/* buggered by rcu freeing */
diff --git a/include/linux/fs.h b/include/linux/fs.h
index ec8f3ddf4a6a..3fca7bf2d0fb 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -435,6 +435,8 @@ int pagecache_write_end(struct file *, struct address_space *mapping,
  * struct address_space - Contents of a cacheable, mappable object.
  * @host: Owner, either the inode or the block_device.
  * @i_pages: Cached pages.
+ * @invalidate_lock: Guards coherency between page cache contents and
+ *   file offset->disk block mappings in the filesystem during invalidates
  * @gfp_mask: Memory allocation flags to use for allocating pages.
  * @i_mmap_writable: Number of VM_SHARED mappings.
  * @nr_thps: Number of THPs in the pagecache (non-shmem only).
@@ -453,6 +455,7 @@ int pagecache_write_end(struct file *, struct address_space *mapping,
 struct address_space {
 	struct inode		*host;
 	struct xarray		i_pages;
+	struct rw_semaphore	invalidate_lock;
 	gfp_t			gfp_mask;
 	atomic_t		i_mmap_writable;
 #ifdef CONFIG_READ_ONLY_THP_FOR_FS
@@ -2351,6 +2354,7 @@ struct file_system_type {
 
 	struct lock_class_key i_lock_key;
 	struct lock_class_key i_mutex_key;
+	struct lock_class_key invalidate_lock_key;
 	struct lock_class_key i_mutex_dir_key;
 };
 
diff --git a/mm/filemap.c b/mm/filemap.c
index bd7c50e060a9..9ea8dfb0609c 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -77,7 +77,8 @@
  *        ->i_pages lock
  *
  *  ->i_rwsem
- *    ->i_mmap_rwsem		(truncate->unmap_mapping_range)
+ *    ->invalidate_lock		(acquired by fs in truncate path)
+ *      ->i_mmap_rwsem		(truncate->unmap_mapping_range)
  *
  *  ->mmap_lock
  *    ->i_mmap_rwsem
@@ -85,7 +86,8 @@
  *        ->i_pages lock	(arch-dependent flush_dcache_mmap_lock)
  *
  *  ->mmap_lock
- *    ->lock_page		(access_process_vm)
+ *    ->invalidate_lock		(filemap_fault)
+ *      ->lock_page		(filemap_fault, access_process_vm)
  *
  *  ->i_rwsem			(generic_perform_write)
  *    ->mmap_lock		(fault_in_pages_readable->do_page_fault)
@@ -2276,20 +2278,30 @@ static int filemap_update_page(struct kiocb *iocb,
 {
 	int error;
 
+	if (iocb->ki_flags & IOCB_NOWAIT) {
+		if (!down_read_trylock(&mapping->invalidate_lock))
+			return -EAGAIN;
+	} else {
+		down_read(&mapping->invalidate_lock);
+	}
+
 	if (!trylock_page(page)) {
+		error = -EAGAIN;
 		if (iocb->ki_flags & (IOCB_NOWAIT | IOCB_NOIO))
-			return -EAGAIN;
+			goto unlock_mapping;
 		if (!(iocb->ki_flags & IOCB_WAITQ)) {
+			up_read(&mapping->invalidate_lock);
 			put_and_wait_on_page_locked(page, TASK_KILLABLE);
 			return AOP_TRUNCATED_PAGE;
 		}
 		error = __lock_page_async(page, iocb->ki_waitq);
 		if (error)
-			return error;
+			goto unlock_mapping;
 	}
 
+	error = AOP_TRUNCATED_PAGE;
 	if (!page->mapping)
-		goto truncated;
+		goto unlock;
 
 	error = 0;
 	if (filemap_range_uptodate(mapping, iocb->ki_pos, iter, page))
@@ -2300,15 +2312,13 @@ static int filemap_update_page(struct kiocb *iocb,
 		goto unlock;
 
 	error = filemap_read_page(iocb->ki_filp, mapping, page);
-	if (error == AOP_TRUNCATED_PAGE)
-		put_page(page);
-	return error;
-truncated:
-	unlock_page(page);
-	put_page(page);
-	return AOP_TRUNCATED_PAGE;
+	goto unlock_mapping;
 unlock:
 	unlock_page(page);
+unlock_mapping:
+	up_read(&mapping->invalidate_lock);
+	if (error == AOP_TRUNCATED_PAGE)
+		put_page(page);
 	return error;
 }
 
@@ -2323,6 +2333,19 @@ static int filemap_create_page(struct file *file,
 	if (!page)
 		return -ENOMEM;
 
+	/*
+	 * Protect against truncate / hole punch. Grabbing invalidate_lock here
+	 * assures we cannot instantiate and bring uptodate new pagecache pages
+	 * after evicting page cache during truncate and before actually
+	 * freeing blocks.  Note that we could release invalidate_lock after
+	 * inserting the page into page cache as the locked page would then be
+	 * enough to synchronize with hole punching. But there are code paths
+	 * such as filemap_update_page() filling in partially uptodate pages or
+	 * ->readpages() that need to hold invalidate_lock while mapping blocks
+	 * for IO so let's hold the lock here as well to keep locking rules
+	 * simple.
+	 */
+	down_read(&mapping->invalidate_lock);
 	error = add_to_page_cache_lru(page, mapping, index,
 			mapping_gfp_constraint(mapping, GFP_KERNEL));
 	if (error == -EEXIST)
@@ -2334,9 +2357,11 @@ static int filemap_create_page(struct file *file,
 	if (error)
 		goto error;
 
+	up_read(&mapping->invalidate_lock);
 	pagevec_add(pvec, page);
 	return 0;
 error:
+	up_read(&mapping->invalidate_lock);
 	put_page(page);
 	return error;
 }
@@ -2896,6 +2921,13 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
 		count_memcg_event_mm(vmf->vma->vm_mm, PGMAJFAULT);
 		ret = VM_FAULT_MAJOR;
 		fpin = do_sync_mmap_readahead(vmf);
+	}
+
+	/*
+	 * See comment in filemap_create_page() why we need invalidate_lock
+	 */
+	down_read(&mapping->invalidate_lock);
+	if (!page) {
 retry_find:
 		page = pagecache_get_page(mapping, offset,
 					  FGP_CREAT|FGP_FOR_MMAP,
@@ -2903,6 +2935,7 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
 		if (!page) {
 			if (fpin)
 				goto out_retry;
+			up_read(&mapping->invalidate_lock);
 			return VM_FAULT_OOM;
 		}
 	}
@@ -2943,9 +2976,11 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
 	if (unlikely(offset >= max_off)) {
 		unlock_page(page);
 		put_page(page);
+		up_read(&mapping->invalidate_lock);
 		return VM_FAULT_SIGBUS;
 	}
 
+	up_read(&mapping->invalidate_lock);
 	vmf->page = page;
 	return ret | VM_FAULT_LOCKED;
 
@@ -2971,6 +3006,7 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
 	if (!error || error == AOP_TRUNCATED_PAGE)
 		goto retry_find;
 
+	up_read(&mapping->invalidate_lock);
 	shrink_readahead_size_eio(ra);
 	return VM_FAULT_SIGBUS;
 
@@ -2982,6 +3018,7 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
 	 */
 	if (page)
 		put_page(page);
+	up_read(&mapping->invalidate_lock);
 	if (fpin)
 		fput(fpin);
 	return ret | VM_FAULT_RETRY;
diff --git a/mm/readahead.c b/mm/readahead.c
index c5b0457415be..37dd07b32c67 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -192,6 +192,7 @@ void page_cache_ra_unbounded(struct readahead_control *ractl,
 	 */
 	unsigned int nofs = memalloc_nofs_save();
 
+	down_read(&mapping->invalidate_lock);
 	/*
 	 * Preallocate as many pages as we will need.
 	 */
@@ -236,6 +237,7 @@ void page_cache_ra_unbounded(struct readahead_control *ractl,
 	 * will then handle the error.
 	 */
 	read_pages(ractl, &page_pool, false);
+	up_read(&mapping->invalidate_lock);
 	memalloc_nofs_restore(nofs);
 }
 EXPORT_SYMBOL_GPL(page_cache_ra_unbounded);
diff --git a/mm/rmap.c b/mm/rmap.c
index dba8cb8a5578..e4f769a4dcc8 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -22,24 +22,25 @@
  *
  * inode->i_rwsem	(while writing or truncating, not reading or faulting)
  *   mm->mmap_lock
- *     page->flags PG_locked (lock_page)   * (see hugetlbfs below)
- *       hugetlbfs_i_mmap_rwsem_key (in huge_pmd_share)
- *         mapping->i_mmap_rwsem
- *           hugetlb_fault_mutex (hugetlbfs specific page fault mutex)
- *           anon_vma->rwsem
- *             mm->page_table_lock or pte_lock
- *               swap_lock (in swap_duplicate, swap_info_get)
- *                 mmlist_lock (in mmput, drain_mmlist and others)
- *                 mapping->private_lock (in __set_page_dirty_buffers)
- *                   lock_page_memcg move_lock (in __set_page_dirty_buffers)
- *                     i_pages lock (widely used)
- *                       lruvec->lru_lock (in lock_page_lruvec_irq)
- *                 inode->i_lock (in set_page_dirty's __mark_inode_dirty)
- *                 bdi.wb->list_lock (in set_page_dirty's __mark_inode_dirty)
- *                   sb_lock (within inode_lock in fs/fs-writeback.c)
- *                   i_pages lock (widely used, in set_page_dirty,
- *                             in arch-dependent flush_dcache_mmap_lock,
- *                             within bdi.wb->list_lock in __sync_single_inode)
+ *     mapping->invalidate_lock (in filemap_fault)
+ *       page->flags PG_locked (lock_page)   * (see hugetlbfs below)
+ *         hugetlbfs_i_mmap_rwsem_key (in huge_pmd_share)
+ *           mapping->i_mmap_rwsem
+ *             hugetlb_fault_mutex (hugetlbfs specific page fault mutex)
+ *             anon_vma->rwsem
+ *               mm->page_table_lock or pte_lock
+ *                 swap_lock (in swap_duplicate, swap_info_get)
+ *                   mmlist_lock (in mmput, drain_mmlist and others)
+ *                   mapping->private_lock (in __set_page_dirty_buffers)
+ *                     lock_page_memcg move_lock (in __set_page_dirty_buffers)
+ *                       i_pages lock (widely used)
+ *                         lruvec->lru_lock (in lock_page_lruvec_irq)
+ *                   inode->i_lock (in set_page_dirty's __mark_inode_dirty)
+ *                   bdi.wb->list_lock (in set_page_dirty's __mark_inode_dirty)
+ *                     sb_lock (within inode_lock in fs/fs-writeback.c)
+ *                     i_pages lock (widely used, in set_page_dirty,
+ *                               in arch-dependent flush_dcache_mmap_lock,
+ *                               within bdi.wb->list_lock in __sync_single_inode)
  *
  * anon_vma->rwsem,mapping->i_mmap_rwsem   (memory_failure, collect_procs_anon)
  *   ->tasklist_lock
diff --git a/mm/truncate.c b/mm/truncate.c
index 2cf71d8c3c62..464ad70a081f 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -416,7 +416,7 @@ EXPORT_SYMBOL(truncate_inode_pages_range);
  * @mapping: mapping to truncate
  * @lstart: offset from which to truncate
  *
- * Called under (and serialised by) inode->i_rwsem.
+ * Called under (and serialised by) inode->i_rwsem and mapping->invalidate_lock.
  *
  * Note: When this function returns, there can be a page in the process of
  * deletion (inside __delete_from_page_cache()) in the specified range.  Thus
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [PATCH 03/12] ext4: Convert to use mapping->invalidate_lock
  2021-04-23 17:29 [PATCH 0/12 v4] fs: Hole punch vs page cache filling races Jan Kara
  2021-04-23 17:29 ` [PATCH 02/12] mm: Protect operations adding pages to page cache with invalidate_lock Jan Kara
@ 2021-04-23 17:29 ` Jan Kara
  2021-04-23 17:29 ` [PATCH 04/12] ext2: Convert to using invalidate_lock Jan Kara
  2021-04-23 22:07 ` [PATCH 0/12 v4] fs: Hole punch vs page cache filling races Dave Chinner
  3 siblings, 0 replies; 10+ messages in thread
From: Jan Kara @ 2021-04-23 17:29 UTC (permalink / raw)
  To: linux-fsdevel
  Cc: Christoph Hellwig, Amir Goldstein, Dave Chinner, Ted Tso,
	Jan Kara, linux-ext4

Convert ext4 to use mapping->invalidate_lock instead of its private
EXT4_I(inode)->i_mmap_sem. This is mostly search-and-replace. By this
conversion we fix a long standing race between hole punching and read(2)
/ readahead(2) paths that can lead to stale page cache contents.

CC: <linux-ext4@vger.kernel.org>
CC: Ted Tso <tytso@mit.edu>
Signed-off-by: Jan Kara <jack@suse.cz>
---
 fs/ext4/ext4.h     | 10 ----------
 fs/ext4/extents.c  | 25 +++++++++++++-----------
 fs/ext4/file.c     | 13 +++++++------
 fs/ext4/inode.c    | 47 +++++++++++++++++-----------------------------
 fs/ext4/ioctl.c    |  4 ++--
 fs/ext4/super.c    | 13 +++++--------
 fs/ext4/truncate.h |  8 +++++---
 7 files changed, 50 insertions(+), 70 deletions(-)

diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
index 826a56e3bbd2..2ae365458dca 100644
--- a/fs/ext4/ext4.h
+++ b/fs/ext4/ext4.h
@@ -1081,15 +1081,6 @@ struct ext4_inode_info {
 	 * by other means, so we have i_data_sem.
 	 */
 	struct rw_semaphore i_data_sem;
-	/*
-	 * i_mmap_sem is for serializing page faults with truncate / punch hole
-	 * operations. We have to make sure that new page cannot be faulted in
-	 * a section of the inode that is being punched. We cannot easily use
-	 * i_data_sem for this since we need protection for the whole punch
-	 * operation and i_data_sem ranks below transaction start so we have
-	 * to occasionally drop it.
-	 */
-	struct rw_semaphore i_mmap_sem;
 	struct inode vfs_inode;
 	struct jbd2_inode *jinode;
 
@@ -2908,7 +2899,6 @@ extern int ext4_chunk_trans_blocks(struct inode *, int nrblocks);
 extern int ext4_zero_partial_blocks(handle_t *handle, struct inode *inode,
 			     loff_t lstart, loff_t lend);
 extern vm_fault_t ext4_page_mkwrite(struct vm_fault *vmf);
-extern vm_fault_t ext4_filemap_fault(struct vm_fault *vmf);
 extern qsize_t *ext4_get_reserved_space(struct inode *inode);
 extern int ext4_get_projid(struct inode *inode, kprojid_t *projid);
 extern void ext4_da_release_space(struct inode *inode, int to_free);
diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
index 77c84d6f1af6..8bb6b84c8a84 100644
--- a/fs/ext4/extents.c
+++ b/fs/ext4/extents.c
@@ -4467,6 +4467,7 @@ static long ext4_zero_range(struct file *file, loff_t offset,
 			    loff_t len, int mode)
 {
 	struct inode *inode = file_inode(file);
+	struct address_space *mapping = file->f_mapping;
 	handle_t *handle = NULL;
 	unsigned int max_blocks;
 	loff_t new_size = 0;
@@ -4553,17 +4554,17 @@ static long ext4_zero_range(struct file *file, loff_t offset,
 		 * Prevent page faults from reinstantiating pages we have
 		 * released from page cache.
 		 */
-		down_write(&EXT4_I(inode)->i_mmap_sem);
+		down_write(&mapping->invalidate_lock);
 
 		ret = ext4_break_layouts(inode);
 		if (ret) {
-			up_write(&EXT4_I(inode)->i_mmap_sem);
+			up_write(&mapping->invalidate_lock);
 			goto out_mutex;
 		}
 
 		ret = ext4_update_disksize_before_punch(inode, offset, len);
 		if (ret) {
-			up_write(&EXT4_I(inode)->i_mmap_sem);
+			up_write(&mapping->invalidate_lock);
 			goto out_mutex;
 		}
 		/* Now release the pages and zero block aligned part of pages */
@@ -4572,7 +4573,7 @@ static long ext4_zero_range(struct file *file, loff_t offset,
 
 		ret = ext4_alloc_file_blocks(file, lblk, max_blocks, new_size,
 					     flags);
-		up_write(&EXT4_I(inode)->i_mmap_sem);
+		up_write(&mapping->invalidate_lock);
 		if (ret)
 			goto out_mutex;
 	}
@@ -5214,6 +5215,7 @@ ext4_ext_shift_extents(struct inode *inode, handle_t *handle,
 static int ext4_collapse_range(struct inode *inode, loff_t offset, loff_t len)
 {
 	struct super_block *sb = inode->i_sb;
+	struct address_space *mapping = inode->i_mapping;
 	ext4_lblk_t punch_start, punch_stop;
 	handle_t *handle;
 	unsigned int credits;
@@ -5267,7 +5269,7 @@ static int ext4_collapse_range(struct inode *inode, loff_t offset, loff_t len)
 	 * Prevent page faults from reinstantiating pages we have released from
 	 * page cache.
 	 */
-	down_write(&EXT4_I(inode)->i_mmap_sem);
+	down_write(&mapping->invalidate_lock);
 
 	ret = ext4_break_layouts(inode);
 	if (ret)
@@ -5282,15 +5284,15 @@ static int ext4_collapse_range(struct inode *inode, loff_t offset, loff_t len)
 	 * Write tail of the last page before removed range since it will get
 	 * removed from the page cache below.
 	 */
-	ret = filemap_write_and_wait_range(inode->i_mapping, ioffset, offset);
+	ret = filemap_write_and_wait_range(mapping, ioffset, offset);
 	if (ret)
 		goto out_mmap;
 	/*
 	 * Write data that will be shifted to preserve them when discarding
 	 * page cache below. We are also protected from pages becoming dirty
-	 * by i_mmap_sem.
+	 * by i_rwsem and invalidate_lock.
 	 */
-	ret = filemap_write_and_wait_range(inode->i_mapping, offset + len,
+	ret = filemap_write_and_wait_range(mapping, offset + len,
 					   LLONG_MAX);
 	if (ret)
 		goto out_mmap;
@@ -5343,7 +5345,7 @@ static int ext4_collapse_range(struct inode *inode, loff_t offset, loff_t len)
 	ext4_journal_stop(handle);
 	ext4_fc_stop_ineligible(sb);
 out_mmap:
-	up_write(&EXT4_I(inode)->i_mmap_sem);
+	up_write(&mapping->invalidate_lock);
 out_mutex:
 	inode_unlock(inode);
 	return ret;
@@ -5360,6 +5362,7 @@ static int ext4_collapse_range(struct inode *inode, loff_t offset, loff_t len)
 static int ext4_insert_range(struct inode *inode, loff_t offset, loff_t len)
 {
 	struct super_block *sb = inode->i_sb;
+	struct address_space *mapping = inode->i_mapping;
 	handle_t *handle;
 	struct ext4_ext_path *path;
 	struct ext4_extent *extent;
@@ -5418,7 +5421,7 @@ static int ext4_insert_range(struct inode *inode, loff_t offset, loff_t len)
 	 * Prevent page faults from reinstantiating pages we have released from
 	 * page cache.
 	 */
-	down_write(&EXT4_I(inode)->i_mmap_sem);
+	down_write(&mapping->invalidate_lock);
 
 	ret = ext4_break_layouts(inode);
 	if (ret)
@@ -5519,7 +5522,7 @@ static int ext4_insert_range(struct inode *inode, loff_t offset, loff_t len)
 	ext4_journal_stop(handle);
 	ext4_fc_stop_ineligible(sb);
 out_mmap:
-	up_write(&EXT4_I(inode)->i_mmap_sem);
+	up_write(&mapping->invalidate_lock);
 out_mutex:
 	inode_unlock(inode);
 	return ret;
diff --git a/fs/ext4/file.c b/fs/ext4/file.c
index 194f5d00fa32..61fa787138d8 100644
--- a/fs/ext4/file.c
+++ b/fs/ext4/file.c
@@ -687,22 +687,23 @@ static vm_fault_t ext4_dax_huge_fault(struct vm_fault *vmf,
 	 */
 	bool write = (vmf->flags & FAULT_FLAG_WRITE) &&
 		(vmf->vma->vm_flags & VM_SHARED);
+	struct address_space *mapping = vmf->vma->vm_file->f_mapping;
 	pfn_t pfn;
 
 	if (write) {
 		sb_start_pagefault(sb);
 		file_update_time(vmf->vma->vm_file);
-		down_read(&EXT4_I(inode)->i_mmap_sem);
+		down_read(&mapping->invalidate_lock);
 retry:
 		handle = ext4_journal_start_sb(sb, EXT4_HT_WRITE_PAGE,
 					       EXT4_DATA_TRANS_BLOCKS(sb));
 		if (IS_ERR(handle)) {
-			up_read(&EXT4_I(inode)->i_mmap_sem);
+			up_read(&mapping->invalidate_lock);
 			sb_end_pagefault(sb);
 			return VM_FAULT_SIGBUS;
 		}
 	} else {
-		down_read(&EXT4_I(inode)->i_mmap_sem);
+		down_read(&mapping->invalidate_lock);
 	}
 	result = dax_iomap_fault(vmf, pe_size, &pfn, &error, &ext4_iomap_ops);
 	if (write) {
@@ -714,10 +715,10 @@ static vm_fault_t ext4_dax_huge_fault(struct vm_fault *vmf,
 		/* Handling synchronous page fault? */
 		if (result & VM_FAULT_NEEDDSYNC)
 			result = dax_finish_sync_fault(vmf, pe_size, pfn);
-		up_read(&EXT4_I(inode)->i_mmap_sem);
+		up_read(&mapping->invalidate_lock);
 		sb_end_pagefault(sb);
 	} else {
-		up_read(&EXT4_I(inode)->i_mmap_sem);
+		up_read(&mapping->invalidate_lock);
 	}
 
 	return result;
@@ -739,7 +740,7 @@ static const struct vm_operations_struct ext4_dax_vm_ops = {
 #endif
 
 static const struct vm_operations_struct ext4_file_vm_ops = {
-	.fault		= ext4_filemap_fault,
+	.fault		= filemap_fault,
 	.map_pages	= filemap_map_pages,
 	.page_mkwrite   = ext4_page_mkwrite,
 };
diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index 0948a43f1b3d..62020bff7096 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -3952,20 +3952,19 @@ int ext4_update_disksize_before_punch(struct inode *inode, loff_t offset,
 	return ret;
 }
 
-static void ext4_wait_dax_page(struct ext4_inode_info *ei)
+static void ext4_wait_dax_page(struct inode *inode)
 {
-	up_write(&ei->i_mmap_sem);
+	up_write(&inode->i_mapping->invalidate_lock);
 	schedule();
-	down_write(&ei->i_mmap_sem);
+	down_write(&inode->i_mapping->invalidate_lock);
 }
 
 int ext4_break_layouts(struct inode *inode)
 {
-	struct ext4_inode_info *ei = EXT4_I(inode);
 	struct page *page;
 	int error;
 
-	if (WARN_ON_ONCE(!rwsem_is_locked(&ei->i_mmap_sem)))
+	if (WARN_ON_ONCE(!rwsem_is_locked(&inode->i_mapping->invalidate_lock)))
 		return -EINVAL;
 
 	do {
@@ -3976,7 +3975,7 @@ int ext4_break_layouts(struct inode *inode)
 		error = ___wait_var_event(&page->_refcount,
 				atomic_read(&page->_refcount) == 1,
 				TASK_INTERRUPTIBLE, 0, 0,
-				ext4_wait_dax_page(ei));
+				ext4_wait_dax_page(inode));
 	} while (error == 0);
 
 	return error;
@@ -4007,9 +4006,9 @@ int ext4_punch_hole(struct inode *inode, loff_t offset, loff_t length)
 
 	ext4_clear_inode_state(inode, EXT4_STATE_MAY_INLINE_DATA);
 	if (ext4_has_inline_data(inode)) {
-		down_write(&EXT4_I(inode)->i_mmap_sem);
+		down_write(&mapping->invalidate_lock);
 		ret = ext4_convert_inline_data(inode);
-		up_write(&EXT4_I(inode)->i_mmap_sem);
+		up_write(&mapping->invalidate_lock);
 		if (ret)
 			return ret;
 	}
@@ -4060,7 +4059,7 @@ int ext4_punch_hole(struct inode *inode, loff_t offset, loff_t length)
 	 * Prevent page faults from reinstantiating pages we have released from
 	 * page cache.
 	 */
-	down_write(&EXT4_I(inode)->i_mmap_sem);
+	down_write(&mapping->invalidate_lock);
 
 	ret = ext4_break_layouts(inode);
 	if (ret)
@@ -4133,7 +4132,7 @@ int ext4_punch_hole(struct inode *inode, loff_t offset, loff_t length)
 out_stop:
 	ext4_journal_stop(handle);
 out_dio:
-	up_write(&EXT4_I(inode)->i_mmap_sem);
+	up_write(&mapping->invalidate_lock);
 out_mutex:
 	inode_unlock(inode);
 	return ret;
@@ -5428,11 +5427,11 @@ int ext4_setattr(struct user_namespace *mnt_userns, struct dentry *dentry,
 			inode_dio_wait(inode);
 		}
 
-		down_write(&EXT4_I(inode)->i_mmap_sem);
+		down_write(&inode->i_mapping->invalidate_lock);
 
 		rc = ext4_break_layouts(inode);
 		if (rc) {
-			up_write(&EXT4_I(inode)->i_mmap_sem);
+			up_write(&inode->i_mapping->invalidate_lock);
 			goto err_out;
 		}
 
@@ -5508,7 +5507,7 @@ int ext4_setattr(struct user_namespace *mnt_userns, struct dentry *dentry,
 				error = rc;
 		}
 out_mmap_sem:
-		up_write(&EXT4_I(inode)->i_mmap_sem);
+		up_write(&inode->i_mapping->invalidate_lock);
 	}
 
 	if (!error) {
@@ -5985,10 +5984,10 @@ int ext4_change_inode_journal_flag(struct inode *inode, int val)
 	 * data (and journalled aops don't know how to handle these cases).
 	 */
 	if (val) {
-		down_write(&EXT4_I(inode)->i_mmap_sem);
+		down_write(&inode->i_mapping->invalidate_lock);
 		err = filemap_write_and_wait(inode->i_mapping);
 		if (err < 0) {
-			up_write(&EXT4_I(inode)->i_mmap_sem);
+			up_write(&inode->i_mapping->invalidate_lock);
 			return err;
 		}
 	}
@@ -6021,7 +6020,7 @@ int ext4_change_inode_journal_flag(struct inode *inode, int val)
 	percpu_up_write(&sbi->s_writepages_rwsem);
 
 	if (val)
-		up_write(&EXT4_I(inode)->i_mmap_sem);
+		up_write(&inode->i_mapping->invalidate_lock);
 
 	/* Finally we can mark the inode as dirty. */
 
@@ -6065,7 +6064,7 @@ vm_fault_t ext4_page_mkwrite(struct vm_fault *vmf)
 	sb_start_pagefault(inode->i_sb);
 	file_update_time(vma->vm_file);
 
-	down_read(&EXT4_I(inode)->i_mmap_sem);
+	down_read(&mapping->invalidate_lock);
 
 	err = ext4_convert_inline_data(inode);
 	if (err)
@@ -6178,7 +6177,7 @@ vm_fault_t ext4_page_mkwrite(struct vm_fault *vmf)
 out_ret:
 	ret = block_page_mkwrite_return(err);
 out:
-	up_read(&EXT4_I(inode)->i_mmap_sem);
+	up_read(&mapping->invalidate_lock);
 	sb_end_pagefault(inode->i_sb);
 	return ret;
 out_error:
@@ -6186,15 +6185,3 @@ vm_fault_t ext4_page_mkwrite(struct vm_fault *vmf)
 	ext4_journal_stop(handle);
 	goto out;
 }
-
-vm_fault_t ext4_filemap_fault(struct vm_fault *vmf)
-{
-	struct inode *inode = file_inode(vmf->vma->vm_file);
-	vm_fault_t ret;
-
-	down_read(&EXT4_I(inode)->i_mmap_sem);
-	ret = filemap_fault(vmf);
-	up_read(&EXT4_I(inode)->i_mmap_sem);
-
-	return ret;
-}
diff --git a/fs/ext4/ioctl.c b/fs/ext4/ioctl.c
index a2cf35066f46..ec4e4350e2b0 100644
--- a/fs/ext4/ioctl.c
+++ b/fs/ext4/ioctl.c
@@ -147,7 +147,7 @@ static long swap_inode_boot_loader(struct super_block *sb,
 		goto journal_err_out;
 	}
 
-	down_write(&EXT4_I(inode)->i_mmap_sem);
+	down_write(&inode->i_mapping->invalidate_lock);
 	err = filemap_write_and_wait(inode->i_mapping);
 	if (err)
 		goto err_out;
@@ -255,7 +255,7 @@ static long swap_inode_boot_loader(struct super_block *sb,
 	ext4_double_up_write_data_sem(inode, inode_bl);
 
 err_out:
-	up_write(&EXT4_I(inode)->i_mmap_sem);
+	up_write(&inode->i_mapping->invalidate_lock);
 journal_err_out:
 	unlock_two_nondirectories(inode, inode_bl);
 	iput(inode_bl);
diff --git a/fs/ext4/super.c b/fs/ext4/super.c
index b9693680463a..0525a19fd39d 100644
--- a/fs/ext4/super.c
+++ b/fs/ext4/super.c
@@ -90,12 +90,9 @@ static struct inode *ext4_get_journal_inode(struct super_block *sb,
 /*
  * Lock ordering
  *
- * Note the difference between i_mmap_sem (EXT4_I(inode)->i_mmap_sem) and
- * i_mmap_rwsem (inode->i_mmap_rwsem)!
- *
  * page fault path:
- * mmap_lock -> sb_start_pagefault -> i_mmap_sem (r) -> transaction start ->
- *   page lock -> i_data_sem (rw)
+ * mmap_lock -> sb_start_pagefault -> invalidate_lock (r) -> transaction start
+ *   -> page lock -> i_data_sem (rw)
  *
  * buffered write path:
  * sb_start_write -> i_mutex -> mmap_lock
@@ -103,8 +100,9 @@ static struct inode *ext4_get_journal_inode(struct super_block *sb,
  *   i_data_sem (rw)
  *
  * truncate:
- * sb_start_write -> i_mutex -> i_mmap_sem (w) -> i_mmap_rwsem (w) -> page lock
- * sb_start_write -> i_mutex -> i_mmap_sem (w) -> transaction start ->
+ * sb_start_write -> i_mutex -> invalidate_lock (w) -> i_mmap_rwsem (w) ->
+ *   page lock
+ * sb_start_write -> i_mutex -> invalidate_lock (w) -> transaction start ->
  *   i_data_sem (rw)
  *
  * direct IO:
@@ -1349,7 +1347,6 @@ static void init_once(void *foo)
 	INIT_LIST_HEAD(&ei->i_orphan);
 	init_rwsem(&ei->xattr_sem);
 	init_rwsem(&ei->i_data_sem);
-	init_rwsem(&ei->i_mmap_sem);
 	inode_init_once(&ei->vfs_inode);
 	ext4_fc_init_inode(&ei->vfs_inode);
 }
diff --git a/fs/ext4/truncate.h b/fs/ext4/truncate.h
index bcbe3668c1d4..b7242e08c9dd 100644
--- a/fs/ext4/truncate.h
+++ b/fs/ext4/truncate.h
@@ -11,14 +11,16 @@
  */
 static inline void ext4_truncate_failed_write(struct inode *inode)
 {
+	struct address_space *mapping = inode->i_mapping;
+
 	/*
 	 * We don't need to call ext4_break_layouts() because the blocks we
 	 * are truncating were never visible to userspace.
 	 */
-	down_write(&EXT4_I(inode)->i_mmap_sem);
-	truncate_inode_pages(inode->i_mapping, inode->i_size);
+	down_write(&mapping->invalidate_lock);
+	truncate_inode_pages(mapping, inode->i_size);
 	ext4_truncate(inode);
-	up_write(&EXT4_I(inode)->i_mmap_sem);
+	up_write(&mapping->invalidate_lock);
 }
 
 /*
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [PATCH 04/12] ext2: Convert to using invalidate_lock
  2021-04-23 17:29 [PATCH 0/12 v4] fs: Hole punch vs page cache filling races Jan Kara
  2021-04-23 17:29 ` [PATCH 02/12] mm: Protect operations adding pages to page cache with invalidate_lock Jan Kara
  2021-04-23 17:29 ` [PATCH 03/12] ext4: Convert to use mapping->invalidate_lock Jan Kara
@ 2021-04-23 17:29 ` Jan Kara
  2021-04-23 22:07 ` [PATCH 0/12 v4] fs: Hole punch vs page cache filling races Dave Chinner
  3 siblings, 0 replies; 10+ messages in thread
From: Jan Kara @ 2021-04-23 17:29 UTC (permalink / raw)
  To: linux-fsdevel
  Cc: Christoph Hellwig, Amir Goldstein, Dave Chinner, Ted Tso,
	Jan Kara, linux-ext4

Ext2 has its private dax_sem used for synchronizing page faults and
truncation. Use mapping->invalidate_lock instead as it is meant for this
purpose.

CC: <linux-ext4@vger.kernel.org>
Signed-off-by: Jan Kara <jack@suse.cz>
---
 fs/ext2/ext2.h  | 11 -----------
 fs/ext2/file.c  |  7 +++----
 fs/ext2/inode.c | 12 ++++++------
 fs/ext2/super.c |  3 ---
 4 files changed, 9 insertions(+), 24 deletions(-)

diff --git a/fs/ext2/ext2.h b/fs/ext2/ext2.h
index 3309fb2d327a..96d5ea5df78b 100644
--- a/fs/ext2/ext2.h
+++ b/fs/ext2/ext2.h
@@ -671,9 +671,6 @@ struct ext2_inode_info {
 	struct rw_semaphore xattr_sem;
 #endif
 	rwlock_t i_meta_lock;
-#ifdef CONFIG_FS_DAX
-	struct rw_semaphore dax_sem;
-#endif
 
 	/*
 	 * truncate_mutex is for serialising ext2_truncate() against
@@ -689,14 +686,6 @@ struct ext2_inode_info {
 #endif
 };
 
-#ifdef CONFIG_FS_DAX
-#define dax_sem_down_write(ext2_inode)	down_write(&(ext2_inode)->dax_sem)
-#define dax_sem_up_write(ext2_inode)	up_write(&(ext2_inode)->dax_sem)
-#else
-#define dax_sem_down_write(ext2_inode)
-#define dax_sem_up_write(ext2_inode)
-#endif
-
 /*
  * Inode dynamic state flags
  */
diff --git a/fs/ext2/file.c b/fs/ext2/file.c
index 96044f5dbc0e..9d4870df95c4 100644
--- a/fs/ext2/file.c
+++ b/fs/ext2/file.c
@@ -81,7 +81,7 @@ static ssize_t ext2_dax_write_iter(struct kiocb *iocb, struct iov_iter *from)
  *
  * mmap_lock (MM)
  *   sb_start_pagefault (vfs, freeze)
- *     ext2_inode_info->dax_sem
+ *     address_space->invalidate_lock
  *       address_space->i_mmap_rwsem or page_lock (mutually exclusive in DAX)
  *         ext2_inode_info->truncate_mutex
  *
@@ -91,7 +91,6 @@ static ssize_t ext2_dax_write_iter(struct kiocb *iocb, struct iov_iter *from)
 static vm_fault_t ext2_dax_fault(struct vm_fault *vmf)
 {
 	struct inode *inode = file_inode(vmf->vma->vm_file);
-	struct ext2_inode_info *ei = EXT2_I(inode);
 	vm_fault_t ret;
 	bool write = (vmf->flags & FAULT_FLAG_WRITE) &&
 		(vmf->vma->vm_flags & VM_SHARED);
@@ -100,11 +99,11 @@ static vm_fault_t ext2_dax_fault(struct vm_fault *vmf)
 		sb_start_pagefault(inode->i_sb);
 		file_update_time(vmf->vma->vm_file);
 	}
-	down_read(&ei->dax_sem);
+	down_read(&inode->i_mapping->invalidate_lock);
 
 	ret = dax_iomap_fault(vmf, PE_SIZE_PTE, NULL, NULL, &ext2_iomap_ops);
 
-	up_read(&ei->dax_sem);
+	up_read(&inode->i_mapping->invalidate_lock);
 	if (write)
 		sb_end_pagefault(inode->i_sb);
 	return ret;
diff --git a/fs/ext2/inode.c b/fs/ext2/inode.c
index 68178b2234bd..e843be0ae53c 100644
--- a/fs/ext2/inode.c
+++ b/fs/ext2/inode.c
@@ -1175,7 +1175,7 @@ static void ext2_free_branches(struct inode *inode, __le32 *p, __le32 *q, int de
 		ext2_free_data(inode, p, q);
 }
 
-/* dax_sem must be held when calling this function */
+/* mapping->invalidate_lock must be held when calling this function */
 static void __ext2_truncate_blocks(struct inode *inode, loff_t offset)
 {
 	__le32 *i_data = EXT2_I(inode)->i_data;
@@ -1192,7 +1192,7 @@ static void __ext2_truncate_blocks(struct inode *inode, loff_t offset)
 	iblock = (offset + blocksize-1) >> EXT2_BLOCK_SIZE_BITS(inode->i_sb);
 
 #ifdef CONFIG_FS_DAX
-	WARN_ON(!rwsem_is_locked(&ei->dax_sem));
+	WARN_ON(!rwsem_is_locked(&inode->i_mapping->invalidate_lock));
 #endif
 
 	n = ext2_block_to_path(inode, iblock, offsets, NULL);
@@ -1274,9 +1274,9 @@ static void ext2_truncate_blocks(struct inode *inode, loff_t offset)
 	if (ext2_inode_is_fast_symlink(inode))
 		return;
 
-	dax_sem_down_write(EXT2_I(inode));
+	down_write(&inode->i_mapping->invalidate_lock);
 	__ext2_truncate_blocks(inode, offset);
-	dax_sem_up_write(EXT2_I(inode));
+	up_write(&inode->i_mapping->invalidate_lock);
 }
 
 static int ext2_setsize(struct inode *inode, loff_t newsize)
@@ -1306,10 +1306,10 @@ static int ext2_setsize(struct inode *inode, loff_t newsize)
 	if (error)
 		return error;
 
-	dax_sem_down_write(EXT2_I(inode));
+	down_write(&inode->i_mapping->invalidate_lock);
 	truncate_setsize(inode, newsize);
 	__ext2_truncate_blocks(inode, newsize);
-	dax_sem_up_write(EXT2_I(inode));
+	up_write(&inode->i_mapping->invalidate_lock);
 
 	inode->i_mtime = inode->i_ctime = current_time(inode);
 	if (inode_needs_sync(inode)) {
diff --git a/fs/ext2/super.c b/fs/ext2/super.c
index 6c4753277916..143e72c95acf 100644
--- a/fs/ext2/super.c
+++ b/fs/ext2/super.c
@@ -206,9 +206,6 @@ static void init_once(void *foo)
 	init_rwsem(&ei->xattr_sem);
 #endif
 	mutex_init(&ei->truncate_mutex);
-#ifdef CONFIG_FS_DAX
-	init_rwsem(&ei->dax_sem);
-#endif
 	inode_init_once(&ei->vfs_inode);
 }
 
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 10+ messages in thread

* Re: [PATCH 02/12] mm: Protect operations adding pages to page cache with invalidate_lock
  2021-04-23 17:29 ` [PATCH 02/12] mm: Protect operations adding pages to page cache with invalidate_lock Jan Kara
@ 2021-04-23 18:30   ` Matthew Wilcox
  2021-04-23 23:04   ` Dave Chinner
  1 sibling, 0 replies; 10+ messages in thread
From: Matthew Wilcox @ 2021-04-23 18:30 UTC (permalink / raw)
  To: Jan Kara
  Cc: linux-fsdevel, Christoph Hellwig, Amir Goldstein, Dave Chinner,
	Ted Tso, ceph-devel, Chao Yu, Damien Le Moal, Darrick J. Wong,
	Hugh Dickins, Jaegeuk Kim, Jeff Layton, Johannes Thumshirn,
	linux-cifs, linux-ext4, linux-f2fs-devel, linux-mm, linux-xfs,
	Miklos Szeredi, Steve French

On Fri, Apr 23, 2021 at 07:29:31PM +0200, Jan Kara wrote:
> Currently, serializing operations such as page fault, read, or readahead
> against hole punching is rather difficult. The basic race scheme is
> like:
> 
> fallocate(FALLOC_FL_PUNCH_HOLE)			read / fault / ..
>   truncate_inode_pages_range()
> 						  <create pages in page
> 						   cache here>
>   <update fs block mapping and free blocks>
> 
> Now the problem is in this way read / page fault / readahead can
> instantiate pages in page cache with potentially stale data (if blocks
> get quickly reused). Avoiding this race is not simple - page locks do
> not work because we want to make sure there are *no* pages in given
> range.

One of the things I've had in mind for a while is moving the DAX locked
entry concept into the page cache proper.  It would avoid creating the
new semaphore, at the cost of taking the i_pages lock twice (once to
insert the entries that cover the range, and once to delete the entries).

It'd have pretty much the same effect, though -- read/fault/... would
block until the entry was deleted from the page cache.

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH 0/12 v4] fs: Hole punch vs page cache filling races
  2021-04-23 17:29 [PATCH 0/12 v4] fs: Hole punch vs page cache filling races Jan Kara
                   ` (2 preceding siblings ...)
  2021-04-23 17:29 ` [PATCH 04/12] ext2: Convert to using invalidate_lock Jan Kara
@ 2021-04-23 22:07 ` Dave Chinner
  2021-04-23 23:51   ` Matthew Wilcox
  3 siblings, 1 reply; 10+ messages in thread
From: Dave Chinner @ 2021-04-23 22:07 UTC (permalink / raw)
  To: Jan Kara
  Cc: linux-fsdevel, Christoph Hellwig, Amir Goldstein, Ted Tso,
	ceph-devel, Chao Yu, Damien Le Moal, Darrick J. Wong,
	Hugh Dickins, Jaegeuk Kim, Jeff Layton, Johannes Thumshirn,
	linux-cifs, linux-ext4, linux-f2fs-devel, linux-mm, linux-xfs,
	Miklos Szeredi, Steve French

Hi Jan,

In future, can you please use the same cc-list for the entire
patchset?

The stuff that has hit the XFS list (where I'm replying from)
doesn't give me any context as to what the core changes are that
allow XFS to be changed, so I can't review them in isolation.

I've got to spend time now reconstructing the patchset into a single
series because the delivery has been spread across three different
mailing lists and so hit 3 different procmail filters.  I'll comment
on the patches once I've reconstructed the series and read through
it as a whole...

/me considers the way people use "cc" tags in git commits for
including mailing lists on individual patches actively harmful.
Unless the recipient is subscribed to all the mailing lists the
patchset was CC'd to, they can't easily find the bits of the
patchset that didn't arrive in their mailbox. Individual mailing
lists should receive entire patchsets for review, not random,
individual, context-free patches.

And, FWIW, cc'ing the cover letter to all the mailing lists is not
good enough. Being able to see the code change as a whole is what
matters for review, not the cover letter...

Cheers,

Dave.

On Fri, Apr 23, 2021 at 07:29:29PM +0200, Jan Kara wrote:
> Hello,
> 
> here is another version of my patches to address races between hole punching
> and page cache filling functions for ext4 and other filesystems. I think
> we are coming close to a complete solution so I've removed the RFC tag from
> the subject. I went through all filesystems supporting hole punching and
> converted them from their private locks to a generic one (usually fixing the
> race ext4 had as a side effect). I also found out ceph & cifs didn't have
> any protection from the hole punch vs page fault race either so I've added
> appropriate protections there. Still open are the GFS2 and OCFS2 filesystems.
> GFS2 actually avoids the race but is prone to deadlocks (it acquires the same
> lock both above and below mmap_sem); OCFS2 locking seems kind of hosed and some
> read, write, and hole punch paths are not properly serialized, possibly leading
> to fs corruption. Both issues are non-trivial, so the respective fs maintainers
> have to deal with them (I've informed them and the problems were generally
> confirmed). Anyway, for all the other filesystems this kind of race should
> be closed.
> 
> As a next step, I'd like to actually make sure all calls to
> truncate_inode_pages() happen under mapping->invalidate_lock, add the assert
> and then we can also get rid of i_size checks in some places (truncate can
> use the same serialization scheme as hole punch). But that step is mostly
> a cleanup so I'd like to get these functional fixes in first.
> 
> Changes since v3:
> * Renamed and moved lock to struct address_space
> * Added conversions of tmpfs, ceph, cifs, fuse, f2fs
> * Fixed error handling path in filemap_read()
> * Removed .page_mkwrite() cleanup from the series for now
> 
> Changes since v2:
> * Added documentation and comments regarding lock ordering and how the lock is
>   supposed to be used
> * Added conversions of ext2, xfs, zonefs
> * Added patch removing i_mapping_sem protection from .page_mkwrite handlers
> 
> Changes since v1:
> * Moved to using inode->i_mapping_sem instead of aops handler to acquire
>   appropriate lock
> 
> ---
> Motivation:
> 
> Amir has reported [1] that ext4 has a potential issue when reads race
> with hole punching, possibly exposing stale data from freed blocks or even
> corrupting the filesystem when stale mapping data gets used for writeout. The
> problem is that during hole punching, new page cache pages can get instantiated
> and their block mapping looked up in the punched range after
> truncate_inode_pages() has run but before the filesystem removes blocks from
> the file. In principle any filesystem implementing hole punching thus needs
> a mechanism to block instantiation of page cache pages during hole
> punching to avoid this race. This is further complicated by the fact that there
> are multiple places that can instantiate pages in the page cache: a
> regular read(2) or page fault can do this, but fadvise(2) or madvise(2) can also
> result in page cache pages being read in through force_page_cache_readahead().
> 
> There are a couple of ways to fix this. The first way (currently implemented by
> XFS) is to protect read(2) and *advise(2) calls with i_rwsem so that they are
> serialized with hole punching. This is easy to do but as a result all reads
> would then be serialized with writes and thus mixed read-write workloads suffer
> heavily on ext4. Thus this series introduces mapping->invalidate_lock (called
> inode->i_mapping_sem in previous versions) and uses it when creating new pages
> in the page cache and looking up their corresponding block mapping. We also
> replace EXT4_I(inode)->i_mmap_sem with this new rwsem, which provides the
> necessary serialization with hole punching for ext4.
> 
> 								Honza
> 
> [1] https://lore.kernel.org/linux-fsdevel/CAOQ4uxjQNmxqmtA_VbYW0Su9rKRk2zobJmahcyeaEVOFKVQ5dw@mail.gmail.com/
> 
> Previous versions:
> Link: https://lore.kernel.org/linux-fsdevel/20210208163918.7871-1-jack@suse.cz/
> Link: http://lore.kernel.org/r/20210413105205.3093-1-jack@suse.cz
> 
> CC: ceph-devel@vger.kernel.org
> CC: Chao Yu <yuchao0@huawei.com>
> CC: Damien Le Moal <damien.lemoal@wdc.com>
> CC: "Darrick J. Wong" <darrick.wong@oracle.com>
> CC: Hugh Dickins <hughd@google.com>
> CC: Jaegeuk Kim <jaegeuk@kernel.org>
> CC: Jeff Layton <jlayton@kernel.org>
> CC: Johannes Thumshirn <jth@kernel.org>
> CC: linux-cifs@vger.kernel.org
> CC: <linux-ext4@vger.kernel.org>
> CC: linux-f2fs-devel@lists.sourceforge.net
> CC: <linux-fsdevel@vger.kernel.org>
> CC: <linux-mm@kvack.org>
> CC: <linux-xfs@vger.kernel.org>
> CC: Miklos Szeredi <miklos@szeredi.hu>
> CC: Steve French <sfrench@samba.org>
> CC: Ted Tso <tytso@mit.edu>
> 

-- 
Dave Chinner
david@fromorbit.com

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH 02/12] mm: Protect operations adding pages to page cache with invalidate_lock
  2021-04-23 17:29 ` [PATCH 02/12] mm: Protect operations adding pages to page cache with invalidate_lock Jan Kara
  2021-04-23 18:30   ` Matthew Wilcox
@ 2021-04-23 23:04   ` Dave Chinner
  2021-04-26 15:46     ` Jan Kara
  1 sibling, 1 reply; 10+ messages in thread
From: Dave Chinner @ 2021-04-23 23:04 UTC (permalink / raw)
  To: Jan Kara
  Cc: linux-fsdevel, Christoph Hellwig, Amir Goldstein, Ted Tso,
	ceph-devel, Chao Yu, Damien Le Moal, Darrick J. Wong,
	Hugh Dickins, Jaegeuk Kim, Jeff Layton, Johannes Thumshirn,
	linux-cifs, linux-ext4, linux-f2fs-devel, linux-mm, linux-xfs,
	Miklos Szeredi, Steve French

On Fri, Apr 23, 2021 at 07:29:31PM +0200, Jan Kara wrote:
> Currently, serializing operations such as page fault, read, or readahead
> against hole punching is rather difficult. The basic race scheme is
> like:
> 
> fallocate(FALLOC_FL_PUNCH_HOLE)			read / fault / ..
>   truncate_inode_pages_range()
> 						  <create pages in page
> 						   cache here>
>   <update fs block mapping and free blocks>
> 
> Now the problem is in this way read / page fault / readahead can
> instantiate pages in page cache with potentially stale data (if blocks
> get quickly reused). Avoiding this race is not simple - page locks do
> not work because we want to make sure there are *no* pages in given
> range. inode->i_rwsem does not work because page fault happens under
> mmap_sem which ranks below inode->i_rwsem. Also using it for reads makes
> the performance for mixed read-write workloads suffer.
> 
> So create a new rw_semaphore in the address_space - invalidate_lock -
> that protects adding of pages to page cache for page faults / reads /
> readahead.
.....
> diff --git a/fs/inode.c b/fs/inode.c
> index a047ab306f9a..43596dd8b61e 100644
> --- a/fs/inode.c
> +++ b/fs/inode.c
> @@ -191,6 +191,9 @@ int inode_init_always(struct super_block *sb, struct inode *inode)
>  	mapping_set_gfp_mask(mapping, GFP_HIGHUSER_MOVABLE);
>  	mapping->private_data = NULL;
>  	mapping->writeback_index = 0;
> +	init_rwsem(&mapping->invalidate_lock);
> +	lockdep_set_class(&mapping->invalidate_lock,
> +			  &sb->s_type->invalidate_lock_key);
>  	inode->i_private = NULL;
>  	inode->i_mapping = mapping;
>  	INIT_HLIST_HEAD(&inode->i_dentry);	/* buggered by rcu freeing */

Oh, lockdep. That might be a problem here.

The XFS_MMAPLOCK has non-trivial lockdep annotations so that it is
tracked as nesting properly against the IOLOCK and the ILOCK. When
you end up using xfs_ilock(XFS_MMAPLOCK..) to lock this, XFS will
add subclass annotations to the lock and they are going to be
different to the locking that the VFS does.

We'll see this from xfs_lock_two_inodes() (e.g. in
xfs_swap_extents()) and xfs_ilock2_io_mmap() during reflink
oper.....

Oooooh. The page cache copy done when breaking a shared extent needs
to lock out page faults on both the source and destination, but it
still needs to be able to populate the page cache of both the source
and destination file.....

.... and vfs_dedupe_file_range_compare() has to be able to read
pages from both the source and destination file to determine that
the contents are identical and that's done while we hold the
XFS_MMAPLOCK exclusively so the compare is atomic w.r.t. all other
user data modification operations being run....

I now have many doubts that this "serialise page faults by locking
out page cache instantiation" method actually works as a generic
mechanism. It's not just page cache invalidation that relies on
being able to lock out page faults: copy-on-write and deduplication
both require the ability to populate the page cache with source data
while page faults are locked out so the data can be compared/copied
atomically with the extent level manipulations and so user data
modifications cannot occur until the physical extent manipulation
operation has completed.

Having only just realised this is a problem, no solution has
immediately popped into my mind. I'll chew on it over the weekend,
but I'm not hopeful at this point...

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH 0/12 v4] fs: Hole punch vs page cache filling races
  2021-04-23 22:07 ` [PATCH 0/12 v4] fs: Hole punch vs page cache filling races Dave Chinner
@ 2021-04-23 23:51   ` Matthew Wilcox
  2021-04-24  6:11     ` Christoph Hellwig
  0 siblings, 1 reply; 10+ messages in thread
From: Matthew Wilcox @ 2021-04-23 23:51 UTC (permalink / raw)
  To: Dave Chinner
  Cc: Jan Kara, linux-fsdevel, Christoph Hellwig, Amir Goldstein,
	Ted Tso, ceph-devel, Chao Yu, Damien Le Moal, Darrick J. Wong,
	Hugh Dickins, Jaegeuk Kim, Jeff Layton, Johannes Thumshirn,
	linux-cifs, linux-ext4, linux-f2fs-devel, linux-mm, linux-xfs,
	Miklos Szeredi, Steve French

On Sat, Apr 24, 2021 at 08:07:51AM +1000, Dave Chinner wrote:
> I've got to spend time now reconstructing the patchset into a single
> series because the delivery has been spread across three different
> mailing lists and so hit 3 different procmail filters.  I'll comment
> on the patches once I've reconstructed the series and read through
> it as a whole...

$ b4 mbox 20210423171010.12-1-jack@suse.cz
Looking up https://lore.kernel.org/r/20210423171010.12-1-jack%40suse.cz
Grabbing thread from lore.kernel.org/ceph-devel
6 messages in the thread
Saved ./20210423171010.12-1-jack@suse.cz.mbx


* Re: [PATCH 0/12 v4] fs: Hole punch vs page cache filling races
  2021-04-23 23:51   ` Matthew Wilcox
@ 2021-04-24  6:11     ` Christoph Hellwig
  0 siblings, 0 replies; 10+ messages in thread
From: Christoph Hellwig @ 2021-04-24  6:11 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: Dave Chinner, Jan Kara, linux-fsdevel, Christoph Hellwig,
	Amir Goldstein, Ted Tso, ceph-devel, Chao Yu, Damien Le Moal,
	Darrick J. Wong, Hugh Dickins, Jaegeuk Kim, Jeff Layton,
	Johannes Thumshirn, linux-cifs, linux-ext4, linux-f2fs-devel,
	linux-mm, linux-xfs, Miklos Szeredi, Steve French

On Sat, Apr 24, 2021 at 12:51:49AM +0100, Matthew Wilcox wrote:
> On Sat, Apr 24, 2021 at 08:07:51AM +1000, Dave Chinner wrote:
> > I've got to spend time now reconstructing the patchset into a single
> > series because the delivery has been spread across three different
> > mailing lists and so hit 3 different procmail filters.  I'll comment
> > on the patches once I've reconstructed the series and read through
> > it as a whole...
> 
> $ b4 mbox 20210423171010.12-1-jack@suse.cz
> Looking up https://lore.kernel.org/r/20210423171010.12-1-jack%40suse.cz
> Grabbing thread from lore.kernel.org/ceph-devel
> 6 messages in the thread
> Saved ./20210423171010.12-1-jack@suse.cz.mbx

Yikes.  Just send them damn mails.  Or switch the lists to NNTP, but
don't let the people who are reviewing your patches do stupid work
with weird tools.

* Re: [PATCH 02/12] mm: Protect operations adding pages to page cache with invalidate_lock
  2021-04-23 23:04   ` Dave Chinner
@ 2021-04-26 15:46     ` Jan Kara
  0 siblings, 0 replies; 10+ messages in thread
From: Jan Kara @ 2021-04-26 15:46 UTC (permalink / raw)
  To: Dave Chinner
  Cc: Jan Kara, linux-fsdevel, Christoph Hellwig, Amir Goldstein,
	Ted Tso, ceph-devel, Chao Yu, Damien Le Moal, Darrick J. Wong,
	Hugh Dickins, Jaegeuk Kim, Jeff Layton, Johannes Thumshirn,
	linux-cifs, linux-ext4, linux-f2fs-devel, linux-mm, linux-xfs,
	Miklos Szeredi, Steve French

On Sat 24-04-21 09:04:49, Dave Chinner wrote:
> On Fri, Apr 23, 2021 at 07:29:31PM +0200, Jan Kara wrote:
> > Currently, serializing operations such as page fault, read, or readahead
> > against hole punching is rather difficult. The basic race scheme is
> > like:
> > 
> > fallocate(FALLOC_FL_PUNCH_HOLE)			read / fault / ..
> >   truncate_inode_pages_range()
> > 						  <create pages in page
> > 						   cache here>
> >   <update fs block mapping and free blocks>
> > 
> > Now the problem is in this way read / page fault / readahead can
> > instantiate pages in page cache with potentially stale data (if blocks
> > get quickly reused). Avoiding this race is not simple - page locks do
> > not work because we want to make sure there are *no* pages in given
> > range. inode->i_rwsem does not work because page fault happens under
> > mmap_sem which ranks below inode->i_rwsem. Also using it for reads makes
> > the performance for mixed read-write workloads suffer.
> > 
> > So create a new rw_semaphore in the address_space - invalidate_lock -
> > that protects adding of pages to page cache for page faults / reads /
> > readahead.
> .....
> > diff --git a/fs/inode.c b/fs/inode.c
> > index a047ab306f9a..43596dd8b61e 100644
> > --- a/fs/inode.c
> > +++ b/fs/inode.c
> > @@ -191,6 +191,9 @@ int inode_init_always(struct super_block *sb, struct inode *inode)
> >  	mapping_set_gfp_mask(mapping, GFP_HIGHUSER_MOVABLE);
> >  	mapping->private_data = NULL;
> >  	mapping->writeback_index = 0;
> > +	init_rwsem(&mapping->invalidate_lock);
> > +	lockdep_set_class(&mapping->invalidate_lock,
> > +			  &sb->s_type->invalidate_lock_key);
> >  	inode->i_private = NULL;
> >  	inode->i_mapping = mapping;
> >  	INIT_HLIST_HEAD(&inode->i_dentry);	/* buggered by rcu freeing */
> 
> Oh, lockdep. That might be a problem here.
> 
> The XFS_MMAPLOCK has non-trivial lockdep annotations so that it is
> tracked as nesting properly against the IOLOCK and the ILOCK. When
> you end up using xfs_ilock(XFS_MMAPLOCK..) to lock this, XFS will
> add subclass annotations to the lock and they are going to be
> different to the locking that the VFS does.
> 
> We'll see this from xfs_lock_two_inodes() (e.g. in
> xfs_swap_extents()) and xfs_ilock2_io_mmap() during reflink
> oper.....

Thanks for the pointer. I was wondering what lockdep nesting games XFS
plays but then forgot to look into the details. Anyway, I've preserved
the nesting annotations in XFS, and an fstests run on XFS passed
without lockdep complaining, so at least there is no obvious breakage.
Also, as far as I can tell from checking the code, XFS's usage and
lock nesting of the MMAPLOCK should be compatible with the nesting the
VFS enforces (also see below)...
 
> Oooooh. The page cache copy done when breaking a shared extent needs
> to lock out page faults on both the source and destination, but it
> still needs to be able to populate the page cache of both the source
> and destination file.....
> 
> .... and vfs_dedupe_file_range_compare() has to be able to read
> pages from both the source and destination file to determine that
> the contents are identical and that's done while we hold the
> XFS_MMAPLOCK exclusively so the compare is atomic w.r.t. all other
> user data modification operations being run....

So I started wondering why fstests passed when reading this :) The
reason is that vfs_dedupe_get_page() does not use the standard page
cache filling paths (neither the readahead API nor filemap_read());
instead it uses read_mapping_page() and so enters the page cache
filling path below the level at which we take invalidate_lock, and
thus everything works as it should. In that respect read_mapping_page()
is similar to places like block_truncate_page() or block_write_begin(),
which may end up filling in page cache contents but rely on the upper
layers to already hold the appropriate locks. I'll add a comment to
read_mapping_page() about this. Once all filesystems are converted to
use invalidate_lock, I also want to add WARN_ON_ONCE() to various
places to verify that invalidate_lock is held as it should be...
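
For instance, such an assertion could look something like this (a
minimal sketch; the helper name is made up and not part of this
series):

#include <linux/fs.h>
#include <linux/rwsem.h>
#include <linux/bug.h>

/*
 * Hypothetical debug check for the page-cache-filling entry points
 * (readahead, filemap_read(), page fault): warn if the caller did not
 * take mapping->invalidate_lock, shared or exclusive.
 */
static inline void filemap_assert_invalidate_locked(struct address_space *mapping)
{
	WARN_ON_ONCE(!rwsem_is_locked(&mapping->invalidate_lock));
}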
 
> I now have many doubts that this "serialise page faults by locking
> out page cache instantiation" method actually works as a generic
> mechanism. It's not just page cache invalidation that relies on
> being able to lock out page faults: copy-on-write and deduplication
> both require the ability to populate the page cache with source data
> while page faults are locked out so the data can be compared/copied
> atomically with the extent level manipulations and so user data
> modifications cannot occur until the physical extent manipulation
> operation has completed.

Hum, that is a good point. So there are actually two different things
you want to block at different places:

1) You really want to block page cache instantiation for operations
such as hole punch, as that operation mutates data and the cached
contents would become stale.

2) You want to block page cache *modification* for operations such as
dedupe while keeping the page cache in place. This is a somewhat
different requirement but invalidate_lock can, in principle, cover it
as well. Basically we just need to keep the invalidate_lock usage in
.page_mkwrite helpers (see the sketch below). The question remains
whether invalidate_lock is still a good name with this usage in mind,
and I probably need to update the documentation to reflect this usage.
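
To illustrate the two uses side by side, here is a rough sketch (all
names, example_get_block() and the simplified error handling are made
up for illustration; this is not code from the series):

#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/buffer_head.h>

/* Hypothetical block-mapping callback provided by the filesystem. */
static int example_get_block(struct inode *inode, sector_t iblock,
			     struct buffer_head *bh, int create)
{
	return -EIO;	/* placeholder: a real fs maps the block here */
}

/* 1) Hole punch: block page cache *instantiation* while blocks are
 *    removed from the file. */
static int example_punch_hole(struct inode *inode, loff_t start, loff_t end)
{
	struct address_space *mapping = inode->i_mapping;
	int err = 0;

	down_write(&mapping->invalidate_lock);
	truncate_pagecache_range(inode, start, end);
	/* <update the fs block mapping and free the blocks here> */
	up_write(&mapping->invalidate_lock);
	return err;
}

/* 2) Dedupe-style protection: block page cache *modification* by
 *    writable faults while the cache itself may stay populated. */
static vm_fault_t example_page_mkwrite(struct vm_fault *vmf)
{
	struct inode *inode = file_inode(vmf->vma->vm_file);
	struct address_space *mapping = inode->i_mapping;
	int err;

	sb_start_pagefault(inode->i_sb);
	file_update_time(vmf->vma->vm_file);
	down_read(&mapping->invalidate_lock);
	err = block_page_mkwrite(vmf->vma, vmf, example_get_block);
	up_read(&mapping->invalidate_lock);
	sb_end_pagefault(inode->i_sb);
	return block_page_mkwrite_return(err);
}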

								Honza
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR

end of thread, other threads:[~2021-04-26 15:46 UTC | newest]

Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-04-23 17:29 [PATCH 0/12 v4] fs: Hole punch vs page cache filling races Jan Kara
2021-04-23 17:29 ` [PATCH 02/12] mm: Protect operations adding pages to page cache with invalidate_lock Jan Kara
2021-04-23 18:30   ` Matthew Wilcox
2021-04-23 23:04   ` Dave Chinner
2021-04-26 15:46     ` Jan Kara
2021-04-23 17:29 ` [PATCH 03/12] ext4: Convert to use mapping->invalidate_lock Jan Kara
2021-04-23 17:29 ` [PATCH 04/12] ext2: Convert to using invalidate_lock Jan Kara
2021-04-23 22:07 ` [PATCH 0/12 v4] fs: Hole punch vs page cache filling races Dave Chinner
2021-04-23 23:51   ` Matthew Wilcox
2021-04-24  6:11     ` Christoph Hellwig
