* [PATCH 0/12 v4] fs: Hole punch vs page cache filling races
@ 2021-04-23 17:29 Jan Kara
  2021-04-23 17:29 ` [PATCH 01/12] mm: Fix comments mentioning i_mutex Jan Kara
                   ` (12 more replies)
  0 siblings, 13 replies; 40+ messages in thread
From: Jan Kara @ 2021-04-23 17:29 UTC
  To: linux-fsdevel
  Cc: Christoph Hellwig, Amir Goldstein, Dave Chinner, Ted Tso,
	Jan Kara, ceph-devel, Chao Yu, Damien Le Moal, Darrick J. Wong,
	Hugh Dickins, Jaegeuk Kim, Jeff Layton, Johannes Thumshirn,
	linux-cifs, linux-ext4, linux-f2fs-devel, linux-mm, linux-xfs,
	Miklos Szeredi, Steve French

Hello,

here is another version of my patches to address races between hole punching
and page cache filling functions for ext4 and other filesystems. I think
we are coming close to a complete solution so I've removed the RFC tag from
the subject. I went through all filesystems supporting hole punching and
converted them from their private locks to a generic one (usually fixing the
race ext4 had as a side effect). I also found out that ceph & cifs didn't
have any protection from the hole punch vs page fault race either, so I've
added appropriate protections there. Still open are the GFS2 and OCFS2
filesystems. GFS2 actually avoids the race but is prone to deadlocks (it
acquires the same lock both above and below mmap_lock); OCFS2 locking seems
kind of hosed, and some read, write, and hole punch paths are not properly
serialized, possibly leading to fs corruption. Both issues are non-trivial,
so the respective fs maintainers will have to deal with them (I've informed
them and the problems were generally confirmed). Anyway, for all the other
filesystems this kind of race should now be closed.

As a next step, I'd like to make sure all calls to truncate_inode_pages()
happen under mapping->invalidate_lock, add an assertion for that, and then we
can also get rid of i_size checks in some places (truncate can use the same
serialization scheme as hole punch). But that step is mostly a cleanup, so
I'd like to get these functional fixes in first.
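
For illustration, a minimal sketch of what that assertion could look like
(the exact form is an assumption, not part of this series):

	void truncate_inode_pages(struct address_space *mapping, loff_t lstart)
	{
		/* assumed rule: callers hold invalidate_lock exclusively */
		lockdep_assert_held_write(&mapping->invalidate_lock);
		truncate_inode_pages_range(mapping, lstart, (loff_t)-1);
	}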

Changes since v3:
* Renamed and moved lock to struct address_space
* Added conversions of tmpfs, ceph, cifs, fuse, f2fs
* Fixed error handling path in filemap_read()
* Removed .page_mkwrite() cleanup from the series for now

Changes since v2:
* Added documentation and comments regarding lock ordering and how the lock is
  supposed to be used
* Added conversions of ext2, xfs, zonefs
* Added patch removing i_mapping_sem protection from .page_mkwrite handlers

Changes since v1:
* Moved to using inode->i_mapping_sem instead of aops handler to acquire
  appropriate lock

---
Motivation:

Amir has reported [1] that ext4 has a potential issue when reads race with
hole punching, possibly exposing stale data from freed blocks or even
corrupting the filesystem when stale mapping data gets used for writeout. The
problem is that during hole punching, new page cache pages can get
instantiated, and block mappings for them looked up, in the punched range
after truncate_inode_pages() has run but before the filesystem removes blocks
from the file. In principle any filesystem implementing hole punching thus
needs a mechanism to block instantiation of page cache pages during hole
punching to avoid this race. This is further complicated by the fact that
there are multiple places that can instantiate pages in the page cache. A
regular read(2) or a page fault can do this, but fadvise(2) or madvise(2) can
also result in pages being read into the page cache through
force_page_cache_readahead().

There are a couple of ways to fix this. The first (currently implemented by
XFS) is to protect read(2) and *advise(2) calls with i_rwsem so that they are
serialized with hole punching. This is easy to do, but as a result all reads
would be serialized with writes, and thus mixed read-write workloads would
suffer heavily on ext4. Thus this series introduces mapping->invalidate_lock
and uses it when creating new pages in the page cache and looking up their
corresponding block mapping. We also replace EXT4_I(inode)->i_mmap_sem with
this new rwsem, which provides the necessary serialization with hole punching
for ext4.
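
To make the scheme concrete, here is a minimal sketch of the hole-punch side
of the serialization (illustrative only; mapping, start and end stand for the
obvious locals, and the fs-specific block freeing is elided):

	/* evict pages and free blocks under the exclusive lock */
	down_write(&mapping->invalidate_lock);
	truncate_inode_pages_range(mapping, start, end);
	/* ... update the fs block mapping and free the blocks ... */
	up_write(&mapping->invalidate_lock);

Readers (read(2), page faults, readahead) take the same lock in shared mode
around page instantiation, so they either complete before the punch starts or
see the pages gone and the block mapping already updated.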

								Honza

[1] https://lore.kernel.org/linux-fsdevel/CAOQ4uxjQNmxqmtA_VbYW0Su9rKRk2zobJmahcyeaEVOFKVQ5dw@mail.gmail.com/

Previous versions:
Link: https://lore.kernel.org/linux-fsdevel/20210208163918.7871-1-jack@suse.cz/
Link: http://lore.kernel.org/r/20210413105205.3093-1-jack@suse.cz

CC: ceph-devel@vger.kernel.org
CC: Chao Yu <yuchao0@huawei.com>
CC: Damien Le Moal <damien.lemoal@wdc.com>
CC: "Darrick J. Wong" <darrick.wong@oracle.com>
CC: Hugh Dickins <hughd@google.com>
CC: Jaegeuk Kim <jaegeuk@kernel.org>
CC: Jeff Layton <jlayton@kernel.org>
CC: Johannes Thumshirn <jth@kernel.org>
CC: linux-cifs@vger.kernel.org
CC: <linux-ext4@vger.kernel.org>
CC: linux-f2fs-devel@lists.sourceforge.net
CC: <linux-fsdevel@vger.kernel.org>
CC: <linux-mm@kvack.org>
CC: <linux-xfs@vger.kernel.org>
CC: Miklos Szeredi <miklos@szeredi.hu>
CC: Steve French <sfrench@samba.org>
CC: Ted Tso <tytso@mit.edu>

* [PATCH 01/12] mm: Fix comments mentioning i_mutex
  2021-04-23 17:29 [PATCH 0/12 v4] fs: Hole punch vs page cache filling races Jan Kara
@ 2021-04-23 17:29 ` Jan Kara
  2021-04-23 17:29 ` [PATCH 02/12] mm: Protect operations adding pages to page cache with invalidate_lock Jan Kara
                   ` (11 subsequent siblings)
  12 siblings, 0 replies; 40+ messages in thread
From: Jan Kara @ 2021-04-23 17:29 UTC
  To: linux-fsdevel
  Cc: Christoph Hellwig, Amir Goldstein, Dave Chinner, Ted Tso,
	Jan Kara, Christoph Hellwig

inode->i_mutex was replaced with inode->i_rwsem long ago. Fix comments that
still mention i_mutex.

Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jan Kara <jack@suse.cz>
---
 mm/filemap.c        | 10 +++++-----
 mm/madvise.c        |  2 +-
 mm/memory-failure.c |  2 +-
 mm/rmap.c           |  6 +++---
 mm/shmem.c          | 20 ++++++++++----------
 mm/truncate.c       |  8 ++++----
 6 files changed, 24 insertions(+), 24 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index 43700480d897..bd7c50e060a9 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -76,7 +76,7 @@
  *      ->swap_lock		(exclusive_swap_page, others)
  *        ->i_pages lock
  *
- *  ->i_mutex
+ *  ->i_rwsem
  *    ->i_mmap_rwsem		(truncate->unmap_mapping_range)
  *
  *  ->mmap_lock
@@ -87,7 +87,7 @@
  *  ->mmap_lock
  *    ->lock_page		(access_process_vm)
  *
- *  ->i_mutex			(generic_perform_write)
+ *  ->i_rwsem			(generic_perform_write)
  *    ->mmap_lock		(fault_in_pages_readable->do_page_fault)
  *
  *  bdi->wb.list_lock
@@ -3625,12 +3625,12 @@ EXPORT_SYMBOL(generic_perform_write);
  * modification times and calls proper subroutines depending on whether we
  * do direct IO or a standard buffered write.
  *
- * It expects i_mutex to be grabbed unless we work on a block device or similar
+ * It expects i_rwsem to be grabbed unless we work on a block device or similar
  * object which does not need locking at all.
  *
  * This function does *not* take care of syncing data in case of O_SYNC write.
  * A caller has to handle it. This is mainly due to the fact that we want to
- * avoid syncing under i_mutex.
+ * avoid syncing under i_rwsem.
  *
  * Return:
  * * number of bytes written, even for truncated writes
@@ -3718,7 +3718,7 @@ EXPORT_SYMBOL(__generic_file_write_iter);
  *
  * This is a wrapper around __generic_file_write_iter() to be used by most
  * filesystems. It takes care of syncing the file in case of O_SYNC file
- * and acquires i_mutex as needed.
+ * and acquires i_rwsem as needed.
  * Return:
  * * negative error code if no data has been written at all of
  *   vfs_fsync_range() failed for a synchronous write
diff --git a/mm/madvise.c b/mm/madvise.c
index 01fef79ac761..bd28d693e0ad 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -853,7 +853,7 @@ static long madvise_remove(struct vm_area_struct *vma,
 			+ ((loff_t)vma->vm_pgoff << PAGE_SHIFT);
 
 	/*
-	 * Filesystem's fallocate may need to take i_mutex.  We need to
+	 * Filesystem's fallocate may need to take i_rwsem.  We need to
 	 * explicitly grab a reference because the vma (and hence the
 	 * vma's reference to the file) can go away as soon as we drop
 	 * mmap_lock.
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 24210c9bd843..fca50b554122 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -704,7 +704,7 @@ static int me_pagecache_clean(struct page *p, unsigned long pfn)
 	/*
 	 * Truncation is a bit tricky. Enable it per file system for now.
 	 *
-	 * Open: to take i_mutex or not for this? Right now we don't.
+	 * Open: to take i_rwsem or not for this? Right now we don't.
 	 */
 	return truncate_error_page(p, pfn, mapping);
 }
diff --git a/mm/rmap.c b/mm/rmap.c
index b0fc27e77d6d..dba8cb8a5578 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -20,9 +20,9 @@
 /*
  * Lock ordering in mm:
  *
- * inode->i_mutex	(while writing or truncating, not reading or faulting)
+ * inode->i_rwsem	(while writing or truncating, not reading or faulting)
  *   mm->mmap_lock
- *     page->flags PG_locked (lock_page)   * (see huegtlbfs below)
+ *     page->flags PG_locked (lock_page)   * (see hugetlbfs below)
  *       hugetlbfs_i_mmap_rwsem_key (in huge_pmd_share)
  *         mapping->i_mmap_rwsem
  *           hugetlb_fault_mutex (hugetlbfs specific page fault mutex)
@@ -41,7 +41,7 @@
  *                             in arch-dependent flush_dcache_mmap_lock,
  *                             within bdi.wb->list_lock in __sync_single_inode)
  *
- * anon_vma->rwsem,mapping->i_mutex      (memory_failure, collect_procs_anon)
+ * anon_vma->rwsem,mapping->i_mmap_rwsem   (memory_failure, collect_procs_anon)
  *   ->tasklist_lock
  *     pte map lock
  *
diff --git a/mm/shmem.c b/mm/shmem.c
index b2db4ed0fbc7..55b2888db542 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -96,7 +96,7 @@ static struct vfsmount *shm_mnt;
 
 /*
  * shmem_fallocate communicates with shmem_fault or shmem_writepage via
- * inode->i_private (with i_mutex making sure that it has only one user at
+ * inode->i_private (with i_rwsem making sure that it has only one user at
  * a time): we would prefer not to enlarge the shmem inode just for that.
  */
 struct shmem_falloc {
@@ -774,7 +774,7 @@ static int shmem_free_swap(struct address_space *mapping,
  * Determine (in bytes) how many of the shmem object's pages mapped by the
  * given offsets are swapped out.
  *
- * This is safe to call without i_mutex or the i_pages lock thanks to RCU,
+ * This is safe to call without i_rwsem or the i_pages lock thanks to RCU,
  * as long as the inode doesn't go away and racy results are not a problem.
  */
 unsigned long shmem_partial_swap_usage(struct address_space *mapping,
@@ -806,7 +806,7 @@ unsigned long shmem_partial_swap_usage(struct address_space *mapping,
  * Determine (in bytes) how many of the shmem object's pages mapped by the
  * given vma is swapped out.
  *
- * This is safe to call without i_mutex or the i_pages lock thanks to RCU,
+ * This is safe to call without i_rwsem or the i_pages lock thanks to RCU,
  * as long as the inode doesn't go away and racy results are not a problem.
  */
 unsigned long shmem_swap_usage(struct vm_area_struct *vma)
@@ -1069,7 +1069,7 @@ static int shmem_setattr(struct user_namespace *mnt_userns,
 		loff_t oldsize = inode->i_size;
 		loff_t newsize = attr->ia_size;
 
-		/* protected by i_mutex */
+		/* protected by i_rwsem */
 		if ((newsize < oldsize && (info->seals & F_SEAL_SHRINK)) ||
 		    (newsize > oldsize && (info->seals & F_SEAL_GROW)))
 			return -EPERM;
@@ -2049,7 +2049,7 @@ static vm_fault_t shmem_fault(struct vm_fault *vmf)
 	/*
 	 * Trinity finds that probing a hole which tmpfs is punching can
 	 * prevent the hole-punch from ever completing: which in turn
-	 * locks writers out with its hold on i_mutex.  So refrain from
+	 * locks writers out with its hold on i_rwsem.  So refrain from
 	 * faulting pages into the hole while it's being punched.  Although
 	 * shmem_undo_range() does remove the additions, it may be unable to
 	 * keep up, as each new page needs its own unmap_mapping_range() call,
@@ -2060,7 +2060,7 @@ static vm_fault_t shmem_fault(struct vm_fault *vmf)
 	 * we just need to make racing faults a rare case.
 	 *
 	 * The implementation below would be much simpler if we just used a
-	 * standard mutex or completion: but we cannot take i_mutex in fault,
+	 * standard mutex or completion: but we cannot take i_rwsem in fault,
 	 * and bloating every shmem inode for this unlikely case would be sad.
 	 */
 	if (unlikely(inode->i_private)) {
@@ -2518,7 +2518,7 @@ shmem_write_begin(struct file *file, struct address_space *mapping,
 	struct shmem_inode_info *info = SHMEM_I(inode);
 	pgoff_t index = pos >> PAGE_SHIFT;
 
-	/* i_mutex is held by caller */
+	/* i_rwsem is held by caller */
 	if (unlikely(info->seals & (F_SEAL_GROW |
 				   F_SEAL_WRITE | F_SEAL_FUTURE_WRITE))) {
 		if (info->seals & (F_SEAL_WRITE | F_SEAL_FUTURE_WRITE))
@@ -2618,7 +2618,7 @@ static ssize_t shmem_file_read_iter(struct kiocb *iocb, struct iov_iter *to)
 
 		/*
 		 * We must evaluate after, since reads (unlike writes)
-		 * are called without i_mutex protection against truncate
+		 * are called without i_rwsem protection against truncate
 		 */
 		nr = PAGE_SIZE;
 		i_size = i_size_read(inode);
@@ -2688,7 +2688,7 @@ static loff_t shmem_file_llseek(struct file *file, loff_t offset, int whence)
 		return -ENXIO;
 
 	inode_lock(inode);
-	/* We're holding i_mutex so we can access i_size directly */
+	/* We're holding i_rwsem so we can access i_size directly */
 	offset = mapping_seek_hole_data(mapping, offset, inode->i_size, whence);
 	if (offset >= 0)
 		offset = vfs_setpos(file, offset, MAX_LFS_FILESIZE);
@@ -2717,7 +2717,7 @@ static long shmem_fallocate(struct file *file, int mode, loff_t offset,
 		loff_t unmap_end = round_down(offset + len, PAGE_SIZE) - 1;
 		DECLARE_WAIT_QUEUE_HEAD_ONSTACK(shmem_falloc_waitq);
 
-		/* protected by i_mutex */
+		/* protected by i_rwsem */
 		if (info->seals & (F_SEAL_WRITE | F_SEAL_FUTURE_WRITE)) {
 			error = -EPERM;
 			goto out;
diff --git a/mm/truncate.c b/mm/truncate.c
index 455944264663..2cf71d8c3c62 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -416,7 +416,7 @@ EXPORT_SYMBOL(truncate_inode_pages_range);
  * @mapping: mapping to truncate
  * @lstart: offset from which to truncate
  *
- * Called under (and serialised by) inode->i_mutex.
+ * Called under (and serialised by) inode->i_rwsem.
  *
  * Note: When this function returns, there can be a page in the process of
  * deletion (inside __delete_from_page_cache()) in the specified range.  Thus
@@ -433,7 +433,7 @@ EXPORT_SYMBOL(truncate_inode_pages);
  * truncate_inode_pages_final - truncate *all* pages before inode dies
  * @mapping: mapping to truncate
  *
- * Called under (and serialized by) inode->i_mutex.
+ * Called under (and serialized by) inode->i_rwsem.
  *
  * Filesystems have to use this in the .evict_inode path to inform the
  * VM that this is the final truncate and the inode is going away.
@@ -766,7 +766,7 @@ EXPORT_SYMBOL(truncate_pagecache);
  * setattr function when ATTR_SIZE is passed in.
  *
  * Must be called with a lock serializing truncates and writes (generally
- * i_mutex but e.g. xfs uses a different lock) and before all filesystem
+ * i_rwsem but e.g. xfs uses a different lock) and before all filesystem
  * specific block truncation has been performed.
  */
 void truncate_setsize(struct inode *inode, loff_t newsize)
@@ -795,7 +795,7 @@ EXPORT_SYMBOL(truncate_setsize);
  *
  * The function must be called after i_size is updated so that page fault
  * coming after we unlock the page will already see the new i_size.
- * The function must be called while we still hold i_mutex - this not only
+ * The function must be called while we still hold i_rwsem - this not only
  * makes sure i_size is stable but also that userspace cannot observe new
  * i_size value before we are prepared to store mmap writes at new inode size.
  */
-- 
2.26.2


* [PATCH 02/12] mm: Protect operations adding pages to page cache with invalidate_lock
  2021-04-23 17:29 [PATCH 0/12 v4] fs: Hole punch vs page cache filling races Jan Kara
  2021-04-23 17:29 ` [PATCH 01/12] mm: Fix comments mentioning i_mutex Jan Kara
@ 2021-04-23 17:29 ` Jan Kara
  2021-04-23 18:30     ` [f2fs-dev] " Matthew Wilcox
  2021-04-23 23:04     ` [f2fs-dev] " Dave Chinner
  2021-04-23 17:29 ` [PATCH 03/12] ext4: Convert to use mapping->invalidate_lock Jan Kara
                   ` (10 subsequent siblings)
  12 siblings, 2 replies; 40+ messages in thread
From: Jan Kara @ 2021-04-23 17:29 UTC
  To: linux-fsdevel
  Cc: Christoph Hellwig, Amir Goldstein, Dave Chinner, Ted Tso,
	Jan Kara, ceph-devel, Chao Yu, Damien Le Moal, Darrick J. Wong,
	Hugh Dickins, Jaegeuk Kim, Jeff Layton, Johannes Thumshirn,
	linux-cifs, linux-ext4, linux-f2fs-devel, linux-mm, linux-xfs,
	Miklos Szeredi, Steve French

Currently, serializing operations such as page fault, read, or readahead
against hole punching is rather difficult. The basic race scheme looks like
this:

fallocate(FALLOC_FL_PUNCH_HOLE)			read / fault / ..
  truncate_inode_pages_range()
						  <create pages in page
						   cache here>
  <update fs block mapping and free blocks>

The problem is that in this way read / page fault / readahead can instantiate
pages in the page cache with potentially stale data (if blocks get quickly
reused). Avoiding this race is not simple: page locks do not work because we
want to make sure there are *no* pages in the given range; inode->i_rwsem
does not work because page faults happen under mmap_lock, which ranks below
inode->i_rwsem, and using i_rwsem for reads would also make mixed read-write
workloads suffer.

So create a new rw_semaphore in the address_space - invalidate_lock - that
protects adding of pages to the page cache for page faults / reads /
readahead.
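
The read-side usage then boils down to the following pattern (a sketch only;
see the filemap.c and readahead.c hunks below for the real code):

	/* page cache filling (read / fault / readahead) */
	down_read(&mapping->invalidate_lock);
	/* ... instantiate pages, look up block mappings, read from disk ... */
	up_read(&mapping->invalidate_lock);

while truncate / hole punch paths are expected to hold the lock exclusively
while evicting pages and freeing blocks.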

Signed-off-by: Jan Kara <jack@suse.cz>
CC: ceph-devel@vger.kernel.org
CC: Chao Yu <yuchao0@huawei.com>
CC: Damien Le Moal <damien.lemoal@wdc.com>
CC: "Darrick J. Wong" <darrick.wong@oracle.com>
CC: Hugh Dickins <hughd@google.com>
CC: Jaegeuk Kim <jaegeuk@kernel.org>
CC: Jeff Layton <jlayton@kernel.org>
CC: Johannes Thumshirn <jth@kernel.org>
CC: linux-cifs@vger.kernel.org
CC: <linux-ext4@vger.kernel.org>
CC: linux-f2fs-devel@lists.sourceforge.net
CC: <linux-fsdevel@vger.kernel.org>
CC: <linux-mm@kvack.org>
CC: <linux-xfs@vger.kernel.org>
CC: Miklos Szeredi <miklos@szeredi.hu>
CC: Steve French <sfrench@samba.org>
CC: Ted Tso <tytso@mit.edu>
---
 Documentation/filesystems/locking.rst | 39 ++++++++++++-----
 fs/inode.c                            |  3 ++
 include/linux/fs.h                    |  4 ++
 mm/filemap.c                          | 61 +++++++++++++++++++++------
 mm/readahead.c                        |  2 +
 mm/rmap.c                             | 37 ++++++++--------
 mm/truncate.c                         |  2 +-
 7 files changed, 106 insertions(+), 42 deletions(-)

diff --git a/Documentation/filesystems/locking.rst b/Documentation/filesystems/locking.rst
index b7dcc86c92a4..7cbf72862832 100644
--- a/Documentation/filesystems/locking.rst
+++ b/Documentation/filesystems/locking.rst
@@ -266,19 +266,19 @@ prototypes::
 locking rules:
 	All except set_page_dirty and freepage may block
 
-======================	======================== =========
-ops			PageLocked(page)	 i_rwsem
-======================	======================== =========
+======================	======================== =========	===============
+ops			PageLocked(page)	 i_rwsem	invalidate_lock
+======================	======================== =========	===============
 writepage:		yes, unlocks (see below)
-readpage:		yes, unlocks
+readpage:		yes, unlocks				shared
 writepages:
 set_page_dirty		no
-readahead:		yes, unlocks
-readpages:		no
+readahead:		yes, unlocks				shared
+readpages:		no					shared
 write_begin:		locks the page		 exclusive
 write_end:		yes, unlocks		 exclusive
 bmap:
-invalidatepage:		yes
+invalidatepage:		yes					exclusive
 releasepage:		yes
 freepage:		yes
 direct_IO:
@@ -373,7 +373,10 @@ keep it that way and don't breed new callers.
 ->invalidatepage() is called when the filesystem must attempt to drop
 some or all of the buffers from the page when it is being truncated. It
 returns zero on success. If ->invalidatepage is zero, the kernel uses
-block_invalidatepage() instead.
+block_invalidatepage() instead. The filesystem should exclusively acquire
+invalidate_lock before invalidating the page cache in the truncate / hole punch
+path (and thus calling into ->invalidatepage) to block races between page cache
+invalidation and page cache filling functions (fault, read, ...).
 
 ->releasepage() is called when the kernel is about to try to drop the
 buffers from the page in preparation for freeing it.  It returns zero to
@@ -567,6 +570,20 @@ in sys_read() and friends.
 the lease within the individual filesystem to record the result of the
 operation
 
+->fallocate implementation must be really careful to maintain page cache
+consistency when punching holes or performing other operations that invalidate
+page cache contents. Usually the filesystem needs to call
+truncate_inode_pages_range() to invalidate relevant range of the page cache.
+However the filesystem usually also needs to update its internal (and on disk)
+view of file offset -> disk block mapping. Until this update is finished, the
+filesystem needs to block page faults and reads from reloading now-stale page
+cache contents from the disk. VFS provides mapping->invalidate_lock for this
+and acquires it in shared mode in paths loading pages from disk
+(filemap_fault(), filemap_read(), readahead paths). The filesystem is
+responsible for taking this lock in its fallocate implementation and generally
+whenever the page cache contents needs to be invalidated because a block is
+moving from under a page.
+
 dquot_operations
 ================
 
@@ -628,9 +645,9 @@ access:		yes
 to be faulted in. The filesystem must find and return the page associated
 with the passed in "pgoff" in the vm_fault structure. If it is possible that
 the page may be truncated and/or invalidated, then the filesystem must lock
-the page, then ensure it is not already truncated (the page lock will block
-subsequent truncate), and then return with VM_FAULT_LOCKED, and the page
-locked. The VM will unlock the page.
+invalidate_lock, then ensure the page is not already truncated (invalidate_lock
+will block subsequent truncate), and then return with VM_FAULT_LOCKED, and the
+page locked. The VM will unlock the page.
 
 ->map_pages() is called when VM asks to map easy accessible pages.
 Filesystem should find and map pages associated with offsets from "start_pgoff"
diff --git a/fs/inode.c b/fs/inode.c
index a047ab306f9a..43596dd8b61e 100644
--- a/fs/inode.c
+++ b/fs/inode.c
@@ -191,6 +191,9 @@ int inode_init_always(struct super_block *sb, struct inode *inode)
 	mapping_set_gfp_mask(mapping, GFP_HIGHUSER_MOVABLE);
 	mapping->private_data = NULL;
 	mapping->writeback_index = 0;
+	init_rwsem(&mapping->invalidate_lock);
+	lockdep_set_class(&mapping->invalidate_lock,
+			  &sb->s_type->invalidate_lock_key);
 	inode->i_private = NULL;
 	inode->i_mapping = mapping;
 	INIT_HLIST_HEAD(&inode->i_dentry);	/* buggered by rcu freeing */
diff --git a/include/linux/fs.h b/include/linux/fs.h
index ec8f3ddf4a6a..3fca7bf2d0fb 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -435,6 +435,8 @@ int pagecache_write_end(struct file *, struct address_space *mapping,
  * struct address_space - Contents of a cacheable, mappable object.
  * @host: Owner, either the inode or the block_device.
  * @i_pages: Cached pages.
+ * @invalidate_lock: Guards coherency between page cache contents and
+ *   file offset->disk block mappings in the filesystem during invalidates
  * @gfp_mask: Memory allocation flags to use for allocating pages.
  * @i_mmap_writable: Number of VM_SHARED mappings.
  * @nr_thps: Number of THPs in the pagecache (non-shmem only).
@@ -453,6 +455,7 @@ int pagecache_write_end(struct file *, struct address_space *mapping,
 struct address_space {
 	struct inode		*host;
 	struct xarray		i_pages;
+	struct rw_semaphore	invalidate_lock;
 	gfp_t			gfp_mask;
 	atomic_t		i_mmap_writable;
 #ifdef CONFIG_READ_ONLY_THP_FOR_FS
@@ -2351,6 +2354,7 @@ struct file_system_type {
 
 	struct lock_class_key i_lock_key;
 	struct lock_class_key i_mutex_key;
+	struct lock_class_key invalidate_lock_key;
 	struct lock_class_key i_mutex_dir_key;
 };
 
diff --git a/mm/filemap.c b/mm/filemap.c
index bd7c50e060a9..9ea8dfb0609c 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -77,7 +77,8 @@
  *        ->i_pages lock
  *
  *  ->i_rwsem
- *    ->i_mmap_rwsem		(truncate->unmap_mapping_range)
+ *    ->invalidate_lock		(acquired by fs in truncate path)
+ *      ->i_mmap_rwsem		(truncate->unmap_mapping_range)
  *
  *  ->mmap_lock
  *    ->i_mmap_rwsem
@@ -85,7 +86,8 @@
  *        ->i_pages lock	(arch-dependent flush_dcache_mmap_lock)
  *
  *  ->mmap_lock
- *    ->lock_page		(access_process_vm)
+ *    ->invalidate_lock		(filemap_fault)
+ *      ->lock_page		(filemap_fault, access_process_vm)
  *
  *  ->i_rwsem			(generic_perform_write)
  *    ->mmap_lock		(fault_in_pages_readable->do_page_fault)
@@ -2276,20 +2278,30 @@ static int filemap_update_page(struct kiocb *iocb,
 {
 	int error;
 
+	if (iocb->ki_flags & IOCB_NOWAIT) {
+		if (!down_read_trylock(&mapping->invalidate_lock))
+			return -EAGAIN;
+	} else {
+		down_read(&mapping->invalidate_lock);
+	}
+
 	if (!trylock_page(page)) {
+		error = -EAGAIN;
 		if (iocb->ki_flags & (IOCB_NOWAIT | IOCB_NOIO))
-			return -EAGAIN;
+			goto unlock_mapping;
 		if (!(iocb->ki_flags & IOCB_WAITQ)) {
+			up_read(&mapping->invalidate_lock);
 			put_and_wait_on_page_locked(page, TASK_KILLABLE);
 			return AOP_TRUNCATED_PAGE;
 		}
 		error = __lock_page_async(page, iocb->ki_waitq);
 		if (error)
-			return error;
+			goto unlock_mapping;
 	}
 
+	error = AOP_TRUNCATED_PAGE;
 	if (!page->mapping)
-		goto truncated;
+		goto unlock;
 
 	error = 0;
 	if (filemap_range_uptodate(mapping, iocb->ki_pos, iter, page))
@@ -2300,15 +2312,13 @@ static int filemap_update_page(struct kiocb *iocb,
 		goto unlock;
 
 	error = filemap_read_page(iocb->ki_filp, mapping, page);
-	if (error == AOP_TRUNCATED_PAGE)
-		put_page(page);
-	return error;
-truncated:
-	unlock_page(page);
-	put_page(page);
-	return AOP_TRUNCATED_PAGE;
+	goto unlock_mapping;
 unlock:
 	unlock_page(page);
+unlock_mapping:
+	up_read(&mapping->invalidate_lock);
+	if (error == AOP_TRUNCATED_PAGE)
+		put_page(page);
 	return error;
 }
 
@@ -2323,6 +2333,19 @@ static int filemap_create_page(struct file *file,
 	if (!page)
 		return -ENOMEM;
 
+	/*
+	 * Protect against truncate / hole punch. Grabbing invalidate_lock here
+	 * assures we cannot instantiate and bring uptodate new pagecache pages
+	 * after evicting page cache during truncate and before actually
+	 * freeing blocks.  Note that we could release invalidate_lock after
+	 * inserting the page into page cache as the locked page would then be
+	 * enough to synchronize with hole punching. But there are code paths
+	 * such as filemap_update_page() filling in partially uptodate pages or
+	 * ->readpages() that need to hold invalidate_lock while mapping blocks
+	 * for IO so let's hold the lock here as well to keep locking rules
+	 * simple.
+	 */
+	down_read(&mapping->invalidate_lock);
 	error = add_to_page_cache_lru(page, mapping, index,
 			mapping_gfp_constraint(mapping, GFP_KERNEL));
 	if (error == -EEXIST)
@@ -2334,9 +2357,11 @@ static int filemap_create_page(struct file *file,
 	if (error)
 		goto error;
 
+	up_read(&mapping->invalidate_lock);
 	pagevec_add(pvec, page);
 	return 0;
 error:
+	up_read(&mapping->invalidate_lock);
 	put_page(page);
 	return error;
 }
@@ -2896,6 +2921,13 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
 		count_memcg_event_mm(vmf->vma->vm_mm, PGMAJFAULT);
 		ret = VM_FAULT_MAJOR;
 		fpin = do_sync_mmap_readahead(vmf);
+	}
+
+	/*
+	 * See comment in filemap_create_page() why we need invalidate_lock
+	 */
+	down_read(&mapping->invalidate_lock);
+	if (!page) {
 retry_find:
 		page = pagecache_get_page(mapping, offset,
 					  FGP_CREAT|FGP_FOR_MMAP,
@@ -2903,6 +2935,7 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
 		if (!page) {
 			if (fpin)
 				goto out_retry;
+			up_read(&mapping->invalidate_lock);
 			return VM_FAULT_OOM;
 		}
 	}
@@ -2943,9 +2976,11 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
 	if (unlikely(offset >= max_off)) {
 		unlock_page(page);
 		put_page(page);
+		up_read(&mapping->invalidate_lock);
 		return VM_FAULT_SIGBUS;
 	}
 
+	up_read(&mapping->invalidate_lock);
 	vmf->page = page;
 	return ret | VM_FAULT_LOCKED;
 
@@ -2971,6 +3006,7 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
 	if (!error || error == AOP_TRUNCATED_PAGE)
 		goto retry_find;
 
+	up_read(&mapping->invalidate_lock);
 	shrink_readahead_size_eio(ra);
 	return VM_FAULT_SIGBUS;
 
@@ -2982,6 +3018,7 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
 	 */
 	if (page)
 		put_page(page);
+	up_read(&mapping->invalidate_lock);
 	if (fpin)
 		fput(fpin);
 	return ret | VM_FAULT_RETRY;
diff --git a/mm/readahead.c b/mm/readahead.c
index c5b0457415be..37dd07b32c67 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -192,6 +192,7 @@ void page_cache_ra_unbounded(struct readahead_control *ractl,
 	 */
 	unsigned int nofs = memalloc_nofs_save();
 
+	down_read(&mapping->invalidate_lock);
 	/*
 	 * Preallocate as many pages as we will need.
 	 */
@@ -236,6 +237,7 @@ void page_cache_ra_unbounded(struct readahead_control *ractl,
 	 * will then handle the error.
 	 */
 	read_pages(ractl, &page_pool, false);
+	up_read(&mapping->invalidate_lock);
 	memalloc_nofs_restore(nofs);
 }
 EXPORT_SYMBOL_GPL(page_cache_ra_unbounded);
diff --git a/mm/rmap.c b/mm/rmap.c
index dba8cb8a5578..e4f769a4dcc8 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -22,24 +22,25 @@
  *
  * inode->i_rwsem	(while writing or truncating, not reading or faulting)
  *   mm->mmap_lock
- *     page->flags PG_locked (lock_page)   * (see hugetlbfs below)
- *       hugetlbfs_i_mmap_rwsem_key (in huge_pmd_share)
- *         mapping->i_mmap_rwsem
- *           hugetlb_fault_mutex (hugetlbfs specific page fault mutex)
- *           anon_vma->rwsem
- *             mm->page_table_lock or pte_lock
- *               swap_lock (in swap_duplicate, swap_info_get)
- *                 mmlist_lock (in mmput, drain_mmlist and others)
- *                 mapping->private_lock (in __set_page_dirty_buffers)
- *                   lock_page_memcg move_lock (in __set_page_dirty_buffers)
- *                     i_pages lock (widely used)
- *                       lruvec->lru_lock (in lock_page_lruvec_irq)
- *                 inode->i_lock (in set_page_dirty's __mark_inode_dirty)
- *                 bdi.wb->list_lock (in set_page_dirty's __mark_inode_dirty)
- *                   sb_lock (within inode_lock in fs/fs-writeback.c)
- *                   i_pages lock (widely used, in set_page_dirty,
- *                             in arch-dependent flush_dcache_mmap_lock,
- *                             within bdi.wb->list_lock in __sync_single_inode)
+ *     mapping->invalidate_lock (in filemap_fault)
+ *       page->flags PG_locked (lock_page)   * (see hugetlbfs below)
+ *         hugetlbfs_i_mmap_rwsem_key (in huge_pmd_share)
+ *           mapping->i_mmap_rwsem
+ *             hugetlb_fault_mutex (hugetlbfs specific page fault mutex)
+ *             anon_vma->rwsem
+ *               mm->page_table_lock or pte_lock
+ *                 swap_lock (in swap_duplicate, swap_info_get)
+ *                   mmlist_lock (in mmput, drain_mmlist and others)
+ *                   mapping->private_lock (in __set_page_dirty_buffers)
+ *                     lock_page_memcg move_lock (in __set_page_dirty_buffers)
+ *                       i_pages lock (widely used)
+ *                         lruvec->lru_lock (in lock_page_lruvec_irq)
+ *                   inode->i_lock (in set_page_dirty's __mark_inode_dirty)
+ *                   bdi.wb->list_lock (in set_page_dirty's __mark_inode_dirty)
+ *                     sb_lock (within inode_lock in fs/fs-writeback.c)
+ *                     i_pages lock (widely used, in set_page_dirty,
+ *                               in arch-dependent flush_dcache_mmap_lock,
+ *                               within bdi.wb->list_lock in __sync_single_inode)
  *
  * anon_vma->rwsem,mapping->i_mmap_rwsem   (memory_failure, collect_procs_anon)
  *   ->tasklist_lock
diff --git a/mm/truncate.c b/mm/truncate.c
index 2cf71d8c3c62..464ad70a081f 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -416,7 +416,7 @@ EXPORT_SYMBOL(truncate_inode_pages_range);
  * @mapping: mapping to truncate
  * @lstart: offset from which to truncate
  *
- * Called under (and serialised by) inode->i_rwsem.
+ * Called under (and serialised by) inode->i_rwsem and mapping->invalidate_lock.
  *
  * Note: When this function returns, there can be a page in the process of
  * deletion (inside __delete_from_page_cache()) in the specified range.  Thus
-- 
2.26.2


* [PATCH 03/12] ext4: Convert to use mapping->invalidate_lock
  2021-04-23 17:29 [PATCH 0/12 v4] fs: Hole punch vs page cache filling races Jan Kara
  2021-04-23 17:29 ` [PATCH 01/12] mm: Fix comments mentioning i_mutex Jan Kara
  2021-04-23 17:29 ` [PATCH 02/12] mm: Protect operations adding pages to page cache with invalidate_lock Jan Kara
@ 2021-04-23 17:29 ` Jan Kara
  2021-04-23 17:29 ` [PATCH 04/12] ext2: Convert to using invalidate_lock Jan Kara
                   ` (9 subsequent siblings)
  12 siblings, 0 replies; 40+ messages in thread
From: Jan Kara @ 2021-04-23 17:29 UTC
  To: linux-fsdevel
  Cc: Christoph Hellwig, Amir Goldstein, Dave Chinner, Ted Tso,
	Jan Kara, linux-ext4

Convert ext4 to use mapping->invalidate_lock instead of its private
EXT4_I(inode)->i_mmap_sem. This is mostly a search-and-replace conversion.
It also fixes a long-standing race between the hole punching and read(2) /
readahead(2) paths that can lead to stale page cache contents.

CC: <linux-ext4@vger.kernel.org>
CC: Ted Tso <tytso@mit.edu>
Signed-off-by: Jan Kara <jack@suse.cz>
---
 fs/ext4/ext4.h     | 10 ----------
 fs/ext4/extents.c  | 25 +++++++++++++-----------
 fs/ext4/file.c     | 13 +++++++------
 fs/ext4/inode.c    | 47 +++++++++++++++++-----------------------------
 fs/ext4/ioctl.c    |  4 ++--
 fs/ext4/super.c    | 13 +++++--------
 fs/ext4/truncate.h |  8 +++++---
 7 files changed, 50 insertions(+), 70 deletions(-)

diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
index 826a56e3bbd2..2ae365458dca 100644
--- a/fs/ext4/ext4.h
+++ b/fs/ext4/ext4.h
@@ -1081,15 +1081,6 @@ struct ext4_inode_info {
 	 * by other means, so we have i_data_sem.
 	 */
 	struct rw_semaphore i_data_sem;
-	/*
-	 * i_mmap_sem is for serializing page faults with truncate / punch hole
-	 * operations. We have to make sure that new page cannot be faulted in
-	 * a section of the inode that is being punched. We cannot easily use
-	 * i_data_sem for this since we need protection for the whole punch
-	 * operation and i_data_sem ranks below transaction start so we have
-	 * to occasionally drop it.
-	 */
-	struct rw_semaphore i_mmap_sem;
 	struct inode vfs_inode;
 	struct jbd2_inode *jinode;
 
@@ -2908,7 +2899,6 @@ extern int ext4_chunk_trans_blocks(struct inode *, int nrblocks);
 extern int ext4_zero_partial_blocks(handle_t *handle, struct inode *inode,
 			     loff_t lstart, loff_t lend);
 extern vm_fault_t ext4_page_mkwrite(struct vm_fault *vmf);
-extern vm_fault_t ext4_filemap_fault(struct vm_fault *vmf);
 extern qsize_t *ext4_get_reserved_space(struct inode *inode);
 extern int ext4_get_projid(struct inode *inode, kprojid_t *projid);
 extern void ext4_da_release_space(struct inode *inode, int to_free);
diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
index 77c84d6f1af6..8bb6b84c8a84 100644
--- a/fs/ext4/extents.c
+++ b/fs/ext4/extents.c
@@ -4467,6 +4467,7 @@ static long ext4_zero_range(struct file *file, loff_t offset,
 			    loff_t len, int mode)
 {
 	struct inode *inode = file_inode(file);
+	struct address_space *mapping = file->f_mapping;
 	handle_t *handle = NULL;
 	unsigned int max_blocks;
 	loff_t new_size = 0;
@@ -4553,17 +4554,17 @@ static long ext4_zero_range(struct file *file, loff_t offset,
 		 * Prevent page faults from reinstantiating pages we have
 		 * released from page cache.
 		 */
-		down_write(&EXT4_I(inode)->i_mmap_sem);
+		down_write(&mapping->invalidate_lock);
 
 		ret = ext4_break_layouts(inode);
 		if (ret) {
-			up_write(&EXT4_I(inode)->i_mmap_sem);
+			up_write(&mapping->invalidate_lock);
 			goto out_mutex;
 		}
 
 		ret = ext4_update_disksize_before_punch(inode, offset, len);
 		if (ret) {
-			up_write(&EXT4_I(inode)->i_mmap_sem);
+			up_write(&mapping->invalidate_lock);
 			goto out_mutex;
 		}
 		/* Now release the pages and zero block aligned part of pages */
@@ -4572,7 +4573,7 @@ static long ext4_zero_range(struct file *file, loff_t offset,
 
 		ret = ext4_alloc_file_blocks(file, lblk, max_blocks, new_size,
 					     flags);
-		up_write(&EXT4_I(inode)->i_mmap_sem);
+		up_write(&mapping->invalidate_lock);
 		if (ret)
 			goto out_mutex;
 	}
@@ -5214,6 +5215,7 @@ ext4_ext_shift_extents(struct inode *inode, handle_t *handle,
 static int ext4_collapse_range(struct inode *inode, loff_t offset, loff_t len)
 {
 	struct super_block *sb = inode->i_sb;
+	struct address_space *mapping = inode->i_mapping;
 	ext4_lblk_t punch_start, punch_stop;
 	handle_t *handle;
 	unsigned int credits;
@@ -5267,7 +5269,7 @@ static int ext4_collapse_range(struct inode *inode, loff_t offset, loff_t len)
 	 * Prevent page faults from reinstantiating pages we have released from
 	 * page cache.
 	 */
-	down_write(&EXT4_I(inode)->i_mmap_sem);
+	down_write(&mapping->invalidate_lock);
 
 	ret = ext4_break_layouts(inode);
 	if (ret)
@@ -5282,15 +5284,15 @@ static int ext4_collapse_range(struct inode *inode, loff_t offset, loff_t len)
 	 * Write tail of the last page before removed range since it will get
 	 * removed from the page cache below.
 	 */
-	ret = filemap_write_and_wait_range(inode->i_mapping, ioffset, offset);
+	ret = filemap_write_and_wait_range(mapping, ioffset, offset);
 	if (ret)
 		goto out_mmap;
 	/*
 	 * Write data that will be shifted to preserve them when discarding
 	 * page cache below. We are also protected from pages becoming dirty
-	 * by i_mmap_sem.
+	 * by i_rwsem and invalidate_lock.
 	 */
-	ret = filemap_write_and_wait_range(inode->i_mapping, offset + len,
+	ret = filemap_write_and_wait_range(mapping, offset + len,
 					   LLONG_MAX);
 	if (ret)
 		goto out_mmap;
@@ -5343,7 +5345,7 @@ static int ext4_collapse_range(struct inode *inode, loff_t offset, loff_t len)
 	ext4_journal_stop(handle);
 	ext4_fc_stop_ineligible(sb);
 out_mmap:
-	up_write(&EXT4_I(inode)->i_mmap_sem);
+	up_write(&mapping->invalidate_lock);
 out_mutex:
 	inode_unlock(inode);
 	return ret;
@@ -5360,6 +5362,7 @@ static int ext4_collapse_range(struct inode *inode, loff_t offset, loff_t len)
 static int ext4_insert_range(struct inode *inode, loff_t offset, loff_t len)
 {
 	struct super_block *sb = inode->i_sb;
+	struct address_space *mapping = inode->i_mapping;
 	handle_t *handle;
 	struct ext4_ext_path *path;
 	struct ext4_extent *extent;
@@ -5418,7 +5421,7 @@ static int ext4_insert_range(struct inode *inode, loff_t offset, loff_t len)
 	 * Prevent page faults from reinstantiating pages we have released from
 	 * page cache.
 	 */
-	down_write(&EXT4_I(inode)->i_mmap_sem);
+	down_write(&mapping->invalidate_lock);
 
 	ret = ext4_break_layouts(inode);
 	if (ret)
@@ -5519,7 +5522,7 @@ static int ext4_insert_range(struct inode *inode, loff_t offset, loff_t len)
 	ext4_journal_stop(handle);
 	ext4_fc_stop_ineligible(sb);
 out_mmap:
-	up_write(&EXT4_I(inode)->i_mmap_sem);
+	up_write(&mapping->invalidate_lock);
 out_mutex:
 	inode_unlock(inode);
 	return ret;
diff --git a/fs/ext4/file.c b/fs/ext4/file.c
index 194f5d00fa32..61fa787138d8 100644
--- a/fs/ext4/file.c
+++ b/fs/ext4/file.c
@@ -687,22 +687,23 @@ static vm_fault_t ext4_dax_huge_fault(struct vm_fault *vmf,
 	 */
 	bool write = (vmf->flags & FAULT_FLAG_WRITE) &&
 		(vmf->vma->vm_flags & VM_SHARED);
+	struct address_space *mapping = vmf->vma->vm_file->f_mapping;
 	pfn_t pfn;
 
 	if (write) {
 		sb_start_pagefault(sb);
 		file_update_time(vmf->vma->vm_file);
-		down_read(&EXT4_I(inode)->i_mmap_sem);
+		down_read(&mapping->invalidate_lock);
 retry:
 		handle = ext4_journal_start_sb(sb, EXT4_HT_WRITE_PAGE,
 					       EXT4_DATA_TRANS_BLOCKS(sb));
 		if (IS_ERR(handle)) {
-			up_read(&EXT4_I(inode)->i_mmap_sem);
+			up_read(&mapping->invalidate_lock);
 			sb_end_pagefault(sb);
 			return VM_FAULT_SIGBUS;
 		}
 	} else {
-		down_read(&EXT4_I(inode)->i_mmap_sem);
+		down_read(&mapping->invalidate_lock);
 	}
 	result = dax_iomap_fault(vmf, pe_size, &pfn, &error, &ext4_iomap_ops);
 	if (write) {
@@ -714,10 +715,10 @@ static vm_fault_t ext4_dax_huge_fault(struct vm_fault *vmf,
 		/* Handling synchronous page fault? */
 		if (result & VM_FAULT_NEEDDSYNC)
 			result = dax_finish_sync_fault(vmf, pe_size, pfn);
-		up_read(&EXT4_I(inode)->i_mmap_sem);
+		up_read(&mapping->invalidate_lock);
 		sb_end_pagefault(sb);
 	} else {
-		up_read(&EXT4_I(inode)->i_mmap_sem);
+		up_read(&mapping->invalidate_lock);
 	}
 
 	return result;
@@ -739,7 +740,7 @@ static const struct vm_operations_struct ext4_dax_vm_ops = {
 #endif
 
 static const struct vm_operations_struct ext4_file_vm_ops = {
-	.fault		= ext4_filemap_fault,
+	.fault		= filemap_fault,
 	.map_pages	= filemap_map_pages,
 	.page_mkwrite   = ext4_page_mkwrite,
 };
diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index 0948a43f1b3d..62020bff7096 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -3952,20 +3952,19 @@ int ext4_update_disksize_before_punch(struct inode *inode, loff_t offset,
 	return ret;
 }
 
-static void ext4_wait_dax_page(struct ext4_inode_info *ei)
+static void ext4_wait_dax_page(struct inode *inode)
 {
-	up_write(&ei->i_mmap_sem);
+	up_write(&inode->i_mapping->invalidate_lock);
 	schedule();
-	down_write(&ei->i_mmap_sem);
+	down_write(&inode->i_mapping->invalidate_lock);
 }
 
 int ext4_break_layouts(struct inode *inode)
 {
-	struct ext4_inode_info *ei = EXT4_I(inode);
 	struct page *page;
 	int error;
 
-	if (WARN_ON_ONCE(!rwsem_is_locked(&ei->i_mmap_sem)))
+	if (WARN_ON_ONCE(!rwsem_is_locked(&inode->i_mapping->invalidate_lock)))
 		return -EINVAL;
 
 	do {
@@ -3976,7 +3975,7 @@ int ext4_break_layouts(struct inode *inode)
 		error = ___wait_var_event(&page->_refcount,
 				atomic_read(&page->_refcount) == 1,
 				TASK_INTERRUPTIBLE, 0, 0,
-				ext4_wait_dax_page(ei));
+				ext4_wait_dax_page(inode));
 	} while (error == 0);
 
 	return error;
@@ -4007,9 +4006,9 @@ int ext4_punch_hole(struct inode *inode, loff_t offset, loff_t length)
 
 	ext4_clear_inode_state(inode, EXT4_STATE_MAY_INLINE_DATA);
 	if (ext4_has_inline_data(inode)) {
-		down_write(&EXT4_I(inode)->i_mmap_sem);
+		down_write(&mapping->invalidate_lock);
 		ret = ext4_convert_inline_data(inode);
-		up_write(&EXT4_I(inode)->i_mmap_sem);
+		up_write(&mapping->invalidate_lock);
 		if (ret)
 			return ret;
 	}
@@ -4060,7 +4059,7 @@ int ext4_punch_hole(struct inode *inode, loff_t offset, loff_t length)
 	 * Prevent page faults from reinstantiating pages we have released from
 	 * page cache.
 	 */
-	down_write(&EXT4_I(inode)->i_mmap_sem);
+	down_write(&mapping->invalidate_lock);
 
 	ret = ext4_break_layouts(inode);
 	if (ret)
@@ -4133,7 +4132,7 @@ int ext4_punch_hole(struct inode *inode, loff_t offset, loff_t length)
 out_stop:
 	ext4_journal_stop(handle);
 out_dio:
-	up_write(&EXT4_I(inode)->i_mmap_sem);
+	up_write(&mapping->invalidate_lock);
 out_mutex:
 	inode_unlock(inode);
 	return ret;
@@ -5428,11 +5427,11 @@ int ext4_setattr(struct user_namespace *mnt_userns, struct dentry *dentry,
 			inode_dio_wait(inode);
 		}
 
-		down_write(&EXT4_I(inode)->i_mmap_sem);
+		down_write(&inode->i_mapping->invalidate_lock);
 
 		rc = ext4_break_layouts(inode);
 		if (rc) {
-			up_write(&EXT4_I(inode)->i_mmap_sem);
+			up_write(&inode->i_mapping->invalidate_lock);
 			goto err_out;
 		}
 
@@ -5508,7 +5507,7 @@ int ext4_setattr(struct user_namespace *mnt_userns, struct dentry *dentry,
 				error = rc;
 		}
 out_mmap_sem:
-		up_write(&EXT4_I(inode)->i_mmap_sem);
+		up_write(&inode->i_mapping->invalidate_lock);
 	}
 
 	if (!error) {
@@ -5985,10 +5984,10 @@ int ext4_change_inode_journal_flag(struct inode *inode, int val)
 	 * data (and journalled aops don't know how to handle these cases).
 	 */
 	if (val) {
-		down_write(&EXT4_I(inode)->i_mmap_sem);
+		down_write(&inode->i_mapping->invalidate_lock);
 		err = filemap_write_and_wait(inode->i_mapping);
 		if (err < 0) {
-			up_write(&EXT4_I(inode)->i_mmap_sem);
+			up_write(&inode->i_mapping->invalidate_lock);
 			return err;
 		}
 	}
@@ -6021,7 +6020,7 @@ int ext4_change_inode_journal_flag(struct inode *inode, int val)
 	percpu_up_write(&sbi->s_writepages_rwsem);
 
 	if (val)
-		up_write(&EXT4_I(inode)->i_mmap_sem);
+		up_write(&inode->i_mapping->invalidate_lock);
 
 	/* Finally we can mark the inode as dirty. */
 
@@ -6065,7 +6064,7 @@ vm_fault_t ext4_page_mkwrite(struct vm_fault *vmf)
 	sb_start_pagefault(inode->i_sb);
 	file_update_time(vma->vm_file);
 
-	down_read(&EXT4_I(inode)->i_mmap_sem);
+	down_read(&mapping->invalidate_lock);
 
 	err = ext4_convert_inline_data(inode);
 	if (err)
@@ -6178,7 +6177,7 @@ vm_fault_t ext4_page_mkwrite(struct vm_fault *vmf)
 out_ret:
 	ret = block_page_mkwrite_return(err);
 out:
-	up_read(&EXT4_I(inode)->i_mmap_sem);
+	up_read(&mapping->invalidate_lock);
 	sb_end_pagefault(inode->i_sb);
 	return ret;
 out_error:
@@ -6186,15 +6185,3 @@ vm_fault_t ext4_page_mkwrite(struct vm_fault *vmf)
 	ext4_journal_stop(handle);
 	goto out;
 }
-
-vm_fault_t ext4_filemap_fault(struct vm_fault *vmf)
-{
-	struct inode *inode = file_inode(vmf->vma->vm_file);
-	vm_fault_t ret;
-
-	down_read(&EXT4_I(inode)->i_mmap_sem);
-	ret = filemap_fault(vmf);
-	up_read(&EXT4_I(inode)->i_mmap_sem);
-
-	return ret;
-}
diff --git a/fs/ext4/ioctl.c b/fs/ext4/ioctl.c
index a2cf35066f46..ec4e4350e2b0 100644
--- a/fs/ext4/ioctl.c
+++ b/fs/ext4/ioctl.c
@@ -147,7 +147,7 @@ static long swap_inode_boot_loader(struct super_block *sb,
 		goto journal_err_out;
 	}
 
-	down_write(&EXT4_I(inode)->i_mmap_sem);
+	down_write(&inode->i_mapping->invalidate_lock);
 	err = filemap_write_and_wait(inode->i_mapping);
 	if (err)
 		goto err_out;
@@ -255,7 +255,7 @@ static long swap_inode_boot_loader(struct super_block *sb,
 	ext4_double_up_write_data_sem(inode, inode_bl);
 
 err_out:
-	up_write(&EXT4_I(inode)->i_mmap_sem);
+	up_write(&inode->i_mapping->invalidate_lock);
 journal_err_out:
 	unlock_two_nondirectories(inode, inode_bl);
 	iput(inode_bl);
diff --git a/fs/ext4/super.c b/fs/ext4/super.c
index b9693680463a..0525a19fd39d 100644
--- a/fs/ext4/super.c
+++ b/fs/ext4/super.c
@@ -90,12 +90,9 @@ static struct inode *ext4_get_journal_inode(struct super_block *sb,
 /*
  * Lock ordering
  *
- * Note the difference between i_mmap_sem (EXT4_I(inode)->i_mmap_sem) and
- * i_mmap_rwsem (inode->i_mmap_rwsem)!
- *
  * page fault path:
- * mmap_lock -> sb_start_pagefault -> i_mmap_sem (r) -> transaction start ->
- *   page lock -> i_data_sem (rw)
+ * mmap_lock -> sb_start_pagefault -> invalidate_lock (r) -> transaction start
+ *   -> page lock -> i_data_sem (rw)
  *
  * buffered write path:
  * sb_start_write -> i_mutex -> mmap_lock
@@ -103,8 +100,9 @@ static struct inode *ext4_get_journal_inode(struct super_block *sb,
  *   i_data_sem (rw)
  *
  * truncate:
- * sb_start_write -> i_mutex -> i_mmap_sem (w) -> i_mmap_rwsem (w) -> page lock
- * sb_start_write -> i_mutex -> i_mmap_sem (w) -> transaction start ->
+ * sb_start_write -> i_mutex -> invalidate_lock (w) -> i_mmap_rwsem (w) ->
+ *   page lock
+ * sb_start_write -> i_mutex -> invalidate_lock (w) -> transaction start ->
  *   i_data_sem (rw)
  *
  * direct IO:
@@ -1349,7 +1347,6 @@ static void init_once(void *foo)
 	INIT_LIST_HEAD(&ei->i_orphan);
 	init_rwsem(&ei->xattr_sem);
 	init_rwsem(&ei->i_data_sem);
-	init_rwsem(&ei->i_mmap_sem);
 	inode_init_once(&ei->vfs_inode);
 	ext4_fc_init_inode(&ei->vfs_inode);
 }
diff --git a/fs/ext4/truncate.h b/fs/ext4/truncate.h
index bcbe3668c1d4..b7242e08c9dd 100644
--- a/fs/ext4/truncate.h
+++ b/fs/ext4/truncate.h
@@ -11,14 +11,16 @@
  */
 static inline void ext4_truncate_failed_write(struct inode *inode)
 {
+	struct address_space *mapping = inode->i_mapping;
+
 	/*
 	 * We don't need to call ext4_break_layouts() because the blocks we
 	 * are truncating were never visible to userspace.
 	 */
-	down_write(&EXT4_I(inode)->i_mmap_sem);
-	truncate_inode_pages(inode->i_mapping, inode->i_size);
+	down_write(&mapping->invalidate_lock);
+	truncate_inode_pages(mapping, inode->i_size);
 	ext4_truncate(inode);
-	up_write(&EXT4_I(inode)->i_mmap_sem);
+	up_write(&mapping->invalidate_lock);
 }
 
 /*
-- 
2.26.2


* [PATCH 04/12] ext2: Convert to using invalidate_lock
  2021-04-23 17:29 [PATCH 0/12 v4] fs: Hole punch vs page cache filling races Jan Kara
                   ` (2 preceding siblings ...)
  2021-04-23 17:29 ` [PATCH 03/12] ext4: Convert to use mapping->invalidate_lock Jan Kara
@ 2021-04-23 17:29 ` Jan Kara
  2021-04-23 17:29 ` [PATCH 05/12] xfs: Convert to use invalidate_lock Jan Kara
                   ` (8 subsequent siblings)
  12 siblings, 0 replies; 40+ messages in thread
From: Jan Kara @ 2021-04-23 17:29 UTC
  To: linux-fsdevel
  Cc: Christoph Hellwig, Amir Goldstein, Dave Chinner, Ted Tso,
	Jan Kara, linux-ext4

Ext2 has its private dax_sem, used for synchronizing page faults with
truncation. Use mapping->invalidate_lock instead, as it is meant for exactly
this purpose.

CC: <linux-ext4@vger.kernel.org>
Signed-off-by: Jan Kara <jack@suse.cz>
---
 fs/ext2/ext2.h  | 11 -----------
 fs/ext2/file.c  |  7 +++----
 fs/ext2/inode.c | 12 ++++++------
 fs/ext2/super.c |  3 ---
 4 files changed, 9 insertions(+), 24 deletions(-)

diff --git a/fs/ext2/ext2.h b/fs/ext2/ext2.h
index 3309fb2d327a..96d5ea5df78b 100644
--- a/fs/ext2/ext2.h
+++ b/fs/ext2/ext2.h
@@ -671,9 +671,6 @@ struct ext2_inode_info {
 	struct rw_semaphore xattr_sem;
 #endif
 	rwlock_t i_meta_lock;
-#ifdef CONFIG_FS_DAX
-	struct rw_semaphore dax_sem;
-#endif
 
 	/*
 	 * truncate_mutex is for serialising ext2_truncate() against
@@ -689,14 +686,6 @@ struct ext2_inode_info {
 #endif
 };
 
-#ifdef CONFIG_FS_DAX
-#define dax_sem_down_write(ext2_inode)	down_write(&(ext2_inode)->dax_sem)
-#define dax_sem_up_write(ext2_inode)	up_write(&(ext2_inode)->dax_sem)
-#else
-#define dax_sem_down_write(ext2_inode)
-#define dax_sem_up_write(ext2_inode)
-#endif
-
 /*
  * Inode dynamic state flags
  */
diff --git a/fs/ext2/file.c b/fs/ext2/file.c
index 96044f5dbc0e..9d4870df95c4 100644
--- a/fs/ext2/file.c
+++ b/fs/ext2/file.c
@@ -81,7 +81,7 @@ static ssize_t ext2_dax_write_iter(struct kiocb *iocb, struct iov_iter *from)
  *
  * mmap_lock (MM)
  *   sb_start_pagefault (vfs, freeze)
- *     ext2_inode_info->dax_sem
+ *     address_space->invalidate_lock
  *       address_space->i_mmap_rwsem or page_lock (mutually exclusive in DAX)
  *         ext2_inode_info->truncate_mutex
  *
@@ -91,7 +91,6 @@ static ssize_t ext2_dax_write_iter(struct kiocb *iocb, struct iov_iter *from)
 static vm_fault_t ext2_dax_fault(struct vm_fault *vmf)
 {
 	struct inode *inode = file_inode(vmf->vma->vm_file);
-	struct ext2_inode_info *ei = EXT2_I(inode);
 	vm_fault_t ret;
 	bool write = (vmf->flags & FAULT_FLAG_WRITE) &&
 		(vmf->vma->vm_flags & VM_SHARED);
@@ -100,11 +99,11 @@ static vm_fault_t ext2_dax_fault(struct vm_fault *vmf)
 		sb_start_pagefault(inode->i_sb);
 		file_update_time(vmf->vma->vm_file);
 	}
-	down_read(&ei->dax_sem);
+	down_read(&inode->i_mapping->invalidate_lock);
 
 	ret = dax_iomap_fault(vmf, PE_SIZE_PTE, NULL, NULL, &ext2_iomap_ops);
 
-	up_read(&ei->dax_sem);
+	up_read(&inode->i_mapping->invalidate_lock);
 	if (write)
 		sb_end_pagefault(inode->i_sb);
 	return ret;
diff --git a/fs/ext2/inode.c b/fs/ext2/inode.c
index 68178b2234bd..e843be0ae53c 100644
--- a/fs/ext2/inode.c
+++ b/fs/ext2/inode.c
@@ -1175,7 +1175,7 @@ static void ext2_free_branches(struct inode *inode, __le32 *p, __le32 *q, int de
 		ext2_free_data(inode, p, q);
 }
 
-/* dax_sem must be held when calling this function */
+/* mapping->invalidate_lock must be held when calling this function */
 static void __ext2_truncate_blocks(struct inode *inode, loff_t offset)
 {
 	__le32 *i_data = EXT2_I(inode)->i_data;
@@ -1192,7 +1192,7 @@ static void __ext2_truncate_blocks(struct inode *inode, loff_t offset)
 	iblock = (offset + blocksize-1) >> EXT2_BLOCK_SIZE_BITS(inode->i_sb);
 
 #ifdef CONFIG_FS_DAX
-	WARN_ON(!rwsem_is_locked(&ei->dax_sem));
+	WARN_ON(!rwsem_is_locked(&inode->i_mapping->invalidate_lock));
 #endif
 
 	n = ext2_block_to_path(inode, iblock, offsets, NULL);
@@ -1274,9 +1274,9 @@ static void ext2_truncate_blocks(struct inode *inode, loff_t offset)
 	if (ext2_inode_is_fast_symlink(inode))
 		return;
 
-	dax_sem_down_write(EXT2_I(inode));
+	down_write(&inode->i_mapping->invalidate_lock);
 	__ext2_truncate_blocks(inode, offset);
-	dax_sem_up_write(EXT2_I(inode));
+	up_write(&inode->i_mapping->invalidate_lock);
 }
 
 static int ext2_setsize(struct inode *inode, loff_t newsize)
@@ -1306,10 +1306,10 @@ static int ext2_setsize(struct inode *inode, loff_t newsize)
 	if (error)
 		return error;
 
-	dax_sem_down_write(EXT2_I(inode));
+	down_write(&inode->i_mapping->invalidate_lock);
 	truncate_setsize(inode, newsize);
 	__ext2_truncate_blocks(inode, newsize);
-	dax_sem_up_write(EXT2_I(inode));
+	up_write(&inode->i_mapping->invalidate_lock);
 
 	inode->i_mtime = inode->i_ctime = current_time(inode);
 	if (inode_needs_sync(inode)) {
diff --git a/fs/ext2/super.c b/fs/ext2/super.c
index 6c4753277916..143e72c95acf 100644
--- a/fs/ext2/super.c
+++ b/fs/ext2/super.c
@@ -206,9 +206,6 @@ static void init_once(void *foo)
 	init_rwsem(&ei->xattr_sem);
 #endif
 	mutex_init(&ei->truncate_mutex);
-#ifdef CONFIG_FS_DAX
-	init_rwsem(&ei->dax_sem);
-#endif
 	inode_init_once(&ei->vfs_inode);
 }
 
-- 
2.26.2


* [PATCH 05/12] xfs: Convert to use invalidate_lock
  2021-04-23 17:29 [PATCH 0/12 v4] fs: Hole punch vs page cache filling races Jan Kara
                   ` (3 preceding siblings ...)
  2021-04-23 17:29 ` [PATCH 04/12] ext2: Convert to using invalidate_lock Jan Kara
@ 2021-04-23 17:29 ` Jan Kara
  2021-04-23 22:39   ` Dave Chinner
  2021-04-23 17:29 ` [PATCH 06/12] zonefs: Convert to using invalidate_lock Jan Kara
                   ` (7 subsequent siblings)
  12 siblings, 1 reply; 40+ messages in thread
From: Jan Kara @ 2021-04-23 17:29 UTC
  To: linux-fsdevel
  Cc: Christoph Hellwig, Amir Goldstein, Dave Chinner, Ted Tso,
	Jan Kara, Christoph Hellwig, linux-xfs, Darrick J. Wong

Use invalidate_lock instead of the XFS-internal i_mmap_lock. The intended
purpose of invalidate_lock is exactly the same. Note that the locking in
__xfs_filemap_fault() changes slightly, as filemap_fault() already takes
invalidate_lock itself.

Reviewed-by: Christoph Hellwig <hch@lst.de>
CC: <linux-xfs@vger.kernel.org>
CC: "Darrick J. Wong" <darrick.wong@oracle.com>
Signed-off-by: Jan Kara <jack@suse.cz>
---
 fs/xfs/xfs_file.c  | 12 ++++++-----
 fs/xfs/xfs_inode.c | 52 ++++++++++++++++++++++++++--------------------
 fs/xfs/xfs_inode.h |  1 -
 fs/xfs/xfs_super.c |  2 --
 4 files changed, 36 insertions(+), 31 deletions(-)

diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c
index a007ca0711d9..2fc04ce0e9f9 100644
--- a/fs/xfs/xfs_file.c
+++ b/fs/xfs/xfs_file.c
@@ -1282,7 +1282,7 @@ xfs_file_llseek(
  *
  * mmap_lock (MM)
  *   sb_start_pagefault(vfs, freeze)
- *     i_mmaplock (XFS - truncate serialisation)
+ *     invalidate_lock (vfs - truncate serialisation)
  *       page_lock (MM)
  *         i_lock (XFS - extent map serialisation)
  */
@@ -1303,24 +1303,26 @@ __xfs_filemap_fault(
 		file_update_time(vmf->vma->vm_file);
 	}
 
-	xfs_ilock(XFS_I(inode), XFS_MMAPLOCK_SHARED);
 	if (IS_DAX(inode)) {
 		pfn_t pfn;
 
+		xfs_ilock(XFS_I(inode), XFS_MMAPLOCK_SHARED);
 		ret = dax_iomap_fault(vmf, pe_size, &pfn, NULL,
 				(write_fault && !vmf->cow_page) ?
 				 &xfs_direct_write_iomap_ops :
 				 &xfs_read_iomap_ops);
 		if (ret & VM_FAULT_NEEDDSYNC)
 			ret = dax_finish_sync_fault(vmf, pe_size, pfn);
+		xfs_iunlock(XFS_I(inode), XFS_MMAPLOCK_SHARED);
 	} else {
-		if (write_fault)
+		if (write_fault) {
+			xfs_ilock(XFS_I(inode), XFS_MMAPLOCK_SHARED);
 			ret = iomap_page_mkwrite(vmf,
 					&xfs_buffered_write_iomap_ops);
-		else
+			xfs_iunlock(XFS_I(inode), XFS_MMAPLOCK_SHARED);
+		} else
 			ret = filemap_fault(vmf);
 	}
-	xfs_iunlock(XFS_I(inode), XFS_MMAPLOCK_SHARED);
 
 	if (write_fault)
 		sb_end_pagefault(inode->i_sb);
diff --git a/fs/xfs/xfs_inode.c b/fs/xfs/xfs_inode.c
index f93370bd7b1e..ac83409d0bf3 100644
--- a/fs/xfs/xfs_inode.c
+++ b/fs/xfs/xfs_inode.c
@@ -134,7 +134,7 @@ xfs_ilock_attr_map_shared(
 
 /*
  * In addition to i_rwsem in the VFS inode, the xfs inode contains 2
- * multi-reader locks: i_mmap_lock and the i_lock.  This routine allows
+ * multi-reader locks: invalidate_lock and the i_lock.  This routine allows
  * various combinations of the locks to be obtained.
  *
  * The 3 locks should always be ordered so that the IO lock is obtained first,
@@ -142,23 +142,23 @@ xfs_ilock_attr_map_shared(
  *
  * Basic locking order:
  *
- * i_rwsem -> i_mmap_lock -> page_lock -> i_ilock
+ * i_rwsem -> invalidate_lock -> page_lock -> i_ilock
  *
  * mmap_lock locking order:
  *
  * i_rwsem -> page lock -> mmap_lock
- * mmap_lock -> i_mmap_lock -> page_lock
+ * mmap_lock -> invalidate_lock -> page_lock
  *
  * The difference in mmap_lock locking order mean that we cannot hold the
- * i_mmap_lock over syscall based read(2)/write(2) based IO. These IO paths can
- * fault in pages during copy in/out (for buffered IO) or require the mmap_lock
- * in get_user_pages() to map the user pages into the kernel address space for
- * direct IO. Similarly the i_rwsem cannot be taken inside a page fault because
- * page faults already hold the mmap_lock.
+ * invalidate_lock over syscall based read(2)/write(2) based IO. These IO paths
+ * can fault in pages during copy in/out (for buffered IO) or require the
+ * mmap_lock in get_user_pages() to map the user pages into the kernel address
+ * space for direct IO. Similarly the i_rwsem cannot be taken inside a page
+ * fault because page faults already hold the mmap_lock.
  *
  * Hence to serialise fully against both syscall and mmap based IO, we need to
- * take both the i_rwsem and the i_mmap_lock. These locks should *only* be both
- * taken in places where we need to invalidate the page cache in a race
+ * take both the i_rwsem and the invalidate_lock. These locks should *only* be
+ * both taken in places where we need to invalidate the page cache in a race
  * free manner (e.g. truncate, hole punch and other extent manipulation
  * functions).
  */
@@ -190,10 +190,13 @@ xfs_ilock(
 				 XFS_IOLOCK_DEP(lock_flags));
 	}
 
-	if (lock_flags & XFS_MMAPLOCK_EXCL)
-		mrupdate_nested(&ip->i_mmaplock, XFS_MMAPLOCK_DEP(lock_flags));
-	else if (lock_flags & XFS_MMAPLOCK_SHARED)
-		mraccess_nested(&ip->i_mmaplock, XFS_MMAPLOCK_DEP(lock_flags));
+	if (lock_flags & XFS_MMAPLOCK_EXCL) {
+		down_write_nested(&VFS_I(ip)->i_mapping->invalidate_lock,
+				  XFS_MMAPLOCK_DEP(lock_flags));
+	} else if (lock_flags & XFS_MMAPLOCK_SHARED) {
+		down_read_nested(&VFS_I(ip)->i_mapping->invalidate_lock,
+				 XFS_MMAPLOCK_DEP(lock_flags));
+	}
 
 	if (lock_flags & XFS_ILOCK_EXCL)
 		mrupdate_nested(&ip->i_lock, XFS_ILOCK_DEP(lock_flags));
@@ -242,10 +245,10 @@ xfs_ilock_nowait(
 	}
 
 	if (lock_flags & XFS_MMAPLOCK_EXCL) {
-		if (!mrtryupdate(&ip->i_mmaplock))
+		if (!down_write_trylock(&VFS_I(ip)->i_mapping->invalidate_lock))
 			goto out_undo_iolock;
 	} else if (lock_flags & XFS_MMAPLOCK_SHARED) {
-		if (!mrtryaccess(&ip->i_mmaplock))
+		if (!down_read_trylock(&VFS_I(ip)->i_mapping->invalidate_lock))
 			goto out_undo_iolock;
 	}
 
@@ -260,9 +263,9 @@ xfs_ilock_nowait(
 
 out_undo_mmaplock:
 	if (lock_flags & XFS_MMAPLOCK_EXCL)
-		mrunlock_excl(&ip->i_mmaplock);
+		up_write(&VFS_I(ip)->i_mapping->invalidate_lock);
 	else if (lock_flags & XFS_MMAPLOCK_SHARED)
-		mrunlock_shared(&ip->i_mmaplock);
+		up_read(&VFS_I(ip)->i_mapping->invalidate_lock);
 out_undo_iolock:
 	if (lock_flags & XFS_IOLOCK_EXCL)
 		up_write(&VFS_I(ip)->i_rwsem);
@@ -309,9 +312,9 @@ xfs_iunlock(
 		up_read(&VFS_I(ip)->i_rwsem);
 
 	if (lock_flags & XFS_MMAPLOCK_EXCL)
-		mrunlock_excl(&ip->i_mmaplock);
+		up_write(&VFS_I(ip)->i_mapping->invalidate_lock);
 	else if (lock_flags & XFS_MMAPLOCK_SHARED)
-		mrunlock_shared(&ip->i_mmaplock);
+		up_read(&VFS_I(ip)->i_mapping->invalidate_lock);
 
 	if (lock_flags & XFS_ILOCK_EXCL)
 		mrunlock_excl(&ip->i_lock);
@@ -337,7 +340,7 @@ xfs_ilock_demote(
 	if (lock_flags & XFS_ILOCK_EXCL)
 		mrdemote(&ip->i_lock);
 	if (lock_flags & XFS_MMAPLOCK_EXCL)
-		mrdemote(&ip->i_mmaplock);
+		downgrade_write(&VFS_I(ip)->i_mapping->invalidate_lock);
 	if (lock_flags & XFS_IOLOCK_EXCL)
 		downgrade_write(&VFS_I(ip)->i_rwsem);
 
@@ -358,8 +361,11 @@ xfs_isilocked(
 
 	if (lock_flags & (XFS_MMAPLOCK_EXCL|XFS_MMAPLOCK_SHARED)) {
 		if (!(lock_flags & XFS_MMAPLOCK_SHARED))
-			return !!ip->i_mmaplock.mr_writer;
-		return rwsem_is_locked(&ip->i_mmaplock.mr_lock);
+			return !debug_locks ||
+				lockdep_is_held_type(
+					&VFS_I(ip)->i_mapping->invalidate_lock,
+					0);
+		return rwsem_is_locked(&VFS_I(ip)->i_mapping->invalidate_lock);
 	}
 
 	if (lock_flags & (XFS_IOLOCK_EXCL|XFS_IOLOCK_SHARED)) {
diff --git a/fs/xfs/xfs_inode.h b/fs/xfs/xfs_inode.h
index 88ee4c3930ae..147537be751c 100644
--- a/fs/xfs/xfs_inode.h
+++ b/fs/xfs/xfs_inode.h
@@ -40,7 +40,6 @@ typedef struct xfs_inode {
 	/* Transaction and locking information. */
 	struct xfs_inode_log_item *i_itemp;	/* logging information */
 	mrlock_t		i_lock;		/* inode lock */
-	mrlock_t		i_mmaplock;	/* inode mmap IO lock */
 	atomic_t		i_pincount;	/* inode pin count */
 
 	/*
diff --git a/fs/xfs/xfs_super.c b/fs/xfs/xfs_super.c
index e5e0713bebcd..a1536a02aaa5 100644
--- a/fs/xfs/xfs_super.c
+++ b/fs/xfs/xfs_super.c
@@ -715,8 +715,6 @@ xfs_fs_inode_init_once(
 	atomic_set(&ip->i_pincount, 0);
 	spin_lock_init(&ip->i_flags_lock);
 
-	mrlock_init(&ip->i_mmaplock, MRLOCK_ALLOW_EQUAL_PRI|MRLOCK_BARRIER,
-		     "xfsino", ip->i_ino);
 	mrlock_init(&ip->i_lock, MRLOCK_ALLOW_EQUAL_PRI|MRLOCK_BARRIER,
 		     "xfsino", ip->i_ino);
 }
-- 
2.26.2


* [PATCH 06/12] zonefs: Convert to using invalidate_lock
From: Jan Kara @ 2021-04-23 17:29 UTC (permalink / raw)
  To: linux-fsdevel
  Cc: Christoph Hellwig, Amir Goldstein, Dave Chinner, Ted Tso,
	Jan Kara, Damien Le Moal, Johannes Thumshirn

Use invalidate_lock instead of zonefs' private i_mmap_sem. The intended
purpose is exactly the same. This conversion also fixes a race between
hole punching and the read(2) / readahead(2) paths that could lead to
stale page cache contents.

CC: Damien Le Moal <damien.lemoal@wdc.com>
CC: Johannes Thumshirn <jth@kernel.org>
CC: <linux-fsdevel@vger.kernel.org>
Signed-off-by: Jan Kara <jack@suse.cz>
---
 fs/zonefs/super.c  | 23 +++++------------------
 fs/zonefs/zonefs.h |  7 +++----
 2 files changed, 8 insertions(+), 22 deletions(-)

diff --git a/fs/zonefs/super.c b/fs/zonefs/super.c
index 049e36c69ed7..60ac5587c880 100644
--- a/fs/zonefs/super.c
+++ b/fs/zonefs/super.c
@@ -462,7 +462,7 @@ static int zonefs_file_truncate(struct inode *inode, loff_t isize)
 	inode_dio_wait(inode);
 
 	/* Serialize against page faults */
-	down_write(&zi->i_mmap_sem);
+	down_write(&inode->i_mapping->invalidate_lock);
 
 	/* Serialize against zonefs_iomap_begin() */
 	mutex_lock(&zi->i_truncate_mutex);
@@ -500,7 +500,7 @@ static int zonefs_file_truncate(struct inode *inode, loff_t isize)
 
 unlock:
 	mutex_unlock(&zi->i_truncate_mutex);
-	up_write(&zi->i_mmap_sem);
+	up_write(&inode->i_mapping->invalidate_lock);
 
 	return ret;
 }
@@ -575,18 +575,6 @@ static int zonefs_file_fsync(struct file *file, loff_t start, loff_t end,
 	return ret;
 }
 
-static vm_fault_t zonefs_filemap_fault(struct vm_fault *vmf)
-{
-	struct zonefs_inode_info *zi = ZONEFS_I(file_inode(vmf->vma->vm_file));
-	vm_fault_t ret;
-
-	down_read(&zi->i_mmap_sem);
-	ret = filemap_fault(vmf);
-	up_read(&zi->i_mmap_sem);
-
-	return ret;
-}
-
 static vm_fault_t zonefs_filemap_page_mkwrite(struct vm_fault *vmf)
 {
 	struct inode *inode = file_inode(vmf->vma->vm_file);
@@ -607,16 +595,16 @@ static vm_fault_t zonefs_filemap_page_mkwrite(struct vm_fault *vmf)
 	file_update_time(vmf->vma->vm_file);
 
 	/* Serialize against truncates */
-	down_read(&zi->i_mmap_sem);
+	down_read(&inode->i_mapping->invalidate_lock);
 	ret = iomap_page_mkwrite(vmf, &zonefs_iomap_ops);
-	up_read(&zi->i_mmap_sem);
+	up_read(&inode->i_mapping->invalidate_lock);
 
 	sb_end_pagefault(inode->i_sb);
 	return ret;
 }
 
 static const struct vm_operations_struct zonefs_file_vm_ops = {
-	.fault		= zonefs_filemap_fault,
+	.fault		= filemap_fault,
 	.map_pages	= filemap_map_pages,
 	.page_mkwrite	= zonefs_filemap_page_mkwrite,
 };
@@ -1158,7 +1146,6 @@ static struct inode *zonefs_alloc_inode(struct super_block *sb)
 
 	inode_init_once(&zi->i_vnode);
 	mutex_init(&zi->i_truncate_mutex);
-	init_rwsem(&zi->i_mmap_sem);
 	zi->i_wr_refcnt = 0;
 
 	return &zi->i_vnode;
diff --git a/fs/zonefs/zonefs.h b/fs/zonefs/zonefs.h
index 51141907097c..7b147907c328 100644
--- a/fs/zonefs/zonefs.h
+++ b/fs/zonefs/zonefs.h
@@ -70,12 +70,11 @@ struct zonefs_inode_info {
 	 * and changes to the inode private data, and in particular changes to
 	 * a sequential file size on completion of direct IO writes.
 	 * Serialization of mmap read IOs with truncate and syscall IO
-	 * operations is done with i_mmap_sem in addition to i_truncate_mutex.
-	 * Only zonefs_seq_file_truncate() takes both lock (i_mmap_sem first,
-	 * i_truncate_mutex second).
+	 * operations is done with invalidate_lock in addition to
+	 * i_truncate_mutex.  Only zonefs_seq_file_truncate() takes both lock
+	 * (invalidate_lock first, i_truncate_mutex second).
 	 */
 	struct mutex		i_truncate_mutex;
-	struct rw_semaphore	i_mmap_sem;
 
 	/* guarded by i_truncate_mutex */
 	unsigned int		i_wr_refcnt;
-- 
2.26.2


* [PATCH 07/12] f2fs: Convert to using invalidate_lock
From: Jan Kara @ 2021-04-23 17:29 UTC (permalink / raw)
  To: linux-fsdevel
  Cc: Christoph Hellwig, Amir Goldstein, Dave Chinner, Ted Tso,
	Jan Kara, Jaegeuk Kim, Chao Yu, linux-f2fs-devel

Use invalidate_lock instead of f2fs' private i_mmap_sem. The intended
purpose is exactly the same. This conversion also fixes a long-standing
race between hole punching and the read(2) / readahead(2) paths that
could lead to stale page cache contents.

CC: Jaegeuk Kim <jaegeuk@kernel.org>
CC: Chao Yu <yuchao0@huawei.com>
CC: linux-f2fs-devel@lists.sourceforge.net
Signed-off-by: Jan Kara <jack@suse.cz>
---
 fs/f2fs/data.c  |  4 ++--
 fs/f2fs/f2fs.h  |  1 -
 fs/f2fs/file.c  | 58 ++++++++++++++++++++++++-------------------------
 fs/f2fs/super.c |  1 -
 4 files changed, 30 insertions(+), 34 deletions(-)

diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
index 4e5257c763d0..932c773b7b97 100644
--- a/fs/f2fs/data.c
+++ b/fs/f2fs/data.c
@@ -3154,12 +3154,12 @@ static void f2fs_write_failed(struct address_space *mapping, loff_t to)
 	/* In the fs-verity case, f2fs_end_enable_verity() does the truncate */
 	if (to > i_size && !f2fs_verity_in_progress(inode)) {
 		down_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
-		down_write(&F2FS_I(inode)->i_mmap_sem);
+		down_write(&mapping->invalidate_lock);
 
 		truncate_pagecache(inode, i_size);
 		f2fs_truncate_blocks(inode, i_size, true);
 
-		up_write(&F2FS_I(inode)->i_mmap_sem);
+		up_write(&mapping->invalidate_lock);
 		up_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
 	}
 }
diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
index e2d302ae3a46..f9010c69a57e 100644
--- a/fs/f2fs/f2fs.h
+++ b/fs/f2fs/f2fs.h
@@ -742,7 +742,6 @@ struct f2fs_inode_info {
 
 	/* avoid racing between foreground op and gc */
 	struct rw_semaphore i_gc_rwsem[2];
-	struct rw_semaphore i_mmap_sem;
 	struct rw_semaphore i_xattr_sem; /* avoid racing between reading and changing EAs */
 
 	int i_extra_isize;		/* size of extra space located in i_addr */
diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
index d26ff2ae3f5e..e58d67a264ac 100644
--- a/fs/f2fs/file.c
+++ b/fs/f2fs/file.c
@@ -37,10 +37,7 @@ static vm_fault_t f2fs_filemap_fault(struct vm_fault *vmf)
 	struct inode *inode = file_inode(vmf->vma->vm_file);
 	vm_fault_t ret;
 
-	down_read(&F2FS_I(inode)->i_mmap_sem);
 	ret = filemap_fault(vmf);
-	up_read(&F2FS_I(inode)->i_mmap_sem);
-
 	if (!ret)
 		f2fs_update_iostat(F2FS_I_SB(inode), APP_MAPPED_READ_IO,
 							F2FS_BLKSIZE);
@@ -101,7 +98,7 @@ static vm_fault_t f2fs_vm_page_mkwrite(struct vm_fault *vmf)
 	f2fs_bug_on(sbi, f2fs_has_inline_data(inode));
 
 	file_update_time(vmf->vma->vm_file);
-	down_read(&F2FS_I(inode)->i_mmap_sem);
+	down_read(&inode->i_mapping->invalidate_lock);
 	lock_page(page);
 	if (unlikely(page->mapping != inode->i_mapping ||
 			page_offset(page) > i_size_read(inode) ||
@@ -160,7 +157,7 @@ static vm_fault_t f2fs_vm_page_mkwrite(struct vm_fault *vmf)
 
 	trace_f2fs_vm_page_mkwrite(page, DATA);
 out_sem:
-	up_read(&F2FS_I(inode)->i_mmap_sem);
+	up_read(&inode->i_mapping->invalidate_lock);
 
 	sb_end_pagefault(inode->i_sb);
 err:
@@ -941,7 +938,7 @@ int f2fs_setattr(struct user_namespace *mnt_userns, struct dentry *dentry,
 		}
 
 		down_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
-		down_write(&F2FS_I(inode)->i_mmap_sem);
+		down_write(&inode->i_mapping->invalidate_lock);
 
 		truncate_setsize(inode, attr->ia_size);
 
@@ -951,7 +948,7 @@ int f2fs_setattr(struct user_namespace *mnt_userns, struct dentry *dentry,
 		 * do not trim all blocks after i_size if target size is
 		 * larger than i_size.
 		 */
-		up_write(&F2FS_I(inode)->i_mmap_sem);
+		up_write(&inode->i_mapping->invalidate_lock);
 		up_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
 		if (err)
 			return err;
@@ -1094,7 +1091,7 @@ static int punch_hole(struct inode *inode, loff_t offset, loff_t len)
 			blk_end = (loff_t)pg_end << PAGE_SHIFT;
 
 			down_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
-			down_write(&F2FS_I(inode)->i_mmap_sem);
+			down_write(&mapping->invalidate_lock);
 
 			truncate_inode_pages_range(mapping, blk_start,
 					blk_end - 1);
@@ -1103,7 +1100,7 @@ static int punch_hole(struct inode *inode, loff_t offset, loff_t len)
 			ret = f2fs_truncate_hole(inode, pg_start, pg_end);
 			f2fs_unlock_op(sbi);
 
-			up_write(&F2FS_I(inode)->i_mmap_sem);
+			up_write(&mapping->invalidate_lock);
 			up_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
 		}
 	}
@@ -1338,7 +1335,7 @@ static int f2fs_do_collapse(struct inode *inode, loff_t offset, loff_t len)
 
 	/* avoid gc operation during block exchange */
 	down_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
-	down_write(&F2FS_I(inode)->i_mmap_sem);
+	down_write(&inode->i_mapping->invalidate_lock);
 
 	f2fs_lock_op(sbi);
 	f2fs_drop_extent_tree(inode);
@@ -1346,7 +1343,7 @@ static int f2fs_do_collapse(struct inode *inode, loff_t offset, loff_t len)
 	ret = __exchange_data_block(inode, inode, end, start, nrpages - end, true);
 	f2fs_unlock_op(sbi);
 
-	up_write(&F2FS_I(inode)->i_mmap_sem);
+	up_write(&inode->i_mapping->invalidate_lock);
 	up_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
 	return ret;
 }
@@ -1377,13 +1374,13 @@ static int f2fs_collapse_range(struct inode *inode, loff_t offset, loff_t len)
 		return ret;
 
 	/* write out all moved pages, if possible */
-	down_write(&F2FS_I(inode)->i_mmap_sem);
+	down_write(&inode->i_mapping->invalidate_lock);
 	filemap_write_and_wait_range(inode->i_mapping, offset, LLONG_MAX);
 	truncate_pagecache(inode, offset);
 
 	new_size = i_size_read(inode) - len;
 	ret = f2fs_truncate_blocks(inode, new_size, true);
-	up_write(&F2FS_I(inode)->i_mmap_sem);
+	up_write(&inode->i_mapping->invalidate_lock);
 	if (!ret)
 		f2fs_i_size_write(inode, new_size);
 	return ret;
@@ -1483,7 +1480,7 @@ static int f2fs_zero_range(struct inode *inode, loff_t offset, loff_t len,
 			pgoff_t end;
 
 			down_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
-			down_write(&F2FS_I(inode)->i_mmap_sem);
+			down_write(&mapping->invalidate_lock);
 
 			truncate_pagecache_range(inode,
 				(loff_t)index << PAGE_SHIFT,
@@ -1495,7 +1492,7 @@ static int f2fs_zero_range(struct inode *inode, loff_t offset, loff_t len,
 			ret = f2fs_get_dnode_of_data(&dn, index, ALLOC_NODE);
 			if (ret) {
 				f2fs_unlock_op(sbi);
-				up_write(&F2FS_I(inode)->i_mmap_sem);
+				up_write(&mapping->invalidate_lock);
 				up_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
 				goto out;
 			}
@@ -1507,7 +1504,7 @@ static int f2fs_zero_range(struct inode *inode, loff_t offset, loff_t len,
 			f2fs_put_dnode(&dn);
 
 			f2fs_unlock_op(sbi);
-			up_write(&F2FS_I(inode)->i_mmap_sem);
+			up_write(&mapping->invalidate_lock);
 			up_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
 
 			f2fs_balance_fs(sbi, dn.node_changed);
@@ -1542,6 +1539,7 @@ static int f2fs_zero_range(struct inode *inode, loff_t offset, loff_t len,
 static int f2fs_insert_range(struct inode *inode, loff_t offset, loff_t len)
 {
 	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+	struct address_space *mapping = inode->i_mapping;
 	pgoff_t nr, pg_start, pg_end, delta, idx;
 	loff_t new_size;
 	int ret = 0;
@@ -1564,14 +1562,14 @@ static int f2fs_insert_range(struct inode *inode, loff_t offset, loff_t len)
 
 	f2fs_balance_fs(sbi, true);
 
-	down_write(&F2FS_I(inode)->i_mmap_sem);
+	down_write(&mapping->invalidate_lock);
 	ret = f2fs_truncate_blocks(inode, i_size_read(inode), true);
-	up_write(&F2FS_I(inode)->i_mmap_sem);
+	up_write(&mapping->invalidate_lock);
 	if (ret)
 		return ret;
 
 	/* write out all dirty pages from offset */
-	ret = filemap_write_and_wait_range(inode->i_mapping, offset, LLONG_MAX);
+	ret = filemap_write_and_wait_range(mapping, offset, LLONG_MAX);
 	if (ret)
 		return ret;
 
@@ -1582,7 +1580,7 @@ static int f2fs_insert_range(struct inode *inode, loff_t offset, loff_t len)
 
 	/* avoid gc operation during block exchange */
 	down_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
-	down_write(&F2FS_I(inode)->i_mmap_sem);
+	down_write(&mapping->invalidate_lock);
 	truncate_pagecache(inode, offset);
 
 	while (!ret && idx > pg_start) {
@@ -1598,14 +1596,14 @@ static int f2fs_insert_range(struct inode *inode, loff_t offset, loff_t len)
 					idx + delta, nr, false);
 		f2fs_unlock_op(sbi);
 	}
-	up_write(&F2FS_I(inode)->i_mmap_sem);
+	up_write(&mapping->invalidate_lock);
 	up_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
 
 	/* write out all moved pages, if possible */
-	down_write(&F2FS_I(inode)->i_mmap_sem);
-	filemap_write_and_wait_range(inode->i_mapping, offset, LLONG_MAX);
+	down_write(&mapping->invalidate_lock);
+	filemap_write_and_wait_range(mapping, offset, LLONG_MAX);
 	truncate_pagecache(inode, offset);
-	up_write(&F2FS_I(inode)->i_mmap_sem);
+	up_write(&mapping->invalidate_lock);
 
 	if (!ret)
 		f2fs_i_size_write(inode, new_size);
@@ -3566,7 +3564,7 @@ static int f2fs_release_compress_blocks(struct file *filp, unsigned long arg)
 		goto out;
 
 	down_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
-	down_write(&F2FS_I(inode)->i_mmap_sem);
+	down_write(&mapping->invalidate_lock);
 
 	last_idx = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE);
 
@@ -3602,7 +3600,7 @@ static int f2fs_release_compress_blocks(struct file *filp, unsigned long arg)
 	}
 
 	up_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
-	up_write(&F2FS_I(inode)->i_mmap_sem);
+	up_write(&mapping->invalidate_lock);
 out:
 	inode_unlock(inode);
 
@@ -3719,7 +3717,7 @@ static int f2fs_reserve_compress_blocks(struct file *filp, unsigned long arg)
 	}
 
 	down_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
-	down_write(&F2FS_I(inode)->i_mmap_sem);
+	down_write(&inode->i_mapping->invalidate_lock);
 
 	last_idx = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE);
 
@@ -3755,7 +3753,7 @@ static int f2fs_reserve_compress_blocks(struct file *filp, unsigned long arg)
 	}
 
 	up_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
-	up_write(&F2FS_I(inode)->i_mmap_sem);
+	up_write(&inode->i_mapping->invalidate_lock);
 
 	if (ret >= 0) {
 		F2FS_I(inode)->i_flags &= ~F2FS_IMMUTABLE_FL;
@@ -3875,7 +3873,7 @@ static int f2fs_sec_trim_file(struct file *filp, unsigned long arg)
 		goto err;
 
 	down_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
-	down_write(&F2FS_I(inode)->i_mmap_sem);
+	down_write(&mapping->invalidate_lock);
 
 	ret = filemap_write_and_wait_range(mapping, range.start,
 			to_end ? LLONG_MAX : end_addr - 1);
@@ -3962,7 +3960,7 @@ static int f2fs_sec_trim_file(struct file *filp, unsigned long arg)
 		ret = f2fs_secure_erase(prev_bdev, inode, prev_index,
 				prev_block, len, range.flags);
 out:
-	up_write(&F2FS_I(inode)->i_mmap_sem);
+	up_write(&mapping->invalidate_lock);
 	up_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
 err:
 	inode_unlock(inode);
diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
index 82592b19b4e0..c6ade37338ed 100644
--- a/fs/f2fs/super.c
+++ b/fs/f2fs/super.c
@@ -1176,7 +1176,6 @@ static struct inode *f2fs_alloc_inode(struct super_block *sb)
 	mutex_init(&fi->inmem_lock);
 	init_rwsem(&fi->i_gc_rwsem[READ]);
 	init_rwsem(&fi->i_gc_rwsem[WRITE]);
-	init_rwsem(&fi->i_mmap_sem);
 	init_rwsem(&fi->i_xattr_sem);
 
 	/* Will be used by directory only */
-- 
2.26.2


* [PATCH 08/12] fuse: Convert to using invalidate_lock
From: Jan Kara @ 2021-04-23 17:29 UTC (permalink / raw)
  To: linux-fsdevel
  Cc: Christoph Hellwig, Amir Goldstein, Dave Chinner, Ted Tso,
	Jan Kara, Miklos Szeredi

Use invalidate_lock instead of fuse's private i_mmap_sem. The intended
purpose is exactly the same. This conversion also fixes a long-standing
race between hole punching and the read(2) / readahead(2) paths that
could lead to stale page cache contents.

CC: Miklos Szeredi <miklos@szeredi.hu>
Signed-off-by: Jan Kara <jack@suse.cz>
---
 fs/fuse/dax.c    | 50 +++++++++++++++++++++++-------------------------
 fs/fuse/dir.c    | 11 ++++++-----
 fs/fuse/file.c   | 10 +++++-----
 fs/fuse/fuse_i.h |  7 -------
 fs/fuse/inode.c  |  1 -
 5 files changed, 35 insertions(+), 44 deletions(-)

diff --git a/fs/fuse/dax.c b/fs/fuse/dax.c
index ff99ab2a3c43..03e5477ee913 100644
--- a/fs/fuse/dax.c
+++ b/fs/fuse/dax.c
@@ -443,12 +443,12 @@ static int fuse_setup_new_dax_mapping(struct inode *inode, loff_t pos,
 	/*
 	 * Can't do inline reclaim in fault path. We call
 	 * dax_layout_busy_page() before we free a range. And
-	 * fuse_wait_dax_page() drops fi->i_mmap_sem lock and requires it.
-	 * In fault path we enter with fi->i_mmap_sem held and can't drop
-	 * it. Also in fault path we hold fi->i_mmap_sem shared and not
-	 * exclusive, so that creates further issues with fuse_wait_dax_page().
-	 * Hence return -EAGAIN and fuse_dax_fault() will wait for a memory
-	 * range to become free and retry.
+	 * fuse_wait_dax_page() drops mapping->invalidate_lock and requires it.
+	 * In fault path we enter with mapping->invalidate_lock held and can't
+	 * drop it. Also in fault path we hold mapping->invalidate_lock shared
+	 * and not exclusive, so that creates further issues with
+	 * fuse_wait_dax_page().  Hence return -EAGAIN and fuse_dax_fault()
+	 * will wait for a memory range to become free and retry.
 	 */
 	if (flags & IOMAP_FAULT) {
 		alloc_dmap = alloc_dax_mapping(fcd);
@@ -512,7 +512,7 @@ static int fuse_upgrade_dax_mapping(struct inode *inode, loff_t pos,
 	down_write(&fi->dax->sem);
 	node = interval_tree_iter_first(&fi->dax->tree, idx, idx);
 
-	/* We are holding either inode lock or i_mmap_sem, and that should
+	/* We are holding either inode lock or invalidate_lock, and that should
 	 * ensure that dmap can't be truncated. We are holding a reference
 	 * on dmap and that should make sure it can't be reclaimed. So dmap
 	 * should still be there in tree despite the fact we dropped and
@@ -659,14 +659,12 @@ static const struct iomap_ops fuse_iomap_ops = {
 
 static void fuse_wait_dax_page(struct inode *inode)
 {
-	struct fuse_inode *fi = get_fuse_inode(inode);
-
-	up_write(&fi->i_mmap_sem);
+	up_write(&inode->i_mapping->invalidate_lock);
 	schedule();
-	down_write(&fi->i_mmap_sem);
+	down_write(&inode->i_mapping->invalidate_lock);
 }
 
-/* Should be called with fi->i_mmap_sem lock held exclusively */
+/* Should be called with mapping->invalidate_lock held exclusively */
 static int __fuse_dax_break_layouts(struct inode *inode, bool *retry,
 				    loff_t start, loff_t end)
 {
@@ -812,18 +810,18 @@ static vm_fault_t __fuse_dax_fault(struct vm_fault *vmf,
 	 * we do not want any read/write/mmap to make progress and try
 	 * to populate page cache or access memory we are trying to free.
 	 */
-	down_read(&get_fuse_inode(inode)->i_mmap_sem);
+	down_read(&inode->i_mapping->invalidate_lock);
 	ret = dax_iomap_fault(vmf, pe_size, &pfn, &error, &fuse_iomap_ops);
 	if ((ret & VM_FAULT_ERROR) && error == -EAGAIN) {
 		error = 0;
 		retry = true;
-		up_read(&get_fuse_inode(inode)->i_mmap_sem);
+		up_read(&inode->i_mapping->invalidate_lock);
 		goto retry;
 	}
 
 	if (ret & VM_FAULT_NEEDDSYNC)
 		ret = dax_finish_sync_fault(vmf, pe_size, pfn);
-	up_read(&get_fuse_inode(inode)->i_mmap_sem);
+	up_read(&inode->i_mapping->invalidate_lock);
 
 	if (write)
 		sb_end_pagefault(sb);
@@ -959,7 +957,7 @@ inode_inline_reclaim_one_dmap(struct fuse_conn_dax *fcd, struct inode *inode,
 	int ret;
 	struct interval_tree_node *node;
 
-	down_write(&fi->i_mmap_sem);
+	down_write(&inode->i_mapping->invalidate_lock);
 
 	/* Lookup a dmap and corresponding file offset to reclaim. */
 	down_read(&fi->dax->sem);
@@ -1020,7 +1018,7 @@ inode_inline_reclaim_one_dmap(struct fuse_conn_dax *fcd, struct inode *inode,
 out_write_dmap_sem:
 	up_write(&fi->dax->sem);
 out_mmap_sem:
-	up_write(&fi->i_mmap_sem);
+	up_write(&inode->i_mapping->invalidate_lock);
 	return dmap;
 }
 
@@ -1049,10 +1047,10 @@ alloc_dax_mapping_reclaim(struct fuse_conn_dax *fcd, struct inode *inode)
 		 * had a reference or some other temporary failure,
 		 * Try again. We want to give up inline reclaim only
 		 * if there is no range assigned to this node. Otherwise
-		 * if a deadlock is possible if we sleep with fi->i_mmap_sem
-		 * held and worker to free memory can't make progress due
-		 * to unavailability of fi->i_mmap_sem lock. So sleep
-		 * only if fi->dax->nr=0
+		 * if a deadlock is possible if we sleep with
+		 * mapping->invalidate_lock held and worker to free memory
+		 * can't make progress due to unavailability of
+		 * mapping->invalidate_lock.  So sleep only if fi->dax->nr=0
 		 */
 		if (retry)
 			continue;
@@ -1060,8 +1058,8 @@ alloc_dax_mapping_reclaim(struct fuse_conn_dax *fcd, struct inode *inode)
 		 * There are no mappings which can be reclaimed. Wait for one.
 		 * We are not holding fi->dax->sem. So it is possible
 		 * that range gets added now. But as we are not holding
-		 * fi->i_mmap_sem, worker should still be able to free up
-		 * a range and wake us up.
+		 * mapping->invalidate_lock, worker should still be able to
+		 * free up a range and wake us up.
 		 */
 		if (!fi->dax->nr && !(fcd->nr_free_ranges > 0)) {
 			if (wait_event_killable_exclusive(fcd->range_waitq,
@@ -1107,7 +1105,7 @@ static int lookup_and_reclaim_dmap_locked(struct fuse_conn_dax *fcd,
 /*
  * Free a range of memory.
  * Locking:
- * 1. Take fi->i_mmap_sem to block dax faults.
+ * 1. Take mapping->invalidate_lock to block dax faults.
  * 2. Take fi->dax->sem to protect interval tree and also to make sure
  *    read/write can not reuse a dmap which we might be freeing.
  */
@@ -1121,7 +1119,7 @@ static int lookup_and_reclaim_dmap(struct fuse_conn_dax *fcd,
 	loff_t dmap_start = start_idx << FUSE_DAX_SHIFT;
 	loff_t dmap_end = (dmap_start + FUSE_DAX_SZ) - 1;
 
-	down_write(&fi->i_mmap_sem);
+	down_write(&inode->i_mapping->invalidate_lock);
 	ret = fuse_dax_break_layouts(inode, dmap_start, dmap_end);
 	if (ret) {
 		pr_debug("virtio_fs: fuse_dax_break_layouts() failed. err=%d\n",
@@ -1133,7 +1131,7 @@ static int lookup_and_reclaim_dmap(struct fuse_conn_dax *fcd,
 	ret = lookup_and_reclaim_dmap_locked(fcd, inode, start_idx);
 	up_write(&fi->dax->sem);
 out_mmap_sem:
-	up_write(&fi->i_mmap_sem);
+	up_write(&inode->i_mapping->invalidate_lock);
 	return ret;
 }
 
diff --git a/fs/fuse/dir.c b/fs/fuse/dir.c
index 06a18700a845..2f29ac4fa489 100644
--- a/fs/fuse/dir.c
+++ b/fs/fuse/dir.c
@@ -1601,6 +1601,7 @@ int fuse_do_setattr(struct dentry *dentry, struct iattr *attr,
 	struct fuse_mount *fm = get_fuse_mount(inode);
 	struct fuse_conn *fc = fm->fc;
 	struct fuse_inode *fi = get_fuse_inode(inode);
+	struct address_space *mapping = inode->i_mapping;
 	FUSE_ARGS(args);
 	struct fuse_setattr_in inarg;
 	struct fuse_attr_out outarg;
@@ -1625,11 +1626,11 @@ int fuse_do_setattr(struct dentry *dentry, struct iattr *attr,
 	}
 
 	if (FUSE_IS_DAX(inode) && is_truncate) {
-		down_write(&fi->i_mmap_sem);
+		down_write(&mapping->invalidate_lock);
 		fault_blocked = true;
 		err = fuse_dax_break_layouts(inode, 0, 0);
 		if (err) {
-			up_write(&fi->i_mmap_sem);
+			up_write(&mapping->invalidate_lock);
 			return err;
 		}
 	}
@@ -1739,13 +1740,13 @@ int fuse_do_setattr(struct dentry *dentry, struct iattr *attr,
 	if ((is_truncate || !is_wb) &&
 	    S_ISREG(inode->i_mode) && oldsize != outarg.attr.size) {
 		truncate_pagecache(inode, outarg.attr.size);
-		invalidate_inode_pages2(inode->i_mapping);
+		invalidate_inode_pages2(mapping);
 	}
 
 	clear_bit(FUSE_I_SIZE_UNSTABLE, &fi->state);
 out:
 	if (fault_blocked)
-		up_write(&fi->i_mmap_sem);
+		up_write(&mapping->invalidate_lock);
 
 	return 0;
 
@@ -1756,7 +1757,7 @@ int fuse_do_setattr(struct dentry *dentry, struct iattr *attr,
 	clear_bit(FUSE_I_SIZE_UNSTABLE, &fi->state);
 
 	if (fault_blocked)
-		up_write(&fi->i_mmap_sem);
+		up_write(&mapping->invalidate_lock);
 	return err;
 }
 
diff --git a/fs/fuse/file.c b/fs/fuse/file.c
index 8cccecb55fb8..7cacd8ff27eb 100644
--- a/fs/fuse/file.c
+++ b/fs/fuse/file.c
@@ -245,7 +245,7 @@ int fuse_open_common(struct inode *inode, struct file *file, bool isdir)
 	}
 
 	if (dax_truncate) {
-		down_write(&get_fuse_inode(inode)->i_mmap_sem);
+		down_write(&inode->i_mapping->invalidate_lock);
 		err = fuse_dax_break_layouts(inode, 0, 0);
 		if (err)
 			goto out;
@@ -257,7 +257,7 @@ int fuse_open_common(struct inode *inode, struct file *file, bool isdir)
 
 out:
 	if (dax_truncate)
-		up_write(&get_fuse_inode(inode)->i_mmap_sem);
+		up_write(&inode->i_mapping->invalidate_lock);
 
 	if (is_wb_truncate | dax_truncate) {
 		fuse_release_nowrite(inode);
@@ -3266,7 +3266,7 @@ static long fuse_file_fallocate(struct file *file, int mode, loff_t offset,
 	if (lock_inode) {
 		inode_lock(inode);
 		if (block_faults) {
-			down_write(&fi->i_mmap_sem);
+			down_write(&inode->i_mapping->invalidate_lock);
 			err = fuse_dax_break_layouts(inode, 0, 0);
 			if (err)
 				goto out;
@@ -3322,7 +3322,7 @@ static long fuse_file_fallocate(struct file *file, int mode, loff_t offset,
 		clear_bit(FUSE_I_SIZE_UNSTABLE, &fi->state);
 
 	if (block_faults)
-		up_write(&fi->i_mmap_sem);
+		up_write(&inode->i_mapping->invalidate_lock);
 
 	if (lock_inode)
 		inode_unlock(inode);
@@ -3391,7 +3391,7 @@ static ssize_t __fuse_copy_file_range(struct file *file_in, loff_t pos_in,
 	 * modifications.  Yet this does give less guarantees than if the
 	 * copying was performed with write(2).
 	 *
-	 * To fix this a i_mmap_sem style lock could be used to prevent new
+	 * To fix this a mapping->invalidate_lock could be used to prevent new
 	 * faults while the copy is ongoing.
 	 */
 	err = fuse_writeback_range(inode_out, pos_out, pos_out + len - 1);
diff --git a/fs/fuse/fuse_i.h b/fs/fuse/fuse_i.h
index 63d97a15ffde..636fca293191 100644
--- a/fs/fuse/fuse_i.h
+++ b/fs/fuse/fuse_i.h
@@ -149,13 +149,6 @@ struct fuse_inode {
 	/** Lock to protect write related fields */
 	spinlock_t lock;
 
-	/**
-	 * Can't take inode lock in fault path (leads to circular dependency).
-	 * Introduce another semaphore which can be taken in fault path and
-	 * then other filesystem paths can take this to block faults.
-	 */
-	struct rw_semaphore i_mmap_sem;
-
 #ifdef CONFIG_FUSE_DAX
 	/*
 	 * Dax specific inode data
diff --git a/fs/fuse/inode.c b/fs/fuse/inode.c
index b0e18b470e91..bcc91e0565ed 100644
--- a/fs/fuse/inode.c
+++ b/fs/fuse/inode.c
@@ -85,7 +85,6 @@ static struct inode *fuse_alloc_inode(struct super_block *sb)
 	fi->orig_ino = 0;
 	fi->state = 0;
 	mutex_init(&fi->mutex);
-	init_rwsem(&fi->i_mmap_sem);
 	spin_lock_init(&fi->lock);
 	fi->forget = fuse_alloc_forget();
 	if (!fi->forget)
-- 
2.26.2


* [PATCH 09/12] shmem: Convert to using invalidate_lock
From: Jan Kara @ 2021-04-23 17:29 UTC (permalink / raw)
  To: linux-fsdevel
  Cc: Christoph Hellwig, Amir Goldstein, Dave Chinner, Ted Tso,
	Jan Kara, Hugh Dickins, linux-mm

Shmem uses a home-grown mechanism for serializing hole punch with page
fault. Use mapping->invalidate_lock for it instead. Admittedly, the
home-grown mechanism locks out only the range actually being punched
out, while invalidate_lock locks the whole mapping and thus serializes
more. But hole punching does not seem to be a performance-critical
operation, and the simplification is noticeable.
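
In plain rwsem terms, the serialization this patch ends up with looks
as follows (an illustrative condensation of the diff below, not new
code):

	/* hole punch side, in shmem_fallocate(): */
	down_write(&mapping->invalidate_lock);
	unmap_mapping_range(mapping, unmap_start,
			    1 + unmap_end - unmap_start, 0);
	shmem_truncate_range(inode, offset, offset + len - 1);
	up_write(&mapping->invalidate_lock);

	/* page fault side, in shmem_fault(): */
	down_read(&inode->i_mapping->invalidate_lock);
	err = shmem_getpage_gfp(inode, vmf->pgoff, &vmf->page, sgp,
				gfp, vma, vmf, &ret);
	up_read(&inode->i_mapping->invalidate_lock);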

CC: Hugh Dickins <hughd@google.com>
CC: <linux-mm@kvack.org>
Signed-off-by: Jan Kara <jack@suse.cz>
---
 mm/shmem.c | 98 ++++--------------------------------------------------
 1 file changed, 7 insertions(+), 91 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index 55b2888db542..f34162ac46de 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -95,12 +95,11 @@ static struct vfsmount *shm_mnt;
 #define SHORT_SYMLINK_LEN 128
 
 /*
- * shmem_fallocate communicates with shmem_fault or shmem_writepage via
- * inode->i_private (with i_rwsem making sure that it has only one user at
- * a time): we would prefer not to enlarge the shmem inode just for that.
+ * shmem_fallocate communicates with shmem_writepage via inode->i_private (with
+ * i_rwsem making sure that it has only one user at a time): we would prefer
+ * not to enlarge the shmem inode just for that.
  */
 struct shmem_falloc {
-	wait_queue_head_t *waitq; /* faults into hole wait for punch to end */
 	pgoff_t start;		/* start of range currently being fallocated */
 	pgoff_t next;		/* the next page offset to be fallocated */
 	pgoff_t nr_falloced;	/* how many new pages have been fallocated */
@@ -1378,7 +1377,6 @@ static int shmem_writepage(struct page *page, struct writeback_control *wbc)
 			spin_lock(&inode->i_lock);
 			shmem_falloc = inode->i_private;
 			if (shmem_falloc &&
-			    !shmem_falloc->waitq &&
 			    index >= shmem_falloc->start &&
 			    index < shmem_falloc->next)
 				shmem_falloc->nr_unswapped++;
@@ -2025,18 +2023,6 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
 	return error;
 }
 
-/*
- * This is like autoremove_wake_function, but it removes the wait queue
- * entry unconditionally - even if something else had already woken the
- * target.
- */
-static int synchronous_wake_function(wait_queue_entry_t *wait, unsigned mode, int sync, void *key)
-{
-	int ret = default_wake_function(wait, mode, sync, key);
-	list_del_init(&wait->entry);
-	return ret;
-}
-
 static vm_fault_t shmem_fault(struct vm_fault *vmf)
 {
 	struct vm_area_struct *vma = vmf->vma;
@@ -2046,65 +2032,6 @@ static vm_fault_t shmem_fault(struct vm_fault *vmf)
 	int err;
 	vm_fault_t ret = VM_FAULT_LOCKED;
 
-	/*
-	 * Trinity finds that probing a hole which tmpfs is punching can
-	 * prevent the hole-punch from ever completing: which in turn
-	 * locks writers out with its hold on i_rwsem.  So refrain from
-	 * faulting pages into the hole while it's being punched.  Although
-	 * shmem_undo_range() does remove the additions, it may be unable to
-	 * keep up, as each new page needs its own unmap_mapping_range() call,
-	 * and the i_mmap tree grows ever slower to scan if new vmas are added.
-	 *
-	 * It does not matter if we sometimes reach this check just before the
-	 * hole-punch begins, so that one fault then races with the punch:
-	 * we just need to make racing faults a rare case.
-	 *
-	 * The implementation below would be much simpler if we just used a
-	 * standard mutex or completion: but we cannot take i_rwsem in fault,
-	 * and bloating every shmem inode for this unlikely case would be sad.
-	 */
-	if (unlikely(inode->i_private)) {
-		struct shmem_falloc *shmem_falloc;
-
-		spin_lock(&inode->i_lock);
-		shmem_falloc = inode->i_private;
-		if (shmem_falloc &&
-		    shmem_falloc->waitq &&
-		    vmf->pgoff >= shmem_falloc->start &&
-		    vmf->pgoff < shmem_falloc->next) {
-			struct file *fpin;
-			wait_queue_head_t *shmem_falloc_waitq;
-			DEFINE_WAIT_FUNC(shmem_fault_wait, synchronous_wake_function);
-
-			ret = VM_FAULT_NOPAGE;
-			fpin = maybe_unlock_mmap_for_io(vmf, NULL);
-			if (fpin)
-				ret = VM_FAULT_RETRY;
-
-			shmem_falloc_waitq = shmem_falloc->waitq;
-			prepare_to_wait(shmem_falloc_waitq, &shmem_fault_wait,
-					TASK_UNINTERRUPTIBLE);
-			spin_unlock(&inode->i_lock);
-			schedule();
-
-			/*
-			 * shmem_falloc_waitq points into the shmem_fallocate()
-			 * stack of the hole-punching task: shmem_falloc_waitq
-			 * is usually invalid by the time we reach here, but
-			 * finish_wait() does not dereference it in that case;
-			 * though i_lock needed lest racing with wake_up_all().
-			 */
-			spin_lock(&inode->i_lock);
-			finish_wait(shmem_falloc_waitq, &shmem_fault_wait);
-			spin_unlock(&inode->i_lock);
-
-			if (fpin)
-				fput(fpin);
-			return ret;
-		}
-		spin_unlock(&inode->i_lock);
-	}
-
 	sgp = SGP_CACHE;
 
 	if ((vma->vm_flags & VM_NOHUGEPAGE) ||
@@ -2113,8 +2040,10 @@ static vm_fault_t shmem_fault(struct vm_fault *vmf)
 	else if (vma->vm_flags & VM_HUGEPAGE)
 		sgp = SGP_HUGE;
 
+	down_read(&inode->i_mapping->invalidate_lock);
 	err = shmem_getpage_gfp(inode, vmf->pgoff, &vmf->page, sgp,
 				  gfp, vma, vmf, &ret);
+	up_read(&inode->i_mapping->invalidate_lock);
 	if (err)
 		return vmf_error(err);
 	return ret;
@@ -2715,7 +2644,6 @@ static long shmem_fallocate(struct file *file, int mode, loff_t offset,
 		struct address_space *mapping = file->f_mapping;
 		loff_t unmap_start = round_up(offset, PAGE_SIZE);
 		loff_t unmap_end = round_down(offset + len, PAGE_SIZE) - 1;
-		DECLARE_WAIT_QUEUE_HEAD_ONSTACK(shmem_falloc_waitq);
 
 		/* protected by i_rwsem */
 		if (info->seals & (F_SEAL_WRITE | F_SEAL_FUTURE_WRITE)) {
@@ -2723,24 +2651,13 @@ static long shmem_fallocate(struct file *file, int mode, loff_t offset,
 			goto out;
 		}
 
-		shmem_falloc.waitq = &shmem_falloc_waitq;
-		shmem_falloc.start = (u64)unmap_start >> PAGE_SHIFT;
-		shmem_falloc.next = (unmap_end + 1) >> PAGE_SHIFT;
-		spin_lock(&inode->i_lock);
-		inode->i_private = &shmem_falloc;
-		spin_unlock(&inode->i_lock);
-
+		down_write(&mapping->invalidate_lock);
 		if ((u64)unmap_end > (u64)unmap_start)
 			unmap_mapping_range(mapping, unmap_start,
 					    1 + unmap_end - unmap_start, 0);
 		shmem_truncate_range(inode, offset, offset + len - 1);
 		/* No need to unmap again: hole-punching leaves COWed pages */
-
-		spin_lock(&inode->i_lock);
-		inode->i_private = NULL;
-		wake_up_all(&shmem_falloc_waitq);
-		WARN_ON_ONCE(!list_empty(&shmem_falloc_waitq.head));
-		spin_unlock(&inode->i_lock);
+		up_write(&mapping->invalidate_lock);
 		error = 0;
 		goto out;
 	}
@@ -2763,7 +2680,6 @@ static long shmem_fallocate(struct file *file, int mode, loff_t offset,
 		goto out;
 	}
 
-	shmem_falloc.waitq = NULL;
 	shmem_falloc.start = start;
 	shmem_falloc.next  = start;
 	shmem_falloc.nr_falloced = 0;
-- 
2.26.2


* [PATCH 10/12] shmem: Use invalidate_lock to protect fallocate
From: Jan Kara @ 2021-04-23 17:29 UTC (permalink / raw)
  To: linux-fsdevel
  Cc: Christoph Hellwig, Amir Goldstein, Dave Chinner, Ted Tso,
	Jan Kara, Hugh Dickins, linux-mm

We have to handle pages added by a currently running shmem_fallocate()
specially in shmem_writepage(). For this we use a serialization
mechanism based on a structure attached to the inode->i_private field.
If we instead protect the allocation of pages in shmem_fallocate() with
invalidate_lock, we are sure the added pages cannot be dirtied until
shmem_fallocate() is done (invalidate_lock blocks faults, i_rwsem
blocks writes), and thus shmem_writepage() can never see those pages
and the serialization mechanism is not needed.
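
The reasoning above in lock terms (an illustrative sketch with error
handling elided; the lock and allocation calls match the diff below,
the comments are annotations): while shmem_fallocate() holds both
locks, nothing that could dirty one of its freshly allocated pages can
run, so shmem_writepage() only ever sees pages whose fallocate has
completed.

	inode_lock(inode);			/* blocks write(2) */
	down_write(&mapping->invalidate_lock);	/* blocks page faults */
	for (index = start; index < end; index++)
		error = shmem_getpage(inode, index, &page, SGP_FALLOC);
	up_write(&mapping->invalidate_lock);
	inode_unlock(inode);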

CC: Hugh Dickins <hughd@google.com>
CC: <linux-mm@kvack.org>
Signed-off-by: Jan Kara <jack@suse.cz>
---
 mm/shmem.c | 61 ++++++------------------------------------------------
 1 file changed, 6 insertions(+), 55 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index f34162ac46de..7a2b0744031e 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -94,18 +94,6 @@ static struct vfsmount *shm_mnt;
 /* Symlink up to this size is kmalloc'ed instead of using a swappable page */
 #define SHORT_SYMLINK_LEN 128
 
-/*
- * shmem_fallocate communicates with shmem_writepage via inode->i_private (with
- * i_rwsem making sure that it has only one user at a time): we would prefer
- * not to enlarge the shmem inode just for that.
- */
-struct shmem_falloc {
-	pgoff_t start;		/* start of range currently being fallocated */
-	pgoff_t next;		/* the next page offset to be fallocated */
-	pgoff_t nr_falloced;	/* how many new pages have been fallocated */
-	pgoff_t nr_unswapped;	/* how often writepage refused to swap out */
-};
-
 struct shmem_options {
 	unsigned long long blocks;
 	unsigned long long inodes;
@@ -1364,28 +1352,11 @@ static int shmem_writepage(struct page *page, struct writeback_control *wbc)
 	 * This is somewhat ridiculous, but without plumbing a SWAP_MAP_FALLOC
 	 * value into swapfile.c, the only way we can correctly account for a
 	 * fallocated page arriving here is now to initialize it and write it.
-	 *
-	 * That's okay for a page already fallocated earlier, but if we have
-	 * not yet completed the fallocation, then (a) we want to keep track
-	 * of this page in case we have to undo it, and (b) it may not be a
-	 * good idea to continue anyway, once we're pushing into swap.  So
-	 * reactivate the page, and let shmem_fallocate() quit when too many.
+	 * Since a page added by currently running fallocate call cannot be
+	 * dirtied and thus arrive here we know the fallocate has already
+	 * completed and we are fine writing it out.
 	 */
 	if (!PageUptodate(page)) {
-		if (inode->i_private) {
-			struct shmem_falloc *shmem_falloc;
-			spin_lock(&inode->i_lock);
-			shmem_falloc = inode->i_private;
-			if (shmem_falloc &&
-			    index >= shmem_falloc->start &&
-			    index < shmem_falloc->next)
-				shmem_falloc->nr_unswapped++;
-			else
-				shmem_falloc = NULL;
-			spin_unlock(&inode->i_lock);
-			if (shmem_falloc)
-				goto redirty;
-		}
 		clear_highpage(page);
 		flush_dcache_page(page);
 		SetPageUptodate(page);
@@ -2629,9 +2600,9 @@ static long shmem_fallocate(struct file *file, int mode, loff_t offset,
 							 loff_t len)
 {
 	struct inode *inode = file_inode(file);
+	struct address_space *mapping = file->f_mapping;
 	struct shmem_sb_info *sbinfo = SHMEM_SB(inode->i_sb);
 	struct shmem_inode_info *info = SHMEM_I(inode);
-	struct shmem_falloc shmem_falloc;
 	pgoff_t start, index, end;
 	int error;
 
@@ -2641,7 +2612,6 @@ static long shmem_fallocate(struct file *file, int mode, loff_t offset,
 	inode_lock(inode);
 
 	if (mode & FALLOC_FL_PUNCH_HOLE) {
-		struct address_space *mapping = file->f_mapping;
 		loff_t unmap_start = round_up(offset, PAGE_SIZE);
 		loff_t unmap_end = round_down(offset + len, PAGE_SIZE) - 1;
 
@@ -2680,14 +2650,7 @@ static long shmem_fallocate(struct file *file, int mode, loff_t offset,
 		goto out;
 	}
 
-	shmem_falloc.start = start;
-	shmem_falloc.next  = start;
-	shmem_falloc.nr_falloced = 0;
-	shmem_falloc.nr_unswapped = 0;
-	spin_lock(&inode->i_lock);
-	inode->i_private = &shmem_falloc;
-	spin_unlock(&inode->i_lock);
-
+	down_write(&mapping->invalidate_lock);
 	for (index = start; index < end; index++) {
 		struct page *page;
 
@@ -2697,8 +2660,6 @@ static long shmem_fallocate(struct file *file, int mode, loff_t offset,
 		 */
 		if (signal_pending(current))
 			error = -EINTR;
-		else if (shmem_falloc.nr_unswapped > shmem_falloc.nr_falloced)
-			error = -ENOMEM;
 		else
 			error = shmem_getpage(inode, index, &page, SGP_FALLOC);
 		if (error) {
@@ -2711,14 +2672,6 @@ static long shmem_fallocate(struct file *file, int mode, loff_t offset,
 			goto undone;
 		}
 
-		/*
-		 * Inform shmem_writepage() how far we have reached.
-		 * No need for lock or barrier: we have the page lock.
-		 */
-		shmem_falloc.next++;
-		if (!PageUptodate(page))
-			shmem_falloc.nr_falloced++;
-
 		/*
 		 * If !PageUptodate, leave it that way so that freeable pages
 		 * can be recognized if we need to rollback on error later.
@@ -2736,9 +2689,7 @@ static long shmem_fallocate(struct file *file, int mode, loff_t offset,
 		i_size_write(inode, offset + len);
 	inode->i_ctime = current_time(inode);
 undone:
-	spin_lock(&inode->i_lock);
-	inode->i_private = NULL;
-	spin_unlock(&inode->i_lock);
+	up_write(&mapping->invalidate_lock);
 out:
 	inode_unlock(inode);
 	return error;
-- 
2.26.2


* [PATCH 11/12] ceph: Fix race between hole punch and page fault
From: Jan Kara @ 2021-04-23 17:29 UTC (permalink / raw)
  To: linux-fsdevel
  Cc: Christoph Hellwig, Amir Goldstein, Dave Chinner, Ted Tso,
	Jan Kara, Jeff Layton, ceph-devel

Ceph has the following race between hole punching and page faults:

CPU1                                  CPU2
ceph_fallocate()
  ...
  ceph_zero_pagecache_range()
                                      ceph_filemap_fault()
                                        faults in page in the range being
                                        punched
  ceph_zero_objects()

And now we have a page in the punched range with stale data. Fix the
problem by using mapping->invalidate_lock similarly to other
filesystems. Note that using invalidate_lock also fixes a similar race
with respect to ->readpage().
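
For illustration, a simplified userspace sketch of this kind of race
(not part of the patch; the file path, sizes and iteration counts are
arbitrary): one thread rewrites and hole-punches a range while another
repeatedly faults it in. On an unfixed filesystem the faulting thread
can observe stale data after a punch has completed. Build with
"cc -pthread"; whether the race is actually observable depends on
timing.

	#define _GNU_SOURCE
	#include <fcntl.h>
	#include <pthread.h>
	#include <string.h>
	#include <sys/mman.h>
	#include <unistd.h>

	#define LEN (1 << 20)

	static int fd;

	static void *puncher(void *arg)
	{
		char buf[4096];

		memset(buf, 0xaa, sizeof(buf));
		for (int i = 0; i < 100000; i++) {
			pwrite(fd, buf, sizeof(buf), 0);
			fallocate(fd, FALLOC_FL_PUNCH_HOLE |
				  FALLOC_FL_KEEP_SIZE, 0, LEN);
		}
		return NULL;
	}

	static void *faulter(void *arg)
	{
		for (int i = 0; i < 100000; i++) {
			char *p = mmap(NULL, LEN, PROT_READ, MAP_SHARED,
				       fd, 0);

			if (p == MAP_FAILED)
				continue;
			/* this fault races with the punch above */
			volatile char c = p[0];
			(void)c;
			munmap(p, LEN);
		}
		return NULL;
	}

	int main(void)
	{
		pthread_t t1, t2;

		fd = open("/mnt/testfile", O_RDWR | O_CREAT, 0644);
		if (fd < 0 || ftruncate(fd, LEN) < 0)
			return 1;
		pthread_create(&t1, NULL, puncher, NULL);
		pthread_create(&t2, NULL, faulter, NULL);
		pthread_join(t1, NULL);
		pthread_join(t2, NULL);
		return 0;
	}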

CC: Jeff Layton <jlayton@kernel.org>
CC: ceph-devel@vger.kernel.org
Signed-off-by: Jan Kara <jack@suse.cz>
---
 fs/ceph/addr.c | 9 ++++++---
 fs/ceph/file.c | 2 ++
 2 files changed, 8 insertions(+), 3 deletions(-)

diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
index 26e66436f005..4f45e9754b5a 100644
--- a/fs/ceph/addr.c
+++ b/fs/ceph/addr.c
@@ -1519,9 +1519,11 @@ static vm_fault_t ceph_filemap_fault(struct vm_fault *vmf)
 		ret = VM_FAULT_SIGBUS;
 	} else {
 		struct address_space *mapping = inode->i_mapping;
-		struct page *page = find_or_create_page(mapping, 0,
-						mapping_gfp_constraint(mapping,
-						~__GFP_FS));
+		struct page *page;
+
+		down_read(&mapping->invalidate_lock);
+		page = find_or_create_page(mapping, 0,
+				mapping_gfp_constraint(mapping, ~__GFP_FS));
 		if (!page) {
 			ret = VM_FAULT_OOM;
 			goto out_inline;
@@ -1542,6 +1544,7 @@ static vm_fault_t ceph_filemap_fault(struct vm_fault *vmf)
 		vmf->page = page;
 		ret = VM_FAULT_MAJOR | VM_FAULT_LOCKED;
 out_inline:
+		up_read(&mapping->invalidate_lock);
 		dout("filemap_fault %p %llu~%zd read inline data ret %x\n",
 		     inode, off, (size_t)PAGE_SIZE, ret);
 	}
diff --git a/fs/ceph/file.c b/fs/ceph/file.c
index 209535d5b8d3..40fee8ff5cf9 100644
--- a/fs/ceph/file.c
+++ b/fs/ceph/file.c
@@ -2087,6 +2087,7 @@ static long ceph_fallocate(struct file *file, int mode,
 	if (ret < 0)
 		goto unlock;
 
+	down_write(&inode->i_mapping->invalidate_lock);
 	ceph_zero_pagecache_range(inode, offset, length);
 	ret = ceph_zero_objects(inode, offset, length);
 
@@ -2099,6 +2100,7 @@ static long ceph_fallocate(struct file *file, int mode,
 		if (dirty)
 			__mark_inode_dirty(inode, dirty);
 	}
+	up_write(&inode->i_mapping->invalidate_lock);
 
 	ceph_put_cap_refs(ci, got);
 unlock:
-- 
2.26.2


* [PATCH 12/12] cifs: Fix race between hole punch and page fault
From: Jan Kara @ 2021-04-23 17:29 UTC (permalink / raw)
  To: linux-fsdevel
  Cc: Christoph Hellwig, Amir Goldstein, Dave Chinner, Ted Tso,
	Jan Kara, Steve French, linux-cifs

Cifs has the following race between hole punching and page faults:

CPU1                                            CPU2
smb3_fallocate()
  smb3_punch_hole()
    truncate_pagecache_range()
                                                filemap_fault()
                                                  - loads old data into the
                                                    page cache
    SMB2_ioctl(..., FSCTL_SET_ZERO_DATA, ...)

And now we have stale data in the page cache. Fix the problem by
locking out faults (as well as reads) using mapping->invalidate_lock
while the hole punch is running.

CC: Steve French <sfrench@samba.org>
CC: linux-cifs@vger.kernel.org
Signed-off-by: Jan Kara <jack@suse.cz>
---
 fs/cifs/smb2ops.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
index f703204fb185..18231f9bc336 100644
--- a/fs/cifs/smb2ops.c
+++ b/fs/cifs/smb2ops.c
@@ -3543,6 +3543,7 @@ static long smb3_punch_hole(struct file *file, struct cifs_tcon *tcon,
 		return rc;
 	}
 
+	down_write(&inode->i_mapping->invalidate_lock);
 	/*
 	 * We implement the punch hole through ioctl, so we need remove the page
 	 * caches first, otherwise the data may be inconsistent with the server.
@@ -3560,6 +3561,7 @@ static long smb3_punch_hole(struct file *file, struct cifs_tcon *tcon,
 			sizeof(struct file_zero_data_information),
 			CIFSMaxBufSize, NULL, NULL);
 	free_xid(xid);
+	up_write(&inode->i_mapping->invalidate_lock);
 	return rc;
 }
 
-- 
2.26.2


* Re: [PATCH 02/12] mm: Protect operations adding pages to page cache with invalidate_lock
From: Matthew Wilcox @ 2021-04-23 18:30 UTC (permalink / raw)
  To: Jan Kara
  Cc: linux-fsdevel, Christoph Hellwig, Amir Goldstein, Dave Chinner,
	Ted Tso, ceph-devel, Chao Yu, Damien Le Moal, Darrick J. Wong,
	Hugh Dickins, Jaegeuk Kim, Jeff Layton, Johannes Thumshirn,
	linux-cifs, linux-ext4, linux-f2fs-devel, linux-mm, linux-xfs,
	Miklos Szeredi, Steve French

On Fri, Apr 23, 2021 at 07:29:31PM +0200, Jan Kara wrote:
> Currently, serializing operations such as page fault, read, or readahead
> against hole punching is rather difficult. The basic race scheme is
> like:
> 
> fallocate(FALLOC_FL_PUNCH_HOLE)			read / fault / ..
>   truncate_inode_pages_range()
> 						  <create pages in page
> 						   cache here>
>   <update fs block mapping and free blocks>
> 
> Now the problem is in this way read / page fault / readahead can
> instantiate pages in page cache with potentially stale data (if blocks
> get quickly reused). Avoiding this race is not simple - page locks do
> not work because we want to make sure there are *no* pages in given
> range.

One of the things I've had in mind for a while is moving the DAX locked
entry concept into the page cache proper.  It would avoid creating the
new semaphore, at the cost of taking the i_pages lock twice (once to
insert the entries that cover the range, and once to delete the entries).

It'd have pretty much the same effect, though -- read/fault/... would
block until the entry was deleted from the page cache.
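
A very rough sketch of what inserting such entries might look like
(purely illustrative: no such page cache API exists today, the marker
value is made up by analogy with DAX's locked entries, and waiting,
wakeups, error handling and coexistence with already-present pages are
all elided):

	static void lock_pagecache_range(struct address_space *mapping,
					 pgoff_t first, pgoff_t last)
	{
		XA_STATE(xas, &mapping->i_pages, first);
		void *entry = xa_mk_value(0);	/* hypothetical marker */
		pgoff_t index;

		xas_lock_irq(&xas);
		for (index = first; index <= last; index++) {
			xas_set(&xas, index);
			xas_store(&xas, entry);	/* lookups block on this */
		}
		xas_unlock_irq(&xas);
	}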


* Re: [PATCH 07/12] f2fs: Convert to using invalidate_lock
From: kernel test robot @ 2021-04-23 19:15 UTC (permalink / raw)
  To: Jan Kara, linux-fsdevel
  Cc: kbuild-all, Christoph Hellwig, Amir Goldstein, Dave Chinner,
	Ted Tso, Jan Kara, Jaegeuk Kim, Chao Yu, Chao Yu,
	linux-f2fs-devel

[-- Attachment #1: Type: text/plain, Size: 5278 bytes --]

Hi Jan,

I love your patch! Yet something to improve:

[auto build test ERROR on ext4/dev]
[also build test ERROR on fuse/for-next linus/master v5.12-rc8]
[cannot apply to hnaz-linux-mm/master next-20210423]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Jan-Kara/fs-Hole-punch-vs-page-cache-filling-races/20210424-013114
base:   https://git.kernel.org/pub/scm/linux/kernel/git/tytso/ext4.git dev
config: m68k-randconfig-m031-20210423 (attached as .config)
compiler: m68k-linux-gcc (GCC) 9.3.0
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://github.com/0day-ci/linux/commit/7a9e8e67e7f7d0070294e9f0a3567a3f28985383
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review Jan-Kara/fs-Hole-punch-vs-page-cache-filling-races/20210424-013114
        git checkout 7a9e8e67e7f7d0070294e9f0a3567a3f28985383
        # save the attached .config to linux build tree
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-9.3.0 make.cross W=1 ARCH=m68k 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

   fs/f2fs/file.c: In function 'f2fs_release_compress_blocks':
>> fs/f2fs/file.c:3567:14: error: 'mapping' undeclared (first use in this function)
    3567 |  down_write(&mapping->invalidate_lock);
         |              ^~~~~~~
   fs/f2fs/file.c:3567:14: note: each undeclared identifier is reported only once for each function it appears in


vim +/mapping +3567 fs/f2fs/file.c

  3515	
  3516	static int f2fs_release_compress_blocks(struct file *filp, unsigned long arg)
  3517	{
  3518		struct inode *inode = file_inode(filp);
  3519		struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
  3520		pgoff_t page_idx = 0, last_idx;
  3521		unsigned int released_blocks = 0;
  3522		int ret;
  3523		int writecount;
  3524	
  3525		if (!f2fs_sb_has_compression(F2FS_I_SB(inode)))
  3526			return -EOPNOTSUPP;
  3527	
  3528		if (!f2fs_compressed_file(inode))
  3529			return -EINVAL;
  3530	
  3531		if (f2fs_readonly(sbi->sb))
  3532			return -EROFS;
  3533	
  3534		ret = mnt_want_write_file(filp);
  3535		if (ret)
  3536			return ret;
  3537	
  3538		f2fs_balance_fs(F2FS_I_SB(inode), true);
  3539	
  3540		inode_lock(inode);
  3541	
  3542		writecount = atomic_read(&inode->i_writecount);
  3543		if ((filp->f_mode & FMODE_WRITE && writecount != 1) ||
  3544				(!(filp->f_mode & FMODE_WRITE) && writecount)) {
  3545			ret = -EBUSY;
  3546			goto out;
  3547		}
  3548	
  3549		if (IS_IMMUTABLE(inode)) {
  3550			ret = -EINVAL;
  3551			goto out;
  3552		}
  3553	
  3554		ret = filemap_write_and_wait_range(inode->i_mapping, 0, LLONG_MAX);
  3555		if (ret)
  3556			goto out;
  3557	
  3558		F2FS_I(inode)->i_flags |= F2FS_IMMUTABLE_FL;
  3559		f2fs_set_inode_flags(inode);
  3560		inode->i_ctime = current_time(inode);
  3561		f2fs_mark_inode_dirty_sync(inode, true);
  3562	
  3563		if (!atomic_read(&F2FS_I(inode)->i_compr_blocks))
  3564			goto out;
  3565	
  3566		down_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
> 3567		down_write(&mapping->invalidate_lock);
  3568	
  3569		last_idx = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE);
  3570	
  3571		while (page_idx < last_idx) {
  3572			struct dnode_of_data dn;
  3573			pgoff_t end_offset, count;
  3574	
  3575			set_new_dnode(&dn, inode, NULL, NULL, 0);
  3576			ret = f2fs_get_dnode_of_data(&dn, page_idx, LOOKUP_NODE);
  3577			if (ret) {
  3578				if (ret == -ENOENT) {
  3579					page_idx = f2fs_get_next_page_offset(&dn,
  3580									page_idx);
  3581					ret = 0;
  3582					continue;
  3583				}
  3584				break;
  3585			}
  3586	
  3587			end_offset = ADDRS_PER_PAGE(dn.node_page, inode);
  3588			count = min(end_offset - dn.ofs_in_node, last_idx - page_idx);
  3589			count = round_up(count, F2FS_I(inode)->i_cluster_size);
  3590	
  3591			ret = release_compress_blocks(&dn, count);
  3592	
  3593			f2fs_put_dnode(&dn);
  3594	
  3595			if (ret < 0)
  3596				break;
  3597	
  3598			page_idx += count;
  3599			released_blocks += ret;
  3600		}
  3601	
  3602		up_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
  3603		up_write(&mapping->invalidate_lock);
  3604	out:
  3605		inode_unlock(inode);
  3606	
  3607		mnt_drop_write_file(filp);
  3608	
  3609		if (ret >= 0) {
  3610			ret = put_user(released_blocks, (u64 __user *)arg);
  3611		} else if (released_blocks &&
  3612				atomic_read(&F2FS_I(inode)->i_compr_blocks)) {
  3613			set_sbi_flag(sbi, SBI_NEED_FSCK);
  3614			f2fs_warn(sbi, "%s: partial blocks were released i_ino=%lx "
  3615				"iblocks=%llu, released=%u, compr_blocks=%u, "
  3616				"run fsck to fix.",
  3617				__func__, inode->i_ino, inode->i_blocks,
  3618				released_blocks,
  3619				atomic_read(&F2FS_I(inode)->i_compr_blocks));
  3620		}
  3621	
  3622		return ret;
  3623	}
  3624	
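
The error itself is mechanical: the conversion takes
mapping->invalidate_lock in a function that never declares 'mapping'.
A minimal fix sketch is to derive it from the inode the function
already has (or to open-code inode->i_mapping at both call sites):

	struct inode *inode = file_inode(filp);
	/* the declaration the patch is missing: */
	struct address_space *mapping = inode->i_mapping;
	...
	down_write(&mapping->invalidate_lock);
	...
	up_write(&mapping->invalidate_lock);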

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 23381 bytes --]

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 10/12] shmem: Use invalidate_lock to protect fallocate
  2021-04-23 17:29 ` [PATCH 10/12] shmem: Use invalidate_lock to protect fallocate Jan Kara
@ 2021-04-23 19:27     ` kernel test robot
  2021-04-29  3:24     ` Hugh Dickins
  1 sibling, 0 replies; 40+ messages in thread
From: kernel test robot @ 2021-04-23 19:27 UTC (permalink / raw)
  To: Jan Kara, linux-fsdevel
  Cc: kbuild-all, Christoph Hellwig, Amir Goldstein, Dave Chinner,
	Ted Tso, Jan Kara, Hugh Dickins, linux-mm

[-- Attachment #1: Type: text/plain, Size: 9742 bytes --]

Hi Jan,

I love your patch! Perhaps something to improve:

[auto build test WARNING on ext4/dev]
[also build test WARNING on fuse/for-next linus/master v5.12-rc8]
[cannot apply to hnaz-linux-mm/master next-20210423]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Jan-Kara/fs-Hole-punch-vs-page-cache-filling-races/20210424-013114
base:   https://git.kernel.org/pub/scm/linux/kernel/git/tytso/ext4.git dev
config: s390-randconfig-s031-20210424 (attached as .config)
compiler: s390-linux-gcc (GCC) 9.3.0
reproduce:
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # apt-get install sparse
        # sparse version: v0.6.3-341-g8af24329-dirty
        # https://github.com/0day-ci/linux/commit/800cf89f11d437415eead2da969c3b07908fd406
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review Jan-Kara/fs-Hole-punch-vs-page-cache-filling-races/20210424-013114
        git checkout 800cf89f11d437415eead2da969c3b07908fd406
        # save the attached .config to linux build tree
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-9.3.0 make.cross C=1 CF='-fdiagnostic-prefix -D__CHECK_ENDIAN__' W=1 ARCH=s390 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All warnings (new ones prefixed by >>):

   mm/shmem.c: In function 'shmem_writepage':
>> mm/shmem.c:1326:10: warning: variable 'index' set but not used [-Wunused-but-set-variable]
    1326 |  pgoff_t index;
         |          ^~~~~


vim +/index +1326 mm/shmem.c

^1da177e4c3f41 Linus Torvalds         2005-04-16  1316  
^1da177e4c3f41 Linus Torvalds         2005-04-16  1317  /*
^1da177e4c3f41 Linus Torvalds         2005-04-16  1318   * Move the page from the page cache to the swap cache.
^1da177e4c3f41 Linus Torvalds         2005-04-16  1319   */
^1da177e4c3f41 Linus Torvalds         2005-04-16  1320  static int shmem_writepage(struct page *page, struct writeback_control *wbc)
^1da177e4c3f41 Linus Torvalds         2005-04-16  1321  {
^1da177e4c3f41 Linus Torvalds         2005-04-16  1322  	struct shmem_inode_info *info;
^1da177e4c3f41 Linus Torvalds         2005-04-16  1323  	struct address_space *mapping;
^1da177e4c3f41 Linus Torvalds         2005-04-16  1324  	struct inode *inode;
6922c0c7abd387 Hugh Dickins           2011-08-03  1325  	swp_entry_t swap;
6922c0c7abd387 Hugh Dickins           2011-08-03 @1326  	pgoff_t index;
^1da177e4c3f41 Linus Torvalds         2005-04-16  1327  
800d8c63b2e989 Kirill A. Shutemov     2016-07-26  1328  	VM_BUG_ON_PAGE(PageCompound(page), page);
^1da177e4c3f41 Linus Torvalds         2005-04-16  1329  	BUG_ON(!PageLocked(page));
^1da177e4c3f41 Linus Torvalds         2005-04-16  1330  	mapping = page->mapping;
^1da177e4c3f41 Linus Torvalds         2005-04-16  1331  	index = page->index;
^1da177e4c3f41 Linus Torvalds         2005-04-16  1332  	inode = mapping->host;
^1da177e4c3f41 Linus Torvalds         2005-04-16  1333  	info = SHMEM_I(inode);
^1da177e4c3f41 Linus Torvalds         2005-04-16  1334  	if (info->flags & VM_LOCKED)
^1da177e4c3f41 Linus Torvalds         2005-04-16  1335  		goto redirty;
d9fe526a83b84e Hugh Dickins           2008-02-04  1336  	if (!total_swap_pages)
^1da177e4c3f41 Linus Torvalds         2005-04-16  1337  		goto redirty;
^1da177e4c3f41 Linus Torvalds         2005-04-16  1338  
d9fe526a83b84e Hugh Dickins           2008-02-04  1339  	/*
97b713ba3ebaa6 Christoph Hellwig      2015-01-14  1340  	 * Our capabilities prevent regular writeback or sync from ever calling
97b713ba3ebaa6 Christoph Hellwig      2015-01-14  1341  	 * shmem_writepage; but a stacking filesystem might use ->writepage of
97b713ba3ebaa6 Christoph Hellwig      2015-01-14  1342  	 * its underlying filesystem, in which case tmpfs should write out to
97b713ba3ebaa6 Christoph Hellwig      2015-01-14  1343  	 * swap only in response to memory pressure, and not for the writeback
97b713ba3ebaa6 Christoph Hellwig      2015-01-14  1344  	 * threads or sync.
d9fe526a83b84e Hugh Dickins           2008-02-04  1345  	 */
48f170fb7d7db8 Hugh Dickins           2011-07-25  1346  	if (!wbc->for_reclaim) {
48f170fb7d7db8 Hugh Dickins           2011-07-25  1347  		WARN_ON_ONCE(1);	/* Still happens? Tell us about it! */
48f170fb7d7db8 Hugh Dickins           2011-07-25  1348  		goto redirty;
48f170fb7d7db8 Hugh Dickins           2011-07-25  1349  	}
1635f6a74152f1 Hugh Dickins           2012-05-29  1350  
1635f6a74152f1 Hugh Dickins           2012-05-29  1351  	/*
1635f6a74152f1 Hugh Dickins           2012-05-29  1352  	 * This is somewhat ridiculous, but without plumbing a SWAP_MAP_FALLOC
1635f6a74152f1 Hugh Dickins           2012-05-29  1353  	 * value into swapfile.c, the only way we can correctly account for a
1635f6a74152f1 Hugh Dickins           2012-05-29  1354  	 * fallocated page arriving here is now to initialize it and write it.
800cf89f11d437 Jan Kara               2021-04-23  1355  	 * Since a page added by currently running fallocate call cannot be
800cf89f11d437 Jan Kara               2021-04-23  1356  	 * dirtied and thus arrive here we know the fallocate has already
800cf89f11d437 Jan Kara               2021-04-23  1357  	 * completed and we are fine writing it out.
1635f6a74152f1 Hugh Dickins           2012-05-29  1358  	 */
1635f6a74152f1 Hugh Dickins           2012-05-29  1359  	if (!PageUptodate(page)) {
1635f6a74152f1 Hugh Dickins           2012-05-29  1360  		clear_highpage(page);
1635f6a74152f1 Hugh Dickins           2012-05-29  1361  		flush_dcache_page(page);
1635f6a74152f1 Hugh Dickins           2012-05-29  1362  		SetPageUptodate(page);
1635f6a74152f1 Hugh Dickins           2012-05-29  1363  	}
1635f6a74152f1 Hugh Dickins           2012-05-29  1364  
38d8b4e6bdc872 Huang Ying             2017-07-06  1365  	swap = get_swap_page(page);
48f170fb7d7db8 Hugh Dickins           2011-07-25  1366  	if (!swap.val)
48f170fb7d7db8 Hugh Dickins           2011-07-25  1367  		goto redirty;
d9fe526a83b84e Hugh Dickins           2008-02-04  1368  
b1dea800ac3959 Hugh Dickins           2011-05-11  1369  	/*
b1dea800ac3959 Hugh Dickins           2011-05-11  1370  	 * Add inode to shmem_unuse()'s list of swapped-out inodes,
6922c0c7abd387 Hugh Dickins           2011-08-03  1371  	 * if it's not already there.  Do it now before the page is
6922c0c7abd387 Hugh Dickins           2011-08-03  1372  	 * moved to swap cache, when its pagelock no longer protects
b1dea800ac3959 Hugh Dickins           2011-05-11  1373  	 * the inode from eviction.  But don't unlock the mutex until
6922c0c7abd387 Hugh Dickins           2011-08-03  1374  	 * we've incremented swapped, because shmem_unuse_inode() will
6922c0c7abd387 Hugh Dickins           2011-08-03  1375  	 * prune a !swapped inode from the swaplist under this mutex.
b1dea800ac3959 Hugh Dickins           2011-05-11  1376  	 */
b1dea800ac3959 Hugh Dickins           2011-05-11  1377  	mutex_lock(&shmem_swaplist_mutex);
05bf86b4ccfd0f Hugh Dickins           2011-05-14  1378  	if (list_empty(&info->swaplist))
b56a2d8af9147a Vineeth Remanan Pillai 2019-03-05  1379  		list_add(&info->swaplist, &shmem_swaplist);
b1dea800ac3959 Hugh Dickins           2011-05-11  1380  
4afab1cd256e42 Yang Shi               2019-11-30  1381  	if (add_to_swap_cache(page, swap,
3852f6768ede54 Joonsoo Kim            2020-08-11  1382  			__GFP_HIGH | __GFP_NOMEMALLOC | __GFP_NOWARN,
3852f6768ede54 Joonsoo Kim            2020-08-11  1383  			NULL) == 0) {
4595ef88d13613 Kirill A. Shutemov     2016-07-26  1384  		spin_lock_irq(&info->lock);
6922c0c7abd387 Hugh Dickins           2011-08-03  1385  		shmem_recalc_inode(inode);
267a4c76bbdb95 Hugh Dickins           2015-12-11  1386  		info->swapped++;
4595ef88d13613 Kirill A. Shutemov     2016-07-26  1387  		spin_unlock_irq(&info->lock);
6922c0c7abd387 Hugh Dickins           2011-08-03  1388  
267a4c76bbdb95 Hugh Dickins           2015-12-11  1389  		swap_shmem_alloc(swap);
267a4c76bbdb95 Hugh Dickins           2015-12-11  1390  		shmem_delete_from_page_cache(page, swp_to_radix_entry(swap));
267a4c76bbdb95 Hugh Dickins           2015-12-11  1391  
6922c0c7abd387 Hugh Dickins           2011-08-03  1392  		mutex_unlock(&shmem_swaplist_mutex);
d9fe526a83b84e Hugh Dickins           2008-02-04  1393  		BUG_ON(page_mapped(page));
9fab5619bdd7f8 Hugh Dickins           2009-03-31  1394  		swap_writepage(page, wbc);
^1da177e4c3f41 Linus Torvalds         2005-04-16  1395  		return 0;
^1da177e4c3f41 Linus Torvalds         2005-04-16  1396  	}
^1da177e4c3f41 Linus Torvalds         2005-04-16  1397  
6922c0c7abd387 Hugh Dickins           2011-08-03  1398  	mutex_unlock(&shmem_swaplist_mutex);
75f6d6d29a40b5 Minchan Kim            2017-07-06  1399  	put_swap_page(page, swap);
^1da177e4c3f41 Linus Torvalds         2005-04-16  1400  redirty:
^1da177e4c3f41 Linus Torvalds         2005-04-16  1401  	set_page_dirty(page);
d9fe526a83b84e Hugh Dickins           2008-02-04  1402  	if (wbc->for_reclaim)
d9fe526a83b84e Hugh Dickins           2008-02-04  1403  		return AOP_WRITEPAGE_ACTIVATE;	/* Return with page locked */
d9fe526a83b84e Hugh Dickins           2008-02-04  1404  	unlock_page(page);
d9fe526a83b84e Hugh Dickins           2008-02-04  1405  	return 0;
^1da177e4c3f41 Linus Torvalds         2005-04-16  1406  }
^1da177e4c3f41 Linus Torvalds         2005-04-16  1407  
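
The warning is fallout from the patch itself: with the fallocate
bookkeeping that consumed 'index' removed, nothing reads it any more.
A minimal fix sketch is simply to drop the local and its assignment:

	static int shmem_writepage(struct page *page,
				   struct writeback_control *wbc)
	{
		struct shmem_inode_info *info;
		struct address_space *mapping;
		struct inode *inode;
		swp_entry_t swap;	/* pgoff_t index; -- deleted */

		VM_BUG_ON_PAGE(PageCompound(page), page);
		BUG_ON(!PageLocked(page));
		mapping = page->mapping;
		/* index = page->index; -- deleted */
		inode = mapping->host;
		...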

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 14079 bytes --]

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 07/12] f2fs: Convert to using invalidate_lock
  2021-04-23 17:29 ` [PATCH 07/12] f2fs: " Jan Kara
@ 2021-04-23 20:05     ` kernel test robot
  2021-04-23 20:05     ` kernel test robot
  1 sibling, 0 replies; 40+ messages in thread
From: kernel test robot @ 2021-04-23 20:05 UTC (permalink / raw)
  To: Jan Kara, linux-fsdevel
  Cc: kbuild-all, clang-built-linux, Christoph Hellwig, Amir Goldstein,
	Dave Chinner, Ted Tso, Jan Kara, Jaegeuk Kim, Chao Yu, Chao Yu,
	linux-f2fs-devel

[-- Attachment #1: Type: text/plain, Size: 5429 bytes --]

Hi Jan,

I love your patch! Yet something to improve:

[auto build test ERROR on ext4/dev]
[also build test ERROR on fuse/for-next linus/master v5.12-rc8]
[cannot apply to hnaz-linux-mm/master next-20210423]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Jan-Kara/fs-Hole-punch-vs-page-cache-filling-races/20210424-013114
base:   https://git.kernel.org/pub/scm/linux/kernel/git/tytso/ext4.git dev
config: arm-randconfig-r031-20210423 (attached as .config)
compiler: clang version 13.0.0 (https://github.com/llvm/llvm-project 06234f758e1945084582cf80450b396f75a9c06e)
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # install arm cross compiling tool for clang build
        # apt-get install binutils-arm-linux-gnueabi
        # https://github.com/0day-ci/linux/commit/7a9e8e67e7f7d0070294e9f0a3567a3f28985383
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review Jan-Kara/fs-Hole-punch-vs-page-cache-filling-races/20210424-013114
        git checkout 7a9e8e67e7f7d0070294e9f0a3567a3f28985383
        # save the attached .config to linux build tree
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross W=1 ARCH=arm 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

>> fs/f2fs/file.c:3567:14: error: use of undeclared identifier 'mapping'
           down_write(&mapping->invalidate_lock);
                       ^
   fs/f2fs/file.c:3603:12: error: use of undeclared identifier 'mapping'
           up_write(&mapping->invalidate_lock);
                     ^
   2 errors generated.


vim +/mapping +3567 fs/f2fs/file.c

  3515	
  3516	static int f2fs_release_compress_blocks(struct file *filp, unsigned long arg)
  3517	{
  3518		struct inode *inode = file_inode(filp);
  3519		struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
  3520		pgoff_t page_idx = 0, last_idx;
  3521		unsigned int released_blocks = 0;
  3522		int ret;
  3523		int writecount;
  3524	
  3525		if (!f2fs_sb_has_compression(F2FS_I_SB(inode)))
  3526			return -EOPNOTSUPP;
  3527	
  3528		if (!f2fs_compressed_file(inode))
  3529			return -EINVAL;
  3530	
  3531		if (f2fs_readonly(sbi->sb))
  3532			return -EROFS;
  3533	
  3534		ret = mnt_want_write_file(filp);
  3535		if (ret)
  3536			return ret;
  3537	
  3538		f2fs_balance_fs(F2FS_I_SB(inode), true);
  3539	
  3540		inode_lock(inode);
  3541	
  3542		writecount = atomic_read(&inode->i_writecount);
  3543		if ((filp->f_mode & FMODE_WRITE && writecount != 1) ||
  3544				(!(filp->f_mode & FMODE_WRITE) && writecount)) {
  3545			ret = -EBUSY;
  3546			goto out;
  3547		}
  3548	
  3549		if (IS_IMMUTABLE(inode)) {
  3550			ret = -EINVAL;
  3551			goto out;
  3552		}
  3553	
  3554		ret = filemap_write_and_wait_range(inode->i_mapping, 0, LLONG_MAX);
  3555		if (ret)
  3556			goto out;
  3557	
  3558		F2FS_I(inode)->i_flags |= F2FS_IMMUTABLE_FL;
  3559		f2fs_set_inode_flags(inode);
  3560		inode->i_ctime = current_time(inode);
  3561		f2fs_mark_inode_dirty_sync(inode, true);
  3562	
  3563		if (!atomic_read(&F2FS_I(inode)->i_compr_blocks))
  3564			goto out;
  3565	
  3566		down_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
> 3567		down_write(&mapping->invalidate_lock);
  3568	
  3569		last_idx = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE);
  3570	
  3571		while (page_idx < last_idx) {
  3572			struct dnode_of_data dn;
  3573			pgoff_t end_offset, count;
  3574	
  3575			set_new_dnode(&dn, inode, NULL, NULL, 0);
  3576			ret = f2fs_get_dnode_of_data(&dn, page_idx, LOOKUP_NODE);
  3577			if (ret) {
  3578				if (ret == -ENOENT) {
  3579					page_idx = f2fs_get_next_page_offset(&dn,
  3580									page_idx);
  3581					ret = 0;
  3582					continue;
  3583				}
  3584				break;
  3585			}
  3586	
  3587			end_offset = ADDRS_PER_PAGE(dn.node_page, inode);
  3588			count = min(end_offset - dn.ofs_in_node, last_idx - page_idx);
  3589			count = round_up(count, F2FS_I(inode)->i_cluster_size);
  3590	
  3591			ret = release_compress_blocks(&dn, count);
  3592	
  3593			f2fs_put_dnode(&dn);
  3594	
  3595			if (ret < 0)
  3596				break;
  3597	
  3598			page_idx += count;
  3599			released_blocks += ret;
  3600		}
  3601	
  3602		up_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
  3603		up_write(&mapping->invalidate_lock);
  3604	out:
  3605		inode_unlock(inode);
  3606	
  3607		mnt_drop_write_file(filp);
  3608	
  3609		if (ret >= 0) {
  3610			ret = put_user(released_blocks, (u64 __user *)arg);
  3611		} else if (released_blocks &&
  3612				atomic_read(&F2FS_I(inode)->i_compr_blocks)) {
  3613			set_sbi_flag(sbi, SBI_NEED_FSCK);
  3614			f2fs_warn(sbi, "%s: partial blocks were released i_ino=%lx "
  3615				"iblocks=%llu, released=%u, compr_blocks=%u, "
  3616				"run fsck to fix.",
  3617				__func__, inode->i_ino, inode->i_blocks,
  3618				released_blocks,
  3619				atomic_read(&F2FS_I(inode)->i_compr_blocks));
  3620		}
  3621	
  3622		return ret;
  3623	}
  3624	

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 33609 bytes --]

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 0/12 v4] fs: Hole punch vs page cache filling races
  2021-04-23 17:29 [PATCH 0/12 v4] fs: Hole punch vs page cache filling races Jan Kara
@ 2021-04-23 22:07   ` Dave Chinner
  2021-04-23 17:29 ` [PATCH 02/12] mm: Protect operations adding pages to page cache with invalidate_lock Jan Kara
                     ` (11 subsequent siblings)
  12 siblings, 0 replies; 40+ messages in thread
From: Dave Chinner @ 2021-04-23 22:07 UTC (permalink / raw)
  To: Jan Kara
  Cc: linux-fsdevel, Christoph Hellwig, Amir Goldstein, Ted Tso,
	ceph-devel, Chao Yu, Damien Le Moal, Darrick J. Wong,
	Hugh Dickins, Jaegeuk Kim, Jeff Layton, Johannes Thumshirn,
	linux-cifs, linux-ext4, linux-f2fs-devel, linux-mm, linux-xfs,
	Miklos Szeredi, Steve French

Hi Jan,

In future, can you please use the same cc-list for the entire
patchset?

The stuff that has hit the XFS list (where I'm replying from)
doesn't give me any context as to what the core changes are that
allow XFS to be changed, so I can't review them in isolation.

I've got to spend time now reconstructing the patchset into a single
series because the delivery has been spread across three different
mailing lists and so hit three different procmail filters.  I'll comment
on the patches once I've reconstructed the series and read through
it as a whole...

/me considers the way people use "cc" tags in git commits for
including mailing lists on individual patches actively harmful.
Unless the recipient is subscribed to all the mailing lists the
patchset was CC'd to, they can't easily find the bits of the
patchset that didn't arrive in their mail box. Individual mailing
lists should receive entire patchsets for review, not random,
individual, context-free patches.

And, FWIW, cc'ing the cover letter to all the mailing lists is not
good enough. Being able to see the code change as a whole is what
matters for review, not the cover letter...

Cheers,

Dave.

On Fri, Apr 23, 2021 at 07:29:29PM +0200, Jan Kara wrote:
> Hello,
> 
> here is another version of my patches to address races between hole punching
> and page cache filling functions for ext4 and other filesystems. I think
> we are coming close to a complete solution so I've removed the RFC tag from
> the subject. I went through all filesystems supporting hole punching and
> converted them from their private locks to a generic one (usually fixing the
> race ext4 had as a side effect). I also found out ceph & cifs didn't have
> any protection from the hole punch vs page fault race either so I've added
> appropriate protections there. Open are still GFS2 and OCFS2 filesystems.
> GFS2 actually avoids the race but is prone to deadlocks (acquires the same lock
> both above and below mmap_sem), OCFS2 locking seems kind of hosed and some
> read, write, and hole punch paths are not properly serialized possibly leading
> to fs corruption. Both issues are non-trivial so respective fs maintainers
> have to deal with those (I've informed them and problems were generally
> confirmed). Anyway, for all the other filesystem this kind of race should
> be closed.
> 
> As a next step, I'd like to actually make sure all calls to
> truncate_inode_pages() happen under mapping->invalidate_lock, add the assert
> and then we can also get rid of i_size checks in some places (truncate can
> use the same serialization scheme as hole punch). But that step is mostly
> a cleanup so I'd like to get these functional fixes in first.
> 
> Changes since v3:
> * Renamed and moved lock to struct address_space
> * Added conversions of tmpfs, ceph, cifs, fuse, f2fs
> * Fixed error handling path in filemap_read()
> * Removed .page_mkwrite() cleanup from the series for now
> 
> Changes since v2:
> * Added documentation and comments regarding lock ordering and how the lock is
>   supposed to be used
> * Added conversions of ext2, xfs, zonefs
> * Added patch removing i_mapping_sem protection from .page_mkwrite handlers
> 
> Changes since v1:
> * Moved to using inode->i_mapping_sem instead of aops handler to acquire
>   appropriate lock
> 
> ---
> Motivation:
> 
> Amir has reported [1] that ext4 has a potential issue when reads can race
> with hole punching, possibly exposing stale data from freed blocks or even
> corrupting the filesystem when stale mapping data gets used for writeout. The
> problem is that during hole punching, new page cache pages can get instantiated
> and their block mapping looked up in a punched range after
> truncate_inode_pages() has run but before the filesystem removes blocks from
> the file. In principle any filesystem implementing hole punching thus needs to
> implement a mechanism to block instantiating page cache pages during hole
> punching to avoid this race. This is further complicated by the fact that there
> are multiple places that can instantiate pages in page cache.  We can have
> regular read(2) or page fault doing this but fadvise(2) or madvise(2) can also
> result in reading in page cache pages through force_page_cache_readahead().
> 
> There are a couple of ways to fix this. The first way (currently implemented by
> XFS) is to protect read(2) and *advise(2) calls with i_rwsem so that they are
> serialized with hole punching. This is easy to do but as a result all reads
> would then be serialized with writes and thus mixed read-write workloads suffer
> heavily on ext4. Thus this series introduces inode->i_mapping_sem and uses it
> when creating new pages in the page cache and looking up their corresponding
> block mapping. We also replace EXT4_I(inode)->i_mmap_sem with this new rwsem
> which provides necessary serialization with hole punching for ext4.
> 
> 								Honza
> 
> [1] https://lore.kernel.org/linux-fsdevel/CAOQ4uxjQNmxqmtA_VbYW0Su9rKRk2zobJmahcyeaEVOFKVQ5dw@mail.gmail.com/
> 
> Previous versions:
> Link: https://lore.kernel.org/linux-fsdevel/20210208163918.7871-1-jack@suse.cz/
> Link: http://lore.kernel.org/r/20210413105205.3093-1-jack@suse.cz
> 
> CC: ceph-devel@vger.kernel.org
> CC: Chao Yu <yuchao0@huawei.com>
> CC: Damien Le Moal <damien.lemoal@wdc.com>
> CC: "Darrick J. Wong" <darrick.wong@oracle.com>
> CC: Hugh Dickins <hughd@google.com>
> CC: Jaegeuk Kim <jaegeuk@kernel.org>
> CC: Jeff Layton <jlayton@kernel.org>
> CC: Johannes Thumshirn <jth@kernel.org>
> CC: linux-cifs@vger.kernel.org
> CC: <linux-ext4@vger.kernel.org>
> CC: linux-f2fs-devel@lists.sourceforge.net
> CC: <linux-fsdevel@vger.kernel.org>
> CC: <linux-mm@kvack.org>
> CC: <linux-xfs@vger.kernel.org>
> CC: Miklos Szeredi <miklos@szeredi.hu>
> CC: Steve French <sfrench@samba.org>
> CC: Ted Tso <tytso@mit.edu>
> 

-- 
Dave Chinner
david@fromorbit.com
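
As an aside on the "next step" in the quoted cover letter: once every
truncate_inode_pages() caller holds the lock, the assert can be a
one-line lockdep check. A sketch (illustration only, not part of this
series):

	void truncate_inode_pages(struct address_space *mapping, loff_t lstart)
	{
		/* Valid only once all callers hold invalidate_lock. */
		lockdep_assert_held_write(&mapping->invalidate_lock);
		truncate_inode_pages_range(mapping, lstart, (loff_t)-1);
	}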

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [f2fs-dev] [PATCH 0/12 v4] fs: Hole punch vs page cache filling races
@ 2021-04-23 22:07   ` Dave Chinner
  0 siblings, 0 replies; 40+ messages in thread
From: Dave Chinner @ 2021-04-23 22:07 UTC (permalink / raw)
  To: Jan Kara
  Cc: linux-cifs, Damien Le Moal, Ted Tso, Darrick J. Wong,
	Jeff Layton, linux-ext4, Amir Goldstein, Hugh Dickins,
	linux-f2fs-devel, Christoph Hellwig, linux-mm, Miklos Szeredi,
	linux-fsdevel, Jaegeuk Kim, ceph-devel, Johannes Thumshirn,
	Steve French, linux-xfs

Hi Jan,

In future, can you please use the same cc-list for the entire
patchset?

The stuff that has hit the XFS list (where I'm replying from)
doesn't give me any context as to what the core changes are that
allow XFS to be changed, so I can't review them in isolation.

I've got to spend time now reconstructing the patchset into a single
series because the delivery has been spread across three different
mailing lists and so hit 3 different procmail filters.  I'll comment
on the patches once I've reconstructed the series and read through
it as a whole...

/me considers the way people use "cc" tags in git commits for
including mailing lists on individual patches actively harmful.
Unless the recipient is subscribed to all the mailing lists the
patchset was CC'd to, they can't easily find the bits of the
patchset that didn't arrive in their mail box. Individual mailing
lists should receive entire patchsets for review, not random,
individual, context free patches. 

And, FWIW, cc'ing the cover letter to all the mailing lists is not
good enough. Being able to see the code change as a whole is what
matters for review, not the cover letter...

Cheers,

Dave.

On Fri, Apr 23, 2021 at 07:29:29PM +0200, Jan Kara wrote:
> Hello,
> 
> here is another version of my patches to address races between hole punching
> and page cache filling functions for ext4 and other filesystems. I think
> we are coming close to a complete solution so I've removed the RFC tag from
> the subject. I went through all filesystems supporting hole punching and
> converted them from their private locks to a generic one (usually fixing the
> race ext4 had as a side effect). I also found out ceph & cifs didn't have
> any protection from the hole punch vs page fault race either so I've added
> appropriate protections there. Open are still GFS2 and OCFS2 filesystems.
> GFS2 actually avoids the race but is prone to deadlocks (acquires the same lock
> both above and below mmap_sem), OCFS2 locking seems kind of hosed and some
> read, write, and hole punch paths are not properly serialized possibly leading
> to fs corruption. Both issues are non-trivial so respective fs maintainers
> have to deal with those (I've informed them and problems were generally
> confirmed). Anyway, for all the other filesystem this kind of race should
> be closed.
> 
> As a next step, I'd like to actually make sure all calls to
> truncate_inode_pages() happen under mapping->invalidate_lock, add the assert
> and then we can also get rid of i_size checks in some places (truncate can
> use the same serialization scheme as hole punch). But that step is mostly
> a cleanup so I'd like to get these functional fixes in first.
> 
> Changes since v3:
> * Renamed and moved lock to struct address_space
> * Added conversions of tmpfs, ceph, cifs, fuse, f2fs
> * Fixed error handling path in filemap_read()
> * Removed .page_mkwrite() cleanup from the series for now
> 
> Changes since v2:
> * Added documentation and comments regarding lock ordering and how the lock is
>   supposed to be used
> * Added conversions of ext2, xfs, zonefs
> * Added patch removing i_mapping_sem protection from .page_mkwrite handlers
> 
> Changes since v1:
> * Moved to using inode->i_mapping_sem instead of aops handler to acquire
>   appropriate lock
> 
> ---
> Motivation:
> 
> Amir has reported [1] a that ext4 has a potential issues when reads can race
> with hole punching possibly exposing stale data from freed blocks or even
> corrupting filesystem when stale mapping data gets used for writeout. The
> problem is that during hole punching, new page cache pages can get instantiated
> and block mapping from the looked up in a punched range after
> truncate_inode_pages() has run but before the filesystem removes blocks from
> the file. In principle any filesystem implementing hole punching thus needs to
> implement a mechanism to block instantiating page cache pages during hole
> punching to avoid this race. This is further complicated by the fact that there
> are multiple places that can instantiate pages in page cache.  We can have
> regular read(2) or page fault doing this but fadvise(2) or madvise(2) can also
> result in reading in page cache pages through force_page_cache_readahead().
> 
> There are couple of ways how to fix this. First way (currently implemented by
> XFS) is to protect read(2) and *advise(2) calls with i_rwsem so that they are
> serialized with hole punching. This is easy to do but as a result all reads
> would then be serialized with writes and thus mixed read-write workloads suffer
> heavily on ext4. Thus this series introduces inode->i_mapping_sem and uses it
> when creating new pages in the page cache and looking up their corresponding
> block mapping. We also replace EXT4_I(inode)->i_mmap_sem with this new rwsem
> which provides necessary serialization with hole punching for ext4.
> 
> 								Honza
> 
> [1] https://lore.kernel.org/linux-fsdevel/CAOQ4uxjQNmxqmtA_VbYW0Su9rKRk2zobJmahcyeaEVOFKVQ5dw@mail.gmail.com/
> 
> Previous versions:
> Link: https://lore.kernel.org/linux-fsdevel/20210208163918.7871-1-jack@suse.cz/
> Link: http://lore.kernel.org/r/20210413105205.3093-1-jack@suse.cz
> 
> CC: ceph-devel@vger.kernel.org
> CC: Chao Yu <yuchao0@huawei.com>
> CC: Damien Le Moal <damien.lemoal@wdc.com>
> CC: "Darrick J. Wong" <darrick.wong@oracle.com>
> CC: Hugh Dickins <hughd@google.com>
> CC: Jaegeuk Kim <jaegeuk@kernel.org>
> CC: Jeff Layton <jlayton@kernel.org>
> CC: Johannes Thumshirn <jth@kernel.org>
> CC: linux-cifs@vger.kernel.org
> CC: <linux-ext4@vger.kernel.org>
> CC: linux-f2fs-devel@lists.sourceforge.net
> CC: <linux-fsdevel@vger.kernel.org>
> CC: <linux-mm@kvack.org>
> CC: <linux-xfs@vger.kernel.org>
> CC: Miklos Szeredi <miklos@szeredi.hu>
> CC: Steve French <sfrench@samba.org>
> CC: Ted Tso <tytso@mit.edu>
> 

-- 
Dave Chinner
david@fromorbit.com


^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 05/12] xfs: Convert to use invalidate_lock
  2021-04-23 17:29 ` [PATCH 05/12] xfs: Convert to use invalidate_lock Jan Kara
@ 2021-04-23 22:39   ` Dave Chinner
  0 siblings, 0 replies; 40+ messages in thread
From: Dave Chinner @ 2021-04-23 22:39 UTC (permalink / raw)
  To: Jan Kara
  Cc: linux-fsdevel, Christoph Hellwig, Amir Goldstein, Ted Tso,
	Christoph Hellwig, linux-xfs, Darrick J. Wong

On Fri, Apr 23, 2021 at 07:29:34PM +0200, Jan Kara wrote:
> Use invalidate_lock instead of XFS internal i_mmap_lock. The intended
> purpose of invalidate_lock is exactly the same. Note that the locking in
> __xfs_filemap_fault() slightly changes as filemap_fault() already takes
> invalidate_lock.
> 
> Reviewed-by: Christoph Hellwig <hch@lst.de>
> CC: <linux-xfs@vger.kernel.org>
> CC: "Darrick J. Wong" <darrick.wong@oracle.com>
> Signed-off-by: Jan Kara <jack@suse.cz>
> ---
>  fs/xfs/xfs_file.c  | 12 ++++++-----
>  fs/xfs/xfs_inode.c | 52 ++++++++++++++++++++++++++--------------------
>  fs/xfs/xfs_inode.h |  1 -
>  fs/xfs/xfs_super.c |  2 --
>  4 files changed, 36 insertions(+), 31 deletions(-)
> 
> diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c
> index a007ca0711d9..2fc04ce0e9f9 100644
> --- a/fs/xfs/xfs_file.c
> +++ b/fs/xfs/xfs_file.c
> @@ -1282,7 +1282,7 @@ xfs_file_llseek(
>   *
>   * mmap_lock (MM)
>   *   sb_start_pagefault(vfs, freeze)
> - *     i_mmaplock (XFS - truncate serialisation)
> + *     invalidate_lock (vfs - truncate serialisation)
>   *       page_lock (MM)
>   *         i_lock (XFS - extent map serialisation)
>   */

I think this needs to say "vfs/XFS_MMAPLOCK", because it's not
obvious from reading the code that these are the same thing...
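
i.e. something like:

 *     invalidate_lock (vfs/XFS_MMAPLOCK - truncate serialisation)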

> @@ -1303,24 +1303,26 @@ __xfs_filemap_fault(
>  		file_update_time(vmf->vma->vm_file);
>  	}
>  
> -	xfs_ilock(XFS_I(inode), XFS_MMAPLOCK_SHARED);
>  	if (IS_DAX(inode)) {
>  		pfn_t pfn;
>  
> +		xfs_ilock(XFS_I(inode), XFS_MMAPLOCK_SHARED);
>  		ret = dax_iomap_fault(vmf, pe_size, &pfn, NULL,
>  				(write_fault && !vmf->cow_page) ?
>  				 &xfs_direct_write_iomap_ops :
>  				 &xfs_read_iomap_ops);
>  		if (ret & VM_FAULT_NEEDDSYNC)
>  			ret = dax_finish_sync_fault(vmf, pe_size, pfn);
> +		xfs_iunlock(XFS_I(inode), XFS_MMAPLOCK_SHARED);
>  	} else {
> -		if (write_fault)
> +		if (write_fault) {
> +			xfs_ilock(XFS_I(inode), XFS_MMAPLOCK_SHARED);
>  			ret = iomap_page_mkwrite(vmf,
>  					&xfs_buffered_write_iomap_ops);
> -		else
> +			xfs_iunlock(XFS_I(inode), XFS_MMAPLOCK_SHARED);
> +		} else
>  			ret = filemap_fault(vmf);
>  	}
> -	xfs_iunlock(XFS_I(inode), XFS_MMAPLOCK_SHARED);

This is kinda messy. If you lift the filemap_fault() path out of
this, then the rest of the code remains largely unchanged:

	if (!IS_DAX(inode) && !write_fault)
		return filemap_fault(vmf);

	if (write_fault) {
		sb_start_pagefault(inode->i_sb);
		file_update_time(vmf->vma->vm_file);
	}

	xfs_ilock(XFS_I(inode), XFS_MMAPLOCK_SHARED);
	if (IS_DAX(inode)) {
		pfn_t pfn;

		ret = dax_iomap_fault(vmf, pe_size, &pfn, NULL,
				(write_fault && !vmf->cow_page) ?
				 &xfs_direct_write_iomap_ops :
				 &xfs_read_iomap_ops);
		if (ret & VM_FAULT_NEEDDSYNC)
			ret = dax_finish_sync_fault(vmf, pe_size, pfn);
	} else {
		ret = iomap_page_mkwrite(vmf, &xfs_buffered_write_iomap_ops);
	}
	xfs_iunlock(XFS_I(inode), XFS_MMAPLOCK_SHARED);

	if (write_fault)
		sb_end_pagefault(inode->i_sb);
	return ret;

>  
>  	if (write_fault)
>  		sb_end_pagefault(inode->i_sb);
> diff --git a/fs/xfs/xfs_inode.c b/fs/xfs/xfs_inode.c
> index f93370bd7b1e..ac83409d0bf3 100644
> --- a/fs/xfs/xfs_inode.c
> +++ b/fs/xfs/xfs_inode.c
> @@ -134,7 +134,7 @@ xfs_ilock_attr_map_shared(
>  
>  /*
>   * In addition to i_rwsem in the VFS inode, the xfs inode contains 2
> - * multi-reader locks: i_mmap_lock and the i_lock.  This routine allows
> + * multi-reader locks: invalidate_lock and the i_lock.  This routine allows
>   * various combinations of the locks to be obtained.
>   *
>   * The 3 locks should always be ordered so that the IO lock is obtained first,
> @@ -142,23 +142,23 @@ xfs_ilock_attr_map_shared(
>   *
>   * Basic locking order:
>   *
> - * i_rwsem -> i_mmap_lock -> page_lock -> i_ilock
> + * i_rwsem -> invalidate_lock -> page_lock -> i_ilock
>   *
>   * mmap_lock locking order:
>   *
>   * i_rwsem -> page lock -> mmap_lock
> - * mmap_lock -> i_mmap_lock -> page_lock
> + * mmap_lock -> invalidate_lock -> page_lock

Same here - while XFS_MMAPLOCK still exists, the comments should
really associate the VFS invalidate lock with the XFS_MMAPLOCK...

>   *
>   * The difference in mmap_lock locking order mean that we cannot hold the
> - * i_mmap_lock over syscall based read(2)/write(2) based IO. These IO paths can
> - * fault in pages during copy in/out (for buffered IO) or require the mmap_lock
> - * in get_user_pages() to map the user pages into the kernel address space for
> - * direct IO. Similarly the i_rwsem cannot be taken inside a page fault because
> - * page faults already hold the mmap_lock.
> + * invalidate_lock over syscall based read(2)/write(2) based IO. These IO paths
> + * can fault in pages during copy in/out (for buffered IO) or require the
> + * mmap_lock in get_user_pages() to map the user pages into the kernel address
> + * space for direct IO. Similarly the i_rwsem cannot be taken inside a page
> + * fault because page faults already hold the mmap_lock.
>   *
>   * Hence to serialise fully against both syscall and mmap based IO, we need to
> - * take both the i_rwsem and the i_mmap_lock. These locks should *only* be both
> - * taken in places where we need to invalidate the page cache in a race
> + * take both the i_rwsem and the invalidate_lock. These locks should *only* be
> + * both taken in places where we need to invalidate the page cache in a race
>   * free manner (e.g. truncate, hole punch and other extent manipulation
>   * functions).
>   */
> @@ -190,10 +190,13 @@ xfs_ilock(
>  				 XFS_IOLOCK_DEP(lock_flags));
>  	}
>  
> -	if (lock_flags & XFS_MMAPLOCK_EXCL)
> -		mrupdate_nested(&ip->i_mmaplock, XFS_MMAPLOCK_DEP(lock_flags));
> -	else if (lock_flags & XFS_MMAPLOCK_SHARED)
> -		mraccess_nested(&ip->i_mmaplock, XFS_MMAPLOCK_DEP(lock_flags));
> +	if (lock_flags & XFS_MMAPLOCK_EXCL) {
> +		down_write_nested(&VFS_I(ip)->i_mapping->invalidate_lock,
> +				  XFS_MMAPLOCK_DEP(lock_flags));
> +	} else if (lock_flags & XFS_MMAPLOCK_SHARED) {
> +		down_read_nested(&VFS_I(ip)->i_mapping->invalidate_lock,
> +				 XFS_MMAPLOCK_DEP(lock_flags));
> +	}

Well, that neuters all the "xfs_isilocked(ip, XFS_MMAPLOCK_EXCL)"
checks when lockdep is not enabled. IOWs, CONFIG_XFS_DEBUG=y will no
longer report incorrect use of read vs write locking.  That was the
main issue that stopped the last attempt to convert the mrlocks to
plain rwsems.

> @@ -358,8 +361,11 @@ xfs_isilocked(
>  
>  	if (lock_flags & (XFS_MMAPLOCK_EXCL|XFS_MMAPLOCK_SHARED)) {
>  		if (!(lock_flags & XFS_MMAPLOCK_SHARED))
> -			return !!ip->i_mmaplock.mr_writer;
> -		return rwsem_is_locked(&ip->i_mmaplock.mr_lock);
> +			return !debug_locks ||
> +				lockdep_is_held_type(
> +					&VFS_I(ip)->i_mapping->invalidate_lock,
> +					0);
> +		return rwsem_is_locked(&VFS_I(ip)->i_mapping->invalidate_lock);
>  	}

Yeah, this is the problem - the use of lockdep_is_held_type() here.
We need rwsem_is_write_locked() here, not a dependency on lockdep.

I know, you're only copying the IOLOCK stuff, but I really don't
like the fact that we end up more dependent on lockdep for catching
basic lock usage mistakes. Lockdep is simply not usable in typical
developer QA environment because of the performance overhead and
false positives it introduces. It already takes 3-4 hours to do a
single fstests run on XFS without lockdep and it at least doubles
when lockdep is enabled.  Hence requiring lockdep just to check a
rwsem is held in write mode is not an improvement on the development
and verification side of things....
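
A minimal sketch of what such a helper could look like (hypothetical: it
assumes the RWSEM_WRITER_LOCKED bit, currently private to
kernel/locking/rwsem.c, were made visible to rwsem users):

	static inline bool rwsem_is_write_locked(struct rw_semaphore *sem)
	{
		/* a writer holds the sem iff the writer-locked bit is set */
		return atomic_long_read(&sem->count) & RWSEM_WRITER_LOCKED;
	}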

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 02/12] mm: Protect operations adding pages to page cache with invalidate_lock
  2021-04-23 17:29 ` [PATCH 02/12] mm: Protect operations adding pages to page cache with invalidate_lock Jan Kara
@ 2021-04-23 23:04     ` Dave Chinner
  2021-04-23 23:04     ` [f2fs-dev] " Dave Chinner
  1 sibling, 0 replies; 40+ messages in thread
From: Dave Chinner @ 2021-04-23 23:04 UTC (permalink / raw)
  To: Jan Kara
  Cc: linux-fsdevel, Christoph Hellwig, Amir Goldstein, Ted Tso,
	ceph-devel, Chao Yu, Damien Le Moal, Darrick J. Wong,
	Hugh Dickins, Jaegeuk Kim, Jeff Layton, Johannes Thumshirn,
	linux-cifs, linux-ext4, linux-f2fs-devel, linux-mm, linux-xfs,
	Miklos Szeredi, Steve French

On Fri, Apr 23, 2021 at 07:29:31PM +0200, Jan Kara wrote:
> Currently, serializing operations such as page fault, read, or readahead
> against hole punching is rather difficult. The basic race scheme is
> like:
> 
> fallocate(FALLOC_FL_PUNCH_HOLE)			read / fault / ..
>   truncate_inode_pages_range()
> 						  <create pages in page
> 						   cache here>
>   <update fs block mapping and free blocks>
> 
> Now the problem is in this way read / page fault / readahead can
> instantiate pages in page cache with potentially stale data (if blocks
> get quickly reused). Avoiding this race is not simple - page locks do
> not work because we want to make sure there are *no* pages in a given
> range. inode->i_rwsem does not work because page faults happen under
> mmap_sem which ranks below inode->i_rwsem. Also using it for reads makes
> the performance for mixed read-write workloads suffer.
> 
> So create a new rw_semaphore in the address_space - invalidate_lock -
> that protects adding of pages to page cache for page faults / reads /
> readahead.
.....
> diff --git a/fs/inode.c b/fs/inode.c
> index a047ab306f9a..43596dd8b61e 100644
> --- a/fs/inode.c
> +++ b/fs/inode.c
> @@ -191,6 +191,9 @@ int inode_init_always(struct super_block *sb, struct inode *inode)
>  	mapping_set_gfp_mask(mapping, GFP_HIGHUSER_MOVABLE);
>  	mapping->private_data = NULL;
>  	mapping->writeback_index = 0;
> +	init_rwsem(&mapping->invalidate_lock);
> +	lockdep_set_class(&mapping->invalidate_lock,
> +			  &sb->s_type->invalidate_lock_key);
>  	inode->i_private = NULL;
>  	inode->i_mapping = mapping;
>  	INIT_HLIST_HEAD(&inode->i_dentry);	/* buggered by rcu freeing */

Oh, lockdep. That might be a problem here.

The XFS_MMAPLOCK has non-trivial lockdep annotations so that it is
tracked as nesting properly against the IOLOCK and the ILOCK. When
you end up using xfs_ilock(XFS_MMAPLOCK..) to lock this, XFS will
add subclass annotations to the lock and they are going to be
different to the locking that the VFS does.
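
The two lockdep annotation schemes in play, both visible in this series (the
VFS side from the fs/inode.c hunk above, the XFS side from the xfs_ilock()
conversion in patch 05/12), are:

	/* VFS: one lockdep class per filesystem type */
	lockdep_set_class(&mapping->invalidate_lock,
			  &sb->s_type->invalidate_lock_key);

	/* XFS: additionally picks a nesting subclass per locking site */
	down_write_nested(&VFS_I(ip)->i_mapping->invalidate_lock,
			  XFS_MMAPLOCK_DEP(lock_flags));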

We'll see this from xfs_lock_two_inodes() (e.g. in
xfs_swap_extents()) and xfs_ilock2_io_mmap() during reflink
oper.....

Oooooh. The page cache copy done when breaking a shared extent needs
to lock out page faults on both the source and destination, but it
still needs to be able to populate the page cache of both the source
and destination file.....

.... and vfs_dedupe_file_range_compare() has to be able to read
pages from both the source and destination file to determine that
the contents are identical and that's done while we hold the
XFS_MMAPLOCK exclusively so the compare is atomic w.r.t. all other
user data modification operations being run....

I now have many doubts that this "serialise page faults by locking
out page cache instantiation" method actually works as a generic
mechanism. It's not just page cache invalidation that relies on
being able to lock out page faults: copy-on-write and deduplication
both require the ability to populate the page cache with source data
while page faults are locked out so the data can be compared/copied
atomically with the extent level manipulations and so user data
modifications cannot occur until the physical extent manipulation
operation has completed.

Having only just realised this is a problem, no solution has
immediately popped into my mind. I'll chew on it over the weekend,
but I'm not hopeful at this point...

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 0/12 v4] fs: Hole punch vs page cache filling races
  2021-04-23 22:07   ` [f2fs-dev] " Dave Chinner
@ 2021-04-23 23:51     ` Matthew Wilcox
  -1 siblings, 0 replies; 40+ messages in thread
From: Matthew Wilcox @ 2021-04-23 23:51 UTC (permalink / raw)
  To: Dave Chinner
  Cc: Jan Kara, linux-fsdevel, Christoph Hellwig, Amir Goldstein,
	Ted Tso, ceph-devel, Chao Yu, Damien Le Moal, Darrick J. Wong,
	Hugh Dickins, Jaegeuk Kim, Jeff Layton, Johannes Thumshirn,
	linux-cifs, linux-ext4, linux-f2fs-devel, linux-mm, linux-xfs,
	Miklos Szeredi, Steve French

On Sat, Apr 24, 2021 at 08:07:51AM +1000, Dave Chinner wrote:
> I've got to spend time now reconstructing the patchset into a single
> series because the delivery has been spread across three different
> mailing lists and so hit 3 different procmail filters.  I'll comment
> on the patches once I've reconstructed the series and read through
> it as a whole...

$ b4 mbox 20210423171010.12-1-jack@suse.cz
Looking up https://lore.kernel.org/r/20210423171010.12-1-jack%40suse.cz
Grabbing thread from lore.kernel.org/ceph-devel
6 messages in the thread
Saved ./20210423171010.12-1-jack@suse.cz.mbx
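
(For applying rather than just reading, "b4 am 20210423171010.12-1-jack@suse.cz"
similarly fetches the whole thread and saves a git-am-ready mbox.)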


^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 0/12 v4] fs: Hole punch vs page cache filling races
  2021-04-23 23:51     ` [f2fs-dev] " Matthew Wilcox
@ 2021-04-24  6:11       ` Christoph Hellwig
  -1 siblings, 0 replies; 40+ messages in thread
From: Christoph Hellwig @ 2021-04-24  6:11 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: Dave Chinner, Jan Kara, linux-fsdevel, Christoph Hellwig,
	Amir Goldstein, Ted Tso, ceph-devel, Chao Yu, Damien Le Moal,
	Darrick J. Wong, Hugh Dickins, Jaegeuk Kim, Jeff Layton,
	Johannes Thumshirn, linux-cifs, linux-ext4, linux-f2fs-devel,
	linux-mm, linux-xfs, Miklos Szeredi, Steve French

On Sat, Apr 24, 2021 at 12:51:49AM +0100, Matthew Wilcox wrote:
> On Sat, Apr 24, 2021 at 08:07:51AM +1000, Dave Chinner wrote:
> > I've got to spend time now reconstructing the patchset into a single
> > series because the delivery has been spread across three different
> > mailing lists and so hit 3 different procmail filters.  I'll comment
> > on the patches once I've reconstructed the series and read through
> > it as a whole...
> 
> $ b4 mbox 20210423171010.12-1-jack@suse.cz
> Looking up https://lore.kernel.org/r/20210423171010.12-1-jack%40suse.cz
> Grabbing thread from lore.kernel.org/ceph-devel
> 6 messages in the thread
> Saved ./20210423171010.12-1-jack@suse.cz.mbx

Yikes.  Just send them damn mails.  Or switch the lists to NNTP, but
don't let the people who are reviewing your patches do stupid work
with weird tools.

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 06/12] zonefs: Convert to using invalidate_lock
  2021-04-23 17:29 ` [PATCH 06/12] zonefs: Convert to using invalidate_lock Jan Kara
@ 2021-04-26  6:40   ` Damien Le Moal
  2021-04-26 16:24     ` Jan Kara
  0 siblings, 1 reply; 40+ messages in thread
From: Damien Le Moal @ 2021-04-26  6:40 UTC (permalink / raw)
  To: Jan Kara, linux-fsdevel
  Cc: hch, Amir Goldstein, Dave Chinner, Ted Tso, Johannes Thumshirn

On 2021/04/24 2:30, Jan Kara wrote:
> Use invalidate_lock instead of zonefs' private i_mmap_sem. The intended
> purpose is exactly the same. By this conversion we also fix a race
> between hole punching and read(2) / readahead(2) paths that can lead to
> stale page cache contents.

zonefs does not support hole punching since the blocks of a file are determined
by the device zone configuration and cannot change, ever. So I think you can
remove the second sentence above.

> 
> CC: Damien Le Moal <damien.lemoal@wdc.com>
> CC: Johannes Thumshirn <jth@kernel.org>
> CC: <linux-fsdevel@vger.kernel.org>
> Signed-off-by: Jan Kara <jack@suse.cz>
> ---
>  fs/zonefs/super.c  | 23 +++++------------------
>  fs/zonefs/zonefs.h |  7 +++----
>  2 files changed, 8 insertions(+), 22 deletions(-)
> 
> diff --git a/fs/zonefs/super.c b/fs/zonefs/super.c
> index 049e36c69ed7..60ac5587c880 100644
> --- a/fs/zonefs/super.c
> +++ b/fs/zonefs/super.c
> @@ -462,7 +462,7 @@ static int zonefs_file_truncate(struct inode *inode, loff_t isize)
>  	inode_dio_wait(inode);
>  
>  	/* Serialize against page faults */
> -	down_write(&zi->i_mmap_sem);
> +	down_write(&inode->i_mapping->invalidate_lock);
>  
>  	/* Serialize against zonefs_iomap_begin() */
>  	mutex_lock(&zi->i_truncate_mutex);
> @@ -500,7 +500,7 @@ static int zonefs_file_truncate(struct inode *inode, loff_t isize)
>  
>  unlock:
>  	mutex_unlock(&zi->i_truncate_mutex);
> -	up_write(&zi->i_mmap_sem);
> +	up_write(&inode->i_mapping->invalidate_lock);
>  
>  	return ret;
>  }
> @@ -575,18 +575,6 @@ static int zonefs_file_fsync(struct file *file, loff_t start, loff_t end,
>  	return ret;
>  }
>  
> -static vm_fault_t zonefs_filemap_fault(struct vm_fault *vmf)
> -{
> -	struct zonefs_inode_info *zi = ZONEFS_I(file_inode(vmf->vma->vm_file));
> -	vm_fault_t ret;
> -
> -	down_read(&zi->i_mmap_sem);
> -	ret = filemap_fault(vmf);
> -	up_read(&zi->i_mmap_sem);
> -
> -	return ret;
> -}
> -
>  static vm_fault_t zonefs_filemap_page_mkwrite(struct vm_fault *vmf)
>  {
>  	struct inode *inode = file_inode(vmf->vma->vm_file);
> @@ -607,16 +595,16 @@ static vm_fault_t zonefs_filemap_page_mkwrite(struct vm_fault *vmf)
>  	file_update_time(vmf->vma->vm_file);
>  
>  	/* Serialize against truncates */
> -	down_read(&zi->i_mmap_sem);
> +	down_read(&inode->i_mapping->invalidate_lock);
>  	ret = iomap_page_mkwrite(vmf, &zonefs_iomap_ops);
> -	up_read(&zi->i_mmap_sem);
> +	up_read(&inode->i_mapping->invalidate_lock);
>  
>  	sb_end_pagefault(inode->i_sb);
>  	return ret;
>  }
>  
>  static const struct vm_operations_struct zonefs_file_vm_ops = {
> -	.fault		= zonefs_filemap_fault,
> +	.fault		= filemap_fault,
>  	.map_pages	= filemap_map_pages,
>  	.page_mkwrite	= zonefs_filemap_page_mkwrite,
>  };
> @@ -1158,7 +1146,6 @@ static struct inode *zonefs_alloc_inode(struct super_block *sb)
>  
>  	inode_init_once(&zi->i_vnode);
>  	mutex_init(&zi->i_truncate_mutex);
> -	init_rwsem(&zi->i_mmap_sem);
>  	zi->i_wr_refcnt = 0;
>  
>  	return &zi->i_vnode;
> diff --git a/fs/zonefs/zonefs.h b/fs/zonefs/zonefs.h
> index 51141907097c..7b147907c328 100644
> --- a/fs/zonefs/zonefs.h
> +++ b/fs/zonefs/zonefs.h
> @@ -70,12 +70,11 @@ struct zonefs_inode_info {
>  	 * and changes to the inode private data, and in particular changes to
>  	 * a sequential file size on completion of direct IO writes.
>  	 * Serialization of mmap read IOs with truncate and syscall IO
> -	 * operations is done with i_mmap_sem in addition to i_truncate_mutex.
> -	 * Only zonefs_seq_file_truncate() takes both lock (i_mmap_sem first,
> -	 * i_truncate_mutex second).
> +	 * operations is done with invalidate_lock in addition to
> +	 * i_truncate_mutex.  Only zonefs_seq_file_truncate() takes both lock
> +	 * (invalidate_lock first, i_truncate_mutex second).
>  	 */
>  	struct mutex		i_truncate_mutex;
> -	struct rw_semaphore	i_mmap_sem;
>  
>  	/* guarded by i_truncate_mutex */
>  	unsigned int		i_wr_refcnt;
> 


-- 
Damien Le Moal
Western Digital Research

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 02/12] mm: Protect operations adding pages to page cache with invalidate_lock
  2021-04-23 23:04     ` [f2fs-dev] " Dave Chinner
@ 2021-04-26 15:46       ` Jan Kara
  -1 siblings, 0 replies; 40+ messages in thread
From: Jan Kara @ 2021-04-26 15:46 UTC (permalink / raw)
  To: Dave Chinner
  Cc: Jan Kara, linux-fsdevel, Christoph Hellwig, Amir Goldstein,
	Ted Tso, ceph-devel, Chao Yu, Damien Le Moal, Darrick J. Wong,
	Hugh Dickins, Jaegeuk Kim, Jeff Layton, Johannes Thumshirn,
	linux-cifs, linux-ext4, linux-f2fs-devel, linux-mm, linux-xfs,
	Miklos Szeredi, Steve French

On Sat 24-04-21 09:04:49, Dave Chinner wrote:
> On Fri, Apr 23, 2021 at 07:29:31PM +0200, Jan Kara wrote:
> > Currently, serializing operations such as page fault, read, or readahead
> > against hole punching is rather difficult. The basic race scheme is
> > like:
> > 
> > fallocate(FALLOC_FL_PUNCH_HOLE)			read / fault / ..
> >   truncate_inode_pages_range()
> > 						  <create pages in page
> > 						   cache here>
> >   <update fs block mapping and free blocks>
> > 
> > Now the problem is in this way read / page fault / readahead can
> > instantiate pages in page cache with potentially stale data (if blocks
> > get quickly reused). Avoiding this race is not simple - page locks do
> > not work because we want to make sure there are *no* pages in given
> > range. inode->i_rwsem does not work because page fault happens under
> > mmap_sem which ranks below inode->i_rwsem. Also using it for reads makes
> > the performance for mixed read-write workloads suffer.
> > 
> > So create a new rw_semaphore in the address_space - invalidate_lock -
> > that protects adding of pages to page cache for page faults / reads /
> > readahead.
> .....
> > diff --git a/fs/inode.c b/fs/inode.c
> > index a047ab306f9a..43596dd8b61e 100644
> > --- a/fs/inode.c
> > +++ b/fs/inode.c
> > @@ -191,6 +191,9 @@ int inode_init_always(struct super_block *sb, struct inode *inode)
> >  	mapping_set_gfp_mask(mapping, GFP_HIGHUSER_MOVABLE);
> >  	mapping->private_data = NULL;
> >  	mapping->writeback_index = 0;
> > +	init_rwsem(&mapping->invalidate_lock);
> > +	lockdep_set_class(&mapping->invalidate_lock,
> > +			  &sb->s_type->invalidate_lock_key);
> >  	inode->i_private = NULL;
> >  	inode->i_mapping = mapping;
> >  	INIT_HLIST_HEAD(&inode->i_dentry);	/* buggered by rcu freeing */
> 
> Oh, lockdep. That might be a problem here.
> 
> The XFS_MMAPLOCK has non-trivial lockdep annotations so that it is
> tracked as nesting properly against the IOLOCK and the ILOCK. When
> you end up using xfs_ilock(XFS_MMAPLOCK..) to lock this, XFS will
> add subclass annotations to the lock and they are going to be
> different to the locking that the VFS does.
> 
> We'll see this from xfs_lock_two_inodes() (e.g. in
> xfs_swap_extents()) and xfs_ilock2_io_mmap() during reflink
> oper.....

Thanks for the pointer. I was kind of wondering what lockdep nesting games
XFS plays but then forgot to look into details. Anyway, I've preserved the
nesting annotations in XFS and fstests run on XFS passed without lockdep
complaining, so there isn't at least an obvious breakage. Also, as far as I
can check the code, XFS's usage and lock nesting of MMAPLOCK should be
compatible with the nesting the VFS enforces (also see below)...
 
> Oooooh. The page cache copy done when breaking a shared extent needs
> to lock out page faults on both the source and destination, but it
> still needs to be able to populate the page cache of both the source
> and destination file.....
> 
> .... and vfs_dedupe_file_range_compare() has to be able to read
> pages from both the source and destination file to determine that
> the contents are identical and that's done while we hold the
> XFS_MMAPLOCK exclusively so the compare is atomic w.r.t. all other
> user data modification operations being run....

So I started wondering why fstests passed when reading this :) The reason
is that vfs_dedupe_get_page() does not use the standard page cache filling
path (neither the readahead API nor filemap_read()); instead it uses
read_mapping_page() and so enters the page cache filling path below the
level at which we take invalidate_lock, and thus everything works as it
should. So read_mapping_page() is similar to places like e.g.
block_truncate_page() or block_write_begin() which may end up filling in
page cache contents but they rely on upper layers to already hold
appropriate locks. I'll add a comment to read_mapping_page() about this.
Once all filesystems are converted to use invalidate_lock, I also want to
add WARN_ON_ONCE() to various places verifying that invalidate_lock is held
as it should...
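
Roughly, something like this in the page cache filling helpers (a sketch;
the exact set of places to assert still needs to be worked out):

	WARN_ON_ONCE(!rwsem_is_locked(&mapping->invalidate_lock));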
 
> I now have many doubts that this "serialise page faults by locking
> out page cache instantiation" method actually works as a generic
> mechanism. It's not just page cache invalidation that relies on
> being able to lock out page faults: copy-on-write and deduplication
> both require the ability to populate the page cache with source data
> while page faults are locked out so the data can be compared/copied
> atomically with the extent level manipulations and so user data
> modifications cannot occur until the physical extent manipulation
> operation has completed.

Hum, that is a good point. So there are actually two different things you
want to block at different places:

1) You really want to block page cache instantiation for operations such as
hole punch as that operation mutates data and thus contents would become
stale.

2) You want to block page cache *modification* for operations such as
dedupe while keeping the page cache in place. This is a somewhat different
requirement, but invalidate_lock can, in principle, cover it as well.
Basically we just need to keep the invalidate_lock usage in the .page_mkwrite
helpers. The question remains whether invalidate_lock is still a good name
with this usage in mind, and I probably need to update the documentation to
reflect this usage.
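
Concretely, the .page_mkwrite pattern that stays is the one visible in the
zonefs conversion above (a sketch; the foo_* names are made up):

	static vm_fault_t foo_page_mkwrite(struct vm_fault *vmf)
	{
		struct inode *inode = file_inode(vmf->vma->vm_file);
		vm_fault_t ret;

		sb_start_pagefault(inode->i_sb);
		file_update_time(vmf->vma->vm_file);
		/* blocks while hole punch / dedupe holds the lock exclusive */
		down_read(&inode->i_mapping->invalidate_lock);
		ret = iomap_page_mkwrite(vmf, &foo_iomap_ops);
		up_read(&inode->i_mapping->invalidate_lock);
		sb_end_pagefault(inode->i_sb);
		return ret;
	}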

								Honza
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 06/12] zonefs: Convert to using invalidate_lock
  2021-04-26  6:40   ` Damien Le Moal
@ 2021-04-26 16:24     ` Jan Kara
  0 siblings, 0 replies; 40+ messages in thread
From: Jan Kara @ 2021-04-26 16:24 UTC (permalink / raw)
  To: Damien Le Moal
  Cc: Jan Kara, linux-fsdevel, hch, Amir Goldstein, Dave Chinner,
	Ted Tso, Johannes Thumshirn

On Mon 26-04-21 06:40:27, Damien Le Moal wrote:
> On 2021/04/24 2:30, Jan Kara wrote:
> > Use invalidate_lock instead of zonefs' private i_mmap_sem. The intended
> > purpose is exactly the same. By this conversion we also fix a race
> > between hole punching and read(2) / readahead(2) paths that can lead to
> > stale page cache contents.
> 
> zonefs does not support hole punching since the blocks of a file are determined
> by the device zone configuration and cannot change, ever. So I think you can
> remove the second sentence above.

Sure, thanks for the correction. Updated.

								Honza

> 
> > 
> > CC: Damien Le Moal <damien.lemoal@wdc.com>
> > CC: Johannes Thumshirn <jth@kernel.org>
> > CC: <linux-fsdevel@vger.kernel.org>
> > Signed-off-by: Jan Kara <jack@suse.cz>
> > ---
> >  fs/zonefs/super.c  | 23 +++++------------------
> >  fs/zonefs/zonefs.h |  7 +++----
> >  2 files changed, 8 insertions(+), 22 deletions(-)
> > 
> > diff --git a/fs/zonefs/super.c b/fs/zonefs/super.c
> > index 049e36c69ed7..60ac5587c880 100644
> > --- a/fs/zonefs/super.c
> > +++ b/fs/zonefs/super.c
> > @@ -462,7 +462,7 @@ static int zonefs_file_truncate(struct inode *inode, loff_t isize)
> >  	inode_dio_wait(inode);
> >  
> >  	/* Serialize against page faults */
> > -	down_write(&zi->i_mmap_sem);
> > +	down_write(&inode->i_mapping->invalidate_lock);
> >  
> >  	/* Serialize against zonefs_iomap_begin() */
> >  	mutex_lock(&zi->i_truncate_mutex);
> > @@ -500,7 +500,7 @@ static int zonefs_file_truncate(struct inode *inode, loff_t isize)
> >  
> >  unlock:
> >  	mutex_unlock(&zi->i_truncate_mutex);
> > -	up_write(&zi->i_mmap_sem);
> > +	up_write(&inode->i_mapping->invalidate_lock);
> >  
> >  	return ret;
> >  }
> > @@ -575,18 +575,6 @@ static int zonefs_file_fsync(struct file *file, loff_t start, loff_t end,
> >  	return ret;
> >  }
> >  
> > -static vm_fault_t zonefs_filemap_fault(struct vm_fault *vmf)
> > -{
> > -	struct zonefs_inode_info *zi = ZONEFS_I(file_inode(vmf->vma->vm_file));
> > -	vm_fault_t ret;
> > -
> > -	down_read(&zi->i_mmap_sem);
> > -	ret = filemap_fault(vmf);
> > -	up_read(&zi->i_mmap_sem);
> > -
> > -	return ret;
> > -}
> > -
> >  static vm_fault_t zonefs_filemap_page_mkwrite(struct vm_fault *vmf)
> >  {
> >  	struct inode *inode = file_inode(vmf->vma->vm_file);
> > @@ -607,16 +595,16 @@ static vm_fault_t zonefs_filemap_page_mkwrite(struct vm_fault *vmf)
> >  	file_update_time(vmf->vma->vm_file);
> >  
> >  	/* Serialize against truncates */
> > -	down_read(&zi->i_mmap_sem);
> > +	down_read(&inode->i_mapping->invalidate_lock);
> >  	ret = iomap_page_mkwrite(vmf, &zonefs_iomap_ops);
> > -	up_read(&zi->i_mmap_sem);
> > +	up_read(&inode->i_mapping->invalidate_lock);
> >  
> >  	sb_end_pagefault(inode->i_sb);
> >  	return ret;
> >  }
> >  
> >  static const struct vm_operations_struct zonefs_file_vm_ops = {
> > -	.fault		= zonefs_filemap_fault,
> > +	.fault		= filemap_fault,
> >  	.map_pages	= filemap_map_pages,
> >  	.page_mkwrite	= zonefs_filemap_page_mkwrite,
> >  };
> > @@ -1158,7 +1146,6 @@ static struct inode *zonefs_alloc_inode(struct super_block *sb)
> >  
> >  	inode_init_once(&zi->i_vnode);
> >  	mutex_init(&zi->i_truncate_mutex);
> > -	init_rwsem(&zi->i_mmap_sem);
> >  	zi->i_wr_refcnt = 0;
> >  
> >  	return &zi->i_vnode;
> > diff --git a/fs/zonefs/zonefs.h b/fs/zonefs/zonefs.h
> > index 51141907097c..7b147907c328 100644
> > --- a/fs/zonefs/zonefs.h
> > +++ b/fs/zonefs/zonefs.h
> > @@ -70,12 +70,11 @@ struct zonefs_inode_info {
> >  	 * and changes to the inode private data, and in particular changes to
> >  	 * a sequential file size on completion of direct IO writes.
> >  	 * Serialization of mmap read IOs with truncate and syscall IO
> > -	 * operations is done with i_mmap_sem in addition to i_truncate_mutex.
> > -	 * Only zonefs_seq_file_truncate() takes both lock (i_mmap_sem first,
> > -	 * i_truncate_mutex second).
> > +	 * operations is done with invalidate_lock in addition to
> > +	 * i_truncate_mutex.  Only zonefs_seq_file_truncate() takes both lock
> > +	 * (invalidate_lock first, i_truncate_mutex second).
> >  	 */
> >  	struct mutex		i_truncate_mutex;
> > -	struct rw_semaphore	i_mmap_sem;
> >  
> >  	/* guarded by i_truncate_mutex */
> >  	unsigned int		i_wr_refcnt;
> > 
> 
> 
> -- 
> Damien Le Moal
> Western Digital Research
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 10/12] shmem: Use invalidate_lock to protect fallocate
  2021-04-23 17:29 ` [PATCH 10/12] shmem: Use invalidate_lock to protect fallocate Jan Kara
@ 2021-04-29  3:24     ` Hugh Dickins
  2021-04-29  3:24     ` Hugh Dickins
  1 sibling, 0 replies; 40+ messages in thread
From: Hugh Dickins @ 2021-04-29  3:24 UTC (permalink / raw)
  To: Jan Kara
  Cc: linux-fsdevel, Christoph Hellwig, Amir Goldstein, Dave Chinner,
	Ted Tso, Hugh Dickins, linux-mm

On Fri, 23 Apr 2021, Jan Kara wrote:

> We have to handle pages added by currently running shmem_fallocate()
> specially in shmem_writepage(). For this we use a serialization mechanism
> based on a structure attached to the inode->i_private field. If we protect
> allocation of pages in shmem_fallocate() with invalidate_lock instead,
> we are sure added pages cannot be dirtied until shmem_fallocate() is done
> (invalidate_lock blocks faults, i_rwsem blocks writes) and thus we
> cannot see those pages in shmem_writepage() and there's no need for the
> serialization mechanism.

Appealing diffstat, but NAK, this patch is based on a false premise.

All those pages that are added by shmem_fallocate() are marked dirty:
see the set_page_dirty(page) and comment above it in shmem_fallocate().

It's intentional that they should percolate through to shmem_writepage()
when memory is tight, and give feedback to fallocate that it should stop
(instead of getting into that horrid OOM-kill flurry that never even
frees any tmpfs anyway).

Hugh

> 
> CC: Hugh Dickins <hughd@google.com>
> CC: <linux-mm@kvack.org>
> Signed-off-by: Jan Kara <jack@suse.cz>
> ---
>  mm/shmem.c | 61 ++++++------------------------------------------------
>  1 file changed, 6 insertions(+), 55 deletions(-)
> 
> diff --git a/mm/shmem.c b/mm/shmem.c
> index f34162ac46de..7a2b0744031e 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -94,18 +94,6 @@ static struct vfsmount *shm_mnt;
>  /* Symlink up to this size is kmalloc'ed instead of using a swappable page */
>  #define SHORT_SYMLINK_LEN 128
>  
> -/*
> - * shmem_fallocate communicates with shmem_writepage via inode->i_private (with
> - * i_rwsem making sure that it has only one user at a time): we would prefer
> - * not to enlarge the shmem inode just for that.
> - */
> -struct shmem_falloc {
> -	pgoff_t start;		/* start of range currently being fallocated */
> -	pgoff_t next;		/* the next page offset to be fallocated */
> -	pgoff_t nr_falloced;	/* how many new pages have been fallocated */
> -	pgoff_t nr_unswapped;	/* how often writepage refused to swap out */
> -};
> -
>  struct shmem_options {
>  	unsigned long long blocks;
>  	unsigned long long inodes;
> @@ -1364,28 +1352,11 @@ static int shmem_writepage(struct page *page, struct writeback_control *wbc)
>  	 * This is somewhat ridiculous, but without plumbing a SWAP_MAP_FALLOC
>  	 * value into swapfile.c, the only way we can correctly account for a
>  	 * fallocated page arriving here is now to initialize it and write it.
> -	 *
> -	 * That's okay for a page already fallocated earlier, but if we have
> -	 * not yet completed the fallocation, then (a) we want to keep track
> -	 * of this page in case we have to undo it, and (b) it may not be a
> -	 * good idea to continue anyway, once we're pushing into swap.  So
> -	 * reactivate the page, and let shmem_fallocate() quit when too many.

(b) there commenting on communicating back to fallocate by nr_unswapped.

> +	 * Since a page added by currently running fallocate call cannot be
> +	 * dirtied and thus arrive here we know the fallocate has already
> +	 * completed and we are fine writing it out.
>  	 */
>  	if (!PageUptodate(page)) {
> -		if (inode->i_private) {
> -			struct shmem_falloc *shmem_falloc;
> -			spin_lock(&inode->i_lock);
> -			shmem_falloc = inode->i_private;
> -			if (shmem_falloc &&
> -			    index >= shmem_falloc->start &&
> -			    index < shmem_falloc->next)
> -				shmem_falloc->nr_unswapped++;
> -			else
> -				shmem_falloc = NULL;
> -			spin_unlock(&inode->i_lock);
> -			if (shmem_falloc)
> -				goto redirty;
> -		}
>  		clear_highpage(page);
>  		flush_dcache_page(page);
>  		SetPageUptodate(page);
> @@ -2629,9 +2600,9 @@ static long shmem_fallocate(struct file *file, int mode, loff_t offset,
>  							 loff_t len)
>  {
>  	struct inode *inode = file_inode(file);
> +	struct address_space *mapping = file->f_mapping;
>  	struct shmem_sb_info *sbinfo = SHMEM_SB(inode->i_sb);
>  	struct shmem_inode_info *info = SHMEM_I(inode);
> -	struct shmem_falloc shmem_falloc;
>  	pgoff_t start, index, end;
>  	int error;
>  
> @@ -2641,7 +2612,6 @@ static long shmem_fallocate(struct file *file, int mode, loff_t offset,
>  	inode_lock(inode);
>  
>  	if (mode & FALLOC_FL_PUNCH_HOLE) {
> -		struct address_space *mapping = file->f_mapping;
>  		loff_t unmap_start = round_up(offset, PAGE_SIZE);
>  		loff_t unmap_end = round_down(offset + len, PAGE_SIZE) - 1;
>  
> @@ -2680,14 +2650,7 @@ static long shmem_fallocate(struct file *file, int mode, loff_t offset,
>  		goto out;
>  	}
>  
> -	shmem_falloc.start = start;
> -	shmem_falloc.next  = start;
> -	shmem_falloc.nr_falloced = 0;
> -	shmem_falloc.nr_unswapped = 0;
> -	spin_lock(&inode->i_lock);
> -	inode->i_private = &shmem_falloc;
> -	spin_unlock(&inode->i_lock);
> -
> +	down_write(&mapping->invalidate_lock);
>  	for (index = start; index < end; index++) {
>  		struct page *page;
>  
> @@ -2697,8 +2660,6 @@ static long shmem_fallocate(struct file *file, int mode, loff_t offset,
>  		 */
>  		if (signal_pending(current))
>  			error = -EINTR;
> -		else if (shmem_falloc.nr_unswapped > shmem_falloc.nr_falloced)
> -			error = -ENOMEM;
>  		else
>  			error = shmem_getpage(inode, index, &page, SGP_FALLOC);
>  		if (error) {
> @@ -2711,14 +2672,6 @@ static long shmem_fallocate(struct file *file, int mode, loff_t offset,
>  			goto undone;
>  		}
>  
> -		/*
> -		 * Inform shmem_writepage() how far we have reached.
> -		 * No need for lock or barrier: we have the page lock.
> -		 */
> -		shmem_falloc.next++;
> -		if (!PageUptodate(page))
> -			shmem_falloc.nr_falloced++;
> -
>  		/*
>  		 * If !PageUptodate, leave it that way so that freeable pages
>  		 * can be recognized if we need to rollback on error later.

Which goes on to say:

		 * But set_page_dirty so that memory pressure will swap rather
		 * than free the pages we are allocating (and SGP_CACHE pages
		 * might still be clean: we now need to mark those dirty too).
		 */
		set_page_dirty(page);
		unlock_page(page);
		put_page(page);
		cond_resched();

> @@ -2736,9 +2689,7 @@ static long shmem_fallocate(struct file *file, int mode, loff_t offset,
>  		i_size_write(inode, offset + len);
>  	inode->i_ctime = current_time(inode);
>  undone:
> -	spin_lock(&inode->i_lock);
> -	inode->i_private = NULL;
> -	spin_unlock(&inode->i_lock);
> +	up_write(&mapping->invalidate_lock);
>  out:
>  	inode_unlock(inode);
>  	return error;
> -- 
> 2.26.2

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 10/12] shmem: Use invalidate_lock to protect fallocate
@ 2021-04-29  3:24     ` Hugh Dickins
  0 siblings, 0 replies; 40+ messages in thread
From: Hugh Dickins @ 2021-04-29  3:24 UTC (permalink / raw)
  To: Jan Kara
  Cc: linux-fsdevel, Christoph Hellwig, Amir Goldstein, Dave Chinner,
	Ted Tso, Hugh Dickins, linux-mm

On Fri, 23 Apr 2021, Jan Kara wrote:

> We have to handle pages added by currently running shmem_fallocate()
> specially in shmem_writepage(). For this we use serialization mechanism
> using structure attached to inode->i_private field. If we protect
> allocation of pages in shmem_fallocate() with invalidate_lock instead,
> we are sure added pages cannot be dirtied until shmem_fallocate() is done
> (invalidate_lock blocks faults, i_rwsem blocks writes) and thus we
> cannot see those pages in shmem_writepage() and there's no need for the
> serialization mechanism.

Appealing diffstat, but NAK, this patch is based on a false premise.

All those pages that are added by shmem_fallocate() are marked dirty:
see the set_page_dirty(page) and comment above it in shmem_fallocate().

It's intentional, that they should percolate through to shmem_writepage()
when memory is tight, and give feedback to fallocate that it should stop
(instead of getting into that horrid OOM-kill flurry that never even
frees any tmpfs anyway).

Hugh

> 
> CC: Hugh Dickins <hughd@google.com>
> CC: <linux-mm@kvack.org>
> Signed-off-by: Jan Kara <jack@suse.cz>
> ---
>  mm/shmem.c | 61 ++++++------------------------------------------------
>  1 file changed, 6 insertions(+), 55 deletions(-)
> 
> diff --git a/mm/shmem.c b/mm/shmem.c
> index f34162ac46de..7a2b0744031e 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -94,18 +94,6 @@ static struct vfsmount *shm_mnt;
>  /* Symlink up to this size is kmalloc'ed instead of using a swappable page */
>  #define SHORT_SYMLINK_LEN 128
>  
> -/*
> - * shmem_fallocate communicates with shmem_writepage via inode->i_private (with
> - * i_rwsem making sure that it has only one user at a time): we would prefer
> - * not to enlarge the shmem inode just for that.
> - */
> -struct shmem_falloc {
> -	pgoff_t start;		/* start of range currently being fallocated */
> -	pgoff_t next;		/* the next page offset to be fallocated */
> -	pgoff_t nr_falloced;	/* how many new pages have been fallocated */
> -	pgoff_t nr_unswapped;	/* how often writepage refused to swap out */
> -};
> -
>  struct shmem_options {
>  	unsigned long long blocks;
>  	unsigned long long inodes;
> @@ -1364,28 +1352,11 @@ static int shmem_writepage(struct page *page, struct writeback_control *wbc)
>  	 * This is somewhat ridiculous, but without plumbing a SWAP_MAP_FALLOC
>  	 * value into swapfile.c, the only way we can correctly account for a
>  	 * fallocated page arriving here is now to initialize it and write it.
> -	 *
> -	 * That's okay for a page already fallocated earlier, but if we have
> -	 * not yet completed the fallocation, then (a) we want to keep track
> -	 * of this page in case we have to undo it, and (b) it may not be a
> -	 * good idea to continue anyway, once we're pushing into swap.  So
> -	 * reactivate the page, and let shmem_fallocate() quit when too many.

(b) there commenting on communicating back to fallocate by nr_unswapped.

> +	 * Since a page added by currently running fallocate call cannot be
> +	 * dirtied and thus arrive here we know the fallocate has already
> +	 * completed and we are fine writing it out.
>  	 */
>  	if (!PageUptodate(page)) {
> -		if (inode->i_private) {
> -			struct shmem_falloc *shmem_falloc;
> -			spin_lock(&inode->i_lock);
> -			shmem_falloc = inode->i_private;
> -			if (shmem_falloc &&
> -			    index >= shmem_falloc->start &&
> -			    index < shmem_falloc->next)
> -				shmem_falloc->nr_unswapped++;
> -			else
> -				shmem_falloc = NULL;
> -			spin_unlock(&inode->i_lock);
> -			if (shmem_falloc)
> -				goto redirty;
> -		}
>  		clear_highpage(page);
>  		flush_dcache_page(page);
>  		SetPageUptodate(page);
> @@ -2629,9 +2600,9 @@ static long shmem_fallocate(struct file *file, int mode, loff_t offset,
>  							 loff_t len)
>  {
>  	struct inode *inode = file_inode(file);
> +	struct address_space *mapping = file->f_mapping;
>  	struct shmem_sb_info *sbinfo = SHMEM_SB(inode->i_sb);
>  	struct shmem_inode_info *info = SHMEM_I(inode);
> -	struct shmem_falloc shmem_falloc;
>  	pgoff_t start, index, end;
>  	int error;
>  
> @@ -2641,7 +2612,6 @@ static long shmem_fallocate(struct file *file, int mode, loff_t offset,
>  	inode_lock(inode);
>  
>  	if (mode & FALLOC_FL_PUNCH_HOLE) {
> -		struct address_space *mapping = file->f_mapping;
>  		loff_t unmap_start = round_up(offset, PAGE_SIZE);
>  		loff_t unmap_end = round_down(offset + len, PAGE_SIZE) - 1;
>  
> @@ -2680,14 +2650,7 @@ static long shmem_fallocate(struct file *file, int mode, loff_t offset,
>  		goto out;
>  	}
>  
> -	shmem_falloc.start = start;
> -	shmem_falloc.next  = start;
> -	shmem_falloc.nr_falloced = 0;
> -	shmem_falloc.nr_unswapped = 0;
> -	spin_lock(&inode->i_lock);
> -	inode->i_private = &shmem_falloc;
> -	spin_unlock(&inode->i_lock);
> -
> +	down_write(&mapping->invalidate_lock);
>  	for (index = start; index < end; index++) {
>  		struct page *page;
>  
> @@ -2697,8 +2660,6 @@ static long shmem_fallocate(struct file *file, int mode, loff_t offset,
>  		 */
>  		if (signal_pending(current))
>  			error = -EINTR;
> -		else if (shmem_falloc.nr_unswapped > shmem_falloc.nr_falloced)
> -			error = -ENOMEM;
>  		else
>  			error = shmem_getpage(inode, index, &page, SGP_FALLOC);
>  		if (error) {
> @@ -2711,14 +2672,6 @@ static long shmem_fallocate(struct file *file, int mode, loff_t offset,
>  			goto undone;
>  		}
>  
> -		/*
> -		 * Inform shmem_writepage() how far we have reached.
> -		 * No need for lock or barrier: we have the page lock.
> -		 */
> -		shmem_falloc.next++;
> -		if (!PageUptodate(page))
> -			shmem_falloc.nr_falloced++;
> -
>  		/*
>  		 * If !PageUptodate, leave it that way so that freeable pages
>  		 * can be recognized if we need to rollback on error later.

Which goes on to say:

		 * But set_page_dirty so that memory pressure will swap rather
		 * than free the pages we are allocating (and SGP_CACHE pages
		 * might still be clean: we now need to mark those dirty too).
		 */
		set_page_dirty(page);
		unlock_page(page);
		put_page(page);
		cond_resched();

> @@ -2736,9 +2689,7 @@ static long shmem_fallocate(struct file *file, int mode, loff_t offset,
>  		i_size_write(inode, offset + len);
>  	inode->i_ctime = current_time(inode);
>  undone:
> -	spin_lock(&inode->i_lock);
> -	inode->i_private = NULL;
> -	spin_unlock(&inode->i_lock);
> +	up_write(&mapping->invalidate_lock);
>  out:
>  	inode_unlock(inode);
>  	return error;
> -- 
> 2.26.2


^ permalink raw reply	[flat|nested] 40+ messages in thread
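
For reference, the feedback mechanism being argued over condenses to a
short sketch (simplified from the mm/shmem.c hunks quoted in this
message; the i_lock protection around i_private and most error paths
are elided):

	/* shmem_writepage(): a not-yet-uptodate page inside an in-flight
	 * fallocate range is counted and redirtied rather than swapped out. */
	if (!PageUptodate(page)) {
		struct shmem_falloc *shmem_falloc = inode->i_private;

		if (shmem_falloc &&
		    index >= shmem_falloc->start &&
		    index < shmem_falloc->next) {
			shmem_falloc->nr_unswapped++;	/* feedback to fallocate */
			goto redirty;
		}
		/* otherwise: initialize the page and let it be written */
	}

	/* shmem_fallocate(): give up once writepage has refused to swap out
	 * more pages than this call has managed to allocate. */
	for (index = start; index < end; index++) {
		if (signal_pending(current))
			error = -EINTR;
		else if (shmem_falloc.nr_unswapped > shmem_falloc.nr_falloced)
			error = -ENOMEM;	/* memory pressure: stop here */
		else
			error = shmem_getpage(inode, index, &page, SGP_FALLOC);
		if (error)
			goto undone;
	}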

* Re: [PATCH 09/12] shmem: Convert to using invalidate_lock
  2021-04-23 17:29 ` [PATCH 09/12] shmem: " Jan Kara
@ 2021-04-29  4:12     ` Hugh Dickins
  0 siblings, 0 replies; 40+ messages in thread
From: Hugh Dickins @ 2021-04-29  4:12 UTC (permalink / raw)
  To: Jan Kara
  Cc: linux-fsdevel, Christoph Hellwig, Amir Goldstein, Dave Chinner,
	Ted Tso, Hugh Dickins, linux-mm

On Fri, 23 Apr 2021, Jan Kara wrote:

> Shmem uses a home-grown mechanism for serializing hole punch with page
> fault. Use mapping->invalidate_lock for it instead. Admittedly the
> home-grown mechanism locks out only the range actually being punched out,
> while invalidate_lock locks the whole mapping, so it serializes more.
> But hole punch doesn't seem to be that critical an operation and the
> simplification is noticeable.

Home-grown indeed (and went through several different bugginesses,
Linus fixing issues in its waitq handling found years later).

I'd love to remove it all (rather than replace it by a new rwsem),
but never enough courage+time to do so: on optimistic days (that is,
rarely) I like to think that none of it would be needed nowadays;
but its gestation was difficult, and I cannot easily reproduce the
testing that demanded it (Sasha and Vlastimil helped a lot).

If you're interested in the history, I cannot point to one thread,
but "shmem: fix faulting into a hole while it's punched" finds
some of them, June/July 2014.  You've pushed me into re-reading
there, but I've not yet found the crucial evidence that stopped us
from reverting this mechanism, once we had abandoned the hole-punch
"pincer" in shmem_undo_range().

tmpfs's problem with faulting versus hole-punch was not the data
integrity issue you are attacking with invalidate_lock, but a
starvation issue triggered in Trinity fuzzing.

If invalidate_lock had existed at the time, I might have reused it
for this purpose too - I certainly wanted to avoid enlarging the
inode with another rwsem just for this; but also reluctant to add
another layer of locking to the common path (maybe I'm just silly
to try to avoid an rwsem which is so rarely taken for writing?).

But the code as it stands is working satisfactorily with minimal
overhead: so I'm not in a rush to remove or replace it yet. Thank
you for including tmpfs in your reach, but I think for the moment
I'd prefer you to leave this change out of the series. Maybe later
when it's settled in the fs/ filesystems (perhaps making guarantees
that we might want to extend to tmpfs) we could make this change -
but I'd still rather let hole-punch and fault race freely without it.

But your 01/12, fixing mm comments mentioning i_mutex, looked good:
Acked-by: Hugh Dickins <hughd@google.com>
to that one.  But I think it would be better extracted from this
invalidate_lock series, and just sent to akpm cc linux-mm on its own.

Thanks,
Hugh

> 
> CC: Hugh Dickins <hughd@google.com>
> CC: <linux-mm@kvack.org>
> Signed-off-by: Jan Kara <jack@suse.cz>
> ---
>  mm/shmem.c | 98 ++++--------------------------------------------------
>  1 file changed, 7 insertions(+), 91 deletions(-)
> 
> diff --git a/mm/shmem.c b/mm/shmem.c
> index 55b2888db542..f34162ac46de 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -95,12 +95,11 @@ static struct vfsmount *shm_mnt;
>  #define SHORT_SYMLINK_LEN 128
>  
>  /*
> - * shmem_fallocate communicates with shmem_fault or shmem_writepage via
> - * inode->i_private (with i_rwsem making sure that it has only one user at
> - * a time): we would prefer not to enlarge the shmem inode just for that.
> + * shmem_fallocate communicates with shmem_writepage via inode->i_private (with
> + * i_rwsem making sure that it has only one user at a time): we would prefer
> + * not to enlarge the shmem inode just for that.
>   */
>  struct shmem_falloc {
> -	wait_queue_head_t *waitq; /* faults into hole wait for punch to end */
>  	pgoff_t start;		/* start of range currently being fallocated */
>  	pgoff_t next;		/* the next page offset to be fallocated */
>  	pgoff_t nr_falloced;	/* how many new pages have been fallocated */
> @@ -1378,7 +1377,6 @@ static int shmem_writepage(struct page *page, struct writeback_control *wbc)
>  			spin_lock(&inode->i_lock);
>  			shmem_falloc = inode->i_private;
>  			if (shmem_falloc &&
> -			    !shmem_falloc->waitq &&
>  			    index >= shmem_falloc->start &&
>  			    index < shmem_falloc->next)
>  				shmem_falloc->nr_unswapped++;
> @@ -2025,18 +2023,6 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
>  	return error;
>  }
>  
> -/*
> - * This is like autoremove_wake_function, but it removes the wait queue
> - * entry unconditionally - even if something else had already woken the
> - * target.
> - */
> -static int synchronous_wake_function(wait_queue_entry_t *wait, unsigned mode, int sync, void *key)
> -{
> -	int ret = default_wake_function(wait, mode, sync, key);
> -	list_del_init(&wait->entry);
> -	return ret;
> -}
> -
>  static vm_fault_t shmem_fault(struct vm_fault *vmf)
>  {
>  	struct vm_area_struct *vma = vmf->vma;
> @@ -2046,65 +2032,6 @@ static vm_fault_t shmem_fault(struct vm_fault *vmf)
>  	int err;
>  	vm_fault_t ret = VM_FAULT_LOCKED;
>  
> -	/*
> -	 * Trinity finds that probing a hole which tmpfs is punching can
> -	 * prevent the hole-punch from ever completing: which in turn
> -	 * locks writers out with its hold on i_rwsem.  So refrain from
> -	 * faulting pages into the hole while it's being punched.  Although
> -	 * shmem_undo_range() does remove the additions, it may be unable to
> -	 * keep up, as each new page needs its own unmap_mapping_range() call,
> -	 * and the i_mmap tree grows ever slower to scan if new vmas are added.
> -	 *
> -	 * It does not matter if we sometimes reach this check just before the
> -	 * hole-punch begins, so that one fault then races with the punch:
> -	 * we just need to make racing faults a rare case.
> -	 *
> -	 * The implementation below would be much simpler if we just used a
> -	 * standard mutex or completion: but we cannot take i_rwsem in fault,
> -	 * and bloating every shmem inode for this unlikely case would be sad.
> -	 */
> -	if (unlikely(inode->i_private)) {
> -		struct shmem_falloc *shmem_falloc;
> -
> -		spin_lock(&inode->i_lock);
> -		shmem_falloc = inode->i_private;
> -		if (shmem_falloc &&
> -		    shmem_falloc->waitq &&
> -		    vmf->pgoff >= shmem_falloc->start &&
> -		    vmf->pgoff < shmem_falloc->next) {
> -			struct file *fpin;
> -			wait_queue_head_t *shmem_falloc_waitq;
> -			DEFINE_WAIT_FUNC(shmem_fault_wait, synchronous_wake_function);
> -
> -			ret = VM_FAULT_NOPAGE;
> -			fpin = maybe_unlock_mmap_for_io(vmf, NULL);
> -			if (fpin)
> -				ret = VM_FAULT_RETRY;
> -
> -			shmem_falloc_waitq = shmem_falloc->waitq;
> -			prepare_to_wait(shmem_falloc_waitq, &shmem_fault_wait,
> -					TASK_UNINTERRUPTIBLE);
> -			spin_unlock(&inode->i_lock);
> -			schedule();
> -
> -			/*
> -			 * shmem_falloc_waitq points into the shmem_fallocate()
> -			 * stack of the hole-punching task: shmem_falloc_waitq
> -			 * is usually invalid by the time we reach here, but
> -			 * finish_wait() does not dereference it in that case;
> -			 * though i_lock needed lest racing with wake_up_all().
> -			 */
> -			spin_lock(&inode->i_lock);
> -			finish_wait(shmem_falloc_waitq, &shmem_fault_wait);
> -			spin_unlock(&inode->i_lock);
> -
> -			if (fpin)
> -				fput(fpin);
> -			return ret;
> -		}
> -		spin_unlock(&inode->i_lock);
> -	}
> -
>  	sgp = SGP_CACHE;
>  
>  	if ((vma->vm_flags & VM_NOHUGEPAGE) ||
> @@ -2113,8 +2040,10 @@ static vm_fault_t shmem_fault(struct vm_fault *vmf)
>  	else if (vma->vm_flags & VM_HUGEPAGE)
>  		sgp = SGP_HUGE;
>  
> +	down_read(&inode->i_mapping->invalidate_lock);
>  	err = shmem_getpage_gfp(inode, vmf->pgoff, &vmf->page, sgp,
>  				  gfp, vma, vmf, &ret);
> +	up_read(&inode->i_mapping->invalidate_lock);
>  	if (err)
>  		return vmf_error(err);
>  	return ret;
> @@ -2715,7 +2644,6 @@ static long shmem_fallocate(struct file *file, int mode, loff_t offset,
>  		struct address_space *mapping = file->f_mapping;
>  		loff_t unmap_start = round_up(offset, PAGE_SIZE);
>  		loff_t unmap_end = round_down(offset + len, PAGE_SIZE) - 1;
> -		DECLARE_WAIT_QUEUE_HEAD_ONSTACK(shmem_falloc_waitq);
>  
>  		/* protected by i_rwsem */
>  		if (info->seals & (F_SEAL_WRITE | F_SEAL_FUTURE_WRITE)) {
> @@ -2723,24 +2651,13 @@ static long shmem_fallocate(struct file *file, int mode, loff_t offset,
>  			goto out;
>  		}
>  
> -		shmem_falloc.waitq = &shmem_falloc_waitq;
> -		shmem_falloc.start = (u64)unmap_start >> PAGE_SHIFT;
> -		shmem_falloc.next = (unmap_end + 1) >> PAGE_SHIFT;
> -		spin_lock(&inode->i_lock);
> -		inode->i_private = &shmem_falloc;
> -		spin_unlock(&inode->i_lock);
> -
> +		down_write(&mapping->invalidate_lock);
>  		if ((u64)unmap_end > (u64)unmap_start)
>  			unmap_mapping_range(mapping, unmap_start,
>  					    1 + unmap_end - unmap_start, 0);
>  		shmem_truncate_range(inode, offset, offset + len - 1);
>  		/* No need to unmap again: hole-punching leaves COWed pages */
> -
> -		spin_lock(&inode->i_lock);
> -		inode->i_private = NULL;
> -		wake_up_all(&shmem_falloc_waitq);
> -		WARN_ON_ONCE(!list_empty(&shmem_falloc_waitq.head));
> -		spin_unlock(&inode->i_lock);
> +		up_write(&mapping->invalidate_lock);
>  		error = 0;
>  		goto out;
>  	}
> @@ -2763,7 +2680,6 @@ static long shmem_fallocate(struct file *file, int mode, loff_t offset,
>  		goto out;
>  	}
>  
> -	shmem_falloc.waitq = NULL;
>  	shmem_falloc.start = start;
>  	shmem_falloc.next  = start;
>  	shmem_falloc.nr_falloced = 0;
> -- 
> 2.26.2

^ permalink raw reply	[flat|nested] 40+ messages in thread
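
Condensed from the diff above, the conversion Hugh is responding to
replaces the on-stack waitq dance with a plain rwsem pattern (a sketch
of the proposed, since-dropped change, not final code):

	/* shmem_fault(): page cache filling takes the lock shared... */
	down_read(&inode->i_mapping->invalidate_lock);
	err = shmem_getpage_gfp(inode, vmf->pgoff, &vmf->page, sgp,
				gfp, vma, vmf, &ret);
	up_read(&inode->i_mapping->invalidate_lock);

	/* ...while hole punch takes it exclusive, so no new pages can be
	 * instantiated in the punched range until truncation is done. */
	down_write(&mapping->invalidate_lock);
	if ((u64)unmap_end > (u64)unmap_start)
		unmap_mapping_range(mapping, unmap_start,
				    1 + unmap_end - unmap_start, 0);
	shmem_truncate_range(inode, offset, offset + len - 1);
	/* No need to unmap again: hole-punching leaves COWed pages */
	up_write(&mapping->invalidate_lock);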

* Re: [PATCH 10/12] shmem: Use invalidate_lock to protect fallocate
  2021-04-29  3:24     ` Hugh Dickins
  (?)
@ 2021-04-29  9:20     ` Jan Kara
  -1 siblings, 0 replies; 40+ messages in thread
From: Jan Kara @ 2021-04-29  9:20 UTC (permalink / raw)
  To: Hugh Dickins
  Cc: Jan Kara, linux-fsdevel, Christoph Hellwig, Amir Goldstein,
	Dave Chinner, Ted Tso, linux-mm

On Wed 28-04-21 20:24:59, Hugh Dickins wrote:
> On Fri, 23 Apr 2021, Jan Kara wrote:
> 
> > We have to handle pages added by a currently running shmem_fallocate()
> > specially in shmem_writepage(). For this we use a serialization mechanism
> > based on a structure attached to the inode->i_private field. If we instead
> > protect the allocation of pages in shmem_fallocate() with invalidate_lock,
> > we are sure the added pages cannot be dirtied until shmem_fallocate() is
> > done (invalidate_lock blocks faults, i_rwsem blocks writes), and thus we
> > cannot see those pages in shmem_writepage() and there is no need for the
> > serialization mechanism.
> 
> Appealing diffstat, but NAK, this patch is based on a false premise.
> 
> All those pages that are added by shmem_fallocate() are marked dirty:
> see the set_page_dirty(page) and comment above it in shmem_fallocate().

Aha, I missed that set_page_dirty(). Thanks for correcting me.

> It's intentional, that they should percolate through to shmem_writepage()
> when memory is tight, and give feedback to fallocate that it should stop
> (instead of getting into that horrid OOM-kill flurry that never even
> frees any tmpfs anyway).

I understand the reason; I just feel there should be a better way of
communicating memory pressure than ->writepage talking to ->fallocate
through a structure attached to inode->i_private. But now I see it isn't
as simple as I thought ;) Anyway, for now I'll just remove this patch.
Thanks for the review!

								Honza

> > CC: Hugh Dickins <hughd@google.com>
> > CC: <linux-mm@kvack.org>
> > Signed-off-by: Jan Kara <jack@suse.cz>
> > ---
> >  mm/shmem.c | 61 ++++++------------------------------------------------
> >  1 file changed, 6 insertions(+), 55 deletions(-)
> > 
> > diff --git a/mm/shmem.c b/mm/shmem.c
> > index f34162ac46de..7a2b0744031e 100644
> > --- a/mm/shmem.c
> > +++ b/mm/shmem.c
> > @@ -94,18 +94,6 @@ static struct vfsmount *shm_mnt;
> >  /* Symlink up to this size is kmalloc'ed instead of using a swappable page */
> >  #define SHORT_SYMLINK_LEN 128
> >  
> > -/*
> > - * shmem_fallocate communicates with shmem_writepage via inode->i_private (with
> > - * i_rwsem making sure that it has only one user at a time): we would prefer
> > - * not to enlarge the shmem inode just for that.
> > - */
> > -struct shmem_falloc {
> > -	pgoff_t start;		/* start of range currently being fallocated */
> > -	pgoff_t next;		/* the next page offset to be fallocated */
> > -	pgoff_t nr_falloced;	/* how many new pages have been fallocated */
> > -	pgoff_t nr_unswapped;	/* how often writepage refused to swap out */
> > -};
> > -
> >  struct shmem_options {
> >  	unsigned long long blocks;
> >  	unsigned long long inodes;
> > @@ -1364,28 +1352,11 @@ static int shmem_writepage(struct page *page, struct writeback_control *wbc)
> >  	 * This is somewhat ridiculous, but without plumbing a SWAP_MAP_FALLOC
> >  	 * value into swapfile.c, the only way we can correctly account for a
> >  	 * fallocated page arriving here is now to initialize it and write it.
> > -	 *
> > -	 * That's okay for a page already fallocated earlier, but if we have
> > -	 * not yet completed the fallocation, then (a) we want to keep track
> > -	 * of this page in case we have to undo it, and (b) it may not be a
> > -	 * good idea to continue anyway, once we're pushing into swap.  So
> > -	 * reactivate the page, and let shmem_fallocate() quit when too many.
> 
> The "(b)" there refers to communicating back to fallocate via nr_unswapped.
> 
> > +	 * Since a page added by a currently running fallocate call cannot
> > +	 * be dirtied and thus arrive here, we know the fallocate has
> > +	 * already completed and we are fine writing it out.
> >  	 */
> >  	if (!PageUptodate(page)) {
> > -		if (inode->i_private) {
> > -			struct shmem_falloc *shmem_falloc;
> > -			spin_lock(&inode->i_lock);
> > -			shmem_falloc = inode->i_private;
> > -			if (shmem_falloc &&
> > -			    index >= shmem_falloc->start &&
> > -			    index < shmem_falloc->next)
> > -				shmem_falloc->nr_unswapped++;
> > -			else
> > -				shmem_falloc = NULL;
> > -			spin_unlock(&inode->i_lock);
> > -			if (shmem_falloc)
> > -				goto redirty;
> > -		}
> >  		clear_highpage(page);
> >  		flush_dcache_page(page);
> >  		SetPageUptodate(page);
> > @@ -2629,9 +2600,9 @@ static long shmem_fallocate(struct file *file, int mode, loff_t offset,
> >  							 loff_t len)
> >  {
> >  	struct inode *inode = file_inode(file);
> > +	struct address_space *mapping = file->f_mapping;
> >  	struct shmem_sb_info *sbinfo = SHMEM_SB(inode->i_sb);
> >  	struct shmem_inode_info *info = SHMEM_I(inode);
> > -	struct shmem_falloc shmem_falloc;
> >  	pgoff_t start, index, end;
> >  	int error;
> >  
> > @@ -2641,7 +2612,6 @@ static long shmem_fallocate(struct file *file, int mode, loff_t offset,
> >  	inode_lock(inode);
> >  
> >  	if (mode & FALLOC_FL_PUNCH_HOLE) {
> > -		struct address_space *mapping = file->f_mapping;
> >  		loff_t unmap_start = round_up(offset, PAGE_SIZE);
> >  		loff_t unmap_end = round_down(offset + len, PAGE_SIZE) - 1;
> >  
> > @@ -2680,14 +2650,7 @@ static long shmem_fallocate(struct file *file, int mode, loff_t offset,
> >  		goto out;
> >  	}
> >  
> > -	shmem_falloc.start = start;
> > -	shmem_falloc.next  = start;
> > -	shmem_falloc.nr_falloced = 0;
> > -	shmem_falloc.nr_unswapped = 0;
> > -	spin_lock(&inode->i_lock);
> > -	inode->i_private = &shmem_falloc;
> > -	spin_unlock(&inode->i_lock);
> > -
> > +	down_write(&mapping->invalidate_lock);
> >  	for (index = start; index < end; index++) {
> >  		struct page *page;
> >  
> > @@ -2697,8 +2660,6 @@ static long shmem_fallocate(struct file *file, int mode, loff_t offset,
> >  		 */
> >  		if (signal_pending(current))
> >  			error = -EINTR;
> > -		else if (shmem_falloc.nr_unswapped > shmem_falloc.nr_falloced)
> > -			error = -ENOMEM;
> >  		else
> >  			error = shmem_getpage(inode, index, &page, SGP_FALLOC);
> >  		if (error) {
> > @@ -2711,14 +2672,6 @@ static long shmem_fallocate(struct file *file, int mode, loff_t offset,
> >  			goto undone;
> >  		}
> >  
> > -		/*
> > -		 * Inform shmem_writepage() how far we have reached.
> > -		 * No need for lock or barrier: we have the page lock.
> > -		 */
> > -		shmem_falloc.next++;
> > -		if (!PageUptodate(page))
> > -			shmem_falloc.nr_falloced++;
> > -
> >  		/*
> >  		 * If !PageUptodate, leave it that way so that freeable pages
> >  		 * can be recognized if we need to rollback on error later.
> 
> Which goes on to say:
> 
> 		 * But set_page_dirty so that memory pressure will swap rather
> 		 * than free the pages we are allocating (and SGP_CACHE pages
> 		 * might still be clean: we now need to mark those dirty too).
> 		 */
> 		set_page_dirty(page);
> 		unlock_page(page);
> 		put_page(page);
> 		cond_resched();
> 
> > @@ -2736,9 +2689,7 @@ static long shmem_fallocate(struct file *file, int mode, loff_t offset,
> >  		i_size_write(inode, offset + len);
> >  	inode->i_ctime = current_time(inode);
> >  undone:
> > -	spin_lock(&inode->i_lock);
> > -	inode->i_private = NULL;
> > -	spin_unlock(&inode->i_lock);
> > +	up_write(&mapping->invalidate_lock);
> >  out:
> >  	inode_unlock(inode);
> >  	return error;
> > -- 
> > 2.26.2
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR

^ permalink raw reply	[flat|nested] 40+ messages in thread
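
Putting Hugh's point together with the hunk he cites: the fallocate loop
itself dirties every page it allocates, which is exactly what routes
those pages into shmem_writepage() while the fallocate is still running
(a simplified sketch of the pre-patch loop body, details elided):

	error = shmem_getpage(inode, index, &page, SGP_FALLOC);
	if (error)
		goto undone;

	/* Inform shmem_writepage() how far we have reached. */
	shmem_falloc.next++;
	if (!PageUptodate(page))
		shmem_falloc.nr_falloced++;

	/* Dirty the page so memory pressure swaps rather than frees it -
	 * which is why an in-flight fallocate page can indeed show up in
	 * shmem_writepage(), contrary to the patch's premise. */
	set_page_dirty(page);
	unlock_page(page);
	put_page(page);
	cond_resched();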

* Re: [PATCH 09/12] shmem: Convert to using invalidate_lock
  2021-04-29  4:12     ` Hugh Dickins
  (?)
@ 2021-04-29  9:30     ` Jan Kara
  -1 siblings, 0 replies; 40+ messages in thread
From: Jan Kara @ 2021-04-29  9:30 UTC (permalink / raw)
  To: Hugh Dickins
  Cc: Jan Kara, linux-fsdevel, Christoph Hellwig, Amir Goldstein,
	Dave Chinner, Ted Tso, linux-mm

On Wed 28-04-21 21:12:36, Hugh Dickins wrote:
> On Fri, 23 Apr 2021, Jan Kara wrote:
> 
> > Shmem uses a home-grown mechanism for serializing hole punch with page
> > fault. Use mapping->invalidate_lock for it instead. Admittedly the
> > home-grown mechanism locks out only the range actually being punched out,
> > while invalidate_lock locks the whole mapping, so it serializes more.
> > But hole punch doesn't seem to be that critical an operation and the
> > simplification is noticeable.
> 
> Home-grown indeed (and went through several different bugginesses,
> Linus fixing issues in its waitq handling found years later).
> 
> I'd love to remove it all (rather than replace it by a new rwsem),
> but never enough courage+time to do so: on optimistic days (that is,
> rarely) I like to think that none of it would be needed nowadays;
> but its gestation was difficult, and I cannot easily reproduce the
> testing that demanded it (Sasha and Vlastimil helped a lot).
> 
> If you're interested in the history, I cannot point to one thread,
> but "shmem: fix faulting into a hole while it's punched" finds
> some of them, June/July 2014.  You've pushed me into re-reading
> there, but I've not yet found the crucial evidence that stopped us
> from reverting this mechanism, once we had abandoned the hole-punch
> "pincer" in shmem_undo_range().
> 
> tmpfs's problem with faulting versus hole-punch was not the data
> integrity issue you are attacking with invalidate_lock, but a
> starvation issue triggered in Trinity fuzzing.
> 
> If invalidate_lock had existed at the time, I might have reused it
> for this purpose too - I certainly wanted to avoid enlarging the
> inode with another rwsem just for this; but also reluctant to add
> another layer of locking to the common path (maybe I'm just silly
> to try to avoid an rwsem which is so rarely taken for writing?).
> 
> But the code as it stands is working satisfactorily with minimal
> overhead: so I'm not in a rush to remove or replace it yet. Thank
> you for including tmpfs in your reach, but I think for the moment
> I'd prefer you to leave this change out of the series. Maybe later
> when it's settled in the fs/ filesystems (perhaps making guarantees
> that we might want to extend to tmpfs) we could make this change -
> but I'd still rather let hole-punch and fault race freely without it.

OK, I'll remove the patch from the series for now. As you say, tmpfs is not
buggy so we can postpone the cleanup for later.

> But your 01/12, fixing mm comments mentioning i_mutex, looked good:
> Acked-by: Hugh Dickins <hughd@google.com>
> to that one.  But I think it would be better extracted from this
> invalidate_lock series, and just sent to akpm cc linux-mm on its own.

Thanks for the review and yes, I guess I can send that patch to Andrew earlier.

								Honza

> > 
> > CC: Hugh Dickins <hughd@google.com>
> > CC: <linux-mm@kvack.org>
> > Signed-off-by: Jan Kara <jack@suse.cz>
> > ---
> >  mm/shmem.c | 98 ++++--------------------------------------------------
> >  1 file changed, 7 insertions(+), 91 deletions(-)
> > 
> > diff --git a/mm/shmem.c b/mm/shmem.c
> > index 55b2888db542..f34162ac46de 100644
> > --- a/mm/shmem.c
> > +++ b/mm/shmem.c
> > @@ -95,12 +95,11 @@ static struct vfsmount *shm_mnt;
> >  #define SHORT_SYMLINK_LEN 128
> >  
> >  /*
> > - * shmem_fallocate communicates with shmem_fault or shmem_writepage via
> > - * inode->i_private (with i_rwsem making sure that it has only one user at
> > - * a time): we would prefer not to enlarge the shmem inode just for that.
> > + * shmem_fallocate communicates with shmem_writepage via inode->i_private (with
> > + * i_rwsem making sure that it has only one user at a time): we would prefer
> > + * not to enlarge the shmem inode just for that.
> >   */
> >  struct shmem_falloc {
> > -	wait_queue_head_t *waitq; /* faults into hole wait for punch to end */
> >  	pgoff_t start;		/* start of range currently being fallocated */
> >  	pgoff_t next;		/* the next page offset to be fallocated */
> >  	pgoff_t nr_falloced;	/* how many new pages have been fallocated */
> > @@ -1378,7 +1377,6 @@ static int shmem_writepage(struct page *page, struct writeback_control *wbc)
> >  			spin_lock(&inode->i_lock);
> >  			shmem_falloc = inode->i_private;
> >  			if (shmem_falloc &&
> > -			    !shmem_falloc->waitq &&
> >  			    index >= shmem_falloc->start &&
> >  			    index < shmem_falloc->next)
> >  				shmem_falloc->nr_unswapped++;
> > @@ -2025,18 +2023,6 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
> >  	return error;
> >  }
> >  
> > -/*
> > - * This is like autoremove_wake_function, but it removes the wait queue
> > - * entry unconditionally - even if something else had already woken the
> > - * target.
> > - */
> > -static int synchronous_wake_function(wait_queue_entry_t *wait, unsigned mode, int sync, void *key)
> > -{
> > -	int ret = default_wake_function(wait, mode, sync, key);
> > -	list_del_init(&wait->entry);
> > -	return ret;
> > -}
> > -
> >  static vm_fault_t shmem_fault(struct vm_fault *vmf)
> >  {
> >  	struct vm_area_struct *vma = vmf->vma;
> > @@ -2046,65 +2032,6 @@ static vm_fault_t shmem_fault(struct vm_fault *vmf)
> >  	int err;
> >  	vm_fault_t ret = VM_FAULT_LOCKED;
> >  
> > -	/*
> > -	 * Trinity finds that probing a hole which tmpfs is punching can
> > -	 * prevent the hole-punch from ever completing: which in turn
> > -	 * locks writers out with its hold on i_rwsem.  So refrain from
> > -	 * faulting pages into the hole while it's being punched.  Although
> > -	 * shmem_undo_range() does remove the additions, it may be unable to
> > -	 * keep up, as each new page needs its own unmap_mapping_range() call,
> > -	 * and the i_mmap tree grows ever slower to scan if new vmas are added.
> > -	 *
> > -	 * It does not matter if we sometimes reach this check just before the
> > -	 * hole-punch begins, so that one fault then races with the punch:
> > -	 * we just need to make racing faults a rare case.
> > -	 *
> > -	 * The implementation below would be much simpler if we just used a
> > -	 * standard mutex or completion: but we cannot take i_rwsem in fault,
> > -	 * and bloating every shmem inode for this unlikely case would be sad.
> > -	 */
> > -	if (unlikely(inode->i_private)) {
> > -		struct shmem_falloc *shmem_falloc;
> > -
> > -		spin_lock(&inode->i_lock);
> > -		shmem_falloc = inode->i_private;
> > -		if (shmem_falloc &&
> > -		    shmem_falloc->waitq &&
> > -		    vmf->pgoff >= shmem_falloc->start &&
> > -		    vmf->pgoff < shmem_falloc->next) {
> > -			struct file *fpin;
> > -			wait_queue_head_t *shmem_falloc_waitq;
> > -			DEFINE_WAIT_FUNC(shmem_fault_wait, synchronous_wake_function);
> > -
> > -			ret = VM_FAULT_NOPAGE;
> > -			fpin = maybe_unlock_mmap_for_io(vmf, NULL);
> > -			if (fpin)
> > -				ret = VM_FAULT_RETRY;
> > -
> > -			shmem_falloc_waitq = shmem_falloc->waitq;
> > -			prepare_to_wait(shmem_falloc_waitq, &shmem_fault_wait,
> > -					TASK_UNINTERRUPTIBLE);
> > -			spin_unlock(&inode->i_lock);
> > -			schedule();
> > -
> > -			/*
> > -			 * shmem_falloc_waitq points into the shmem_fallocate()
> > -			 * stack of the hole-punching task: shmem_falloc_waitq
> > -			 * is usually invalid by the time we reach here, but
> > -			 * finish_wait() does not dereference it in that case;
> > -			 * though i_lock needed lest racing with wake_up_all().
> > -			 */
> > -			spin_lock(&inode->i_lock);
> > -			finish_wait(shmem_falloc_waitq, &shmem_fault_wait);
> > -			spin_unlock(&inode->i_lock);
> > -
> > -			if (fpin)
> > -				fput(fpin);
> > -			return ret;
> > -		}
> > -		spin_unlock(&inode->i_lock);
> > -	}
> > -
> >  	sgp = SGP_CACHE;
> >  
> >  	if ((vma->vm_flags & VM_NOHUGEPAGE) ||
> > @@ -2113,8 +2040,10 @@ static vm_fault_t shmem_fault(struct vm_fault *vmf)
> >  	else if (vma->vm_flags & VM_HUGEPAGE)
> >  		sgp = SGP_HUGE;
> >  
> > +	down_read(&inode->i_mapping->invalidate_lock);
> >  	err = shmem_getpage_gfp(inode, vmf->pgoff, &vmf->page, sgp,
> >  				  gfp, vma, vmf, &ret);
> > +	up_read(&inode->i_mapping->invalidate_lock);
> >  	if (err)
> >  		return vmf_error(err);
> >  	return ret;
> > @@ -2715,7 +2644,6 @@ static long shmem_fallocate(struct file *file, int mode, loff_t offset,
> >  		struct address_space *mapping = file->f_mapping;
> >  		loff_t unmap_start = round_up(offset, PAGE_SIZE);
> >  		loff_t unmap_end = round_down(offset + len, PAGE_SIZE) - 1;
> > -		DECLARE_WAIT_QUEUE_HEAD_ONSTACK(shmem_falloc_waitq);
> >  
> >  		/* protected by i_rwsem */
> >  		if (info->seals & (F_SEAL_WRITE | F_SEAL_FUTURE_WRITE)) {
> > @@ -2723,24 +2651,13 @@ static long shmem_fallocate(struct file *file, int mode, loff_t offset,
> >  			goto out;
> >  		}
> >  
> > -		shmem_falloc.waitq = &shmem_falloc_waitq;
> > -		shmem_falloc.start = (u64)unmap_start >> PAGE_SHIFT;
> > -		shmem_falloc.next = (unmap_end + 1) >> PAGE_SHIFT;
> > -		spin_lock(&inode->i_lock);
> > -		inode->i_private = &shmem_falloc;
> > -		spin_unlock(&inode->i_lock);
> > -
> > +		down_write(&mapping->invalidate_lock);
> >  		if ((u64)unmap_end > (u64)unmap_start)
> >  			unmap_mapping_range(mapping, unmap_start,
> >  					    1 + unmap_end - unmap_start, 0);
> >  		shmem_truncate_range(inode, offset, offset + len - 1);
> >  		/* No need to unmap again: hole-punching leaves COWed pages */
> > -
> > -		spin_lock(&inode->i_lock);
> > -		inode->i_private = NULL;
> > -		wake_up_all(&shmem_falloc_waitq);
> > -		WARN_ON_ONCE(!list_empty(&shmem_falloc_waitq.head));
> > -		spin_unlock(&inode->i_lock);
> > +		up_write(&mapping->invalidate_lock);
> >  		error = 0;
> >  		goto out;
> >  	}
> > @@ -2763,7 +2680,6 @@ static long shmem_fallocate(struct file *file, int mode, loff_t offset,
> >  		goto out;
> >  	}
> >  
> > -	shmem_falloc.waitq = NULL;
> >  	shmem_falloc.start = start;
> >  	shmem_falloc.next  = start;
> >  	shmem_falloc.nr_falloced = 0;
> > -- 
> > 2.26.2
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR

^ permalink raw reply	[flat|nested] 40+ messages in thread
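
A rough picture of the locking hierarchy implied by this thread - Hugh's
old comment notes that i_rwsem cannot be taken in the fault path, while
invalidate_lock can (the ranking below is an inference from the
discussion, not a quote from the series documentation):

	/*
	 * Inferred lock ordering:
	 *
	 *   inode->i_rwsem			write(2), fallocate(2)
	 *     mapping->invalidate_lock		exclusive: truncate, hole punch
	 *
	 *   mmap_lock				page fault entry
	 *     mapping->invalidate_lock		shared: filling the page cache
	 */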

end of thread, other threads:[~2021-04-29  9:30 UTC | newest]

Thread overview: 40+ messages
2021-04-23 17:29 [PATCH 0/12 v4] fs: Hole punch vs page cache filling races Jan Kara
2021-04-23 17:29 ` [PATCH 01/12] mm: Fix comments mentioning i_mutex Jan Kara
2021-04-23 17:29 ` [PATCH 02/12] mm: Protect operations adding pages to page cache with invalidate_lock Jan Kara
2021-04-23 18:30   ` Matthew Wilcox
2021-04-23 18:30     ` [f2fs-dev] " Matthew Wilcox
2021-04-23 23:04   ` Dave Chinner
2021-04-23 23:04     ` [f2fs-dev] " Dave Chinner
2021-04-26 15:46     ` Jan Kara
2021-04-26 15:46       ` [f2fs-dev] " Jan Kara
2021-04-23 17:29 ` [PATCH 03/12] ext4: Convert to use mapping->invalidate_lock Jan Kara
2021-04-23 17:29 ` [PATCH 04/12] ext2: Convert to using invalidate_lock Jan Kara
2021-04-23 17:29 ` [PATCH 05/12] xfs: Convert to use invalidate_lock Jan Kara
2021-04-23 22:39   ` Dave Chinner
2021-04-23 17:29 ` [PATCH 06/12] zonefs: Convert to using invalidate_lock Jan Kara
2021-04-26  6:40   ` Damien Le Moal
2021-04-26 16:24     ` Jan Kara
2021-04-23 17:29 ` [PATCH 07/12] f2fs: " Jan Kara
2021-04-23 19:15   ` kernel test robot
2021-04-23 19:15     ` kernel test robot
2021-04-23 20:05   ` kernel test robot
2021-04-23 20:05     ` kernel test robot
2021-04-23 17:29 ` [PATCH 08/12] fuse: " Jan Kara
2021-04-23 17:29 ` [PATCH 09/12] shmem: " Jan Kara
2021-04-29  4:12   ` Hugh Dickins
2021-04-29  9:30     ` Jan Kara
2021-04-23 17:29 ` [PATCH 10/12] shmem: Use invalidate_lock to protect fallocate Jan Kara
2021-04-23 19:27   ` kernel test robot
2021-04-23 19:27     ` kernel test robot
2021-04-29  3:24   ` Hugh Dickins
2021-04-29  3:24     ` Hugh Dickins
2021-04-29  9:20     ` Jan Kara
2021-04-23 17:29 ` [PATCH 11/12] ceph: Fix race between hole punch and page fault Jan Kara
2021-04-23 17:29 ` [PATCH 12/12] cifs: " Jan Kara
2021-04-23 22:07 ` [PATCH 0/12 v4] fs: Hole punch vs page cache filling races Dave Chinner
2021-04-23 22:07   ` [f2fs-dev] " Dave Chinner
2021-04-23 23:51   ` Matthew Wilcox
2021-04-23 23:51     ` [f2fs-dev] " Matthew Wilcox
2021-04-24  6:11     ` Christoph Hellwig
2021-04-24  6:11       ` [f2fs-dev] " Christoph Hellwig
