* [RFC PATCH] mm, netfs: Provide a means of invalidation without using launder_folio
@ 2024-03-27 15:04 David Howells
2024-03-27 15:56 ` Trond Myklebust
2024-03-27 17:55 ` [RFC PATCH v2] " David Howells
0 siblings, 2 replies; 6+ messages in thread
From: David Howells @ 2024-03-27 15:04 UTC (permalink / raw)
To: Matthew Wilcox, Miklos Szeredi, Trond Myklebust, Christoph Hellwig
Cc: dhowells, Andrew Morton, Alexander Viro, Christian Brauner,
Jeff Layton, linux-mm, linux-fsdevel, netfs, v9fs, linux-afs,
ceph-devel, linux-cifs, linux-nfs, devel, linux-kernel
Implement a replacement for launder_folio[1]. The key feature of
invalidate_inode_pages2() is that it locks each folio individually, unmaps
it to prevent mmap'd accesses from interfering and calls the
->launder_folio() address_space op to flush it. This has problems: firstly,
each folio is written individually as one or more small writes; secondly,
adjacent folios cannot easily be batched into the same flush; thirdly, it's
yet another op to implement.
Here's a bit of a hacked-together solution which should probably be moved
to mm/:
Use the mmap lock to cause future faulting to wait, then unmap all the
folios if we have mmaps, then, conditionally, use ->writepages() to flush
any dirty data back and then discard all pages. The caller needs to hold a
lock to prevent ->write_iter() from getting underfoot.
Note that this does not prevent ->read_iter() from accessing the file
whilst we do this since that may operate without locking.
We also have the writeback_control available and so have the opportunity to
set a flag in it to tell the filesystem that we're doing an invalidation.
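As an aside, a hypothetical caller (not part of this patch; the function
name and locking shown are illustrative only) might look something like:
	/* Flush and discard the pagecache when the server tells us our
	 * cached copy is stale.  The inode lock keeps ->write_iter() from
	 * dirtying more folios whilst we work.
	 */
	static int example_invalidate_remote_inode(struct inode *inode)
	{
		int ret;
		inode_lock(inode);
		ret = netfs_invalidate_inode(inode, true);
		inode_unlock(inode);
		return ret;
	}
Passing false rather than true skips the ->writepages() flush and simply
discards everything, dirty folios included.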
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Matthew Wilcox <willy@infradead.org>
cc: Miklos Szeredi <miklos@szeredi.hu>
cc: Trond Myklebust <trond.myklebust@hammerspace.com>
cc: Christoph Hellwig <hch@lst.de>
cc: Andrew Morton <akpm@linux-foundation.org>
cc: Alexander Viro <viro@zeniv.linux.org.uk>
cc: Christian Brauner <brauner@kernel.org>
cc: Jeff Layton <jlayton@kernel.org>
cc: linux-mm@kvack.org
cc: linux-fsdevel@vger.kernel.org
cc: netfs@lists.linux.dev
cc: v9fs@lists.linux.dev
cc: linux-afs@lists.infradead.org
cc: ceph-devel@vger.kernel.org
cc: linux-cifs@vger.kernel.org
cc: linux-nfs@vger.kernel.org
cc: devel@lists.orangefs.org
Link: https://lore.kernel.org/r/1668172.1709764777@warthog.procyon.org.uk/ [1]
---
fs/netfs/misc.c | 56 ++++++++++++++++++++++++++++++++++++++++++++++++++
include/linux/netfs.h | 3 ++
mm/memory.c | 3 +-
3 files changed, 61 insertions(+), 1 deletion(-)
diff --git a/fs/netfs/misc.c b/fs/netfs/misc.c
index bc1fc54fb724..774ce825fbec 100644
--- a/fs/netfs/misc.c
+++ b/fs/netfs/misc.c
@@ -250,3 +250,59 @@ bool netfs_release_folio(struct folio *folio, gfp_t gfp)
return true;
}
EXPORT_SYMBOL(netfs_release_folio);
+
+extern void unmap_mapping_range_tree(struct rb_root_cached *root,
+ pgoff_t first_index,
+ pgoff_t last_index,
+ struct zap_details *details);
+
+/**
+ * netfs_invalidate_inode - Invalidate/forcibly write back an inode's pagecache
+ * @inode: The inode to flush
+ * @flush: Set to write back rather than simply invalidate.
+ *
+ * Invalidate all the folios on an inode, possibly writing them back first.
+ * Whilst the operation is undertaken, the mmap lock is held to prevent
+ * ->fault() from reinstalling the folios. The caller must hold a lock on the
+ * inode sufficient to prevent ->write_iter() from dirtying more folios.
+ */
+int netfs_invalidate_inode(struct inode *inode, bool flush)
+{
+ struct address_space *mapping = inode->i_mapping;
+
+ if (!mapping || !mapping->nrpages)
+ goto out;
+
+ /* Prevent folios from being faulted in. */
+ i_mmap_lock_write(mapping);
+
+ if (!mapping->nrpages)
+ goto unlock;
+
+ /* Assume there are probably PTEs only if there are mmaps. */
+ if (unlikely(!RB_EMPTY_ROOT(&mapping->i_mmap.rb_root))) {
+ struct zap_details details = { };
+
+ unmap_mapping_range_tree(&mapping->i_mmap, 0, LLONG_MAX, &details);
+ }
+
+ /* Write back the data if we're asked to. */
+ if (flush) {
+ struct writeback_control wbc = {
+ .sync_mode = WB_SYNC_ALL,
+ .nr_to_write = LONG_MAX,
+ .range_start = 0,
+ .range_end = LLONG_MAX,
+ };
+
+ filemap_fdatawrite_wbc(mapping, &wbc);
+ }
+
+ /* Wait for writeback to complete on all folios and discard. */
+ truncate_inode_pages_range(mapping, 0, LLONG_MAX);
+
+unlock:
+ i_mmap_unlock_write(mapping);
+out:
+ return filemap_check_errors(mapping);
+}
diff --git a/include/linux/netfs.h b/include/linux/netfs.h
index 298552f5122c..40dc34ee291d 100644
--- a/include/linux/netfs.h
+++ b/include/linux/netfs.h
@@ -400,6 +400,9 @@ ssize_t netfs_buffered_write_iter_locked(struct kiocb *iocb, struct iov_iter *fr
ssize_t netfs_unbuffered_write_iter(struct kiocb *iocb, struct iov_iter *from);
ssize_t netfs_file_write_iter(struct kiocb *iocb, struct iov_iter *from);
+/* High-level invalidation API */
+int netfs_invalidate_inode(struct inode *inode, bool flush);
+
/* Address operations API */
struct readahead_control;
void netfs_readahead(struct readahead_control *);
diff --git a/mm/memory.c b/mm/memory.c
index f2bc6dd15eb8..106f32c7d7fb 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3665,7 +3665,7 @@ static void unmap_mapping_range_vma(struct vm_area_struct *vma,
zap_page_range_single(vma, start_addr, end_addr - start_addr, details);
}
-static inline void unmap_mapping_range_tree(struct rb_root_cached *root,
+inline void unmap_mapping_range_tree(struct rb_root_cached *root,
pgoff_t first_index,
pgoff_t last_index,
struct zap_details *details)
@@ -3685,6 +3685,7 @@ static inline void unmap_mapping_range_tree(struct rb_root_cached *root,
details);
}
}
+EXPORT_SYMBOL_GPL(unmap_mapping_range_tree);
/**
* unmap_mapping_folio() - Unmap single folio from processes.
* Re: [RFC PATCH] mm, netfs: Provide a means of invalidation without using launder_folio
2024-03-27 15:04 [RFC PATCH] mm, netfs: Provide a means of invalidation without using launder_folio David Howells
@ 2024-03-27 15:56 ` Trond Myklebust
2024-03-27 17:46 ` Matthew Wilcox
2024-03-27 17:55 ` [RFC PATCH v2] " David Howells
1 sibling, 1 reply; 6+ messages in thread
From: Trond Myklebust @ 2024-03-27 15:56 UTC (permalink / raw)
To: hch, miklos, willy, dhowells
Cc: ceph-devel, linux-cifs, brauner, linux-mm, linux-fsdevel, akpm,
v9fs, netfs, jlayton, viro, linux-nfs, linux-kernel, devel,
linux-afs
On Wed, 2024-03-27 at 15:04 +0000, David Howells wrote:
> Implement a replacement for launder_folio[1]. The key feature of
> invalidate_inode_pages2() is that it locks each folio individually,
> unmaps it to prevent mmap'd accesses from interfering and calls the
> ->launder_folio() address_space op to flush it. This has problems:
> firstly, each folio is written individually as one or more small
> writes; secondly, adjacent folios cannot easily be batched into the
> same flush; thirdly, it's yet another op to implement.
>
> Here's a bit of a hacked-together solution which should probably be
> moved to mm/:
>
> Use the mmap lock to cause future faulting to wait, then unmap all the
> folios if we have mmaps, then, conditionally, use ->writepages() to
> flush any dirty data back and then discard all pages. The caller needs
> to hold a lock to prevent ->write_iter() from getting underfoot.
>
> Note that this does not prevent ->read_iter() from accessing the file
> whilst we do this since that may operate without locking.
>
> We also have the writeback_control available and so have the
> opportunity to set a flag in it to tell the filesystem that we're
> doing an invalidation.
This is hardly a drop-in replacement for launder_page. The whole point
of using invalidate_inode_pages2() was that it only requires taking the
page locks, allowing us to use it in contexts such as
nfs_release_file().
The above use of truncate_inode_pages_range() will require any caller
to grab several locks in order to prevent data loss through races with
write system calls.
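Roughly, the window (illustrative, not from the patch) is:
	invalidating task                   task in write(2)
	-----------------                   ----------------
	filemap_fdatawrite_wbc()
	                                    ->write_iter() dirties a folio
	truncate_inode_pages_range()        dirty folio discarded, data lost
invalidate_inode_pages2(), by contrast, locks each folio and writes a dirty
one back through ->launder_folio() (or returns an error) rather than
silently dropping it.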
--
Trond Myklebust
Linux NFS client maintainer, Hammerspace
trond.myklebust@hammerspace.com
* Re: [RFC PATCH] mm, netfs: Provide a means of invalidation without using launder_folio
2024-03-27 15:56 ` Trond Myklebust
@ 2024-03-27 17:46 ` Matthew Wilcox
0 siblings, 0 replies; 6+ messages in thread
From: Matthew Wilcox @ 2024-03-27 17:46 UTC (permalink / raw)
To: Trond Myklebust
Cc: hch, miklos, dhowells, ceph-devel, linux-cifs, brauner, linux-mm,
linux-fsdevel, akpm, v9fs, netfs, jlayton, viro, linux-nfs,
linux-kernel, devel, linux-afs
On Wed, Mar 27, 2024 at 03:56:50PM +0000, Trond Myklebust wrote:
> On Wed, 2024-03-27 at 15:04 +0000, David Howells wrote:
> > Implement a replacement for launder_folio[1]. The key feature of
> > invalidate_inode_pages2() is that it locks each folio individually,
> > unmaps it to prevent mmap'd accesses from interfering and calls the
> > ->launder_folio() address_space op to flush it. This has problems:
> > firstly, each folio is written individually as one or more small
> > writes; secondly, adjacent folios cannot easily be batched into the
> > same flush; thirdly, it's yet another op to implement.
>
> This is hardly a drop-in replacement for launder_page. The whole point
> of using invalidate_inode_pages2() was that it only requires taking the
> page locks, allowing us to use it in contexts such as
> nfs_release_file().
>
> The above use of truncate_inode_pages_range() will require any caller
> to grab several locks in order to prevent data loss through races with
> write system calls.
I don't understand why you need launder_folio now
that you have a page_mkwrite implementation (your commit
e3db7691e9f3dff3289f64e3d98583e28afe03db used this as justification).
Other filesystems (except the network filesystems that copied the NFS
implementation) don't implement launder_folio.
* [RFC PATCH v2] mm, netfs: Provide a means of invalidation without using launder_folio
2024-03-27 15:04 [RFC PATCH] mm, netfs: Provide a means of invalidation without using launder_folio David Howells
2024-03-27 15:56 ` Trond Myklebust
@ 2024-03-27 17:55 ` David Howells
2024-03-27 18:45 ` Matthew Wilcox
2024-03-27 20:37 ` David Howells
1 sibling, 2 replies; 6+ messages in thread
From: David Howells @ 2024-03-27 17:55 UTC (permalink / raw)
To: dhowells
Cc: Matthew Wilcox, Miklos Szeredi, Trond Myklebust,
Christoph Hellwig, Andrew Morton, Alexander Viro,
Christian Brauner, Jeff Layton, linux-mm, linux-fsdevel, netfs,
v9fs, linux-afs, ceph-devel, linux-cifs, linux-nfs, devel,
linux-kernel
mm, netfs: Provide a means of invalidation without using launder_folio
Implement a replacement for launder_folio. The key feature of
invalidate_inode_pages2() is that it locks each folio individually, unmaps
it to prevent mmap'd accesses from interfering and calls the
->launder_folio() address_space op to flush it. This has problems: firstly,
each folio is written individually as one or more small writes; secondly,
adjacent folios cannot easily be batched into the same flush; thirdly, it's
yet another op to implement.
Instead, use the invalidate lock to cause anyone wanting to add a folio to
the inode to wait, then unmap all the folios if we have mmaps, then,
conditionally, use ->writepages() to flush any dirty data back and then
discard all pages.
The invalidate lock prevents ->read_iter(), ->write_iter() and faulting
through mmap all from adding pages for the duration.
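As a sketch of the intended use (illustrative only; the function name is
made up and not part of this patch), a filesystem that currently relies on
->launder_folio via invalidate_inode_pages2() could instead do:
	/* Write any dirty folios back in a single ->writepages() pass and
	 * then discard the whole pagecache; pass false to skip the flush
	 * and just junk everything.  The invalidate lock is taken
	 * internally to hold off readers, writers and faults.
	 */
	static int example_discard_pagecache(struct inode *inode,
					     bool write_back_first)
	{
		return filemap_invalidate_inode(inode, write_back_first);
	}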
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Matthew Wilcox <willy@infradead.org>
cc: Miklos Szeredi <miklos@szeredi.hu>
cc: Trond Myklebust <trond.myklebust@hammerspace.com>
cc: Christoph Hellwig <hch@lst.de>
cc: Andrew Morton <akpm@linux-foundation.org>
cc: Alexander Viro <viro@zeniv.linux.org.uk>
cc: Christian Brauner <brauner@kernel.org>
cc: Jeff Layton <jlayton@kernel.org>
cc: linux-mm@kvack.org
cc: linux-fsdevel@vger.kernel.org
cc: netfs@lists.linux.dev
cc: v9fs@lists.linux.dev
cc: linux-afs@lists.infradead.org
cc: ceph-devel@vger.kernel.org
cc: linux-cifs@vger.kernel.org
cc: linux-nfs@vger.kernel.org
cc: devel@lists.orangefs.org
---
include/linux/pagemap.h | 1 +
mm/filemap.c | 48 ++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 49 insertions(+)
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 2df35e65557d..4eb3d4177a53 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -40,6 +40,7 @@ int filemap_fdatawait_keep_errors(struct address_space *mapping);
int filemap_fdatawait_range(struct address_space *, loff_t lstart, loff_t lend);
int filemap_fdatawait_range_keep_errors(struct address_space *mapping,
loff_t start_byte, loff_t end_byte);
+int filemap_invalidate_inode(struct inode *inode, bool flush);
static inline int filemap_fdatawait(struct address_space *mapping)
{
diff --git a/mm/filemap.c b/mm/filemap.c
index 25983f0f96e3..98f439bedb44 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -4134,6 +4134,54 @@ bool filemap_release_folio(struct folio *folio, gfp_t gfp)
}
EXPORT_SYMBOL(filemap_release_folio);
+/**
+ * filemap_invalidate_inode - Invalidate/forcibly write back an inode's pagecache
+ * @inode: The inode to flush
+ * @flush: Set to write back rather than simply invalidate.
+ *
+ * Invalidate all the folios on an inode, possibly writing them back first.
+ * Whilst the operation is undertaken, the invalidate lock is held to prevent
+ * new folios from being installed.
+ */
+int filemap_invalidate_inode(struct inode *inode, bool flush)
+{
+ struct address_space *mapping = inode->i_mapping;
+
+ if (!mapping || !mapping->nrpages)
+ goto out;
+
+ /* Prevent new folios from being added to the inode. */
+ filemap_invalidate_lock(mapping);
+
+ if (!mapping->nrpages)
+ goto unlock;
+
+ /* Assume there are probably PTEs only if there are mmaps. */
+ if (unlikely(!RB_EMPTY_ROOT(&mapping->i_mmap.rb_root)))
+ unmap_mapping_pages(mapping, 0, ULONG_MAX, false);
+
+ /* Write back the data if we're asked to. */
+ if (flush) {
+ struct writeback_control wbc = {
+ .sync_mode = WB_SYNC_ALL,
+ .nr_to_write = LONG_MAX,
+ .range_start = 0,
+ .range_end = LLONG_MAX,
+ };
+
+ filemap_fdatawrite_wbc(mapping, &wbc);
+ }
+
+ /* Wait for writeback to complete on all folios and discard. */
+ truncate_inode_pages_range(mapping, 0, LLONG_MAX);
+
+unlock:
+ filemap_invalidate_unlock(mapping);
+out:
+ return filemap_check_errors(mapping);
+}
+EXPORT_SYMBOL(filemap_invalidate_inode);
+
#ifdef CONFIG_CACHESTAT_SYSCALL
/**
* filemap_cachestat() - compute the page cache statistics of a mapping
* Re: [RFC PATCH v2] mm, netfs: Provide a means of invalidation without using launder_folio
2024-03-27 17:55 ` [RFC PATCH v2] " David Howells
@ 2024-03-27 18:45 ` Matthew Wilcox
2024-03-27 20:37 ` David Howells
1 sibling, 0 replies; 6+ messages in thread
From: Matthew Wilcox @ 2024-03-27 18:45 UTC (permalink / raw)
To: David Howells
Cc: Miklos Szeredi, Trond Myklebust, Christoph Hellwig,
Andrew Morton, Alexander Viro, Christian Brauner, Jeff Layton,
linux-mm, linux-fsdevel, netfs, v9fs, linux-afs, ceph-devel,
linux-cifs, linux-nfs, devel, linux-kernel
On Wed, Mar 27, 2024 at 05:55:45PM +0000, David Howells wrote:
> +int filemap_invalidate_inode(struct inode *inode, bool flush)
> +{
> + struct address_space *mapping = inode->i_mapping;
> +
> + if (!mapping || !mapping->nrpages)
> + goto out;
> +
> + /* Prevent new folios from being added to the inode. */
> + filemap_invalidate_lock(mapping);
I'm kind of surprised that the callers wouldn't want to hold that lock
over a call to this function. I guess you're working on the callers,
so you'd know better than I would, but I would have used lockdep to
assert that invalidate_lock was held.
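Something like this sketch is what I have in mind (untested, name invented;
it's just the patch's own body with the locking pushed out to the caller):
	int filemap_invalidate_inode_locked(struct inode *inode, bool flush)
	{
		struct address_space *mapping = inode->i_mapping;
		if (!mapping || !mapping->nrpages)
			return 0;
		lockdep_assert_held_write(&mapping->invalidate_lock);
		if (unlikely(!RB_EMPTY_ROOT(&mapping->i_mmap.rb_root)))
			unmap_mapping_pages(mapping, 0, ULONG_MAX, false);
		if (flush) {
			struct writeback_control wbc = {
				.sync_mode	= WB_SYNC_ALL,
				.nr_to_write	= LONG_MAX,
				.range_start	= 0,
				.range_end	= LLONG_MAX,
			};
			filemap_fdatawrite_wbc(mapping, &wbc);
		}
		truncate_inode_pages_range(mapping, 0, LLONG_MAX);
		return filemap_check_errors(mapping);
	}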
> + if (!mapping->nrpages)
> + goto unlock;
> +
> + /* Assume there are probably PTEs only if there are mmaps. */
> + if (unlikely(!RB_EMPTY_ROOT(&mapping->i_mmap.rb_root)))
> + unmap_mapping_pages(mapping, 0, ULONG_MAX, false);
Is this optimisation worth it? We're already doing some expensive
operations here, does saving cycling the i_mmap_lock really help
anything? You'll note that unmap_mapping_pages() already does this
check inside the lock.
* Re: [RFC PATCH v2] mm, netfs: Provide a means of invalidation without using launder_folio
2024-03-27 17:55 ` [RFC PATCH v2] " David Howells
2024-03-27 18:45 ` Matthew Wilcox
@ 2024-03-27 20:37 ` David Howells
1 sibling, 0 replies; 6+ messages in thread
From: David Howells @ 2024-03-27 20:37 UTC (permalink / raw)
To: Matthew Wilcox
Cc: dhowells, Miklos Szeredi, Trond Myklebust, Christoph Hellwig,
Andrew Morton, Alexander Viro, Christian Brauner, Jeff Layton,
linux-mm, linux-fsdevel, netfs, v9fs, linux-afs, ceph-devel,
linux-cifs, linux-nfs, devel, linux-kernel
Matthew Wilcox <willy@infradead.org> wrote:
> > + /* Prevent new folios from being added to the inode. */
> > + filemap_invalidate_lock(mapping);
>
> I'm kind of surprised that the callers wouldn't want to hold that lock
> over a call to this function. I guess you're working on the callers,
> so you'd know better than I would, but I would have used lockdep to
> assert that invalidate_lock was held.
I'm not sure. None of the places that look like they'd be calling this
currently take that lock (though possibly they should).
Also, should I provide it with an explicit range, I wonder?
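E.g. something along the lines of (hypothetical):
	int filemap_invalidate_inode(struct inode *inode, bool flush,
				     loff_t start, loff_t end);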
> > + if (unlikely(!RB_EMPTY_ROOT(&mapping->i_mmap.rb_root)))
> > + unmap_mapping_pages(mapping, 0, ULONG_MAX, false);
>
> Is this optimisation worth it?
Perhaps not.
David