* [PATCH 0/3] smb3, afs: Revert changes to {cifs,afs}_writepages_region()
From: David Howells @ 2023-03-02 23:16 UTC
To: Linus Torvalds, Steve French
Cc: David Howells, Vishal Moola, Shyam Prasad N, Rohith Surabattula,
Tom Talpey, Stefan Metzmacher, Paulo Alcantara, Jeff Layton,
Matthew Wilcox, Marc Dionne, linux-afs, linux-cifs,
linux-fsdevel, linux-kernel
Hi Linus, Steve,
Could you consider applying these please?
I've split the patch that I proposed[1]: it reverts Vishal's patch to afs
and Linus's changes to cifs back to the point where
find_get_pages_range_tag() was being used to get a single folio, and then
replaces that call with a new function, filemap_get_folio_tag(), that gets
just one folio. I've also benchmarked this against a number of conversions
to use write_cache_pages() in various ways.
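For orientation, the loop shape that the partial revert restores looks
roughly like this (a minimal sketch against the filemap_get_folio_tag()
signature added in patch 1; the locking, rechecking and actual writeback
are elided into a comment):

	pgoff_t index = start / PAGE_SIZE;
	struct folio *folio;

	while ((folio = filemap_get_folio_tag(mapping, &index, end / PAGE_SIZE,
					      PAGECACHE_TAG_DIRTY))) {
		/* Lock and recheck the folio, write back from here and let
		 * {afs,cifs}_extend_writeback() annex any contiguous dirty
		 * folios that follow - nothing else is left pinned whilst
		 * the I/O is in progress.
		 */
		folio_put(folio);
	}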
I used the following fio command to test the write paths:
fio --ioengine=libaio --direct=0 --gtod_reduce=1 --name=readtest \
--filename=/xfstest.test/foo --iodepth=128 --time_based \
--runtime=120 --readwrite=randwrite --iodepth_low=96 \
--iodepth_batch=16 --numjobs=4 --size=16M --bs=4k
The base for comparison, the upstream kernel at commit:
d2980d8d826554fa6981d621e569a453787472f8
"Merge tag 'mm-nonmm-stable-2023-02-20-15-29' of git://git./linux/kernel/git/akpm/mm"
plus the accumulated fixes on Steve's cifs for-next branch.
AFS firstly. The code that's upstream keeps track of the dirtied region of
a folio in page->private, so I tried removing that to see what difference
it makes, in addition to trying conversions to use write_cache_pages(). I
also tried giving afs its own copy of write_cache_pages() in order to
eliminate the function pointer - in case that had a significant effect due
to spectre mitigations.
Base:
WRITE: bw=302MiB/s (316MB/s), 71.9MiB/s-78.9MiB/s (75.3MB/s-82.8MB/s)
WRITE: bw=303MiB/s (318MB/s), 65.9MiB/s-84.0MiB/s (69.1MB/s-88.1MB/s)
WRITE: bw=310MiB/s (325MB/s), 73.6MiB/s-87.3MiB/s (77.1MB/s-91.5MB/s)
Base + Partial revert (these patches):
WRITE: bw=348MiB/s (365MB/s), 86.4MiB/s-87.5MiB/s (90.6MB/s-91.8MB/s)
WRITE: bw=350MiB/s (367MB/s), 86.6MiB/s-88.4MiB/s (90.8MB/s-92.7MB/s)
WRITE: bw=387MiB/s (406MB/s), 96.8MiB/s-97.0MiB/s (101MB/s-102MB/s)
Base + write_cache_pages():
WRITE: bw=280MiB/s (294MB/s), 69.7MiB/s-70.5MiB/s (73.0MB/s-73.9MB/s)
WRITE: bw=285MiB/s (299MB/s), 70.9MiB/s-71.5MiB/s (74.4MB/s-74.9MB/s)
WRITE: bw=290MiB/s (304MB/s), 71.6MiB/s-73.2MiB/s (75.1MB/s-76.8MB/s)
Base + Page-dirty-region removed:
WRITE: bw=301MiB/s (315MB/s), 70.4MiB/s-80.2MiB/s (73.8MB/s-84.1MB/s)
WRITE: bw=325MiB/s (341MB/s), 78.5MiB/s-87.1MiB/s (82.3MB/s-91.3MB/s)
WRITE: bw=320MiB/s (335MB/s), 71.6MiB/s-88.6MiB/s (75.0MB/s-92.9MB/s)
Base + Page-dirty-region tracking removed + write_cache_pages():
WRITE: bw=288MiB/s (302MB/s), 71.9MiB/s-72.3MiB/s (75.4MB/s-75.8MB/s)
WRITE: bw=284MiB/s (297MB/s), 70.7MiB/s-71.3MiB/s (74.1MB/s-74.8MB/s)
WRITE: bw=287MiB/s (301MB/s), 71.2MiB/s-72.6MiB/s (74.7MB/s-76.1MB/s)
Base + Page-dirty-region tracking removed + Own write_cache_pages():
WRITE: bw=302MiB/s (316MB/s), 75.1MiB/s-76.1MiB/s (78.7MB/s-79.8MB/s)
WRITE: bw=302MiB/s (316MB/s), 74.5MiB/s-76.1MiB/s (78.1MB/s-79.8MB/s)
WRITE: bw=301MiB/s (316MB/s), 75.2MiB/s-75.5MiB/s (78.9MB/s-79.1MB/s)
So the partially reverted code appears significantly faster than the code
based on write_cache_pages(). Removing the page-dirty-region tracking also
slows things down - I suspect this may be because multipage folios enlarge
the apparently dirty region of a file, so more data gets written back than
was actually dirtied.
And then CIFS. There's no dirtied-region tracking here, so I tried just the
partial reversion, a conversion to write_cache_pages() and a version giving
cifs its own copy of write_cache_pages() to eliminate the function pointer.
Base:
WRITE: bw=464MiB/s (487MB/s), 116MiB/s-116MiB/s (122MB/s-122MB/s)
WRITE: bw=463MiB/s (486MB/s), 116MiB/s-116MiB/s (121MB/s-122MB/s)
WRITE: bw=465MiB/s (488MB/s), 116MiB/s-116MiB/s (122MB/s-122MB/s)
Base + Partial revert (these patches):
WRITE: bw=470MiB/s (493MB/s), 117MiB/s-118MiB/s (123MB/s-123MB/s)
WRITE: bw=467MiB/s (489MB/s), 117MiB/s-117MiB/s (122MB/s-122MB/s)
WRITE: bw=464MiB/s (486MB/s), 116MiB/s-116MiB/s (121MB/s-122MB/s)
Base + write_cache_pages():
WRITE: bw=457MiB/s (479MB/s), 114MiB/s-114MiB/s (120MB/s-120MB/s)
WRITE: bw=449MiB/s (471MB/s), 112MiB/s-113MiB/s (118MB/s-118MB/s)
WRITE: bw=459MiB/s (482MB/s), 115MiB/s-115MiB/s (120MB/s-121MB/s)
Base + Own write_cache_pages():
WRITE: bw=451MiB/s (473MB/s), 113MiB/s-113MiB/s (118MB/s-118MB/s)
WRITE: bw=455MiB/s (478MB/s), 114MiB/s-114MiB/s (119MB/s-120MB/s)
WRITE: bw=453MiB/s (475MB/s), 113MiB/s-113MiB/s (119MB/s-119MB/s)
WRITE: bw=459MiB/s (481MB/s), 115MiB/s-115MiB/s (120MB/s-120MB/s)
Here the partially reverted code appears slightly better - but the results
are very close, so I'm not sure the difference is statistically significant.
I've pushed the patches here also:
https://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs.git/log/?h=iov-cifs
David
Link: https://lore.kernel.org/r/2214157.1677250083@warthog.procyon.org.uk/ [1]
David Howells (3):
mm: Add a function to get a single tagged folio from a file
afs: Partially revert and use filemap_get_folio_tag()
cifs: Partially revert and use filemap_get_folio_tag()
fs/afs/write.c | 118 +++++++++++++++++++---------------------
fs/cifs/file.c | 115 +++++++++++++++++----------------------
include/linux/pagemap.h | 2 +
mm/filemap.c | 58 ++++++++++++++++++++
4 files changed, 166 insertions(+), 127 deletions(-)
* [PATCH 1/3] mm: Add a function to get a single tagged folio from a file
From: David Howells @ 2023-03-02 23:16 UTC
To: Linus Torvalds, Steve French
Cc: David Howells, Vishal Moola, Shyam Prasad N, Rohith Surabattula,
Tom Talpey, Stefan Metzmacher, Paulo Alcantara, Jeff Layton,
Matthew Wilcox, Marc Dionne, linux-afs, linux-cifs,
linux-fsdevel, linux-kernel, Steve French, Andrew Morton,
linux-mm
Add a function to get a single tagged folio from a file, rather than a
batch, for use in afs and cifs where, in the common case, the batch is
likely to be rendered irrelevant by the {afs,cifs}_extend_writeback()
function.
For filemap_get_folios_tag() to be of use, the batch has to be passed down,
and if it contains scattered, non-contiguous folios, these are likely to
end up being pinned by the batch for significant periods of time whilst I/O
is undertaken on earlier pages.
Further, for write_cache_pages() to be useful, it would need to wait for
PG_fscache which is used to indicate that I/O is in progress from a folio to
the cache - but it can't do this unconditionally as some filesystems, such
as btrfs, use PG_private_2 for other purposes.
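To illustrate, each filesystem's callback would have to open-code the wait
itself, along the lines of this sketch (example_add_folio() is a
hypothetical writepage callback; only the filesystem knows what
PG_private_2 means for its pages):

	static int example_add_folio(struct folio *folio,
				     struct writeback_control *wbc, void *data)
	{
		/* PG_fscache aliases PG_private_2; for afs/cifs it means
		 * "being written to the cache", but the core VM can't assume
		 * that as btrfs uses the bit differently.
		 */
		if (folio_test_fscache(folio)) {
			if (wbc->sync_mode == WB_SYNC_NONE) {
				folio_unlock(folio);
				return 0; /* Skip for background writeback */
			}
			folio_wait_fscache(folio);
		}
		/* ... set the folio up for writeback and proceed ... */
		return 0;
	}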
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Steve French <sfrench@samba.org>
cc: Linus Torvalds <torvalds@linux-foundation.org>
cc: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
cc: Andrew Morton <akpm@linux-foundation.org>
cc: linux-afs@lists.infradead.org
cc: linux-cifs@vger.kernel.org
cc: linux-mm@kvack.org
Link: https://lore.kernel.org/r/2214157.1677250083@warthog.procyon.org.uk/
---
include/linux/pagemap.h | 2 ++
mm/filemap.c | 58 +++++++++++++++++++++++++++++++++++++++++
2 files changed, 60 insertions(+)
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 0acb8e1fb7af..577535633006 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -741,6 +741,8 @@ unsigned filemap_get_folios_contig(struct address_space *mapping,
pgoff_t *start, pgoff_t end, struct folio_batch *fbatch);
unsigned filemap_get_folios_tag(struct address_space *mapping, pgoff_t *start,
pgoff_t end, xa_mark_t tag, struct folio_batch *fbatch);
+struct folio *filemap_get_folio_tag(struct address_space *mapping, pgoff_t *start,
+ pgoff_t end, xa_mark_t tag);
struct page *grab_cache_page_write_begin(struct address_space *mapping,
pgoff_t index);
diff --git a/mm/filemap.c b/mm/filemap.c
index 2723104cc06a..1b1e9c661018 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2339,6 +2339,64 @@ unsigned filemap_get_folios_tag(struct address_space *mapping, pgoff_t *start,
}
EXPORT_SYMBOL(filemap_get_folios_tag);
+/**
+ * filemap_get_folio_tag - Get the first folio matching @tag
+ * @mapping: The address_space to search
+ * @start: The starting page index
+ * @end: The final page index (inclusive)
+ * @tag: The tag index
+ *
+ * Search for and return the first folio in the mapping starting at index
+ * @start and up to index @end (inclusive). The folio is returned with an
+ * elevated reference count.
+ *
+ * If a folio is returned, it may start before @start; if it does, it will
+ * contain @start. The folio may also extend beyond @end; if it does, it will
+ * contain @end. If folios are added to or removed from the page cache while
+ * this is running, they may or may not be found by this call.
+ *
+ * Return: The folio that was found or NULL. @start is also updated to index
+ * the next folio for the traversal or will be left pointing after @end.
+ */
+struct folio *filemap_get_folio_tag(struct address_space *mapping, pgoff_t *start,
+ pgoff_t end, xa_mark_t tag)
+{
+ XA_STATE(xas, &mapping->i_pages, *start);
+ struct folio *folio;
+
+ rcu_read_lock();
+ while ((folio = find_get_entry(&xas, end, tag)) != NULL) {
+ /*
+ * Shadow entries should never be tagged, but this iteration
+ * is lockless so there is a window for page reclaim to evict
+ * a page we saw tagged. Skip over it.
+ */
+ if (xa_is_value(folio))
+ continue;
+
+ if (folio_test_hugetlb(folio))
+ *start = folio->index + 1;
+ else
+ *start = folio_next_index(folio);
+ goto out;
+ }
+
+ /*
+ * We come here when there is no page beyond @end. We take care to not
+ * overflow the index @start as it confuses some of the callers. This
+ * breaks the iteration when there is a page at index -1 but that is
+ * already broken anyway.
+ */
+ if (end == (pgoff_t)-1)
+ *start = (pgoff_t)-1;
+ else
+ *start = end + 1;
+out:
+ rcu_read_unlock();
+ return folio;
+}
+EXPORT_SYMBOL(filemap_get_folio_tag);
+
/*
* CD/DVDs are error prone. When a medium error occurs, the driver may fail
* a _large_ part of the i/o request. Imagine the worst scenario:
* [PATCH 2/3] afs: Partially revert and use filemap_get_folio_tag()
From: David Howells @ 2023-03-02 23:16 UTC
To: Linus Torvalds, Steve French
Cc: David Howells, Vishal Moola, Shyam Prasad N, Rohith Surabattula,
Tom Talpey, Stefan Metzmacher, Paulo Alcantara, Jeff Layton,
Matthew Wilcox, Marc Dionne, linux-afs, linux-cifs,
linux-fsdevel, linux-kernel, Steve French, Andrew Morton,
linux-mm
Partially revert the changes made by commit acc8d8588cb7 ("afs: convert
afs_writepages_region() to use filemap_get_folios_tag()").
The issue is that filemap_get_folios_tag() gets a batch of pages at a time,
and then afs_writepages_region() goes through them one at a time, extends
each into an operation with as many pages as will fit using the loop in
afs_extend_writeback() and submits it. In the common case, this means that
the other pages in the batch have already been annexed and processed by
afs_extend_writeback() and we end up doing duplicate processing.
Switching to write_cache_pages() isn't an immediate substitute as that
doesn't take account of PG_fscache (and this bit is used in other ways by
other filesystems).
So go back to finding the next folio from the VM one at a time and then
extending the op onwards.
Fixes: acc8d8588cb7 ("afs: convert afs_writepages_region() to use filemap_get_folios_tag()")
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Linus Torvalds <torvalds@linux-foundation.org>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Steve French <sfrench@samba.org>
cc: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
cc: Andrew Morton <akpm@linux-foundation.org>
cc: linux-afs@lists.infradead.org
cc: linux-mm@kvack.org
Link: https://lore.kernel.org/r/2214157.1677250083@warthog.procyon.org.uk/
---
fs/afs/write.c | 118 ++++++++++++++++++++++++-------------------------
1 file changed, 57 insertions(+), 61 deletions(-)
diff --git a/fs/afs/write.c b/fs/afs/write.c
index 571f3b9a417e..2ed76697be96 100644
--- a/fs/afs/write.c
+++ b/fs/afs/write.c
@@ -704,87 +704,83 @@ static int afs_writepages_region(struct address_space *mapping,
bool max_one_loop)
{
struct folio *folio;
- struct folio_batch fbatch;
ssize_t ret;
- unsigned int i;
- int n, skips = 0;
+ int skips = 0;
_enter("%llx,%llx,", start, end);
- folio_batch_init(&fbatch);
do {
pgoff_t index = start / PAGE_SIZE;
- n = filemap_get_folios_tag(mapping, &index, end / PAGE_SIZE,
- PAGECACHE_TAG_DIRTY, &fbatch);
-
- if (!n)
+ folio = filemap_get_folio_tag(mapping, &index, end / PAGE_SIZE,
+ PAGECACHE_TAG_DIRTY);
+ if (!folio)
break;
- for (i = 0; i < n; i++) {
- folio = fbatch.folios[i];
- start = folio_pos(folio); /* May regress with THPs */
- _debug("wback %lx", folio_index(folio));
+ start = folio_pos(folio); /* May regress with THPs */
- /* At this point we hold neither the i_pages lock nor the
- * page lock: the page may be truncated or invalidated
- * (changing page->mapping to NULL), or even swizzled
- * back from swapper_space to tmpfs file mapping
- */
- if (wbc->sync_mode != WB_SYNC_NONE) {
- ret = folio_lock_killable(folio);
- if (ret < 0) {
- folio_batch_release(&fbatch);
- return ret;
- }
- } else {
- if (!folio_trylock(folio))
- continue;
- }
+ _debug("wback %lx", folio_index(folio));
- if (folio->mapping != mapping ||
- !folio_test_dirty(folio)) {
- start += folio_size(folio);
- folio_unlock(folio);
- continue;
+ /* At this point we hold neither the i_pages lock nor the
+ * page lock: the page may be truncated or invalidated
+ * (changing page->mapping to NULL), or even swizzled
+ * back from swapper_space to tmpfs file mapping
+ */
+ if (wbc->sync_mode != WB_SYNC_NONE) {
+ ret = folio_lock_killable(folio);
+ if (ret < 0) {
+ folio_put(folio);
+ return ret;
+ }
+ } else {
+ if (!folio_trylock(folio)) {
+ folio_put(folio);
+ return 0;
}
+ }
- if (folio_test_writeback(folio) ||
- folio_test_fscache(folio)) {
- folio_unlock(folio);
- if (wbc->sync_mode != WB_SYNC_NONE) {
- folio_wait_writeback(folio);
+ if (folio_mapping(folio) != mapping ||
+ !folio_test_dirty(folio)) {
+ start += folio_size(folio);
+ folio_unlock(folio);
+ folio_put(folio);
+ continue;
+ }
+
+ if (folio_test_writeback(folio) ||
+ folio_test_fscache(folio)) {
+ folio_unlock(folio);
+ if (wbc->sync_mode != WB_SYNC_NONE) {
+ folio_wait_writeback(folio);
#ifdef CONFIG_AFS_FSCACHE
- folio_wait_fscache(folio);
+ folio_wait_fscache(folio);
#endif
- } else {
- start += folio_size(folio);
- }
- if (wbc->sync_mode == WB_SYNC_NONE) {
- if (skips >= 5 || need_resched()) {
- *_next = start;
- _leave(" = 0 [%llx]", *_next);
- return 0;
- }
- skips++;
- }
- continue;
+ } else {
+ start += folio_size(folio);
}
-
- if (!folio_clear_dirty_for_io(folio))
- BUG();
- ret = afs_write_back_from_locked_folio(mapping, wbc,
- folio, start, end);
- if (ret < 0) {
- _leave(" = %zd", ret);
- folio_batch_release(&fbatch);
- return ret;
+ folio_put(folio);
+ if (wbc->sync_mode == WB_SYNC_NONE) {
+ if (skips >= 5 || need_resched())
+ break;
+ skips++;
}
+ continue;
+ }
- start += ret;
+ if (!folio_clear_dirty_for_io(folio))
+ BUG();
+ ret = afs_write_back_from_locked_folio(mapping, wbc, folio, start, end);
+ folio_put(folio);
+ if (ret < 0) {
+ _leave(" = %zd", ret);
+ return ret;
}
- folio_batch_release(&fbatch);
+ start += ret;
+
+ if (max_one_loop)
+ break;
+
cond_resched();
} while (wbc->nr_to_write > 0);
* [PATCH 3/3] cifs: Partially revert and use filemap_get_folio_tag()
From: David Howells @ 2023-03-02 23:16 UTC
To: Linus Torvalds, Steve French
Cc: David Howells, Vishal Moola, Shyam Prasad N, Rohith Surabattula,
Tom Talpey, Stefan Metzmacher, Paulo Alcantara, Jeff Layton,
Matthew Wilcox, Marc Dionne, linux-afs, linux-cifs,
linux-fsdevel, linux-kernel, Steve French, Andrew Morton,
linux-mm
Mirror the changes made to afs in order to partially revert the effects of
commit acc8d8588cb7 ("afs: convert afs_writepages_region() to use
filemap_get_folios_tag()") that were then mirrored into cifs.
The issue is that filemap_get_folios_tag() gets a batch of pages at a time,
and then cifs_writepages_region() goes through them one at a time, extends
each into an operation with as many pages as will fit using the loop in
cifs_extend_writeback() and submits it. In the common case, this means that
the other pages in the batch have already been annexed and processed by
cifs_extend_writeback() and we end up doing duplicate processing.
Switching to write_cache_pages() isn't an immediate substitute as that
doesn't take account of PG_fscache (and this bit is used in other ways by
other filesystems).
So go back to finding the next folio from the VM one at a time and then
extending the op onwards.
Fixes: 3822a7c40997 ("Merge tag 'mm-stable-2023-02-20-13-37' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm")
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Steve French <sfrench@samba.org>
cc: Linus Torvalds <torvalds@linux-foundation.org>
cc: Shyam Prasad N <nspmangalore@gmail.com>
cc: Rohith Surabattula <rohiths.msft@gmail.com>
cc: Jeff Layton <jlayton@kernel.org>
cc: Paulo Alcantara <pc@cjr.nz>
cc: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
cc: Andrew Morton <akpm@linux-foundation.org>
cc: linux-cifs@vger.kernel.org
cc: linux-mm@kvack.org
Link: https://lore.kernel.org/r/2214157.1677250083@warthog.procyon.org.uk/
---
fs/cifs/file.c | 115 +++++++++++++++++++++----------------------------
1 file changed, 49 insertions(+), 66 deletions(-)
diff --git a/fs/cifs/file.c b/fs/cifs/file.c
index 4d4a2d82636d..a3e89e741b42 100644
--- a/fs/cifs/file.c
+++ b/fs/cifs/file.c
@@ -2864,93 +2864,76 @@ static int cifs_writepages_region(struct address_space *mapping,
struct writeback_control *wbc,
loff_t start, loff_t end, loff_t *_next)
{
- struct folio_batch fbatch;
+ struct folio *folio;
+ ssize_t ret;
int skips = 0;
- folio_batch_init(&fbatch);
do {
- int nr;
pgoff_t index = start / PAGE_SIZE;
- nr = filemap_get_folios_tag(mapping, &index, end / PAGE_SIZE,
- PAGECACHE_TAG_DIRTY, &fbatch);
- if (!nr)
+ folio = filemap_get_folio_tag(mapping, &index, end / PAGE_SIZE,
+ PAGECACHE_TAG_DIRTY);
+ if (!folio)
break;
- for (int i = 0; i < nr; i++) {
- ssize_t ret;
- struct folio *folio = fbatch.folios[i];
-
-redo_folio:
- start = folio_pos(folio); /* May regress with THPs */
+ start = folio_pos(folio); /* May regress with THPs */
- /* At this point we hold neither the i_pages lock nor the
- * page lock: the page may be truncated or invalidated
- * (changing page->mapping to NULL), or even swizzled
- * back from swapper_space to tmpfs file mapping
- */
- if (wbc->sync_mode != WB_SYNC_NONE) {
- ret = folio_lock_killable(folio);
- if (ret < 0)
- goto write_error;
- } else {
- if (!folio_trylock(folio))
- goto skip_write;
+ /* At this point we hold neither the i_pages lock nor the
+ * page lock: the page may be truncated or invalidated
+ * (changing page->mapping to NULL), or even swizzled
+ * back from swapper_space to tmpfs file mapping
+ */
+ if (wbc->sync_mode != WB_SYNC_NONE) {
+ ret = folio_lock_killable(folio);
+ if (ret < 0) {
+ folio_put(folio);
+ return ret;
}
-
- if (folio_mapping(folio) != mapping ||
- !folio_test_dirty(folio)) {
- start += folio_size(folio);
- folio_unlock(folio);
- continue;
+ } else {
+ if (!folio_trylock(folio)) {
+ folio_put(folio);
+ return 0;
}
+ }
- if (folio_test_writeback(folio) ||
- folio_test_fscache(folio)) {
- folio_unlock(folio);
- if (wbc->sync_mode == WB_SYNC_NONE)
- goto skip_write;
+ if (folio_mapping(folio) != mapping ||
+ !folio_test_dirty(folio)) {
+ start += folio_size(folio);
+ folio_unlock(folio);
+ folio_put(folio);
+ continue;
+ }
+ if (folio_test_writeback(folio) ||
+ folio_test_fscache(folio)) {
+ folio_unlock(folio);
+ if (wbc->sync_mode != WB_SYNC_NONE) {
folio_wait_writeback(folio);
#ifdef CONFIG_CIFS_FSCACHE
folio_wait_fscache(folio);
#endif
- goto redo_folio;
+ } else {
+ start += folio_size(folio);
}
-
- if (!folio_clear_dirty_for_io(folio))
- /* We hold the page lock - it should've been dirty. */
- WARN_ON(1);
-
- ret = cifs_write_back_from_locked_folio(mapping, wbc, folio, start, end);
- if (ret < 0)
- goto write_error;
-
- start += ret;
- continue;
-
-write_error:
- folio_batch_release(&fbatch);
- *_next = start;
- return ret;
-
-skip_write:
- /*
- * Too many skipped writes, or need to reschedule?
- * Treat it as a write error without an error code.
- */
- if (skips >= 5 || need_resched()) {
- ret = 0;
- goto write_error;
+ folio_put(folio);
+ if (wbc->sync_mode == WB_SYNC_NONE) {
+ if (skips >= 5 || need_resched())
+ break;
+ skips++;
}
-
- /* Otherwise, just skip that folio and go on to the next */
- skips++;
- start += folio_size(folio);
continue;
}
- folio_batch_release(&fbatch);
+ if (!folio_clear_dirty_for_io(folio))
+ /* We hold the page lock - it should've been dirty. */
+ WARN_ON(1);
+
+ ret = cifs_write_back_from_locked_folio(mapping, wbc, folio, start, end);
+ folio_put(folio);
+ if (ret < 0)
+ return ret;
+
+ start += ret;
cond_resched();
} while (wbc->nr_to_write > 0);
* Re: [PATCH 0/3] smb3, afs: Revert changes to {cifs,afs}_writepages_region()
From: David Howells @ 2023-03-02 23:20 UTC
To: Linus Torvalds, Steve French
Cc: dhowells, Vishal Moola, Shyam Prasad N, Rohith Surabattula,
Tom Talpey, Stefan Metzmacher, Paulo Alcantara, Jeff Layton,
Matthew Wilcox, Marc Dionne, linux-afs, linux-cifs,
linux-fsdevel, linux-kernel
David Howells <dhowells@redhat.com> wrote:
> AFS firstly. ...
>
> Base + write_cache_pages():
> WRITE: bw=280MiB/s (294MB/s), 69.7MiB/s-70.5MiB/s (73.0MB/s-73.9MB/s)
> WRITE: bw=285MiB/s (299MB/s), 70.9MiB/s-71.5MiB/s (74.4MB/s-74.9MB/s)
> WRITE: bw=290MiB/s (304MB/s), 71.6MiB/s-73.2MiB/s (75.1MB/s-76.8MB/s)
Here's the patch to convert AFS to use write_cache_pages(), retaining the use
of page->private to track the dirtied part of the page.
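The shape of the conversion is roughly as follows (a condensed sketch, not
the patch itself; submit_span() stands in for afs_writepages_submit() and
the dirty-range handling from folio->private is elided): write_cache_pages()
drives a per-folio callback that accumulates a contiguous dirty span in a
context struct and flushes it whenever contiguity breaks or the span gets
large enough.

	struct span_ctx {
		loff_t	start;		/* Start of the accumulated span */
		loff_t	end;		/* End of the accumulated span */
		bool	begun;
	};

	static int add_folio(struct folio *folio,
			     struct writeback_control *wbc, void *data)
	{
		struct span_ctx *ctx = data;
		loff_t pos = folio_pos(folio);
		int ret;

		if (ctx->begun && pos != ctx->end) {
			/* Contiguity broke: flush what we have so far */
			ret = submit_span(folio->mapping, wbc, ctx);
			ctx->begun = false;
			if (ret < 0)
				return ret;
		}
		if (!ctx->begun) {
			ctx->start = pos;
			ctx->begun = true;
		}
		ctx->end = pos + folio_size(folio);

		folio_start_writeback(folio);
		folio_unlock(folio);

		if (ctx->end - ctx->start >= 65536 * 4096) {
			ret = submit_span(folio->mapping, wbc, ctx);
			ctx->begun = false;
			if (ret < 0)
				return ret;
		}
		return 0;
	}

A final submit_span() is then needed after write_cache_pages() returns to
flush any remaining partial span.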
David
---
write.c | 382 +++++++++++++---------------------------------------------------
1 file changed, 78 insertions(+), 304 deletions(-)
diff --git a/fs/afs/write.c b/fs/afs/write.c
index 571f3b9a417e..01323fa58e1c 100644
--- a/fs/afs/write.c
+++ b/fs/afs/write.c
@@ -14,11 +14,6 @@
#include <linux/netfs.h>
#include "internal.h"
-static int afs_writepages_region(struct address_space *mapping,
- struct writeback_control *wbc,
- loff_t start, loff_t end, loff_t *_next,
- bool max_one_loop);
-
static void afs_write_to_cache(struct afs_vnode *vnode, loff_t start, size_t len,
loff_t i_size, bool caching);
@@ -56,10 +51,8 @@ static int afs_flush_conflicting_write(struct address_space *mapping,
.range_start = folio_pos(folio),
.range_end = LLONG_MAX,
};
- loff_t next;
- return afs_writepages_region(mapping, &wbc, folio_pos(folio), LLONG_MAX,
- &next, true);
+ return afs_writepages(mapping, &wbc);
}
/*
@@ -449,212 +442,57 @@ static int afs_store_data(struct afs_vnode *vnode, struct iov_iter *iter, loff_t
return afs_put_operation(op);
}
-/*
- * Extend the region to be written back to include subsequent contiguously
- * dirty pages if possible, but don't sleep while doing so.
- *
- * If this page holds new content, then we can include filler zeros in the
- * writeback.
- */
-static void afs_extend_writeback(struct address_space *mapping,
- struct afs_vnode *vnode,
- long *_count,
- loff_t start,
- loff_t max_len,
- bool new_content,
- bool caching,
- unsigned int *_len)
-{
- struct pagevec pvec;
- struct folio *folio;
- unsigned long priv;
- unsigned int psize, filler = 0;
- unsigned int f, t;
- loff_t len = *_len;
- pgoff_t index = (start + len) / PAGE_SIZE;
- bool stop = true;
- unsigned int i;
-
- XA_STATE(xas, &mapping->i_pages, index);
- pagevec_init(&pvec);
-
- do {
- /* Firstly, we gather up a batch of contiguous dirty pages
- * under the RCU read lock - but we can't clear the dirty flags
- * there if any of those pages are mapped.
- */
- rcu_read_lock();
-
- xas_for_each(&xas, folio, ULONG_MAX) {
- stop = true;
- if (xas_retry(&xas, folio))
- continue;
- if (xa_is_value(folio))
- break;
- if (folio_index(folio) != index)
- break;
-
- if (!folio_try_get_rcu(folio)) {
- xas_reset(&xas);
- continue;
- }
-
- /* Has the page moved or been split? */
- if (unlikely(folio != xas_reload(&xas))) {
- folio_put(folio);
- break;
- }
-
- if (!folio_trylock(folio)) {
- folio_put(folio);
- break;
- }
- if (!folio_test_dirty(folio) ||
- folio_test_writeback(folio) ||
- folio_test_fscache(folio)) {
- folio_unlock(folio);
- folio_put(folio);
- break;
- }
-
- psize = folio_size(folio);
- priv = (unsigned long)folio_get_private(folio);
- f = afs_folio_dirty_from(folio, priv);
- t = afs_folio_dirty_to(folio, priv);
- if (f != 0 && !new_content) {
- folio_unlock(folio);
- folio_put(folio);
- break;
- }
-
- len += filler + t;
- filler = psize - t;
- if (len >= max_len || *_count <= 0)
- stop = true;
- else if (t == psize || new_content)
- stop = false;
-
- index += folio_nr_pages(folio);
- if (!pagevec_add(&pvec, &folio->page))
- break;
- if (stop)
- break;
- }
-
- if (!stop)
- xas_pause(&xas);
- rcu_read_unlock();
-
- /* Now, if we obtained any pages, we can shift them to being
- * writable and mark them for caching.
- */
- if (!pagevec_count(&pvec))
- break;
-
- for (i = 0; i < pagevec_count(&pvec); i++) {
- folio = page_folio(pvec.pages[i]);
- trace_afs_folio_dirty(vnode, tracepoint_string("store+"), folio);
-
- if (!folio_clear_dirty_for_io(folio))
- BUG();
- if (folio_start_writeback(folio))
- BUG();
- afs_folio_start_fscache(caching, folio);
-
- *_count -= folio_nr_pages(folio);
- folio_unlock(folio);
- }
-
- pagevec_release(&pvec);
- cond_resched();
- } while (!stop);
-
- *_len = len;
-}
+struct afs_writepages_context {
+ unsigned long long start;
+ unsigned long long end;
+ unsigned long long annex_at;
+ bool begun;
+ bool caching;
+ bool new_content;
+};
/*
- * Synchronously write back the locked page and any subsequent non-locked dirty
- * pages.
+ * Flush a block of pages to the server and the cache.
*/
-static ssize_t afs_write_back_from_locked_folio(struct address_space *mapping,
- struct writeback_control *wbc,
- struct folio *folio,
- loff_t start, loff_t end)
+static int afs_writepages_submit(struct address_space *mapping,
+ struct writeback_control *wbc,
+ struct afs_writepages_context *ctx)
{
struct afs_vnode *vnode = AFS_FS_I(mapping->host);
struct iov_iter iter;
- unsigned long priv;
- unsigned int offset, to, len, max_len;
- loff_t i_size = i_size_read(&vnode->netfs.inode);
- bool new_content = test_bit(AFS_VNODE_NEW_CONTENT, &vnode->flags);
- bool caching = fscache_cookie_enabled(afs_vnode_cache(vnode));
- long count = wbc->nr_to_write;
+ unsigned long long i_size = i_size_read(&vnode->netfs.inode);
+ size_t len = ctx->end - ctx->start;
int ret;
- _enter(",%lx,%llx-%llx", folio_index(folio), start, end);
-
- if (folio_start_writeback(folio))
- BUG();
- afs_folio_start_fscache(caching, folio);
-
- count -= folio_nr_pages(folio);
-
- /* Find all consecutive lockable dirty pages that have contiguous
- * written regions, stopping when we find a page that is not
- * immediately lockable, is not dirty or is missing, or we reach the
- * end of the range.
- */
- priv = (unsigned long)folio_get_private(folio);
- offset = afs_folio_dirty_from(folio, priv);
- to = afs_folio_dirty_to(folio, priv);
- trace_afs_folio_dirty(vnode, tracepoint_string("store"), folio);
-
- len = to - offset;
- start += offset;
- if (start < i_size) {
- /* Trim the write to the EOF; the extra data is ignored. Also
- * put an upper limit on the size of a single storedata op.
- */
- max_len = 65536 * 4096;
- max_len = min_t(unsigned long long, max_len, end - start + 1);
- max_len = min_t(unsigned long long, max_len, i_size - start);
-
- if (len < max_len &&
- (to == folio_size(folio) || new_content))
- afs_extend_writeback(mapping, vnode, &count,
- start, max_len, new_content,
- caching, &len);
- len = min_t(loff_t, len, max_len);
- }
+ _enter("%llx-%llx", ctx->start, ctx->start + len - 1);
/* We now have a contiguous set of dirty pages, each with writeback
- * set; the first page is still locked at this point, but all the rest
- * have been unlocked.
+ * set.
*/
- folio_unlock(folio);
-
- if (start < i_size) {
- _debug("write back %x @%llx [%llx]", len, start, i_size);
+ if (ctx->start < i_size) {
+ if (len > i_size - ctx->start)
+ len = i_size - ctx->start;
+ _debug("write back %zx @%llx [%llx]", len, ctx->start, i_size);
/* Speculatively write to the cache. We have to fix this up
* later if the store fails.
*/
- afs_write_to_cache(vnode, start, len, i_size, caching);
+ afs_write_to_cache(vnode, ctx->start, len, i_size, ctx->caching);
- iov_iter_xarray(&iter, ITER_SOURCE, &mapping->i_pages, start, len);
- ret = afs_store_data(vnode, &iter, start, false);
+ iov_iter_xarray(&iter, ITER_SOURCE,
+ &mapping->i_pages, ctx->start, len);
+ ret = afs_store_data(vnode, &iter, ctx->start, false);
} else {
- _debug("write discard %x @%llx [%llx]", len, start, i_size);
+ _debug("write discard %zx @%llx [%llx]", len, ctx->start, i_size);
/* The dirty region was entirely beyond the EOF. */
- fscache_clear_page_bits(mapping, start, len, caching);
- afs_pages_written_back(vnode, start, len);
+ fscache_clear_page_bits(mapping, ctx->start, len, ctx->caching);
+ afs_pages_written_back(vnode, ctx->start, len);
ret = 0;
}
switch (ret) {
case 0:
- wbc->nr_to_write = count;
ret = len;
break;
@@ -668,13 +506,13 @@ static ssize_t afs_write_back_from_locked_folio(struct address_space *mapping,
case -EKEYREJECTED:
case -EKEYREVOKED:
case -ENETRESET:
- afs_redirty_pages(wbc, mapping, start, len);
+ afs_redirty_pages(wbc, mapping, ctx->start, len);
mapping_set_error(mapping, ret);
break;
case -EDQUOT:
case -ENOSPC:
- afs_redirty_pages(wbc, mapping, start, len);
+ afs_redirty_pages(wbc, mapping, ctx->start, len);
mapping_set_error(mapping, -ENOSPC);
break;
@@ -686,7 +524,7 @@ static ssize_t afs_write_back_from_locked_folio(struct address_space *mapping,
case -ENOMEDIUM:
case -ENXIO:
trace_afs_file_error(vnode, ret, afs_file_error_writeback_fail);
- afs_kill_pages(mapping, start, len);
+ afs_kill_pages(mapping, ctx->start, len);
mapping_set_error(mapping, ret);
break;
}
@@ -696,100 +534,51 @@ static ssize_t afs_write_back_from_locked_folio(struct address_space *mapping,
}
/*
- * write a region of pages back to the server
+ * Add a page to the set and flush when large enough.
*/
-static int afs_writepages_region(struct address_space *mapping,
- struct writeback_control *wbc,
- loff_t start, loff_t end, loff_t *_next,
- bool max_one_loop)
+static int afs_writepages_add_folio(struct folio *folio,
+ struct writeback_control *wbc, void *data)
{
- struct folio *folio;
- struct folio_batch fbatch;
- ssize_t ret;
- unsigned int i;
- int n, skips = 0;
-
- _enter("%llx,%llx,", start, end);
- folio_batch_init(&fbatch);
-
- do {
- pgoff_t index = start / PAGE_SIZE;
+ struct afs_writepages_context *ctx = data;
+ struct afs_vnode *vnode = AFS_FS_I(folio->mapping->host);
+ unsigned long long pos = folio_pos(folio);
+ unsigned long priv;
+ size_t f, t;
+ int ret;
- n = filemap_get_folios_tag(mapping, &index, end / PAGE_SIZE,
- PAGECACHE_TAG_DIRTY, &fbatch);
+ priv = (unsigned long)folio_get_private(folio);
+ f = afs_folio_dirty_from(folio, priv);
+ t = afs_folio_dirty_to(folio, priv);
- if (!n)
- break;
- for (i = 0; i < n; i++) {
- folio = fbatch.folios[i];
- start = folio_pos(folio); /* May regress with THPs */
-
- _debug("wback %lx", folio_index(folio));
-
- /* At this point we hold neither the i_pages lock nor the
- * page lock: the page may be truncated or invalidated
- * (changing page->mapping to NULL), or even swizzled
- * back from swapper_space to tmpfs file mapping
- */
- if (wbc->sync_mode != WB_SYNC_NONE) {
- ret = folio_lock_killable(folio);
- if (ret < 0) {
- folio_batch_release(&fbatch);
- return ret;
- }
- } else {
- if (!folio_trylock(folio))
- continue;
- }
-
- if (folio->mapping != mapping ||
- !folio_test_dirty(folio)) {
- start += folio_size(folio);
- folio_unlock(folio);
- continue;
- }
-
- if (folio_test_writeback(folio) ||
- folio_test_fscache(folio)) {
- folio_unlock(folio);
- if (wbc->sync_mode != WB_SYNC_NONE) {
- folio_wait_writeback(folio);
-#ifdef CONFIG_AFS_FSCACHE
- folio_wait_fscache(folio);
-#endif
- } else {
- start += folio_size(folio);
- }
- if (wbc->sync_mode == WB_SYNC_NONE) {
- if (skips >= 5 || need_resched()) {
- *_next = start;
- _leave(" = 0 [%llx]", *_next);
- return 0;
- }
- skips++;
- }
- continue;
- }
-
- if (!folio_clear_dirty_for_io(folio))
- BUG();
- ret = afs_write_back_from_locked_folio(mapping, wbc,
- folio, start, end);
- if (ret < 0) {
- _leave(" = %zd", ret);
- folio_batch_release(&fbatch);
- return ret;
- }
-
- start += ret;
+ if (ctx->begun) {
+ if ((f == 0 || ctx->new_content) &&
+ pos == ctx->annex_at) {
+ trace_afs_folio_dirty(vnode, tracepoint_string("store+"), folio);
+ goto add;
}
+ ret = afs_writepages_submit(folio->mapping, wbc, ctx);
+ if (ret < 0)
+ return ret;
+ }
+
+ ctx->begun = true;
+ ctx->start = pos + f;
+ trace_afs_folio_dirty(vnode, tracepoint_string("store"), folio);
+add:
+ ctx->end = pos + t;
+ ctx->annex_at = pos + folio_size(folio);
- folio_batch_release(&fbatch);
- cond_resched();
- } while (wbc->nr_to_write > 0);
+ folio_wait_fscache(folio);
+ folio_start_writeback(folio);
+ afs_folio_start_fscache(ctx->caching, folio);
+ folio_unlock(folio);
- *_next = start;
- _leave(" = 0 [%llx]", *_next);
+ if (ctx->end - ctx->start >= 65536 * 4096) {
+ ret = afs_writepages_submit(folio->mapping, wbc, ctx);
+ if (ret < 0)
+ return ret;
+ ctx->begun = false;
+ }
return 0;
}
@@ -800,7 +589,10 @@ int afs_writepages(struct address_space *mapping,
struct writeback_control *wbc)
{
struct afs_vnode *vnode = AFS_FS_I(mapping->host);
- loff_t start, next;
+ struct afs_writepages_context ctx = {
+ .caching = fscache_cookie_enabled(afs_vnode_cache(vnode)),
+ .new_content = test_bit(AFS_VNODE_NEW_CONTENT, &vnode->flags),
+ };
int ret;
_enter("");
@@ -814,29 +606,11 @@ int afs_writepages(struct address_space *mapping,
else if (!down_read_trylock(&vnode->validate_lock))
return 0;
- if (wbc->range_cyclic) {
- start = mapping->writeback_index * PAGE_SIZE;
- ret = afs_writepages_region(mapping, wbc, start, LLONG_MAX,
- &next, false);
- if (ret == 0) {
- mapping->writeback_index = next / PAGE_SIZE;
- if (start > 0 && wbc->nr_to_write > 0) {
- ret = afs_writepages_region(mapping, wbc, 0,
- start, &next, false);
- if (ret == 0)
- mapping->writeback_index =
- next / PAGE_SIZE;
- }
- }
- } else if (wbc->range_start == 0 && wbc->range_end == LLONG_MAX) {
- ret = afs_writepages_region(mapping, wbc, 0, LLONG_MAX,
- &next, false);
- if (wbc->nr_to_write > 0 && ret == 0)
- mapping->writeback_index = next / PAGE_SIZE;
- } else {
- ret = afs_writepages_region(mapping, wbc,
- wbc->range_start, wbc->range_end,
- &next, false);
+ ret = write_cache_pages(mapping, wbc, afs_writepages_add_folio, &ctx);
+ if (ret >= 0 && ctx.begun) {
+ ret = afs_writepages_submit(mapping, wbc, &ctx);
+ if (ret < 0)
+ return ret;
}
up_read(&vnode->validate_lock);
* Re: [PATCH 1/3] mm: Add a function to get a single tagged folio from a file
From: Matthew Wilcox @ 2023-03-02 23:21 UTC
To: David Howells
Cc: Linus Torvalds, Steve French, Vishal Moola, Shyam Prasad N,
Rohith Surabattula, Tom Talpey, Stefan Metzmacher,
Paulo Alcantara, Jeff Layton, Marc Dionne, linux-afs, linux-cifs,
linux-fsdevel, linux-kernel, Steve French, Andrew Morton,
linux-mm
On Thu, Mar 02, 2023 at 11:16:36PM +0000, David Howells wrote:
> Add a function to get a single tagged folio from a file rather than a batch
> for use in afs and cifs where, in the common case, the batch is likely to
> be rendered irrelevant by the {afs,cifs}_extend_writeback() function.
I think this is the wrong way to go. I'll work on a replacement once
I've got a couple of other things off my plate.
* Test patch to remove per-page dirty region tracking from afs
From: David Howells @ 2023-03-02 23:23 UTC
To: Linus Torvalds, Steve French
Cc: dhowells, Vishal Moola, Shyam Prasad N, Rohith Surabattula,
Tom Talpey, Stefan Metzmacher, Paulo Alcantara, Jeff Layton,
Matthew Wilcox, Marc Dionne, linux-afs, linux-cifs,
linux-fsdevel, linux-kernel
David Howells <dhowells@redhat.com> wrote:
> AFS firstly. ...
>
> Base + Page-dirty-region removed:
> WRITE: bw=301MiB/s (315MB/s), 70.4MiB/s-80.2MiB/s (73.8MB/s-84.1MB/s)
> WRITE: bw=325MiB/s (341MB/s), 78.5MiB/s-87.1MiB/s (82.3MB/s-91.3MB/s)
> WRITE: bw=320MiB/s (335MB/s), 71.6MiB/s-88.6MiB/s (75.0MB/s-92.9MB/s)
Here's a patch to remove the use of page->private data to track the dirty part
of a page from afs.
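For reference, the encoding being removed packs the dirtied byte range of a
folio into folio->private. A worked example using the 64-bit helpers
deleted from fs/afs/internal.h below, for a 4KiB folio dirtied in the byte
range [512, 4096):

	priv = afs_folio_dirty(folio, 512, 4096);
		/* resolution shift is 0, so priv = ((4096 - 1) << 32) | 512 */
	afs_folio_dirty_from(folio, priv)	/* == 512 */
	afs_folio_dirty_to(folio, priv)		/* == 4096 */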
David
---
fs/afs/file.c | 68 ----------------
fs/afs/internal.h | 56 -------------
fs/afs/write.c | 187 +++++++++------------------------------------
fs/cifs/file.c | 1
include/trace/events/afs.h | 14 ---
5 files changed, 45 insertions(+), 281 deletions(-)
diff --git a/fs/afs/file.c b/fs/afs/file.c
index 68d6d5dc608d..a2f3316fa174 100644
--- a/fs/afs/file.c
+++ b/fs/afs/file.c
@@ -402,80 +402,18 @@ int afs_write_inode(struct inode *inode, struct writeback_control *wbc)
return 0;
}
-/*
- * Adjust the dirty region of the page on truncation or full invalidation,
- * getting rid of the markers altogether if the region is entirely invalidated.
- */
-static void afs_invalidate_dirty(struct folio *folio, size_t offset,
- size_t length)
-{
- struct afs_vnode *vnode = AFS_FS_I(folio_inode(folio));
- unsigned long priv;
- unsigned int f, t, end = offset + length;
-
- priv = (unsigned long)folio_get_private(folio);
-
- /* we clean up only if the entire page is being invalidated */
- if (offset == 0 && length == folio_size(folio))
- goto full_invalidate;
-
- /* If the page was dirtied by page_mkwrite(), the PTE stays writable
- * and we don't get another notification to tell us to expand it
- * again.
- */
- if (afs_is_folio_dirty_mmapped(priv))
- return;
-
- /* We may need to shorten the dirty region */
- f = afs_folio_dirty_from(folio, priv);
- t = afs_folio_dirty_to(folio, priv);
-
- if (t <= offset || f >= end)
- return; /* Doesn't overlap */
-
- if (f < offset && t > end)
- return; /* Splits the dirty region - just absorb it */
-
- if (f >= offset && t <= end)
- goto undirty;
-
- if (f < offset)
- t = offset;
- else
- f = end;
- if (f == t)
- goto undirty;
-
- priv = afs_folio_dirty(folio, f, t);
- folio_change_private(folio, (void *)priv);
- trace_afs_folio_dirty(vnode, tracepoint_string("trunc"), folio);
- return;
-
-undirty:
- trace_afs_folio_dirty(vnode, tracepoint_string("undirty"), folio);
- folio_clear_dirty_for_io(folio);
-full_invalidate:
- trace_afs_folio_dirty(vnode, tracepoint_string("inval"), folio);
- folio_detach_private(folio);
-}
-
/*
* invalidate part or all of a page
* - release a page and clean up its private data if offset is 0 (indicating
* the entire page)
*/
static void afs_invalidate_folio(struct folio *folio, size_t offset,
- size_t length)
+ size_t length)
{
- _enter("{%lu},%zu,%zu", folio->index, offset, length);
-
- BUG_ON(!folio_test_locked(folio));
-
- if (folio_get_private(folio))
- afs_invalidate_dirty(folio, offset, length);
+ struct afs_vnode *vnode = AFS_FS_I(folio_inode(folio));
+ trace_afs_folio_dirty(vnode, tracepoint_string("inval"), folio);
folio_wait_fscache(folio);
- _leave("");
}
/*
diff --git a/fs/afs/internal.h b/fs/afs/internal.h
index ad8523d0d038..90d66b20ca8c 100644
--- a/fs/afs/internal.h
+++ b/fs/afs/internal.h
@@ -890,62 +890,6 @@ static inline void afs_invalidate_cache(struct afs_vnode *vnode, unsigned int fl
i_size_read(&vnode->netfs.inode), flags);
}
-/*
- * We use folio->private to hold the amount of the folio that we've written to,
- * splitting the field into two parts. However, we need to represent a range
- * 0...FOLIO_SIZE, so we reduce the resolution if the size of the folio
- * exceeds what we can encode.
- */
-#ifdef CONFIG_64BIT
-#define __AFS_FOLIO_PRIV_MASK 0x7fffffffUL
-#define __AFS_FOLIO_PRIV_SHIFT 32
-#define __AFS_FOLIO_PRIV_MMAPPED 0x80000000UL
-#else
-#define __AFS_FOLIO_PRIV_MASK 0x7fffUL
-#define __AFS_FOLIO_PRIV_SHIFT 16
-#define __AFS_FOLIO_PRIV_MMAPPED 0x8000UL
-#endif
-
-static inline unsigned int afs_folio_dirty_resolution(struct folio *folio)
-{
- int shift = folio_shift(folio) - (__AFS_FOLIO_PRIV_SHIFT - 1);
- return (shift > 0) ? shift : 0;
-}
-
-static inline size_t afs_folio_dirty_from(struct folio *folio, unsigned long priv)
-{
- unsigned long x = priv & __AFS_FOLIO_PRIV_MASK;
-
- /* The lower bound is inclusive */
- return x << afs_folio_dirty_resolution(folio);
-}
-
-static inline size_t afs_folio_dirty_to(struct folio *folio, unsigned long priv)
-{
- unsigned long x = (priv >> __AFS_FOLIO_PRIV_SHIFT) & __AFS_FOLIO_PRIV_MASK;
-
- /* The upper bound is immediately beyond the region */
- return (x + 1) << afs_folio_dirty_resolution(folio);
-}
-
-static inline unsigned long afs_folio_dirty(struct folio *folio, size_t from, size_t to)
-{
- unsigned int res = afs_folio_dirty_resolution(folio);
- from >>= res;
- to = (to - 1) >> res;
- return (to << __AFS_FOLIO_PRIV_SHIFT) | from;
-}
-
-static inline unsigned long afs_folio_dirty_mmapped(unsigned long priv)
-{
- return priv | __AFS_FOLIO_PRIV_MMAPPED;
-}
-
-static inline bool afs_is_folio_dirty_mmapped(unsigned long priv)
-{
- return priv & __AFS_FOLIO_PRIV_MMAPPED;
-}
-
#include <trace/events/afs.h>
/*****************************************************************************/
diff --git a/fs/afs/write.c b/fs/afs/write.c
index 571f3b9a417e..d2f6623c8eab 100644
--- a/fs/afs/write.c
+++ b/fs/afs/write.c
@@ -14,11 +14,6 @@
#include <linux/netfs.h>
#include "internal.h"
-static int afs_writepages_region(struct address_space *mapping,
- struct writeback_control *wbc,
- loff_t start, loff_t end, loff_t *_next,
- bool max_one_loop);
-
static void afs_write_to_cache(struct afs_vnode *vnode, loff_t start, size_t len,
loff_t i_size, bool caching);
@@ -43,25 +38,6 @@ static void afs_folio_start_fscache(bool caching, struct folio *folio)
}
#endif
-/*
- * Flush out a conflicting write. This may extend the write to the surrounding
- * pages if also dirty and contiguous to the conflicting region..
- */
-static int afs_flush_conflicting_write(struct address_space *mapping,
- struct folio *folio)
-{
- struct writeback_control wbc = {
- .sync_mode = WB_SYNC_ALL,
- .nr_to_write = LONG_MAX,
- .range_start = folio_pos(folio),
- .range_end = LLONG_MAX,
- };
- loff_t next;
-
- return afs_writepages_region(mapping, &wbc, folio_pos(folio), LLONG_MAX,
- &next, true);
-}
-
/*
* prepare to perform part of a write to a page
*/
@@ -71,10 +47,6 @@ int afs_write_begin(struct file *file, struct address_space *mapping,
{
struct afs_vnode *vnode = AFS_FS_I(file_inode(file));
struct folio *folio;
- unsigned long priv;
- unsigned f, from;
- unsigned t, to;
- pgoff_t index;
int ret;
_enter("{%llx:%llu},%llx,%x",
@@ -88,49 +60,17 @@ int afs_write_begin(struct file *file, struct address_space *mapping,
if (ret < 0)
return ret;
- index = folio_index(folio);
- from = pos - index * PAGE_SIZE;
- to = from + len;
-
try_again:
- /* See if this page is already partially written in a way that we can
- * merge the new write with.
- */
- if (folio_test_private(folio)) {
- priv = (unsigned long)folio_get_private(folio);
- f = afs_folio_dirty_from(folio, priv);
- t = afs_folio_dirty_to(folio, priv);
- ASSERTCMP(f, <=, t);
-
- if (folio_test_writeback(folio)) {
- trace_afs_folio_dirty(vnode, tracepoint_string("alrdy"), folio);
- folio_unlock(folio);
- goto wait_for_writeback;
- }
- /* If the file is being filled locally, allow inter-write
- * spaces to be merged into writes. If it's not, only write
- * back what the user gives us.
- */
- if (!test_bit(AFS_VNODE_NEW_CONTENT, &vnode->flags) &&
- (to < f || from > t))
- goto flush_conflicting_write;
+ if (folio_test_writeback(folio)) {
+ trace_afs_folio_dirty(vnode, tracepoint_string("alrdy"), folio);
+ folio_unlock(folio);
+ goto wait_for_writeback;
}
*_page = folio_file_page(folio, pos / PAGE_SIZE);
_leave(" = 0");
return 0;
- /* The previous write and this write aren't adjacent or overlapping, so
- * flush the page out.
- */
-flush_conflicting_write:
- trace_afs_folio_dirty(vnode, tracepoint_string("confl"), folio);
- folio_unlock(folio);
-
- ret = afs_flush_conflicting_write(mapping, folio);
- if (ret < 0)
- goto error;
-
wait_for_writeback:
ret = folio_wait_writeback_killable(folio);
if (ret < 0)
@@ -156,9 +96,6 @@ int afs_write_end(struct file *file, struct address_space *mapping,
{
struct folio *folio = page_folio(subpage);
struct afs_vnode *vnode = AFS_FS_I(file_inode(file));
- unsigned long priv;
- unsigned int f, from = offset_in_folio(folio, pos);
- unsigned int t, to = from + copied;
loff_t i_size, write_end_pos;
_enter("{%llx:%llu},{%lx}",
@@ -188,25 +125,10 @@ int afs_write_end(struct file *file, struct address_space *mapping,
fscache_update_cookie(afs_vnode_cache(vnode), NULL, &write_end_pos);
}
- if (folio_test_private(folio)) {
- priv = (unsigned long)folio_get_private(folio);
- f = afs_folio_dirty_from(folio, priv);
- t = afs_folio_dirty_to(folio, priv);
- if (from < f)
- f = from;
- if (to > t)
- t = to;
- priv = afs_folio_dirty(folio, f, t);
- folio_change_private(folio, (void *)priv);
- trace_afs_folio_dirty(vnode, tracepoint_string("dirty+"), folio);
- } else {
- priv = afs_folio_dirty(folio, from, to);
- folio_attach_private(folio, (void *)priv);
- trace_afs_folio_dirty(vnode, tracepoint_string("dirty"), folio);
- }
-
if (folio_mark_dirty(folio))
- _debug("dirtied %lx", folio_index(folio));
+ trace_afs_folio_dirty(vnode, tracepoint_string("dirty"), folio);
+ else
+ trace_afs_folio_dirty(vnode, tracepoint_string("dirty+"), folio);
out:
folio_unlock(folio);
@@ -465,18 +387,16 @@ static void afs_extend_writeback(struct address_space *mapping,
bool caching,
unsigned int *_len)
{
- struct pagevec pvec;
+ struct folio_batch batch;
struct folio *folio;
- unsigned long priv;
- unsigned int psize, filler = 0;
- unsigned int f, t;
+ size_t psize;
loff_t len = *_len;
pgoff_t index = (start + len) / PAGE_SIZE;
bool stop = true;
unsigned int i;
-
XA_STATE(xas, &mapping->i_pages, index);
- pagevec_init(&pvec);
+
+ folio_batch_init(&batch);
do {
/* Firstly, we gather up a batch of contiguous dirty pages
@@ -493,7 +413,6 @@ static void afs_extend_writeback(struct address_space *mapping,
break;
if (folio_index(folio) != index)
break;
-
if (!folio_try_get_rcu(folio)) {
xas_reset(&xas);
continue;
@@ -518,24 +437,13 @@ static void afs_extend_writeback(struct address_space *mapping,
}
psize = folio_size(folio);
- priv = (unsigned long)folio_get_private(folio);
- f = afs_folio_dirty_from(folio, priv);
- t = afs_folio_dirty_to(folio, priv);
- if (f != 0 && !new_content) {
- folio_unlock(folio);
- folio_put(folio);
- break;
- }
-
- len += filler + t;
- filler = psize - t;
+ len += psize;
+ stop = false;
if (len >= max_len || *_count <= 0)
stop = true;
- else if (t == psize || new_content)
- stop = false;
index += folio_nr_pages(folio);
- if (!pagevec_add(&pvec, &folio->page))
+ if (!folio_batch_add(&batch, folio))
break;
if (stop)
break;
@@ -548,11 +456,11 @@ static void afs_extend_writeback(struct address_space *mapping,
/* Now, if we obtained any pages, we can shift them to being
* writable and mark them for caching.
*/
- if (!pagevec_count(&pvec))
+ if (!folio_batch_count(&batch))
break;
- for (i = 0; i < pagevec_count(&pvec); i++) {
- folio = page_folio(pvec.pages[i]);
+ for (i = 0; i < folio_batch_count(&batch); i++) {
+ folio = batch.folios[i];
trace_afs_folio_dirty(vnode, tracepoint_string("store+"), folio);
if (!folio_clear_dirty_for_io(folio))
@@ -565,7 +473,7 @@ static void afs_extend_writeback(struct address_space *mapping,
folio_unlock(folio);
}
- pagevec_release(&pvec);
+ folio_batch_release(&batch);
cond_resched();
} while (!stop);
@@ -583,8 +491,7 @@ static ssize_t afs_write_back_from_locked_folio(struct address_space *mapping,
{
struct afs_vnode *vnode = AFS_FS_I(mapping->host);
struct iov_iter iter;
- unsigned long priv;
- unsigned int offset, to, len, max_len;
+ unsigned int len, max_len;
loff_t i_size = i_size_read(&vnode->netfs.inode);
bool new_content = test_bit(AFS_VNODE_NEW_CONTENT, &vnode->flags);
bool caching = fscache_cookie_enabled(afs_vnode_cache(vnode));
@@ -599,18 +506,14 @@ static ssize_t afs_write_back_from_locked_folio(struct address_space *mapping,
count -= folio_nr_pages(folio);
- /* Find all consecutive lockable dirty pages that have contiguous
- * written regions, stopping when we find a page that is not
- * immediately lockable, is not dirty or is missing, or we reach the
- * end of the range.
+ /* Find all consecutive lockable dirty pages, stopping when we find a
+ * page that is not immediately lockable, is not dirty or is missing,
+ * or we reach the end of the range.
*/
- priv = (unsigned long)folio_get_private(folio);
- offset = afs_folio_dirty_from(folio, priv);
- to = afs_folio_dirty_to(folio, priv);
trace_afs_folio_dirty(vnode, tracepoint_string("store"), folio);
- len = to - offset;
- start += offset;
+ len = folio_size(folio);
+ start = folio_pos(folio);
if (start < i_size) {
/* Trim the write to the EOF; the extra data is ignored. Also
* put an upper limit on the size of a single storedata op.
@@ -619,8 +522,7 @@ static ssize_t afs_write_back_from_locked_folio(struct address_space *mapping,
max_len = min_t(unsigned long long, max_len, end - start + 1);
max_len = min_t(unsigned long long, max_len, i_size - start);
- if (len < max_len &&
- (to == folio_size(folio) || new_content))
+ if (len < max_len)
afs_extend_writeback(mapping, vnode, &count,
start, max_len, new_content,
caching, &len);
@@ -909,7 +811,6 @@ vm_fault_t afs_page_mkwrite(struct vm_fault *vmf)
struct inode *inode = file_inode(file);
struct afs_vnode *vnode = AFS_FS_I(inode);
struct afs_file *af = file->private_data;
- unsigned long priv;
vm_fault_t ret = VM_FAULT_RETRY;
_enter("{{%llx:%llu}},{%lx}", vnode->fid.vid, vnode->fid.vnode, folio_index(folio));
@@ -942,15 +843,7 @@ vm_fault_t afs_page_mkwrite(struct vm_fault *vmf)
goto out;
}
- priv = afs_folio_dirty(folio, 0, folio_size(folio));
- priv = afs_folio_dirty_mmapped(priv);
- if (folio_test_private(folio)) {
- folio_change_private(folio, (void *)priv);
- trace_afs_folio_dirty(vnode, tracepoint_string("mkwrite+"), folio);
- } else {
- folio_attach_private(folio, (void *)priv);
- trace_afs_folio_dirty(vnode, tracepoint_string("mkwrite"), folio);
- }
+ trace_afs_folio_dirty(vnode, tracepoint_string("mkwrite"), folio);
file_update_time(file);
ret = VM_FAULT_LOCKED;
@@ -992,33 +885,33 @@ void afs_prune_wb_keys(struct afs_vnode *vnode)
*/
int afs_launder_folio(struct folio *folio)
{
- struct afs_vnode *vnode = AFS_FS_I(folio_inode(folio));
+ struct inode *inode = folio_inode(folio);
+ struct afs_vnode *vnode = AFS_FS_I(inode);
struct iov_iter iter;
struct bio_vec bv;
- unsigned long priv;
- unsigned int f, t;
int ret = 0;
_enter("{%lx}", folio->index);
- priv = (unsigned long)folio_get_private(folio);
if (folio_clear_dirty_for_io(folio)) {
- f = 0;
- t = folio_size(folio);
- if (folio_test_private(folio)) {
- f = afs_folio_dirty_from(folio, priv);
- t = afs_folio_dirty_to(folio, priv);
- }
+ unsigned long long i_size = i_size_read(inode);
+ unsigned long long pos = folio_pos(folio);
+ size_t size = folio_size(folio);
- bvec_set_folio(&bv, folio, t - f, f);
- iov_iter_bvec(&iter, ITER_SOURCE, &bv, 1, bv.bv_len);
+ if (pos >= i_size)
+ goto out;
+ if (i_size - pos < size)
+ size = i_size - pos;
+
+ bvec_set_folio(&bv, folio, size, 0);
+ iov_iter_bvec(&iter, ITER_SOURCE, &bv, 1, size);
trace_afs_folio_dirty(vnode, tracepoint_string("launder"), folio);
- ret = afs_store_data(vnode, &iter, folio_pos(folio) + f, true);
+ ret = afs_store_data(vnode, &iter, pos, true);
}
+out:
trace_afs_folio_dirty(vnode, tracepoint_string("laundered"), folio);
- folio_detach_private(folio);
folio_wait_fscache(folio);
return ret;
}
diff --git a/fs/cifs/file.c b/fs/cifs/file.c
index 4d4a2d82636d..3d304d4a54d6 100644
--- a/fs/cifs/file.c
+++ b/fs/cifs/file.c
@@ -2674,7 +2674,6 @@ static void cifs_extend_writeback(struct address_space *mapping,
break;
}
- max_pages -= nr_pages;
psize = folio_size(folio);
len += psize;
stop = false;
diff --git a/include/trace/events/afs.h b/include/trace/events/afs.h
index e9d412d19dbb..4540aa801edd 100644
--- a/include/trace/events/afs.h
+++ b/include/trace/events/afs.h
@@ -1025,26 +1025,16 @@ TRACE_EVENT(afs_folio_dirty,
__field(struct afs_vnode *, vnode )
__field(const char *, where )
__field(pgoff_t, index )
- __field(unsigned long, from )
- __field(unsigned long, to )
),
TP_fast_assign(
- unsigned long priv = (unsigned long)folio_get_private(folio);
__entry->vnode = vnode;
__entry->where = where;
__entry->index = folio_index(folio);
- __entry->from = afs_folio_dirty_from(folio, priv);
- __entry->to = afs_folio_dirty_to(folio, priv);
- __entry->to |= (afs_is_folio_dirty_mmapped(priv) ?
- (1UL << (BITS_PER_LONG - 1)) : 0);
),
- TP_printk("vn=%p %lx %s %lx-%lx%s",
- __entry->vnode, __entry->index, __entry->where,
- __entry->from,
- __entry->to & ~(1UL << (BITS_PER_LONG - 1)),
- __entry->to & (1UL << (BITS_PER_LONG - 1)) ? " M" : "")
+ TP_printk("vn=%p %lx %s",
+ __entry->vnode, __entry->index, __entry->where)
);
TRACE_EVENT(afs_call_state,
* Test patch to make afs use write_cache_pages()
From: David Howells @ 2023-03-02 23:29 UTC
To: Linus Torvalds, Steve French
Cc: dhowells, Vishal Moola, Shyam Prasad N, Rohith Surabattula,
Tom Talpey, Stefan Metzmacher, Paulo Alcantara, Jeff Layton,
Matthew Wilcox, Marc Dionne, linux-afs, linux-cifs,
linux-fsdevel, linux-kernel
David Howells <dhowells@redhat.com> wrote:
> AFS firstly. ...
>
> Base + Page-dirty-region tracking removed + write_cache_pages():
> WRITE: bw=288MiB/s (302MB/s), 71.9MiB/s-72.3MiB/s (75.4MB/s-75.8MB/s)
> WRITE: bw=284MiB/s (297MB/s), 70.7MiB/s-71.3MiB/s (74.1MB/s-74.8MB/s)
> WRITE: bw=287MiB/s (301MB/s), 71.2MiB/s-72.6MiB/s (74.7MB/s-76.1MB/s)
Here's a patch to make afs use write_cache_pages() with no per-page dirty
region tracking. afs_extend_writeback() is removed and the folios are
accumulated via a callback function.
This goes on top of "Test patch to remove per-page dirty region tracking from
afs".
David
---
write.c | 345 ++++++++++++----------------------------------------------------
1 file changed, 67 insertions(+), 278 deletions(-)
diff --git a/fs/afs/write.c b/fs/afs/write.c
index d2f6623c8eab..af414b72d42a 100644
--- a/fs/afs/write.c
+++ b/fs/afs/write.c
@@ -371,193 +371,55 @@ static int afs_store_data(struct afs_vnode *vnode, struct iov_iter *iter, loff_t
return afs_put_operation(op);
}
-/*
- * Extend the region to be written back to include subsequent contiguously
- * dirty pages if possible, but don't sleep while doing so.
- *
- * If this page holds new content, then we can include filler zeros in the
- * writeback.
- */
-static void afs_extend_writeback(struct address_space *mapping,
- struct afs_vnode *vnode,
- long *_count,
- loff_t start,
- loff_t max_len,
- bool new_content,
- bool caching,
- unsigned int *_len)
-{
- struct folio_batch batch;
- struct folio *folio;
- size_t psize;
- loff_t len = *_len;
- pgoff_t index = (start + len) / PAGE_SIZE;
- bool stop = true;
- unsigned int i;
- XA_STATE(xas, &mapping->i_pages, index);
-
- folio_batch_init(&batch);
-
- do {
- /* Firstly, we gather up a batch of contiguous dirty pages
- * under the RCU read lock - but we can't clear the dirty flags
- * there if any of those pages are mapped.
- */
- rcu_read_lock();
-
- xas_for_each(&xas, folio, ULONG_MAX) {
- stop = true;
- if (xas_retry(&xas, folio))
- continue;
- if (xa_is_value(folio))
- break;
- if (folio_index(folio) != index)
- break;
- if (!folio_try_get_rcu(folio)) {
- xas_reset(&xas);
- continue;
- }
-
- /* Has the page moved or been split? */
- if (unlikely(folio != xas_reload(&xas))) {
- folio_put(folio);
- break;
- }
-
- if (!folio_trylock(folio)) {
- folio_put(folio);
- break;
- }
- if (!folio_test_dirty(folio) ||
- folio_test_writeback(folio) ||
- folio_test_fscache(folio)) {
- folio_unlock(folio);
- folio_put(folio);
- break;
- }
-
- psize = folio_size(folio);
- len += psize;
- stop = false;
- if (len >= max_len || *_count <= 0)
- stop = true;
-
- index += folio_nr_pages(folio);
- if (!folio_batch_add(&batch, folio))
- break;
- if (stop)
- break;
- }
-
- if (!stop)
- xas_pause(&xas);
- rcu_read_unlock();
-
- /* Now, if we obtained any pages, we can shift them to being
- * writable and mark them for caching.
- */
- if (!folio_batch_count(&batch))
- break;
-
- for (i = 0; i < folio_batch_count(&batch); i++) {
- folio = batch.folios[i];
- trace_afs_folio_dirty(vnode, tracepoint_string("store+"), folio);
-
- if (!folio_clear_dirty_for_io(folio))
- BUG();
- if (folio_start_writeback(folio))
- BUG();
- afs_folio_start_fscache(caching, folio);
-
- *_count -= folio_nr_pages(folio);
- folio_unlock(folio);
- }
-
- folio_batch_release(&batch);
- cond_resched();
- } while (!stop);
-
- *_len = len;
-}
+struct afs_writepages_context {
+ unsigned long long start;
+ size_t len;
+ bool begun;
+ bool caching;
+};
/*
- * Synchronously write back the locked page and any subsequent non-locked dirty
- * pages.
+ * Flush a block of pages to the server and the cache.
*/
-static ssize_t afs_write_back_from_locked_folio(struct address_space *mapping,
- struct writeback_control *wbc,
- struct folio *folio,
- loff_t start, loff_t end)
+static int afs_writepages_submit(struct address_space *mapping,
+ struct writeback_control *wbc,
+ struct afs_writepages_context *ctx)
{
struct afs_vnode *vnode = AFS_FS_I(mapping->host);
struct iov_iter iter;
- unsigned int len, max_len;
- loff_t i_size = i_size_read(&vnode->netfs.inode);
- bool new_content = test_bit(AFS_VNODE_NEW_CONTENT, &vnode->flags);
- bool caching = fscache_cookie_enabled(afs_vnode_cache(vnode));
- long count = wbc->nr_to_write;
+ unsigned long long i_size = i_size_read(&vnode->netfs.inode);
int ret;
- _enter(",%lx,%llx-%llx", folio_index(folio), start, end);
-
- if (folio_start_writeback(folio))
- BUG();
- afs_folio_start_fscache(caching, folio);
-
- count -= folio_nr_pages(folio);
-
- /* Find all consecutive lockable dirty pages, stopping when we find a
- * page that is not immediately lockable, is not dirty or is missing,
- * or we reach the end of the range.
- */
- trace_afs_folio_dirty(vnode, tracepoint_string("store"), folio);
-
- len = folio_size(folio);
- start = folio_pos(folio);
- if (start < i_size) {
- /* Trim the write to the EOF; the extra data is ignored. Also
- * put an upper limit on the size of a single storedata op.
- */
- max_len = 65536 * 4096;
- max_len = min_t(unsigned long long, max_len, end - start + 1);
- max_len = min_t(unsigned long long, max_len, i_size - start);
-
- if (len < max_len)
- afs_extend_writeback(mapping, vnode, &count,
- start, max_len, new_content,
- caching, &len);
- len = min_t(loff_t, len, max_len);
- }
+ _enter("%llx-%llx", ctx->start, ctx->start + ctx->len - 1);
/* We now have a contiguous set of dirty pages, each with writeback
- * set; the first page is still locked at this point, but all the rest
- * have been unlocked.
+ * set.
*/
- folio_unlock(folio);
-
- if (start < i_size) {
- _debug("write back %x @%llx [%llx]", len, start, i_size);
+ if (ctx->start < i_size) {
+ if (ctx->len > i_size - ctx->start)
+ ctx->len = i_size - ctx->start;
+ _debug("write back %zx @%llx [%llx]", ctx->len, ctx->start, i_size);
/* Speculatively write to the cache. We have to fix this up
* later if the store fails.
*/
- afs_write_to_cache(vnode, start, len, i_size, caching);
+ afs_write_to_cache(vnode, ctx->start, ctx->len, i_size, ctx->caching);
- iov_iter_xarray(&iter, ITER_SOURCE, &mapping->i_pages, start, len);
- ret = afs_store_data(vnode, &iter, start, false);
+ iov_iter_xarray(&iter, ITER_SOURCE,
+ &mapping->i_pages, ctx->start, ctx->len);
+ ret = afs_store_data(vnode, &iter, ctx->start, false);
} else {
- _debug("write discard %x @%llx [%llx]", len, start, i_size);
+ _debug("write discard %zx @%llx [%llx]", ctx->len, ctx->start, i_size);
/* The dirty region was entirely beyond the EOF. */
- fscache_clear_page_bits(mapping, start, len, caching);
- afs_pages_written_back(vnode, start, len);
+ fscache_clear_page_bits(mapping, ctx->start, ctx->len, ctx->caching);
+ afs_pages_written_back(vnode, ctx->start, ctx->len);
ret = 0;
}
switch (ret) {
case 0:
- wbc->nr_to_write = count;
- ret = len;
+ ret = ctx->len;
break;
default:
@@ -570,13 +432,13 @@ static ssize_t afs_write_back_from_locked_folio(struct address_space *mapping,
case -EKEYREJECTED:
case -EKEYREVOKED:
case -ENETRESET:
- afs_redirty_pages(wbc, mapping, start, len);
+ afs_redirty_pages(wbc, mapping, ctx->start, ctx->len);
mapping_set_error(mapping, ret);
break;
case -EDQUOT:
case -ENOSPC:
- afs_redirty_pages(wbc, mapping, start, len);
+ afs_redirty_pages(wbc, mapping, ctx->start, ctx->len);
mapping_set_error(mapping, -ENOSPC);
break;
@@ -588,7 +450,7 @@ static ssize_t afs_write_back_from_locked_folio(struct address_space *mapping,
case -ENOMEDIUM:
case -ENXIO:
trace_afs_file_error(vnode, ret, afs_file_error_writeback_fail);
- afs_kill_pages(mapping, start, len);
+ afs_kill_pages(mapping, ctx->start, ctx->len);
mapping_set_error(mapping, ret);
break;
}
@@ -598,100 +460,43 @@ static ssize_t afs_write_back_from_locked_folio(struct address_space *mapping,
}
/*
- * write a region of pages back to the server
+ * Add a page to the set and flush when large enough.
*/
-static int afs_writepages_region(struct address_space *mapping,
- struct writeback_control *wbc,
- loff_t start, loff_t end, loff_t *_next,
- bool max_one_loop)
+static int afs_writepages_add_folio(struct folio *folio,
+ struct writeback_control *wbc, void *data)
{
- struct folio *folio;
- struct folio_batch fbatch;
- ssize_t ret;
- unsigned int i;
- int n, skips = 0;
-
- _enter("%llx,%llx,", start, end);
- folio_batch_init(&fbatch);
-
- do {
- pgoff_t index = start / PAGE_SIZE;
-
- n = filemap_get_folios_tag(mapping, &index, end / PAGE_SIZE,
- PAGECACHE_TAG_DIRTY, &fbatch);
+ struct afs_writepages_context *ctx = data;
+ struct afs_vnode *vnode = AFS_FS_I(folio->mapping->host);
+ int ret;
- if (!n)
- break;
- for (i = 0; i < n; i++) {
- folio = fbatch.folios[i];
- start = folio_pos(folio); /* May regress with THPs */
-
- _debug("wback %lx", folio_index(folio));
-
- /* At this point we hold neither the i_pages lock nor the
- * page lock: the page may be truncated or invalidated
- * (changing page->mapping to NULL), or even swizzled
- * back from swapper_space to tmpfs file mapping
- */
- if (wbc->sync_mode != WB_SYNC_NONE) {
- ret = folio_lock_killable(folio);
- if (ret < 0) {
- folio_batch_release(&fbatch);
- return ret;
- }
- } else {
- if (!folio_trylock(folio))
- continue;
- }
-
- if (folio->mapping != mapping ||
- !folio_test_dirty(folio)) {
- start += folio_size(folio);
- folio_unlock(folio);
- continue;
- }
-
- if (folio_test_writeback(folio) ||
- folio_test_fscache(folio)) {
- folio_unlock(folio);
- if (wbc->sync_mode != WB_SYNC_NONE) {
- folio_wait_writeback(folio);
-#ifdef CONFIG_AFS_FSCACHE
- folio_wait_fscache(folio);
-#endif
- } else {
- start += folio_size(folio);
- }
- if (wbc->sync_mode == WB_SYNC_NONE) {
- if (skips >= 5 || need_resched()) {
- *_next = start;
- _leave(" = 0 [%llx]", *_next);
- return 0;
- }
- skips++;
- }
- continue;
- }
-
- if (!folio_clear_dirty_for_io(folio))
- BUG();
- ret = afs_write_back_from_locked_folio(mapping, wbc,
- folio, start, end);
- if (ret < 0) {
- _leave(" = %zd", ret);
- folio_batch_release(&fbatch);
- return ret;
- }
-
- start += ret;
+ if (ctx->begun) {
+ if (folio_pos(folio) == ctx->start + ctx->len) {
+ trace_afs_folio_dirty(vnode, tracepoint_string("store+"), folio);
+ goto add;
}
+ ret = afs_writepages_submit(folio->mapping, wbc, ctx);
+ if (ret < 0)
+ return ret;
+ }
- folio_batch_release(&fbatch);
- cond_resched();
- } while (wbc->nr_to_write > 0);
+ ctx->begun = true;
+ ctx->start = folio_pos(folio);
+ ctx->len = 0;
+ trace_afs_folio_dirty(vnode, tracepoint_string("store"), folio);
+add:
+ ctx->len += folio_size(folio);
- *_next = start;
- _leave(" = 0 [%llx]", *_next);
+ folio_wait_fscache(folio);
+ folio_start_writeback(folio);
+ afs_folio_start_fscache(ctx->caching, folio);
+ folio_unlock(folio);
+
+ if (ctx->len >= 65536 * 4096) {
+ ret = afs_writepages_submit(folio->mapping, wbc, ctx);
+ if (ret < 0)
+ return ret;
+ ctx->begun = false;
+ }
return 0;
}
@@ -702,7 +507,9 @@ int afs_writepages(struct address_space *mapping,
struct writeback_control *wbc)
{
struct afs_vnode *vnode = AFS_FS_I(mapping->host);
- loff_t start, next;
+ struct afs_writepages_context ctx = {
+ .caching = fscache_cookie_enabled(afs_vnode_cache(vnode)),
+ };
int ret;
_enter("");
@@ -716,29 +523,11 @@ int afs_writepages(struct address_space *mapping,
else if (!down_read_trylock(&vnode->validate_lock))
return 0;
- if (wbc->range_cyclic) {
- start = mapping->writeback_index * PAGE_SIZE;
- ret = afs_writepages_region(mapping, wbc, start, LLONG_MAX,
- &next, false);
- if (ret == 0) {
- mapping->writeback_index = next / PAGE_SIZE;
- if (start > 0 && wbc->nr_to_write > 0) {
- ret = afs_writepages_region(mapping, wbc, 0,
- start, &next, false);
- if (ret == 0)
- mapping->writeback_index =
- next / PAGE_SIZE;
- }
- }
- } else if (wbc->range_start == 0 && wbc->range_end == LLONG_MAX) {
- ret = afs_writepages_region(mapping, wbc, 0, LLONG_MAX,
- &next, false);
- if (wbc->nr_to_write > 0 && ret == 0)
- mapping->writeback_index = next / PAGE_SIZE;
- } else {
- ret = afs_writepages_region(mapping, wbc,
- wbc->range_start, wbc->range_end,
- &next, false);
+ ret = write_cache_pages(mapping, wbc, afs_writepages_add_folio, &ctx);
+ if (ret >= 0 && ctx.begun) {
+ ret = afs_writepages_submit(mapping, wbc, &ctx);
+ if (ret < 0)
+ return ret;
}
up_read(&vnode->validate_lock);
* Test patch to make afs use its own version of write_cache_pages()
From: David Howells @ 2023-03-02 23:32 UTC
To: Linus Torvalds, Steve French
Cc: dhowells, Vishal Moola, Shyam Prasad N, Rohith Surabattula,
Tom Talpey, Stefan Metzmacher, Paulo Alcantara, Jeff Layton,
Matthew Wilcox, Marc Dionne, linux-afs, linux-cifs,
linux-fsdevel, linux-kernel
David Howells <dhowells@redhat.com> wrote:
> AFS firstly. ...
>
> Base + Page-dirty-region tracking removed + Own write_cache_pages()
> WRITE: bw=302MiB/s (316MB/s), 75.1MiB/s-76.1MiB/s (78.7MB/s-79.8MB/s)
> WRITE: bw=302MiB/s (316MB/s), 74.5MiB/s-76.1MiB/s (78.1MB/s-79.8MB/s)
> WRITE: bw=301MiB/s (316MB/s), 75.2MiB/s-75.5MiB/s (78.9MB/s-79.1MB/s)
This goes on top of "Test patch to remove per-page dirty region tracking from
afs" and "Test patch to make afs use write_cache_pages()".
David
---
write.c | 141 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++--
1 file changed, 138 insertions(+), 3 deletions(-)
diff --git a/fs/afs/write.c b/fs/afs/write.c
index 86b6e7cbe17c..d66c05acda8c 100644
--- a/fs/afs/write.c
+++ b/fs/afs/write.c
@@ -463,9 +463,9 @@ static int afs_writepages_submit(struct address_space *mapping,
* Add a page to the set and flush when large enough.
*/
static int afs_writepages_add_folio(struct folio *folio,
- struct writeback_control *wbc, void *data)
+ struct writeback_control *wbc,
+ struct afs_writepages_context *ctx)
{
- struct afs_writepages_context *ctx = data;
struct afs_vnode *vnode = AFS_FS_I(folio->mapping->host);
int ret;
@@ -499,6 +499,141 @@ static int afs_writepages_add_folio(struct folio *folio,
}
return 0;
}
+static int afs_write_cache_pages(struct address_space *mapping,
+ struct writeback_control *wbc,
+ struct afs_writepages_context *ctx)
+{
+ int ret = 0;
+ int done = 0;
+ int error;
+ struct folio_batch fbatch;
+ int nr_folios;
+ pgoff_t index;
+ pgoff_t end; /* Inclusive */
+ pgoff_t done_index;
+ int range_whole = 0;
+ xa_mark_t tag;
+
+ folio_batch_init(&fbatch);
+ if (wbc->range_cyclic) {
+ index = mapping->writeback_index; /* prev offset */
+ end = -1;
+ } else {
+ index = wbc->range_start >> PAGE_SHIFT;
+ end = wbc->range_end >> PAGE_SHIFT;
+ if (wbc->range_start == 0 && wbc->range_end == LLONG_MAX)
+ range_whole = 1;
+ }
+ if (wbc->sync_mode == WB_SYNC_ALL || wbc->tagged_writepages) {
+ tag_pages_for_writeback(mapping, index, end);
+ tag = PAGECACHE_TAG_TOWRITE;
+ } else {
+ tag = PAGECACHE_TAG_DIRTY;
+ }
+ done_index = index;
+ while (!done && (index <= end)) {
+ int i;
+
+ nr_folios = filemap_get_folios_tag(mapping, &index, end,
+ tag, &fbatch);
+
+ if (nr_folios == 0)
+ break;
+
+ for (i = 0; i < nr_folios; i++) {
+ struct folio *folio = fbatch.folios[i];
+
+ done_index = folio->index;
+
+ folio_lock(folio);
+
+ /*
+ * Page truncated or invalidated. We can freely skip it
+ * then, even for data integrity operations: the page
+ * has disappeared concurrently, so there could be no
+ * real expectation of this data integrity operation
+ * even if there is now a new, dirty page at the same
+ * pagecache address.
+ */
+ if (unlikely(folio->mapping != mapping)) {
+continue_unlock:
+ folio_unlock(folio);
+ continue;
+ }
+
+ if (!folio_test_dirty(folio)) {
+ /* someone wrote it for us */
+ goto continue_unlock;
+ }
+
+ if (folio_test_writeback(folio)) {
+ if (wbc->sync_mode != WB_SYNC_NONE)
+ folio_wait_writeback(folio);
+ else
+ goto continue_unlock;
+ }
+
+ BUG_ON(folio_test_writeback(folio));
+ if (!folio_clear_dirty_for_io(folio))
+ goto continue_unlock;
+
+ //trace_wbc_writepage(wbc, inode_to_bdi(mapping->host));
+ error = afs_writepages_add_folio(folio, wbc, ctx);
+ if (unlikely(error)) {
+ /*
+ * Handle errors according to the type of
+ * writeback. There's no need to continue for
+ * background writeback. Just push done_index
+ * past this page so media errors won't choke
+ * writeout for the entire file. For integrity
+ * writeback, we must process the entire dirty
+ * set regardless of errors because the fs may
+ * still have state to clear for each page. In
+ * that case we continue processing and return
+ * the first error.
+ */
+ if (error == AOP_WRITEPAGE_ACTIVATE) {
+ folio_unlock(folio);
+ error = 0;
+ } else if (wbc->sync_mode != WB_SYNC_ALL) {
+ ret = error;
+ done_index = folio->index +
+ folio_nr_pages(folio);
+ done = 1;
+ break;
+ }
+ if (!ret)
+ ret = error;
+ }
+
+ /*
+ * We stop writing back only if we are not doing
+ * integrity sync. In case of integrity sync we have to
+ * keep going until we have written all the pages
+ * we tagged for writeback prior to entering this loop.
+ */
+ if (--wbc->nr_to_write <= 0 &&
+ wbc->sync_mode == WB_SYNC_NONE) {
+ done = 1;
+ break;
+ }
+ }
+ folio_batch_release(&fbatch);
+ cond_resched();
+ }
+
+ /*
+ * If we hit the last page and there is more work to be done: wrap
+ * back the index back to the start of the file for the next
+ * time we are called.
+ */
+ if (wbc->range_cyclic && !done)
+ done_index = 0;
+ if (wbc->range_cyclic || (range_whole && wbc->nr_to_write > 0))
+ mapping->writeback_index = done_index;
+
+ return ret;
+}
/*
* write some of the pending data back to the server
@@ -523,7 +658,7 @@ int afs_writepages(struct address_space *mapping,
else if (!down_read_trylock(&vnode->validate_lock))
return 0;
- ret = write_cache_pages(mapping, wbc, afs_writepages_add_folio, &ctx);
+ ret = afs_write_cache_pages(mapping, wbc, &ctx);
if (ret >= 0 && ctx.begun)
ret = afs_writepages_submit(mapping, wbc, &ctx);
* cifs test patch to convert to using write_cache_pages()
From: David Howells @ 2023-03-02 23:36 UTC
To: Linus Torvalds, Steve French
Cc: dhowells, Vishal Moola, Shyam Prasad N, Rohith Surabattula,
Tom Talpey, Stefan Metzmacher, Paulo Alcantara, Jeff Layton,
Matthew Wilcox, Marc Dionne, linux-afs, linux-cifs,
linux-fsdevel, linux-kernel
David Howells <dhowells@redhat.com> wrote:
> And then CIFS. ...
>
> Base + write_cache_pages():
> WRITE: bw=457MiB/s (479MB/s), 114MiB/s-114MiB/s (120MB/s-120MB/s)
> WRITE: bw=449MiB/s (471MB/s), 112MiB/s-113MiB/s (118MB/s-118MB/s)
> WRITE: bw=459MiB/s (482MB/s), 115MiB/s-115MiB/s (120MB/s-121MB/s)
Here's my patch to convert cifs to use write_cache_pages().
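The accumulate-and-flush callback has the same shape as the afs one, with one
extra constraint: a single SMB write op is bounded by the wsize/credits
granted when the op was set up, so the callback flushes not only when
contiguity breaks but also when adding the folio would overflow ctx->wsize.
Condensed from cifs_writepages_add_folio() in the patch below:

	if (ctx->begun) {
		/* Grow the span only while it stays contiguous and still
		 * fits in the write size granted for this op.
		 */
		if (pos == ctx->start + ctx->len &&
		    ctx->len + size <= ctx->wsize)
			goto add;
		ret = cifs_writepages_submit(folio->mapping, wbc, ctx);
		if (ret < 0) {
			ctx->begun = false;
			return ret;
		}
	}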
David
---
fs/cifs/file.c | 400 ++++++++++++++++-----------------------------------------
1 file changed, 115 insertions(+), 285 deletions(-)
diff --git a/fs/cifs/file.c b/fs/cifs/file.c
index 3d304d4a54d6..04e2466609d9 100644
--- a/fs/cifs/file.c
+++ b/fs/cifs/file.c
@@ -2613,140 +2613,35 @@ static int cifs_partialpagewrite(struct page *page, unsigned from, unsigned to)
return rc;
}
-/*
- * Extend the region to be written back to include subsequent contiguously
- * dirty pages if possible, but don't sleep while doing so.
- */
-static void cifs_extend_writeback(struct address_space *mapping,
- long *_count,
- loff_t start,
- int max_pages,
- size_t max_len,
- unsigned int *_len)
-{
- struct folio_batch batch;
- struct folio *folio;
- unsigned int psize, nr_pages;
- size_t len = *_len;
- pgoff_t index = (start + len) / PAGE_SIZE;
- bool stop = true;
- unsigned int i;
- XA_STATE(xas, &mapping->i_pages, index);
-
- folio_batch_init(&batch);
-
- do {
- /* Firstly, we gather up a batch of contiguous dirty pages
- * under the RCU read lock - but we can't clear the dirty flags
- * there if any of those pages are mapped.
- */
- rcu_read_lock();
-
- xas_for_each(&xas, folio, ULONG_MAX) {
- stop = true;
- if (xas_retry(&xas, folio))
- continue;
- if (xa_is_value(folio))
- break;
- if (folio_index(folio) != index)
- break;
- if (!folio_try_get_rcu(folio)) {
- xas_reset(&xas);
- continue;
- }
- nr_pages = folio_nr_pages(folio);
- if (nr_pages > max_pages)
- break;
-
- /* Has the page moved or been split? */
- if (unlikely(folio != xas_reload(&xas))) {
- folio_put(folio);
- break;
- }
-
- if (!folio_trylock(folio)) {
- folio_put(folio);
- break;
- }
- if (!folio_test_dirty(folio) || folio_test_writeback(folio)) {
- folio_unlock(folio);
- folio_put(folio);
- break;
- }
-
- psize = folio_size(folio);
- len += psize;
- stop = false;
- if (max_pages <= 0 || len >= max_len || *_count <= 0)
- stop = true;
-
- index += nr_pages;
- if (!folio_batch_add(&batch, folio))
- break;
- if (stop)
- break;
- }
-
- if (!stop)
- xas_pause(&xas);
- rcu_read_unlock();
-
- /* Now, if we obtained any pages, we can shift them to being
- * writable and mark them for caching.
- */
- if (!folio_batch_count(&batch))
- break;
-
- for (i = 0; i < folio_batch_count(&batch); i++) {
- folio = batch.folios[i];
- /* The folio should be locked, dirty and not undergoing
- * writeback from the loop above.
- */
- if (!folio_clear_dirty_for_io(folio))
- WARN_ON(1);
- if (folio_start_writeback(folio))
- WARN_ON(1);
-
- *_count -= folio_nr_pages(folio);
- folio_unlock(folio);
- }
-
- folio_batch_release(&batch);
- cond_resched();
- } while (!stop);
-
- *_len = len;
-}
+struct cifs_writepages_context {
+ struct cifs_writedata *wdata;
+ struct TCP_Server_Info *server;
+ struct cifs_credits credits;
+ unsigned long long start;
+ size_t len;
+ size_t wsize;
+ unsigned int xid;
+ bool begun;
+ bool caching;
+};
/*
- * Write back the locked page and any subsequent non-locked dirty pages.
+ * Set up a writeback op.
*/
-static ssize_t cifs_write_back_from_locked_folio(struct address_space *mapping,
- struct writeback_control *wbc,
- struct folio *folio,
- loff_t start, loff_t end)
+static int cifs_writeback_begin(struct address_space *mapping,
+ struct writeback_control *wbc,
+ unsigned long long start,
+ struct cifs_writepages_context *ctx)
{
struct inode *inode = mapping->host;
struct TCP_Server_Info *server;
struct cifs_writedata *wdata;
struct cifs_sb_info *cifs_sb = CIFS_SB(inode->i_sb);
- struct cifs_credits credits_on_stack;
- struct cifs_credits *credits = &credits_on_stack;
struct cifsFileInfo *cfile = NULL;
- unsigned int xid, wsize, len;
- loff_t i_size = i_size_read(inode);
- size_t max_len;
- long count = wbc->nr_to_write;
+ unsigned int wsize, len;
int rc;
- /* The folio should be locked, dirty and not undergoing writeback. */
- if (folio_start_writeback(folio))
- WARN_ON(1);
-
- count -= folio_nr_pages(folio);
- len = folio_size(folio);
-
- xid = get_xid();
+ ctx->xid = get_xid();
server = cifs_pick_channel(cifs_sb_master_tcon(cifs_sb)->ses);
rc = cifs_get_writable_file(CIFS_I(inode), FIND_WR_ANY, &cfile);
@@ -2756,7 +2651,7 @@ static ssize_t cifs_write_back_from_locked_folio(struct address_space *mapping,
}
rc = server->ops->wait_mtu_credits(server, cifs_sb->ctx->wsize,
- &wsize, credits);
+ &wsize, &ctx->credits);
if (rc != 0)
goto err_close;
@@ -2767,56 +2662,60 @@ static ssize_t cifs_write_back_from_locked_folio(struct address_space *mapping,
}
wdata->sync_mode = wbc->sync_mode;
- wdata->offset = folio_pos(folio);
+ wdata->offset = start;
wdata->pid = cfile->pid;
- wdata->credits = credits_on_stack;
+ wdata->credits = ctx->credits;
wdata->cfile = cfile;
wdata->server = server;
- cfile = NULL;
-
- /* Find all consecutive lockable dirty pages, stopping when we find a
- * page that is not immediately lockable, is not dirty or is missing,
- * or we reach the end of the range.
- */
- if (start < i_size) {
- /* Trim the write to the EOF; the extra data is ignored. Also
- * put an upper limit on the size of a single storedata op.
- */
- max_len = wsize;
- max_len = min_t(unsigned long long, max_len, end - start + 1);
- max_len = min_t(unsigned long long, max_len, i_size - start);
-
- if (len < max_len) {
- int max_pages = INT_MAX;
-
-#ifdef CONFIG_CIFS_SMB_DIRECT
- if (server->smbd_conn)
- max_pages = server->smbd_conn->max_frmr_depth;
-#endif
- max_pages -= folio_nr_pages(folio);
+ ctx->wsize = wsize;
+ ctx->server = server;
+ ctx->wdata = wdata;
+ ctx->begun = true;
+ return 0;
- if (max_pages > 0)
- cifs_extend_writeback(mapping, &count, start,
- max_pages, max_len, &len);
- }
- len = min_t(loff_t, len, max_len);
+err_uncredit:
+ add_credits_and_wake_if(server, &ctx->credits, 0);
+err_close:
+ if (cfile)
+ cifsFileInfo_put(cfile);
+err_xid:
+ free_xid(ctx->xid);
+ if (is_retryable_error(rc)) {
+ cifs_pages_write_redirty(inode, start, len);
+ } else {
+ cifs_pages_write_failed(inode, start, len);
+ mapping_set_error(mapping, rc);
}
+ /* Indication to update ctime and mtime as close is deferred */
+ set_bit(CIFS_INO_MODIFIED_ATTR, &CIFS_I(inode)->flags);
+ return rc;
+}
- wdata->bytes = len;
+/*
+ * Flush a block of pages to the server and the cache.
+ */
+static int cifs_writepages_submit(struct address_space *mapping,
+ struct writeback_control *wbc,
+ struct cifs_writepages_context *ctx)
+{
+ struct TCP_Server_Info *server = ctx->server;
+ struct cifs_writedata *wdata = ctx->wdata;
+ unsigned long long i_size = i_size_read(mapping->host);
+ int rc;
- /* We now have a contiguous set of dirty pages, each with writeback
- * set; the first page is still locked at this point, but all the rest
- * have been unlocked.
+ /* We now have a contiguous set of dirty pages, each with
+ * writeback set.
*/
- folio_unlock(folio);
-
- if (start < i_size) {
- iov_iter_xarray(&wdata->iter, ITER_SOURCE, &mapping->i_pages,
- start, len);
+ if (ctx->start < i_size) {
+ if (ctx->len > i_size - ctx->start)
+ ctx->len = i_size - ctx->start;
+ wdata->bytes = ctx->len;
+ iov_iter_xarray(&wdata->iter, ITER_SOURCE,
+ &mapping->i_pages, ctx->start, wdata->bytes);
rc = adjust_credits(wdata->server, &wdata->credits, wdata->bytes);
if (rc)
- goto err_wdata;
+ goto err;
if (wdata->cfile->invalidHandle)
rc = -EAGAIN;
@@ -2827,133 +2726,79 @@ static ssize_t cifs_write_back_from_locked_folio(struct address_space *mapping,
kref_put(&wdata->refcount, cifs_writedata_release);
goto err_close;
}
+
} else {
/* The dirty region was entirely beyond the EOF. */
- cifs_pages_written_back(inode, start, len);
+ cifs_pages_written_back(mapping->host, ctx->start, ctx->len);
rc = 0;
}
-err_wdata:
+err:
kref_put(&wdata->refcount, cifs_writedata_release);
-err_uncredit:
- add_credits_and_wake_if(server, credits, 0);
+ add_credits_and_wake_if(server, &ctx->credits, 0);
err_close:
- if (cfile)
- cifsFileInfo_put(cfile);
-err_xid:
- free_xid(xid);
+ free_xid(ctx->xid);
if (rc == 0) {
- wbc->nr_to_write = count;
- rc = len;
+ rc = 0;
} else if (is_retryable_error(rc)) {
- cifs_pages_write_redirty(inode, start, len);
+ cifs_pages_write_redirty(mapping->host, ctx->start, ctx->len);
} else {
- cifs_pages_write_failed(inode, start, len);
+ cifs_pages_write_failed(mapping->host, ctx->start, ctx->len);
mapping_set_error(mapping, rc);
}
+
/* Indication to update ctime and mtime as close is deferred */
- set_bit(CIFS_INO_MODIFIED_ATTR, &CIFS_I(inode)->flags);
+ set_bit(CIFS_INO_MODIFIED_ATTR, &CIFS_I(mapping->host)->flags);
+ ctx->wdata = NULL;
+ ctx->begun = false;
return rc;
}
/*
- * write a region of pages back to the server
+ * Add a page to the set and flush when large enough.
*/
-static int cifs_writepages_region(struct address_space *mapping,
- struct writeback_control *wbc,
- loff_t start, loff_t end, loff_t *_next)
+static int cifs_writepages_add_folio(struct folio *folio,
+ struct writeback_control *wbc, void *data)
{
- struct folio_batch fbatch;
- int skips = 0;
-
- folio_batch_init(&fbatch);
- do {
- int nr;
- pgoff_t index = start / PAGE_SIZE;
-
- nr = filemap_get_folios_tag(mapping, &index, end / PAGE_SIZE,
- PAGECACHE_TAG_DIRTY, &fbatch);
- if (!nr)
- break;
-
- for (int i = 0; i < nr; i++) {
- ssize_t ret;
- struct folio *folio = fbatch.folios[i];
-
-redo_folio:
- start = folio_pos(folio); /* May regress with THPs */
-
- /* At this point we hold neither the i_pages lock nor the
- * page lock: the page may be truncated or invalidated
- * (changing page->mapping to NULL), or even swizzled
- * back from swapper_space to tmpfs file mapping
- */
- if (wbc->sync_mode != WB_SYNC_NONE) {
- ret = folio_lock_killable(folio);
- if (ret < 0)
- goto write_error;
- } else {
- if (!folio_trylock(folio))
- goto skip_write;
- }
-
- if (folio_mapping(folio) != mapping ||
- !folio_test_dirty(folio)) {
- start += folio_size(folio);
- folio_unlock(folio);
- continue;
- }
-
- if (folio_test_writeback(folio) ||
- folio_test_fscache(folio)) {
- folio_unlock(folio);
- if (wbc->sync_mode == WB_SYNC_NONE)
- goto skip_write;
-
- folio_wait_writeback(folio);
-#ifdef CONFIG_CIFS_FSCACHE
- folio_wait_fscache(folio);
-#endif
- goto redo_folio;
- }
-
- if (!folio_clear_dirty_for_io(folio))
- /* We hold the page lock - it should've been dirty. */
- WARN_ON(1);
+ struct cifs_writepages_context *ctx = data;
+ unsigned long long i_size = i_size_read(folio->mapping->host);
+ unsigned long long pos = folio_pos(folio);
+ size_t size = folio_size(folio);
+ int ret;
- ret = cifs_write_back_from_locked_folio(mapping, wbc, folio, start, end);
- if (ret < 0)
- goto write_error;
+ if (pos < i_size && size > i_size - pos)
+ size = i_size - pos;
- start += ret;
- continue;
-
-write_error:
- folio_batch_release(&fbatch);
- *_next = start;
+ if (ctx->begun) {
+ if (pos == ctx->start + ctx->len &&
+ ctx->len + size <= ctx->wsize)
+ goto add;
+ ret = cifs_writepages_submit(folio->mapping, wbc, ctx);
+ if (ret < 0) {
+ ctx->begun = false;
return ret;
+ }
+ }
-skip_write:
- /*
- * Too many skipped writes, or need to reschedule?
- * Treat it as a write error without an error code.
- */
- if (skips >= 5 || need_resched()) {
- ret = 0;
- goto write_error;
- }
+ ret = cifs_writeback_begin(folio->mapping, wbc, pos, ctx);
+ if (ret < 0)
+ return ret;
- /* Otherwise, just skip that folio and go on to the next */
- skips++;
- start += folio_size(folio);
- continue;
- }
+ ctx->start = folio_pos(folio);
+ ctx->len = 0;
+add:
+ ctx->len += folio_size(folio);
- folio_batch_release(&fbatch);
- cond_resched();
- } while (wbc->nr_to_write > 0);
+ folio_wait_fscache(folio);
+ folio_start_writeback(folio);
+ folio_unlock(folio);
- *_next = start;
+ if (ctx->len >= ctx->wsize) {
+ ret = cifs_writepages_submit(folio->mapping, wbc, ctx);
+ if (ret < 0)
+ return ret;
+ ctx->begun = false;
+ }
return 0;
}
@@ -2963,7 +2808,7 @@ static int cifs_writepages_region(struct address_space *mapping,
static int cifs_writepages(struct address_space *mapping,
struct writeback_control *wbc)
{
- loff_t start, next;
+ struct cifs_writepages_context ctx = {};
int ret;
/* We have to be careful as we can end up racing with setattr()
@@ -2971,26 +2816,11 @@ static int cifs_writepages(struct address_space *mapping,
* to prevent it.
*/
- if (wbc->range_cyclic) {
- start = mapping->writeback_index * PAGE_SIZE;
- ret = cifs_writepages_region(mapping, wbc, start, LLONG_MAX, &next);
- if (ret == 0) {
- mapping->writeback_index = next / PAGE_SIZE;
- if (start > 0 && wbc->nr_to_write > 0) {
- ret = cifs_writepages_region(mapping, wbc, 0,
- start, &next);
- if (ret == 0)
- mapping->writeback_index =
- next / PAGE_SIZE;
- }
- }
- } else if (wbc->range_start == 0 && wbc->range_end == LLONG_MAX) {
- ret = cifs_writepages_region(mapping, wbc, 0, LLONG_MAX, &next);
- if (wbc->nr_to_write > 0 && ret == 0)
- mapping->writeback_index = next / PAGE_SIZE;
- } else {
- ret = cifs_writepages_region(mapping, wbc,
- wbc->range_start, wbc->range_end, &next);
+ ret = write_cache_pages(mapping, wbc, cifs_writepages_add_folio, &ctx);
+ if (ret >= 0 && ctx.begun) {
+ ret = cifs_writepages_submit(mapping, wbc, &ctx);
+ if (ret < 0)
+ return ret;
}
return ret;
* cifs test patch to make cifs use its own version of write_cache_pages()
From: David Howells @ 2023-03-02 23:41 UTC
To: Linus Torvalds, Steve French
Cc: dhowells, Vishal Moola, Shyam Prasad N, Rohith Surabattula,
Tom Talpey, Stefan Metzmacher, Paulo Alcantara, Jeff Layton,
Matthew Wilcox, Marc Dionne, linux-afs, linux-cifs,
linux-fsdevel, linux-kernel
David Howells <dhowells@redhat.com> wrote:
> And then CIFS. ...
>
> Base + Own write_cache_pages():
> WRITE: bw=451MiB/s (473MB/s), 113MiB/s-113MiB/s (118MB/s-118MB/s)
> WRITE: bw=455MiB/s (478MB/s), 114MiB/s-114MiB/s (119MB/s-120MB/s)
> WRITE: bw=453MiB/s (475MB/s), 113MiB/s-113MiB/s (119MB/s-119MB/s)
> WRITE: bw=459MiB/s (481MB/s), 115MiB/s-115MiB/s (120MB/s-120MB/s)
Here's my patch to give cifs its own copy of write_cache_pages() so that the
indirect call through the writepage_t function pointer can be eliminated, in
case Spectre mitigations (e.g. retpolines) are slowing that call.
This goes on top of "cifs test patch to convert to using write_cache_pages()".
David
---
fs/cifs/file.c | 137 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++-
1 file changed, 136 insertions(+), 1 deletion(-)
diff --git a/fs/cifs/file.c b/fs/cifs/file.c
index 04e2466609d9..c33c7db729c7 100644
--- a/fs/cifs/file.c
+++ b/fs/cifs/file.c
@@ -2802,6 +2802,141 @@ static int cifs_writepages_add_folio(struct folio *folio,
return 0;
}
+static int cifs_write_cache_pages(struct address_space *mapping,
+ struct writeback_control *wbc,
+ struct cifs_writepages_context *ctx)
+{
+ int ret = 0;
+ int done = 0;
+ int error;
+ struct folio_batch fbatch;
+ int nr_folios;
+ pgoff_t index;
+ pgoff_t end; /* Inclusive */
+ pgoff_t done_index;
+ int range_whole = 0;
+ xa_mark_t tag;
+
+ folio_batch_init(&fbatch);
+ if (wbc->range_cyclic) {
+ index = mapping->writeback_index; /* prev offset */
+ end = -1;
+ } else {
+ index = wbc->range_start >> PAGE_SHIFT;
+ end = wbc->range_end >> PAGE_SHIFT;
+ if (wbc->range_start == 0 && wbc->range_end == LLONG_MAX)
+ range_whole = 1;
+ }
+ if (wbc->sync_mode == WB_SYNC_ALL || wbc->tagged_writepages) {
+ tag_pages_for_writeback(mapping, index, end);
+ tag = PAGECACHE_TAG_TOWRITE;
+ } else {
+ tag = PAGECACHE_TAG_DIRTY;
+ }
+ done_index = index;
+ while (!done && (index <= end)) {
+ int i;
+
+ nr_folios = filemap_get_folios_tag(mapping, &index, end,
+ tag, &fbatch);
+
+ if (nr_folios == 0)
+ break;
+
+ for (i = 0; i < nr_folios; i++) {
+ struct folio *folio = fbatch.folios[i];
+
+ done_index = folio->index;
+
+ folio_lock(folio);
+
+ /*
+ * Page truncated or invalidated. We can freely skip it
+ * then, even for data integrity operations: the page
+ * has disappeared concurrently, so there could be no
+ * real expectation of this data integrity operation
+ * even if there is now a new, dirty page at the same
+ * pagecache address.
+ */
+ if (unlikely(folio->mapping != mapping)) {
+continue_unlock:
+ folio_unlock(folio);
+ continue;
+ }
+
+ if (!folio_test_dirty(folio)) {
+ /* someone wrote it for us */
+ goto continue_unlock;
+ }
+
+ if (folio_test_writeback(folio)) {
+ if (wbc->sync_mode != WB_SYNC_NONE)
+ folio_wait_writeback(folio);
+ else
+ goto continue_unlock;
+ }
+
+ BUG_ON(folio_test_writeback(folio));
+ if (!folio_clear_dirty_for_io(folio))
+ goto continue_unlock;
+
+ error = cifs_writepages_add_folio(folio, wbc, ctx);
+ if (unlikely(error)) {
+ /*
+ * Handle errors according to the type of
+ * writeback. There's no need to continue for
+ * background writeback. Just push done_index
+ * past this page so media errors won't choke
+ * writeout for the entire file. For integrity
+ * writeback, we must process the entire dirty
+ * set regardless of errors because the fs may
+ * still have state to clear for each page. In
+ * that case we continue processing and return
+ * the first error.
+ */
+ if (error == AOP_WRITEPAGE_ACTIVATE) {
+ folio_unlock(folio);
+ error = 0;
+ } else if (wbc->sync_mode != WB_SYNC_ALL) {
+ ret = error;
+ done_index = folio->index +
+ folio_nr_pages(folio);
+ done = 1;
+ break;
+ }
+ if (!ret)
+ ret = error;
+ }
+
+ /*
+ * We stop writing back only if we are not doing
+ * integrity sync. In case of integrity sync we have to
+ * keep going until we have written all the pages
+ * we tagged for writeback prior to entering this loop.
+ */
+ if (--wbc->nr_to_write <= 0 &&
+ wbc->sync_mode == WB_SYNC_NONE) {
+ done = 1;
+ break;
+ }
+ }
+ folio_batch_release(&fbatch);
+ cond_resched();
+ }
+
+ /*
+ * If we hit the last page and there is more work to be done: wrap
+ * back the index back to the start of the file for the next
+ * time we are called.
+ */
+ if (wbc->range_cyclic && !done)
+ done_index = 0;
+ if (wbc->range_cyclic || (range_whole && wbc->nr_to_write > 0))
+ mapping->writeback_index = done_index;
+
+ return ret;
+}
+
/*
* Write some of the pending data back to the server
*/
@@ -2816,7 +2951,7 @@ static int cifs_writepages(struct address_space *mapping,
* to prevent it.
*/
- ret = write_cache_pages(mapping, wbc, cifs_writepages_add_folio, &ctx);
+ ret = cifs_write_cache_pages(mapping, wbc, &ctx);
if (ret >= 0 && ctx.begun) {
ret = cifs_writepages_submit(mapping, wbc, &ctx);
if (ret < 0)