From: David Howells <dhowells@redhat.com>
To: Trond Myklebust <trondmy@hammerspace.com>,
    Anna Schumaker <anna.schumaker@netapp.com>,
    Steve French <sfrench@samba.org>,
    Dominique Martinet <asmadeus@codewreck.org>
Cc: dhowells@redhat.com, Jeff Layton <jlayton@redhat.com>,
    David Wysochanski <dwysocha@redhat.com>,
    Matthew Wilcox <willy@infradead.org>,
    Alexander Viro <viro@zeniv.linux.org.uk>,
    linux-cachefs@redhat.com, linux-afs@lists.infradead.org,
    linux-nfs@vger.kernel.org, linux-cifs@vger.kernel.org,
    ceph-devel@vger.kernel.org, v9fs-developer@lists.sourceforge.net,
    linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 21/32] afs: Extract writeback extension into its own function
Date: Mon, 25 Jan 2021 21:35:05 +0000
Message-ID: <161161050546.2537118.2202554806419189453.stgit@warthog.procyon.org.uk>
In-Reply-To: <161161025063.2537118.2009249444682241405.stgit@warthog.procyon.org.uk>

Extract writeback extension into its own function to break up the
writeback function a bit.

Signed-off-by: David Howells <dhowells@redhat.com>
---
 fs/afs/write.c | 109 ++++++++++++++++++++++++++++++++++----------------------
 1 file changed, 67 insertions(+), 42 deletions(-)

diff --git a/fs/afs/write.c b/fs/afs/write.c
index e1791de90478..89c804bfe253 100644
--- a/fs/afs/write.c
+++ b/fs/afs/write.c
@@ -490,47 +490,25 @@ static int afs_store_data(struct afs_vnode *vnode, struct iov_iter *iter,
 }
 
 /*
- * Synchronously write back the locked page and any subsequent non-locked dirty
- * pages.
+ * Extend the region to be written back to include subsequent contiguously
+ * dirty pages if possible, but don't sleep while doing so.
+ *
+ * If this page holds new content, then we can include filler zeros in the
+ * writeback.
  */
-static int afs_write_back_from_locked_page(struct address_space *mapping,
-                                           struct writeback_control *wbc,
-                                           struct page *primary_page,
-                                           pgoff_t final_page)
+static void afs_extend_writeback(struct address_space *mapping,
+                                 struct afs_vnode *vnode,
+                                 long *_count,
+                                 pgoff_t start,
+                                 pgoff_t final_page,
+                                 unsigned *_offset,
+                                 unsigned *_to,
+                                 bool new_content)
 {
-        struct afs_vnode *vnode = AFS_FS_I(mapping->host);
-        struct iov_iter iter;
         struct page *pages[8], *page;
-        unsigned long count, priv;
-        unsigned n, offset, to, f, t;
-        pgoff_t start, first, last;
-        loff_t i_size, pos, end;
-        int loop, ret;
-
-        _enter(",%lx", primary_page->index);
-
-        count = 1;
-        if (test_set_page_writeback(primary_page))
-                BUG();
-
-        /* Find all consecutive lockable dirty pages that have contiguous
-         * written regions, stopping when we find a page that is not
-         * immediately lockable, is not dirty or is missing, or we reach the
-         * end of the range.
-         */
-        start = primary_page->index;
-        priv = page_private(primary_page);
-        offset = afs_page_dirty_from(primary_page, priv);
-        to = afs_page_dirty_to(primary_page, priv);
-        trace_afs_page_dirty(vnode, tracepoint_string("store"), primary_page);
-
-        WARN_ON(offset == to);
-        if (offset == to)
-                trace_afs_page_dirty(vnode, tracepoint_string("WARN"), primary_page);
-
-        if (start >= final_page ||
-            (to < PAGE_SIZE && !test_bit(AFS_VNODE_NEW_CONTENT, &vnode->flags)))
-                goto no_more;
+        unsigned long count = *_count, priv;
+        unsigned offset = *_offset, to = *_to, n, f, t;
+        int loop;
 
         start++;
         do {
@@ -551,8 +529,7 @@ static int afs_write_back_from_locked_page(struct address_space *mapping,
 
                 for (loop = 0; loop < n; loop++) {
                         page = pages[loop];
-                        if (to != PAGE_SIZE &&
-                            !test_bit(AFS_VNODE_NEW_CONTENT, &vnode->flags))
+                        if (to != PAGE_SIZE && !new_content)
                                 break;
                         if (page->index > final_page)
                                 break;
@@ -566,8 +543,7 @@ static int afs_write_back_from_locked_page(struct address_space *mapping,
                         priv = page_private(page);
                         f = afs_page_dirty_from(page, priv);
                         t = afs_page_dirty_to(page, priv);
-                        if (f != 0 &&
-                            !test_bit(AFS_VNODE_NEW_CONTENT, &vnode->flags)) {
+                        if (f != 0 && !new_content) {
                                 unlock_page(page);
                                 break;
                         }
@@ -593,6 +569,55 @@ static int afs_write_back_from_locked_page(struct address_space *mapping,
         } while (start <= final_page && count < 65536);
 
 no_more:
+        *_count = count;
+        *_offset = offset;
+        *_to = to;
+}
+
+/*
+ * Synchronously write back the locked page and any subsequent non-locked dirty
+ * pages.
+ */
+static int afs_write_back_from_locked_page(struct address_space *mapping,
+                                           struct writeback_control *wbc,
+                                           struct page *primary_page,
+                                           pgoff_t final_page)
+{
+        struct afs_vnode *vnode = AFS_FS_I(mapping->host);
+        struct iov_iter iter;
+        unsigned long count, priv;
+        unsigned offset, to;
+        pgoff_t start, first, last;
+        loff_t i_size, pos, end;
+        bool new_content = test_bit(AFS_VNODE_NEW_CONTENT, &vnode->flags);
+        int ret;
+
+        _enter(",%lx", primary_page->index);
+
+        count = 1;
+        if (test_set_page_writeback(primary_page))
+                BUG();
+
+        /* Find all consecutive lockable dirty pages that have contiguous
+         * written regions, stopping when we find a page that is not
+         * immediately lockable, is not dirty or is missing, or we reach the
+         * end of the range.
+         */
+        start = primary_page->index;
+        priv = page_private(primary_page);
+        offset = afs_page_dirty_from(primary_page, priv);
+        to = afs_page_dirty_to(primary_page, priv);
+        trace_afs_page_dirty(vnode, tracepoint_string("store"), primary_page);
+
+        WARN_ON(offset == to);
+        if (offset == to)
+                trace_afs_page_dirty(vnode, tracepoint_string("WARN"), primary_page);
+
+        if (start < final_page &&
+            (to == PAGE_SIZE || new_content))
+                afs_extend_writeback(mapping, vnode, &count, start, final_page,
+                                     &offset, &to, new_content);
+
         /* We now have a contiguous set of dirty pages, each with writeback
          * set; the first page is still locked at this point, but all the rest
          * have been unlocked.
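As a rough illustration of the shape of this refactoring (not taken from the
patch itself): the extracted helper follows a common pattern in which the
caller checks the preconditions and then hands its running state to the helper
through pointer ("in/out") parameters, which the helper updates before
returning, the same way _count, _offset and _to are handled above.  A minimal,
self-contained userspace sketch of that pattern is below; the names
extend_range(), dirty[] and the toy page array are hypothetical stand-ins for
illustration only, not AFS code.

#include <stdbool.h>
#include <stdio.h>

/*
 * Hypothetical stand-in for afs_extend_writeback(): walk forward over
 * contiguously "dirty" slots, extending the end of the range, but never
 * past 'final'.  The caller's running state is passed in by pointer and
 * written back before returning, mirroring _count/_offset/_to in the patch.
 */
static void extend_range(const bool *dirty, unsigned long final,
                         unsigned long *_count, unsigned long *_to)
{
        unsigned long count = *_count, to = *_to;

        while (to < final && dirty[to]) {
                to++;
                count++;
        }

        *_count = count;
        *_to = to;
}

int main(void)
{
        /* Toy stand-in for a run of pages: true marks a dirty page. */
        bool dirty[] = { true, true, true, false, true };
        unsigned long final = sizeof(dirty) / sizeof(dirty[0]);
        unsigned long start = 0, to = 1, count = 1;

        /* Like the caller in the patch, only try to extend the region if
         * there is anything beyond the primary page to look at.
         */
        if (start + 1 < final)
                extend_range(dirty, final, &count, &to);

        printf("write back slots [%lu..%lu), count=%lu\n", start, to, count);
        return 0;
}

The same division of labour in the patch keeps afs_write_back_from_locked_page()
focused on setting up and issuing the write, while afs_extend_writeback() only
grows the range and, per its comment, does not sleep while doing so.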