linux-cifs.vger.kernel.org archive mirror
* [PATCH 0/4] cifs: Use fscache I/O again after the rewrite disabled it
@ 2022-01-27 16:01 David Howells
  2022-01-27 16:02 ` [PATCH 1/4] Fix a warning about a malformed kernel doc comment in cifs by removing the David Howells
                   ` (5 more replies)
  0 siblings, 6 replies; 9+ messages in thread
From: David Howells @ 2022-01-27 16:01 UTC (permalink / raw)
  To: Steve French
  Cc: Shyam Prasad N, linux-cifs, linux-cachefs, Jeff Layton,
	Matthew Wilcox, dhowells, Shyam Prasad N, Jeff Layton,
	Matthew Wilcox, Linus Torvalds, linux-cifs, linux-cachefs,
	linux-fsdevel, linux-kernel


Hi Steve,

Here are some patches to make cifs actually do I/O to the cache after it
got disabled in the fscache rewrite[1], plus a warning fix that you might
want to detach and take separately:

 (1) Fix a kernel doc warning.

 (2) Change cifs from using ->readpages() to using ->readahead().

 (3) Provide a netfs cache op to query for the presence of data in the
     cache.[*]

 (4) Make ->readahead() do I/O to the cache by accessing it directly.

The patches can be found here also:

	https://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs.git/log/?h=fscache-rewrite

David

[*] Ideally, we would use the netfslib read helpers, but it's probably better
    to roll iterators down into cifs's I/O layer before doing that[2].

Link: https://lore.kernel.org/r/164021479106.640689.17404516570194656552.stgit@warthog.procyon.org.uk/ [1]
Link: https://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs.git/log/?h=cifs-experimental [2]

---
David Howells (4):
      Fix a warning about a malformed kernel doc comment in cifs by removing the
      cifs: Transition from ->readpages() to ->readahead()
      netfs, cachefiles: Add a method to query presence of data in the cache
      cifs: Implement cache I/O by accessing the cache directly


 Documentation/filesystems/netfs_library.rst |  16 ++
 fs/cachefiles/io.c                          |  59 ++++++
 fs/cifs/connect.c                           |   2 +-
 fs/cifs/file.c                              | 221 ++++++++------------
 fs/cifs/fscache.c                           | 126 +++++++++--
 fs/cifs/fscache.h                           |  79 ++++---
 include/linux/netfs.h                       |   7 +
 7 files changed, 322 insertions(+), 188 deletions(-)



^ permalink raw reply	[flat|nested] 9+ messages in thread

* [PATCH 1/4] Fix a warning about a malformed kernel doc comment in cifs by removing the
  2022-01-27 16:01 [PATCH 0/4] cifs: Use fscache I/O again after the rewrite disabled it David Howells
@ 2022-01-27 16:02 ` David Howells
  2022-01-27 16:02 ` [PATCH 2/4] cifs: Transition from ->readpages() to ->readahead() David Howells
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 9+ messages in thread
From: David Howells @ 2022-01-27 16:02 UTC (permalink / raw)
  To: Steve French
  Cc: dhowells, Shyam Prasad N, Jeff Layton, Matthew Wilcox,
	Linus Torvalds, linux-cifs, linux-cachefs, linux-fsdevel,
	linux-kernel

marker.

Signed-off-by: David Howells <dhowells@redhat.com>
---

 fs/cifs/connect.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
index 11a22a30ee14..ed210d774a21 100644
--- a/fs/cifs/connect.c
+++ b/fs/cifs/connect.c
@@ -162,7 +162,7 @@ static void cifs_resolve_server(struct work_struct *work)
 	mutex_unlock(&server->srv_mutex);
 }
 
-/**
+/*
  * Mark all sessions and tcons for reconnect.
  *
  * @server needs to be previously set to CifsNeedReconnect.




* [PATCH 2/4] cifs: Transition from ->readpages() to ->readahead()
  2022-01-27 16:01 [PATCH 0/4] cifs: Use fscache I/O again after the rewrite disabled it David Howells
  2022-01-27 16:02 ` [PATCH 1/4] Fix a warning about a malformed kernel doc comment in cifs by removing the David Howells
@ 2022-01-27 16:02 ` David Howells
  2022-01-27 16:02 ` [PATCH 3/4] netfs, cachefiles: Add a method to query presence of data in the cache David Howells
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 9+ messages in thread
From: David Howells @ 2022-01-27 16:02 UTC (permalink / raw)
  To: Steve French
  Cc: Shyam Prasad N, Matthew Wilcox, Jeff Layton, linux-cifs,
	linux-cachefs, dhowells, Shyam Prasad N, Jeff Layton,
	Matthew Wilcox, Linus Torvalds, linux-cifs, linux-cachefs,
	linux-fsdevel, linux-kernel

Transition the cifs filesystem from using the old ->readpages() method to
using the new ->readahead() method.

For the moment, this removes any invocation of fscache to read data from
the local cache, leaving that to another patch.

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Steve French <smfrench@gmail.com>
cc: Shyam Prasad N <nspmangalore@gmail.com>
cc: Matthew Wilcox <willy@infradead.org>
cc: Jeff Layton <jlayton@kernel.org>
cc: linux-cifs@vger.kernel.org
cc: linux-cachefs@redhat.com
---

 fs/cifs/file.c |  172 +++++++++++---------------------------------------------
 1 file changed, 35 insertions(+), 137 deletions(-)

diff --git a/fs/cifs/file.c b/fs/cifs/file.c
index 59334be9ed3b..be62dc29dc54 100644
--- a/fs/cifs/file.c
+++ b/fs/cifs/file.c
@@ -4269,8 +4269,6 @@ cifs_readv_complete(struct work_struct *work)
 	for (i = 0; i < rdata->nr_pages; i++) {
 		struct page *page = rdata->pages[i];
 
-		lru_cache_add(page);
-
 		if (rdata->result == 0 ||
 		    (rdata->result == -EAGAIN && got_bytes)) {
 			flush_dcache_page(page);
@@ -4340,7 +4338,6 @@ readpages_fill_pages(struct TCP_Server_Info *server,
 			 * fill them until the writes are flushed.
 			 */
 			zero_user(page, 0, PAGE_SIZE);
-			lru_cache_add(page);
 			flush_dcache_page(page);
 			SetPageUptodate(page);
 			unlock_page(page);
@@ -4350,7 +4347,6 @@ readpages_fill_pages(struct TCP_Server_Info *server,
 			continue;
 		} else {
 			/* no need to hold page hostage */
-			lru_cache_add(page);
 			unlock_page(page);
 			put_page(page);
 			rdata->pages[i] = NULL;
@@ -4393,92 +4389,16 @@ cifs_readpages_copy_into_pages(struct TCP_Server_Info *server,
 	return readpages_fill_pages(server, rdata, iter, iter->count);
 }
 
-static int
-readpages_get_pages(struct address_space *mapping, struct list_head *page_list,
-		    unsigned int rsize, struct list_head *tmplist,
-		    unsigned int *nr_pages, loff_t *offset, unsigned int *bytes)
+static void cifs_readahead(struct readahead_control *ractl)
 {
-	struct page *page, *tpage;
-	unsigned int expected_index;
 	int rc;
-	gfp_t gfp = readahead_gfp_mask(mapping);
-
-	INIT_LIST_HEAD(tmplist);
-
-	page = lru_to_page(page_list);
-
-	/*
-	 * Lock the page and put it in the cache. Since no one else
-	 * should have access to this page, we're safe to simply set
-	 * PG_locked without checking it first.
-	 */
-	__SetPageLocked(page);
-	rc = add_to_page_cache_locked(page, mapping,
-				      page->index, gfp);
-
-	/* give up if we can't stick it in the cache */
-	if (rc) {
-		__ClearPageLocked(page);
-		return rc;
-	}
-
-	/* move first page to the tmplist */
-	*offset = (loff_t)page->index << PAGE_SHIFT;
-	*bytes = PAGE_SIZE;
-	*nr_pages = 1;
-	list_move_tail(&page->lru, tmplist);
-
-	/* now try and add more pages onto the request */
-	expected_index = page->index + 1;
-	list_for_each_entry_safe_reverse(page, tpage, page_list, lru) {
-		/* discontinuity ? */
-		if (page->index != expected_index)
-			break;
-
-		/* would this page push the read over the rsize? */
-		if (*bytes + PAGE_SIZE > rsize)
-			break;
-
-		__SetPageLocked(page);
-		rc = add_to_page_cache_locked(page, mapping, page->index, gfp);
-		if (rc) {
-			__ClearPageLocked(page);
-			break;
-		}
-		list_move_tail(&page->lru, tmplist);
-		(*bytes) += PAGE_SIZE;
-		expected_index++;
-		(*nr_pages)++;
-	}
-	return rc;
-}
-
-static int cifs_readpages(struct file *file, struct address_space *mapping,
-	struct list_head *page_list, unsigned num_pages)
-{
-	int rc;
-	int err = 0;
-	struct list_head tmplist;
-	struct cifsFileInfo *open_file = file->private_data;
-	struct cifs_sb_info *cifs_sb = CIFS_FILE_SB(file);
+	struct cifsFileInfo *open_file = ractl->file->private_data;
+	struct cifs_sb_info *cifs_sb = CIFS_FILE_SB(ractl->file);
 	struct TCP_Server_Info *server;
 	pid_t pid;
-	unsigned int xid;
+	unsigned int xid, last_batch_size = 0;
 
 	xid = get_xid();
-	/*
-	 * Reads as many pages as possible from fscache. Returns -ENOBUFS
-	 * immediately if the cookie is negative
-	 *
-	 * After this point, every page in the list might have PG_fscache set,
-	 * so we will need to clean that up off of every page we don't use.
-	 */
-	rc = cifs_readpages_from_fscache(mapping->host, mapping, page_list,
-					 &num_pages);
-	if (rc == 0) {
-		free_xid(xid);
-		return rc;
-	}
 
 	if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_RWPIDFORWARD)
 		pid = open_file->pid;
@@ -4489,39 +4409,32 @@ static int cifs_readpages(struct file *file, struct address_space *mapping,
 	server = cifs_pick_channel(tlink_tcon(open_file->tlink)->ses);
 
 	cifs_dbg(FYI, "%s: file=%p mapping=%p num_pages=%u\n",
-		 __func__, file, mapping, num_pages);
+		 __func__, ractl->file, ractl->mapping, readahead_count(ractl));
 
 	/*
-	 * Start with the page at end of list and move it to private
-	 * list. Do the same with any following pages until we hit
-	 * the rsize limit, hit an index discontinuity, or run out of
-	 * pages. Issue the async read and then start the loop again
-	 * until the list is empty.
-	 *
-	 * Note that list order is important. The page_list is in
-	 * the order of declining indexes. When we put the pages in
-	 * the rdata->pages, then we want them in increasing order.
+	 * Chop the readahead request up into rsize-sized read requests.
 	 */
-	while (!list_empty(page_list) && !err) {
-		unsigned int i, nr_pages, bytes, rsize;
-		loff_t offset;
-		struct page *page, *tpage;
+	while (readahead_count(ractl) - last_batch_size) {
+		unsigned int i, nr_pages, got, rsize;
+		struct page *page;
 		struct cifs_readdata *rdata;
 		struct cifs_credits credits_on_stack;
 		struct cifs_credits *credits = &credits_on_stack;
 
 		if (open_file->invalidHandle) {
 			rc = cifs_reopen_file(open_file, true);
-			if (rc == -EAGAIN)
-				continue;
-			else if (rc)
+			if (rc) {
+				if (rc == -EAGAIN)
+					continue;
 				break;
+			}
 		}
 
 		rc = server->ops->wait_mtu_credits(server, cifs_sb->ctx->rsize,
 						   &rsize, credits);
 		if (rc)
 			break;
+		nr_pages = min_t(size_t, rsize / PAGE_SIZE, readahead_count(ractl));
 
 		/*
 		 * Give up immediately if rsize is too small to read an entire
@@ -4529,16 +4442,7 @@ static int cifs_readpages(struct file *file, struct address_space *mapping,
 		 * reach this point however since we set ra_pages to 0 when the
 		 * rsize is smaller than a cache page.
 		 */
-		if (unlikely(rsize < PAGE_SIZE)) {
-			add_credits_and_wake_if(server, credits, 0);
-			free_xid(xid);
-			return 0;
-		}
-
-		nr_pages = 0;
-		err = readpages_get_pages(mapping, page_list, rsize, &tmplist,
-					 &nr_pages, &offset, &bytes);
-		if (!nr_pages) {
+		if (unlikely(!nr_pages)) {
 			add_credits_and_wake_if(server, credits, 0);
 			break;
 		}
@@ -4546,36 +4450,31 @@ static int cifs_readpages(struct file *file, struct address_space *mapping,
 		rdata = cifs_readdata_alloc(nr_pages, cifs_readv_complete);
 		if (!rdata) {
 			/* best to give up if we're out of mem */
-			list_for_each_entry_safe(page, tpage, &tmplist, lru) {
-				list_del(&page->lru);
-				lru_cache_add(page);
-				unlock_page(page);
-				put_page(page);
-			}
-			rc = -ENOMEM;
 			add_credits_and_wake_if(server, credits, 0);
 			break;
 		}
 
-		rdata->cfile = cifsFileInfo_get(open_file);
-		rdata->server = server;
-		rdata->mapping = mapping;
-		rdata->offset = offset;
-		rdata->bytes = bytes;
-		rdata->pid = pid;
-		rdata->pagesz = PAGE_SIZE;
-		rdata->tailsz = PAGE_SIZE;
+		got = __readahead_batch(ractl, rdata->pages, nr_pages);
+		if (got != nr_pages) {
+			pr_warn("__readahead_batch() returned %u/%u\n",
+				got, nr_pages);
+			nr_pages = got;
+		}
+
+		rdata->nr_pages = nr_pages;
+		rdata->bytes	= readahead_batch_length(ractl);
+		rdata->cfile	= cifsFileInfo_get(open_file);
+		rdata->server	= server;
+		rdata->mapping	= ractl->mapping;
+		rdata->offset	= readahead_pos(ractl);
+		rdata->pid	= pid;
+		rdata->pagesz	= PAGE_SIZE;
+		rdata->tailsz	= PAGE_SIZE;
 		rdata->read_into_pages = cifs_readpages_read_into_pages;
 		rdata->copy_into_pages = cifs_readpages_copy_into_pages;
-		rdata->credits = credits_on_stack;
-
-		list_for_each_entry_safe(page, tpage, &tmplist, lru) {
-			list_del(&page->lru);
-			rdata->pages[rdata->nr_pages++] = page;
-		}
+		rdata->credits	= credits_on_stack;
 
 		rc = adjust_credits(server, &rdata->credits, rdata->bytes);
-
 		if (!rc) {
 			if (rdata->cfile->invalidHandle)
 				rc = -EAGAIN;
@@ -4587,7 +4486,6 @@ static int cifs_readpages(struct file *file, struct address_space *mapping,
 			add_credits_and_wake_if(server, &rdata->credits, 0);
 			for (i = 0; i < rdata->nr_pages; i++) {
 				page = rdata->pages[i];
-				lru_cache_add(page);
 				unlock_page(page);
 				put_page(page);
 			}
@@ -4597,10 +4495,10 @@ static int cifs_readpages(struct file *file, struct address_space *mapping,
 		}
 
 		kref_put(&rdata->refcount, cifs_readdata_release);
+		last_batch_size = nr_pages;
 	}
 
 	free_xid(xid);
-	return rc;
 }
 
 /*
@@ -4924,7 +4822,7 @@ void cifs_oplock_break(struct work_struct *work)
  * In the non-cached mode (mount with cache=none), we shunt off direct read and write requests
  * so this method should never be called.
  *
- * Direct IO is not yet supported in the cached mode. 
+ * Direct IO is not yet supported in the cached mode.
  */
 static ssize_t
 cifs_direct_io(struct kiocb *iocb, struct iov_iter *iter)
@@ -5006,7 +4904,7 @@ static int cifs_set_page_dirty(struct page *page)
 
 const struct address_space_operations cifs_addr_ops = {
 	.readpage = cifs_readpage,
-	.readpages = cifs_readpages,
+	.readahead = cifs_readahead,
 	.writepage = cifs_writepage,
 	.writepages = cifs_writepages,
 	.write_begin = cifs_write_begin,




* [PATCH 3/4] netfs, cachefiles: Add a method to query presence of data in the cache
  2022-01-27 16:01 [PATCH 0/4] cifs: Use fscache I/O again after the rewrite disabled it David Howells
  2022-01-27 16:02 ` [PATCH 1/4] Fix a warning about a malformed kernel doc comment in cifs by removing the David Howells
  2022-01-27 16:02 ` [PATCH 2/4] cifs: Transition from ->readpages() to ->readahead() David Howells
@ 2022-01-27 16:02 ` David Howells
  2022-01-27 16:02 ` [PATCH 4/4] cifs: Implement cache I/O by accessing the cache directly David Howells
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 9+ messages in thread
From: David Howells @ 2022-01-27 16:02 UTC (permalink / raw)
  To: Steve French
  Cc: linux-cachefs, dhowells, Shyam Prasad N, Jeff Layton,
	Matthew Wilcox, Linus Torvalds, linux-cifs, linux-cachefs,
	linux-fsdevel, linux-kernel

Add a netfs_cache_ops method by which a network filesystem can ask the
cache about what data it has available, and where, so that it can make a
multipage read more efficient.

Signed-off-by: David Howells <dhowells@redhat.com>
cc: linux-cachefs@redhat.com
---

 Documentation/filesystems/netfs_library.rst |   16 +++++++
 fs/cachefiles/io.c                          |   59 +++++++++++++++++++++++++++
 include/linux/netfs.h                       |    7 +++
 3 files changed, 82 insertions(+)

diff --git a/Documentation/filesystems/netfs_library.rst b/Documentation/filesystems/netfs_library.rst
index 136f8da3d0e2..4f373a8ec47b 100644
--- a/Documentation/filesystems/netfs_library.rst
+++ b/Documentation/filesystems/netfs_library.rst
@@ -462,6 +462,10 @@ operation table looks like the following::
 			     struct iov_iter *iter,
 			     netfs_io_terminated_t term_func,
 			     void *term_func_priv);
+
+		int (*query_occupancy)(struct netfs_cache_resources *cres,
+				       loff_t start, size_t len, size_t granularity,
+				       loff_t *_data_start, size_t *_data_len);
 	};
 
 With a termination handler function pointer::
@@ -536,6 +540,18 @@ The methods defined in the table are:
    indicating whether the termination is definitely happening in the caller's
    context.
 
+ * ``query_occupancy()``
+
+   [Required] Called to find out where the next piece of data is within a
+   particular region of the cache.  The start and length of the region to be
+   queried are passed in, along with the granularity to which the answer needs
+   to be aligned.  The function passes back the start and length of the data,
+   if any, available within that region.  Note that there may be a hole at the
+   front.
+
+   It returns 0 if some data was found, -ENODATA if there was no usable data
+   within the region or -ENOBUFS if there is no caching on this file.
+
 Note that these methods are passed a pointer to the cache resource structure,
 not the read request structure as they could be used in other situations where
 there isn't a read request structure as well, such as writing dirty data to the
diff --git a/fs/cachefiles/io.c b/fs/cachefiles/io.c
index 04eb52736990..753986ea1583 100644
--- a/fs/cachefiles/io.c
+++ b/fs/cachefiles/io.c
@@ -191,6 +191,64 @@ static int cachefiles_read(struct netfs_cache_resources *cres,
 	return ret;
 }
 
+/*
+ * Query the occupancy of the cache in a region, returning where the next chunk
+ * of data starts and how long it is.
+ */
+static int cachefiles_query_occupancy(struct netfs_cache_resources *cres,
+				      loff_t start, size_t len, size_t granularity,
+				      loff_t *_data_start, size_t *_data_len)
+{
+	struct cachefiles_object *object;
+	struct file *file;
+	loff_t off, off2;
+
+	*_data_start = -1;
+	*_data_len = 0;
+
+	if (!fscache_wait_for_operation(cres, FSCACHE_WANT_READ))
+		return -ENOBUFS;
+
+	object = cachefiles_cres_object(cres);
+	file = cachefiles_cres_file(cres);
+	granularity = max_t(size_t, object->volume->cache->bsize, granularity);
+
+	_enter("%pD,%li,%llx,%zx/%llx",
+	       file, file_inode(file)->i_ino, start, len,
+	       i_size_read(file_inode(file)));
+
+	off = cachefiles_inject_read_error();
+	if (off == 0)
+		off = vfs_llseek(file, start, SEEK_DATA);
+	if (off == -ENXIO)
+		return -ENODATA; /* Beyond EOF */
+	if (off < 0 && off >= (loff_t)-MAX_ERRNO)
+		return -ENOBUFS; /* Error. */
+	if (round_up(off, granularity) >= start + len)
+		return -ENODATA; /* No data in range */
+
+	off2 = cachefiles_inject_read_error();
+	if (off2 == 0)
+		off2 = vfs_llseek(file, off, SEEK_HOLE);
+	if (off2 == -ENXIO)
+		return -ENODATA; /* Beyond EOF */
+	if (off2 < 0 && off2 >= (loff_t)-MAX_ERRNO)
+		return -ENOBUFS; /* Error. */
+
+	/* Round away partial blocks */
+	off = round_up(off, granularity);
+	off2 = round_down(off2, granularity);
+	if (off2 <= off)
+		return -ENODATA;
+
+	*_data_start = off;
+	if (off2 > start + len)
+		*_data_len = len;
+	else
+		*_data_len = off2 - off;
+	return 0;
+}
+
 /*
  * Handle completion of a write to the cache.
  */
@@ -545,6 +603,7 @@ static const struct netfs_cache_ops cachefiles_netfs_cache_ops = {
 	.write			= cachefiles_write,
 	.prepare_read		= cachefiles_prepare_read,
 	.prepare_write		= cachefiles_prepare_write,
+	.query_occupancy	= cachefiles_query_occupancy,
 };
 
 /*
diff --git a/include/linux/netfs.h b/include/linux/netfs.h
index b46c39d98bbd..614f22213e21 100644
--- a/include/linux/netfs.h
+++ b/include/linux/netfs.h
@@ -244,6 +244,13 @@ struct netfs_cache_ops {
 	int (*prepare_write)(struct netfs_cache_resources *cres,
 			     loff_t *_start, size_t *_len, loff_t i_size,
 			     bool no_space_allocated_yet);
+
+	/* Query the occupancy of the cache in a region, returning where the
+	 * next chunk of data starts and how long it is.
+	 */
+	int (*query_occupancy)(struct netfs_cache_resources *cres,
+			       loff_t start, size_t len, size_t granularity,
+			       loff_t *_data_start, size_t *_data_len);
 };
 
 struct readahead_control;




* [PATCH 4/4] cifs: Implement cache I/O by accessing the cache directly
  2022-01-27 16:01 [PATCH 0/4] cifs: Use fscache I/O again after the rewrite disabled it David Howells
                   ` (2 preceding siblings ...)
  2022-01-27 16:02 ` [PATCH 3/4] netfs, cachefiles: Add a method to query presence of data in the cache David Howells
@ 2022-01-27 16:02 ` David Howells
  2022-01-28  0:55   ` Steve French
  2022-01-27 16:09 ` [PATCH 0/4] cifs: Use fscache I/O again after the rewrite disabled it David Howells
  2022-01-28 10:58 ` Jeff Layton
  5 siblings, 1 reply; 9+ messages in thread
From: David Howells @ 2022-01-27 16:02 UTC (permalink / raw)
  To: Steve French
  Cc: Shyam Prasad N, linux-cifs, linux-cachefs, dhowells,
	Shyam Prasad N, Jeff Layton, Matthew Wilcox, Linus Torvalds,
	linux-cifs, linux-cachefs, linux-fsdevel, linux-kernel

Move cifs over to using the fscache DIO API instead of the old upstream I/O
API, as the latter has been removed.  This is a stopgap solution, as the
intention is that at some time in the future, the cache will move to using
larger blocks and won't be able to store individual pages, in order to deal
with the potential for data corruption due to the backing filesystem being
able to insert/remove bridging blocks of zeros into its extent list[1].

cifs then reads and writes cache pages synchronously, one page at a time.

The preferred change would be to use the netfs lib read helpers, but the new
I/O API can be used directly in the meantime.  It's just that, as the cache
now needs to track data for itself, caching blocks may exceed page size...

This code is somewhat borrowed from my "fallback I/O" patchset[2].

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Steve French <smfrench@gmail.com>
cc: Shyam Prasad N <nspmangalore@gmail.com>
cc: linux-cifs@vger.kernel.org
cc: linux-cachefs@redhat.com
Link: https://lore.kernel.org/r/YO17ZNOcq+9PajfQ@mit.edu [1]
Link: https://lore.kernel.org/r/202112100957.2oEDT20W-lkp@intel.com/ [2]
---

 fs/cifs/file.c    |   55 +++++++++++++++++++++--
 fs/cifs/fscache.c |  126 +++++++++++++++++++++++++++++++++++++++++++++--------
 fs/cifs/fscache.h |   79 +++++++++++++++++++++------------
 3 files changed, 207 insertions(+), 53 deletions(-)

diff --git a/fs/cifs/file.c b/fs/cifs/file.c
index be62dc29dc54..1b41b6f2a04b 100644
--- a/fs/cifs/file.c
+++ b/fs/cifs/file.c
@@ -4276,12 +4276,12 @@ cifs_readv_complete(struct work_struct *work)
 		} else
 			SetPageError(page);
 
-		unlock_page(page);
-
 		if (rdata->result == 0 ||
 		    (rdata->result == -EAGAIN && got_bytes))
 			cifs_readpage_to_fscache(rdata->mapping->host, page);
 
+		unlock_page(page);
+
 		got_bytes -= min_t(unsigned int, PAGE_SIZE, got_bytes);
 
 		put_page(page);
@@ -4396,7 +4396,11 @@ static void cifs_readahead(struct readahead_control *ractl)
 	struct cifs_sb_info *cifs_sb = CIFS_FILE_SB(ractl->file);
 	struct TCP_Server_Info *server;
 	pid_t pid;
-	unsigned int xid, last_batch_size = 0;
+	unsigned int xid, nr_pages, last_batch_size = 0, cache_nr_pages = 0;
+	pgoff_t next_cached = ULONG_MAX;
+	bool caching = fscache_cookie_enabled(cifs_inode_cookie(ractl->mapping->host)) &&
+		cifs_inode_cookie(ractl->mapping->host)->cache_priv;
+	bool check_cache = caching;
 
 	xid = get_xid();
 
@@ -4414,12 +4418,52 @@ static void cifs_readahead(struct readahead_control *ractl)
 	/*
 	 * Chop the readahead request up into rsize-sized read requests.
 	 */
-	while (readahead_count(ractl) - last_batch_size) {
-		unsigned int i, nr_pages, got, rsize;
+	while ((nr_pages = readahead_count(ractl) - last_batch_size)) {
+		unsigned int i, got, rsize;
 		struct page *page;
 		struct cifs_readdata *rdata;
 		struct cifs_credits credits_on_stack;
 		struct cifs_credits *credits = &credits_on_stack;
+		pgoff_t index = readahead_index(ractl) + last_batch_size;
+
+		/*
+		 * Find out if we have anything cached in the range of
+		 * interest, and if so, where the next chunk of cached data is.
+		 */
+		if (caching) {
+			if (check_cache) {
+				rc = cifs_fscache_query_occupancy(
+					ractl->mapping->host, index, nr_pages,
+					&next_cached, &cache_nr_pages);
+				if (rc < 0)
+					caching = false;
+				check_cache = false;
+			}
+
+			if (index == next_cached) {
+				/*
+				 * TODO: Send a whole batch of pages to be read
+				 * by the cache.
+				 */
+				page = readahead_page(ractl);
+				BUG_ON(!page);
+				if (cifs_readpage_from_fscache(ractl->mapping->host,
+							       page) < 0) {
+					/*
+					 * TODO: Deal with cache read failure
+					 * here, but for the moment, delegate
+					 * that to readpage.
+					 */
+					caching = false;
+				}
+				unlock_page(page);
+				next_cached++;
+				cache_nr_pages--;
+				if (cache_nr_pages == 0)
+					check_cache = true;
+				continue;
+			}
+		}
 
 		if (open_file->invalidHandle) {
 			rc = cifs_reopen_file(open_file, true);
@@ -4435,6 +4479,7 @@ static void cifs_readahead(struct readahead_control *ractl)
 		if (rc)
 			break;
 		nr_pages = min_t(size_t, rsize / PAGE_SIZE, readahead_count(ractl));
+		nr_pages = min_t(size_t, nr_pages, next_cached - index);
 
 		/*
 		 * Give up immediately if rsize is too small to read an entire
diff --git a/fs/cifs/fscache.c b/fs/cifs/fscache.c
index efaac4d5ff55..f98cfcc0d397 100644
--- a/fs/cifs/fscache.c
+++ b/fs/cifs/fscache.c
@@ -134,37 +134,127 @@ void cifs_fscache_release_inode_cookie(struct inode *inode)
 	}
 }
 
+static inline void fscache_end_operation(struct netfs_cache_resources *cres)
+{
+	const struct netfs_cache_ops *ops = fscache_operation_valid(cres);
+
+	if (ops)
+		ops->end_operation(cres);
+}
+
 /*
- * Retrieve a page from FS-Cache
+ * Fallback page reading interface.
  */
-int __cifs_readpage_from_fscache(struct inode *inode, struct page *page)
+static int fscache_fallback_read_page(struct inode *inode, struct page *page)
 {
-	cifs_dbg(FYI, "%s: (fsc:%p, p:%p, i:0x%p\n",
-		 __func__, CIFS_I(inode)->fscache, page, inode);
-	return -ENOBUFS; // Needs conversion to using netfslib
+	struct netfs_cache_resources cres;
+	struct fscache_cookie *cookie = cifs_inode_cookie(inode);
+	struct iov_iter iter;
+	struct bio_vec bvec[1];
+	int ret;
+
+	memset(&cres, 0, sizeof(cres));
+	bvec[0].bv_page		= page;
+	bvec[0].bv_offset	= 0;
+	bvec[0].bv_len		= PAGE_SIZE;
+	iov_iter_bvec(&iter, READ, bvec, ARRAY_SIZE(bvec), PAGE_SIZE);
+
+	ret = fscache_begin_read_operation(&cres, cookie);
+	if (ret < 0)
+		return ret;
+
+	ret = fscache_read(&cres, page_offset(page), &iter, NETFS_READ_HOLE_FAIL,
+			   NULL, NULL);
+	fscache_end_operation(&cres);
+	return ret;
 }
 
 /*
- * Retrieve a set of pages from FS-Cache
+ * Fallback page writing interface.
  */
-int __cifs_readpages_from_fscache(struct inode *inode,
-				struct address_space *mapping,
-				struct list_head *pages,
-				unsigned *nr_pages)
+static int fscache_fallback_write_page(struct inode *inode, struct page *page,
+				       bool no_space_allocated_yet)
 {
-	cifs_dbg(FYI, "%s: (0x%p/%u/0x%p)\n",
-		 __func__, CIFS_I(inode)->fscache, *nr_pages, inode);
-	return -ENOBUFS; // Needs conversion to using netfslib
+	struct netfs_cache_resources cres;
+	struct fscache_cookie *cookie = cifs_inode_cookie(inode);
+	struct iov_iter iter;
+	struct bio_vec bvec[1];
+	loff_t start = page_offset(page);
+	size_t len = PAGE_SIZE;
+	int ret;
+
+	memset(&cres, 0, sizeof(cres));
+	bvec[0].bv_page		= page;
+	bvec[0].bv_offset	= 0;
+	bvec[0].bv_len		= PAGE_SIZE;
+	iov_iter_bvec(&iter, WRITE, bvec, ARRAY_SIZE(bvec), PAGE_SIZE);
+
+	ret = fscache_begin_write_operation(&cres, cookie);
+	if (ret < 0)
+		return ret;
+
+	ret = cres.ops->prepare_write(&cres, &start, &len, i_size_read(inode),
+				      no_space_allocated_yet);
+	if (ret == 0)
+		ret = fscache_write(&cres, page_offset(page), &iter, NULL, NULL);
+	fscache_end_operation(&cres);
+	return ret;
 }
 
-void __cifs_readpage_to_fscache(struct inode *inode, struct page *page)
+/*
+ * Retrieve a page from FS-Cache
+ */
+int __cifs_readpage_from_fscache(struct inode *inode, struct page *page)
 {
-	struct cifsInodeInfo *cifsi = CIFS_I(inode);
+	int ret;
 
-	WARN_ON(!cifsi->fscache);
+	cifs_dbg(FYI, "%s: (fsc:%p, p:%p, i:0x%p)\n",
+		 __func__, cifs_inode_cookie(inode), page, inode);
 
+	ret = fscache_fallback_read_page(inode, page);
+	if (ret < 0)
+		return ret;
+
+	/* Read completed synchronously */
+	SetPageUptodate(page);
+	return 0;
+}
+
+void __cifs_readpage_to_fscache(struct inode *inode, struct page *page)
+{
 	cifs_dbg(FYI, "%s: (fsc: %p, p: %p, i: %p)\n",
-		 __func__, cifsi->fscache, page, inode);
+		 __func__, cifs_inode_cookie(inode), page, inode);
+
+	fscache_fallback_write_page(inode, page, true);
+}
+
+/*
+ * Query the cache occupancy.
+ */
+int __cifs_fscache_query_occupancy(struct inode *inode,
+				   pgoff_t first, unsigned nr_pages,
+				   pgoff_t *_data_first,
+				   unsigned int *_data_nr_pages)
+{
+	struct netfs_cache_resources cres;
+	struct fscache_cookie *cookie = cifs_inode_cookie(inode);
+	loff_t start, data_start;
+	size_t len, data_len;
+	int ret;
 
-	// Needs conversion to using netfslib
+	ret = fscache_begin_read_operation(&cres, cookie);
+	if (ret < 0)
+		return ret;
+
+	start = first * PAGE_SIZE;
+	len = nr_pages * PAGE_SIZE;
+	ret = cres.ops->query_occupancy(&cres, start, len, PAGE_SIZE,
+					&data_start, &data_len);
+	if (ret == 0) {
+		*_data_first = data_start / PAGE_SIZE;
+		*_data_nr_pages = data_len / PAGE_SIZE;
+	}
+
+	fscache_end_operation(&cres);
+	return ret;
 }
diff --git a/fs/cifs/fscache.h b/fs/cifs/fscache.h
index c6ca49ac33d4..ed4b08df1961 100644
--- a/fs/cifs/fscache.h
+++ b/fs/cifs/fscache.h
@@ -9,6 +9,7 @@
 #ifndef _CIFS_FSCACHE_H
 #define _CIFS_FSCACHE_H
 
+#include <linux/swap.h>
 #include <linux/fscache.h>
 
 #include "cifsglob.h"
@@ -58,14 +59,6 @@ void cifs_fscache_fill_coherency(struct inode *inode,
 }
 
 
-extern int cifs_fscache_release_page(struct page *page, gfp_t gfp);
-extern int __cifs_readpage_from_fscache(struct inode *, struct page *);
-extern int __cifs_readpages_from_fscache(struct inode *,
-					 struct address_space *,
-					 struct list_head *,
-					 unsigned *);
-extern void __cifs_readpage_to_fscache(struct inode *, struct page *);
-
 static inline struct fscache_cookie *cifs_inode_cookie(struct inode *inode)
 {
 	return CIFS_I(inode)->fscache;
@@ -80,33 +73,52 @@ static inline void cifs_invalidate_cache(struct inode *inode, unsigned int flags
 			   i_size_read(inode), flags);
 }
 
-static inline int cifs_readpage_from_fscache(struct inode *inode,
-					     struct page *page)
-{
-	if (CIFS_I(inode)->fscache)
-		return __cifs_readpage_from_fscache(inode, page);
+extern int __cifs_fscache_query_occupancy(struct inode *inode,
+					  pgoff_t first, unsigned nr_pages,
+					  pgoff_t *_data_first,
+					  unsigned int *_data_nr_pages);
 
-	return -ENOBUFS;
+static inline int cifs_fscache_query_occupancy(struct inode *inode,
+					       pgoff_t first, unsigned nr_pages,
+					       pgoff_t *_data_first,
+					       unsigned int *_data_nr_pages)
+{
+	if (!cifs_inode_cookie(inode))
+		return -ENOBUFS;
+	return __cifs_fscache_query_occupancy(inode, first, nr_pages,
+					      _data_first, _data_nr_pages);
 }
 
-static inline int cifs_readpages_from_fscache(struct inode *inode,
-					      struct address_space *mapping,
-					      struct list_head *pages,
-					      unsigned *nr_pages)
+extern int __cifs_readpage_from_fscache(struct inode *, struct page *);
+extern void __cifs_readpage_to_fscache(struct inode *, struct page *);
+
+
+static inline int cifs_readpage_from_fscache(struct inode *inode,
+					     struct page *page)
 {
-	if (CIFS_I(inode)->fscache)
-		return __cifs_readpages_from_fscache(inode, mapping, pages,
-						     nr_pages);
+	if (cifs_inode_cookie(inode))
+		return __cifs_readpage_from_fscache(inode, page);
 	return -ENOBUFS;
 }
 
 static inline void cifs_readpage_to_fscache(struct inode *inode,
 					    struct page *page)
 {
-	if (PageFsCache(page))
+	if (cifs_inode_cookie(inode))
 		__cifs_readpage_to_fscache(inode, page);
 }
 
+static inline int cifs_fscache_release_page(struct page *page, gfp_t gfp)
+{
+	if (PageFsCache(page)) {
+		if (current_is_kswapd() || !(gfp & __GFP_FS))
+			return false;
+		wait_on_page_fscache(page);
+		fscache_note_page_release(cifs_inode_cookie(page->mapping->host));
+	}
+	return true;
+}
+
 #else /* CONFIG_CIFS_FSCACHE */
 static inline
 void cifs_fscache_fill_coherency(struct inode *inode,
@@ -123,22 +135,29 @@ static inline void cifs_fscache_unuse_inode_cookie(struct inode *inode, bool upd
 static inline struct fscache_cookie *cifs_inode_cookie(struct inode *inode) { return NULL; }
 static inline void cifs_invalidate_cache(struct inode *inode, unsigned int flags) {}
 
-static inline int
-cifs_readpage_from_fscache(struct inode *inode, struct page *page)
+static inline int cifs_fscache_query_occupancy(struct inode *inode,
+					       pgoff_t first, unsigned nr_pages,
+					       pgoff_t *_data_first,
+					       unsigned int *_data_nr_pages)
 {
+	*_data_first = ULONG_MAX;
+	*_data_nr_pages = 0;
 	return -ENOBUFS;
 }
 
-static inline int cifs_readpages_from_fscache(struct inode *inode,
-					      struct address_space *mapping,
-					      struct list_head *pages,
-					      unsigned *nr_pages)
+static inline int
+cifs_readpage_from_fscache(struct inode *inode, struct page *page)
 {
 	return -ENOBUFS;
 }
 
-static inline void cifs_readpage_to_fscache(struct inode *inode,
-			struct page *page) {}
+static inline
+void cifs_readpage_to_fscache(struct inode *inode, struct page *page) {}
+
+static inline int cifs_fscache_release_page(struct page *page, gfp_t gfp)
+{
+	return true; /* May release page */
+}
 
 #endif /* CONFIG_CIFS_FSCACHE */
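The cifs_fscache_release_page() hunk above encodes one decision: a page the cache is still using may only be released from a context that is allowed to wait, i.e. not kswapd and with __GFP_FS set. A minimal userspace sketch of that decision follows; page_model, GFP_FS and release_page() are illustrative stand-ins for struct page, the real gfp bits and the kernel helper, not kernel APIs:

```c
#include <assert.h>
#include <stdbool.h>

#define GFP_FS 0x1	/* stand-in for __GFP_FS */

struct page_model {
	bool fscache_mark;	/* models PageFsCache() */
	bool released_noted;	/* models fscache_note_page_release() */
};

/*
 * Sketch of the cifs_fscache_release_page() logic: refuse to release a
 * page still marked for the cache unless the caller may block; otherwise
 * wait for the cache (elided here), note the release, and allow it.
 */
static bool release_page(struct page_model *p, unsigned int gfp,
			 bool is_kswapd)
{
	if (p->fscache_mark) {
		if (is_kswapd || !(gfp & GFP_FS))
			return false;	/* cannot wait; reclaim will retry */
		/* wait_on_page_fscache() would block here */
		p->fscache_mark = false;
		p->released_noted = true;
	}
	return true;
}
```

The point of the two refusal cases is that waiting on in-flight cache I/O from kswapd, or from an allocation that forbids filesystem recursion, could deadlock, so reclaim is told to try the page again later.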
 



^ permalink raw reply related	[flat|nested] 9+ messages in thread

* Re: [PATCH 0/4] cifs: Use fscache I/O again after the rewrite disabled it
  2022-01-27 16:01 [PATCH 0/4] cifs: Use fscache I/O again after the rewrite disabled it David Howells
                   ` (3 preceding siblings ...)
  2022-01-27 16:02 ` [PATCH 4/4] cifs: Implement cache I/O by accessing the cache directly David Howells
@ 2022-01-27 16:09 ` David Howells
  2022-01-28 10:58 ` Jeff Layton
  5 siblings, 0 replies; 9+ messages in thread
From: David Howells @ 2022-01-27 16:09 UTC (permalink / raw)
  To: Steve French
  Cc: dhowells, Shyam Prasad N, linux-cifs, linux-cachefs, Jeff Layton,
	Matthew Wilcox, Linus Torvalds, linux-fsdevel, linux-kernel

>  (4) Make ->readahead() call

 (4) Make cifs_readahead() call into the cache to query if and where it has
     data available and, if it has, read data from it.

David
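The description of (4) boils down to an interleaving loop: ask the cache which of the next pages it holds, pull cached pages from it one at a time, and cap each server read so it stops short of the next cached page. A sketch of that loop in ordinary userspace C follows; the bitmap cache, query_occupancy() helper and the "S<page>+<count>" / "C<page>" log format are illustrative stand-ins, not kernel APIs:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Stand-in for the cache: one presence bit per page. */
#define NR_PAGES 16
static bool cached[NR_PAGES];

/*
 * Model of cifs_fscache_query_occupancy(): starting at @first, report the
 * first cached page within @nr pages and the length of the contiguous
 * cached run beginning there.  Returns 0 on success, -1 if nothing cached.
 */
static int query_occupancy(unsigned int first, unsigned int nr,
			   unsigned int *data_first, unsigned int *data_nr)
{
	for (unsigned int i = first; i < first + nr && i < NR_PAGES; i++) {
		if (cached[i]) {
			unsigned int j = i;

			while (j < NR_PAGES && cached[j])
				j++;
			*data_first = i;
			*data_nr = j - i;
			return 0;
		}
	}
	return -1;
}

/*
 * Model of the cifs_readahead() loop: cached pages are read from the cache
 * one at a time ("C<page>"); runs of uncached pages become server reads
 * ("S<page>+<count>") capped at both rsize and the next cached page.
 */
static void readahead(unsigned int index, unsigned int count,
		      unsigned int rsize_pages, char *log, size_t logsz)
{
	unsigned int next_cached = NR_PAGES, cache_nr = 0, n;
	bool caching = true, check_cache = true;

	while (count) {
		if (caching && check_cache) {
			if (query_occupancy(index, count,
					    &next_cached, &cache_nr) < 0)
				caching = false;	/* fall back to server */
			check_cache = false;
		}
		if (caching && index == next_cached) {
			snprintf(log + strlen(log), logsz - strlen(log),
				 "C%u ", index);
			index++, count--, next_cached++;
			if (--cache_nr == 0)
				check_cache = true;	/* re-query next time */
			continue;
		}
		n = count < rsize_pages ? count : rsize_pages;
		if (caching && next_cached - index < n)
			n = next_cached - index;	/* stop at cached run */
		snprintf(log + strlen(log), logsz - strlen(log),
			 "S%u+%u ", index, n);
		index += n;
		count -= n;
	}
}
```

A run over pages 0-7 with pages 2, 3 and 6 cached and a 4-page rsize yields "S0+2 C2 C3 S4+2 C6 S7+1": server batches stop short of the next cached page, cached pages are pulled one at a time, and a failed occupancy query drops back to plain server reads for the remainder.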



* Re: [PATCH 4/4] cifs: Implement cache I/O by accessing the cache directly
  2022-01-27 16:02 ` [PATCH 4/4] cifs: Implement cache I/O by accessing the cache directly David Howells
@ 2022-01-28  0:55   ` Steve French
  2022-01-28  0:58     ` Matthew Wilcox
  0 siblings, 1 reply; 9+ messages in thread
From: Steve French @ 2022-01-28  0:55 UTC (permalink / raw)
  To: David Howells
  Cc: Shyam Prasad N, CIFS, linux-cachefs, Jeff Layton, Matthew Wilcox,
	Linus Torvalds, linux-fsdevel, LKML

Regression tests so far on Dave's cifs fscache patch series are going
fine.  First series of tests I ran were this:
http://smb3-test-rhel-75.southcentralus.cloudapp.azure.com/#/builders/11/builds/160
but I have to run additional tests with fscache enabled etc.

I saw that checkpatch had some minor complaints on this patch (patch
4) which I cleaned up, but was wondering others' thoughts on this
checkpatch warning:

WARNING: Avoid crashing the kernel - try using WARN_ON & recovery code
rather than BUG() or BUG_ON()
#101: FILE: fs/cifs/file.c:4449:

ie

+ page = readahead_page(ractl);
+ BUG_ON(!page);

On Thu, Jan 27, 2022 at 10:03 AM David Howells <dhowells@redhat.com> wrote:
>
> Move cifs to using the fscache DIO API instead of the old upstream I/O API,
> as that has been removed.  This is a stopgap solution as the intention is that
> at some time in the future, the cache will move to using larger blocks and
> won't be able to store individual pages in order to deal with the potential
> for data corruption due to the backing filesystem being able to insert/remove
> bridging blocks of zeros into its extent list[1].
>
> cifs then reads and writes cache pages synchronously and one page at a time.
>
> The preferred change would be to use the netfs lib, but the new I/O API can
> be used directly.  It's just that as the cache now needs to track data for
> itself, caching blocks may exceed page size...
>
> This code is somewhat borrowed from my "fallback I/O" patchset[2].
>
> Signed-off-by: David Howells <dhowells@redhat.com>
> cc: Steve French <smfrench@gmail.com>
> cc: Shyam Prasad N <nspmangalore@gmail.com>
> cc: linux-cifs@vger.kernel.org
> cc: linux-cachefs@redhat.com
> Link: https://lore.kernel.org/r/YO17ZNOcq+9PajfQ@mit.edu [1]
> Link: https://lore.kernel.org/r/202112100957.2oEDT20W-lkp@intel.com/ [2]
> ---
>
>  fs/cifs/file.c    |   55 +++++++++++++++++++++--
>  fs/cifs/fscache.c |  126 +++++++++++++++++++++++++++++++++++++++++++++--------
>  fs/cifs/fscache.h |   79 +++++++++++++++++++++------------
>  3 files changed, 207 insertions(+), 53 deletions(-)
>
> diff --git a/fs/cifs/file.c b/fs/cifs/file.c
> index be62dc29dc54..1b41b6f2a04b 100644
> --- a/fs/cifs/file.c
> +++ b/fs/cifs/file.c
> @@ -4276,12 +4276,12 @@ cifs_readv_complete(struct work_struct *work)
>                 } else
>                         SetPageError(page);
>
> -               unlock_page(page);
> -
>                 if (rdata->result == 0 ||
>                     (rdata->result == -EAGAIN && got_bytes))
>                         cifs_readpage_to_fscache(rdata->mapping->host, page);
>
> +               unlock_page(page);
> +
>                 got_bytes -= min_t(unsigned int, PAGE_SIZE, got_bytes);
>
>                 put_page(page);
> @@ -4396,7 +4396,11 @@ static void cifs_readahead(struct readahead_control *ractl)
>         struct cifs_sb_info *cifs_sb = CIFS_FILE_SB(ractl->file);
>         struct TCP_Server_Info *server;
>         pid_t pid;
> -       unsigned int xid, last_batch_size = 0;
> +       unsigned int xid, nr_pages, last_batch_size = 0, cache_nr_pages = 0;
> +       pgoff_t next_cached = ULONG_MAX;
> +       bool caching = fscache_cookie_enabled(cifs_inode_cookie(ractl->mapping->host)) &&
> +               cifs_inode_cookie(ractl->mapping->host)->cache_priv;
> +       bool check_cache = caching;
>
>         xid = get_xid();
>
> @@ -4414,12 +4418,52 @@ static void cifs_readahead(struct readahead_control *ractl)
>         /*
>          * Chop the readahead request up into rsize-sized read requests.
>          */
> -       while (readahead_count(ractl) - last_batch_size) {
> -               unsigned int i, nr_pages, got, rsize;
> +       while ((nr_pages = readahead_count(ractl) - last_batch_size)) {
> +               unsigned int i, got, rsize;
>                 struct page *page;
>                 struct cifs_readdata *rdata;
>                 struct cifs_credits credits_on_stack;
>                 struct cifs_credits *credits = &credits_on_stack;
> +               pgoff_t index = readahead_index(ractl) + last_batch_size;
> +
> +               /*
> +                * Find out if we have anything cached in the range of
> +                * interest, and if so, where the next chunk of cached data is.
> +                */
> +               if (caching) {
> +                       if (check_cache) {
> +                               rc = cifs_fscache_query_occupancy(
> +                                       ractl->mapping->host, index, nr_pages,
> +                                       &next_cached, &cache_nr_pages);
> +                               if (rc < 0)
> +                                       caching = false;
> +                               check_cache = false;
> +                       }
> +
> +                       if (index == next_cached) {
> +                               /*
> +                                * TODO: Send a whole batch of pages to be read
> +                                * by the cache.
> +                                */
> +                               page = readahead_page(ractl);
> +                               BUG_ON(!page);
> +                               if (cifs_readpage_from_fscache(ractl->mapping->host,
> +                                                              page) < 0) {
> +                                       /*
> +                                        * TODO: Deal with cache read failure
> +                                        * here, but for the moment, delegate
> +                                        * that to readpage.
> +                                        */
> +                                       caching = false;
> +                               }
> +                               unlock_page(page);
> +                               next_cached++;
> +                               cache_nr_pages--;
> +                               if (cache_nr_pages == 0)
> +                                       check_cache = true;
> +                               continue;
> +                       }
> +               }
>
>                 if (open_file->invalidHandle) {
>                         rc = cifs_reopen_file(open_file, true);
> @@ -4435,6 +4479,7 @@ static void cifs_readahead(struct readahead_control *ractl)
>                 if (rc)
>                         break;
>                 nr_pages = min_t(size_t, rsize / PAGE_SIZE, readahead_count(ractl));
> +               nr_pages = min_t(size_t, nr_pages, next_cached - index);
>
>                 /*
>                  * Give up immediately if rsize is too small to read an entire
> diff --git a/fs/cifs/fscache.c b/fs/cifs/fscache.c
> index efaac4d5ff55..f98cfcc0d397 100644
> --- a/fs/cifs/fscache.c
> +++ b/fs/cifs/fscache.c
> @@ -134,37 +134,127 @@ void cifs_fscache_release_inode_cookie(struct inode *inode)
>         }
>  }
>
> +static inline void fscache_end_operation(struct netfs_cache_resources *cres)
> +{
> +       const struct netfs_cache_ops *ops = fscache_operation_valid(cres);
> +
> +       if (ops)
> +               ops->end_operation(cres);
> +}
> +
>  /*
> - * Retrieve a page from FS-Cache
> + * Fallback page reading interface.
>   */
> -int __cifs_readpage_from_fscache(struct inode *inode, struct page *page)
> +static int fscache_fallback_read_page(struct inode *inode, struct page *page)
>  {
> -       cifs_dbg(FYI, "%s: (fsc:%p, p:%p, i:0x%p\n",
> -                __func__, CIFS_I(inode)->fscache, page, inode);
> -       return -ENOBUFS; // Needs conversion to using netfslib
> +       struct netfs_cache_resources cres;
> +       struct fscache_cookie *cookie = cifs_inode_cookie(inode);
> +       struct iov_iter iter;
> +       struct bio_vec bvec[1];
> +       int ret;
> +
> +       memset(&cres, 0, sizeof(cres));
> +       bvec[0].bv_page         = page;
> +       bvec[0].bv_offset       = 0;
> +       bvec[0].bv_len          = PAGE_SIZE;
> +       iov_iter_bvec(&iter, READ, bvec, ARRAY_SIZE(bvec), PAGE_SIZE);
> +
> +       ret = fscache_begin_read_operation(&cres, cookie);
> +       if (ret < 0)
> +               return ret;
> +
> +       ret = fscache_read(&cres, page_offset(page), &iter, NETFS_READ_HOLE_FAIL,
> +                          NULL, NULL);
> +       fscache_end_operation(&cres);
> +       return ret;
>  }
>
>  /*
> - * Retrieve a set of pages from FS-Cache
> + * Fallback page writing interface.
>   */
> -int __cifs_readpages_from_fscache(struct inode *inode,
> -                               struct address_space *mapping,
> -                               struct list_head *pages,
> -                               unsigned *nr_pages)
> +static int fscache_fallback_write_page(struct inode *inode, struct page *page,
> +                                      bool no_space_allocated_yet)
>  {
> -       cifs_dbg(FYI, "%s: (0x%p/%u/0x%p)\n",
> -                __func__, CIFS_I(inode)->fscache, *nr_pages, inode);
> -       return -ENOBUFS; // Needs conversion to using netfslib
> +       struct netfs_cache_resources cres;
> +       struct fscache_cookie *cookie = cifs_inode_cookie(inode);
> +       struct iov_iter iter;
> +       struct bio_vec bvec[1];
> +       loff_t start = page_offset(page);
> +       size_t len = PAGE_SIZE;
> +       int ret;
> +
> +       memset(&cres, 0, sizeof(cres));
> +       bvec[0].bv_page         = page;
> +       bvec[0].bv_offset       = 0;
> +       bvec[0].bv_len          = PAGE_SIZE;
> +       iov_iter_bvec(&iter, WRITE, bvec, ARRAY_SIZE(bvec), PAGE_SIZE);
> +
> +       ret = fscache_begin_write_operation(&cres, cookie);
> +       if (ret < 0)
> +               return ret;
> +
> +       ret = cres.ops->prepare_write(&cres, &start, &len, i_size_read(inode),
> +                                     no_space_allocated_yet);
> +       if (ret == 0)
> +               ret = fscache_write(&cres, page_offset(page), &iter, NULL, NULL);
> +       fscache_end_operation(&cres);
> +       return ret;
>  }
>
> -void __cifs_readpage_to_fscache(struct inode *inode, struct page *page)
> +/*
> + * Retrieve a page from FS-Cache
> + */
> +int __cifs_readpage_from_fscache(struct inode *inode, struct page *page)
>  {
> -       struct cifsInodeInfo *cifsi = CIFS_I(inode);
> +       int ret;
>
> -       WARN_ON(!cifsi->fscache);
> +       cifs_dbg(FYI, "%s: (fsc:%p, p:%p, i:0x%p)\n",
> +                __func__, cifs_inode_cookie(inode), page, inode);
>
> +       ret = fscache_fallback_read_page(inode, page);
> +       if (ret < 0)
> +               return ret;
> +
> +       /* Read completed synchronously */
> +       SetPageUptodate(page);
> +       return 0;
> +}
> +
> +void __cifs_readpage_to_fscache(struct inode *inode, struct page *page)
> +{
>         cifs_dbg(FYI, "%s: (fsc: %p, p: %p, i: %p)\n",
> -                __func__, cifsi->fscache, page, inode);
> +                __func__, cifs_inode_cookie(inode), page, inode);
> +
> +       fscache_fallback_write_page(inode, page, true);
> +}
> +
> +/*
> + * Query the cache occupancy.
> + */
> +int __cifs_fscache_query_occupancy(struct inode *inode,
> +                                  pgoff_t first, unsigned nr_pages,
> +                                  pgoff_t *_data_first,
> +                                  unsigned int *_data_nr_pages)
> +{
> +       struct netfs_cache_resources cres;
> +       struct fscache_cookie *cookie = cifs_inode_cookie(inode);
> +       loff_t start, data_start;
> +       size_t len, data_len;
> +       int ret;
>
> -       // Needs conversion to using netfslib
> +       ret = fscache_begin_read_operation(&cres, cookie);
> +       if (ret < 0)
> +               return ret;
> +
> +       start = first * PAGE_SIZE;
> +       len = nr_pages * PAGE_SIZE;
> +       ret = cres.ops->query_occupancy(&cres, start, len, PAGE_SIZE,
> +                                       &data_start, &data_len);
> +       if (ret == 0) {
> +               *_data_first = data_start / PAGE_SIZE;
> +               *_data_nr_pages = data_len / PAGE_SIZE;
> +       }
> +
> +       fscache_end_operation(&cres);
> +       return ret;
>  }
> diff --git a/fs/cifs/fscache.h b/fs/cifs/fscache.h
> index c6ca49ac33d4..ed4b08df1961 100644
> --- a/fs/cifs/fscache.h
> +++ b/fs/cifs/fscache.h
> @@ -9,6 +9,7 @@
>  #ifndef _CIFS_FSCACHE_H
>  #define _CIFS_FSCACHE_H
>
> +#include <linux/swap.h>
>  #include <linux/fscache.h>
>
>  #include "cifsglob.h"
> @@ -58,14 +59,6 @@ void cifs_fscache_fill_coherency(struct inode *inode,
>  }
>
>
> -extern int cifs_fscache_release_page(struct page *page, gfp_t gfp);
> -extern int __cifs_readpage_from_fscache(struct inode *, struct page *);
> -extern int __cifs_readpages_from_fscache(struct inode *,
> -                                        struct address_space *,
> -                                        struct list_head *,
> -                                        unsigned *);
> -extern void __cifs_readpage_to_fscache(struct inode *, struct page *);
> -
>  static inline struct fscache_cookie *cifs_inode_cookie(struct inode *inode)
>  {
>         return CIFS_I(inode)->fscache;
> @@ -80,33 +73,52 @@ static inline void cifs_invalidate_cache(struct inode *inode, unsigned int flags
>                            i_size_read(inode), flags);
>  }
>
> -static inline int cifs_readpage_from_fscache(struct inode *inode,
> -                                            struct page *page)
> -{
> -       if (CIFS_I(inode)->fscache)
> -               return __cifs_readpage_from_fscache(inode, page);
> +extern int __cifs_fscache_query_occupancy(struct inode *inode,
> +                                         pgoff_t first, unsigned nr_pages,
> +                                         pgoff_t *_data_first,
> +                                         unsigned int *_data_nr_pages);
>
> -       return -ENOBUFS;
> +static inline int cifs_fscache_query_occupancy(struct inode *inode,
> +                                              pgoff_t first, unsigned nr_pages,
> +                                              pgoff_t *_data_first,
> +                                              unsigned int *_data_nr_pages)
> +{
> +       if (!cifs_inode_cookie(inode))
> +               return -ENOBUFS;
> +       return __cifs_fscache_query_occupancy(inode, first, nr_pages,
> +                                             _data_first, _data_nr_pages);
>  }
>
> -static inline int cifs_readpages_from_fscache(struct inode *inode,
> -                                             struct address_space *mapping,
> -                                             struct list_head *pages,
> -                                             unsigned *nr_pages)
> +extern int __cifs_readpage_from_fscache(struct inode *, struct page *);
> +extern void __cifs_readpage_to_fscache(struct inode *, struct page *);
> +
> +
> +static inline int cifs_readpage_from_fscache(struct inode *inode,
> +                                            struct page *page)
>  {
> -       if (CIFS_I(inode)->fscache)
> -               return __cifs_readpages_from_fscache(inode, mapping, pages,
> -                                                    nr_pages);
> +       if (cifs_inode_cookie(inode))
> +               return __cifs_readpage_from_fscache(inode, page);
>         return -ENOBUFS;
>  }
>
>  static inline void cifs_readpage_to_fscache(struct inode *inode,
>                                             struct page *page)
>  {
> -       if (PageFsCache(page))
> +       if (cifs_inode_cookie(inode))
>                 __cifs_readpage_to_fscache(inode, page);
>  }
>
> +static inline int cifs_fscache_release_page(struct page *page, gfp_t gfp)
> +{
> +       if (PageFsCache(page)) {
> +               if (current_is_kswapd() || !(gfp & __GFP_FS))
> +                       return false;
> +               wait_on_page_fscache(page);
> +               fscache_note_page_release(cifs_inode_cookie(page->mapping->host));
> +       }
> +       return true;
> +}
> +
>  #else /* CONFIG_CIFS_FSCACHE */
>  static inline
>  void cifs_fscache_fill_coherency(struct inode *inode,
> @@ -123,22 +135,29 @@ static inline void cifs_fscache_unuse_inode_cookie(struct inode *inode, bool upd
>  static inline struct fscache_cookie *cifs_inode_cookie(struct inode *inode) { return NULL; }
>  static inline void cifs_invalidate_cache(struct inode *inode, unsigned int flags) {}
>
> -static inline int
> -cifs_readpage_from_fscache(struct inode *inode, struct page *page)
> +static inline int cifs_fscache_query_occupancy(struct inode *inode,
> +                                              pgoff_t first, unsigned nr_pages,
> +                                              pgoff_t *_data_first,
> +                                              unsigned int *_data_nr_pages)
>  {
> +       *_data_first = ULONG_MAX;
> +       *_data_nr_pages = 0;
>         return -ENOBUFS;
>  }
>
> -static inline int cifs_readpages_from_fscache(struct inode *inode,
> -                                             struct address_space *mapping,
> -                                             struct list_head *pages,
> -                                             unsigned *nr_pages)
> +static inline int
> +cifs_readpage_from_fscache(struct inode *inode, struct page *page)
>  {
>         return -ENOBUFS;
>  }
>
> -static inline void cifs_readpage_to_fscache(struct inode *inode,
> -                       struct page *page) {}
> +static inline
> +void cifs_readpage_to_fscache(struct inode *inode, struct page *page) {}
> +
> +static inline int cifs_fscache_release_page(struct page *page, gfp_t gfp)
> +{
> +       return true; /* May release page */
> +}
>
>  #endif /* CONFIG_CIFS_FSCACHE */
>
>
>


-- 
Thanks,

Steve


* Re: [PATCH 4/4] cifs: Implement cache I/O by accessing the cache directly
  2022-01-28  0:55   ` Steve French
@ 2022-01-28  0:58     ` Matthew Wilcox
  0 siblings, 0 replies; 9+ messages in thread
From: Matthew Wilcox @ 2022-01-28  0:58 UTC (permalink / raw)
  To: Steve French
  Cc: David Howells, Shyam Prasad N, CIFS, linux-cachefs, Jeff Layton,
	Linus Torvalds, linux-fsdevel, LKML

On Thu, Jan 27, 2022 at 06:55:23PM -0600, Steve French wrote:
> Regression tests so far on Dave's cifs fscache patch series are going
> fine.  First series of tests I ran were this:
> http://smb3-test-rhel-75.southcentralus.cloudapp.azure.com/#/builders/11/builds/160
> but I have to run additional tests with fscache enabled etc.
> 
> I saw that checkpatch had some minor complaints on this patch (patch
> 4) which I cleaned up, but was wondering others' thoughts on this
> checkpatch warning:
> 
> WARNING: Avoid crashing the kernel - try using WARN_ON & recovery code
> rather than BUG() or BUG_ON()
> #101: FILE: fs/cifs/file.c:4449:
> 
> ie
> 
> + page = readahead_page(ractl);
> + BUG_ON(!page);

Just remove it.  The kernel will crash just fine without putting in an
explicit BUG_ON, and it'll be obvious what the problem is.


* Re: [PATCH 0/4] cifs: Use fscache I/O again after the rewrite disabled it
  2022-01-27 16:01 [PATCH 0/4] cifs: Use fscache I/O again after the rewrite disabled it David Howells
                   ` (4 preceding siblings ...)
  2022-01-27 16:09 ` [PATCH 0/4] cifs: Use fscache I/O again after the rewrite disabled it David Howells
@ 2022-01-28 10:58 ` Jeff Layton
  5 siblings, 0 replies; 9+ messages in thread
From: Jeff Layton @ 2022-01-28 10:58 UTC (permalink / raw)
  To: David Howells, Steve French
  Cc: Shyam Prasad N, linux-cifs, linux-cachefs, Matthew Wilcox,
	Linus Torvalds, linux-fsdevel, linux-kernel

On Thu, 2022-01-27 at 16:01 +0000, David Howells wrote:
> Hi Steve,
> 
> Here are some patches to make cifs actually do I/O to the cache after it
> got disabled in the fscache rewrite[1] plus a warning fix that you might
> want to detach and take separately:
> 
>  (1) Fix a kernel doc warning.
> 
>  (2) Change cifs from using ->readpages() to using ->readahead().
> 
>  (3) Provide a netfs cache op to query for the presence of data in the
>      cache.[*]
> 
>  (4) Make ->readahead() call
> 
> The patches can be found here also:
> 
> 	https://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs.git/log/?h=fscache-rewrite
> 
> David
> 
> [*] Ideally, we would use the netfslib read helpers, but it's probably better
>     to roll iterators down into cifs's I/O layer before doing that[2].
> 
> Link: https://lore.kernel.org/r/164021479106.640689.17404516570194656552.stgit@warthog.procyon.org.uk/ [1]
> Link: https://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs.git/log/?h=cifs-experimental [2]
> 
> ---
> David Howells (4):
>       Fix a warning about a malformed kernel doc comment in cifs by removing the
>       cifs: Transition from ->readpages() to ->readahead()
>       netfs, cachefiles: Add a method to query presence of data in the cache
>       cifs: Implement cache I/O by accessing the cache directly
> 
> 
>  Documentation/filesystems/netfs_library.rst |  16 ++
>  fs/cachefiles/io.c                          |  59 ++++++
>  fs/cifs/connect.c                           |   2 +-
>  fs/cifs/file.c                              | 221 ++++++++------------
>  fs/cifs/fscache.c                           | 126 +++++++++--
>  fs/cifs/fscache.h                           |  79 ++++---
>  include/linux/netfs.h                       |   7 +
>  7 files changed, 322 insertions(+), 188 deletions(-)
> 
> 

Acked-by: Jeff Layton <jlayton@kernel.org>

