linux-fsdevel.vger.kernel.org archive mirror
From: David Howells <dhowells@redhat.com>
To: Trond Myklebust <trondmy@hammerspace.com>,
	Anna Schumaker <anna.schumaker@netapp.com>,
	Steve French <sfrench@samba.org>,
	Jeff Layton <jlayton@redhat.com>
Cc: dhowells@redhat.com, Matthew Wilcox <willy@infradead.org>,
	Alexander Viro <viro@zeniv.linux.org.uk>,
	linux-afs@lists.infradead.org, linux-nfs@vger.kernel.org,
	linux-cifs@vger.kernel.org, ceph-devel@vger.kernel.org,
	v9fs-developer@lists.sourceforge.net,
	linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [RFC PATCH 34/61] cachefiles: Implement extent shaper
Date: Mon, 04 May 2020 18:12:36 +0100
Message-ID: <158861235619.340223.16395929645492372171.stgit@warthog.procyon.org.uk>
In-Reply-To: <158861203563.340223.7585359869938129395.stgit@warthog.procyon.org.uk>

Implement the function that shapes extents to map onto the granules in a
cache file.

When preparing to fetch data from the server so that it can be cached, the
extent will be expanded to align with the granule size and trimmed so that it
doesn't cross the boundary between a non-present extent and a present extent.
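
To make the write-side rounding concrete (an illustration, not part of the
patch, assuming 4KiB pages so that one 256KiB granule spans 64 pages): a
request for pages 70-200 that starts in a non-present granule is expanded to
pages 64-256.  A minimal userspace sketch of just that rounding, using
illustrative names rather than the kernel's round_down()/round_up() helpers
and ignoring the EOF/limit trimming the patch also applies:

/* Illustrative only: write-side expansion to whole granules. */
#include <stdio.h>

#define GRAN_PAGES 64UL	/* assumed CACHEFILES_GRAN_PAGES for 4KiB pages */

static unsigned long round_down_to(unsigned long x, unsigned long m)
{
	return x - (x % m);
}

static unsigned long round_up_to(unsigned long x, unsigned long m)
{
	return round_down_to(x + m - 1, m);
}

int main(void)
{
	unsigned long start = 70, end = 200;	/* requested page range */

	printf("write-shaped: %lu-%lu\n",
	       round_down_to(start, GRAN_PAGES),
	       round_up_to(end, GRAN_PAGES));
	return 0;
}

Compiled and run, that prints "write-shaped: 64-256".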

When preparing to read data from the cache, the extent will be trimmed so
that it doesn't cross the boundary between a present extent and a
non-present extent.
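
On the read side (again an illustration, not part of the patch, assuming
4KiB pages and a little-endian bitmap with one bit per 256KiB granule): if
the starting granule is present, the extent is allowed to run over
consecutive present granules and is cut at the first absent one.  A
simplified userspace sketch of that walk, with illustrative names:

/* Illustrative only: read-side trimming at the first absent granule. */
#include <stdbool.h>
#include <stdio.h>

#define GRAN_PAGES 64UL

static bool granule_present(const unsigned char *map, unsigned long granule)
{
	return map[granule / 8] & (1 << (granule % 8));
}

int main(void)
{
	unsigned char map[2] = { 0x07, 0x00 };	/* granules 0-2 present */
	unsigned long start = 10, end = 500, bend;

	/* End of the granule containing the start, then walk forward. */
	bend = (start / GRAN_PAGES + 1) * GRAN_PAGES;
	while (bend < end && granule_present(map, bend / GRAN_PAGES))
		bend += GRAN_PAGES;
	if (bend > end)
		bend = end;
	printf("read-shaped: %lu-%lu\n", start, bend);
	return 0;
}

With granules 0-2 marked present, the requested pages 10-500 are trimmed to
10-192.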

If no caching is taking place, the requested extent is passed through
unaltered.

Signed-off-by: David Howells <dhowells@redhat.com>
---

 fs/cachefiles/content-map.c |  229 ++++++++++++++++++++++++++++++++++++-------
 fs/cachefiles/internal.h    |    6 +
 fs/cachefiles/io.c          |   11 --
 3 files changed, 197 insertions(+), 49 deletions(-)

diff --git a/fs/cachefiles/content-map.c b/fs/cachefiles/content-map.c
index 594624cb1cb9..dea28948f006 100644
--- a/fs/cachefiles/content-map.c
+++ b/fs/cachefiles/content-map.c
@@ -15,6 +15,31 @@
 static const char cachefiles_xattr_content_map[] =
 	XATTR_USER_PREFIX "CacheFiles.content";
 
+/*
+ * Determine the map size for a granulated object.
+ *
+ * There's one bit per granule.  We size it in terms of 8-byte chunks, where a
+ * 64-bit span of 256KiB granules covers 16MiB of file space.  At that rate,
+ * 512B will cover 1GiB.
+ */
+static size_t cachefiles_map_size(loff_t i_size)
+{
+	loff_t size;
+	size_t granules, bits, bytes, map_size;
+
+	if (i_size <= CACHEFILES_GRAN_SIZE * 64)
+		return 8;
+
+	size = i_size + CACHEFILES_GRAN_SIZE - 1;
+	granules = size / CACHEFILES_GRAN_SIZE;
+	bits = granules + (64 - 1);
+	bits &= ~(64 - 1);
+	bytes = bits / 8;
+	map_size = roundup_pow_of_two(bytes);
+	_leave(" = %zx [i=%llx g=%zu b=%zu]", map_size, i_size, granules, bits);
+	return map_size;
+}
+
 static bool cachefiles_granule_is_present(struct cachefiles_object *object,
 					  size_t granule)
 {
@@ -28,6 +53,157 @@ static bool cachefiles_granule_is_present(struct cachefiles_object *object,
 	return res;
 }
 
+/*
+ * Shape the extent of a single-chunk data object.
+ */
+static unsigned int cachefiles_shape_single(struct fscache_object *obj,
+					    struct fscache_extent *extent,
+					    loff_t i_size, bool for_write)
+{
+	struct cachefiles_object *object =
+		container_of(obj, struct cachefiles_object, fscache);
+	unsigned int ret;
+	pgoff_t eof;
+
+	_enter("{%lx,%lx,%lx},%llx,%d",
+	       extent->start, extent->block_end, extent->limit,
+	       i_size, for_write);
+
+	extent->dio_block_size = CACHEFILES_DIO_BLOCK_SIZE;
+
+	if (object->content_info == CACHEFILES_CONTENT_SINGLE) {
+		ret = FSCACHE_READ_FROM_CACHE;
+	} else {
+		eof = (i_size + PAGE_SIZE - 1) >> PAGE_SHIFT;
+
+		extent->start = 0;
+		extent->block_end = eof;
+		extent->limit = eof;
+		ret = FSCACHE_WRITE_TO_CACHE;
+	}
+
+	_leave(" = %u", ret);
+	return ret;
+}
+
+/*
+ * Determine the size of a data extent in a cache object.
+ *
+ * In cachefiles, a data cache object is divided into granules of 256KiB, each
+ * of which must be written as a whole unit when the cache is being loaded.
+ * Data may be read out piecemeal.
+ *
+ * The extent is resized, but the result will always contain the starting page
+ * from the extent.
+ *
+ * If the granule does not exist in the cachefile, the start may be brought
+ * forward to align with the beginning of a granule boundary, and the end may be
+ * moved either way to align too.  The extent will be cut short if it would
+ * cross the boundary between what's cached and what's not.
+ *
+ * If the starting granule does exist in the cachefile, the extent will be
+ * shortened, if necessary, so that it doesn't cross over into a region that is
+ * not present.
+ *
+ * If the granule does not exist and we cannot cache it for lack of space, the
+ * requested extent is left unaltered.
+ */
+unsigned int cachefiles_shape_extent(struct fscache_object *obj,
+				     struct fscache_extent *extent,
+				     loff_t i_size, bool for_write)
+{
+	struct cachefiles_object *object =
+		container_of(obj, struct cachefiles_object, fscache);
+	unsigned int ret = 0;
+	pgoff_t start, end, limit, eof, bend;
+	size_t granule;
+
+	if (object->fscache.cookie->advice & FSCACHE_ADV_SINGLE_CHUNK)
+		return cachefiles_shape_single(obj, extent, i_size, for_write);
+
+	start = extent->start;
+	end   = extent->block_end;
+	limit = extent->limit;
+	_enter("{%lx,%lx,%lx},%llx,%d", start, end, limit, i_size, for_write);
+
+	granule = start / CACHEFILES_GRAN_PAGES;
+
+	/* If the content map didn't get expanded for some reason - simply
+	 * ignore this granule.
+	 */
+	if (granule / 8 >= object->content_map_size)
+		return 0;
+
+	if (cachefiles_granule_is_present(object, granule)) {
+		/* The start of the requested extent is present in the cache -
+		 * restrict the returned extent to the maximum length of what's
+		 * available.
+		 */
+		bend = round_up(start + 1, CACHEFILES_GRAN_PAGES);
+		while (bend < end) {
+			pgoff_t i = round_up(bend + 1, CACHEFILES_GRAN_PAGES);
+			granule = i / CACHEFILES_GRAN_PAGES;
+			if (!cachefiles_granule_is_present(object, granule))
+				break;
+			bend = i;
+		}
+
+		if (bend > end)
+			bend = end;
+		end = bend;
+		ret = FSCACHE_READ_FROM_CACHE;
+	} else {
+		/* Otherwise expand the extent in both directions to cover what
+		 * we want for caching purposes.
+		 */
+		start = round_down(start, CACHEFILES_GRAN_PAGES);
+		end   = round_up(end, CACHEFILES_GRAN_PAGES);
+
+		if (limit != ULONG_MAX) {
+			limit = round_down(limit, CACHEFILES_GRAN_PAGES);
+			if (end > limit) {
+				end = limit;
+				if (end <= start) {
+					_leave(" = don't");
+					return 0;
+				}
+			}
+		}
+
+		/* But trim to the end of the file and the starting page */
+		eof = (i_size + PAGE_SIZE - 1) >> PAGE_SHIFT;
+		if (eof <= extent->start)
+			eof = extent->start + 1;
+		if (end > eof)
+			end = eof;
+
+		if ((start << PAGE_SHIFT) >= object->fscache.cookie->zero_point) {
+			/* The start of the requested extent is beyond the
+			 * original EOF of the file on the server - therefore
+			 * it's not going to be found on the server.
+			 */
+			bend = round_up(start + 1, CACHEFILES_GRAN_PAGES);
+			end = bend;
+			ret = FSCACHE_FILL_WITH_ZERO;
+		} else {
+			bend = start + CACHEFILES_GRAN_PAGES;
+			if (bend > eof)
+				bend = eof;
+			ret = FSCACHE_WRITE_TO_CACHE;
+		}
+
+		/* TODO: Check we have space in the cache */
+	}
+
+	extent->start = start;
+	extent->block_end = bend;
+	extent->limit = end;
+	extent->dio_block_size = CACHEFILES_DIO_BLOCK_SIZE;
+
+	_leave(" = %u {%lx,%lx,%lx}", ret, start, bend, end);
+	return ret;
+}
+
 /*
  * Mark the content map to indicate stored granule.
  */
@@ -74,23 +250,14 @@ void cachefiles_mark_content_map(struct fscache_io_request *req)
 /*
  * Expand the content map to a larger file size.
  */
-void cachefiles_expand_content_map(struct cachefiles_object *object, loff_t size)
+void cachefiles_expand_content_map(struct cachefiles_object *object, loff_t i_size)
 {
+	size_t size;
 	u8 *map, *zap;
 
-	/* Determine the size.  There's one bit per granule.  We size it in
-	 * terms of 8-byte chunks, where a 64-bit span * 256KiB bytes granules
-	 * covers 16MiB of file space.  At that, 512B will cover 1GiB.
-	 */
-	if (size > 0) {
-		size += CACHEFILES_GRAN_SIZE - 1;
-		size /= CACHEFILES_GRAN_SIZE;
-		size += 8 - 1;
-		size /= 8;
-		size = roundup_pow_of_two(size);
-	} else {
-		size = 8;
-	}
+	size = cachefiles_map_size(i_size);
+
+	_enter("%llx,%lx,%x", i_size, size, object->content_map_size);
 
 	if (size <= object->content_map_size)
 		return;
@@ -122,7 +289,7 @@ void cachefiles_shorten_content_map(struct cachefiles_object *object,
 				    loff_t new_size)
 {
 	struct fscache_cookie *cookie = object->fscache.cookie;
-	loff_t granule, o_granule;
+	size_t granule, tmp, bytes;
 
 	if (object->fscache.cookie->advice & FSCACHE_ADV_SINGLE_CHUNK)
 		return;
@@ -137,12 +304,16 @@ void cachefiles_shorten_content_map(struct cachefiles_object *object,
 		granule += CACHEFILES_GRAN_SIZE - 1;
 		granule /= CACHEFILES_GRAN_SIZE;
 
-		o_granule = cookie->object_size;
-		o_granule += CACHEFILES_GRAN_SIZE - 1;
-		o_granule /= CACHEFILES_GRAN_SIZE;
+		tmp = granule;
+		tmp = round_up(granule, 64);
+		bytes = tmp / 8;
+		if (bytes < object->content_map_size)
+			memset(object->content_map + bytes, 0,
+			       object->content_map_size - bytes);
 
-		for (; o_granule > granule; o_granule--)
-			clear_bit_le(o_granule, object->content_map);
+		if (tmp > granule)
+			for (tmp--; tmp > granule; tmp--)
+				clear_bit_le(tmp, object->content_map);
 	}
 
 	write_unlock_bh(&object->content_map_lock);
@@ -157,7 +328,7 @@ bool cachefiles_load_content_map(struct cachefiles_object *object)
 						      struct cachefiles_cache, cache);
 	const struct cred *saved_cred;
 	ssize_t got;
-	loff_t size;
+	size_t size;
 	u8 *map = NULL;
 
 	_enter("c=%08x,%llx",
@@ -176,19 +347,7 @@ bool cachefiles_load_content_map(struct cachefiles_object *object)
 		 * bytes granules covers 16MiB of file space.  At that, 512B
 		 * will cover 1GiB.
 		 */
-		size = object->fscache.cookie->object_size;
-		if (size > 0) {
-			size += CACHEFILES_GRAN_SIZE - 1;
-			size /= CACHEFILES_GRAN_SIZE;
-			size += 8 - 1;
-			size /= 8;
-			if (size < 8)
-				size = 8;
-			size = roundup_pow_of_two(size);
-		} else {
-			size = 8;
-		}
-
+		size = cachefiles_map_size(object->fscache.cookie->object_size);
 		map = kzalloc(size, GFP_KERNEL);
 		if (!map)
 			return false;
@@ -212,7 +371,7 @@ bool cachefiles_load_content_map(struct cachefiles_object *object)
 		object->content_map = map;
 		object->content_map_size = size;
 		object->content_info = CACHEFILES_CONTENT_MAP;
-		_leave(" = t [%zd/%llu %*phN]", got, size, (int)size, map);
+		_leave(" = t [%zd/%zu %*phN]", got, size, (int)size, map);
 	}
 
 	return true;
diff --git a/fs/cachefiles/internal.h b/fs/cachefiles/internal.h
index c7a2a3442061..43f8e71136dd 100644
--- a/fs/cachefiles/internal.h
+++ b/fs/cachefiles/internal.h
@@ -125,6 +125,9 @@ extern void cachefiles_daemon_unbind(struct cachefiles_cache *cache);
 /*
  * content-map.c
  */
+extern unsigned int cachefiles_shape_extent(struct fscache_object *object,
+					    struct fscache_extent *extent,
+					    loff_t i_size, bool for_write);
 extern void cachefiles_mark_content_map(struct fscache_io_request *req);
 extern void cachefiles_expand_content_map(struct cachefiles_object *object, loff_t size);
 extern void cachefiles_shorten_content_map(struct cachefiles_object *object, loff_t new_size);
@@ -149,9 +152,6 @@ extern struct fscache_object *cachefiles_grab_object(struct fscache_object *_obj
 /*
  * io.c
  */
-extern unsigned int cachefiles_shape_extent(struct fscache_object *object,
-					    struct fscache_extent *extent,
-					    loff_t i_size, bool for_write);
 extern int cachefiles_read(struct fscache_object *object,
 			   struct fscache_io_request *req,
 			   struct iov_iter *iter);
diff --git a/fs/cachefiles/io.c b/fs/cachefiles/io.c
index 642c3fd34809..ddb44ec5a199 100644
--- a/fs/cachefiles/io.c
+++ b/fs/cachefiles/io.c
@@ -12,17 +12,6 @@
 #include <linux/xattr.h>
 #include "internal.h"
 
-/*
- * Determine the size of a data extent in a cache object.  This must be written
- * as a whole unit, but can be read piecemeal.
- */
-unsigned int cachefiles_shape_extent(struct fscache_object *object,
-				     struct fscache_extent *extent,
-				     loff_t i_size, bool for_write)
-{
-	return 0;
-}
-
 /*
  * Initiate a read from the cache.
  */
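
A side note on the sizing helper this patch adds to content-map.c (an
illustration of the arithmetic described in its comment, not part of the
patch): with one bit per 256KiB granule, the map sized in 64-bit chunks and
rounded up to a power of two, a 1GiB file needs 4096 bits, i.e. 512 bytes,
and anything up to 16MiB gets the 8-byte minimum.  A userspace sketch of the
same calculation, with illustrative names:

/* Illustrative only: mirrors the map-size arithmetic from the patch. */
#include <stdint.h>
#include <stdio.h>

#define GRAN_SIZE (256 * 1024ULL)

static uint64_t pow2_roundup(uint64_t x)
{
	uint64_t p = 1;

	while (p < x)
		p <<= 1;
	return p;
}

static uint64_t map_size(uint64_t i_size)
{
	uint64_t granules, bits;

	if (i_size <= GRAN_SIZE * 64)
		return 8;

	granules = (i_size + GRAN_SIZE - 1) / GRAN_SIZE;
	bits = (granules + 63) & ~63ULL;	/* whole 64-bit chunks */
	return pow2_roundup(bits / 8);
}

int main(void)
{
	printf("%llu\n", (unsigned long long)map_size(1024ULL * 1024 * 1024)); /* 512 */
	printf("%llu\n", (unsigned long long)map_size(16 * 1024 * 1024));      /* 8 */
	return 0;
}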



Thread overview: 68+ messages
2020-05-04 17:07 [RFC PATCH 00/61] fscache, cachefiles: Rewrite the I/O interface in terms of kiocb/iov_iter David Howells
2020-05-04 17:07 ` [RFC PATCH 01/61] afs: Make afs_zap_data() static David Howells
2020-05-04 17:07 ` [RFC PATCH 02/61] iov_iter: Add ITER_MAPPING David Howells
2020-05-04 17:07 ` [RFC PATCH 03/61] vm: Add wait/unlock functions for PG_fscache David Howells
2020-05-04 17:08 ` [RFC PATCH 04/61] vfs: Export rw_verify_area() for use by cachefiles David Howells
2020-05-04 17:08 ` [RFC PATCH 05/61] vfs: Provide S_CACHE_FILE inode flag David Howells
2020-05-04 17:08 ` [RFC PATCH 06/61] afs: Disable use of the fscache I/O routines David Howells
2020-05-04 17:08 ` [RFC PATCH 07/61] fscache: Add a cookie debug ID and use that in traces David Howells
2020-05-04 17:08 ` [RFC PATCH 08/61] fscache: Procfile to display cookies David Howells
2020-05-04 17:08 ` [RFC PATCH 09/61] fscache: Temporarily disable network filesystems' use of fscache David Howells
2020-05-04 17:08 ` [RFC PATCH 10/61] fscache: Remove the old I/O API David Howells
2020-05-04 17:09 ` [RFC PATCH 11/61] fscache: Remove the netfs data from the cookie David Howells
2020-05-04 17:09 ` [RFC PATCH 12/61] fscache: Remove struct fscache_cookie_def David Howells
2020-05-04 17:09 ` [RFC PATCH 13/61] fscache: Remove store_limit* from struct fscache_object David Howells
2020-05-04 17:09 ` [RFC PATCH 14/61] fscache: Remove fscache_check_consistency() David Howells
2020-05-04 17:09 ` [RFC PATCH 15/61] fscache: Remove fscache_attr_changed() David Howells
2020-05-04 17:09 ` [RFC PATCH 16/61] fscache: Remove obsolete stats David Howells
2020-05-04 17:10 ` [RFC PATCH 17/61] fscache: Remove old I/O tracepoints David Howells
2020-05-04 17:10 ` [RFC PATCH 18/61] fscache: Temporarily disable fscache_invalidate() David Howells
2020-05-04 17:10 ` [RFC PATCH 19/61] fscache: Remove the I/O operation manager David Howells
2020-05-04 17:10 ` [RFC PATCH 20/61] cachefiles: Remove tree of active files and use S_CACHE_FILE inode flag David Howells
2020-05-04 17:10 ` [RFC PATCH 21/61] fscache: Provide a simple thread pool for running ops asynchronously David Howells
2020-05-04 17:10 ` [RFC PATCH 23/61] fscache: Rewrite the I/O API based on iov_iter David Howells
2020-05-04 17:11 ` [RFC PATCH 24/61] fscache: Remove fscache_wait_on_invalidate() David Howells
2020-05-04 17:11 ` [RFC PATCH 25/61] fscache: Keep track of size of a file last set independently on the server David Howells
2020-05-04 17:11 ` [RFC PATCH 26/61] fscache, cachefiles: Fix disabled histogram warnings David Howells
2020-05-04 17:11 ` [RFC PATCH 27/61] fscache: Recast assertion in terms of cookie not being an index David Howells
2020-05-04 17:11 ` [RFC PATCH 28/61] cachefiles: Remove some redundant checks on unsigned values David Howells
2020-05-04 17:11 ` [RFC PATCH 29/61] cachefiles: trace: Log coherency checks David Howells
2020-05-04 17:12 ` [RFC PATCH 30/61] cachefiles: Split cachefiles_drop_object() up a bit David Howells
2020-05-04 17:12 ` [RFC PATCH 31/61] cachefiles: Implement new fscache I/O backend API David Howells
2020-05-04 17:12 ` [RFC PATCH 32/61] cachefiles: Merge object->backer into object->dentry David Howells
2020-05-04 17:12 ` [RFC PATCH 33/61] cachefiles: Implement a content-present indicator and bitmap David Howells
2020-05-04 17:12 ` David Howells [this message]
2020-05-04 17:12 ` [RFC PATCH 35/61] cachefiles: Round the cachefile size up to DIO block size David Howells
2020-05-04 17:12 ` [RFC PATCH 36/61] cachefiles: Implement read and write parts of new I/O API David Howells
2020-05-04 17:13 ` [RFC PATCH 37/61] cachefiles: Add I/O tracepoints David Howells
2020-05-04 17:13 ` [RFC PATCH 38/61] fscache: Add read helper David Howells
2020-05-04 17:13 ` [RFC PATCH 39/61] fscache: Display cache-specific data in /proc/fs/fscache/objects David Howells
2020-05-04 17:13 ` [RFC PATCH 40/61] fscache: Remove more obsolete stats David Howells
2020-05-04 17:13 ` [RFC PATCH 41/61] fscache: New stats David Howells
2020-05-04 17:13 ` [RFC PATCH 42/61] fscache, cachefiles: Rewrite invalidation David Howells
2020-05-04 17:13 ` [RFC PATCH 43/61] fscache: Implement "will_modify" parameter on fscache_use_cookie() David Howells
2020-05-04 17:14 ` [RFC PATCH 44/61] fscache: Provide resize operation David Howells
2020-05-04 17:14 ` [RFC PATCH 45/61] fscache: Remove the update operation David Howells
2020-05-04 17:14 ` [RFC PATCH 46/61] cachefiles: Shape write requests David Howells
2020-05-04 17:14 ` [RFC PATCH 47/61] afs: Remove afs_zero_fid as it's not used David Howells
2020-05-04 17:14 ` [RFC PATCH 48/61] afs: Move key to afs_read struct David Howells
2020-05-04 17:14 ` [RFC PATCH 49/61] afs: Don't truncate iter during data fetch David Howells
2020-05-04 17:15 ` [RFC PATCH 50/61] afs: Set up the iov_iter before calling afs_extract_data() David Howells
2020-05-04 17:15 ` [RFC PATCH 51/61] afs: Use ITER_MAPPING for writing David Howells
2020-05-04 17:15 ` [RFC PATCH 52/61] afs: Interpose struct fscache_io_request into struct afs_read David Howells
2020-05-04 17:15 ` [RFC PATCH 53/61] afs: Note the amount transferred in fetch-data delivery David Howells
2020-05-04 17:15 ` [RFC PATCH 54/61] afs: Wait on PG_fscache before modifying/releasing a page David Howells
2020-05-05 11:59   ` Matthew Wilcox
2020-05-06  7:57   ` David Howells
2020-05-06 11:09     ` Matthew Wilcox
2020-05-06 14:24     ` David Howells
2020-05-08 14:39     ` David Howells
2020-05-04 17:15 ` [RFC PATCH 55/61] afs: Use new fscache I/O API David Howells
2020-05-04 17:15 ` [RFC PATCH 56/61] afs: Copy local writes to the cache when writing to the server David Howells
2020-05-04 17:16 ` [RFC PATCH 57/61] afs: Invoke fscache_resize_cookie() when handling ATTR_SIZE for setattr David Howells
2020-05-04 17:16 ` [RFC PATCH 58/61] fscache: Rewrite the main document David Howells
2020-05-04 17:16 ` [RFC PATCH 59/61] fscache: Remove the obsolete API bits from the documentation David Howells
2020-05-04 17:16 ` [RFC PATCH 60/61] fscache: Document the new netfs API David Howells
2020-05-04 17:16 ` [RFC PATCH 61/61] fscache: Document the rewritten cache backend API David Howells
2020-05-04 17:54 ` [RFC PATCH 00/61] fscache, cachefiles: Rewrite the I/O interface in terms of kiocb/iov_iter Jeff Layton
2020-05-05  6:05 ` Christoph Hellwig
