linux-kernel.vger.kernel.org archive mirror
* [PATCH v8 00/20] fscache,erofs: fscache-based on-demand read semantics
@ 2022-04-06  7:55 Jeffle Xu
  2022-04-06  7:55 ` [PATCH v8 01/20] cachefiles: unmark inode in use in error path Jeffle Xu
                   ` (27 more replies)
  0 siblings, 28 replies; 56+ messages in thread
From: Jeffle Xu @ 2022-04-06  7:55 UTC (permalink / raw)
  To: dhowells, linux-cachefs, xiang, chao, linux-erofs
  Cc: torvalds, gregkh, willy, linux-fsdevel, joseph.qi, bo.liu,
	tao.peng, gerry, eguan, linux-kernel, luodaowen.backend,
	tianzichen, fannaihao

changes since v7:
- rebased to 5.18-rc1
- include "cachefiles: unmark inode in use in error path" patch into
  this patchset to avoid warning from test robot (patch 1)
- cachefiles: rename [cookie|volume]_key_len field of struct
  cachefiles_open to [cookie|volume]_key_size to avoid potential
  misunderstanding. Also add more documentation to
  include/uapi/linux/cachefiles.h. (patch 3)
- cachefiles: add a validity check for the error code returned from the
  user daemon (patch 3)
- cachefiles: change WARN_ON_ONCE() to pr_info_once() when user daemon
  closes anon_fd prematurely (patch 4/5)
- ready for complete review


Kernel Patchset
---------------
Git tree:

    https://github.com/lostjeffle/linux.git jingbo/dev-erofs-fscache-v8

Gitweb:

    https://github.com/lostjeffle/linux/commits/jingbo/dev-erofs-fscache-v8


User Daemon for Quick Test
--------------------------
Git tree:

    https://github.com/lostjeffle/demand-read-cachefilesd.git main

Gitweb:

    https://github.com/lostjeffle/demand-read-cachefilesd


RFC: https://lore.kernel.org/all/YbRL2glGzjfZkVbH@B-P7TQMD6M-0146.local/t/
v1: https://lore.kernel.org/lkml/47831875-4bdd-8398-9f2d-0466b31a4382@linux.alibaba.com/T/
v2: https://lore.kernel.org/all/2946d871-b9e1-cf29-6d39-bcab30f2854f@linux.alibaba.com/t/
v3: https://lore.kernel.org/lkml/20220209060108.43051-1-jefflexu@linux.alibaba.com/T/
v4: https://lore.kernel.org/lkml/20220307123305.79520-1-jefflexu@linux.alibaba.com/T/#t
v5: https://lore.kernel.org/lkml/202203170912.gk2sqkaK-lkp@intel.com/T/
v6: https://lore.kernel.org/lkml/202203260720.uA5o7k5w-lkp@intel.com/T/
v7: https://www.spinics.net/lists/linux-fsdevel/msg215066.html


[Background]
============
Nydus [1] is an image distribution service especially optimized for
distribution over the network. Nydus is an excellent container image
acceleration solution: it only pulls data from remote storage when it is
actually needed, a.k.a. on-demand reading, and it also supports
chunk-based deduplication, compression, etc.

erofs (Enhanced Read-Only File System) is a filesystem designed for
read-only scenarios. (See Documentation/filesystems/erofs.rst)

Over the past months we've been focusing on supporting the Nydus image
service with the in-kernel erofs format [2]. In that case, each container
image is organized into one bootstrap (metadata) and, optionally,
multiple data blobs in erofs format. A massive number of container
images will be stored on one machine.

To accelerate container startup (fetching container images from remote
and then starting the container), we hope that the bootstrap and blob
files can support on-demand read. That is, erofs can be mounted and
accessed even when the bootstrap/data blob files have not been fully
downloaded, and native performance is achieved once the data is
available locally.

That means we have to manage the cache state of the bootstrap/data blob
files (on a cache hit, read directly from the local cache; on a cache
miss, fetch the data somehow). It would be painful, and arguably
pointless, for erofs to implement this cache management itself. Thus we
prefer fscache/cachefiles to do the cache management instead.

The fscache on-demand read feature aims to be implemented in a generic
way inside the fscache subsystem, so that it can also benefit other use
cases and/or filesystems.

[1] https://nydus.dev
[2] https://sched.co/pcdL


[Overall Design]
================
Please refer to patch 7 ("cachefiles: document on-demand read mode") for
more details.

When working in the original mode, cachefiles mainly serves as a local
cache for a remote networking fs, while in on-demand read mode,
cachefiles can be used in scenarios where on-demand read semantics are
needed, e.g. container image distribution.

The essential difference between these two modes is that, in the
original mode, the netfs itself fetches data from remote on a cache miss
and then writes the fetched data into the cache file, while in on-demand
read mode, a user daemon is responsible for fetching the data and then
feeding it to the kernel fscache side.

The on-demand read mode relies on a simple protocol for communication
between the kernel and the user daemon.

The proposed implementation relies on the anonymous fd mechanism to
avoid depending on the on-disk format of the cache file. When a fscache
cache file is opened for the first time, an anon_fd associated with the
cache file is sent to the user daemon. With the given anon_fd, the user
daemon can fetch and write data into the cache file in the background,
even before the kernel has triggered a cache miss. Besides, the write()
syscall on the anon_fd finally calls into the cachefiles kernel module,
which writes the data to the cache file in whatever on-disk format
cachefiles currently uses, so that the daemon does not need to know it.
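
As an illustration, a minimal and simplified sketch (untested, error
handling omitted) of how a user daemon could handle an OPEN request is
shown below. It is based on the uapi header added in patch 3; the cache
size lookup is daemon specific and therefore only stubbed out here with
a placeholder constant:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <linux/cachefiles.h>	/* uapi header added in patch 3 */

/* Handle an OPEN request read from the bound /dev/cachefiles fd. */
static void handle_open(int devfd, struct cachefiles_msg *msg)
{
	struct cachefiles_open *load = (void *)msg->data;
	char *volume_key = (char *)load->data;	/* string, '\0' terminated */
	void *cookie_key = load->data + load->volume_key_size; /* binary */
	char cmd[64];
	long size;

	/*
	 * The anon_fd associated with the cache file has already been
	 * installed in load->fd and can be kept around by the daemon.
	 *
	 * A real daemon derives the cache file size from its own blob
	 * metadata indexed by volume_key/cookie_key; a constant is used
	 * here as a placeholder. Replying with a negative value would
	 * fail the OPEN request with that error code instead.
	 */
	(void)volume_key;
	(void)cookie_key;
	size = 0x100000;

	/* Complete the OPEN request: "copen <id>,<cache_size>". */
	snprintf(cmd, sizeof(cmd), "copen %u,%ld", msg->id, size);
	write(devfd, cmd, strlen(cmd));
}

static void daemon_loop(int devfd)
{
	char buf[CACHEFILES_MSG_MAX_SIZE];
	struct cachefiles_msg *msg = (struct cachefiles_msg *)buf;

	/*
	 * read() on the bound device fd returns one pending request at a
	 * time; a real daemon would poll() the fd before reading.
	 */
	while (read(devfd, buf, sizeof(buf)) > 0) {
		if (msg->opcode == CACHEFILES_OP_OPEN)
			handle_open(devfd, msg);
	}
}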

1. cache miss
On a cache miss, the cachefiles kernel module will notify the user
daemon with the anon_fd, along with the requested file range. When
notified, the user daemon needs to fetch the data of the requested file
range, and then write the fetched data into the cache file through the
given anonymous fd. When it has finished processing the request, the
user daemon needs to notify the kernel.

After notifying the user daemon, the kernel read routine will hang
there until the request is handled by the user daemon. When it is
awakened by the notification from the user daemon, i.e. the
corresponding hole has been filled by the user daemon, it will retry
reading from the same file range.
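
A correspondingly simplified sketch of the daemon-side handling of a
READ request (based on the uapi definitions added in patch 5) might look
as below; a real daemon fetches the data from its local/remote backend,
while zero-filling is only a stand-in here:

#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/cachefiles.h>	/* uapi header added in this series */

/* Handle a READ request read from the bound /dev/cachefiles fd. */
static void handle_read(struct cachefiles_msg *msg)
{
	struct cachefiles_read *read_req = (void *)msg->data;
	char *buf = malloc(read_req->len);

	/*
	 * A real daemon fetches [off, off + len) of the blob from
	 * local/remote storage here; zero-filling is only a stand-in.
	 */
	memset(buf, 0, read_req->len);

	/* Fill the hole in the cache file through the anonymous fd. */
	pwrite(read_req->fd, buf, read_req->len, read_req->off);

	/*
	 * Notify the kernel that the request has been handled; the ioctl
	 * argument is the @id field of the READ request.
	 */
	ioctl(read_req->fd, CACHEFILES_IOC_CREAD, msg->id);
	free(buf);
}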

2. cache hit
Once the data is ready in the cache file, the netfs will read from the
cache file directly.


[Advantages of fscache-based on-demand read]
============================================
1. Asynchronous Prefetch
In the current mechanism, fscache is responsible for cache state
management, while the data plane (fetching data from local/remote
storage on a cache miss) is done on the user daemon side.

If the data is already available in the backing file, the netfs (e.g.
erofs) will read from the backing file directly and won't be trapped
into user space anymore. Thus the user daemon can fetch data (from
remote) asynchronously in the background, which accelerates access to
the backing file to some degree.

2. Support for massive blob files
Besides, this mechanism supports a large number of backing files, and
thus can benefit densely deployed scenarios.

In our use case, one container image corresponds to one bootstrap file
(required) and multiple data blob files (optional). For example, one
container image for node.js corresponds to ~20 files in total. In a
densely deployed environment, there can be as many as hundreds of
containers, and thus thousands of backing files, on one machine.


Jeffle Xu (20):
  cachefiles: unmark inode in use in error path
  cachefiles: extract write routine
  cachefiles: notify user daemon with anon_fd when looking up cookie
  cachefiles: notify user daemon when withdrawing cookie
  cachefiles: implement on-demand read
  cachefiles: enable on-demand read mode
  cachefiles: document on-demand read mode
  erofs: make erofs_map_blocks() generally available
  erofs: add mode checking helper
  erofs: register fscache volume
  erofs: add fscache context helper functions
  erofs: add anonymous inode managing page cache for data blob
  erofs: add erofs_fscache_read_folios() helper
  erofs: register fscache context for primary data blob
  erofs: register fscache context for extra data blobs
  erofs: implement fscache-based metadata read
  erofs: implement fscache-based data read for non-inline layout
  erofs: implement fscache-based data read for inline layout
  erofs: implement fscache-based data readahead
  erofs: add 'fsid' mount option

 .../filesystems/caching/cachefiles.rst        | 165 ++++++
 fs/cachefiles/Kconfig                         |  11 +
 fs/cachefiles/Makefile                        |   1 +
 fs/cachefiles/daemon.c                        |  90 +++-
 fs/cachefiles/interface.c                     |   2 +
 fs/cachefiles/internal.h                      |  67 +++
 fs/cachefiles/io.c                            |  72 ++-
 fs/cachefiles/namei.c                         |  49 +-
 fs/cachefiles/ondemand.c                      | 479 ++++++++++++++++++
 fs/erofs/Kconfig                              |  10 +
 fs/erofs/Makefile                             |   1 +
 fs/erofs/data.c                               |  27 +-
 fs/erofs/fscache.c                            | 369 ++++++++++++++
 fs/erofs/inode.c                              |   5 +
 fs/erofs/internal.h                           |  55 ++
 fs/erofs/super.c                              |  99 +++-
 include/linux/fscache.h                       |   1 +
 include/linux/netfs.h                         |   1 +
 include/trace/events/cachefiles.h             |   2 +
 include/uapi/linux/cachefiles.h               |  72 +++
 20 files changed, 1501 insertions(+), 77 deletions(-)
 create mode 100644 fs/cachefiles/ondemand.c
 create mode 100644 fs/erofs/fscache.c
 create mode 100644 include/uapi/linux/cachefiles.h

-- 
2.27.0



* [PATCH v8 01/20] cachefiles: unmark inode in use in error path
  2022-04-06  7:55 [PATCH v8 00/20] fscache,erofs: fscache-based on-demand read semantics Jeffle Xu
@ 2022-04-06  7:55 ` Jeffle Xu
  2022-04-06  7:55 ` [PATCH v8 02/20] cachefiles: extract write routine Jeffle Xu
                   ` (26 subsequent siblings)
  27 siblings, 0 replies; 56+ messages in thread
From: Jeffle Xu @ 2022-04-06  7:55 UTC (permalink / raw)
  To: dhowells, linux-cachefs, xiang, chao, linux-erofs
  Cc: torvalds, gregkh, willy, linux-fsdevel, joseph.qi, bo.liu,
	tao.peng, gerry, eguan, linux-kernel, luodaowen.backend,
	tianzichen, fannaihao

Unmark the inode in use if an error is encountered. If the in-use flag
leaks in cachefiles_open_file(), cachefiles will complain "Inode already
in use" when another cookie with the same index key is looked up later.

If the in-use flag leaks in cachefiles_create_tmpfile(), the "Inode
already in use" warning won't be triggered, but fix the leakage anyway.

Reported-by: Gao Xiang <hsiangkao@linux.alibaba.com>
Fixes: 1f08c925e7a3 ("cachefiles: Implement backing file wrangling")
Signed-off-by: Jeffle Xu <jefflexu@linux.alibaba.com>
---
 fs/cachefiles/namei.c | 33 ++++++++++++++++++++++++---------
 1 file changed, 24 insertions(+), 9 deletions(-)

diff --git a/fs/cachefiles/namei.c b/fs/cachefiles/namei.c
index f256c8aff7bb..fe1bab0f36d4 100644
--- a/fs/cachefiles/namei.c
+++ b/fs/cachefiles/namei.c
@@ -57,6 +57,16 @@ static void __cachefiles_unmark_inode_in_use(struct cachefiles_object *object,
 	trace_cachefiles_mark_inactive(object, inode);
 }
 
+static void cachefiles_do_unmark_inode_in_use(struct cachefiles_object *object,
+				       struct dentry *dentry)
+{
+	struct inode *inode = d_backing_inode(dentry);
+
+	inode_lock(inode);
+	__cachefiles_unmark_inode_in_use(object, dentry);
+	inode_unlock(inode);
+}
+
 /*
  * Unmark a backing inode and tell cachefilesd that there's something that can
  * be culled.
@@ -68,9 +78,7 @@ void cachefiles_unmark_inode_in_use(struct cachefiles_object *object,
 	struct inode *inode = file_inode(file);
 
 	if (inode) {
-		inode_lock(inode);
-		__cachefiles_unmark_inode_in_use(object, file->f_path.dentry);
-		inode_unlock(inode);
+		cachefiles_do_unmark_inode_in_use(object, file->f_path.dentry);
 
 		if (!test_bit(CACHEFILES_OBJECT_USING_TMPFILE, &object->flags)) {
 			atomic_long_add(inode->i_blocks, &cache->b_released);
@@ -484,7 +492,7 @@ struct file *cachefiles_create_tmpfile(struct cachefiles_object *object)
 				object, d_backing_inode(path.dentry), ret,
 				cachefiles_trace_trunc_error);
 			file = ERR_PTR(ret);
-			goto out_dput;
+			goto out_unuse;
 		}
 	}
 
@@ -494,15 +502,20 @@ struct file *cachefiles_create_tmpfile(struct cachefiles_object *object)
 		trace_cachefiles_vfs_error(object, d_backing_inode(path.dentry),
 					   PTR_ERR(file),
 					   cachefiles_trace_open_error);
-		goto out_dput;
+		goto out_unuse;
 	}
 	if (unlikely(!file->f_op->read_iter) ||
 	    unlikely(!file->f_op->write_iter)) {
 		fput(file);
 		pr_notice("Cache does not support read_iter and write_iter\n");
 		file = ERR_PTR(-EINVAL);
+		goto out_unuse;
 	}
 
+	goto out_dput;
+
+out_unuse:
+	cachefiles_do_unmark_inode_in_use(object, path.dentry);
 out_dput:
 	dput(path.dentry);
 out:
@@ -590,14 +603,16 @@ static bool cachefiles_open_file(struct cachefiles_object *object,
 check_failed:
 	fscache_cookie_lookup_negative(object->cookie);
 	cachefiles_unmark_inode_in_use(object, file);
-	if (ret == -ESTALE) {
-		fput(file);
-		dput(dentry);
+	fput(file);
+	dput(dentry);
+	if (ret == -ESTALE)
 		return cachefiles_create_file(object);
-	}
+	return false;
+
 error_fput:
 	fput(file);
 error:
+	cachefiles_do_unmark_inode_in_use(object, dentry);
 	dput(dentry);
 	return false;
 }
-- 
2.27.0



* [PATCH v8 02/20] cachefiles: extract write routine
  2022-04-06  7:55 [PATCH v8 00/20] fscache,erofs: fscache-based on-demand read semantics Jeffle Xu
  2022-04-06  7:55 ` [PATCH v8 01/20] cachefiles: unmark inode in use in error path Jeffle Xu
@ 2022-04-06  7:55 ` Jeffle Xu
  2022-04-06  7:55 ` [PATCH v8 03/20] cachefiles: notify user daemon with anon_fd when looking up cookie Jeffle Xu
                   ` (25 subsequent siblings)
  27 siblings, 0 replies; 56+ messages in thread
From: Jeffle Xu @ 2022-04-06  7:55 UTC (permalink / raw)
  To: dhowells, linux-cachefs, xiang, chao, linux-erofs
  Cc: torvalds, gregkh, willy, linux-fsdevel, joseph.qi, bo.liu,
	tao.peng, gerry, eguan, linux-kernel, luodaowen.backend,
	tianzichen, fannaihao

Extract the generic routine of writing data to cache files, and make it
generally available.

This will be used by the following patch implementing on-demand read
mode. Since it's called inside cachefiles module in this case, make the
interface generic and unrelated to netfs_cache_resources.

It is worth noting that ki->inval_counter is not initialized after this
cleanup. That shall not make any visible difference, since
inval_counter is no longer used in the write completion routine, i.e.
cachefiles_write_complete().

Signed-off-by: Jeffle Xu <jefflexu@linux.alibaba.com>
---
 fs/cachefiles/internal.h | 10 +++++++
 fs/cachefiles/io.c       | 61 +++++++++++++++++++++++-----------------
 2 files changed, 45 insertions(+), 26 deletions(-)

diff --git a/fs/cachefiles/internal.h b/fs/cachefiles/internal.h
index c793d33b0224..e80673d0ab97 100644
--- a/fs/cachefiles/internal.h
+++ b/fs/cachefiles/internal.h
@@ -201,6 +201,16 @@ extern void cachefiles_put_object(struct cachefiles_object *object,
  */
 extern bool cachefiles_begin_operation(struct netfs_cache_resources *cres,
 				       enum fscache_want_state want_state);
+extern int __cachefiles_prepare_write(struct cachefiles_object *object,
+				      struct file *file,
+				      loff_t *_start, size_t *_len,
+				      bool no_space_allocated_yet);
+extern int __cachefiles_write(struct cachefiles_object *object,
+			      struct file *file,
+			      loff_t start_pos,
+			      struct iov_iter *iter,
+			      netfs_io_terminated_t term_func,
+			      void *term_func_priv);
 
 /*
  * key.c
diff --git a/fs/cachefiles/io.c b/fs/cachefiles/io.c
index 9dc81e781f2b..50a14e8f0aac 100644
--- a/fs/cachefiles/io.c
+++ b/fs/cachefiles/io.c
@@ -277,36 +277,33 @@ static void cachefiles_write_complete(struct kiocb *iocb, long ret)
 /*
  * Initiate a write to the cache.
  */
-static int cachefiles_write(struct netfs_cache_resources *cres,
-			    loff_t start_pos,
-			    struct iov_iter *iter,
-			    netfs_io_terminated_t term_func,
-			    void *term_func_priv)
+int __cachefiles_write(struct cachefiles_object *object,
+		       struct file *file,
+		       loff_t start_pos,
+		       struct iov_iter *iter,
+		       netfs_io_terminated_t term_func,
+		       void *term_func_priv)
 {
-	struct cachefiles_object *object;
 	struct cachefiles_cache *cache;
 	struct cachefiles_kiocb *ki;
 	struct inode *inode;
-	struct file *file;
 	unsigned int old_nofs;
-	ssize_t ret = -ENOBUFS;
+	ssize_t ret;
 	size_t len = iov_iter_count(iter);
 
-	if (!fscache_wait_for_operation(cres, FSCACHE_WANT_WRITE))
-		goto presubmission_error;
 	fscache_count_write();
-	object = cachefiles_cres_object(cres);
 	cache = object->volume->cache;
-	file = cachefiles_cres_file(cres);
 
 	_enter("%pD,%li,%llx,%zx/%llx",
 	       file, file_inode(file)->i_ino, start_pos, len,
 	       i_size_read(file_inode(file)));
 
-	ret = -ENOMEM;
 	ki = kzalloc(sizeof(struct cachefiles_kiocb), GFP_KERNEL);
-	if (!ki)
-		goto presubmission_error;
+	if (!ki) {
+		if (term_func)
+			term_func(term_func_priv, -ENOMEM, false);
+		return -ENOMEM;
+	}
 
 	refcount_set(&ki->ki_refcnt, 2);
 	ki->iocb.ki_filp	= file;
@@ -314,7 +311,6 @@ static int cachefiles_write(struct netfs_cache_resources *cres,
 	ki->iocb.ki_flags	= IOCB_DIRECT | IOCB_WRITE;
 	ki->iocb.ki_ioprio	= get_current_ioprio();
 	ki->object		= object;
-	ki->inval_counter	= cres->inval_counter;
 	ki->start		= start_pos;
 	ki->len			= len;
 	ki->term_func		= term_func;
@@ -369,11 +365,24 @@ static int cachefiles_write(struct netfs_cache_resources *cres,
 	cachefiles_put_kiocb(ki);
 	_leave(" = %zd", ret);
 	return ret;
+}
 
-presubmission_error:
-	if (term_func)
-		term_func(term_func_priv, ret, false);
-	return ret;
+static int cachefiles_write(struct netfs_cache_resources *cres,
+			    loff_t start_pos,
+			    struct iov_iter *iter,
+			    netfs_io_terminated_t term_func,
+			    void *term_func_priv)
+{
+	if (!fscache_wait_for_operation(cres, FSCACHE_WANT_WRITE)) {
+		if (term_func)
+			term_func(term_func_priv, -ENOBUFS, false);
+		return -ENOBUFS;
+	}
+
+	return __cachefiles_write(cachefiles_cres_object(cres),
+				  cachefiles_cres_file(cres),
+				  start_pos, iter,
+				  term_func, term_func_priv);
 }
 
 /*
@@ -484,13 +493,12 @@ static enum netfs_io_source cachefiles_prepare_read(struct netfs_io_subrequest *
 /*
  * Prepare for a write to occur.
  */
-static int __cachefiles_prepare_write(struct netfs_cache_resources *cres,
-				      loff_t *_start, size_t *_len, loff_t i_size,
-				      bool no_space_allocated_yet)
+int __cachefiles_prepare_write(struct cachefiles_object *object,
+			       struct file *file,
+			       loff_t *_start, size_t *_len,
+			       bool no_space_allocated_yet)
 {
-	struct cachefiles_object *object = cachefiles_cres_object(cres);
 	struct cachefiles_cache *cache = object->volume->cache;
-	struct file *file = cachefiles_cres_file(cres);
 	loff_t start = *_start, pos;
 	size_t len = *_len, down;
 	int ret;
@@ -577,7 +585,8 @@ static int cachefiles_prepare_write(struct netfs_cache_resources *cres,
 	}
 
 	cachefiles_begin_secure(cache, &saved_cred);
-	ret = __cachefiles_prepare_write(cres, _start, _len, i_size,
+	ret = __cachefiles_prepare_write(object, cachefiles_cres_file(cres),
+					 _start, _len,
 					 no_space_allocated_yet);
 	cachefiles_end_secure(cache, saved_cred);
 	return ret;
-- 
2.27.0



* [PATCH v8 03/20] cachefiles: notify user daemon with anon_fd when looking up cookie
  2022-04-06  7:55 [PATCH v8 00/20] fscache,erofs: fscache-based on-demand read semantics Jeffle Xu
  2022-04-06  7:55 ` [PATCH v8 01/20] cachefiles: unmark inode in use in error path Jeffle Xu
  2022-04-06  7:55 ` [PATCH v8 02/20] cachefiles: extract write routine Jeffle Xu
@ 2022-04-06  7:55 ` Jeffle Xu
  2022-04-06  7:55 ` [PATCH v8 04/20] cachefiles: notify user daemon when withdrawing cookie Jeffle Xu
                   ` (24 subsequent siblings)
  27 siblings, 0 replies; 56+ messages in thread
From: Jeffle Xu @ 2022-04-06  7:55 UTC (permalink / raw)
  To: dhowells, linux-cachefs, xiang, chao, linux-erofs
  Cc: torvalds, gregkh, willy, linux-fsdevel, joseph.qi, bo.liu,
	tao.peng, gerry, eguan, linux-kernel, luodaowen.backend,
	tianzichen, fannaihao

Fscache/cachefiles used to serve as a local cache for a remote fs. This
patch, along with the following patches, introduces a new on-demand read
mode for cachefiles, which can boost scenarios where on-demand read
semantics are needed, e.g. container image distribution.

The essential difference between the original mode and on-demand read
mode is that, in the original mode, the netfs itself fetches data from
remote on a cache miss and then writes the fetched data into the cache
file, while in on-demand read mode, a user daemon is responsible for
fetching the data and then writing it to the cache file.

As the first step, notify the user daemon with an anon_fd when looking
up a cookie.

Send the anonymous fd to the user daemon when looking up a cookie, no
matter whether the cache file already exists or not. With the given
anonymous fd, the user daemon can fetch and then write data into the
cache file in advance, even before any cache miss has happened.

Also add one advisory flag (FSCACHE_ADV_WANT_CACHE_SIZE) suggesting
that the cache file size shall be retrieved at runtime. This helps the
scenario where one cache file can contain multiple netfs files for the
purpose of deduplication. In this case, the netfs itself has no idea of
the cache file size, whilst the user daemon needs to offer a hint on the
cache file size.

Signed-off-by: Jeffle Xu <jefflexu@linux.alibaba.com>
---
 fs/cachefiles/Kconfig             |  11 +
 fs/cachefiles/Makefile            |   1 +
 fs/cachefiles/daemon.c            |  77 +++++--
 fs/cachefiles/internal.h          |  43 ++++
 fs/cachefiles/namei.c             |  16 +-
 fs/cachefiles/ondemand.c          | 360 ++++++++++++++++++++++++++++++
 include/linux/fscache.h           |   1 +
 include/trace/events/cachefiles.h |   2 +
 include/uapi/linux/cachefiles.h   |  49 ++++
 9 files changed, 545 insertions(+), 15 deletions(-)
 create mode 100644 fs/cachefiles/ondemand.c
 create mode 100644 include/uapi/linux/cachefiles.h

diff --git a/fs/cachefiles/Kconfig b/fs/cachefiles/Kconfig
index 719faeeda168..58aad1fb4c5c 100644
--- a/fs/cachefiles/Kconfig
+++ b/fs/cachefiles/Kconfig
@@ -26,3 +26,14 @@ config CACHEFILES_ERROR_INJECTION
 	help
 	  This permits error injection to be enabled in cachefiles whilst a
 	  cache is in service.
+
+config CACHEFILES_ONDEMAND
+	bool "Support for on-demand read"
+	depends on CACHEFILES
+	default n
+	help
+	  This permits on-demand read mode of cachefiles. In this mode, when
+	  cache miss, the cachefiles backend instead of netfs, is responsible
+          for fetching data, e.g. through user daemon.
+
+	  If unsure, say N.
diff --git a/fs/cachefiles/Makefile b/fs/cachefiles/Makefile
index 16d811f1a2fa..c37a7a9af10b 100644
--- a/fs/cachefiles/Makefile
+++ b/fs/cachefiles/Makefile
@@ -16,5 +16,6 @@ cachefiles-y := \
 	xattr.o
 
 cachefiles-$(CONFIG_CACHEFILES_ERROR_INJECTION) += error_inject.o
+cachefiles-$(CONFIG_CACHEFILES_ONDEMAND) += ondemand.o
 
 obj-$(CONFIG_CACHEFILES) := cachefiles.o
diff --git a/fs/cachefiles/daemon.c b/fs/cachefiles/daemon.c
index 7ac04ee2c0a0..d155a6da90d3 100644
--- a/fs/cachefiles/daemon.c
+++ b/fs/cachefiles/daemon.c
@@ -75,6 +75,9 @@ static const struct cachefiles_daemon_cmd cachefiles_daemon_cmds[] = {
 	{ "inuse",	cachefiles_daemon_inuse		},
 	{ "secctx",	cachefiles_daemon_secctx	},
 	{ "tag",	cachefiles_daemon_tag		},
+#ifdef CONFIG_CACHEFILES_ONDEMAND
+	{ "copen",	cachefiles_ondemand_copen	},
+#endif
 	{ "",		NULL				}
 };
 
@@ -108,6 +111,9 @@ static int cachefiles_daemon_open(struct inode *inode, struct file *file)
 	INIT_LIST_HEAD(&cache->volumes);
 	INIT_LIST_HEAD(&cache->object_list);
 	spin_lock_init(&cache->object_list_lock);
+#ifdef CONFIG_CACHEFILES_ONDEMAND
+	xa_init_flags(&cache->reqs, XA_FLAGS_ALLOC);
+#endif
 
 	/* set default caching limits
 	 * - limit at 1% free space and/or free files
@@ -126,6 +132,27 @@ static int cachefiles_daemon_open(struct inode *inode, struct file *file)
 	return 0;
 }
 
+#ifdef CONFIG_CACHEFILES_ONDEMAND
+static inline void cachefiles_flush_reqs(struct cachefiles_cache *cache)
+{
+	struct xarray *xa = &cache->reqs;
+	struct cachefiles_req *req;
+	unsigned long index;
+
+	/*
+	 * 1) Cache has been marked as dead state, and then 2) flush all
+	 * pending requests in @reqs xarray. The barrier inside set_bit()
+	 * will ensure that above two ops won't be reordered.
+	 */
+	xa_lock(xa);
+	xa_for_each(xa, index, req) {
+		req->error = -EIO;
+		complete(&req->done);
+	}
+	xa_unlock(xa);
+}
+#endif
+
 /*
  * Release a cache.
  */
@@ -139,6 +166,11 @@ static int cachefiles_daemon_release(struct inode *inode, struct file *file)
 
 	set_bit(CACHEFILES_DEAD, &cache->flags);
 
+#ifdef CONFIG_CACHEFILES_ONDEMAND
+	cachefiles_flush_reqs(cache);
+	xa_destroy(&cache->reqs);
+#endif
+
 	cachefiles_daemon_unbind(cache);
 
 	/* clean up the control file interface */
@@ -152,23 +184,14 @@ static int cachefiles_daemon_release(struct inode *inode, struct file *file)
 	return 0;
 }
 
-/*
- * Read the cache state.
- */
-static ssize_t cachefiles_daemon_read(struct file *file, char __user *_buffer,
-				      size_t buflen, loff_t *pos)
+static ssize_t cachefiles_do_daemon_read(struct cachefiles_cache *cache,
+					 char __user *_buffer, size_t buflen)
 {
-	struct cachefiles_cache *cache = file->private_data;
 	unsigned long long b_released;
 	unsigned f_released;
 	char buffer[256];
 	int n;
 
-	//_enter(",,%zu,", buflen);
-
-	if (!test_bit(CACHEFILES_READY, &cache->flags))
-		return 0;
-
 	/* check how much space the cache has */
 	cachefiles_has_space(cache, 0, 0, cachefiles_has_space_check);
 
@@ -206,6 +229,26 @@ static ssize_t cachefiles_daemon_read(struct file *file, char __user *_buffer,
 	return n;
 }
 
+/*
+ * Read the cache state.
+ */
+static ssize_t cachefiles_daemon_read(struct file *file, char __user *_buffer,
+				      size_t buflen, loff_t *pos)
+{
+	struct cachefiles_cache *cache = file->private_data;
+
+	//_enter(",,%zu,", buflen);
+
+	if (!test_bit(CACHEFILES_READY, &cache->flags))
+		return 0;
+
+	if (IS_ENABLED(CONFIG_CACHEFILES_ONDEMAND) &&
+	    test_bit(CACHEFILES_ONDEMAND_MODE, &cache->flags))
+		return cachefiles_ondemand_daemon_read(cache, _buffer, buflen);
+	else
+		return cachefiles_do_daemon_read(cache, _buffer, buflen);
+}
+
 /*
  * Take a command from cachefilesd, parse it and act on it.
  */
@@ -297,8 +340,16 @@ static __poll_t cachefiles_daemon_poll(struct file *file,
 	poll_wait(file, &cache->daemon_pollwq, poll);
 	mask = 0;
 
-	if (test_bit(CACHEFILES_STATE_CHANGED, &cache->flags))
-		mask |= EPOLLIN;
+	if (IS_ENABLED(CONFIG_CACHEFILES_ONDEMAND) &&
+	    test_bit(CACHEFILES_ONDEMAND_MODE, &cache->flags)) {
+#ifdef CONFIG_CACHEFILES_ONDEMAND
+		if (!xa_empty(&cache->reqs))
+			mask |= EPOLLIN;
+#endif
+	} else {
+		if (test_bit(CACHEFILES_STATE_CHANGED, &cache->flags))
+			mask |= EPOLLIN;
+	}
 
 	if (test_bit(CACHEFILES_CULLING, &cache->flags))
 		mask |= EPOLLOUT;
diff --git a/fs/cachefiles/internal.h b/fs/cachefiles/internal.h
index e80673d0ab97..7d5c7d391fdb 100644
--- a/fs/cachefiles/internal.h
+++ b/fs/cachefiles/internal.h
@@ -15,6 +15,8 @@
 #include <linux/fscache-cache.h>
 #include <linux/cred.h>
 #include <linux/security.h>
+#include <linux/xarray.h>
+#include <linux/cachefiles.h>
 
 #define CACHEFILES_DIO_BLOCK_SIZE 4096
 
@@ -58,6 +60,9 @@ struct cachefiles_object {
 	enum cachefiles_content		content_info:8;	/* Info about content presence */
 	unsigned long			flags;
 #define CACHEFILES_OBJECT_USING_TMPFILE	0		/* Have an unlinked tmpfile */
+#ifdef CONFIG_CACHEFILES_ONDEMAND
+	int				fd;		/* anonymous fd */
+#endif
 };
 
 /*
@@ -98,11 +103,24 @@ struct cachefiles_cache {
 #define CACHEFILES_DEAD			1	/* T if cache dead */
 #define CACHEFILES_CULLING		2	/* T if cull engaged */
 #define CACHEFILES_STATE_CHANGED	3	/* T if state changed (poll trigger) */
+#define CACHEFILES_ONDEMAND_MODE	4	/* T if in on-demand read mode */
 	char				*rootdirname;	/* name of cache root directory */
 	char				*secctx;	/* LSM security context */
 	char				*tag;		/* cache binding tag */
+#ifdef CONFIG_CACHEFILES_ONDEMAND
+	struct xarray			reqs;		/* xarray of pending on-demand requests */
+#endif
 };
 
+struct cachefiles_req {
+	struct cachefiles_object *object;
+	struct completion done;
+	int error;
+	struct cachefiles_msg msg;
+};
+
+#define CACHEFILES_REQ_NEW	XA_MARK_1
+
 #include <trace/events/cachefiles.h>
 
 static inline
@@ -250,6 +268,31 @@ extern struct file *cachefiles_create_tmpfile(struct cachefiles_object *object);
 extern bool cachefiles_commit_tmpfile(struct cachefiles_cache *cache,
 				      struct cachefiles_object *object);
 
+/*
+ * ondemand.c
+ */
+#ifdef CONFIG_CACHEFILES_ONDEMAND
+extern ssize_t cachefiles_ondemand_daemon_read(struct cachefiles_cache *cache,
+					char __user *_buffer, size_t buflen);
+
+extern int cachefiles_ondemand_copen(struct cachefiles_cache *cache,
+				     char *args);
+
+extern int cachefiles_ondemand_init_object(struct cachefiles_object *object);
+
+#else
+static inline ssize_t cachefiles_ondemand_daemon_read(struct cachefiles_cache *cache,
+					char __user *_buffer, size_t buflen)
+{
+	return -EOPNOTSUPP;
+}
+
+static inline int cachefiles_ondemand_init_object(struct cachefiles_object *object)
+{
+	return 0;
+}
+#endif
+
 /*
  * security.c
  */
diff --git a/fs/cachefiles/namei.c b/fs/cachefiles/namei.c
index fe1bab0f36d4..68213304e96b 100644
--- a/fs/cachefiles/namei.c
+++ b/fs/cachefiles/namei.c
@@ -452,10 +452,9 @@ struct file *cachefiles_create_tmpfile(struct cachefiles_object *object)
 	struct dentry *fan = volume->fanout[(u8)object->cookie->key_hash];
 	struct file *file;
 	struct path path;
-	uint64_t ni_size = object->cookie->object_size;
+	uint64_t ni_size;
 	long ret;
 
-	ni_size = round_up(ni_size, CACHEFILES_DIO_BLOCK_SIZE);
 
 	cachefiles_begin_secure(cache, &saved_cred);
 
@@ -481,6 +480,15 @@ struct file *cachefiles_create_tmpfile(struct cachefiles_object *object)
 		goto out_dput;
 	}
 
+	ret = cachefiles_ondemand_init_object(object);
+	if (ret < 0) {
+		file = ERR_PTR(ret);
+		goto out_unuse;
+	}
+
+	ni_size = object->cookie->object_size;
+	ni_size = round_up(ni_size, CACHEFILES_DIO_BLOCK_SIZE);
+
 	if (ni_size > 0) {
 		trace_cachefiles_trunc(object, d_backing_inode(path.dentry), 0, ni_size,
 				       cachefiles_trunc_expand_tmpfile);
@@ -586,6 +594,10 @@ static bool cachefiles_open_file(struct cachefiles_object *object,
 	}
 	_debug("file -> %pd positive", dentry);
 
+	ret = cachefiles_ondemand_init_object(object);
+	if (ret < 0)
+		goto error_fput;
+
 	ret = cachefiles_check_auxdata(object, file);
 	if (ret < 0)
 		goto check_failed;
diff --git a/fs/cachefiles/ondemand.c b/fs/cachefiles/ondemand.c
new file mode 100644
index 000000000000..75180d02af91
--- /dev/null
+++ b/fs/cachefiles/ondemand.c
@@ -0,0 +1,360 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * Copyright (C) 2022, Alibaba Cloud
+ */
+#include <linux/fdtable.h>
+#include <linux/anon_inodes.h>
+#include <linux/uio.h>
+#include "internal.h"
+
+static int cachefiles_ondemand_fd_release(struct inode *inode,
+					  struct file *file)
+{
+	struct cachefiles_object *object = file->private_data;
+
+	/*
+	 * Uninstall anon_fd to the cachefiles object, so that no further
+	 * associated requests will get enqueued.
+	 */
+	object->fd = -1;
+
+	cachefiles_put_object(object, cachefiles_obj_put_ondemand_fd);
+	return 0;
+}
+
+static ssize_t cachefiles_ondemand_fd_write_iter(struct kiocb *kiocb,
+						 struct iov_iter *iter)
+{
+	struct cachefiles_object *object = kiocb->ki_filp->private_data;
+	struct cachefiles_cache *cache = object->volume->cache;
+	struct file *file = object->file;
+	size_t len = iter->count;
+	loff_t pos = kiocb->ki_pos;
+	const struct cred *saved_cred;
+	int ret;
+
+	if (!file)
+		return -ENOBUFS;
+
+	cachefiles_begin_secure(cache, &saved_cred);
+	ret = __cachefiles_prepare_write(object, file, &pos, &len, true);
+	cachefiles_end_secure(cache, saved_cred);
+	if (ret < 0)
+		return ret;
+
+	ret = __cachefiles_write(object, file, pos, iter, NULL, NULL);
+	if (!ret)
+		ret = len;
+
+	return ret;
+}
+
+static loff_t cachefiles_ondemand_fd_llseek(struct file *filp, loff_t pos,
+					    int whence)
+{
+	struct cachefiles_object *object = filp->private_data;
+	struct file *file = object->file;
+
+	if (!file)
+		return -ENOBUFS;
+
+	return vfs_llseek(file, pos, whence);
+}
+
+static const struct file_operations cachefiles_ondemand_fd_fops = {
+	.owner		= THIS_MODULE,
+	.release	= cachefiles_ondemand_fd_release,
+	.write_iter	= cachefiles_ondemand_fd_write_iter,
+	.llseek		= cachefiles_ondemand_fd_llseek,
+};
+
+/*
+ * OPEN request Completion (copen)
+ * - command: "copen <id>,<cache_size>"
+ *   <cache_size> represents the object size if >=0, error code if negative
+ */
+int cachefiles_ondemand_copen(struct cachefiles_cache *cache, char *args)
+{
+	struct cachefiles_req *req;
+	struct fscache_cookie *cookie;
+	char *pid, *psize;
+	unsigned long id;
+	long size;
+	int ret;
+
+	if (!test_bit(CACHEFILES_ONDEMAND_MODE, &cache->flags))
+		return -EOPNOTSUPP;
+
+	if (!*args) {
+		pr_err("Empty id specified\n");
+		return -EINVAL;
+	}
+
+	pid = args;
+	psize = strchr(args, ',');
+	if (!psize) {
+		pr_err("Cache size is not specified\n");
+		return -EINVAL;
+	}
+
+	*psize = 0;
+	psize++;
+
+	ret = kstrtoul(pid, 0, &id);
+	if (ret)
+		return ret;
+
+	req = xa_erase(&cache->reqs, id);
+	if (!req)
+		return -EINVAL;
+
+	/* fail OPEN request if copen format is invalid */
+	ret = kstrtol(psize, 0, &size);
+	if (ret) {
+		req->error = ret;
+		goto out;
+	}
+
+	/* fail OPEN request if daemon reports an error */
+	if (size < 0) {
+		if (!IS_ERR_VALUE(size))
+			size = -EINVAL;
+		req->error = size;
+		goto out;
+	}
+
+	cookie = req->object->cookie;
+	cookie->object_size = size;
+	if (size)
+		clear_bit(FSCACHE_COOKIE_NO_DATA_TO_READ, &cookie->flags);
+	else
+		set_bit(FSCACHE_COOKIE_NO_DATA_TO_READ, &cookie->flags);
+
+out:
+	complete(&req->done);
+	return ret;
+}
+
+static int cachefiles_ondemand_get_fd(struct cachefiles_req *req)
+{
+	struct cachefiles_object *object;
+	struct cachefiles_open *load;
+	struct file *file;
+	int ret, fd;
+
+	ret = get_unused_fd_flags(O_WRONLY);
+	if (ret < 0)
+		return ret;
+	fd = ret;
+
+	object = cachefiles_grab_object(req->object,
+			cachefiles_obj_get_ondemand_fd);
+
+	file = anon_inode_getfile("[cachefiles]", &cachefiles_ondemand_fd_fops,
+				object, O_WRONLY);
+	if (IS_ERR(file)) {
+		cachefiles_put_object(object, cachefiles_obj_put_ondemand_fd);
+		put_unused_fd(fd);
+		return PTR_ERR(file);
+	}
+
+	file->f_mode |= FMODE_PWRITE | FMODE_LSEEK;
+	fd_install(fd, file);
+
+	load = (void *)req->msg.data;
+	load->fd = fd;
+	object->fd = fd;
+
+	return 0;
+}
+
+ssize_t cachefiles_ondemand_daemon_read(struct cachefiles_cache *cache,
+					char __user *_buffer, size_t buflen)
+{
+	struct cachefiles_req *req;
+	struct cachefiles_msg *msg;
+	unsigned long id = 0;
+	size_t n;
+	int ret = 0;
+	XA_STATE(xas, &cache->reqs, 0);
+
+	/*
+	 * Search for request that has not ever been processed, to prevent
+	 * requests from being sent to user daemon repeatedly.
+	 */
+	xa_lock(&cache->reqs);
+	req = xas_find_marked(&xas, UINT_MAX, CACHEFILES_REQ_NEW);
+	if (!req) {
+		xa_unlock(&cache->reqs);
+		return 0;
+	}
+
+	msg = &req->msg;
+	n = msg->len;
+
+	if (n > buflen) {
+		xa_unlock(&cache->reqs);
+		return -EMSGSIZE;
+	}
+
+	xas_clear_mark(&xas, CACHEFILES_REQ_NEW);
+	xa_unlock(&cache->reqs);
+
+	msg->id = id = xas.xa_index;
+
+	if (msg->opcode == CACHEFILES_OP_OPEN) {
+		ret = cachefiles_ondemand_get_fd(req);
+		if (ret)
+			goto error;
+	}
+
+	if (copy_to_user(_buffer, msg, n) != 0) {
+		ret = -EFAULT;
+		goto err_put_fd;
+	}
+
+	return n;
+
+err_put_fd:
+	if (msg->opcode == CACHEFILES_OP_OPEN)
+		close_fd(req->object->fd);
+error:
+	xa_erase(&cache->reqs, id);
+	req->error = ret;
+	complete(&req->done);
+	return ret;
+}
+
+typedef int (*init_req_fn)(struct cachefiles_req *req, void *private);
+
+static int cachefiles_ondemand_send_req(struct cachefiles_object *object,
+					enum cachefiles_opcode opcode,
+					size_t data_len,
+					init_req_fn init_req,
+					void *private)
+{
+	struct cachefiles_cache *cache = object->volume->cache;
+	struct cachefiles_req *req;
+	XA_STATE(xas, &cache->reqs, 0);
+	int ret;
+
+	if (!test_bit(CACHEFILES_ONDEMAND_MODE, &cache->flags))
+		return 0;
+
+	if (test_bit(CACHEFILES_DEAD, &cache->flags))
+		return -EIO;
+
+	req = kzalloc(sizeof(*req) + data_len, GFP_KERNEL);
+	if (!req)
+		return -ENOMEM;
+
+	req->object = object;
+	init_completion(&req->done);
+	req->msg.opcode = opcode;
+	req->msg.len = sizeof(struct cachefiles_msg) + data_len;
+
+	ret = init_req(req, private);
+	if (ret)
+		goto out;
+
+	do {
+		/*
+		 * Stop enqueuing the request when daemon is dying. So we need
+		 * to 1) check cache state, and 2) enqueue request if cache is
+		 * alive.
+		 *
+		 * These two ops need to be atomic as a whole. Otherwise request
+		 * may be enqueued after xarray has been flushed, in which case
+		 * the orphan request will never be completed and thus netfs
+		 * will hang there forever.
+		 */
+		xas_lock(&xas);
+
+		/* recheck dead state with lock held */
+		if (test_bit(CACHEFILES_DEAD, &cache->flags)) {
+			xas_unlock(&xas);
+			ret = -EIO;
+			goto out;
+		}
+
+		xas.xa_index = 0;
+		xas_find_marked(&xas, UINT_MAX, XA_FREE_MARK);
+		if (xas.xa_node == XAS_RESTART)
+			xas_set_err(&xas, -EBUSY);
+		xas_store(&xas, req);
+		xas_clear_mark(&xas, XA_FREE_MARK);
+		xas_set_mark(&xas, CACHEFILES_REQ_NEW);
+		xas_unlock(&xas);
+	} while (xas_nomem(&xas, GFP_KERNEL));
+
+	ret = xas_error(&xas);
+	if (ret)
+		goto out;
+
+	wake_up_all(&cache->daemon_pollwq);
+	wait_for_completion(&req->done);
+	ret = req->error;
+out:
+	kfree(req);
+	return ret;
+}
+
+static int init_open_req(struct cachefiles_req *req, void *private)
+{
+	struct cachefiles_object *object = req->object;
+	struct fscache_cookie *cookie = object->cookie;
+	struct fscache_volume *volume = object->volume->vcookie;
+	struct cachefiles_open *load = (void *)req->msg.data;
+	size_t volume_key_size, cookie_key_size;
+	void *volume_key, *cookie_key;
+
+	/*
+	 * Volume key is of string format.
+	 * key[0] stores strlen() of the string, while the remained part stores
+	 * the content of the string (excluding the suffix '\0'). Append the
+	 * suffix '\0' to the output volume_key, so that it's a valid string.
+	 */
+	volume_key_size = volume->key[0] + 1;
+	volume_key = volume->key + 1;
+
+	/* Cookie key is of binary format, which is netfs specific. */
+	cookie_key_size = cookie->key_len;
+	cookie_key = fscache_get_key(cookie);
+
+	if (!(object->cookie->advice & FSCACHE_ADV_WANT_CACHE_SIZE)) {
+		pr_err("WANT_CACHE_SIZE is needed for on-demand mode\n");
+		return -EINVAL;
+	}
+
+	load->volume_key_size = volume_key_size;
+	load->cookie_key_size = cookie_key_size;
+	memcpy(load->data, volume_key, volume_key_size);
+	memcpy(load->data + volume_key_size, cookie_key, cookie_key_size);
+
+	return 0;
+}
+
+int cachefiles_ondemand_init_object(struct cachefiles_object *object)
+{
+	struct fscache_cookie *cookie = object->cookie;
+	struct fscache_volume *volume = object->volume->vcookie;
+	size_t volume_key_size, cookie_key_size, data_len;
+
+	/*
+	 * Cachefiles will firstly check cache file under the root cache
+	 * directory. If coherency check failed, it will fallback to creating a
+	 * new tmpfile as the cache file. Reuse the previously created anon_fd
+	 * if any.
+	 */
+	if (object->fd > 0)
+		return 0;
+
+	volume_key_size = volume->key[0] + 1;
+	cookie_key_size = cookie->key_len;
+	data_len = sizeof(struct cachefiles_open) +
+		   volume_key_size + cookie_key_size;
+
+	return cachefiles_ondemand_send_req(object,
+					    CACHEFILES_OP_OPEN, data_len,
+					    init_open_req, NULL);
+}
diff --git a/include/linux/fscache.h b/include/linux/fscache.h
index 6727fb0db619..663ab6f2ede6 100644
--- a/include/linux/fscache.h
+++ b/include/linux/fscache.h
@@ -39,6 +39,7 @@ struct fscache_cookie;
 #define FSCACHE_ADV_SINGLE_CHUNK	0x01 /* The object is a single chunk of data */
 #define FSCACHE_ADV_WRITE_CACHE		0x00 /* Do cache if written to locally */
 #define FSCACHE_ADV_WRITE_NOCACHE	0x02 /* Don't cache if written to locally */
+#define FSCACHE_ADV_WANT_CACHE_SIZE	0x04 /* Retrieve cache size at runtime */
 
 #define FSCACHE_INVAL_DIO_WRITE		0x01 /* Invalidate due to DIO write */
 
diff --git a/include/trace/events/cachefiles.h b/include/trace/events/cachefiles.h
index 311c14a20e70..93df9391bd7f 100644
--- a/include/trace/events/cachefiles.h
+++ b/include/trace/events/cachefiles.h
@@ -31,6 +31,8 @@ enum cachefiles_obj_ref_trace {
 	cachefiles_obj_see_lookup_failed,
 	cachefiles_obj_see_withdraw_cookie,
 	cachefiles_obj_see_withdrawal,
+	cachefiles_obj_get_ondemand_fd,
+	cachefiles_obj_put_ondemand_fd,
 };
 
 enum fscache_why_object_killed {
diff --git a/include/uapi/linux/cachefiles.h b/include/uapi/linux/cachefiles.h
new file mode 100644
index 000000000000..41492f2653c9
--- /dev/null
+++ b/include/uapi/linux/cachefiles.h
@@ -0,0 +1,49 @@
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+#ifndef _LINUX_CACHEFILES_H
+#define _LINUX_CACHEFILES_H
+
+#include <linux/types.h>
+
+/*
+ * Fscache ensures that the maximum length of cookie key is 255. The volume key
+ * is controlled by netfs, and generally no bigger than 255.
+ */
+#define CACHEFILES_MSG_MAX_SIZE	1024
+
+enum cachefiles_opcode {
+	CACHEFILES_OP_OPEN,
+};
+
+/*
+ * Message Header
+ *
+ * @id		a unique ID identifying this message
+ * @opcode	message type, CACHEFILE_OP_*
+ * @len		message length, including message header and following data
+ * @data	message type specific payload
+ */
+struct cachefiles_msg {
+	__u32 id;
+	__u32 opcode;
+	__u32 len;
+	__u8  data[];
+};
+
+/*
+ * @data contains volume_key and cookie_key in sequence.
+ *
+ * volume_key is of string format, with a suffix '\0';
+ * @volume_key_size identifies size of volume key, in bytes.
+ *
+ * cookie_key is of binary format, which is netfs specific;
+ * @cookie_key_size identifies size of cookie key, in bytes.
+ */
+struct cachefiles_open {
+	__u32 volume_key_size;
+	__u32 cookie_key_size;
+	__u32 fd;
+	__u32 flags;
+	__u8  data[];
+};
+
+#endif
-- 
2.27.0



* [PATCH v8 04/20] cachefiles: notify user daemon when withdrawing cookie
  2022-04-06  7:55 [PATCH v8 00/20] fscache,erofs: fscache-based on-demand read semantics Jeffle Xu
                   ` (2 preceding siblings ...)
  2022-04-06  7:55 ` [PATCH v8 03/20] cachefiles: notify user daemon with anon_fd when looking up cookie Jeffle Xu
@ 2022-04-06  7:55 ` Jeffle Xu
  2022-04-06  7:55 ` [PATCH v8 05/20] cachefiles: implement on-demand read Jeffle Xu
                   ` (23 subsequent siblings)
  27 siblings, 0 replies; 56+ messages in thread
From: Jeffle Xu @ 2022-04-06  7:55 UTC (permalink / raw)
  To: dhowells, linux-cachefs, xiang, chao, linux-erofs
  Cc: torvalds, gregkh, willy, linux-fsdevel, joseph.qi, bo.liu,
	tao.peng, gerry, eguan, linux-kernel, luodaowen.backend,
	tianzichen, fannaihao

Notify the user daemon that a cookie is going to be withdrawn,
providing a hint that the associated anon_fd can be closed. The anon_fd
attached in the CLOSE request shall be the same as that in the previous
OPEN request.

Note that this is only a hint. The user daemon can close the anon_fd
when receiving the CLOSE request, in which case it will receive another
anon_fd if the cookie gets looked up again. Or it can ignore the CLOSE
request and keep writing data through the anon_fd. However, the next
time the cookie gets looked up, the user daemon will still receive
another anon_fd.

Signed-off-by: Jeffle Xu <jefflexu@linux.alibaba.com>
---
 fs/cachefiles/interface.c       |  2 ++
 fs/cachefiles/internal.h        |  5 +++++
 fs/cachefiles/ondemand.c        | 36 +++++++++++++++++++++++++++++++++
 include/uapi/linux/cachefiles.h |  5 +++++
 4 files changed, 48 insertions(+)

diff --git a/fs/cachefiles/interface.c b/fs/cachefiles/interface.c
index ae93cee9d25d..a69073a1d3f0 100644
--- a/fs/cachefiles/interface.c
+++ b/fs/cachefiles/interface.c
@@ -362,6 +362,8 @@ static void cachefiles_withdraw_cookie(struct fscache_cookie *cookie)
 		spin_unlock(&cache->object_list_lock);
 	}
 
+	cachefiles_ondemand_clean_object(object);
+
 	if (object->file) {
 		cachefiles_begin_secure(cache, &saved_cred);
 		cachefiles_clean_up_object(object, cache);
diff --git a/fs/cachefiles/internal.h b/fs/cachefiles/internal.h
index 7d5c7d391fdb..8a397d4da560 100644
--- a/fs/cachefiles/internal.h
+++ b/fs/cachefiles/internal.h
@@ -279,6 +279,7 @@ extern int cachefiles_ondemand_copen(struct cachefiles_cache *cache,
 				     char *args);
 
 extern int cachefiles_ondemand_init_object(struct cachefiles_object *object);
+extern void cachefiles_ondemand_clean_object(struct cachefiles_object *object);
 
 #else
 static inline ssize_t cachefiles_ondemand_daemon_read(struct cachefiles_cache *cache,
@@ -291,6 +292,10 @@ static inline int cachefiles_ondemand_init_object(struct cachefiles_object *obje
 {
 	return 0;
 }
+
+static inline void cachefiles_ondemand_clean_object(struct cachefiles_object *object)
+{
+}
 #endif
 
 /*
diff --git a/fs/cachefiles/ondemand.c b/fs/cachefiles/ondemand.c
index 75180d02af91..defd65124052 100644
--- a/fs/cachefiles/ondemand.c
+++ b/fs/cachefiles/ondemand.c
@@ -213,6 +213,12 @@ ssize_t cachefiles_ondemand_daemon_read(struct cachefiles_cache *cache,
 		goto err_put_fd;
 	}
 
+	/* CLOSE request has no reply */
+	if (msg->opcode == CACHEFILES_OP_CLOSE) {
+		xa_erase(&cache->reqs, id);
+		complete(&req->done);
+	}
+
 	return n;
 
 err_put_fd:
@@ -334,6 +340,28 @@ static int init_open_req(struct cachefiles_req *req, void *private)
 	return 0;
 }
 
+static int init_close_req(struct cachefiles_req *req, void *private)
+{
+	struct cachefiles_object *object = req->object;
+	struct cachefiles_close *load = (void *)req->msg.data;
+	int fd = object->fd;
+
+	if (fd == -1) {
+		pr_info_once("CLOSE: anonymous fd closed prematurely.\n");
+		return -EIO;
+	}
+
+	/*
+	 * It's possible if the cookie looking up phase failed before READ
+	 * request has ever been sent.
+	 */
+	if (fd == 0)
+		return -ENOENT;
+
+	load->fd = fd;
+	return 0;
+}
+
 int cachefiles_ondemand_init_object(struct cachefiles_object *object)
 {
 	struct fscache_cookie *cookie = object->cookie;
@@ -358,3 +386,11 @@ int cachefiles_ondemand_init_object(struct cachefiles_object *object)
 					    CACHEFILES_OP_OPEN, data_len,
 					    init_open_req, NULL);
 }
+
+void cachefiles_ondemand_clean_object(struct cachefiles_object *object)
+{
+	cachefiles_ondemand_send_req(object,
+				     CACHEFILES_OP_CLOSE,
+				     sizeof(struct cachefiles_close),
+				     init_close_req, NULL);
+}
diff --git a/include/uapi/linux/cachefiles.h b/include/uapi/linux/cachefiles.h
index 41492f2653c9..73397e142ab3 100644
--- a/include/uapi/linux/cachefiles.h
+++ b/include/uapi/linux/cachefiles.h
@@ -12,6 +12,7 @@
 
 enum cachefiles_opcode {
 	CACHEFILES_OP_OPEN,
+	CACHEFILES_OP_CLOSE,
 };
 
 /*
@@ -46,4 +47,8 @@ struct cachefiles_open {
 	__u8  data[];
 };
 
+struct cachefiles_close {
+	__u32 fd;
+};
+
 #endif
-- 
2.27.0



* [PATCH v8 05/20] cachefiles: implement on-demand read
  2022-04-06  7:55 [PATCH v8 00/20] fscache,erofs: fscache-based on-demand read semantics Jeffle Xu
                   ` (3 preceding siblings ...)
  2022-04-06  7:55 ` [PATCH v8 04/20] cachefiles: notify user daemon when withdrawing cookie Jeffle Xu
@ 2022-04-06  7:55 ` Jeffle Xu
  2022-04-06  7:55 ` [PATCH v8 06/20] cachefiles: enable on-demand read mode Jeffle Xu
                   ` (22 subsequent siblings)
  27 siblings, 0 replies; 56+ messages in thread
From: Jeffle Xu @ 2022-04-06  7:55 UTC (permalink / raw)
  To: dhowells, linux-cachefs, xiang, chao, linux-erofs
  Cc: torvalds, gregkh, willy, linux-fsdevel, joseph.qi, bo.liu,
	tao.peng, gerry, eguan, linux-kernel, luodaowen.backend,
	tianzichen, fannaihao

Implement the data plane of on-demand read mode.

A new NETFS_READ_HOLE_ONDEMAND flag is introduced to indicate that an
on-demand read should be done when a cache miss is encountered. In this
case, the read routine will send a READ request to the user daemon,
along with the anonymous fd and the file range that shall be read. The
user daemon is then responsible for fetching the data in the given file
range, and then writing the fetched data into the cache file with the
given anonymous fd.

After sending the READ request, the read routine will hang there until
the READ request is handled by the user daemon. Then it will retry
reading from the same file range. If a cache miss is encountered again
on the same file range, the read routine will fail.

Signed-off-by: Jeffle Xu <jefflexu@linux.alibaba.com>
---
 fs/cachefiles/internal.h        |  9 ++++
 fs/cachefiles/io.c              | 11 +++++
 fs/cachefiles/ondemand.c        | 83 +++++++++++++++++++++++++++++++++
 include/linux/netfs.h           |  1 +
 include/uapi/linux/cachefiles.h | 18 +++++++
 5 files changed, 122 insertions(+)

diff --git a/fs/cachefiles/internal.h b/fs/cachefiles/internal.h
index 8a397d4da560..b4a834671b6b 100644
--- a/fs/cachefiles/internal.h
+++ b/fs/cachefiles/internal.h
@@ -281,6 +281,9 @@ extern int cachefiles_ondemand_copen(struct cachefiles_cache *cache,
 extern int cachefiles_ondemand_init_object(struct cachefiles_object *object);
 extern void cachefiles_ondemand_clean_object(struct cachefiles_object *object);
 
+extern int cachefiles_ondemand_read(struct cachefiles_object *object,
+				    loff_t pos, size_t len);
+
 #else
 static inline ssize_t cachefiles_ondemand_daemon_read(struct cachefiles_cache *cache,
 					char __user *_buffer, size_t buflen)
@@ -296,6 +299,12 @@ static inline int cachefiles_ondemand_init_object(struct cachefiles_object *obje
 static inline void cachefiles_ondemand_clean_object(struct cachefiles_object *object)
 {
 }
+
+static inline int cachefiles_ondemand_read(struct cachefiles_object *object,
+					   loff_t pos, size_t len)
+{
+	return -EOPNOTSUPP;
+}
 #endif
 
 /*
diff --git a/fs/cachefiles/io.c b/fs/cachefiles/io.c
index 50a14e8f0aac..6f2e20cd41f4 100644
--- a/fs/cachefiles/io.c
+++ b/fs/cachefiles/io.c
@@ -95,6 +95,7 @@ static int cachefiles_read(struct netfs_cache_resources *cres,
 	       file, file_inode(file)->i_ino, start_pos, len,
 	       i_size_read(file_inode(file)));
 
+retry:
 	/* If the caller asked us to seek for data before doing the read, then
 	 * we should do that now.  If we find a gap, we fill it with zeros.
 	 */
@@ -119,6 +120,16 @@ static int cachefiles_read(struct netfs_cache_resources *cres,
 			if (read_hole == NETFS_READ_HOLE_FAIL)
 				goto presubmission_error;
 
+			if (read_hole == NETFS_READ_HOLE_ONDEMAND) {
+				ret = cachefiles_ondemand_read(object, off, len);
+				if (ret)
+					goto presubmission_error;
+
+				/* fail the read if no progress achieved */
+				read_hole = NETFS_READ_HOLE_FAIL;
+				goto retry;
+			}
+
 			iov_iter_zero(len, iter);
 			skipped = len;
 			ret = 0;
diff --git a/fs/cachefiles/ondemand.c b/fs/cachefiles/ondemand.c
index defd65124052..149ae1923955 100644
--- a/fs/cachefiles/ondemand.c
+++ b/fs/cachefiles/ondemand.c
@@ -11,13 +11,30 @@ static int cachefiles_ondemand_fd_release(struct inode *inode,
 					  struct file *file)
 {
 	struct cachefiles_object *object = file->private_data;
+	struct cachefiles_cache *cache = object->volume->cache;
+	struct xarray *xa = &cache->reqs;
+	struct cachefiles_req *req;
+	unsigned long index;
 
+	xa_lock(xa);
 	/*
 	 * Uninstall anon_fd to the cachefiles object, so that no further
 	 * associated requests will get enqueued.
 	 */
 	object->fd = -1;
 
+	/*
+	 * Flush all pending READ requests since their completion depends on
+	 * anon_fd.
+	 */
+	xa_for_each(xa, index, req) {
+		if (req->msg.opcode == CACHEFILES_OP_READ) {
+			req->error = -EIO;
+			complete(&req->done);
+		}
+	}
+	xa_unlock(xa);
+
 	cachefiles_put_object(object, cachefiles_obj_put_ondemand_fd);
 	return 0;
 }
@@ -61,11 +78,35 @@ static loff_t cachefiles_ondemand_fd_llseek(struct file *filp, loff_t pos,
 	return vfs_llseek(file, pos, whence);
 }
 
+static long cachefiles_ondemand_fd_ioctl(struct file *filp, unsigned int ioctl,
+					 unsigned long arg)
+{
+	struct cachefiles_object *object = filp->private_data;
+	struct cachefiles_cache *cache = object->volume->cache;
+	struct cachefiles_req *req;
+	unsigned long id;
+
+	if (ioctl != CACHEFILES_IOC_CREAD)
+		return -EINVAL;
+
+	if (!test_bit(CACHEFILES_ONDEMAND_MODE, &cache->flags))
+		return -EOPNOTSUPP;
+
+	id = arg;
+	req = xa_erase(&cache->reqs, id);
+	if (!req)
+		return -EINVAL;
+
+	complete(&req->done);
+	return 0;
+}
+
 static const struct file_operations cachefiles_ondemand_fd_fops = {
 	.owner		= THIS_MODULE,
 	.release	= cachefiles_ondemand_fd_release,
 	.write_iter	= cachefiles_ondemand_fd_write_iter,
 	.llseek		= cachefiles_ondemand_fd_llseek,
+	.unlocked_ioctl	= cachefiles_ondemand_fd_ioctl,
 };
 
 /*
@@ -283,6 +324,13 @@ static int cachefiles_ondemand_send_req(struct cachefiles_object *object,
 			goto out;
 		}
 
+		/* recheck anon_fd for READ request with lock held */
+		if (opcode == CACHEFILES_OP_READ && object->fd == -1) {
+			xas_unlock(&xas);
+			ret = -EIO;
+			goto out;
+		}
+
 		xas.xa_index = 0;
 		xas_find_marked(&xas, UINT_MAX, XA_FREE_MARK);
 		if (xas.xa_node == XAS_RESTART)
@@ -362,6 +410,30 @@ static int init_close_req(struct cachefiles_req *req, void *private)
 	return 0;
 }
 
+struct cachefiles_read_ctx {
+	loff_t off;
+	size_t len;
+};
+
+static int init_read_req(struct cachefiles_req *req, void *private)
+{
+	struct cachefiles_object *object = req->object;
+	struct cachefiles_read *load = (void *)&req->msg.data;
+	struct cachefiles_read_ctx *read_ctx = private;
+	int fd = object->fd;
+
+	/* Stop enqueuing request when daemon closes anon_fd prematurely. */
+	if (fd == -1) {
+		pr_info_once("READ: anonymous fd closed prematurely.\n");
+		return -EIO;
+	}
+
+	load->off = read_ctx->off;
+	load->len = read_ctx->len;
+	load->fd  = fd;
+	return 0;
+}
+
 int cachefiles_ondemand_init_object(struct cachefiles_object *object)
 {
 	struct fscache_cookie *cookie = object->cookie;
@@ -394,3 +466,14 @@ void cachefiles_ondemand_clean_object(struct cachefiles_object *object)
 				     sizeof(struct cachefiles_close),
 				     init_close_req, NULL);
 }
+
+int cachefiles_ondemand_read(struct cachefiles_object *object,
+			     loff_t pos, size_t len)
+{
+	struct cachefiles_read_ctx read_ctx = {pos, len};
+
+	return cachefiles_ondemand_send_req(object,
+					    CACHEFILES_OP_READ,
+					    sizeof(struct cachefiles_read),
+					    init_read_req, &read_ctx);
+}
diff --git a/include/linux/netfs.h b/include/linux/netfs.h
index c7bf1eaf51d5..c1854e92333e 100644
--- a/include/linux/netfs.h
+++ b/include/linux/netfs.h
@@ -222,6 +222,7 @@ enum netfs_read_from_hole {
 	NETFS_READ_HOLE_IGNORE,
 	NETFS_READ_HOLE_CLEAR,
 	NETFS_READ_HOLE_FAIL,
+	NETFS_READ_HOLE_ONDEMAND,
 };
 
 /*
diff --git a/include/uapi/linux/cachefiles.h b/include/uapi/linux/cachefiles.h
index 73397e142ab3..9506b1697e14 100644
--- a/include/uapi/linux/cachefiles.h
+++ b/include/uapi/linux/cachefiles.h
@@ -3,6 +3,7 @@
 #define _LINUX_CACHEFILES_H
 
 #include <linux/types.h>
+#include <linux/ioctl.h>
 
 /*
  * Fscache ensures that the maximum length of cookie key is 255. The volume key
@@ -13,6 +14,7 @@
 enum cachefiles_opcode {
 	CACHEFILES_OP_OPEN,
 	CACHEFILES_OP_CLOSE,
+	CACHEFILES_OP_READ,
 };
 
 /*
@@ -51,4 +53,20 @@ struct cachefiles_close {
 	__u32 fd;
 };
 
+/*
+ * @off identifies the starting offset of the requested file range.
+ * @len identifies the length of the requested file range.
+ */
+struct cachefiles_read {
+	__u64 off;
+	__u64 len;
+	__u32 fd;
+};
+
+/*
+ * Reply for READ request (Completion for READ)
+ * arg for CACHEFILES_IOC_CREAD ioctl is the @id field of READ request.
+ */
+#define CACHEFILES_IOC_CREAD	_IOW(0x98, 1, int)
+
 #endif
-- 
2.27.0



* [PATCH v8 06/20] cachefiles: enable on-demand read mode
  2022-04-06  7:55 [PATCH v8 00/20] fscache,erofs: fscache-based on-demand read semantics Jeffle Xu
                   ` (4 preceding siblings ...)
  2022-04-06  7:55 ` [PATCH v8 05/20] cachefiles: implement on-demand read Jeffle Xu
@ 2022-04-06  7:55 ` Jeffle Xu
  2022-04-06  7:55 ` [PATCH v8 07/20] cachefiles: document " Jeffle Xu
                   ` (21 subsequent siblings)
  27 siblings, 0 replies; 56+ messages in thread
From: Jeffle Xu @ 2022-04-06  7:55 UTC (permalink / raw)
  To: dhowells, linux-cachefs, xiang, chao, linux-erofs
  Cc: torvalds, gregkh, willy, linux-fsdevel, joseph.qi, bo.liu,
	tao.peng, gerry, eguan, linux-kernel, luodaowen.backend,
	tianzichen, fannaihao

Enable on-demand read mode by adding an optional parameter to the "bind"
command.

On-demand mode will be turned on when this parameter is "ondemand", i.e.
"bind ondemand". Otherwise cachefiles will work in the original mode.

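For reference, a minimal sketch of how a user daemon might switch the cache
into on-demand mode is shown below; the cache directory is only an example
and error handling is trimmed. Each command is written to /dev/cachefiles
with a separate write() call.

	#include <fcntl.h>
	#include <string.h>
	#include <unistd.h>

	/* Sketch only (not part of this patch): bind a cache in on-demand mode. */
	static int cachefiles_bind_ondemand(void)
	{
		static const char *cmds[] = {
			"dir /var/cache/fscache",	/* cache root, example path */
			"bind ondemand",		/* enable on-demand read mode */
		};
		int fd = open("/dev/cachefiles", O_RDWR);
		unsigned int i;

		if (fd < 0)
			return -1;

		for (i = 0; i < sizeof(cmds) / sizeof(cmds[0]); i++) {
			if (write(fd, cmds[i], strlen(cmds[i])) < 0) {
				close(fd);
				return -1;
			}
		}

		/* Keep the fd open: closing the device unbinds the cache. */
		return fd;
	}
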
Signed-off-by: Jeffle Xu <jefflexu@linux.alibaba.com>
---
 fs/cachefiles/daemon.c | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/fs/cachefiles/daemon.c b/fs/cachefiles/daemon.c
index d155a6da90d3..bd902a4c4cd8 100644
--- a/fs/cachefiles/daemon.c
+++ b/fs/cachefiles/daemon.c
@@ -738,11 +738,6 @@ static int cachefiles_daemon_bind(struct cachefiles_cache *cache, char *args)
 	    cache->brun_percent  >= 100)
 		return -ERANGE;
 
-	if (*args) {
-		pr_err("'bind' command doesn't take an argument\n");
-		return -EINVAL;
-	}
-
 	if (!cache->rootdirname) {
 		pr_err("No cache directory specified\n");
 		return -EINVAL;
@@ -754,6 +749,14 @@ static int cachefiles_daemon_bind(struct cachefiles_cache *cache, char *args)
 		return -EBUSY;
 	}
 
+	if (IS_ENABLED(CONFIG_CACHEFILES_ONDEMAND) &&
+	    !strcmp(args, "ondemand")) {
+		set_bit(CACHEFILES_ONDEMAND_MODE, &cache->flags);
+	} else if (*args) {
+		pr_err("'bind' command doesn't take an argument\n");
+		return -EINVAL;
+	}
+
 	/* Make sure we have copies of the tag string */
 	if (!cache->tag) {
 		/*
-- 
2.27.0


^ permalink raw reply related	[flat|nested] 56+ messages in thread

* [PATCH v8 07/20] cachefiles: document on-demand read mode
  2022-04-06  7:55 [PATCH v8 00/20] fscache,erofs: fscache-based on-demand read semantics Jeffle Xu
                   ` (5 preceding siblings ...)
  2022-04-06  7:55 ` [PATCH v8 06/20] cachefiles: enable on-demand read mode Jeffle Xu
@ 2022-04-06  7:55 ` Jeffle Xu
  2022-04-06  7:56 ` [PATCH v8 08/20] erofs: make erofs_map_blocks() generally available Jeffle Xu
                   ` (20 subsequent siblings)
  27 siblings, 0 replies; 56+ messages in thread
From: Jeffle Xu @ 2022-04-06  7:55 UTC (permalink / raw)
  To: dhowells, linux-cachefs, xiang, chao, linux-erofs
  Cc: torvalds, gregkh, willy, linux-fsdevel, joseph.qi, bo.liu,
	tao.peng, gerry, eguan, linux-kernel, luodaowen.backend,
	tianzichen, fannaihao

Document the new user interface introduced by on-demand read mode.
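
To make the documented flow concrete, below is a minimal sketch of a user
daemon handling a single request read from the devnode. blob_size() and
fetch_blob_range() are hypothetical helpers standing in for the daemon's
backend I/O, and the 1024-byte buffer size is an arbitrary assumption.

	#include <poll.h>
	#include <stdio.h>
	#include <string.h>
	#include <sys/ioctl.h>
	#include <unistd.h>
	#include <linux/cachefiles.h>	/* uapi header added by this series */

	extern long long blob_size(struct cachefiles_open *load);
	extern int fetch_blob_range(int fd, unsigned long long off,
				    unsigned long long len);

	static int handle_one_request(int devfd)
	{
		char buf[1024] __attribute__((__aligned__(8)));
		struct cachefiles_msg *msg = (struct cachefiles_msg *)buf;
		struct pollfd pfd = { .fd = devfd, .events = POLLIN };
		char reply[64];

		/* Wait for a pending request, then fetch exactly one of them. */
		if (poll(&pfd, 1, -1) <= 0 || read(devfd, buf, sizeof(buf)) <= 0)
			return -1;

		switch (msg->opcode) {
		case CACHEFILES_OP_OPEN: {
			struct cachefiles_open *load = (void *)msg->data;

			/* Record (volume_key, cookie_key, fd), then complete the open. */
			snprintf(reply, sizeof(reply), "copen %u,%lld",
				 msg->id, blob_size(load));
			return write(devfd, reply, strlen(reply)) < 0 ? -1 : 0;
		}
		case CACHEFILES_OP_CLOSE: {
			struct cachefiles_close *load = (void *)msg->data;

			return close(load->fd);
		}
		case CACHEFILES_OP_READ: {
			struct cachefiles_read *load = (void *)msg->data;

			/* Fill [off, off + len) of the cache file, then complete the read. */
			if (fetch_blob_range(load->fd, load->off, load->len))
				return -1;
			return ioctl(load->fd, CACHEFILES_IOC_CREAD, msg->id);
		}
		}
		return -1;
	}

In a real daemon this would run in a loop, likely dispatching requests to
worker threads.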

Signed-off-by: Jeffle Xu <jefflexu@linux.alibaba.com>
---
 .../filesystems/caching/cachefiles.rst        | 165 ++++++++++++++++++
 1 file changed, 165 insertions(+)

diff --git a/Documentation/filesystems/caching/cachefiles.rst b/Documentation/filesystems/caching/cachefiles.rst
index 8bf396b76359..386801135027 100644
--- a/Documentation/filesystems/caching/cachefiles.rst
+++ b/Documentation/filesystems/caching/cachefiles.rst
@@ -28,6 +28,8 @@ Cache on Already Mounted Filesystem
 
  (*) Debugging.
 
+ (*) On-demand Read.
+
 
 
 Overview
@@ -482,3 +484,166 @@ the control file.  For example::
 	echo $((1|4|8)) >/sys/module/cachefiles/parameters/debug
 
 will turn on all function entry debugging.
+
+
+On-demand Read
+==============
+
+When working in original mode, cachefiles mainly serves as a local cache for
+a remote networking filesystem, while in on-demand read mode, cachefiles can
+boost scenarios where on-demand read semantics are needed, e.g. container
+image distribution.
+
+The essential difference between these two modes is that, in original mode,
+on cache miss the netfs itself fetches data from remote and then writes the
+fetched data into the cache file, while in on-demand read mode, a user
+daemon is responsible for fetching data and then writing it to the cache
+file.
+
+``CONFIG_CACHEFILES_ONDEMAND`` shall be enabled to support on-demand read mode.
+
+
+Protocol Communication
+----------------------
+
+The on-demand read mode relies on a simple protocol used for communication
+between the kernel and the user daemon. The model is like::
+
+	kernel --[request]--> user daemon --[reply]--> kernel
+
+The cachefiles kernel module will send requests to the user daemon when
+needed. The user daemon needs to poll on the devnode ('/dev/cachefiles') to
+check if there is a pending request to be processed. A POLLIN event will be
+returned when there is a pending request.
+
+The user daemon then needs to read the devnode to fetch one request and
+process it accordingly. It is worth noting that each read only gets one
+request. When finished processing the request, the user daemon needs to
+write the reply to the devnode.
+
+Each request is started with a message header like::
+
+	struct cachefiles_msg {
+		__u32 id;
+		__u32 opcode;
+		__u32 len;
+		__u8  data[];
+	};
+
+	* ``id`` is a unique ID identifying this request among all pending
+	  requests.
+
+	* ``opcode`` identifies the type of this request.
+
+	* ``data`` identifies the payload of this request.
+
+	* ``len`` identifies the whole length of this request, including the
+	  header and the following type-specific payload.
+
+
+Turn on On-demand Mode
+----------------------
+
+An optional parameter is added to "bind" command::
+
+	bind [ondemand]
+
+When the "bind" command is given no argument, it defaults to the original
+mode. When it is given the "ondemand" argument, i.e. "bind ondemand",
+on-demand read mode will be enabled.
+
+
+OPEN Request
+------------
+
+When the netfs opens a cache file for the first time, a request with the
+CACHEFILES_OP_OPEN opcode, a.k.a. an OPEN request, will be sent to the user
+daemon. The payload format is like::
+
+	struct cachefiles_open {
+		__u32 volume_key_size;
+		__u32 cookie_key_size;
+		__u32 fd;
+		__u32 flags;
+		__u8  data[];
+	};
+
+	* ``data`` contains volume_key and cookie_key in sequence.
+
+	* ``volume_key_size`` identifies the size of the volume key of the
+	  cache file, in bytes. volume_key is a string terminated with '\0'.
+
+	* ``cookie_key_size`` identifies the size of the cookie key of the
+	  cache file, in bytes. cookie_key is in a netfs-specific binary
+	  format.
+
+	* ``fd`` identifies the anonymous fd of the cache file, with which user
+	  daemon can perform write/llseek file operations on the cache file.
+
+
+The OPEN request contains the (volume_key, cookie_key, anon_fd) triple for
+the corresponding cache file. With this triple, the user daemon can fetch
+and write data into the cache file in the background, even before the kernel
+has triggered a cache miss. The user daemon can identify the requested cache
+file by the given (volume_key, cookie_key), and write the fetched data into
+the cache file through the given anon_fd.
+
+After recording the (volume_key, cookie_key, anon_fd) triple, the user
+daemon shall reply with the "copen" (complete open) command::
+
+	copen <id>,<cache_size>
+
+	* ``id`` is exactly the id field of the previous OPEN request.
+
+	* When >= 0, ``cache_size`` identifies the size of the cache file;
+	  when < 0, ``cache_size`` identifies the error code encountered by the
+	  user daemon.
+
+
+CLOSE Request
+-------------
+When a cookie is withdrawn, a request with the CACHEFILES_OP_CLOSE opcode,
+a.k.a. a CLOSE request, will be sent to the user daemon. It notifies the
+user daemon to close the attached anon_fd. The payload format is like::
+
+	struct cachefiles_close {
+		__u32 fd;
+	};
+
+	* ``fd`` identifies the anon_fd to be closed, which is exactly the same
+	  as that in the OPEN request.
+
+
+READ Request
+------------
+
+When on-demand read mode is turned on and a cache miss is encountered, the
+kernel will send a request with the CACHEFILES_OP_READ opcode, a.k.a. a READ
+request, to the user daemon. It notifies the user daemon to fetch data in
+the requested file range. The payload format is like::
+
+	struct cachefiles_read {
+		__u64 off;
+		__u64 len;
+		__u32 fd;
+	};
+
+	* ``off`` identifies the starting offset of the requested file range.
+
+	* ``len`` identifies the length of the requested file range.
+
+	* ``fd`` identifies the anonymous fd of the requested cache file. It is
+	  guaranteed to be the same as the fd field in the previous OPEN
+	  request.
+
+On receiving a READ request, the user daemon needs to fetch the data of the
+requested file range, and then write the fetched data into the cache file
+through the given anonymous fd.
+
+When finished processing the READ request, the user daemon needs to reply
+with the CACHEFILES_IOC_CREAD ioctl on the corresponding anon_fd::
+
+	ioctl(fd, CACHEFILES_IOC_CREAD, id);
+
+	* ``fd`` is exactly the fd field of the previous READ request.
+
+	* ``id`` is exactly the id field of the previous READ request.
-- 
2.27.0


^ permalink raw reply related	[flat|nested] 56+ messages in thread

* [PATCH v8 08/20] erofs: make erofs_map_blocks() generally available
  2022-04-06  7:55 [PATCH v8 00/20] fscache,erofs: fscache-based on-demand read semantics Jeffle Xu
                   ` (6 preceding siblings ...)
  2022-04-06  7:55 ` [PATCH v8 07/20] cachefiles: document " Jeffle Xu
@ 2022-04-06  7:56 ` Jeffle Xu
  2022-04-07  2:44   ` Gao Xiang
  2022-04-06  7:56 ` [PATCH v8 09/20] erofs: add mode checking helper Jeffle Xu
                   ` (19 subsequent siblings)
  27 siblings, 1 reply; 56+ messages in thread
From: Jeffle Xu @ 2022-04-06  7:56 UTC (permalink / raw)
  To: dhowells, linux-cachefs, xiang, chao, linux-erofs
  Cc: torvalds, gregkh, willy, linux-fsdevel, joseph.qi, bo.liu,
	tao.peng, gerry, eguan, linux-kernel, luodaowen.backend,
	tianzichen, fannaihao

... so that it can be used in the following introduced fscache mode.

Signed-off-by: Jeffle Xu <jefflexu@linux.alibaba.com>
---
 fs/erofs/data.c     | 4 ++--
 fs/erofs/internal.h | 2 ++
 2 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/fs/erofs/data.c b/fs/erofs/data.c
index 780db1e5f4b7..bc22642358ec 100644
--- a/fs/erofs/data.c
+++ b/fs/erofs/data.c
@@ -110,8 +110,8 @@ static int erofs_map_blocks_flatmode(struct inode *inode,
 	return 0;
 }
 
-static int erofs_map_blocks(struct inode *inode,
-			    struct erofs_map_blocks *map, int flags)
+int erofs_map_blocks(struct inode *inode,
+		     struct erofs_map_blocks *map, int flags)
 {
 	struct super_block *sb = inode->i_sb;
 	struct erofs_inode *vi = EROFS_I(inode);
diff --git a/fs/erofs/internal.h b/fs/erofs/internal.h
index 5298c4ee277d..fe9564e5091e 100644
--- a/fs/erofs/internal.h
+++ b/fs/erofs/internal.h
@@ -486,6 +486,8 @@ void *erofs_read_metabuf(struct erofs_buf *buf, struct super_block *sb,
 int erofs_map_dev(struct super_block *sb, struct erofs_map_dev *dev);
 int erofs_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
 		 u64 start, u64 len);
+int erofs_map_blocks(struct inode *inode,
+		     struct erofs_map_blocks *map, int flags);
 
 /* inode.c */
 static inline unsigned long erofs_inode_hash(erofs_nid_t nid)
-- 
2.27.0


^ permalink raw reply related	[flat|nested] 56+ messages in thread

* [PATCH v8 09/20] erofs: add mode checking helper
  2022-04-06  7:55 [PATCH v8 00/20] fscache,erofs: fscache-based on-demand read semantics Jeffle Xu
                   ` (7 preceding siblings ...)
  2022-04-06  7:56 ` [PATCH v8 08/20] erofs: make erofs_map_blocks() generally available Jeffle Xu
@ 2022-04-06  7:56 ` Jeffle Xu
  2022-04-07  2:46   ` Gao Xiang
  2022-04-06  7:56 ` [PATCH v8 10/20] erofs: register fscache volume Jeffle Xu
                   ` (18 subsequent siblings)
  27 siblings, 1 reply; 56+ messages in thread
From: Jeffle Xu @ 2022-04-06  7:56 UTC (permalink / raw)
  To: dhowells, linux-cachefs, xiang, chao, linux-erofs
  Cc: torvalds, gregkh, willy, linux-fsdevel, joseph.qi, bo.liu,
	tao.peng, gerry, eguan, linux-kernel, luodaowen.backend,
	tianzichen, fannaihao

Until now, erofs has been a block-device-based filesystem only.

A new fscache-based mode is going to be introduced for erofs to support
scenarios where on-demand read semantics is needed, e.g. container
image distribution. In this case, erofs could be mounted from data blobs
through fscache.

Add a helper checking which mode erofs works in.

Signed-off-by: Jeffle Xu <jefflexu@linux.alibaba.com>
---
 fs/erofs/internal.h | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/fs/erofs/internal.h b/fs/erofs/internal.h
index fe9564e5091e..05a97533b1e9 100644
--- a/fs/erofs/internal.h
+++ b/fs/erofs/internal.h
@@ -161,6 +161,11 @@ struct erofs_sb_info {
 #define set_opt(opt, option)	((opt)->mount_opt |= EROFS_MOUNT_##option)
 #define test_opt(opt, option)	((opt)->mount_opt & EROFS_MOUNT_##option)
 
+static inline bool erofs_is_fscache_mode(struct super_block *sb)
+{
+	return IS_ENABLED(CONFIG_EROFS_FS_ONDEMAND) && !sb->s_bdev;
+}
+
 enum {
 	EROFS_ZIP_CACHE_DISABLED,
 	EROFS_ZIP_CACHE_READAHEAD,
-- 
2.27.0


^ permalink raw reply related	[flat|nested] 56+ messages in thread

* [PATCH v8 10/20] erofs: register fscache volume
  2022-04-06  7:55 [PATCH v8 00/20] fscache,erofs: fscache-based on-demand read semantics Jeffle Xu
                   ` (8 preceding siblings ...)
  2022-04-06  7:56 ` [PATCH v8 09/20] erofs: add mode checking helper Jeffle Xu
@ 2022-04-06  7:56 ` Jeffle Xu
  2022-04-07  2:50   ` Gao Xiang
  2022-04-06  7:56 ` [PATCH v8 11/20] erofs: add fscache context helper functions Jeffle Xu
                   ` (17 subsequent siblings)
  27 siblings, 1 reply; 56+ messages in thread
From: Jeffle Xu @ 2022-04-06  7:56 UTC (permalink / raw)
  To: dhowells, linux-cachefs, xiang, chao, linux-erofs
  Cc: torvalds, gregkh, willy, linux-fsdevel, joseph.qi, bo.liu,
	tao.peng, gerry, eguan, linux-kernel, luodaowen.backend,
	tianzichen, fannaihao

A new fscache-based mode is going to be introduced for erofs, in which
case on-demand read semantics is implemented through fscache.

As the first step, register an fscache volume for each erofs filesystem.
That means data blobs can not be shared among erofs filesystems. In a
following iteration, we are going to introduce domain semantics, in
which case several erofs filesystems can belong to one domain, and data
blobs can be shared among the erofs filesystems of one domain.

Signed-off-by: Jeffle Xu <jefflexu@linux.alibaba.com>
---
 fs/erofs/Kconfig    | 10 ++++++++++
 fs/erofs/Makefile   |  1 +
 fs/erofs/fscache.c  | 37 +++++++++++++++++++++++++++++++++++++
 fs/erofs/internal.h | 13 +++++++++++++
 fs/erofs/super.c    |  7 +++++++
 5 files changed, 68 insertions(+)
 create mode 100644 fs/erofs/fscache.c

diff --git a/fs/erofs/Kconfig b/fs/erofs/Kconfig
index f57255ab88ed..3d05265e3e8e 100644
--- a/fs/erofs/Kconfig
+++ b/fs/erofs/Kconfig
@@ -98,3 +98,13 @@ config EROFS_FS_ZIP_LZMA
 	  systems will be readable without selecting this option.
 
 	  If unsure, say N.
+
+config EROFS_FS_ONDEMAND
+	bool "EROFS fscache-based ondemand-read"
+	depends on CACHEFILES_ONDEMAND && (EROFS_FS=m && FSCACHE || EROFS_FS=y && FSCACHE=y)
+	default n
+	help
+	  EROFS is mounted from data blobs and on-demand read semantics is
+	  implemented through fscache.
+
+	  If unsure, say N.
diff --git a/fs/erofs/Makefile b/fs/erofs/Makefile
index 8a3317e38e5a..99bbc597a3e9 100644
--- a/fs/erofs/Makefile
+++ b/fs/erofs/Makefile
@@ -5,3 +5,4 @@ erofs-objs := super.o inode.o data.o namei.o dir.o utils.o pcpubuf.o sysfs.o
 erofs-$(CONFIG_EROFS_FS_XATTR) += xattr.o
 erofs-$(CONFIG_EROFS_FS_ZIP) += decompressor.o zmap.o zdata.o
 erofs-$(CONFIG_EROFS_FS_ZIP_LZMA) += decompressor_lzma.o
+erofs-$(CONFIG_EROFS_FS_ONDEMAND) += fscache.o
diff --git a/fs/erofs/fscache.c b/fs/erofs/fscache.c
new file mode 100644
index 000000000000..7a6d0239ebb1
--- /dev/null
+++ b/fs/erofs/fscache.c
@@ -0,0 +1,37 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * Copyright (C) 2022, Alibaba Cloud
+ */
+#include <linux/fscache.h>
+#include "internal.h"
+
+int erofs_fscache_register_fs(struct super_block *sb)
+{
+	struct erofs_sb_info *sbi = EROFS_SB(sb);
+	struct fscache_volume *volume;
+	char *name;
+	int ret = 0;
+
+	name = kasprintf(GFP_KERNEL, "erofs,%s", sbi->opt.fsid);
+	if (!name)
+		return -ENOMEM;
+
+	volume = fscache_acquire_volume(name, NULL, NULL, 0);
+	if (IS_ERR_OR_NULL(volume)) {
+		erofs_err(sb, "failed to register volume for %s", name);
+		ret = volume ? PTR_ERR(volume) : -EOPNOTSUPP;
+		volume = NULL;
+	}
+
+	sbi->volume = volume;
+	kfree(name);
+	return ret;
+}
+
+void erofs_fscache_unregister_fs(struct super_block *sb)
+{
+	struct erofs_sb_info *sbi = EROFS_SB(sb);
+
+	fscache_relinquish_volume(sbi->volume, NULL, false);
+	sbi->volume = NULL;
+}
diff --git a/fs/erofs/internal.h b/fs/erofs/internal.h
index 05a97533b1e9..952a2f483f94 100644
--- a/fs/erofs/internal.h
+++ b/fs/erofs/internal.h
@@ -74,6 +74,7 @@ struct erofs_mount_opts {
 	unsigned int max_sync_decompress_pages;
 #endif
 	unsigned int mount_opt;
+	char *fsid;
 };
 
 struct erofs_dev_context {
@@ -146,6 +147,9 @@ struct erofs_sb_info {
 	/* sysfs support */
 	struct kobject s_kobj;		/* /sys/fs/erofs/<devname> */
 	struct completion s_kobj_unregister;
+
+	/* fscache support */
+	struct fscache_volume *volume;
 };
 
 #define EROFS_SB(sb) ((struct erofs_sb_info *)(sb)->s_fs_info)
@@ -618,6 +622,15 @@ static inline int z_erofs_load_lzma_config(struct super_block *sb,
 }
 #endif	/* !CONFIG_EROFS_FS_ZIP */
 
+/* fscache.c */
+#ifdef CONFIG_EROFS_FS_ONDEMAND
+int erofs_fscache_register_fs(struct super_block *sb);
+void erofs_fscache_unregister_fs(struct super_block *sb);
+#else
+static inline int erofs_fscache_register_fs(struct super_block *sb) { return 0; }
+static inline void erofs_fscache_unregister_fs(struct super_block *sb) {}
+#endif
+
 #define EFSCORRUPTED    EUCLEAN         /* Filesystem is corrupted */
 
 #endif	/* __EROFS_INTERNAL_H */
diff --git a/fs/erofs/super.c b/fs/erofs/super.c
index 0c4b41130c2f..6590ed1b7d3b 100644
--- a/fs/erofs/super.c
+++ b/fs/erofs/super.c
@@ -601,6 +601,12 @@ static int erofs_fc_fill_super(struct super_block *sb, struct fs_context *fc)
 	sbi->devs = ctx->devs;
 	ctx->devs = NULL;
 
+	if (erofs_is_fscache_mode(sb)) {
+		err = erofs_fscache_register_fs(sb);
+		if (err)
+			return err;
+	}
+
 	err = erofs_read_superblock(sb);
 	if (err)
 		return err;
@@ -757,6 +763,7 @@ static void erofs_kill_sb(struct super_block *sb)
 
 	erofs_free_dev_context(sbi->devs);
 	fs_put_dax(sbi->dax_dev);
+	erofs_fscache_unregister_fs(sb);
 	kfree(sbi);
 	sb->s_fs_info = NULL;
 }
-- 
2.27.0


^ permalink raw reply related	[flat|nested] 56+ messages in thread

* [PATCH v8 11/20] erofs: add fscache context helper functions
  2022-04-06  7:55 [PATCH v8 00/20] fscache,erofs: fscache-based on-demand read semantics Jeffle Xu
                   ` (9 preceding siblings ...)
  2022-04-06  7:56 ` [PATCH v8 10/20] erofs: register fscache volume Jeffle Xu
@ 2022-04-06  7:56 ` Jeffle Xu
  2022-04-07  3:25   ` Gao Xiang
  2022-04-06  7:56 ` [PATCH v8 12/20] erofs: add anonymous inode managing page cache for data blob Jeffle Xu
                   ` (16 subsequent siblings)
  27 siblings, 1 reply; 56+ messages in thread
From: Jeffle Xu @ 2022-04-06  7:56 UTC (permalink / raw)
  To: dhowells, linux-cachefs, xiang, chao, linux-erofs
  Cc: torvalds, gregkh, willy, linux-fsdevel, joseph.qi, bo.liu,
	tao.peng, gerry, eguan, linux-kernel, luodaowen.backend,
	tianzichen, fannaihao

Introduce a context structure for managing data blobs, and helper
functions for initializing and cleaning up this context structure.

Signed-off-by: Jeffle Xu <jefflexu@linux.alibaba.com>
---
 fs/erofs/fscache.c  | 46 +++++++++++++++++++++++++++++++++++++++++++++
 fs/erofs/internal.h | 19 +++++++++++++++++++
 2 files changed, 65 insertions(+)

diff --git a/fs/erofs/fscache.c b/fs/erofs/fscache.c
index 7a6d0239ebb1..67a3c4935245 100644
--- a/fs/erofs/fscache.c
+++ b/fs/erofs/fscache.c
@@ -5,6 +5,52 @@
 #include <linux/fscache.h>
 #include "internal.h"
 
+/*
+ * Create an fscache context for data blob.
+ * Return: 0 on success and allocated fscache context is assigned to @fscache,
+ *	   negative error number on failure.
+ */
+int erofs_fscache_register_cookie(struct super_block *sb,
+				  struct erofs_fscache **fscache, char *name)
+{
+	struct fscache_volume *volume = EROFS_SB(sb)->volume;
+	struct erofs_fscache *ctx;
+	struct fscache_cookie *cookie;
+
+	ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
+	if (!ctx)
+		return -ENOMEM;
+
+	cookie = fscache_acquire_cookie(volume, FSCACHE_ADV_WANT_CACHE_SIZE,
+					name, strlen(name), NULL, 0, 0);
+	if (!cookie) {
+		erofs_err(sb, "failed to get cookie for %s", name);
+		kfree(name);
+		return -EINVAL;
+	}
+
+	fscache_use_cookie(cookie, false);
+	ctx->cookie = cookie;
+
+	*fscache = ctx;
+	return 0;
+}
+
+void erofs_fscache_unregister_cookie(struct erofs_fscache **fscache)
+{
+	struct erofs_fscache *ctx = *fscache;
+
+	if (!ctx)
+		return;
+
+	fscache_unuse_cookie(ctx->cookie, NULL, NULL);
+	fscache_relinquish_cookie(ctx->cookie, false);
+	ctx->cookie = NULL;
+
+	kfree(ctx);
+	*fscache = NULL;
+}
+
 int erofs_fscache_register_fs(struct super_block *sb)
 {
 	struct erofs_sb_info *sbi = EROFS_SB(sb);
diff --git a/fs/erofs/internal.h b/fs/erofs/internal.h
index 952a2f483f94..c6a3351a4d7d 100644
--- a/fs/erofs/internal.h
+++ b/fs/erofs/internal.h
@@ -97,6 +97,10 @@ struct erofs_sb_lz4_info {
 	u16 max_pclusterblks;
 };
 
+struct erofs_fscache {
+	struct fscache_cookie *cookie;
+};
+
 struct erofs_sb_info {
 	struct erofs_mount_opts opt;	/* options */
 #ifdef CONFIG_EROFS_FS_ZIP
@@ -626,9 +630,24 @@ static inline int z_erofs_load_lzma_config(struct super_block *sb,
 #ifdef CONFIG_EROFS_FS_ONDEMAND
 int erofs_fscache_register_fs(struct super_block *sb);
 void erofs_fscache_unregister_fs(struct super_block *sb);
+
+int erofs_fscache_register_cookie(struct super_block *sb,
+				  struct erofs_fscache **fscache, char *name);
+void erofs_fscache_unregister_cookie(struct erofs_fscache **fscache);
 #else
 static inline int erofs_fscache_register_fs(struct super_block *sb) { return 0; }
 static inline void erofs_fscache_unregister_fs(struct super_block *sb) {}
+
+static inline int erofs_fscache_register_cookie(struct super_block *sb,
+						struct erofs_fscache **fscache,
+						char *name)
+{
+	return -EOPNOTSUPP;
+}
+
+static inline void erofs_fscache_unregister_cookie(struct erofs_fscache **fscache)
+{
+}
 #endif
 
 #define EFSCORRUPTED    EUCLEAN         /* Filesystem is corrupted */
-- 
2.27.0


^ permalink raw reply related	[flat|nested] 56+ messages in thread

* [PATCH v8 12/20] erofs: add anonymous inode managing page cache for data blob
  2022-04-06  7:55 [PATCH v8 00/20] fscache,erofs: fscache-based on-demand read semantics Jeffle Xu
                   ` (10 preceding siblings ...)
  2022-04-06  7:56 ` [PATCH v8 11/20] erofs: add fscache context helper functions Jeffle Xu
@ 2022-04-06  7:56 ` Jeffle Xu
  2022-04-07  5:31   ` Gao Xiang
  2022-04-06  7:56 ` [PATCH v8 13/20] erofs: add erofs_fscache_read_folios() helper Jeffle Xu
                   ` (15 subsequent siblings)
  27 siblings, 1 reply; 56+ messages in thread
From: Jeffle Xu @ 2022-04-06  7:56 UTC (permalink / raw)
  To: dhowells, linux-cachefs, xiang, chao, linux-erofs
  Cc: torvalds, gregkh, willy, linux-fsdevel, joseph.qi, bo.liu,
	tao.peng, gerry, eguan, linux-kernel, luodaowen.backend,
	tianzichen, fannaihao

Introduce an anonymous inode for managing the page cache of a data blob.
erofs can then read directly from the address space of the anonymous
inode on cache hit.

Signed-off-by: Jeffle Xu <jefflexu@linux.alibaba.com>
---
 fs/erofs/fscache.c  | 39 ++++++++++++++++++++++++++++++++++++---
 fs/erofs/internal.h |  6 ++++--
 2 files changed, 40 insertions(+), 5 deletions(-)

diff --git a/fs/erofs/fscache.c b/fs/erofs/fscache.c
index 67a3c4935245..1c88614203d2 100644
--- a/fs/erofs/fscache.c
+++ b/fs/erofs/fscache.c
@@ -5,17 +5,22 @@
 #include <linux/fscache.h>
 #include "internal.h"
 
+static const struct address_space_operations erofs_fscache_meta_aops = {
+};
+
 /*
  * Create an fscache context for data blob.
  * Return: 0 on success and allocated fscache context is assigned to @fscache,
  *	   negative error number on failure.
  */
 int erofs_fscache_register_cookie(struct super_block *sb,
-				  struct erofs_fscache **fscache, char *name)
+				  struct erofs_fscache **fscache,
+				  char *name, bool need_inode)
 {
 	struct fscache_volume *volume = EROFS_SB(sb)->volume;
 	struct erofs_fscache *ctx;
 	struct fscache_cookie *cookie;
+	int ret;
 
 	ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
 	if (!ctx)
@@ -25,15 +30,40 @@ int erofs_fscache_register_cookie(struct super_block *sb,
 					name, strlen(name), NULL, 0, 0);
 	if (!cookie) {
 		erofs_err(sb, "failed to get cookie for %s", name);
-		kfree(name);
-		return -EINVAL;
+		ret = -EINVAL;
+		goto err;
 	}
 
 	fscache_use_cookie(cookie, false);
 	ctx->cookie = cookie;
 
+	if (need_inode) {
+		struct inode *const inode = new_inode(sb);
+
+		if (!inode) {
+			erofs_err(sb, "failed to get anon inode for %s", name);
+			ret = -ENOMEM;
+			goto err_cookie;
+		}
+
+		set_nlink(inode, 1);
+		inode->i_size = OFFSET_MAX;
+		inode->i_mapping->a_ops = &erofs_fscache_meta_aops;
+		mapping_set_gfp_mask(inode->i_mapping, GFP_NOFS);
+
+		ctx->inode = inode;
+	}
+
 	*fscache = ctx;
 	return 0;
+
+err_cookie:
+	fscache_unuse_cookie(ctx->cookie, NULL, NULL);
+	fscache_relinquish_cookie(ctx->cookie, false);
+	ctx->cookie = NULL;
+err:
+	kfree(ctx);
+	return ret;
 }
 
 void erofs_fscache_unregister_cookie(struct erofs_fscache **fscache)
@@ -47,6 +77,9 @@ void erofs_fscache_unregister_cookie(struct erofs_fscache **fscache)
 	fscache_relinquish_cookie(ctx->cookie, false);
 	ctx->cookie = NULL;
 
+	iput(ctx->inode);
+	ctx->inode = NULL;
+
 	kfree(ctx);
 	*fscache = NULL;
 }
diff --git a/fs/erofs/internal.h b/fs/erofs/internal.h
index c6a3351a4d7d..3a4a344cfed3 100644
--- a/fs/erofs/internal.h
+++ b/fs/erofs/internal.h
@@ -99,6 +99,7 @@ struct erofs_sb_lz4_info {
 
 struct erofs_fscache {
 	struct fscache_cookie *cookie;
+	struct inode *inode;
 };
 
 struct erofs_sb_info {
@@ -632,7 +633,8 @@ int erofs_fscache_register_fs(struct super_block *sb);
 void erofs_fscache_unregister_fs(struct super_block *sb);
 
 int erofs_fscache_register_cookie(struct super_block *sb,
-				  struct erofs_fscache **fscache, char *name);
+				  struct erofs_fscache **fscache,
+				  char *name, bool need_inode);
 void erofs_fscache_unregister_cookie(struct erofs_fscache **fscache);
 #else
 static inline int erofs_fscache_register_fs(struct super_block *sb) { return 0; }
@@ -640,7 +642,7 @@ static inline void erofs_fscache_unregister_fs(struct super_block *sb) {}
 
 static inline int erofs_fscache_register_cookie(struct super_block *sb,
 						struct erofs_fscache **fscache,
-						char *name)
+						char *name, bool need_inode)
 {
 	return -EOPNOTSUPP;
 }
-- 
2.27.0


^ permalink raw reply related	[flat|nested] 56+ messages in thread

* [PATCH v8 13/20] erofs: add erofs_fscache_read_folios() helper
  2022-04-06  7:55 [PATCH v8 00/20] fscache,erofs: fscache-based on-demand read semantics Jeffle Xu
                   ` (11 preceding siblings ...)
  2022-04-06  7:56 ` [PATCH v8 12/20] erofs: add anonymous inode managing page cache for data blob Jeffle Xu
@ 2022-04-06  7:56 ` Jeffle Xu
  2022-04-07 14:05   ` Gao Xiang
  2022-04-06  7:56 ` [PATCH v8 14/20] erofs: register fscache context for primary data blob Jeffle Xu
                   ` (14 subsequent siblings)
  27 siblings, 1 reply; 56+ messages in thread
From: Jeffle Xu @ 2022-04-06  7:56 UTC (permalink / raw)
  To: dhowells, linux-cachefs, xiang, chao, linux-erofs
  Cc: torvalds, gregkh, willy, linux-fsdevel, joseph.qi, bo.liu,
	tao.peng, gerry, eguan, linux-kernel, luodaowen.backend,
	tianzichen, fannaihao

Add the erofs_fscache_read_folios() helper for reading from fscache. It
supports on-demand read semantics. That is, it will make the backend
prepare the data on cache miss. Once the data is ready, it will
reinitiate a read from the cache.

This helper can then be used to implement .readpage()/.readahead() with
on-demand read semantics.

Signed-off-by: Jeffle Xu <jefflexu@linux.alibaba.com>
---
 fs/erofs/fscache.c | 29 +++++++++++++++++++++++++++++
 1 file changed, 29 insertions(+)

diff --git a/fs/erofs/fscache.c b/fs/erofs/fscache.c
index 1c88614203d2..d38a6efc8e50 100644
--- a/fs/erofs/fscache.c
+++ b/fs/erofs/fscache.c
@@ -5,6 +5,35 @@
 #include <linux/fscache.h>
 #include "internal.h"
 
+/*
+ * Read data from fscache and fill the read data into page cache described by
+ * @start/len, which shall be both aligned with PAGE_SIZE. @pstart describes
+ * the start physical address in the cache file.
+ */
+static int erofs_fscache_read_folios(struct fscache_cookie *cookie,
+				     struct address_space *mapping,
+				     loff_t start, size_t len,
+				     loff_t pstart)
+{
+	struct netfs_cache_resources cres;
+	struct iov_iter iter;
+	int ret;
+
+	memset(&cres, 0, sizeof(cres));
+
+	ret = fscache_begin_read_operation(&cres, cookie);
+	if (ret)
+		return ret;
+
+	iov_iter_xarray(&iter, READ, &mapping->i_pages, start, len);
+
+	ret = fscache_read(&cres, pstart, &iter,
+			   NETFS_READ_HOLE_ONDEMAND, NULL, NULL);
+
+	fscache_end_operation(&cres);
+	return ret;
+}
+
 static const struct address_space_operations erofs_fscache_meta_aops = {
 };
 
-- 
2.27.0


^ permalink raw reply related	[flat|nested] 56+ messages in thread

* [PATCH v8 14/20] erofs: register fscache context for primary data blob
  2022-04-06  7:55 [PATCH v8 00/20] fscache,erofs: fscache-based on-demand read semantics Jeffle Xu
                   ` (12 preceding siblings ...)
  2022-04-06  7:56 ` [PATCH v8 13/20] erofs: add erofs_fscache_read_folios() helper Jeffle Xu
@ 2022-04-06  7:56 ` Jeffle Xu
  2022-04-07 14:09   ` Gao Xiang
  2022-04-06  7:56 ` [PATCH v8 15/20] erofs: register fscache context for extra data blobs Jeffle Xu
                   ` (13 subsequent siblings)
  27 siblings, 1 reply; 56+ messages in thread
From: Jeffle Xu @ 2022-04-06  7:56 UTC (permalink / raw)
  To: dhowells, linux-cachefs, xiang, chao, linux-erofs
  Cc: torvalds, gregkh, willy, linux-fsdevel, joseph.qi, bo.liu,
	tao.peng, gerry, eguan, linux-kernel, luodaowen.backend,
	tianzichen, fannaihao

Register the fscache context for the primary data blob. Also move the
initialization of s_op and related fields forward, since the anonymous
inode will be allocated under the super block when registering the
fscache context.

Two things are worth mentioning about the cleanup routine:

1. The fscache context will instantiate anonymous inodes under the super
block. Release these anonymous inodes when .put_super() is called, or
we'll get a "VFS: Busy inodes after unmount." warning.

2. The fscache context is initialized prior to the root inode. If
.kill_sb() is called when the mount fails, .put_super() won't be called
since the root inode has not been initialized yet. Thus .kill_sb() shall
also contain the cleanup routine.

Signed-off-by: Jeffle Xu <jefflexu@linux.alibaba.com>
---
 fs/erofs/internal.h |  1 +
 fs/erofs/super.c    | 15 +++++++++++----
 2 files changed, 12 insertions(+), 4 deletions(-)

diff --git a/fs/erofs/internal.h b/fs/erofs/internal.h
index 3a4a344cfed3..eb37b33bce37 100644
--- a/fs/erofs/internal.h
+++ b/fs/erofs/internal.h
@@ -155,6 +155,7 @@ struct erofs_sb_info {
 
 	/* fscache support */
 	struct fscache_volume *volume;
+	struct erofs_fscache *s_fscache;
 };
 
 #define EROFS_SB(sb) ((struct erofs_sb_info *)(sb)->s_fs_info)
diff --git a/fs/erofs/super.c b/fs/erofs/super.c
index 6590ed1b7d3b..9498b899b73b 100644
--- a/fs/erofs/super.c
+++ b/fs/erofs/super.c
@@ -585,6 +585,9 @@ static int erofs_fc_fill_super(struct super_block *sb, struct fs_context *fc)
 	int err;
 
 	sb->s_magic = EROFS_SUPER_MAGIC;
+	sb->s_flags |= SB_RDONLY | SB_NOATIME;
+	sb->s_maxbytes = MAX_LFS_FILESIZE;
+	sb->s_op = &erofs_sops;
 
 	if (!sb_set_blocksize(sb, EROFS_BLKSIZ)) {
 		erofs_err(sb, "failed to set erofs blksize");
@@ -605,6 +608,11 @@ static int erofs_fc_fill_super(struct super_block *sb, struct fs_context *fc)
 		err = erofs_fscache_register_fs(sb);
 		if (err)
 			return err;
+
+		err = erofs_fscache_register_cookie(sb, &sbi->s_fscache,
+						    sbi->opt.fsid, true);
+		if (err)
+			return err;
 	}
 
 	err = erofs_read_superblock(sb);
@@ -619,11 +627,8 @@ static int erofs_fc_fill_super(struct super_block *sb, struct fs_context *fc)
 			clear_opt(&sbi->opt, DAX_ALWAYS);
 		}
 	}
-	sb->s_flags |= SB_RDONLY | SB_NOATIME;
-	sb->s_maxbytes = MAX_LFS_FILESIZE;
-	sb->s_time_gran = 1;
 
-	sb->s_op = &erofs_sops;
+	sb->s_time_gran = 1;
 	sb->s_xattr = erofs_xattr_handlers;
 
 	if (test_opt(&sbi->opt, POSIX_ACL))
@@ -763,6 +768,7 @@ static void erofs_kill_sb(struct super_block *sb)
 
 	erofs_free_dev_context(sbi->devs);
 	fs_put_dax(sbi->dax_dev);
+	erofs_fscache_unregister_cookie(&sbi->s_fscache);
 	erofs_fscache_unregister_fs(sb);
 	kfree(sbi);
 	sb->s_fs_info = NULL;
@@ -781,6 +787,7 @@ static void erofs_put_super(struct super_block *sb)
 	iput(sbi->managed_cache);
 	sbi->managed_cache = NULL;
 #endif
+	erofs_fscache_unregister_cookie(&sbi->s_fscache);
 }
 
 static struct file_system_type erofs_fs_type = {
-- 
2.27.0


^ permalink raw reply related	[flat|nested] 56+ messages in thread

* [PATCH v8 15/20] erofs: register fscache context for extra data blobs
  2022-04-06  7:55 [PATCH v8 00/20] fscache,erofs: fscache-based on-demand read semantics Jeffle Xu
                   ` (13 preceding siblings ...)
  2022-04-06  7:56 ` [PATCH v8 14/20] erofs: register fscache context for primary data blob Jeffle Xu
@ 2022-04-06  7:56 ` Jeffle Xu
  2022-04-07 14:15   ` Gao Xiang
  2022-04-06  7:56 ` [PATCH v8 16/20] erofs: implement fscache-based metadata read Jeffle Xu
                   ` (12 subsequent siblings)
  27 siblings, 1 reply; 56+ messages in thread
From: Jeffle Xu @ 2022-04-06  7:56 UTC (permalink / raw)
  To: dhowells, linux-cachefs, xiang, chao, linux-erofs
  Cc: torvalds, gregkh, willy, linux-fsdevel, joseph.qi, bo.liu,
	tao.peng, gerry, eguan, linux-kernel, luodaowen.backend,
	tianzichen, fannaihao

Similar to the multi-device mode, erofs can be mounted from one
primary data blob (mandatory) and multiple extra data blobs (optional).

Register an fscache context for each extra data blob.

Signed-off-by: Jeffle Xu <jefflexu@linux.alibaba.com>
---
 fs/erofs/data.c     |  3 +++
 fs/erofs/internal.h |  2 ++
 fs/erofs/super.c    | 25 +++++++++++++++++--------
 3 files changed, 22 insertions(+), 8 deletions(-)

diff --git a/fs/erofs/data.c b/fs/erofs/data.c
index bc22642358ec..14b64d960541 100644
--- a/fs/erofs/data.c
+++ b/fs/erofs/data.c
@@ -199,6 +199,7 @@ int erofs_map_dev(struct super_block *sb, struct erofs_map_dev *map)
 	map->m_bdev = sb->s_bdev;
 	map->m_daxdev = EROFS_SB(sb)->dax_dev;
 	map->m_dax_part_off = EROFS_SB(sb)->dax_part_off;
+	map->m_fscache = EROFS_SB(sb)->s_fscache;
 
 	if (map->m_deviceid) {
 		down_read(&devs->rwsem);
@@ -210,6 +211,7 @@ int erofs_map_dev(struct super_block *sb, struct erofs_map_dev *map)
 		map->m_bdev = dif->bdev;
 		map->m_daxdev = dif->dax_dev;
 		map->m_dax_part_off = dif->dax_part_off;
+		map->m_fscache = dif->fscache;
 		up_read(&devs->rwsem);
 	} else if (devs->extra_devices) {
 		down_read(&devs->rwsem);
@@ -227,6 +229,7 @@ int erofs_map_dev(struct super_block *sb, struct erofs_map_dev *map)
 				map->m_bdev = dif->bdev;
 				map->m_daxdev = dif->dax_dev;
 				map->m_dax_part_off = dif->dax_part_off;
+				map->m_fscache = dif->fscache;
 				break;
 			}
 		}
diff --git a/fs/erofs/internal.h b/fs/erofs/internal.h
index eb37b33bce37..90f7d6286a4f 100644
--- a/fs/erofs/internal.h
+++ b/fs/erofs/internal.h
@@ -49,6 +49,7 @@ typedef u32 erofs_blk_t;
 
 struct erofs_device_info {
 	char *path;
+	struct erofs_fscache *fscache;
 	struct block_device *bdev;
 	struct dax_device *dax_dev;
 	u64 dax_part_off;
@@ -482,6 +483,7 @@ static inline int z_erofs_map_blocks_iter(struct inode *inode,
 #endif	/* !CONFIG_EROFS_FS_ZIP */
 
 struct erofs_map_dev {
+	struct erofs_fscache *m_fscache;
 	struct block_device *m_bdev;
 	struct dax_device *m_daxdev;
 	u64 m_dax_part_off;
diff --git a/fs/erofs/super.c b/fs/erofs/super.c
index 9498b899b73b..8c7181cd37e6 100644
--- a/fs/erofs/super.c
+++ b/fs/erofs/super.c
@@ -259,15 +259,23 @@ static int erofs_init_devices(struct super_block *sb,
 		}
 		dis = ptr + erofs_blkoff(pos);
 
-		bdev = blkdev_get_by_path(dif->path,
-					  FMODE_READ | FMODE_EXCL,
-					  sb->s_type);
-		if (IS_ERR(bdev)) {
-			err = PTR_ERR(bdev);
-			break;
+		if (erofs_is_fscache_mode(sb)) {
+			err = erofs_fscache_register_cookie(sb, &dif->fscache,
+							    dif->path, false);
+			if (err)
+				break;
+		} else {
+			bdev = blkdev_get_by_path(dif->path,
+						  FMODE_READ | FMODE_EXCL,
+						  sb->s_type);
+			if (IS_ERR(bdev)) {
+				err = PTR_ERR(bdev);
+				break;
+			}
+			dif->bdev = bdev;
+			dif->dax_dev = fs_dax_get_by_bdev(bdev, &dif->dax_part_off);
 		}
-		dif->bdev = bdev;
-		dif->dax_dev = fs_dax_get_by_bdev(bdev, &dif->dax_part_off);
+
 		dif->blocks = le32_to_cpu(dis->blocks);
 		dif->mapped_blkaddr = le32_to_cpu(dis->mapped_blkaddr);
 		sbi->total_blocks += dif->blocks;
@@ -701,6 +709,7 @@ static int erofs_release_device_info(int id, void *ptr, void *data)
 	fs_put_dax(dif->dax_dev);
 	if (dif->bdev)
 		blkdev_put(dif->bdev, FMODE_READ | FMODE_EXCL);
+	erofs_fscache_unregister_cookie(&dif->fscache);
 	kfree(dif->path);
 	kfree(dif);
 	return 0;
-- 
2.27.0


^ permalink raw reply related	[flat|nested] 56+ messages in thread

* [PATCH v8 16/20] erofs: implement fscache-based metadata read
  2022-04-06  7:55 [PATCH v8 00/20] fscache,erofs: fscache-based on-demand read semantics Jeffle Xu
                   ` (14 preceding siblings ...)
  2022-04-06  7:56 ` [PATCH v8 15/20] erofs: register fscache context for extra data blobs Jeffle Xu
@ 2022-04-06  7:56 ` Jeffle Xu
  2022-04-07 14:19   ` Gao Xiang
  2022-04-06  7:56 ` [PATCH v8 17/20] erofs: implement fscache-based data read for non-inline layout Jeffle Xu
                   ` (11 subsequent siblings)
  27 siblings, 1 reply; 56+ messages in thread
From: Jeffle Xu @ 2022-04-06  7:56 UTC (permalink / raw)
  To: dhowells, linux-cachefs, xiang, chao, linux-erofs
  Cc: torvalds, gregkh, willy, linux-fsdevel, joseph.qi, bo.liu,
	tao.peng, gerry, eguan, linux-kernel, luodaowen.backend,
	tianzichen, fannaihao

Implement the data plane of reading metadata from primary data blob
over fscache.

Signed-off-by: Jeffle Xu <jefflexu@linux.alibaba.com>
---
 fs/erofs/data.c     | 20 ++++++++++++++++++--
 fs/erofs/fscache.c  | 38 ++++++++++++++++++++++++++++++++++++++
 fs/erofs/internal.h |  9 +++++++++
 3 files changed, 65 insertions(+), 2 deletions(-)

diff --git a/fs/erofs/data.c b/fs/erofs/data.c
index 14b64d960541..cb8fe299ad67 100644
--- a/fs/erofs/data.c
+++ b/fs/erofs/data.c
@@ -31,15 +31,26 @@ void erofs_put_metabuf(struct erofs_buf *buf)
 void *erofs_bread(struct erofs_buf *buf, struct inode *inode,
 		  erofs_blk_t blkaddr, enum erofs_kmap_type type)
 {
-	struct address_space *const mapping = inode->i_mapping;
 	erofs_off_t offset = blknr_to_addr(blkaddr);
 	pgoff_t index = offset >> PAGE_SHIFT;
 	struct page *page = buf->page;
 
 	if (!page || page->index != index) {
 		erofs_put_metabuf(buf);
-		page = read_cache_page_gfp(mapping, index,
+		if (buf->sb) {
+			struct folio *folio;
+
+			folio = erofs_fscache_get_folio(buf->sb, index);
+			if (IS_ERR(folio))
+				page = ERR_CAST(folio);
+			else
+				page = folio_page(folio, 0);
+		} else {
+			struct address_space *const mapping = inode->i_mapping;
+
+			page = read_cache_page_gfp(mapping, index,
 				mapping_gfp_constraint(mapping, ~__GFP_FS));
+		}
 		if (IS_ERR(page))
 			return page;
 		/* should already be PageUptodate, no need to lock page */
@@ -63,6 +74,11 @@ void *erofs_bread(struct erofs_buf *buf, struct inode *inode,
 void *erofs_read_metabuf(struct erofs_buf *buf, struct super_block *sb,
 			 erofs_blk_t blkaddr, enum erofs_kmap_type type)
 {
+	if (erofs_is_fscache_mode(sb)) {
+		buf->sb = sb;
+		return erofs_bread(buf, NULL, blkaddr, type);
+	}
+
 	return erofs_bread(buf, sb->s_bdev->bd_inode, blkaddr, type);
 }
 
diff --git a/fs/erofs/fscache.c b/fs/erofs/fscache.c
index d38a6efc8e50..158cc273f8fb 100644
--- a/fs/erofs/fscache.c
+++ b/fs/erofs/fscache.c
@@ -34,9 +34,47 @@ static int erofs_fscache_read_folios(struct fscache_cookie *cookie,
 	return ret;
 }
 
+static int erofs_fscache_meta_readpage(struct file *data, struct page *page)
+{
+	int ret;
+	struct super_block *sb = (struct super_block *)data;
+	struct folio *folio = page_folio(page);
+	struct erofs_map_dev mdev = {
+		.m_deviceid = 0,
+		.m_pa = folio_pos(folio),
+	};
+
+	ret = erofs_map_dev(sb, &mdev);
+	if (ret)
+		goto out;
+
+	ret = erofs_fscache_read_folios(mdev.m_fscache->cookie,
+			folio_file_mapping(folio), folio_pos(folio),
+			folio_size(folio), mdev.m_pa);
+	if (ret)
+		goto out;
+
+	folio_mark_uptodate(folio);
+out:
+	folio_unlock(folio);
+	return ret;
+}
+
 static const struct address_space_operations erofs_fscache_meta_aops = {
+	.readpage = erofs_fscache_meta_readpage,
 };
 
+/*
+ * Get the page cache of data blob at the index offset.
+ * Return: up to date page on success, ERR_PTR() on failure.
+ */
+struct folio *erofs_fscache_get_folio(struct super_block *sb, pgoff_t index)
+{
+	struct erofs_fscache *ctx = EROFS_SB(sb)->s_fscache;
+
+	return read_mapping_folio(ctx->inode->i_mapping, index, (void *)sb);
+}
+
 /*
  * Create an fscache context for data blob.
  * Return: 0 on success and allocated fscache context is assigned to @fscache,
diff --git a/fs/erofs/internal.h b/fs/erofs/internal.h
index 90f7d6286a4f..e186051f0640 100644
--- a/fs/erofs/internal.h
+++ b/fs/erofs/internal.h
@@ -276,6 +276,7 @@ enum erofs_kmap_type {
 };
 
 struct erofs_buf {
+	struct super_block *sb;
 	struct page *page;
 	void *base;
 	enum erofs_kmap_type kmap_type;
@@ -639,6 +640,8 @@ int erofs_fscache_register_cookie(struct super_block *sb,
 				  struct erofs_fscache **fscache,
 				  char *name, bool need_inode);
 void erofs_fscache_unregister_cookie(struct erofs_fscache **fscache);
+
+struct folio *erofs_fscache_get_folio(struct super_block *sb, pgoff_t index);
 #else
 static inline int erofs_fscache_register_fs(struct super_block *sb) { return 0; }
 static inline void erofs_fscache_unregister_fs(struct super_block *sb) {}
@@ -653,6 +656,12 @@ static inline int erofs_fscache_register_cookie(struct super_block *sb,
 static inline void erofs_fscache_unregister_cookie(struct erofs_fscache **fscache)
 {
 }
+
+static inline struct folio *erofs_fscache_get_folio(struct super_block *sb,
+						    pgoff_t index)
+{
+	return ERR_PTR(-EOPNOTSUPP);
+}
 #endif
 
 #define EFSCORRUPTED    EUCLEAN         /* Filesystem is corrupted */
-- 
2.27.0


^ permalink raw reply related	[flat|nested] 56+ messages in thread

* [PATCH v8 17/20] erofs: implement fscache-based data read for non-inline layout
  2022-04-06  7:55 [PATCH v8 00/20] fscache,erofs: fscache-based on-demand read semantics Jeffle Xu
                   ` (15 preceding siblings ...)
  2022-04-06  7:56 ` [PATCH v8 16/20] erofs: implement fscache-based metadata read Jeffle Xu
@ 2022-04-06  7:56 ` Jeffle Xu
  2022-04-07 14:24   ` Gao Xiang
  2022-04-06  7:56 ` [PATCH v8 18/20] erofs: implement fscache-based data read for inline layout Jeffle Xu
                   ` (10 subsequent siblings)
  27 siblings, 1 reply; 56+ messages in thread
From: Jeffle Xu @ 2022-04-06  7:56 UTC (permalink / raw)
  To: dhowells, linux-cachefs, xiang, chao, linux-erofs
  Cc: torvalds, gregkh, willy, linux-fsdevel, joseph.qi, bo.liu,
	tao.peng, gerry, eguan, linux-kernel, luodaowen.backend,
	tianzichen, fannaihao

Implement the data plane of reading data from data blobs over fscache
for non-inline layout.

Signed-off-by: Jeffle Xu <jefflexu@linux.alibaba.com>
---
 fs/erofs/fscache.c  | 52 +++++++++++++++++++++++++++++++++++++++++++++
 fs/erofs/inode.c    |  5 +++++
 fs/erofs/internal.h |  2 ++
 3 files changed, 59 insertions(+)

diff --git a/fs/erofs/fscache.c b/fs/erofs/fscache.c
index 158cc273f8fb..65de1c754e80 100644
--- a/fs/erofs/fscache.c
+++ b/fs/erofs/fscache.c
@@ -60,10 +60,62 @@ static int erofs_fscache_meta_readpage(struct file *data, struct page *page)
 	return ret;
 }
 
+static int erofs_fscache_readpage(struct file *file, struct page *page)
+{
+	struct folio *folio = page_folio(page);
+	struct inode *inode = folio_file_mapping(folio)->host;
+	struct super_block *sb = inode->i_sb;
+	struct erofs_map_blocks map;
+	struct erofs_map_dev mdev;
+	erofs_off_t pos;
+	loff_t pstart;
+	int ret = 0;
+
+	DBG_BUGON(folio_size(folio) != EROFS_BLKSIZ);
+
+	pos = folio_pos(folio);
+	map.m_la = pos;
+
+	ret = erofs_map_blocks(inode, &map, EROFS_GET_BLOCKS_RAW);
+	if (ret)
+		goto out_unlock;
+
+	if (!(map.m_flags & EROFS_MAP_MAPPED)) {
+		folio_zero_range(folio, 0, folio_size(folio));
+		goto out_uptodate;
+	}
+
+	/* no-inline readpage */
+	mdev = (struct erofs_map_dev) {
+		.m_deviceid = map.m_deviceid,
+		.m_pa = map.m_pa,
+	};
+
+	ret = erofs_map_dev(sb, &mdev);
+	if (ret)
+		goto out_unlock;
+
+	pstart = mdev.m_pa + (pos - map.m_la);
+	ret = erofs_fscache_read_folios(mdev.m_fscache->cookie,
+			folio_file_mapping(folio), folio_pos(folio),
+			folio_size(folio), pstart);
+
+out_uptodate:
+	if (!ret)
+		folio_mark_uptodate(folio);
+out_unlock:
+	folio_unlock(folio);
+	return ret;
+}
+
 static const struct address_space_operations erofs_fscache_meta_aops = {
 	.readpage = erofs_fscache_meta_readpage,
 };
 
+const struct address_space_operations erofs_fscache_access_aops = {
+	.readpage = erofs_fscache_readpage,
+};
+
 /*
  * Get the page cache of data blob at the index offset.
  * Return: up to date page on success, ERR_PTR() on failure.
diff --git a/fs/erofs/inode.c b/fs/erofs/inode.c
index e8b37ba5e9ad..88b51b5fb53f 100644
--- a/fs/erofs/inode.c
+++ b/fs/erofs/inode.c
@@ -296,7 +296,12 @@ static int erofs_fill_inode(struct inode *inode, int isdir)
 		err = z_erofs_fill_inode(inode);
 		goto out_unlock;
 	}
+
 	inode->i_mapping->a_ops = &erofs_raw_access_aops;
+#ifdef CONFIG_EROFS_FS_ONDEMAND
+	if (erofs_is_fscache_mode(inode->i_sb))
+		inode->i_mapping->a_ops = &erofs_fscache_access_aops;
+#endif
 
 out_unlock:
 	erofs_put_metabuf(&buf);
diff --git a/fs/erofs/internal.h b/fs/erofs/internal.h
index e186051f0640..336d19647c96 100644
--- a/fs/erofs/internal.h
+++ b/fs/erofs/internal.h
@@ -642,6 +642,8 @@ int erofs_fscache_register_cookie(struct super_block *sb,
 void erofs_fscache_unregister_cookie(struct erofs_fscache **fscache);
 
 struct folio *erofs_fscache_get_folio(struct super_block *sb, pgoff_t index);
+
+extern const struct address_space_operations erofs_fscache_access_aops;
 #else
 static inline int erofs_fscache_register_fs(struct super_block *sb) { return 0; }
 static inline void erofs_fscache_unregister_fs(struct super_block *sb) {}
-- 
2.27.0


^ permalink raw reply related	[flat|nested] 56+ messages in thread

* [PATCH v8 18/20] erofs: implement fscache-based data read for inline layout
  2022-04-06  7:55 [PATCH v8 00/20] fscache,erofs: fscache-based on-demand read semantics Jeffle Xu
                   ` (16 preceding siblings ...)
  2022-04-06  7:56 ` [PATCH v8 17/20] erofs: implement fscache-based data read for non-inline layout Jeffle Xu
@ 2022-04-06  7:56 ` Jeffle Xu
  2022-04-07 14:31   ` Gao Xiang
  2022-04-06  7:56 ` [PATCH v8 19/20] erofs: implement fscache-based data readahead Jeffle Xu
                   ` (9 subsequent siblings)
  27 siblings, 1 reply; 56+ messages in thread
From: Jeffle Xu @ 2022-04-06  7:56 UTC (permalink / raw)
  To: dhowells, linux-cachefs, xiang, chao, linux-erofs
  Cc: torvalds, gregkh, willy, linux-fsdevel, joseph.qi, bo.liu,
	tao.peng, gerry, eguan, linux-kernel, luodaowen.backend,
	tianzichen, fannaihao

Implement the data plane of reading data from data blobs over fscache
for inline layout.

For the leading non-inline part, the data plane for non-inline layout is
reused; only the tail-packing part needs special handling.

Signed-off-by: Jeffle Xu <jefflexu@linux.alibaba.com>
---
 fs/erofs/fscache.c | 40 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 40 insertions(+)

diff --git a/fs/erofs/fscache.c b/fs/erofs/fscache.c
index 65de1c754e80..d32cb5840c6d 100644
--- a/fs/erofs/fscache.c
+++ b/fs/erofs/fscache.c
@@ -60,6 +60,40 @@ static int erofs_fscache_meta_readpage(struct file *data, struct page *page)
 	return ret;
 }
 
+static int erofs_fscache_readpage_inline(struct folio *folio,
+					 struct erofs_map_blocks *map)
+{
+	struct inode *inode = folio_file_mapping(folio)->host;
+	struct super_block *sb = inode->i_sb;
+	struct erofs_buf buf = __EROFS_BUF_INITIALIZER;
+	erofs_blk_t blknr;
+	size_t offset, len;
+	void *src, *dst;
+
+	/*
+	 * For inline (tail packing) layout, the offset may be non-zero, which
+	 * can be calculated from corresponding physical address directly.
+	 */
+	offset = erofs_blkoff(map->m_pa);
+	blknr = erofs_blknr(map->m_pa);
+	len = map->m_llen;
+
+	src = erofs_read_metabuf(&buf, sb, blknr, EROFS_KMAP);
+	if (IS_ERR(src))
+		return PTR_ERR(src);
+
+	DBG_BUGON(folio_size(folio) != PAGE_SIZE);
+
+	dst = kmap(folio_page(folio, 0));
+	memcpy(dst, src + offset, len);
+	memset(dst + len, 0, PAGE_SIZE - len);
+	kunmap(folio_page(folio, 0));
+
+	erofs_put_metabuf(&buf);
+
+	return 0;
+}
+
 static int erofs_fscache_readpage(struct file *file, struct page *page)
 {
 	struct folio *folio = page_folio(page);
@@ -85,6 +119,12 @@ static int erofs_fscache_readpage(struct file *file, struct page *page)
 		goto out_uptodate;
 	}
 
+	/* inline readpage */
+	if (map.m_flags & EROFS_MAP_META) {
+		ret = erofs_fscache_readpage_inline(folio, &map);
+		goto out_uptodate;
+	}
+
 	/* no-inline readpage */
 	mdev = (struct erofs_map_dev) {
 		.m_deviceid = map.m_deviceid,
-- 
2.27.0


^ permalink raw reply related	[flat|nested] 56+ messages in thread

* [PATCH v8 19/20] erofs: implement fscache-based data readahead
  2022-04-06  7:55 [PATCH v8 00/20] fscache,erofs: fscache-based on-demand read semantics Jeffle Xu
                   ` (17 preceding siblings ...)
  2022-04-06  7:56 ` [PATCH v8 18/20] erofs: implement fscache-based data read for inline layout Jeffle Xu
@ 2022-04-06  7:56 ` Jeffle Xu
  2022-04-07 14:36   ` Gao Xiang
  2022-04-06  7:56 ` [PATCH v8 20/20] erofs: add 'fsid' mount option Jeffle Xu
                   ` (8 subsequent siblings)
  27 siblings, 1 reply; 56+ messages in thread
From: Jeffle Xu @ 2022-04-06  7:56 UTC (permalink / raw)
  To: dhowells, linux-cachefs, xiang, chao, linux-erofs
  Cc: torvalds, gregkh, willy, linux-fsdevel, joseph.qi, bo.liu,
	tao.peng, gerry, eguan, linux-kernel, luodaowen.backend,
	tianzichen, fannaihao

Implement fscache-based data readahead. Also register an individual
bdi for each erofs instance to enable readahead.

Signed-off-by: Jeffle Xu <jefflexu@linux.alibaba.com>
---
 fs/erofs/fscache.c | 94 ++++++++++++++++++++++++++++++++++++++++++++++
 fs/erofs/super.c   |  4 ++
 2 files changed, 98 insertions(+)

diff --git a/fs/erofs/fscache.c b/fs/erofs/fscache.c
index d32cb5840c6d..620d44210809 100644
--- a/fs/erofs/fscache.c
+++ b/fs/erofs/fscache.c
@@ -148,12 +148,106 @@ static int erofs_fscache_readpage(struct file *file, struct page *page)
 	return ret;
 }
 
+static inline void erofs_fscache_unlock_folios(struct readahead_control *rac,
+					       size_t len)
+{
+	while (len) {
+		struct folio *folio = readahead_folio(rac);
+
+		len -= folio_size(folio);
+		folio_mark_uptodate(folio);
+		folio_unlock(folio);
+	}
+}
+
+static void erofs_fscache_readahead(struct readahead_control *rac)
+{
+	struct inode *inode = rac->mapping->host;
+	struct super_block *sb = inode->i_sb;
+	size_t len, count, done = 0;
+	erofs_off_t pos;
+	loff_t start, offset;
+	int ret;
+
+	if (!readahead_count(rac))
+		return;
+
+	start = readahead_pos(rac);
+	len = readahead_length(rac);
+
+	do {
+		struct erofs_map_blocks map;
+		struct erofs_map_dev mdev;
+
+		pos = start + done;
+		map.m_la = pos;
+
+		ret = erofs_map_blocks(inode, &map, EROFS_GET_BLOCKS_RAW);
+		if (ret)
+			return;
+
+		/*
+		 * 1) For CHUNK_BASED layout, the output m_la is rounded down to
+		 * the nearest chunk boundary, and the output m_llen actually
+		 * starts from the start of the containing chunk.
+		 * 2) For other cases, the output m_la is equal to o_la.
+		 */
+		offset = start + done;
+		count = min_t(size_t, map.m_llen - (pos - map.m_la), len - done);
+
+		/* Read-ahead Hole */
+		if (!(map.m_flags & EROFS_MAP_MAPPED)) {
+			struct iov_iter iter;
+
+			iov_iter_xarray(&iter, READ, &rac->mapping->i_pages,
+					offset, count);
+			iov_iter_zero(count, &iter);
+
+			erofs_fscache_unlock_folios(rac, count);
+			ret = count;
+			continue;
+		}
+
+		/* Read-ahead Inline */
+		if (map.m_flags & EROFS_MAP_META) {
+			struct folio *folio = readahead_folio(rac);
+
+			ret = erofs_fscache_readpage_inline(folio, &map);
+			if (!ret) {
+				folio_mark_uptodate(folio);
+				ret = folio_size(folio);
+			}
+
+			folio_unlock(folio);
+			continue;
+		}
+
+		/* Read-ahead No-inline */
+		mdev = (struct erofs_map_dev) {
+			.m_deviceid = map.m_deviceid,
+			.m_pa = map.m_pa,
+		};
+		ret = erofs_map_dev(sb, &mdev);
+		if (ret)
+			return;
+
+		ret = erofs_fscache_read_folios(mdev.m_fscache->cookie,
+				rac->mapping, offset, count,
+				mdev.m_pa + (pos - map.m_la));
+		if (!ret) {
+			erofs_fscache_unlock_folios(rac, count);
+			ret = count;
+		}
+	} while (ret > 0 && ((done += ret) < len));
+}
+
 static const struct address_space_operations erofs_fscache_meta_aops = {
 	.readpage = erofs_fscache_meta_readpage,
 };
 
 const struct address_space_operations erofs_fscache_access_aops = {
 	.readpage = erofs_fscache_readpage,
+	.readahead = erofs_fscache_readahead,
 };
 
 /*
diff --git a/fs/erofs/super.c b/fs/erofs/super.c
index 8c7181cd37e6..a5e4de60a0d8 100644
--- a/fs/erofs/super.c
+++ b/fs/erofs/super.c
@@ -621,6 +621,10 @@ static int erofs_fc_fill_super(struct super_block *sb, struct fs_context *fc)
 						    sbi->opt.fsid, true);
 		if (err)
 			return err;
+
+		err = super_setup_bdi(sb);
+		if (err)
+			return err;
 	}
 
 	err = erofs_read_superblock(sb);
-- 
2.27.0


^ permalink raw reply related	[flat|nested] 56+ messages in thread

* [PATCH v8 20/20] erofs: add 'fsid' mount option
  2022-04-06  7:55 [PATCH v8 00/20] fscache,erofs: fscache-based on-demand read semantics Jeffle Xu
                   ` (18 preceding siblings ...)
  2022-04-06  7:56 ` [PATCH v8 19/20] erofs: implement fscache-based data readahead Jeffle Xu
@ 2022-04-06  7:56 ` Jeffle Xu
  2022-04-07 14:39   ` Gao Xiang
  2022-04-10 12:51 ` [PATCH v8 00/20] fscache,erofs: fscache-based on-demand read semantics Gao Xiang
                   ` (7 subsequent siblings)
  27 siblings, 1 reply; 56+ messages in thread
From: Jeffle Xu @ 2022-04-06  7:56 UTC (permalink / raw)
  To: dhowells, linux-cachefs, xiang, chao, linux-erofs
  Cc: torvalds, gregkh, willy, linux-fsdevel, joseph.qi, bo.liu,
	tao.peng, gerry, eguan, linux-kernel, luodaowen.backend,
	tianzichen, fannaihao

Introduce the 'fsid' mount option to enable on-demand read semantics, in
which case erofs will be mounted from data blobs. Users can specify
the name of the primary data blob with this mount option.
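
As a usage sketch (not part of this patch), a mount in fscache mode could
look like the following; 'myblob' is a hypothetical primary data blob name
known to the cachefiles user daemon, and the source argument is ignored
since there is no block device.

	#include <sys/mount.h>

	/* Mount an erofs image backed by fscache/cachefiles (sketch). */
	static int mount_erofs_ondemand(const char *mntpoint)
	{
		return mount("none", mntpoint, "erofs", MS_RDONLY,
			     "fsid=myblob");
	}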

Signed-off-by: Jeffle Xu <jefflexu@linux.alibaba.com>
---
 fs/erofs/super.c | 48 ++++++++++++++++++++++++++++++++++++++++++------
 1 file changed, 42 insertions(+), 6 deletions(-)

diff --git a/fs/erofs/super.c b/fs/erofs/super.c
index a5e4de60a0d8..292b4a70ce19 100644
--- a/fs/erofs/super.c
+++ b/fs/erofs/super.c
@@ -398,6 +398,7 @@ enum {
 	Opt_dax,
 	Opt_dax_enum,
 	Opt_device,
+	Opt_fsid,
 	Opt_err
 };
 
@@ -422,6 +423,7 @@ static const struct fs_parameter_spec erofs_fs_parameters[] = {
 	fsparam_flag("dax",             Opt_dax),
 	fsparam_enum("dax",		Opt_dax_enum, erofs_dax_param_enums),
 	fsparam_string("device",	Opt_device),
+	fsparam_string("fsid",		Opt_fsid),
 	{}
 };
 
@@ -517,6 +519,16 @@ static int erofs_fc_parse_param(struct fs_context *fc,
 		}
 		++ctx->devs->extra_devices;
 		break;
+	case Opt_fsid:
+#ifdef CONFIG_EROFS_FS_ONDEMAND
+		kfree(ctx->opt.fsid);
+		ctx->opt.fsid = kstrdup(param->string, GFP_KERNEL);
+		if (!ctx->opt.fsid)
+			return -ENOMEM;
+#else
+		errorfc(fc, "fsid option not supported");
+#endif
+		break;
 	default:
 		return -ENOPARAM;
 	}
@@ -597,9 +609,14 @@ static int erofs_fc_fill_super(struct super_block *sb, struct fs_context *fc)
 	sb->s_maxbytes = MAX_LFS_FILESIZE;
 	sb->s_op = &erofs_sops;
 
-	if (!sb_set_blocksize(sb, EROFS_BLKSIZ)) {
-		erofs_err(sb, "failed to set erofs blksize");
-		return -EINVAL;
+	if (erofs_is_fscache_mode(sb)) {
+		sb->s_blocksize = EROFS_BLKSIZ;
+		sb->s_blocksize_bits = LOG_BLOCK_SIZE;
+	} else {
+		if (!sb_set_blocksize(sb, EROFS_BLKSIZ)) {
+			erofs_err(sb, "failed to set erofs blksize");
+			return -EINVAL;
+		}
 	}
 
 	sbi = kzalloc(sizeof(*sbi), GFP_KERNEL);
@@ -608,7 +625,7 @@ static int erofs_fc_fill_super(struct super_block *sb, struct fs_context *fc)
 
 	sb->s_fs_info = sbi;
 	sbi->opt = ctx->opt;
-	sbi->dax_dev = fs_dax_get_by_bdev(sb->s_bdev, &sbi->dax_part_off);
+	ctx->opt.fsid = NULL;
 	sbi->devs = ctx->devs;
 	ctx->devs = NULL;
 
@@ -625,6 +642,8 @@ static int erofs_fc_fill_super(struct super_block *sb, struct fs_context *fc)
 		err = super_setup_bdi(sb);
 		if (err)
 			return err;
+	} else {
+		sbi->dax_dev = fs_dax_get_by_bdev(sb->s_bdev, &sbi->dax_part_off);
 	}
 
 	err = erofs_read_superblock(sb);
@@ -684,6 +703,11 @@ static int erofs_fc_fill_super(struct super_block *sb, struct fs_context *fc)
 
 static int erofs_fc_get_tree(struct fs_context *fc)
 {
+	struct erofs_fs_context *ctx = fc->fs_private;
+
+	if (IS_ENABLED(CONFIG_EROFS_FS_ONDEMAND) && ctx->opt.fsid)
+		return get_tree_nodev(fc, erofs_fc_fill_super);
+
 	return get_tree_bdev(fc, erofs_fc_fill_super);
 }
 
@@ -733,6 +757,7 @@ static void erofs_fc_free(struct fs_context *fc)
 	struct erofs_fs_context *ctx = fc->fs_private;
 
 	erofs_free_dev_context(ctx->devs);
+	kfree(ctx->opt.fsid);
 	kfree(ctx);
 }
 
@@ -773,7 +798,10 @@ static void erofs_kill_sb(struct super_block *sb)
 
 	WARN_ON(sb->s_magic != EROFS_SUPER_MAGIC);
 
-	kill_block_super(sb);
+	if (erofs_is_fscache_mode(sb))
+		generic_shutdown_super(sb);
+	else
+		kill_block_super(sb);
 
 	sbi = EROFS_SB(sb);
 	if (!sbi)
@@ -783,6 +811,7 @@ static void erofs_kill_sb(struct super_block *sb)
 	fs_put_dax(sbi->dax_dev);
 	erofs_fscache_unregister_cookie(&sbi->s_fscache);
 	erofs_fscache_unregister_fs(sb);
+	kfree(sbi->opt.fsid);
 	kfree(sbi);
 	sb->s_fs_info = NULL;
 }
@@ -884,7 +913,10 @@ static int erofs_statfs(struct dentry *dentry, struct kstatfs *buf)
 {
 	struct super_block *sb = dentry->d_sb;
 	struct erofs_sb_info *sbi = EROFS_SB(sb);
-	u64 id = huge_encode_dev(sb->s_bdev->bd_dev);
+	u64 id = 0;
+
+	if (!erofs_is_fscache_mode(sb))
+		id = huge_encode_dev(sb->s_bdev->bd_dev);
 
 	buf->f_type = sb->s_magic;
 	buf->f_bsize = EROFS_BLKSIZ;
@@ -929,6 +961,10 @@ static int erofs_show_options(struct seq_file *seq, struct dentry *root)
 		seq_puts(seq, ",dax=always");
 	if (test_opt(opt, DAX_NEVER))
 		seq_puts(seq, ",dax=never");
+#ifdef CONFIG_EROFS_FS_ONDEMAND
+	if (opt->fsid)
+		seq_printf(seq, ",fsid=%s", opt->fsid);
+#endif
 	return 0;
 }
 
-- 
2.27.0


^ permalink raw reply related	[flat|nested] 56+ messages in thread

* Re: [PATCH v8 08/20] erofs: make erofs_map_blocks() generally available
  2022-04-06  7:56 ` [PATCH v8 08/20] erofs: make erofs_map_blocks() generally available Jeffle Xu
@ 2022-04-07  2:44   ` Gao Xiang
  0 siblings, 0 replies; 56+ messages in thread
From: Gao Xiang @ 2022-04-07  2:44 UTC (permalink / raw)
  To: Jeffle Xu
  Cc: dhowells, linux-cachefs, xiang, chao, linux-erofs, torvalds,
	gregkh, willy, linux-fsdevel, joseph.qi, bo.liu, tao.peng, gerry,
	eguan, linux-kernel, luodaowen.backend, tianzichen, fannaihao

On Wed, Apr 06, 2022 at 03:56:00PM +0800, Jeffle Xu wrote:
> ... so that it can be used in the fscache mode introduced by
> subsequent patches.
> 
> Signed-off-by: Jeffle Xu <jefflexu@linux.alibaba.com>

Reviewed-by: Gao Xiang <hsiangkao@linux.alibaba.com>

Thanks,
Gao Xiang

> ---
>  fs/erofs/data.c     | 4 ++--
>  fs/erofs/internal.h | 2 ++
>  2 files changed, 4 insertions(+), 2 deletions(-)
> 
> diff --git a/fs/erofs/data.c b/fs/erofs/data.c
> index 780db1e5f4b7..bc22642358ec 100644
> --- a/fs/erofs/data.c
> +++ b/fs/erofs/data.c
> @@ -110,8 +110,8 @@ static int erofs_map_blocks_flatmode(struct inode *inode,
>  	return 0;
>  }
>  
> -static int erofs_map_blocks(struct inode *inode,
> -			    struct erofs_map_blocks *map, int flags)
> +int erofs_map_blocks(struct inode *inode,
> +		     struct erofs_map_blocks *map, int flags)
>  {
>  	struct super_block *sb = inode->i_sb;
>  	struct erofs_inode *vi = EROFS_I(inode);
> diff --git a/fs/erofs/internal.h b/fs/erofs/internal.h
> index 5298c4ee277d..fe9564e5091e 100644
> --- a/fs/erofs/internal.h
> +++ b/fs/erofs/internal.h
> @@ -486,6 +486,8 @@ void *erofs_read_metabuf(struct erofs_buf *buf, struct super_block *sb,
>  int erofs_map_dev(struct super_block *sb, struct erofs_map_dev *dev);
>  int erofs_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
>  		 u64 start, u64 len);
> +int erofs_map_blocks(struct inode *inode,
> +		     struct erofs_map_blocks *map, int flags);
>  
>  /* inode.c */
>  static inline unsigned long erofs_inode_hash(erofs_nid_t nid)
> -- 
> 2.27.0

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH v8 09/20] erofs: add mode checking helper
  2022-04-06  7:56 ` [PATCH v8 09/20] erofs: add mode checking helper Jeffle Xu
@ 2022-04-07  2:46   ` Gao Xiang
  0 siblings, 0 replies; 56+ messages in thread
From: Gao Xiang @ 2022-04-07  2:46 UTC (permalink / raw)
  To: Jeffle Xu
  Cc: dhowells, linux-cachefs, xiang, chao, linux-erofs, torvalds,
	gregkh, willy, linux-fsdevel, joseph.qi, bo.liu, tao.peng, gerry,
	eguan, linux-kernel, luodaowen.backend, tianzichen, fannaihao

On Wed, Apr 06, 2022 at 03:56:01PM +0800, Jeffle Xu wrote:
> Until now, erofs has been strictly a blockdev-based filesystem.
> 
> A new fscache-based mode is going to be introduced for erofs to support
> scenarios where on-demand read semantics is needed, e.g. container
> image distribution. In this case, erofs could be mounted from data blobs
> through fscache.
> 
> Add a helper checking which mode erofs works in.
> 
> Signed-off-by: Jeffle Xu <jefflexu@linux.alibaba.com>

Reviewed-by: Gao Xiang <hsiangkao@linux.alibaba.com>

Thanks,
Gao Xiang

> ---
>  fs/erofs/internal.h | 5 +++++
>  1 file changed, 5 insertions(+)
> 
> diff --git a/fs/erofs/internal.h b/fs/erofs/internal.h
> index fe9564e5091e..05a97533b1e9 100644
> --- a/fs/erofs/internal.h
> +++ b/fs/erofs/internal.h
> @@ -161,6 +161,11 @@ struct erofs_sb_info {
>  #define set_opt(opt, option)	((opt)->mount_opt |= EROFS_MOUNT_##option)
>  #define test_opt(opt, option)	((opt)->mount_opt & EROFS_MOUNT_##option)
>  
> +static inline bool erofs_is_fscache_mode(struct super_block *sb)
> +{
> +	return IS_ENABLED(CONFIG_EROFS_FS_ONDEMAND) && !sb->s_bdev;
> +}
> +
>  enum {
>  	EROFS_ZIP_CACHE_DISABLED,
>  	EROFS_ZIP_CACHE_READAHEAD,
> -- 
> 2.27.0
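
For illustration only, here is a sketch of how such a helper ends up
being used by later patches in this series (demo_pick_aops is a
hypothetical wrapper added purely for this example):

    static void demo_pick_aops(struct super_block *sb, struct inode *inode)
    {
            /* fscache mode has no backing bdev; route reads through fscache */
            if (erofs_is_fscache_mode(sb))
                    inode->i_mapping->a_ops = &erofs_fscache_access_aops;
            else
                    inode->i_mapping->a_ops = &erofs_raw_access_aops;
    }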

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH v8 10/20] erofs: register fscache volume
  2022-04-06  7:56 ` [PATCH v8 10/20] erofs: register fscache volume Jeffle Xu
@ 2022-04-07  2:50   ` Gao Xiang
  0 siblings, 0 replies; 56+ messages in thread
From: Gao Xiang @ 2022-04-07  2:50 UTC (permalink / raw)
  To: Jeffle Xu
  Cc: dhowells, linux-cachefs, xiang, chao, linux-erofs, torvalds,
	gregkh, willy, linux-fsdevel, joseph.qi, bo.liu, tao.peng, gerry,
	eguan, linux-kernel, luodaowen.backend, tianzichen, fannaihao

On Wed, Apr 06, 2022 at 03:56:02PM +0800, Jeffle Xu wrote:
> A new fscache-based mode is going to be introduced for erofs, in which
> case on-demand read semantics is implemented through fscache.
> 
> As the first step, register an fscache volume for each erofs filesystem.
> That means data blobs cannot be shared among erofs filesystems. In a
> later iteration, we are going to introduce domain semantics, in which
> case several erofs filesystems can belong to one domain and data blobs
> can be shared among the erofs filesystems of that domain.
> 
> Signed-off-by: Jeffle Xu <jefflexu@linux.alibaba.com>

Reviewed-by: Gao Xiang <hsiangkao@linux.alibaba.com>

Thanks,
Gao Xiang

> ---
>  fs/erofs/Kconfig    | 10 ++++++++++
>  fs/erofs/Makefile   |  1 +
>  fs/erofs/fscache.c  | 37 +++++++++++++++++++++++++++++++++++++
>  fs/erofs/internal.h | 13 +++++++++++++
>  fs/erofs/super.c    |  7 +++++++
>  5 files changed, 68 insertions(+)
>  create mode 100644 fs/erofs/fscache.c
> 
> diff --git a/fs/erofs/Kconfig b/fs/erofs/Kconfig
> index f57255ab88ed..3d05265e3e8e 100644
> --- a/fs/erofs/Kconfig
> +++ b/fs/erofs/Kconfig
> @@ -98,3 +98,13 @@ config EROFS_FS_ZIP_LZMA
>  	  systems will be readable without selecting this option.
>  
>  	  If unsure, say N.
> +
> +config EROFS_FS_ONDEMAND
> +	bool "EROFS fscache-based ondemand-read"
> +	depends on CACHEFILES_ONDEMAND && (EROFS_FS=m && FSCACHE || EROFS_FS=y && FSCACHE=y)
> +	default n
> +	help
> +	  EROFS is mounted from data blobs and on-demand read semantics is
> +	  implemented through fscache.
> +
> +	  If unsure, say N.
> diff --git a/fs/erofs/Makefile b/fs/erofs/Makefile
> index 8a3317e38e5a..99bbc597a3e9 100644
> --- a/fs/erofs/Makefile
> +++ b/fs/erofs/Makefile
> @@ -5,3 +5,4 @@ erofs-objs := super.o inode.o data.o namei.o dir.o utils.o pcpubuf.o sysfs.o
>  erofs-$(CONFIG_EROFS_FS_XATTR) += xattr.o
>  erofs-$(CONFIG_EROFS_FS_ZIP) += decompressor.o zmap.o zdata.o
>  erofs-$(CONFIG_EROFS_FS_ZIP_LZMA) += decompressor_lzma.o
> +erofs-$(CONFIG_EROFS_FS_ONDEMAND) += fscache.o
> diff --git a/fs/erofs/fscache.c b/fs/erofs/fscache.c
> new file mode 100644
> index 000000000000..7a6d0239ebb1
> --- /dev/null
> +++ b/fs/erofs/fscache.c
> @@ -0,0 +1,37 @@
> +// SPDX-License-Identifier: GPL-2.0-or-later
> +/*
> + * Copyright (C) 2022, Alibaba Cloud
> + */
> +#include <linux/fscache.h>
> +#include "internal.h"
> +
> +int erofs_fscache_register_fs(struct super_block *sb)
> +{
> +	struct erofs_sb_info *sbi = EROFS_SB(sb);
> +	struct fscache_volume *volume;
> +	char *name;
> +	int ret = 0;
> +
> +	name = kasprintf(GFP_KERNEL, "erofs,%s", sbi->opt.fsid);
> +	if (!name)
> +		return -ENOMEM;
> +
> +	volume = fscache_acquire_volume(name, NULL, NULL, 0);
> +	if (IS_ERR_OR_NULL(volume)) {
> +		erofs_err(sb, "failed to register volume for %s", name);
> +		ret = volume ? PTR_ERR(volume) : -EOPNOTSUPP;
> +		volume = NULL;
> +	}
> +
> +	sbi->volume = volume;
> +	kfree(name);
> +	return ret;
> +}
> +
> +void erofs_fscache_unregister_fs(struct super_block *sb)
> +{
> +	struct erofs_sb_info *sbi = EROFS_SB(sb);
> +
> +	fscache_relinquish_volume(sbi->volume, NULL, false);
> +	sbi->volume = NULL;
> +}
> diff --git a/fs/erofs/internal.h b/fs/erofs/internal.h
> index 05a97533b1e9..952a2f483f94 100644
> --- a/fs/erofs/internal.h
> +++ b/fs/erofs/internal.h
> @@ -74,6 +74,7 @@ struct erofs_mount_opts {
>  	unsigned int max_sync_decompress_pages;
>  #endif
>  	unsigned int mount_opt;
> +	char *fsid;
>  };
>  
>  struct erofs_dev_context {
> @@ -146,6 +147,9 @@ struct erofs_sb_info {
>  	/* sysfs support */
>  	struct kobject s_kobj;		/* /sys/fs/erofs/<devname> */
>  	struct completion s_kobj_unregister;
> +
> +	/* fscache support */
> +	struct fscache_volume *volume;
>  };
>  
>  #define EROFS_SB(sb) ((struct erofs_sb_info *)(sb)->s_fs_info)
> @@ -618,6 +622,15 @@ static inline int z_erofs_load_lzma_config(struct super_block *sb,
>  }
>  #endif	/* !CONFIG_EROFS_FS_ZIP */
>  
> +/* fscache.c */
> +#ifdef CONFIG_EROFS_FS_ONDEMAND
> +int erofs_fscache_register_fs(struct super_block *sb);
> +void erofs_fscache_unregister_fs(struct super_block *sb);
> +#else
> +static inline int erofs_fscache_register_fs(struct super_block *sb) { return 0; }
> +static inline void erofs_fscache_unregister_fs(struct super_block *sb) {}
> +#endif
> +
>  #define EFSCORRUPTED    EUCLEAN         /* Filesystem is corrupted */
>  
>  #endif	/* __EROFS_INTERNAL_H */
> diff --git a/fs/erofs/super.c b/fs/erofs/super.c
> index 0c4b41130c2f..6590ed1b7d3b 100644
> --- a/fs/erofs/super.c
> +++ b/fs/erofs/super.c
> @@ -601,6 +601,12 @@ static int erofs_fc_fill_super(struct super_block *sb, struct fs_context *fc)
>  	sbi->devs = ctx->devs;
>  	ctx->devs = NULL;
>  
> +	if (erofs_is_fscache_mode(sb)) {
> +		err = erofs_fscache_register_fs(sb);
> +		if (err)
> +			return err;
> +	}
> +
>  	err = erofs_read_superblock(sb);
>  	if (err)
>  		return err;
> @@ -757,6 +763,7 @@ static void erofs_kill_sb(struct super_block *sb)
>  
>  	erofs_free_dev_context(sbi->devs);
>  	fs_put_dax(sbi->dax_dev);
> +	erofs_fscache_unregister_fs(sb);
>  	kfree(sbi);
>  	sb->s_fs_info = NULL;
>  }
> -- 
> 2.27.0
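
As a concrete example of the volume naming above: with a hypothetical
fsid of "myblob", erofs_fscache_register_fs() would acquire a volume
keyed as "erofs,myblob".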

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH v8 11/20] erofs: add fscache context helper functions
  2022-04-06  7:56 ` [PATCH v8 11/20] erofs: add fscache context helper functions Jeffle Xu
@ 2022-04-07  3:25   ` Gao Xiang
  0 siblings, 0 replies; 56+ messages in thread
From: Gao Xiang @ 2022-04-07  3:25 UTC (permalink / raw)
  To: Jeffle Xu
  Cc: dhowells, linux-cachefs, xiang, chao, linux-erofs, torvalds,
	gregkh, willy, linux-fsdevel, joseph.qi, bo.liu, tao.peng, gerry,
	eguan, linux-kernel, luodaowen.backend, tianzichen, fannaihao

On Wed, Apr 06, 2022 at 03:56:03PM +0800, Jeffle Xu wrote:
> Introduce a context structure for managing data blobs, and helper
> functions for initializing and cleaning up this context structure.
> 
> Signed-off-by: Jeffle Xu <jefflexu@linux.alibaba.com>

Reviewed-by: Gao Xiang <hsiangkao@linux.alibaba.com>

Thanks,
Gao Xiang

> ---
>  fs/erofs/fscache.c  | 46 +++++++++++++++++++++++++++++++++++++++++++++
>  fs/erofs/internal.h | 19 +++++++++++++++++++
>  2 files changed, 65 insertions(+)
> 
> diff --git a/fs/erofs/fscache.c b/fs/erofs/fscache.c
> index 7a6d0239ebb1..67a3c4935245 100644
> --- a/fs/erofs/fscache.c
> +++ b/fs/erofs/fscache.c
> @@ -5,6 +5,52 @@
>  #include <linux/fscache.h>
>  #include "internal.h"
>  
> +/*
> + * Create an fscache context for data blob.
> + * Return: 0 on success and allocated fscache context is assigned to @fscache,
> + *	   negative error number on failure.
> + */
> +int erofs_fscache_register_cookie(struct super_block *sb,
> +				  struct erofs_fscache **fscache, char *name)
> +{
> +	struct fscache_volume *volume = EROFS_SB(sb)->volume;
> +	struct erofs_fscache *ctx;
> +	struct fscache_cookie *cookie;
> +
> +	ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
> +	if (!ctx)
> +		return -ENOMEM;
> +
> +	cookie = fscache_acquire_cookie(volume, FSCACHE_ADV_WANT_CACHE_SIZE,
> +					name, strlen(name), NULL, 0, 0);
> +	if (!cookie) {
> +		erofs_err(sb, "failed to get cookie for %s", name);
> +		kfree(name);
> +		return -EINVAL;
> +	}
> +
> +	fscache_use_cookie(cookie, false);
> +	ctx->cookie = cookie;
> +
> +	*fscache = ctx;
> +	return 0;
> +}
> +
> +void erofs_fscache_unregister_cookie(struct erofs_fscache **fscache)
> +{
> +	struct erofs_fscache *ctx = *fscache;
> +
> +	if (!ctx)
> +		return;
> +
> +	fscache_unuse_cookie(ctx->cookie, NULL, NULL);
> +	fscache_relinquish_cookie(ctx->cookie, false);
> +	ctx->cookie = NULL;
> +
> +	kfree(ctx);
> +	*fscache = NULL;
> +}
> +
>  int erofs_fscache_register_fs(struct super_block *sb)
>  {
>  	struct erofs_sb_info *sbi = EROFS_SB(sb);
> diff --git a/fs/erofs/internal.h b/fs/erofs/internal.h
> index 952a2f483f94..c6a3351a4d7d 100644
> --- a/fs/erofs/internal.h
> +++ b/fs/erofs/internal.h
> @@ -97,6 +97,10 @@ struct erofs_sb_lz4_info {
>  	u16 max_pclusterblks;
>  };
>  
> +struct erofs_fscache {
> +	struct fscache_cookie *cookie;
> +};
> +
>  struct erofs_sb_info {
>  	struct erofs_mount_opts opt;	/* options */
>  #ifdef CONFIG_EROFS_FS_ZIP
> @@ -626,9 +630,24 @@ static inline int z_erofs_load_lzma_config(struct super_block *sb,
>  #ifdef CONFIG_EROFS_FS_ONDEMAND
>  int erofs_fscache_register_fs(struct super_block *sb);
>  void erofs_fscache_unregister_fs(struct super_block *sb);
> +
> +int erofs_fscache_register_cookie(struct super_block *sb,
> +				  struct erofs_fscache **fscache, char *name);
> +void erofs_fscache_unregister_cookie(struct erofs_fscache **fscache);
>  #else
>  static inline int erofs_fscache_register_fs(struct super_block *sb) { return 0; }
>  static inline void erofs_fscache_unregister_fs(struct super_block *sb) {}
> +
> +static inline int erofs_fscache_register_cookie(struct super_block *sb,
> +						struct erofs_fscache **fscache,
> +						char *name)
> +{
> +	return -EOPNOTSUPP;
> +}
> +
> +static inline void erofs_fscache_unregister_cookie(struct erofs_fscache **fscache)
> +{
> +}
>  #endif
>  
>  #define EFSCORRUPTED    EUCLEAN         /* Filesystem is corrupted */
> -- 
> 2.27.0

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH v8 12/20] erofs: add anonymous inode managing page cache for data blob
  2022-04-06  7:56 ` [PATCH v8 12/20] erofs: add anonymous inode managing page cache for data blob Jeffle Xu
@ 2022-04-07  5:31   ` Gao Xiang
  2022-04-08  2:14     ` JeffleXu
  0 siblings, 1 reply; 56+ messages in thread
From: Gao Xiang @ 2022-04-07  5:31 UTC (permalink / raw)
  To: Jeffle Xu
  Cc: dhowells, linux-cachefs, xiang, chao, linux-erofs, torvalds,
	gregkh, willy, linux-fsdevel, joseph.qi, bo.liu, tao.peng, gerry,
	eguan, linux-kernel, luodaowen.backend, tianzichen, fannaihao

On Wed, Apr 06, 2022 at 03:56:04PM +0800, Jeffle Xu wrote:
> Introduce one anonymous inode managing page cache for data blob. Then
> erofs could read directly from the address space of the anonymous inode
> when cache hit.

Introduce one anonymous inode for data blobs so that erofs
can cache metadata directly within such anonymous inode.

> 
> Signed-off-by: Jeffle Xu <jefflexu@linux.alibaba.com>

Yeah, I think currently we can live with that:

Reviewed-by: Gao Xiang <hsiangkao@linux.alibaba.com>

Thanks,
Gao Xiang


> ---
>  fs/erofs/fscache.c  | 39 ++++++++++++++++++++++++++++++++++++---
>  fs/erofs/internal.h |  6 ++++--
>  2 files changed, 40 insertions(+), 5 deletions(-)
> 
> diff --git a/fs/erofs/fscache.c b/fs/erofs/fscache.c
> index 67a3c4935245..1c88614203d2 100644
> --- a/fs/erofs/fscache.c
> +++ b/fs/erofs/fscache.c
> @@ -5,17 +5,22 @@
>  #include <linux/fscache.h>
>  #include "internal.h"
>  
> +static const struct address_space_operations erofs_fscache_meta_aops = {
> +};
> +
>  /*
>   * Create an fscache context for data blob.
>   * Return: 0 on success and allocated fscache context is assigned to @fscache,
>   *	   negative error number on failure.
>   */
>  int erofs_fscache_register_cookie(struct super_block *sb,
> -				  struct erofs_fscache **fscache, char *name)
> +				  struct erofs_fscache **fscache,
> +				  char *name, bool need_inode)
>  {
>  	struct fscache_volume *volume = EROFS_SB(sb)->volume;
>  	struct erofs_fscache *ctx;
>  	struct fscache_cookie *cookie;
> +	int ret;
>  
>  	ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
>  	if (!ctx)
> @@ -25,15 +30,40 @@ int erofs_fscache_register_cookie(struct super_block *sb,
>  					name, strlen(name), NULL, 0, 0);
>  	if (!cookie) {
>  		erofs_err(sb, "failed to get cookie for %s", name);
> -		kfree(name);
> -		return -EINVAL;
> +		ret = -EINVAL;
> +		goto err;
>  	}
>  
>  	fscache_use_cookie(cookie, false);
>  	ctx->cookie = cookie;
>  
> +	if (need_inode) {
> +		struct inode *const inode = new_inode(sb);
> +
> +		if (!inode) {
> +			erofs_err(sb, "failed to get anon inode for %s", name);
> +			ret = -ENOMEM;
> +			goto err_cookie;
> +		}
> +
> +		set_nlink(inode, 1);
> +		inode->i_size = OFFSET_MAX;
> +		inode->i_mapping->a_ops = &erofs_fscache_meta_aops;
> +		mapping_set_gfp_mask(inode->i_mapping, GFP_NOFS);
> +
> +		ctx->inode = inode;
> +	}
> +
>  	*fscache = ctx;
>  	return 0;
> +
> +err_cookie:
> +	fscache_unuse_cookie(ctx->cookie, NULL, NULL);
> +	fscache_relinquish_cookie(ctx->cookie, false);
> +	ctx->cookie = NULL;
> +err:
> +	kfree(ctx);
> +	return ret;
>  }
>  
>  void erofs_fscache_unregister_cookie(struct erofs_fscache **fscache)
> @@ -47,6 +77,9 @@ void erofs_fscache_unregister_cookie(struct erofs_fscache **fscache)
>  	fscache_relinquish_cookie(ctx->cookie, false);
>  	ctx->cookie = NULL;
>  
> +	iput(ctx->inode);
> +	ctx->inode = NULL;
> +
>  	kfree(ctx);
>  	*fscache = NULL;
>  }
> diff --git a/fs/erofs/internal.h b/fs/erofs/internal.h
> index c6a3351a4d7d..3a4a344cfed3 100644
> --- a/fs/erofs/internal.h
> +++ b/fs/erofs/internal.h
> @@ -99,6 +99,7 @@ struct erofs_sb_lz4_info {
>  
>  struct erofs_fscache {
>  	struct fscache_cookie *cookie;
> +	struct inode *inode;
>  };
>  
>  struct erofs_sb_info {
> @@ -632,7 +633,8 @@ int erofs_fscache_register_fs(struct super_block *sb);
>  void erofs_fscache_unregister_fs(struct super_block *sb);
>  
>  int erofs_fscache_register_cookie(struct super_block *sb,
> -				  struct erofs_fscache **fscache, char *name);
> +				  struct erofs_fscache **fscache,
> +				  char *name, bool need_inode);
>  void erofs_fscache_unregister_cookie(struct erofs_fscache **fscache);
>  #else
>  static inline int erofs_fscache_register_fs(struct super_block *sb) { return 0; }
> @@ -640,7 +642,7 @@ static inline void erofs_fscache_unregister_fs(struct super_block *sb) {}
>  
>  static inline int erofs_fscache_register_cookie(struct super_block *sb,
>  						struct erofs_fscache **fscache,
> -						char *name)
> +						char *name, bool need_inode)
>  {
>  	return -EOPNOTSUPP;
>  }
> -- 
> 2.27.0

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH v8 13/20] erofs: add erofs_fscache_read_folios() helper
  2022-04-06  7:56 ` [PATCH v8 13/20] erofs: add erofs_fscache_read_folios() helper Jeffle Xu
@ 2022-04-07 14:05   ` Gao Xiang
  0 siblings, 0 replies; 56+ messages in thread
From: Gao Xiang @ 2022-04-07 14:05 UTC (permalink / raw)
  To: Jeffle Xu
  Cc: dhowells, linux-cachefs, xiang, chao, linux-erofs, torvalds,
	gregkh, willy, linux-fsdevel, joseph.qi, bo.liu, tao.peng, gerry,
	eguan, linux-kernel, luodaowen.backend, tianzichen, fannaihao

On Wed, Apr 06, 2022 at 03:56:05PM +0800, Jeffle Xu wrote:
> Add the erofs_fscache_read_folios() helper for reading from fscache. It
> supports on-demand read semantics: on a cache miss it will make the
> backend prepare the data, and once the data is ready it will reinitiate
> a read from the cache.
> 
> This helper can then be used to implement .readpage()/.readahead() with
> on-demand read semantics.
> 
> Signed-off-by: Jeffle Xu <jefflexu@linux.alibaba.com>

Reviewed-by: Gao Xiang <hsiangkao@linux.alibaba.com>

Thanks,
Gao Xiang

> ---
>  fs/erofs/fscache.c | 29 +++++++++++++++++++++++++++++
>  1 file changed, 29 insertions(+)
> 
> diff --git a/fs/erofs/fscache.c b/fs/erofs/fscache.c
> index 1c88614203d2..d38a6efc8e50 100644
> --- a/fs/erofs/fscache.c
> +++ b/fs/erofs/fscache.c
> @@ -5,6 +5,35 @@
>  #include <linux/fscache.h>
>  #include "internal.h"
>  
> +/*
> + * Read data from fscache and fill the read data into page cache described by
> + * @start/len, which shall be both aligned with PAGE_SIZE. @pstart describes
> + * the start physical address in the cache file.
> + */
> +static int erofs_fscache_read_folios(struct fscache_cookie *cookie,
> +				     struct address_space *mapping,
> +				     loff_t start, size_t len,
> +				     loff_t pstart)
> +{
> +	struct netfs_cache_resources cres;
> +	struct iov_iter iter;
> +	int ret;
> +
> +	memset(&cres, 0, sizeof(cres));
> +
> +	ret = fscache_begin_read_operation(&cres, cookie);
> +	if (ret)
> +		return ret;
> +
> +	iov_iter_xarray(&iter, READ, &mapping->i_pages, start, len);
> +
> +	ret = fscache_read(&cres, pstart, &iter,
> +			   NETFS_READ_HOLE_ONDEMAND, NULL, NULL);
> +
> +	fscache_end_operation(&cres);
> +	return ret;
> +}
> +
>  static const struct address_space_operations erofs_fscache_meta_aops = {
>  };
>  
> -- 
> 2.27.0
> 
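
For readers skimming the thread, a minimal sketch of how later patches
in this series drive this helper (demo_read_one_folio is a hypothetical
caller used purely for illustration):

    static int demo_read_one_folio(struct fscache_cookie *cookie,
                                   struct folio *folio, loff_t blob_pos)
    {
            int ret;

            /* fill this folio's page-cache slot from blob offset blob_pos */
            ret = erofs_fscache_read_folios(cookie, folio_file_mapping(folio),
                                            folio_pos(folio),
                                            folio_size(folio), blob_pos);
            if (!ret)
                    folio_mark_uptodate(folio);
            folio_unlock(folio);
            return ret;
    }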

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH v8 14/20] erofs: register fscache context for primary data blob
  2022-04-06  7:56 ` [PATCH v8 14/20] erofs: register fscache context for primary data blob Jeffle Xu
@ 2022-04-07 14:09   ` Gao Xiang
  0 siblings, 0 replies; 56+ messages in thread
From: Gao Xiang @ 2022-04-07 14:09 UTC (permalink / raw)
  To: Jeffle Xu
  Cc: dhowells, linux-cachefs, xiang, chao, linux-erofs, torvalds,
	gregkh, willy, linux-fsdevel, joseph.qi, bo.liu, tao.peng, gerry,
	eguan, linux-kernel, luodaowen.backend, tianzichen, fannaihao

On Wed, Apr 06, 2022 at 03:56:06PM +0800, Jeffle Xu wrote:
> Register an fscache context for the primary data blob. Also move the
> initialization of s_op and related fields forward, since the anonymous
> inode will be allocated under the super block when registering the
> fscache context.
> 
> Two things are worth mentioning about the cleanup routine:
> 
> 1. The fscache context will instantiate anonymous inodes under the super
> block. Release these anonymous inodes when .put_super() is called, or
> we'll get a "VFS: Busy inodes after unmount." warning.
> 
> 2. The fscache context is initialized prior to the root inode. If
> .kill_sb() is called because the mount failed, .put_super() won't be
> called since the root inode has not been initialized yet. Thus
> .kill_sb() shall also contain the cleanup routine.
> 
> Signed-off-by: Jeffle Xu <jefflexu@linux.alibaba.com>

Reviewed-by: Gao Xiang <hsiangkao@linux.alibaba.com>

Thanks,
Gao Xiang

> ---
>  fs/erofs/internal.h |  1 +
>  fs/erofs/super.c    | 15 +++++++++++----
>  2 files changed, 12 insertions(+), 4 deletions(-)
> 
> diff --git a/fs/erofs/internal.h b/fs/erofs/internal.h
> index 3a4a344cfed3..eb37b33bce37 100644
> --- a/fs/erofs/internal.h
> +++ b/fs/erofs/internal.h
> @@ -155,6 +155,7 @@ struct erofs_sb_info {
>  
>  	/* fscache support */
>  	struct fscache_volume *volume;
> +	struct erofs_fscache *s_fscache;
>  };
>  
>  #define EROFS_SB(sb) ((struct erofs_sb_info *)(sb)->s_fs_info)
> diff --git a/fs/erofs/super.c b/fs/erofs/super.c
> index 6590ed1b7d3b..9498b899b73b 100644
> --- a/fs/erofs/super.c
> +++ b/fs/erofs/super.c
> @@ -585,6 +585,9 @@ static int erofs_fc_fill_super(struct super_block *sb, struct fs_context *fc)
>  	int err;
>  
>  	sb->s_magic = EROFS_SUPER_MAGIC;
> +	sb->s_flags |= SB_RDONLY | SB_NOATIME;
> +	sb->s_maxbytes = MAX_LFS_FILESIZE;
> +	sb->s_op = &erofs_sops;
>  
>  	if (!sb_set_blocksize(sb, EROFS_BLKSIZ)) {
>  		erofs_err(sb, "failed to set erofs blksize");
> @@ -605,6 +608,11 @@ static int erofs_fc_fill_super(struct super_block *sb, struct fs_context *fc)
>  		err = erofs_fscache_register_fs(sb);
>  		if (err)
>  			return err;
> +
> +		err = erofs_fscache_register_cookie(sb, &sbi->s_fscache,
> +						    sbi->opt.fsid, true);
> +		if (err)
> +			return err;
>  	}
>  
>  	err = erofs_read_superblock(sb);
> @@ -619,11 +627,8 @@ static int erofs_fc_fill_super(struct super_block *sb, struct fs_context *fc)
>  			clear_opt(&sbi->opt, DAX_ALWAYS);
>  		}
>  	}
> -	sb->s_flags |= SB_RDONLY | SB_NOATIME;
> -	sb->s_maxbytes = MAX_LFS_FILESIZE;
> -	sb->s_time_gran = 1;
>  
> -	sb->s_op = &erofs_sops;
> +	sb->s_time_gran = 1;
>  	sb->s_xattr = erofs_xattr_handlers;
>  
>  	if (test_opt(&sbi->opt, POSIX_ACL))
> @@ -763,6 +768,7 @@ static void erofs_kill_sb(struct super_block *sb)
>  
>  	erofs_free_dev_context(sbi->devs);
>  	fs_put_dax(sbi->dax_dev);
> +	erofs_fscache_unregister_cookie(&sbi->s_fscache);
>  	erofs_fscache_unregister_fs(sb);
>  	kfree(sbi);
>  	sb->s_fs_info = NULL;
> @@ -781,6 +787,7 @@ static void erofs_put_super(struct super_block *sb)
>  	iput(sbi->managed_cache);
>  	sbi->managed_cache = NULL;
>  #endif
> +	erofs_fscache_unregister_cookie(&sbi->s_fscache);
>  }
>  
>  static struct file_system_type erofs_fs_type = {
> -- 
> 2.27.0
> 

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH v8 15/20] erofs: register fscache context for extra data blobs
  2022-04-06  7:56 ` [PATCH v8 15/20] erofs: register fscache context for extra data blobs Jeffle Xu
@ 2022-04-07 14:15   ` Gao Xiang
  2022-04-08  2:11     ` JeffleXu
  0 siblings, 1 reply; 56+ messages in thread
From: Gao Xiang @ 2022-04-07 14:15 UTC (permalink / raw)
  To: Jeffle Xu
  Cc: dhowells, linux-cachefs, xiang, chao, linux-erofs, torvalds,
	gregkh, willy, linux-fsdevel, joseph.qi, bo.liu, tao.peng, gerry,
	eguan, linux-kernel, luodaowen.backend, tianzichen, fannaihao

On Wed, Apr 06, 2022 at 03:56:07PM +0800, Jeffle Xu wrote:
> Similar to the multi-device mode, erofs can be mounted from one primary
> data blob (mandatory) and multiple extra data blobs (optional).
> 
> Register an fscache context for each extra data blob.
> 
> Signed-off-by: Jeffle Xu <jefflexu@linux.alibaba.com>
> ---
>  fs/erofs/data.c     |  3 +++
>  fs/erofs/internal.h |  2 ++
>  fs/erofs/super.c    | 25 +++++++++++++++++--------
>  3 files changed, 22 insertions(+), 8 deletions(-)
> 
> diff --git a/fs/erofs/data.c b/fs/erofs/data.c
> index bc22642358ec..14b64d960541 100644
> --- a/fs/erofs/data.c
> +++ b/fs/erofs/data.c
> @@ -199,6 +199,7 @@ int erofs_map_dev(struct super_block *sb, struct erofs_map_dev *map)
>  	map->m_bdev = sb->s_bdev;
>  	map->m_daxdev = EROFS_SB(sb)->dax_dev;
>  	map->m_dax_part_off = EROFS_SB(sb)->dax_part_off;
> +	map->m_fscache = EROFS_SB(sb)->s_fscache;
>  
>  	if (map->m_deviceid) {
>  		down_read(&devs->rwsem);
> @@ -210,6 +211,7 @@ int erofs_map_dev(struct super_block *sb, struct erofs_map_dev *map)
>  		map->m_bdev = dif->bdev;
>  		map->m_daxdev = dif->dax_dev;
>  		map->m_dax_part_off = dif->dax_part_off;
> +		map->m_fscache = dif->fscache;
>  		up_read(&devs->rwsem);
>  	} else if (devs->extra_devices) {
>  		down_read(&devs->rwsem);
> @@ -227,6 +229,7 @@ int erofs_map_dev(struct super_block *sb, struct erofs_map_dev *map)
>  				map->m_bdev = dif->bdev;
>  				map->m_daxdev = dif->dax_dev;
>  				map->m_dax_part_off = dif->dax_part_off;
> +				map->m_fscache = dif->fscache;
>  				break;
>  			}
>  		}
> diff --git a/fs/erofs/internal.h b/fs/erofs/internal.h
> index eb37b33bce37..90f7d6286a4f 100644
> --- a/fs/erofs/internal.h
> +++ b/fs/erofs/internal.h
> @@ -49,6 +49,7 @@ typedef u32 erofs_blk_t;
>  
>  struct erofs_device_info {
>  	char *path;
> +	struct erofs_fscache *fscache;
>  	struct block_device *bdev;
>  	struct dax_device *dax_dev;
>  	u64 dax_part_off;
> @@ -482,6 +483,7 @@ static inline int z_erofs_map_blocks_iter(struct inode *inode,
>  #endif	/* !CONFIG_EROFS_FS_ZIP */
>  
>  struct erofs_map_dev {
> +	struct erofs_fscache *m_fscache;
>  	struct block_device *m_bdev;
>  	struct dax_device *m_daxdev;
>  	u64 m_dax_part_off;
> diff --git a/fs/erofs/super.c b/fs/erofs/super.c
> index 9498b899b73b..8c7181cd37e6 100644
> --- a/fs/erofs/super.c
> +++ b/fs/erofs/super.c
> @@ -259,15 +259,23 @@ static int erofs_init_devices(struct super_block *sb,
>  		}
>  		dis = ptr + erofs_blkoff(pos);
>  
> -		bdev = blkdev_get_by_path(dif->path,
> -					  FMODE_READ | FMODE_EXCL,
> -					  sb->s_type);
> -		if (IS_ERR(bdev)) {
> -			err = PTR_ERR(bdev);
> -			break;
> +		if (erofs_is_fscache_mode(sb)) {
> +			err = erofs_fscache_register_cookie(sb, &dif->fscache,
> +							    dif->path, false);
> +			if (err)
> +				break;
> +		} else {
> +			bdev = blkdev_get_by_path(dif->path,
> +						  FMODE_READ | FMODE_EXCL,
> +						  sb->s_type);
> +			if (IS_ERR(bdev)) {
> +				err = PTR_ERR(bdev);
> +				break;
> +			}
> +			dif->bdev = bdev;
> +			dif->dax_dev = fs_dax_get_by_bdev(bdev, &dif->dax_part_off);

Overly long line, please help split into 2 lines if possible.

Otherwise looks good,
Reviewed-by: Gao Xiang <hsiangkao@linux.alibaba.com>

Thanks,
Gao Xiang

>  		}
> -		dif->bdev = bdev;
> -		dif->dax_dev = fs_dax_get_by_bdev(bdev, &dif->dax_part_off);
> +
>  		dif->blocks = le32_to_cpu(dis->blocks);
>  		dif->mapped_blkaddr = le32_to_cpu(dis->mapped_blkaddr);
>  		sbi->total_blocks += dif->blocks;
> @@ -701,6 +709,7 @@ static int erofs_release_device_info(int id, void *ptr, void *data)
>  	fs_put_dax(dif->dax_dev);
>  	if (dif->bdev)
>  		blkdev_put(dif->bdev, FMODE_READ | FMODE_EXCL);
> +	erofs_fscache_unregister_cookie(&dif->fscache);
>  	kfree(dif->path);
>  	kfree(dif);
>  	return 0;
> -- 
> 2.27.0
> 

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH v8 16/20] erofs: implement fscache-based metadata read
  2022-04-06  7:56 ` [PATCH v8 16/20] erofs: implement fscache-based metadata read Jeffle Xu
@ 2022-04-07 14:19   ` Gao Xiang
  0 siblings, 0 replies; 56+ messages in thread
From: Gao Xiang @ 2022-04-07 14:19 UTC (permalink / raw)
  To: Jeffle Xu
  Cc: dhowells, linux-cachefs, xiang, chao, linux-erofs, torvalds,
	gregkh, willy, linux-fsdevel, joseph.qi, bo.liu, tao.peng, gerry,
	eguan, linux-kernel, luodaowen.backend, tianzichen, fannaihao

On Wed, Apr 06, 2022 at 03:56:08PM +0800, Jeffle Xu wrote:
> Implement the data plane of reading metadata from primary data blob
> over fscache.
> 
> Signed-off-by: Jeffle Xu <jefflexu@linux.alibaba.com>
> ---
>  fs/erofs/data.c     | 20 ++++++++++++++++++--
>  fs/erofs/fscache.c  | 38 ++++++++++++++++++++++++++++++++++++++
>  fs/erofs/internal.h |  9 +++++++++
>  3 files changed, 65 insertions(+), 2 deletions(-)
> 
> diff --git a/fs/erofs/data.c b/fs/erofs/data.c
> index 14b64d960541..cb8fe299ad67 100644
> --- a/fs/erofs/data.c
> +++ b/fs/erofs/data.c
> @@ -31,15 +31,26 @@ void erofs_put_metabuf(struct erofs_buf *buf)
>  void *erofs_bread(struct erofs_buf *buf, struct inode *inode,
>  		  erofs_blk_t blkaddr, enum erofs_kmap_type type)
>  {
> -	struct address_space *const mapping = inode->i_mapping;
>  	erofs_off_t offset = blknr_to_addr(blkaddr);
>  	pgoff_t index = offset >> PAGE_SHIFT;
>  	struct page *page = buf->page;
>  
>  	if (!page || page->index != index) {
>  		erofs_put_metabuf(buf);
> -		page = read_cache_page_gfp(mapping, index,
> +		if (buf->sb) {
> +			struct folio *folio;
> +
> +			folio = erofs_fscache_get_folio(buf->sb, index);
> +			if (IS_ERR(folio))
> +				page = ERR_CAST(folio);
> +			else
> +				page = folio_page(folio, 0);
> +		} else {
> +			struct address_space *const mapping = inode->i_mapping;
> +
> +			page = read_cache_page_gfp(mapping, index,
>  				mapping_gfp_constraint(mapping, ~__GFP_FS));
> +		}
>  		if (IS_ERR(page))
>  			return page;
>  		/* should already be PageUptodate, no need to lock page */
> @@ -63,6 +74,11 @@ void *erofs_bread(struct erofs_buf *buf, struct inode *inode,
>  void *erofs_read_metabuf(struct erofs_buf *buf, struct super_block *sb,
>  			 erofs_blk_t blkaddr, enum erofs_kmap_type type)
>  {
> +	if (erofs_is_fscache_mode(sb)) {
> +		buf->sb = sb;
> +		return erofs_bread(buf, NULL, blkaddr, type);
> +	}
> +
>  	return erofs_bread(buf, sb->s_bdev->bd_inode, blkaddr, type);
>  }
>  
> diff --git a/fs/erofs/fscache.c b/fs/erofs/fscache.c
> index d38a6efc8e50..158cc273f8fb 100644
> --- a/fs/erofs/fscache.c
> +++ b/fs/erofs/fscache.c
> @@ -34,9 +34,47 @@ static int erofs_fscache_read_folios(struct fscache_cookie *cookie,
>  	return ret;
>  }
>  
> +static int erofs_fscache_meta_readpage(struct file *data, struct page *page)
> +{
> +	int ret;
> +	struct super_block *sb = (struct super_block *)data;
> +	struct folio *folio = page_folio(page);
> +	struct erofs_map_dev mdev = {
> +		.m_deviceid = 0,
> +		.m_pa = folio_pos(folio),
> +	};
> +
> +	ret = erofs_map_dev(sb, &mdev);
> +	if (ret)
> +		goto out;
> +
> +	ret = erofs_fscache_read_folios(mdev.m_fscache->cookie,
> +			folio_file_mapping(folio), folio_pos(folio),
> +			folio_size(folio), mdev.m_pa);
> +	if (ret)
> +		goto out;
> +
> +	folio_mark_uptodate(folio);
> +out:
> +	folio_unlock(folio);
> +	return ret;
> +}
> +
>  static const struct address_space_operations erofs_fscache_meta_aops = {
> +	.readpage = erofs_fscache_meta_readpage,
>  };
>  
> +/*
> + * Get the page cache of data blob at the index offset.
> + * Return: up to date page on success, ERR_PTR() on failure.
> + */

Unnecessary comment and even unnecessary helper.

Thanks,
Gao Xiang

> +struct folio *erofs_fscache_get_folio(struct super_block *sb, pgoff_t index)
> +{
> +	struct erofs_fscache *ctx = EROFS_SB(sb)->s_fscache;
> +
> +	return read_mapping_folio(ctx->inode->i_mapping, index, (void *)sb);
> +}
> +
>  /*
>   * Create an fscache context for data blob.
>   * Return: 0 on success and allocated fscache context is assigned to @fscache,
> diff --git a/fs/erofs/internal.h b/fs/erofs/internal.h
> index 90f7d6286a4f..e186051f0640 100644
> --- a/fs/erofs/internal.h
> +++ b/fs/erofs/internal.h
> @@ -276,6 +276,7 @@ enum erofs_kmap_type {
>  };
>  
>  struct erofs_buf {
> +	struct super_block *sb;
>  	struct page *page;
>  	void *base;
>  	enum erofs_kmap_type kmap_type;
> @@ -639,6 +640,8 @@ int erofs_fscache_register_cookie(struct super_block *sb,
>  				  struct erofs_fscache **fscache,
>  				  char *name, bool need_inode);
>  void erofs_fscache_unregister_cookie(struct erofs_fscache **fscache);
> +
> +struct folio *erofs_fscache_get_folio(struct super_block *sb, pgoff_t index);
>  #else
>  static inline int erofs_fscache_register_fs(struct super_block *sb) { return 0; }
>  static inline void erofs_fscache_unregister_fs(struct super_block *sb) {}
> @@ -653,6 +656,12 @@ static inline int erofs_fscache_register_cookie(struct super_block *sb,
>  static inline void erofs_fscache_unregister_cookie(struct erofs_fscache **fscache)
>  {
>  }
> +
> +static inline struct folio *erofs_fscache_get_folio(struct super_block *sb,
> +						    pgoff_t index)
> +{
> +	return ERR_PTR(-EOPNOTSUPP);
> +}
>  #endif
>  
>  #define EFSCORRUPTED    EUCLEAN         /* Filesystem is corrupted */
> -- 
> 2.27.0
> 

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH v8 17/20] erofs: implement fscache-based data read for non-inline layout
  2022-04-06  7:56 ` [PATCH v8 17/20] erofs: implement fscache-based data read for non-inline layout Jeffle Xu
@ 2022-04-07 14:24   ` Gao Xiang
  0 siblings, 0 replies; 56+ messages in thread
From: Gao Xiang @ 2022-04-07 14:24 UTC (permalink / raw)
  To: Jeffle Xu
  Cc: dhowells, linux-cachefs, xiang, chao, linux-erofs, torvalds,
	gregkh, willy, linux-fsdevel, joseph.qi, bo.liu, tao.peng, gerry,
	eguan, linux-kernel, luodaowen.backend, tianzichen, fannaihao

On Wed, Apr 06, 2022 at 03:56:09PM +0800, Jeffle Xu wrote:
> Implement the data plane of reading data from data blobs over fscache
> for non-inline layout.
> 
> Signed-off-by: Jeffle Xu <jefflexu@linux.alibaba.com>
> ---
>  fs/erofs/fscache.c  | 52 +++++++++++++++++++++++++++++++++++++++++++++
>  fs/erofs/inode.c    |  5 +++++
>  fs/erofs/internal.h |  2 ++
>  3 files changed, 59 insertions(+)
> 
> diff --git a/fs/erofs/fscache.c b/fs/erofs/fscache.c
> index 158cc273f8fb..65de1c754e80 100644
> --- a/fs/erofs/fscache.c
> +++ b/fs/erofs/fscache.c
> @@ -60,10 +60,62 @@ static int erofs_fscache_meta_readpage(struct file *data, struct page *page)
>  	return ret;
>  }
>  
> +static int erofs_fscache_readpage(struct file *file, struct page *page)
> +{
> +	struct folio *folio = page_folio(page);
> +	struct inode *inode = folio_file_mapping(folio)->host;
> +	struct super_block *sb = inode->i_sb;
> +	struct erofs_map_blocks map;
> +	struct erofs_map_dev mdev;
> +	erofs_off_t pos;
> +	loff_t pstart;
> +	int ret = 0;
> +
> +	DBG_BUGON(folio_size(folio) != EROFS_BLKSIZ);
> +
> +	pos = folio_pos(folio);
> +	map.m_la = pos;
> +
> +	ret = erofs_map_blocks(inode, &map, EROFS_GET_BLOCKS_RAW);
> +	if (ret)
> +		goto out_unlock;
> +
> +	if (!(map.m_flags & EROFS_MAP_MAPPED)) {
> +		folio_zero_range(folio, 0, folio_size(folio));
> +		goto out_uptodate;
> +	}
> +
> +	/* no-inline readpage */
> +	mdev = (struct erofs_map_dev) {
> +		.m_deviceid = map.m_deviceid,
> +		.m_pa = map.m_pa,
> +	};
> +
> +	ret = erofs_map_dev(sb, &mdev);
> +	if (ret)
> +		goto out_unlock;
> +
> +	pstart = mdev.m_pa + (pos - map.m_la);
> +	ret = erofs_fscache_read_folios(mdev.m_fscache->cookie,
> +			folio_file_mapping(folio), folio_pos(folio),
> +			folio_size(folio), pstart);
> +
> +out_uptodate:
> +	if (!ret)
> +		folio_mark_uptodate(folio);
> +out_unlock:
> +	folio_unlock(folio);
> +	return ret;
> +}
> +
>  static const struct address_space_operations erofs_fscache_meta_aops = {
>  	.readpage = erofs_fscache_meta_readpage,
>  };
>  
> +const struct address_space_operations erofs_fscache_access_aops = {
> +	.readpage = erofs_fscache_readpage,
> +};
> +
>  /*
>   * Get the page cache of data blob at the index offset.
>   * Return: up to date page on success, ERR_PTR() on failure.
> diff --git a/fs/erofs/inode.c b/fs/erofs/inode.c
> index e8b37ba5e9ad..88b51b5fb53f 100644
> --- a/fs/erofs/inode.c
> +++ b/fs/erofs/inode.c
> @@ -296,7 +296,12 @@ static int erofs_fill_inode(struct inode *inode, int isdir)
>  		err = z_erofs_fill_inode(inode);
>  		goto out_unlock;
>  	}
> +

unnecessary modification.

Otherwise looks good:
Reviewed-by: Gao Xiang <hsiangkao@linux.alibaba.com>

Thanks,
Gao Xiang

>  	inode->i_mapping->a_ops = &erofs_raw_access_aops;
> +#ifdef CONFIG_EROFS_FS_ONDEMAND
> +	if (erofs_is_fscache_mode(inode->i_sb))
> +		inode->i_mapping->a_ops = &erofs_fscache_access_aops;
> +#endif
>  
>  out_unlock:
>  	erofs_put_metabuf(&buf);
> diff --git a/fs/erofs/internal.h b/fs/erofs/internal.h
> index e186051f0640..336d19647c96 100644
> --- a/fs/erofs/internal.h
> +++ b/fs/erofs/internal.h
> @@ -642,6 +642,8 @@ int erofs_fscache_register_cookie(struct super_block *sb,
>  void erofs_fscache_unregister_cookie(struct erofs_fscache **fscache);
>  
>  struct folio *erofs_fscache_get_folio(struct super_block *sb, pgoff_t index);
> +
> +extern const struct address_space_operations erofs_fscache_access_aops;
>  #else
>  static inline int erofs_fscache_register_fs(struct super_block *sb) { return 0; }
>  static inline void erofs_fscache_unregister_fs(struct super_block *sb) {}
> -- 
> 2.27.0
> 

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH v8 18/20] erofs: implement fscache-based data read for inline layout
  2022-04-06  7:56 ` [PATCH v8 18/20] erofs: implement fscache-based data read for inline layout Jeffle Xu
@ 2022-04-07 14:31   ` Gao Xiang
  0 siblings, 0 replies; 56+ messages in thread
From: Gao Xiang @ 2022-04-07 14:31 UTC (permalink / raw)
  To: Jeffle Xu
  Cc: dhowells, linux-cachefs, xiang, chao, linux-erofs, torvalds,
	gregkh, willy, linux-fsdevel, joseph.qi, bo.liu, tao.peng, gerry,
	eguan, linux-kernel, luodaowen.backend, tianzichen, fannaihao

On Wed, Apr 06, 2022 at 03:56:10PM +0800, Jeffle Xu wrote:
> Implement the data plane of reading data from data blobs over fscache
> for inline layout.
> 
> For the leading non-inline part, the data plane for the non-inline
> layout is reused; only the tail-packing part needs special handling.
> 
> Signed-off-by: Jeffle Xu <jefflexu@linux.alibaba.com>
> ---
>  fs/erofs/fscache.c | 40 ++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 40 insertions(+)
> 
> diff --git a/fs/erofs/fscache.c b/fs/erofs/fscache.c
> index 65de1c754e80..d32cb5840c6d 100644
> --- a/fs/erofs/fscache.c
> +++ b/fs/erofs/fscache.c
> @@ -60,6 +60,40 @@ static int erofs_fscache_meta_readpage(struct file *data, struct page *page)
>  	return ret;
>  }
>  
> +static int erofs_fscache_readpage_inline(struct folio *folio,
> +					 struct erofs_map_blocks *map)
> +{
> +	struct inode *inode = folio_file_mapping(folio)->host;
> +	struct super_block *sb = inode->i_sb;
> +	struct erofs_buf buf = __EROFS_BUF_INITIALIZER;
> +	erofs_blk_t blknr;
> +	size_t offset, len;
> +	void *src, *dst;
> +
> +	/*
> +	 * For inline (tail packing) layout, the offset may be non-zero, which
> +	 * can be calculated from corresponding physical address directly.
> +	 */
> +	offset = erofs_blkoff(map->m_pa);
> +	blknr = erofs_blknr(map->m_pa);
> +	len = map->m_llen;
> +
> +	src = erofs_read_metabuf(&buf, sb, blknr, EROFS_KMAP);
> +	if (IS_ERR(src))
> +		return PTR_ERR(src);
> +
> +	DBG_BUGON(folio_size(folio) != PAGE_SIZE);
> +
> +	dst = kmap(folio_page(folio, 0));

kmap_local_folio?
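
For illustration, a minimal sketch of the suggested variant, reusing
dst/src/offset/len from the function quoted above (an illustrative
rewrite, not the author's follow-up):

    dst = kmap_local_folio(folio, 0);
    memcpy(dst, src + offset, len);
    memset(dst + len, 0, PAGE_SIZE - len);
    kunmap_local(dst);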

> +	memcpy(dst, src + offset, len);
> +	memset(dst + len, 0, PAGE_SIZE - len);
> +	kunmap(folio_page(folio, 0));
> +
> +	erofs_put_metabuf(&buf);
> +
> +	return 0;
> +}
> +
>  static int erofs_fscache_readpage(struct file *file, struct page *page)
>  {
>  	struct folio *folio = page_folio(page);
> @@ -85,6 +119,12 @@ static int erofs_fscache_readpage(struct file *file, struct page *page)
>  		goto out_uptodate;
>  	}
>  
> +	/* inline readpage */

I think the code below is self-explanatory.

> +	if (map.m_flags & EROFS_MAP_META) {
> +		ret = erofs_fscache_readpage_inline(folio, &map);
> +		goto out_uptodate;
> +	}
> +
>  	/* no-inline readpage */

Same here.

Thanks,
Gao Xiang

>  	mdev = (struct erofs_map_dev) {
>  		.m_deviceid = map.m_deviceid,
> -- 
> 2.27.0
> 

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH v8 19/20] erofs: implement fscache-based data readahead
  2022-04-06  7:56 ` [PATCH v8 19/20] erofs: implement fscache-based data readahead Jeffle Xu
@ 2022-04-07 14:36   ` Gao Xiang
  0 siblings, 0 replies; 56+ messages in thread
From: Gao Xiang @ 2022-04-07 14:36 UTC (permalink / raw)
  To: Jeffle Xu
  Cc: dhowells, linux-cachefs, xiang, chao, linux-erofs, torvalds,
	gregkh, willy, linux-fsdevel, joseph.qi, bo.liu, tao.peng, gerry,
	eguan, linux-kernel, luodaowen.backend, tianzichen, fannaihao

On Wed, Apr 06, 2022 at 03:56:11PM +0800, Jeffle Xu wrote:
> Implement fscache-based data readahead. Also register an individual
> bdi for each erofs instance to enable readahead.
> 
> Signed-off-by: Jeffle Xu <jefflexu@linux.alibaba.com>
> ---
>  fs/erofs/fscache.c | 94 ++++++++++++++++++++++++++++++++++++++++++++++
>  fs/erofs/super.c   |  4 ++
>  2 files changed, 98 insertions(+)
> 
> diff --git a/fs/erofs/fscache.c b/fs/erofs/fscache.c
> index d32cb5840c6d..620d44210809 100644
> --- a/fs/erofs/fscache.c
> +++ b/fs/erofs/fscache.c
> @@ -148,12 +148,106 @@ static int erofs_fscache_readpage(struct file *file, struct page *page)
>  	return ret;
>  }
>  
> +static inline void erofs_fscache_unlock_folios(struct readahead_control *rac,
> +					       size_t len)
> +{
> +	while (len) {
> +		struct folio *folio = readahead_folio(rac);
> +
> +		len -= folio_size(folio);
> +		folio_mark_uptodate(folio);
> +		folio_unlock(folio);
> +	}
> +}
> +
> +static void erofs_fscache_readahead(struct readahead_control *rac)
> +{
> +	struct inode *inode = rac->mapping->host;
> +	struct super_block *sb = inode->i_sb;
> +	size_t len, count, done = 0;
> +	erofs_off_t pos;
> +	loff_t start, offset;
> +	int ret;
> +
> +	if (!readahead_count(rac))
> +		return;
> +
> +	start = readahead_pos(rac);
> +	len = readahead_length(rac);
> +
> +	do {
> +		struct erofs_map_blocks map;
> +		struct erofs_map_dev mdev;
> +
> +		pos = start + done;
> +		map.m_la = pos;
> +
> +		ret = erofs_map_blocks(inode, &map, EROFS_GET_BLOCKS_RAW);
> +		if (ret)
> +			return;
> +
> +		/*
> +		 * 1) For CHUNK_BASED layout, the output m_la is rounded down to
> +		 * the nearest chunk boundary, and the output m_llen actually
> +		 * starts from the start of the containing chunk.
> +		 * 2) For other cases, the output m_la is equal to o_la.
> +		 */

I think such a comment is really unneeded; we should just always
calculate it as below. Also, I don't see o_la referenced here anymore.

> +		offset = start + done;
> +		count = min_t(size_t, map.m_llen - (pos - map.m_la), len - done);
> +
> +		/* Read-ahead Hole */
> +		if (!(map.m_flags & EROFS_MAP_MAPPED)) {
> +			struct iov_iter iter;
> +
> +			iov_iter_xarray(&iter, READ, &rac->mapping->i_pages,
> +					offset, count);
> +			iov_iter_zero(count, &iter);
> +
> +			erofs_fscache_unlock_folios(rac, count);
> +			ret = count;
> +			continue;
> +		}
> +
> +		/* Read-ahead Inline */

Unnecessary comment.

> +		if (map.m_flags & EROFS_MAP_META) {
> +			struct folio *folio = readahead_folio(rac);
> +
> +			ret = erofs_fscache_readpage_inline(folio, &map);
> +			if (!ret) {
> +				folio_mark_uptodate(folio);
> +				ret = folio_size(folio);
> +			}
> +
> +			folio_unlock(folio);
> +			continue;
> +		}
> +
> +		/* Read-ahead No-inline */

Same here.

Thanks,
Gao Xiang

> +		mdev = (struct erofs_map_dev) {
> +			.m_deviceid = map.m_deviceid,
> +			.m_pa = map.m_pa,
> +		};
> +		ret = erofs_map_dev(sb, &mdev);
> +		if (ret)
> +			return;
> +
> +		ret = erofs_fscache_read_folios(mdev.m_fscache->cookie,
> +				rac->mapping, offset, count,
> +				mdev.m_pa + (pos - map.m_la));
> +		if (!ret) {
> +			erofs_fscache_unlock_folios(rac, count);
> +			ret = count;
> +		}
> +	} while (ret > 0 && ((done += ret) < len));
> +}
> +
>  static const struct address_space_operations erofs_fscache_meta_aops = {
>  	.readpage = erofs_fscache_meta_readpage,
>  };
>  
>  const struct address_space_operations erofs_fscache_access_aops = {
>  	.readpage = erofs_fscache_readpage,
> +	.readahead = erofs_fscache_readahead,
>  };
>  
>  /*
> diff --git a/fs/erofs/super.c b/fs/erofs/super.c
> index 8c7181cd37e6..a5e4de60a0d8 100644
> --- a/fs/erofs/super.c
> +++ b/fs/erofs/super.c
> @@ -621,6 +621,10 @@ static int erofs_fc_fill_super(struct super_block *sb, struct fs_context *fc)
>  						    sbi->opt.fsid, true);
>  		if (err)
>  			return err;
> +
> +		err = super_setup_bdi(sb);
> +		if (err)
> +			return err;
>  	}
>  
>  	err = erofs_read_superblock(sb);
> -- 
> 2.27.0
> 
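
For what it's worth, the overall flow of the readahead loop above: each
iteration maps one logical extent with erofs_map_blocks(), then either
zeroes an unmapped hole, copies the inline tail, or reads the mapped
range from fscache; the folios just covered are marked up to date and
unlocked, and 'done' advances by the bytes handled until the whole
readahead window is consumed or an error occurs.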

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH v8 20/20] erofs: add 'fsid' mount option
  2022-04-06  7:56 ` [PATCH v8 20/20] erofs: add 'fsid' mount option Jeffle Xu
@ 2022-04-07 14:39   ` Gao Xiang
  0 siblings, 0 replies; 56+ messages in thread
From: Gao Xiang @ 2022-04-07 14:39 UTC (permalink / raw)
  To: Jeffle Xu
  Cc: dhowells, linux-cachefs, xiang, chao, linux-erofs, torvalds,
	gregkh, willy, linux-fsdevel, joseph.qi, bo.liu, tao.peng, gerry,
	eguan, linux-kernel, luodaowen.backend, tianzichen, fannaihao

On Wed, Apr 06, 2022 at 03:56:12PM +0800, Jeffle Xu wrote:
> Introduce the 'fsid' mount option to enable on-demand read semantics,
> in which case erofs will be mounted from data blobs. Users can specify
> the name of the primary data blob with this mount option.
> 
> Signed-off-by: Jeffle Xu <jefflexu@linux.alibaba.com>
> ---
>  fs/erofs/super.c | 48 ++++++++++++++++++++++++++++++++++++++++++------
>  1 file changed, 42 insertions(+), 6 deletions(-)
> 
> diff --git a/fs/erofs/super.c b/fs/erofs/super.c
> index a5e4de60a0d8..292b4a70ce19 100644
> --- a/fs/erofs/super.c
> +++ b/fs/erofs/super.c
> @@ -398,6 +398,7 @@ enum {
>  	Opt_dax,
>  	Opt_dax_enum,
>  	Opt_device,
> +	Opt_fsid,
>  	Opt_err
>  };
>  
> @@ -422,6 +423,7 @@ static const struct fs_parameter_spec erofs_fs_parameters[] = {
>  	fsparam_flag("dax",             Opt_dax),
>  	fsparam_enum("dax",		Opt_dax_enum, erofs_dax_param_enums),
>  	fsparam_string("device",	Opt_device),
> +	fsparam_string("fsid",		Opt_fsid),
>  	{}
>  };
>  
> @@ -517,6 +519,16 @@ static int erofs_fc_parse_param(struct fs_context *fc,
>  		}
>  		++ctx->devs->extra_devices;
>  		break;
> +	case Opt_fsid:
> +#ifdef CONFIG_EROFS_FS_ONDEMAND
> +		kfree(ctx->opt.fsid);
> +		ctx->opt.fsid = kstrdup(param->string, GFP_KERNEL);
> +		if (!ctx->opt.fsid)
> +			return -ENOMEM;
> +#else
> +		errorfc(fc, "fsid option not supported");
> +#endif
> +		break;
>  	default:
>  		return -ENOPARAM;
>  	}
> @@ -597,9 +609,14 @@ static int erofs_fc_fill_super(struct super_block *sb, struct fs_context *fc)
>  	sb->s_maxbytes = MAX_LFS_FILESIZE;
>  	sb->s_op = &erofs_sops;
>  
> -	if (!sb_set_blocksize(sb, EROFS_BLKSIZ)) {
> -		erofs_err(sb, "failed to set erofs blksize");
> -		return -EINVAL;
> +	if (erofs_is_fscache_mode(sb)) {
> +		sb->s_blocksize = EROFS_BLKSIZ;
> +		sb->s_blocksize_bits = LOG_BLOCK_SIZE;
> +	} else {
> +		if (!sb_set_blocksize(sb, EROFS_BLKSIZ)) {
> +			erofs_err(sb, "failed to set erofs blksize");
> +			return -EINVAL;
> +		}
>  	}
>  
>  	sbi = kzalloc(sizeof(*sbi), GFP_KERNEL);
> @@ -608,7 +625,7 @@ static int erofs_fc_fill_super(struct super_block *sb, struct fs_context *fc)
>  
>  	sb->s_fs_info = sbi;
>  	sbi->opt = ctx->opt;
> -	sbi->dax_dev = fs_dax_get_by_bdev(sb->s_bdev, &sbi->dax_part_off);
> +	ctx->opt.fsid = NULL;
>  	sbi->devs = ctx->devs;
>  	ctx->devs = NULL;
>  
> @@ -625,6 +642,8 @@ static int erofs_fc_fill_super(struct super_block *sb, struct fs_context *fc)
>  		err = super_setup_bdi(sb);
>  		if (err)
>  			return err;
> +	} else {
> +		sbi->dax_dev = fs_dax_get_by_bdev(sb->s_bdev, &sbi->dax_part_off);

Should this go with the previous patch? There is also an overly long
line here.

Thanks,
Gao Xiang

>  	}
>  
>  	err = erofs_read_superblock(sb);
> @@ -684,6 +703,11 @@ static int erofs_fc_fill_super(struct super_block *sb, struct fs_context *fc)
>  
>  static int erofs_fc_get_tree(struct fs_context *fc)
>  {
> +	struct erofs_fs_context *ctx = fc->fs_private;
> +
> +	if (IS_ENABLED(CONFIG_EROFS_FS_ONDEMAND) && ctx->opt.fsid)
> +		return get_tree_nodev(fc, erofs_fc_fill_super);
> +
>  	return get_tree_bdev(fc, erofs_fc_fill_super);
>  }
>  
> @@ -733,6 +757,7 @@ static void erofs_fc_free(struct fs_context *fc)
>  	struct erofs_fs_context *ctx = fc->fs_private;
>  
>  	erofs_free_dev_context(ctx->devs);
> +	kfree(ctx->opt.fsid);
>  	kfree(ctx);
>  }
>  
> @@ -773,7 +798,10 @@ static void erofs_kill_sb(struct super_block *sb)
>  
>  	WARN_ON(sb->s_magic != EROFS_SUPER_MAGIC);
>  
> -	kill_block_super(sb);
> +	if (erofs_is_fscache_mode(sb))
> +		generic_shutdown_super(sb);
> +	else
> +		kill_block_super(sb);
>  
>  	sbi = EROFS_SB(sb);
>  	if (!sbi)
> @@ -783,6 +811,7 @@ static void erofs_kill_sb(struct super_block *sb)
>  	fs_put_dax(sbi->dax_dev);
>  	erofs_fscache_unregister_cookie(&sbi->s_fscache);
>  	erofs_fscache_unregister_fs(sb);
> +	kfree(sbi->opt.fsid);
>  	kfree(sbi);
>  	sb->s_fs_info = NULL;
>  }
> @@ -884,7 +913,10 @@ static int erofs_statfs(struct dentry *dentry, struct kstatfs *buf)
>  {
>  	struct super_block *sb = dentry->d_sb;
>  	struct erofs_sb_info *sbi = EROFS_SB(sb);
> -	u64 id = huge_encode_dev(sb->s_bdev->bd_dev);
> +	u64 id = 0;
> +
> +	if (!erofs_is_fscache_mode(sb))
> +		id = huge_encode_dev(sb->s_bdev->bd_dev);
>  
>  	buf->f_type = sb->s_magic;
>  	buf->f_bsize = EROFS_BLKSIZ;
> @@ -929,6 +961,10 @@ static int erofs_show_options(struct seq_file *seq, struct dentry *root)
>  		seq_puts(seq, ",dax=always");
>  	if (test_opt(opt, DAX_NEVER))
>  		seq_puts(seq, ",dax=never");
> +#ifdef CONFIG_EROFS_FS_ONDEMAND
> +	if (opt->fsid)
> +		seq_printf(seq, ",fsid=%s", opt->fsid);
> +#endif
>  	return 0;
>  }
>  
> -- 
> 2.27.0
> 

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH v8 15/20] erofs: register fscache context for extra data blobs
  2022-04-07 14:15   ` Gao Xiang
@ 2022-04-08  2:11     ` JeffleXu
  0 siblings, 0 replies; 56+ messages in thread
From: JeffleXu @ 2022-04-08  2:11 UTC (permalink / raw)
  To: dhowells, linux-cachefs, xiang, chao, linux-erofs, torvalds,
	gregkh, willy, linux-fsdevel, joseph.qi, bo.liu, tao.peng, gerry,
	eguan, linux-kernel, luodaowen.backend, tianzichen, fannaihao



On 4/7/22 10:15 PM, Gao Xiang wrote:
> On Wed, Apr 06, 2022 at 03:56:07PM +0800, Jeffle Xu wrote:
>> Similar to the multi device mode, erofs could be mounted from one
>> primary data blob (mandatory) and multiple extra data blobs (optional).
>>
>> Register fscache context for each extra data blob.
>>
>> Signed-off-by: Jeffle Xu <jefflexu@linux.alibaba.com>
>> ---
>>  fs/erofs/data.c     |  3 +++
>>  fs/erofs/internal.h |  2 ++
>>  fs/erofs/super.c    | 25 +++++++++++++++++--------
>>  3 files changed, 22 insertions(+), 8 deletions(-)
>>
>> diff --git a/fs/erofs/data.c b/fs/erofs/data.c
>> index bc22642358ec..14b64d960541 100644
>> --- a/fs/erofs/data.c
>> +++ b/fs/erofs/data.c
>> @@ -199,6 +199,7 @@ int erofs_map_dev(struct super_block *sb, struct erofs_map_dev *map)
>>  	map->m_bdev = sb->s_bdev;
>>  	map->m_daxdev = EROFS_SB(sb)->dax_dev;
>>  	map->m_dax_part_off = EROFS_SB(sb)->dax_part_off;
>> +	map->m_fscache = EROFS_SB(sb)->s_fscache;
>>  
>>  	if (map->m_deviceid) {
>>  		down_read(&devs->rwsem);
>> @@ -210,6 +211,7 @@ int erofs_map_dev(struct super_block *sb, struct erofs_map_dev *map)
>>  		map->m_bdev = dif->bdev;
>>  		map->m_daxdev = dif->dax_dev;
>>  		map->m_dax_part_off = dif->dax_part_off;
>> +		map->m_fscache = dif->fscache;
>>  		up_read(&devs->rwsem);
>>  	} else if (devs->extra_devices) {
>>  		down_read(&devs->rwsem);
>> @@ -227,6 +229,7 @@ int erofs_map_dev(struct super_block *sb, struct erofs_map_dev *map)
>>  				map->m_bdev = dif->bdev;
>>  				map->m_daxdev = dif->dax_dev;
>>  				map->m_dax_part_off = dif->dax_part_off;
>> +				map->m_fscache = dif->fscache;
>>  				break;
>>  			}
>>  		}
>> diff --git a/fs/erofs/internal.h b/fs/erofs/internal.h
>> index eb37b33bce37..90f7d6286a4f 100644
>> --- a/fs/erofs/internal.h
>> +++ b/fs/erofs/internal.h
>> @@ -49,6 +49,7 @@ typedef u32 erofs_blk_t;
>>  
>>  struct erofs_device_info {
>>  	char *path;
>> +	struct erofs_fscache *fscache;
>>  	struct block_device *bdev;
>>  	struct dax_device *dax_dev;
>>  	u64 dax_part_off;
>> @@ -482,6 +483,7 @@ static inline int z_erofs_map_blocks_iter(struct inode *inode,
>>  #endif	/* !CONFIG_EROFS_FS_ZIP */
>>  
>>  struct erofs_map_dev {
>> +	struct erofs_fscache *m_fscache;
>>  	struct block_device *m_bdev;
>>  	struct dax_device *m_daxdev;
>>  	u64 m_dax_part_off;
>> diff --git a/fs/erofs/super.c b/fs/erofs/super.c
>> index 9498b899b73b..8c7181cd37e6 100644
>> --- a/fs/erofs/super.c
>> +++ b/fs/erofs/super.c
>> @@ -259,15 +259,23 @@ static int erofs_init_devices(struct super_block *sb,
>>  		}
>>  		dis = ptr + erofs_blkoff(pos);
>>  
>> -		bdev = blkdev_get_by_path(dif->path,
>> -					  FMODE_READ | FMODE_EXCL,
>> -					  sb->s_type);
>> -		if (IS_ERR(bdev)) {
>> -			err = PTR_ERR(bdev);
>> -			break;
>> +		if (erofs_is_fscache_mode(sb)) {
>> +			err = erofs_fscache_register_cookie(sb, &dif->fscache,
>> +							    dif->path, false);
>> +			if (err)
>> +				break;
>> +		} else {
>> +			bdev = blkdev_get_by_path(dif->path,
>> +						  FMODE_READ | FMODE_EXCL,
>> +						  sb->s_type);
>> +			if (IS_ERR(bdev)) {
>> +				err = PTR_ERR(bdev);
>> +				break;
>> +			}
>> +			dif->bdev = bdev;
>> +			dif->dax_dev = fs_dax_get_by_bdev(bdev, &dif->dax_part_off);
> 
> Overly long line, please help split into 2 lines if possible.
> 

Will be fixed in the next version.


-- 
Thanks,
Jeffle

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH v8 12/20] erofs: add anonymous inode managing page cache for data blob
  2022-04-07  5:31   ` Gao Xiang
@ 2022-04-08  2:14     ` JeffleXu
  0 siblings, 0 replies; 56+ messages in thread
From: JeffleXu @ 2022-04-08  2:14 UTC (permalink / raw)
  To: dhowells, linux-cachefs, xiang, chao, linux-erofs, torvalds,
	gregkh, willy, linux-fsdevel, joseph.qi, bo.liu, tao.peng, gerry,
	eguan, linux-kernel, luodaowen.backend, tianzichen, fannaihao



On 4/7/22 1:31 PM, Gao Xiang wrote:
> On Wed, Apr 06, 2022 at 03:56:04PM +0800, Jeffle Xu wrote:
>> Introduce one anonymous inode managing page cache for data blob. Then
>> erofs could read directly from the address space of the anonymous inode
>> when cache hit.
> 
> Introduce one anonymous inode for data blobs so that erofs
> can cache metadata directly within such an anonymous inode.
> 

Thanks. Commit message will be updated in the next version.

-- 
Thanks,
Jeffle

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH v8 00/20] fscache,erofs: fscache-based on-demand read semantics
  2022-04-06  7:55 [PATCH v8 00/20] fscache,erofs: fscache-based on-demand read semantics Jeffle Xu
                   ` (19 preceding siblings ...)
  2022-04-06  7:56 ` [PATCH v8 20/20] erofs: add 'fsid' mount option Jeffle Xu
@ 2022-04-10 12:51 ` Gao Xiang
  2022-04-13 12:27   ` 田子晨
  2022-04-14  8:10   ` Jiachen Zhang
  2022-04-11 12:13 ` [PATCH v8 02/20] cachefiles: extract write routine David Howells
                   ` (6 subsequent siblings)
  27 siblings, 2 replies; 56+ messages in thread
From: Gao Xiang @ 2022-04-10 12:51 UTC (permalink / raw)
  To: Jeffle Xu
  Cc: dhowells, linux-cachefs, xiang, chao, linux-erofs, torvalds,
	gregkh, willy, linux-fsdevel, joseph.qi, bo.liu, tao.peng, gerry,
	eguan, linux-kernel, luodaowen.backend, tianzichen, fannaihao

On Wed, Apr 06, 2022 at 03:55:52PM +0800, Jeffle Xu wrote:
> changes since v7:
> - rebased to 5.18-rc1
> - include "cachefiles: unmark inode in use in error path" patch into
>   this patchset to avoid warning from test robot (patch 1)
> - cachefiles: rename [cookie|volume]_key_len field of struct
>   cachefiles_open to [cookie|volume]_key_size to avoid potential
>   misunderstanding. Also add more documentation to
>   include/uapi/linux/cachefiles.h. (patch 3)
> - cachefiles: valid check for error code returned from user daemon
>   (patch 3)
> - cachefiles: change WARN_ON_ONCE() to pr_info_once() when user daemon
>   closes anon_fd prematurely (patch 4/5)
> - ready for complete review
> 
> 
> Kernel Patchset
> ---------------
> Git tree:
> 
>     https://github.com/lostjeffle/linux.git jingbo/dev-erofs-fscache-v8
> 
> Gitweb:
> 
>     https://github.com/lostjeffle/linux/commits/jingbo/dev-erofs-fscache-v8
> 
> 
> User Daemon for Quick Test
> --------------------------
> Git tree:
> 
>     https://github.com/lostjeffle/demand-read-cachefilesd.git main
> 
> Gitweb:
> 
>     https://github.com/lostjeffle/demand-read-cachefilesd
> 

Btw, we've also finished a preliminary end-to-end on-demand download
daemon in order to test the fscache on-demand kernel code as a real
end-to-end workload for container use cases:

User guide: https://github.com/dragonflyoss/image-service/blob/fscache/docs/nydus-fscache.md
Video: https://youtu.be/F4IF2_DENXo

Thanks,
Gao Xiang

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH v8 02/20] cachefiles: extract write routine
  2022-04-06  7:55 [PATCH v8 00/20] fscache,erofs: fscache-based on-demand read semantics Jeffle Xu
                   ` (20 preceding siblings ...)
  2022-04-10 12:51 ` [PATCH v8 00/20] fscache,erofs: fscache-based on-demand read semantics Gao Xiang
@ 2022-04-11 12:13 ` David Howells
  2022-04-11 12:29   ` JeffleXu
  2022-04-11 12:28 ` [PATCH v8 03/20] cachefiles: notify user daemon with anon_fd when looking up cookie David Howells
                   ` (5 subsequent siblings)
  27 siblings, 1 reply; 56+ messages in thread
From: David Howells @ 2022-04-11 12:13 UTC (permalink / raw)
  To: Jeffle Xu
  Cc: dhowells, linux-cachefs, xiang, chao, linux-erofs, torvalds,
	gregkh, willy, linux-fsdevel, joseph.qi, bo.liu, tao.peng, gerry,
	eguan, linux-kernel, luodaowen.backend, tianzichen, fannaihao

Jeffle Xu <jefflexu@linux.alibaba.com> wrote:

> It is worth nothing that, ki->inval_counter is not initialized after
> this cleanup.

I think you meant "It is worth noting that, ...".

Btw, is there a particular reason that you didn't want to pass in a pointer to
a netfs_cache_resources struct?

David


^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH v8 03/20] cachefiles: notify user daemon with anon_fd when looking up cookie
  2022-04-06  7:55 [PATCH v8 00/20] fscache,erofs: fscache-based on-demand read semantics Jeffle Xu
                   ` (21 preceding siblings ...)
  2022-04-11 12:13 ` [PATCH v8 02/20] cachefiles: extract write routine David Howells
@ 2022-04-11 12:28 ` David Howells
  2022-04-11 12:36   ` JeffleXu
  2022-04-11 12:32 ` David Howells
                   ` (4 subsequent siblings)
  27 siblings, 1 reply; 56+ messages in thread
From: David Howells @ 2022-04-11 12:28 UTC (permalink / raw)
  To: Jeffle Xu
  Cc: dhowells, linux-cachefs, xiang, chao, linux-erofs, torvalds,
	gregkh, willy, linux-fsdevel, joseph.qi, bo.liu, tao.peng, gerry,
	eguan, linux-kernel, luodaowen.backend, tianzichen, fannaihao

Jeffle Xu <jefflexu@linux.alibaba.com> wrote:

> +	  This permits on-demand read mode of cachefiles. In this mode, when
> +	  cache miss, the cachefiles backend instead of netfs, is responsible
> +          for fetching data, e.g. through user daemon.

That third line should probably begin with a tab as the other two lines do.

> +static inline void cachefiles_flush_reqs(struct cachefiles_cache *cache)

If it's in a .c file, there's no need to mark it "inline".  The compiler will
inline it anyway if it decides it should.

> +#ifdef CONFIG_CACHEFILES_ONDEMAND
> +	cachefiles_flush_reqs(cache);
> +	xa_destroy(&cache->reqs);
> +#endif

If cachefiles_flush_reqs() is only used in this one place, the xa_destroy()
should possibly be moved into it.

David


^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH v8 02/20] cachefiles: extract write routine
  2022-04-11 12:13 ` [PATCH v8 02/20] cachefiles: extract write routine David Howells
@ 2022-04-11 12:29   ` JeffleXu
  0 siblings, 0 replies; 56+ messages in thread
From: JeffleXu @ 2022-04-11 12:29 UTC (permalink / raw)
  To: David Howells
  Cc: linux-cachefs, xiang, chao, linux-erofs, torvalds, gregkh, willy,
	linux-fsdevel, joseph.qi, bo.liu, tao.peng, gerry, eguan,
	linux-kernel, luodaowen.backend, tianzichen, fannaihao



On 4/11/22 8:13 PM, David Howells wrote:
> Jeffle Xu <jefflexu@linux.alibaba.com> wrote:
> 
>> It is worth nothing that, ki->inval_counter is not initialized after
>> this cleanup.
> 
> I think you meant "It is worth noting that, ...".

Yeah...

> 
> Btw, is there a particular reason that you didn't want to pass in a pointer to
> a netfs_cache_resources struct?

IMHO, "struct netfs_cache_resources" is more like an interface for
netfs, while here __cachefiles_prepare_write() and __cachefiles_write()
are called inside Cachefiles module.

-- 
Thanks,
Jeffle

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH v8 03/20] cachefiles: notify user daemon with anon_fd when looking up cookie
  2022-04-06  7:55 [PATCH v8 00/20] fscache,erofs: fscache-based on-demand read semantics Jeffle Xu
                   ` (22 preceding siblings ...)
  2022-04-11 12:28 ` [PATCH v8 03/20] cachefiles: notify user daemon with anon_fd when looking up cookie David Howells
@ 2022-04-11 12:32 ` David Howells
  2022-04-11 12:36   ` JeffleXu
  2022-04-11 12:35 ` [PATCH v8 04/20] cachefiles: notify user daemon when withdrawing cookie David Howells
                   ` (3 subsequent siblings)
  27 siblings, 1 reply; 56+ messages in thread
From: David Howells @ 2022-04-11 12:32 UTC (permalink / raw)
  To: Jeffle Xu
  Cc: dhowells, linux-cachefs, xiang, chao, linux-erofs, torvalds,
	gregkh, willy, linux-fsdevel, joseph.qi, bo.liu, tao.peng, gerry,
	eguan, linux-kernel, luodaowen.backend, tianzichen, fannaihao

Jeffle Xu <jefflexu@linux.alibaba.com> wrote:

> +static int init_open_req(struct cachefiles_req *req, void *private)

Please prefix with "cachefiles_".

David


^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH v8 04/20] cachefiles: notify user daemon when withdrawing cookie
  2022-04-06  7:55 [PATCH v8 00/20] fscache,erofs: fscache-based on-demand read semantics Jeffle Xu
                   ` (23 preceding siblings ...)
  2022-04-11 12:32 ` David Howells
@ 2022-04-11 12:35 ` David Howells
  2022-04-11 12:48   ` JeffleXu
  2022-04-11 13:42   ` David Howells
  2022-04-11 12:44 ` [PATCH v8 05/20] cachefiles: implement on-demand read David Howells
                   ` (2 subsequent siblings)
  27 siblings, 2 replies; 56+ messages in thread
From: David Howells @ 2022-04-11 12:35 UTC (permalink / raw)
  To: Jeffle Xu
  Cc: dhowells, linux-cachefs, xiang, chao, linux-erofs, torvalds,
	gregkh, willy, linux-fsdevel, joseph.qi, bo.liu, tao.peng, gerry,
	eguan, linux-kernel, luodaowen.backend, tianzichen, fannaihao

Jeffle Xu <jefflexu@linux.alibaba.com> wrote:

> +static int init_close_req(struct cachefiles_req *req, void *private)

"cachefiles_" prefix please.

> +	/*
> +	 * It's possible if the cookie looking up phase failed before READ
> +	 * request has ever been sent.
> +	 */

What "it" is possible?  You might want to say "It's possible that the
cookie..."

> +	if (fd == 0)
> +		return -ENOENT;

0 is a valid fd.

David


^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH v8 03/20] cachefiles: notify user daemon with anon_fd when looking up cookie
  2022-04-11 12:28 ` [PATCH v8 03/20] cachefiles: notify user daemon with anon_fd when looking up cookie David Howells
@ 2022-04-11 12:36   ` JeffleXu
  0 siblings, 0 replies; 56+ messages in thread
From: JeffleXu @ 2022-04-11 12:36 UTC (permalink / raw)
  To: David Howells
  Cc: linux-cachefs, xiang, chao, linux-erofs, torvalds, gregkh, willy,
	linux-fsdevel, joseph.qi, bo.liu, tao.peng, gerry, eguan,
	linux-kernel, luodaowen.backend, tianzichen, fannaihao



On 4/11/22 8:28 PM, David Howells wrote:
> Jeffle Xu <jefflexu@linux.alibaba.com> wrote:
> 
>> +	  This permits on-demand read mode of cachefiles. In this mode, when
>> +	  cache miss, the cachefiles backend instead of netfs, is responsible
>> +          for fetching data, e.g. through user daemon.
> 
> That third line should probably begin with a tab as the other two lines do.

Oh yeah...

> 
>> +static inline void cachefiles_flush_reqs(struct cachefiles_cache *cache)
> 
> If it's in a .c file, there's no need to mark it "inline".  The compiler will
> inline it anyway if it decides it should.

Okay.

> 
>> +#ifdef CONFIG_CACHEFILES_ONDEMAND
>> +	cachefiles_flush_reqs(cache);
>> +	xa_destroy(&cache->reqs);
>> +#endif
> 
> If cachefiles_flush_reqs() is only used in this one place, the xa_destroy()
> should possibly be moved into it.
> 

Alright.
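
For reference, a minimal sketch of what folding xa_destroy() into the
helper could look like (the request fields here follow this patchset's
struct cachefiles_req, i.e. a completion plus an error code; treat it as
illustrative rather than final code):

```
static void cachefiles_flush_reqs(struct cachefiles_cache *cache)
{
	struct cachefiles_req *req;
	unsigned long index;

	xa_lock(&cache->reqs);
	xa_for_each(&cache->reqs, index, req) {
		req->error = -EIO;		/* fail anything still queued */
		complete(&req->done);
	}
	xa_unlock(&cache->reqs);

	xa_destroy(&cache->reqs);		/* folded in from the caller */
}
```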


-- 
Thanks,
Jeffle

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH v8 03/20] cachefiles: notify user daemon with anon_fd when looking up cookie
  2022-04-11 12:32 ` David Howells
@ 2022-04-11 12:36   ` JeffleXu
  0 siblings, 0 replies; 56+ messages in thread
From: JeffleXu @ 2022-04-11 12:36 UTC (permalink / raw)
  To: David Howells
  Cc: linux-cachefs, xiang, chao, linux-erofs, torvalds, gregkh, willy,
	linux-fsdevel, joseph.qi, bo.liu, tao.peng, gerry, eguan,
	linux-kernel, luodaowen.backend, tianzichen, fannaihao



On 4/11/22 8:32 PM, David Howells wrote:
> Jeffle Xu <jefflexu@linux.alibaba.com> wrote:
> 
>> +static int init_open_req(struct cachefiles_req *req, void *private)
> 
> Please prefix with "cachefiles_".
> 

Okay.


-- 
Thanks,
Jeffle

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH v8 05/20] cachefiles: implement on-demand read
  2022-04-06  7:55 [PATCH v8 00/20] fscache,erofs: fscache-based on-demand read semantics Jeffle Xu
                   ` (24 preceding siblings ...)
  2022-04-11 12:35 ` [PATCH v8 04/20] cachefiles: notify user daemon when withdrawing cookie David Howells
@ 2022-04-11 12:44 ` David Howells
  2022-04-11 12:50   ` JeffleXu
  2022-04-11 13:38 ` [PATCH v8 07/20] cachefiles: document on-demand read mode David Howells
  2022-04-11 13:43 ` [PATCH v8 00/20] fscache,erofs: fscache-based on-demand read semantics David Howells
  27 siblings, 1 reply; 56+ messages in thread
From: David Howells @ 2022-04-11 12:44 UTC (permalink / raw)
  To: Jeffle Xu
  Cc: dhowells, linux-cachefs, xiang, chao, linux-erofs, torvalds,
	gregkh, willy, linux-fsdevel, joseph.qi, bo.liu, tao.peng, gerry,
	eguan, linux-kernel, luodaowen.backend, tianzichen, fannaihao

Jeffle Xu <jefflexu@linux.alibaba.com> wrote:

>  	/*
>  	 * Uninstall anon_fd to the cachefiles object, so that no further
>  	 * associated requests will get enqueued.
>  	 */

"Uninstall anon_fd from..."?

> +static int init_read_req(struct cachefiles_req *req, void *private)

Prefix with "cachefiles_" please (or "cachefiles_ondemand_").

David


^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH v8 04/20] cachefiles: notify user daemon when withdrawing cookie
  2022-04-11 12:35 ` [PATCH v8 04/20] cachefiles: notify user daemon when withdrawing cookie David Howells
@ 2022-04-11 12:48   ` JeffleXu
  2022-04-11 13:42   ` David Howells
  1 sibling, 0 replies; 56+ messages in thread
From: JeffleXu @ 2022-04-11 12:48 UTC (permalink / raw)
  To: David Howells
  Cc: linux-cachefs, xiang, chao, linux-erofs, torvalds, gregkh, willy,
	linux-fsdevel, joseph.qi, bo.liu, tao.peng, gerry, eguan,
	linux-kernel, luodaowen.backend, tianzichen, fannaihao



On 4/11/22 8:35 PM, David Howells wrote:
> Jeffle Xu <jefflexu@linux.alibaba.com> wrote:
> 
>> +static int init_close_req(struct cachefiles_req *req, void *private)
> 
> "cachefiles_" prefix please.

Okay.

> 
>> +	/*
>> +	 * It's possible if the cookie looking up phase failed before READ
>> +	 * request has ever been sent.
>> +	 */
> 
> What "it" is possible?  You might want to say "It's possible that the
> cookie..."

"It's possible that the following if (fd == 0) condition is triggered
when cookie looking up phase failed before READ request has ever been sent."

Anyway I will fix this comment then.

> 
>> +	if (fd == 0)
>> +		return -ENOENT;
> 
> 0 is a valid fd.

Yeah, but IMHO fd 0 is always for stdin? I think the allocated anon_fd
won't install at fd 0. Please correct me if I'm wrong.

In fact I wanna use "fd == 0" as the initial state as struct
cachefiles_object is allocated with kmem_cache_zalloc().


-- 
Thanks,
Jeffle

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH v8 05/20] cachefiles: implement on-demand read
  2022-04-11 12:44 ` [PATCH v8 05/20] cachefiles: implement on-demand read David Howells
@ 2022-04-11 12:50   ` JeffleXu
  0 siblings, 0 replies; 56+ messages in thread
From: JeffleXu @ 2022-04-11 12:50 UTC (permalink / raw)
  To: David Howells
  Cc: linux-cachefs, xiang, chao, linux-erofs, torvalds, gregkh, willy,
	linux-fsdevel, joseph.qi, bo.liu, tao.peng, gerry, eguan,
	linux-kernel, luodaowen.backend, tianzichen, fannaihao



On 4/11/22 8:44 PM, David Howells wrote:
> Jeffle Xu <jefflexu@linux.alibaba.com> wrote:
> 
>>  	/*
>>  	 * Uninstall anon_fd to the cachefiles object, so that no further
>>  	 * associated requests will get enqueued.
>>  	 */
> 
> "Uninstall anon_fd from..."?

Okay, will be fixed.

> 
>> +static int init_read_req(struct cachefiles_req *req, void *private)
> 
> Prefix with "cachefiles_" please (or "cachefiles_ondemand_").

Alright.

Thanks for reviewing.

-- 
Thanks,
Jeffle

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH v8 07/20] cachefiles: document on-demand read mode
  2022-04-06  7:55 [PATCH v8 00/20] fscache,erofs: fscache-based on-demand read semantics Jeffle Xu
                   ` (25 preceding siblings ...)
  2022-04-11 12:44 ` [PATCH v8 05/20] cachefiles: implement on-demand read David Howells
@ 2022-04-11 13:38 ` David Howells
  2022-04-12  3:17   ` JeffleXu
  2022-04-11 13:43 ` [PATCH v8 00/20] fscache,erofs: fscache-based on-demand read semantics David Howells
  27 siblings, 1 reply; 56+ messages in thread
From: David Howells @ 2022-04-11 13:38 UTC (permalink / raw)
  To: Jeffle Xu
  Cc: dhowells, linux-cachefs, xiang, chao, linux-erofs, torvalds,
	gregkh, willy, linux-fsdevel, joseph.qi, bo.liu, tao.peng, gerry,
	eguan, linux-kernel, luodaowen.backend, tianzichen, fannaihao

Jeffle Xu <jefflexu@linux.alibaba.com> wrote:

> + (*) On-demand Read.
> +

Unnecessary extra blank line.

Jeffle Xu <jefflexu@linux.alibaba.com> wrote:

> +
> +
> +On-demand Read
> +==============
> +
> +When working in original mode, cachefiles mainly serves as a local cache for
> +remote networking fs, while in on-demand read mode, cachefiles can boost the
> +scenario where on-demand read semantics is needed, e.g. container image
> +distribution.
> +
> +The essential difference between these two modes is that, in original mode,
> +when cache miss, netfs itself will fetch data from remote, and then write the
> +fetched data into cache file. While in on-demand read mode, a user daemon is
> +responsible for fetching data and then writing to the cache file.
> +
> +``CONFIG_CACHEFILES_ONDEMAND`` shall be enabled to support on-demand read mode.

You're missing a few articles there.  How about:

"""
When working in its original mode, cachefiles mainly serves as a local cache
for a remote networking fs - while in on-demand read mode, cachefiles can boost
the scenario where on-demand read semantics are needed, e.g. container image
distribution.

The essential difference between these two modes is that, in original mode,
when a cache miss occurs, the netfs will fetch the data from the remote server
and then write it to the cache file.  With on-demand read mode, however,
fetching the data and writing it into the cache is delegated to a user
daemon.

``CONFIG_CACHEFILES_ONDEMAND`` shall be enabled to support on-demand read mode.
"""

"should be enabled".

Also, two spaces after a full stop please (but not after the dot in a
contraction, e.g. "e.g.").

> +The on-demand read mode relies on a simple protocol used for communication
> +between kernel and user daemon. The model is like::

"The protocol can be modelled as"?

> +The cachefiles kernel module will send requests to

the

> user daemon when needed.
> +

the

> User daemon needs to poll on the devnode ('/dev/cachefiles') to check if
> +there's

a

> pending request to be processed.  A POLLIN event will be returned
> +when there's

a

> pending request.

> +Then user daemon needs to read

"The user daemon [than] reads "

> the devnode to fetch one

one -> a

> request and process it
> +accordingly. It is worth nothing

nothing -> noting

> that each read only gets one request. When
> +finished processing the request,

the

> user daemon needs to write the reply to the
> +devnode.

> +Each request is started with a message header like::

"is started with" -> "starts with".
"like" -> "of the form".

> +	* ``id`` is a unique ID identifying this request among all pending
> +	  requests.

What's the scope of the uniqueness of "id"?  Is it just unique to a particular
cachefiles cache?

> +	* ``len`` identifies the whole length of this request, including the
> +	  header and following type specific payload.

type-specific.

> +An optional parameter is added to "bind" command::

to the "bind" command.

> +When

the

> "bind" command takes

takes -> is given

> without argument, it defaults to the original mode.
> +When

the

> "bind" command takes

is given

> with

the

> "ondemand" argument, i.e. "bind ondemand",
> +on-demand read mode will be enabled.

> +OPEN Request

The

> +------------
> +
> +When

the

> netfs opens a cache file for the first time, a request with

the

> +CACHEFILES_OP_OPEN opcode, a.k.a

an

> OPEN request will be sent to

the

> user daemon. The
> +payload format is like::

format is like -> of the form

> +
> +	struct cachefiles_open {
> +		__u32 volume_key_size;
> +		__u32 cookie_key_size;
> +		__u32 fd;
> +		__u32 flags;
> +		__u8  data[];
> +	};
> +

"where:"

> +	* ``data`` contains

the

> volume_key and cookie_key in sequence.

Might be better to say "contains the volume_key followed directly by the
cookie_key.  The volume key is a NUL-terminated string; cookie_key is binary
data.".

> +
> +	* ``volume_key_size`` identifies

identifies -> indicates/supplies

> the size of

the

> volume key of the cache
> +	  file, in bytes. volume_key is of string format, with a suffix '\0'.
> +
> +	* ``cookie_key_size`` identifies the size of cookie key of the cache
> +	  file, in bytes. cookie_key is of binary format, which is netfs
> +	  specific.

"... indicates the size of the cookie key in bytes."

> +
> +	* ``fd`` identifies the

the -> an

> anonymous fd of

of -> referring to

> the cache file, with

with -> through

> which user
> +	  daemon can perform write/llseek file operations on the cache file.
> +
> +
> +

The

> OPEN request contains

a

> (volume_key, cookie_key, anon_fd) triple for

triplet for the

I would probably also use {...} rather than (...).

> corresponding
> +cache file. With this triple,

triplet, the

> user daemon could

could -> can

> fetch and write data into the
> +cache file in the background, even when kernel has not triggered the

the -> a

> cache miss
> +yet.

The

> User daemon is able to distinguish the requested cache file with the given
> +(volume_key, cookie_key), and write the fetched data into

the

> cache file with

with -> using

> the
> +given anon_fd.
> +
> +After recording the (volume_key, cookie_key, anon_fd) triple,

triplet, the

> user daemon shall

shall -> should

> +reply with

reply with -> complete the request by issuing a

> "copen" (complete open) command::
> +
> +	copen <id>,<cache_size>
> +
> +	* ``id`` is exactly the id field of the previous OPEN request.
> +
> +	* When >= 0, ``cache_size`` identifies the size of the cache file;
> +	  when < 0, ``cache_size`` identifies the error code ecountered by the
> +	  user daemon.

identifies -> indicates
ecountered -> encountered

> +CLOSE Request

The

> +-------------
> +When

a

> cookie withdrawed,

withdrawed -> withdrawn

> a request with

a

> CACHEFILES_OP_CLOSE opcode, a.k.a CLOSE
> +request,

Maybe phrase as "... a close request (opcode CACHEFILES_OP_CLOSE),

> will be sent to user daemon. It will notify

the

> user daemon to close the
> +attached anon_fd. The payload format is like::

like -> of the form

> +
> +	struct cachefiles_close {
> +		__u32 fd;
> +	};
> +

"where:"

> +	* ``fd`` identifies the anon_fd to be closed, which is exactly the same

"... which should be the same as that provided to the OPEN request".

Is it possible for userspace to move the fd around with dup() or whatever?

> +	  with that in OPEN request.
> +
> +
> +READ Request

The

> +------------
> +
> +When on-demand read mode is turned on, and

a

> cache miss encountered,

the

> kernel will
> +send a request with CACHEFILES_OP_READ opcode, a.k.a READ request,

"send a READ request (opcode CACHEFILES_OP_READ)"

> to

the

> user
> +daemon. It will notify

It will notify -> This will ask/tell

> user daemon to fetch data in the requested file range.
> +The payload format is like::

format is like -> is of the form

> +
> +	struct cachefiles_read {
> +		__u64 off;
> +		__u64 len;
> +		__u32 fd;
> +	};
> +
> +	* ``off`` identifies the starting offset of the requested file range.

identifies -> indicates

> +
> +	* ``len`` identifies the length of the requested file range.
> +

identifies -> indicates (you could alternatively say "specified")

> +	* ``fd`` identifies the anonymous fd of the requested cache file. It is
> +	  guaranteed that it shall be the same with

"same with" -> "same as"

Since the kernel cannot make such a guarantee, I think you may need to restate
this as something like "Userspace must present the same fd as was given in the
previous OPEN request".

> the fd field in the previous
> +	  OPEN request.
> +
> +When receiving one

one -> a

> READ request,

the

> user daemon needs to fetch

the

> data of the
> +requested file range, and then write the fetched data

, and then write the fetched data -> and write it

> into cache file

cache file -> cache

> with

using

> the
> +given anonymous fd.

+ to indicate the destination.

> +
> +When finished

When finished -> To finish

> processing the READ request,

the

> user daemon needs to reply with

the

> +CACHEFILES_IOC_CREAD ioctl on the corresponding anon_fd::
> +
> +	ioctl(fd, CACHEFILES_IOC_CREAD, id);
> +
> +	* ``fd`` is exactly the fd field of the previous READ request.

Does that have to be true?  What if userspace moves it somewhere else?

> +
> +	* ``id`` is exactly the id field of the previous READ request.

is exactly the -> must match the

David


^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH v8 04/20] cachefiles: notify user daemon when withdrawing cookie
  2022-04-11 12:35 ` [PATCH v8 04/20] cachefiles: notify user daemon when withdrawing cookie David Howells
  2022-04-11 12:48   ` JeffleXu
@ 2022-04-11 13:42   ` David Howells
  2022-04-12  3:35     ` JeffleXu
  1 sibling, 1 reply; 56+ messages in thread
From: David Howells @ 2022-04-11 13:42 UTC (permalink / raw)
  To: JeffleXu
  Cc: dhowells, linux-cachefs, xiang, chao, linux-erofs, torvalds,
	gregkh, willy, linux-fsdevel, joseph.qi, bo.liu, tao.peng, gerry,
	eguan, linux-kernel, luodaowen.backend, tianzichen, fannaihao

JeffleXu <jefflexu@linux.alibaba.com> wrote:

> > 
> >> +	if (fd == 0)
> >> +		return -ENOENT;
> > 
> > 0 is a valid fd.
> 
> Yeah, but IMHO fd 0 is always for stdin? I think the allocated anon_fd
> won't install at fd 0. Please correct me if I'm wrong.

If someone has closed 0, then you'll get 0 next, I'm pretty sure.  Try it and
see.
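
A throwaway userspace test makes the point (not part of the patchset, just
an illustration of lowest-available fd allocation):

```
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	close(0);				/* free fd 0 (stdin) */
	int fd = open("/dev/null", O_RDONLY);	/* lowest unused fd is handed out */
	printf("got fd %d\n", fd);		/* prints "got fd 0" */
	return 0;
}
```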

> In fact I wanna use "fd == 0" as the initial state as struct
> cachefiles_object is allocated with kmem_cache_zalloc().

I would suggest presetting it to something like -2 to avoid confusion.

David


^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH v8 00/20] fscache,erofs: fscache-based on-demand read semantics
  2022-04-06  7:55 [PATCH v8 00/20] fscache,erofs: fscache-based on-demand read semantics Jeffle Xu
                   ` (26 preceding siblings ...)
  2022-04-11 13:38 ` [PATCH v8 07/20] cachefiles: document on-demand read mode David Howells
@ 2022-04-11 13:43 ` David Howells
  2022-04-12  3:18   ` JeffleXu
  27 siblings, 1 reply; 56+ messages in thread
From: David Howells @ 2022-04-11 13:43 UTC (permalink / raw)
  To: Jeffle Xu
  Cc: dhowells, linux-cachefs, xiang, chao, linux-erofs, torvalds,
	gregkh, willy, linux-fsdevel, joseph.qi, bo.liu, tao.peng, gerry,
	eguan, linux-kernel, luodaowen.backend, tianzichen, fannaihao

Btw, do you want to add a tracepoint or two to cachefiles to log requests?

David


^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH v8 07/20] cachefiles: document on-demand read mode
  2022-04-11 13:38 ` [PATCH v8 07/20] cachefiles: document on-demand read mode David Howells
@ 2022-04-12  3:17   ` JeffleXu
  0 siblings, 0 replies; 56+ messages in thread
From: JeffleXu @ 2022-04-12  3:17 UTC (permalink / raw)
  To: David Howells
  Cc: linux-cachefs, xiang, chao, linux-erofs, torvalds, gregkh, willy,
	linux-fsdevel, joseph.qi, bo.liu, tao.peng, gerry, eguan,
	linux-kernel, luodaowen.backend, tianzichen, fannaihao

Hi, thanks for such thorough and detailed reviewing and all these
corrections. I will fix them in the next version.


On 4/11/22 9:38 PM, David Howells wrote:
> Jeffle Xu <jefflexu@linux.alibaba.com> wrote:
> 
>> + (*) On-demand Read.
>> +
> 
> Unnecessary extra blank line.
> 
> Jeffle Xu <jefflexu@linux.alibaba.com> wrote:
> 
> What's the scope of the uniqueness of "id"?  Is it just unique to a particular
> cachefiles cache?

Yes. Currently each cache, I mean, each "struct cachefiles_cache",
maintains an xarray. The id is unique in the scope of the cache.


> 
>> +
>> +	struct cachefiles_close {
>> +		__u32 fd;
>> +	};
>> +
> 
> "where:"
> 
>> +	* ``fd`` identifies the anon_fd to be closed, which is exactly the same
> 
> "... which should be the same as that provided to the OPEN request".
> 
> Is it possible for userspace to move the fd around with dup() or whatever?

Currently No. The anon_fd is stored in

```
struct cachefiles_object {
	int fd;
	...
}
```

When sending a READ/CLOSE request, the associated anon_fd is always
fetched from the @fd field of struct cachefiles_object, and dup() won't
update that field.

Thus once dup() is done, say there are fd A (the original) and fd B
(duplicated from fd A) associated with the cachefiles_object. The @fd
field of subsequent READ/CLOSE requests is always fd A, since the @fd
field of struct cachefiles_object is not updated. However, the CREAD
(reply to READ request) ioctl can indeed be issued on either fd A or fd B.

Then if fd A is closed while fd B is still alive, the @fd field of
subsequent READ/CLOSE requests is still fd A, which is indeed buggy since
fd A may be reused by then.

To fix this, I plan to replace the @fd field of READ/CLOSE requests with
an @object_id field.

```
struct cachefiles_close {
        __u32 object_id;
};


struct cachefiles_read {
        __u32 object_id;
        __u64 off;
        __u64 len;
};
```

Then each cachefiles_object has a unique object_id (in the scope of the
cachefiles_cache). Each object_id can be mapped to multiple fds (1:N
mapping), while the kernel only sends an initial fd for this object_id
through the OPEN request.

```
struct cachefiles_open {
	__u32 object_id;
        __u32 fd;
        __u32 volume_key_size;
        __u32 cookie_key_size;
        __u32 flags;
        __u8  data[];
};
```

The user daemon can modify the mapping through dup(), but it's
responsible for maintaining and updating this mapping. That is, the
mapping between object_id and all its associated fds should be
maintained in the user space.
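
A rough daemon-side sketch of that bookkeeping (the helper names and the
fixed-size table are purely illustrative, not part of the proposed UAPI):

```
#define MAX_OBJECTS	1024

static int object_fd[MAX_OBJECTS];		/* object_id -> anon_fd */

static void record_open(unsigned int object_id, int anon_fd)
{
	object_fd[object_id] = anon_fd;		/* fd handed out by the OPEN request */
}

static int fd_of(unsigned int object_id)
{
	return object_fd[object_id];		/* used when handling READ/CLOSE */
}
```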


>> +
>> +	struct cachefiles_read {
>> +		__u64 off;
>> +		__u64 len;
>> +		__u32 fd;
>> +	};
>> +
>> +	* ``off`` identifies the starting offset of the requested file range.
> 
> identifies -> indicates
> 
>> +
>> +	* ``len`` identifies the length of the requested file range.
>> +
> 
> identifies -> indicates (you could alternatively say "specified")
> 
>> +	* ``fd`` identifies the anonymous fd of the requested cache file. It is
>> +	  guaranteed that it shall be the same with
> 
> "same with" -> "same as"
> 
> Since the kernel cannot make such a guarantee, I think you may need to restate
> this as something like "Userspace must present the same fd as was given in the
> previous OPEN request".

Yes, whether the @fd field of the READ request is the same as that of the
OPEN request is actually implementation dependent. However, as described
above, I'm going to change the @fd field into an @object_id field. After
that refactoring, the @object_id field of a READ/CLOSE request should be
the same as the @object_id field of the OPEN request.



>> +CACHEFILES_IOC_CREAD ioctl on the corresponding anon_fd::
>> +
>> +	ioctl(fd, CACHEFILES_IOC_CREAD, id);
>> +
>> +	* ``fd`` is exactly the fd field of the previous READ request.
> 
> Does that have to be true?  What if userspace moves it somewhere else?
> 

As described above, I'm going to change the @fd field into an @object_id
field. Then there is an @object_id field in the READ request. When
replying to the READ request, the user daemon itself needs to get the
corresponding anon_fd for the given @object_id through the self-maintained
mapping.
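
The corresponding READ handling might then look roughly like this (again a
sketch only: it reuses the illustrative fd_of() helper above, assumes the
proposed struct layout rather than the current UAPI, and leaves the backend
fetch abstract):

```
#include <stdint.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/cachefiles.h>	/* CACHEFILES_IOC_CREAD, from this patchset */

/* proposed layout from the previous mail, not the current UAPI */
struct cachefiles_read_v2 {
	uint32_t object_id;
	uint64_t off;
	uint64_t len;
};

int fd_of(unsigned int object_id);	/* daemon-maintained object_id -> fd map */

static void handle_read(const struct cachefiles_read_v2 *req,
			unsigned long msg_id, const void *buf)
{
	int fd = fd_of(req->object_id);

	/* buf already holds req->len bytes fetched from the backend */
	pwrite(fd, buf, req->len, req->off);		/* fill the cache file */
	ioctl(fd, CACHEFILES_IOC_CREAD, msg_id);	/* complete the READ request */
}
```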


-- 
Thanks,
Jeffle

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH v8 00/20] fscache,erofs: fscache-based on-demand read semantics
  2022-04-11 13:43 ` [PATCH v8 00/20] fscache,erofs: fscache-based on-demand read semantics David Howells
@ 2022-04-12  3:18   ` JeffleXu
  0 siblings, 0 replies; 56+ messages in thread
From: JeffleXu @ 2022-04-12  3:18 UTC (permalink / raw)
  To: David Howells
  Cc: linux-cachefs, xiang, chao, linux-erofs, torvalds, gregkh, willy,
	linux-fsdevel, joseph.qi, bo.liu, tao.peng, gerry, eguan,
	linux-kernel, luodaowen.backend, tianzichen, fannaihao



On 4/11/22 9:43 PM, David Howells wrote:
> Btw, do you want to add a tracepoint or two to cachefiles to log requests?
> 

Good idea. Tracepoints will help a lot when debugging.
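
For instance, a request-level tracepoint could look roughly like this (a
sketch only: the event name and fields are made up here, and the usual
trace header boilerplate around TRACE_EVENT() is omitted):

```
TRACE_EVENT(cachefiles_ondemand_request,
	TP_PROTO(unsigned long id, int opcode, size_t len),
	TP_ARGS(id, opcode, len),
	TP_STRUCT__entry(
		__field(unsigned long,	id)
		__field(int,		opcode)
		__field(size_t,		len)
	),
	TP_fast_assign(
		__entry->id	= id;
		__entry->opcode	= opcode;
		__entry->len	= len;
	),
	TP_printk("R=%lx op=%d l=%zx",
		  __entry->id, __entry->opcode, __entry->len)
);
```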


-- 
Thanks,
Jeffle

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH v8 04/20] cachefiles: notify user daemon when withdrawing cookie
  2022-04-11 13:42   ` David Howells
@ 2022-04-12  3:35     ` JeffleXu
  0 siblings, 0 replies; 56+ messages in thread
From: JeffleXu @ 2022-04-12  3:35 UTC (permalink / raw)
  To: David Howells
  Cc: linux-cachefs, xiang, chao, linux-erofs, torvalds, gregkh, willy,
	linux-fsdevel, joseph.qi, bo.liu, tao.peng, gerry, eguan,
	linux-kernel, luodaowen.backend, tianzichen, fannaihao



On 4/11/22 9:42 PM, David Howells wrote:
> JeffleXu <jefflexu@linux.alibaba.com> wrote:
> 
>>>
>>>> +	if (fd == 0)
>>>> +		return -ENOENT;
>>>
>>> 0 is a valid fd.
>>
>> Yeah, but IMHO fd 0 is always for stdin? I think the allocated anon_fd
>> won't install at fd 0. Please correct me if I'm wrong.
> 
> If someone has closed 0, then you'll get 0 next, I'm pretty sure.  Try it and
> see.

Good catch.

> 
>> In fact I wanna use "fd == 0" as the initial state as struct
>> cachefiles_object is allocated with kmem_cache_zalloc().
> 
> I would suggest presetting it to something like -2 to avoid confusion.

Okay, as described in the previous email, I'm going to replace @fd with
@object_id. I will define some symbols to make it more readable,
something like

```
struct cachefiles_object {
	...
#ifdef CONFIG_CACHEFILES_ONDEMAND
#define CACHEFILES_OBJECT_ID_DEFAULT -2
#define CACHEFILES_OBJECT_ID_DEAD    -1
	int object_id;
#endif
	...
}
```
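
Then the request paths could simply bail out on an invalid id, roughly
(sketch only):

```
/* not yet opened, or already closed by the user daemon */
if (object->object_id < 0)
	return -ENOENT;
```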

Thanks for your time.

-- 
Thanks,
Jeffle

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH v8 00/20] fscache,erofs: fscache-based on-demand read semantics
  2022-04-10 12:51 ` [PATCH v8 00/20] fscache,erofs: fscache-based on-demand read semantics Gao Xiang
@ 2022-04-13 12:27   ` 田子晨
  2022-04-14  8:10   ` Jiachen Zhang
  1 sibling, 0 replies; 56+ messages in thread
From: 田子晨 @ 2022-04-13 12:27 UTC (permalink / raw)
  To: Gao Xiang, jefflexu
  Cc: Jeffle Xu, linux-erofs, fannaihao, willy, linux-kernel, dhowells,
	joseph.qi, linux-cachefs, gregkh, linux-fsdevel,
	luodaowen.backend, gerry, torvalds



> On Apr 10, 2022, at 8:51 PM, Gao Xiang <hsiangkao@linux.alibaba.com> wrote:
> 
> On Wed, Apr 06, 2022 at 03:55:52PM +0800, Jeffle Xu wrote:
>> changes since v7:
>> - rebased to 5.18-rc1
>> - include "cachefiles: unmark inode in use in error path" patch into
>>  this patchset to avoid warning from test robot (patch 1)
>> - cachefiles: rename [cookie|volume]_key_len field of struct
>>  cachefiles_open to [cookie|volume]_key_size to avoid potential
>>  misunderstanding. Also add more documentation to
>>  include/uapi/linux/cachefiles.h. (patch 3)
>> - cachefiles: valid check for error code returned from user daemon
>>  (patch 3)
>> - cachefiles: change WARN_ON_ONCE() to pr_info_once() when user daemon
>>  closes anon_fd prematurely (patch 4/5)
>> - ready for complete review
>> 
>> 
>> Kernel Patchset
>> ---------------
>> Git tree:
>> 
>>    https://github.com/lostjeffle/linux.git jingbo/dev-erofs-fscache-v8
>> 
>> Gitweb:
>> 
>>    https://github.com/lostjeffle/linux/commits/jingbo/dev-erofs-fscache-v8
>> 
>> 
>> User Daemon for Quick Test
>> --------------------------
>> Git tree:
>> 
>>    https://github.com/lostjeffle/demand-read-cachefilesd.git main
>> 
>> Gitweb:
>> 
>>    https://github.com/lostjeffle/demand-read-cachefilesd
>> 
> 
> Btw, we've also finished a preliminary end-to-end on-demand download
> daemon in order to test the fscache on-demand kernel code as a real
> end-to-end workload for container use cases:
> 
> User guide: https://github.com/dragonflyoss/image-service/blob/fscache/docs/nydus-fscache.md
> Video: https://youtu.be/F4IF2_DENXo

Tested-by: Zichen Tian <tianzichen@kuaishou.com>

> Thanks,
> Gao Xiang
> 


^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: Re: [PATCH v8 00/20] fscache,erofs: fscache-based on-demand read semantics
  2022-04-10 12:51 ` [PATCH v8 00/20] fscache,erofs: fscache-based on-demand read semantics Gao Xiang
  2022-04-13 12:27   ` 田子晨
@ 2022-04-14  8:10   ` Jiachen Zhang
  2022-04-14  9:29     ` Gao Xiang
  1 sibling, 1 reply; 56+ messages in thread
From: Jiachen Zhang @ 2022-04-14  8:10 UTC (permalink / raw)
  To: Jeffle Xu, dhowells, linux-cachefs, xiang, chao, linux-erofs,
	torvalds, gregkh, willy, linux-fsdevel, joseph.qi, bo.liu,
	tao.peng, gerry, eguan, linux-kernel, luodaowen.backend,
	tianzichen, fannaihao

On Sun, Apr 10, 2022 at 8:52 PM Gao Xiang <hsiangkao@linux.alibaba.com> wrote:
>
> On Wed, Apr 06, 2022 at 03:55:52PM +0800, Jeffle Xu wrote:
> > changes since v7:
> > - rebased to 5.18-rc1
> > - include "cachefiles: unmark inode in use in error path" patch into
> >   this patchset to avoid warning from test robot (patch 1)
> > - cachefiles: rename [cookie|volume]_key_len field of struct
> >   cachefiles_open to [cookie|volume]_key_size to avoid potential
> >   misunderstanding. Also add more documentation to
> >   include/uapi/linux/cachefiles.h. (patch 3)
> > - cachefiles: valid check for error code returned from user daemon
> >   (patch 3)
> > - cachefiles: change WARN_ON_ONCE() to pr_info_once() when user daemon
> >   closes anon_fd prematurely (patch 4/5)
> > - ready for complete review
> >
> >
> > Kernel Patchset
> > ---------------
> > Git tree:
> >
> >     https://github.com/lostjeffle/linux.git jingbo/dev-erofs-fscache-v8
> >
> > Gitweb:
> >
> >     https://github.com/lostjeffle/linux/commits/jingbo/dev-erofs-fscache-v8
> >
> >
> > User Daemon for Quick Test
> > --------------------------
> > Git tree:
> >
> >     https://github.com/lostjeffle/demand-read-cachefilesd.git main
> >
> > Gitweb:
> >
> >     https://github.com/lostjeffle/demand-read-cachefilesd
> >
>
> Btw, we've also finished a preliminary end-to-end on-demand download
> daemon in order to test the fscache on-demand kernel code as a real
> end-to-end workload for container use cases:
>
> User guide: https://github.com/dragonflyoss/image-service/blob/fscache/docs/nydus-fscache.md
> Video: https://youtu.be/F4IF2_DENXo
>
> Thanks,
> Gao Xiang

Hi Xiang,

I think this feature is interesting and promising, so I have performed
some tests according to the user guide. I hope it can become an upstream
feature.

Thanks,
Jiachen

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: Re: [PATCH v8 00/20] fscache,erofs: fscache-based on-demand read semantics
  2022-04-14  8:10   ` Jiachen Zhang
@ 2022-04-14  9:29     ` Gao Xiang
  0 siblings, 0 replies; 56+ messages in thread
From: Gao Xiang @ 2022-04-14  9:29 UTC (permalink / raw)
  To: Jiachen Zhang
  Cc: Jeffle Xu, dhowells, linux-cachefs, xiang, chao, linux-erofs,
	torvalds, gregkh, willy, linux-fsdevel, joseph.qi, bo.liu,
	tao.peng, gerry, eguan, linux-kernel, luodaowen.backend,
	tianzichen, fannaihao

Hi Jiachen,

On Thu, Apr 14, 2022 at 04:10:10PM +0800, Jiachen Zhang wrote:
> On Sun, Apr 10, 2022 at 8:52 PM Gao Xiang <hsiangkao@linux.alibaba.com> wrote:
> >
> > On Wed, Apr 06, 2022 at 03:55:52PM +0800, Jeffle Xu wrote:
> > > changes since v7:
> > > - rebased to 5.18-rc1
> > > - include "cachefiles: unmark inode in use in error path" patch into
> > >   this patchset to avoid warning from test robot (patch 1)
> > > - cachefiles: rename [cookie|volume]_key_len field of struct
> > >   cachefiles_open to [cookie|volume]_key_size to avoid potential
> > >   misunderstanding. Also add more documentation to
> > >   include/uapi/linux/cachefiles.h. (patch 3)
> > > - cachefiles: valid check for error code returned from user daemon
> > >   (patch 3)
> > > - cachefiles: change WARN_ON_ONCE() to pr_info_once() when user daemon
> > >   closes anon_fd prematurely (patch 4/5)
> > > - ready for complete review
> > >
> > >
> > > Kernel Patchset
> > > ---------------
> > > Git tree:
> > >
> > >     https://github.com/lostjeffle/linux.git jingbo/dev-erofs-fscache-v8
> > >
> > > Gitweb:
> > >
> > >     https://github.com/lostjeffle/linux/commits/jingbo/dev-erofs-fscache-v8
> > >
> > >
> > > User Daemon for Quick Test
> > > --------------------------
> > > Git tree:
> > >
> > >     https://github.com/lostjeffle/demand-read-cachefilesd.git main
> > >
> > > Gitweb:
> > >
> > >     https://github.com/lostjeffle/demand-read-cachefilesd
> > >
> >
> > Btw, we've also finished a preliminary end-to-end on-demand download
> > daemon in order to test the fscache on-demand kernel code as a real
> > end-to-end workload for container use cases:
> >
> > User guide: https://github.com/dragonflyoss/image-service/blob/fscache/docs/nydus-fscache.md
> > Video: https://youtu.be/F4IF2_DENXo
> >
> > Thanks,
> > Gao Xiang
> 
> Hi Xiang,
> 
> I think this feature is interesting and promising. So I have performed
> some tests according to the user guide. Hope it can be an upstream
> feature.

Many thanks for the feedback. We're doing our best to form/stabilize it
now. We're still struggling with some specific cases.

Thanks,
Gao Xiang


> 
> Thanks,
> Jiachen

^ permalink raw reply	[flat|nested] 56+ messages in thread

end of thread, other threads:[~2022-04-14  9:29 UTC | newest]

Thread overview: 56+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-04-06  7:55 [PATCH v8 00/20] fscache,erofs: fscache-based on-demand read semantics Jeffle Xu
2022-04-06  7:55 ` [PATCH v8 01/20] cachefiles: unmark inode in use in error path Jeffle Xu
2022-04-06  7:55 ` [PATCH v8 02/20] cachefiles: extract write routine Jeffle Xu
2022-04-06  7:55 ` [PATCH v8 03/20] cachefiles: notify user daemon with anon_fd when looking up cookie Jeffle Xu
2022-04-06  7:55 ` [PATCH v8 04/20] cachefiles: notify user daemon when withdrawing cookie Jeffle Xu
2022-04-06  7:55 ` [PATCH v8 05/20] cachefiles: implement on-demand read Jeffle Xu
2022-04-06  7:55 ` [PATCH v8 06/20] cachefiles: enable on-demand read mode Jeffle Xu
2022-04-06  7:55 ` [PATCH v8 07/20] cachefiles: document " Jeffle Xu
2022-04-06  7:56 ` [PATCH v8 08/20] erofs: make erofs_map_blocks() generally available Jeffle Xu
2022-04-07  2:44   ` Gao Xiang
2022-04-06  7:56 ` [PATCH v8 09/20] erofs: add mode checking helper Jeffle Xu
2022-04-07  2:46   ` Gao Xiang
2022-04-06  7:56 ` [PATCH v8 10/20] erofs: register fscache volume Jeffle Xu
2022-04-07  2:50   ` Gao Xiang
2022-04-06  7:56 ` [PATCH v8 11/20] erofs: add fscache context helper functions Jeffle Xu
2022-04-07  3:25   ` Gao Xiang
2022-04-06  7:56 ` [PATCH v8 12/20] erofs: add anonymous inode managing page cache for data blob Jeffle Xu
2022-04-07  5:31   ` Gao Xiang
2022-04-08  2:14     ` JeffleXu
2022-04-06  7:56 ` [PATCH v8 13/20] erofs: add erofs_fscache_read_folios() helper Jeffle Xu
2022-04-07 14:05   ` Gao Xiang
2022-04-06  7:56 ` [PATCH v8 14/20] erofs: register fscache context for primary data blob Jeffle Xu
2022-04-07 14:09   ` Gao Xiang
2022-04-06  7:56 ` [PATCH v8 15/20] erofs: register fscache context for extra data blobs Jeffle Xu
2022-04-07 14:15   ` Gao Xiang
2022-04-08  2:11     ` JeffleXu
2022-04-06  7:56 ` [PATCH v8 16/20] erofs: implement fscache-based metadata read Jeffle Xu
2022-04-07 14:19   ` Gao Xiang
2022-04-06  7:56 ` [PATCH v8 17/20] erofs: implement fscache-based data read for non-inline layout Jeffle Xu
2022-04-07 14:24   ` Gao Xiang
2022-04-06  7:56 ` [PATCH v8 18/20] erofs: implement fscache-based data read for inline layout Jeffle Xu
2022-04-07 14:31   ` Gao Xiang
2022-04-06  7:56 ` [PATCH v8 19/20] erofs: implement fscache-based data readahead Jeffle Xu
2022-04-07 14:36   ` Gao Xiang
2022-04-06  7:56 ` [PATCH v8 20/20] erofs: add 'fsid' mount option Jeffle Xu
2022-04-07 14:39   ` Gao Xiang
2022-04-10 12:51 ` [PATCH v8 00/20] fscache,erofs: fscache-based on-demand read semantics Gao Xiang
2022-04-13 12:27   ` 田子晨
2022-04-14  8:10   ` Jiachen Zhang
2022-04-14  9:29     ` Gao Xiang
2022-04-11 12:13 ` [PATCH v8 02/20] cachefiles: extract write routine David Howells
2022-04-11 12:29   ` JeffleXu
2022-04-11 12:28 ` [PATCH v8 03/20] cachefiles: notify user daemon with anon_fd when looking up cookie David Howells
2022-04-11 12:36   ` JeffleXu
2022-04-11 12:32 ` David Howells
2022-04-11 12:36   ` JeffleXu
2022-04-11 12:35 ` [PATCH v8 04/20] cachefiles: notify user daemon when withdrawing cookie David Howells
2022-04-11 12:48   ` JeffleXu
2022-04-11 13:42   ` David Howells
2022-04-12  3:35     ` JeffleXu
2022-04-11 12:44 ` [PATCH v8 05/20] cachefiles: implement on-demand read David Howells
2022-04-11 12:50   ` JeffleXu
2022-04-11 13:38 ` [PATCH v8 07/20] cachefiles: document on-demand read mode David Howells
2022-04-12  3:17   ` JeffleXu
2022-04-11 13:43 ` [PATCH v8 00/20] fscache,erofs: fscache-based on-demand read semantics David Howells
2022-04-12  3:18   ` JeffleXu
