* [PATCH v3 chao/erofs-dev 01/14] staging: erofs: fix `trace_erofs_readpage' position
From: Gao Xiang @ 2018-11-20 14:14 UTC (permalink / raw)


`trace_erofs_readpage' should be placed in .readpage()
rather than in the internal `z_erofs_do_read_page'.

Fixes: 284db12cfda3 ("staging: erofs: add trace points for reading zipped data")

Reviewed-by: Chen Gong <gongchen4 at huawei.com>
Reviewed-by: Chao Yu <yuchao0 at huawei.com>
Signed-off-by: Gao Xiang <gaoxiang25 at huawei.com>
---
 drivers/staging/erofs/unzip_vle.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/staging/erofs/unzip_vle.c b/drivers/staging/erofs/unzip_vle.c
index 6a283f618f46..ede3383ac601 100644
--- a/drivers/staging/erofs/unzip_vle.c
+++ b/drivers/staging/erofs/unzip_vle.c
@@ -598,8 +598,6 @@ static int z_erofs_do_read_page(struct z_erofs_vle_frontend *fe,
 	unsigned int cur, end, spiltted, index;
 	int err = 0;
 
-	trace_erofs_readpage(page, false);
-
 	/* register locked file pages as online pages in pack */
 	z_erofs_onlinepage_init(page);
 
@@ -1288,6 +1286,8 @@ static int z_erofs_vle_normalaccess_readpage(struct file *file,
 	int err;
 	LIST_HEAD(pagepool);
 
+	trace_erofs_readpage(page, false);
+
 #if (EROFS_FS_ZIP_CACHE_LVL >= 2)
 	f.cachedzone_la = (erofs_off_t)page->index << PAGE_SHIFT;
 #endif
-- 
2.14.4


* [PATCH v3 chao/erofs-dev 02/14] staging: erofs: fix race when the managed cache is enabled
From: Gao Xiang @ 2018-11-20 14:14 UTC (permalink / raw)


When the managed cache is enabled, the last reference count
of a workgroup must be held by its workstation.

Otherwise, incorrect (un)freezes could occur in the reclaim
path, which is harmful.

A typical race is as follows:

Thread 1 (In the reclaim path)  Thread 2
workgroup_freeze(grp, 1)                                refcnt = 1
...
workgroup_unfreeze(grp, 1)                              refcnt = 1
                                workgroup_get(grp)      refcnt = 2 (x)
workgroup_put(grp)                                      refcnt = 1 (x)
                                ...unexpected behaviors

* grp is detached but still used, which violates cache-managed
  freeze constraint.
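
After this patch, the reclaim path can only detach a workgroup while
it is frozen with the last (workstation) reference, which closes the
window above; a condensed sketch of erofs_try_to_release_workgroup
below (not the verbatim code):

    /* refcount must be exactly 1, i.e. only the workstation's */
    if (!erofs_workgroup_try_to_freeze(grp, 1))
        return false;   /* somebody else still holds a reference */

    /* detach from the workstation while concurrent users spin */
    radix_tree_delete(&sbi->workstn_tree, grp->index);

    /* release the last reference and free the frozen workgroup */
    erofs_workgroup_unfreeze_final(grp);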

Reviewed-by: Chao Yu <yuchao0 at huawei.com>
Signed-off-by: Gao Xiang <gaoxiang25 at huawei.com>
---

change log v3:
- add __erofs_workgroup_put according to Chao's suggestion.

 drivers/staging/erofs/internal.h |   1 +
 drivers/staging/erofs/utils.c    | 131 +++++++++++++++++++++++++++------------
 2 files changed, 93 insertions(+), 39 deletions(-)

diff --git a/drivers/staging/erofs/internal.h b/drivers/staging/erofs/internal.h
index 57575c7f5635..89dbd0888e53 100644
--- a/drivers/staging/erofs/internal.h
+++ b/drivers/staging/erofs/internal.h
@@ -250,6 +250,7 @@ static inline bool erofs_workgroup_get(struct erofs_workgroup *grp, int *ocnt)
 }
 
 #define __erofs_workgroup_get(grp)	atomic_inc(&(grp)->refcount)
+#define __erofs_workgroup_put(grp)	atomic_dec(&(grp)->refcount)
 
 extern int erofs_workgroup_put(struct erofs_workgroup *grp);
 
diff --git a/drivers/staging/erofs/utils.c b/drivers/staging/erofs/utils.c
index ea8a962e5c95..90de8d9195b7 100644
--- a/drivers/staging/erofs/utils.c
+++ b/drivers/staging/erofs/utils.c
@@ -83,12 +83,21 @@ int erofs_register_workgroup(struct super_block *sb,
 
 	grp = xa_tag_pointer(grp, tag);
 
-	err = radix_tree_insert(&sbi->workstn_tree,
-		grp->index, grp);
+	/*
+	 * Bump up the reference count before making this workgroup
+	 * visible to other users in order to avoid a potential UAF
+	 * for accesses not serialized by erofs_workstn_lock.
+	 */
+	__erofs_workgroup_get(grp);
 
-	if (!err) {
-		__erofs_workgroup_get(grp);
-	}
+	err = radix_tree_insert(&sbi->workstn_tree,
+				grp->index, grp);
+	if (unlikely(err))
+		/*
+		 * it's safe to decrease since the workgroup isn't visible
+		 * and refcount >= 2 (cannot be frozen).
+		 */
+		__erofs_workgroup_put(grp);
 
 	erofs_workstn_unlock(sbi);
 	radix_tree_preload_end();
@@ -97,19 +106,91 @@ int erofs_register_workgroup(struct super_block *sb,
 
 extern void erofs_workgroup_free_rcu(struct erofs_workgroup *grp);
 
+static void  __erofs_workgroup_free(struct erofs_workgroup *grp)
+{
+	atomic_long_dec(&erofs_global_shrink_cnt);
+	erofs_workgroup_free_rcu(grp);
+}
+
 int erofs_workgroup_put(struct erofs_workgroup *grp)
 {
 	int count = atomic_dec_return(&grp->refcount);
 
 	if (count == 1)
 		atomic_long_inc(&erofs_global_shrink_cnt);
-	else if (!count) {
-		atomic_long_dec(&erofs_global_shrink_cnt);
-		erofs_workgroup_free_rcu(grp);
-	}
+	else if (!count)
+		__erofs_workgroup_free(grp);
 	return count;
 }
 
+#ifdef EROFS_FS_HAS_MANAGED_CACHE
+/* for cache-managed case, customized reclaim paths exist */
+static void erofs_workgroup_unfreeze_final(struct erofs_workgroup *grp)
+{
+	erofs_workgroup_unfreeze(grp, 0);
+	__erofs_workgroup_free(grp);
+}
+
+bool erofs_try_to_release_workgroup(struct erofs_sb_info *sbi,
+				    struct erofs_workgroup *grp,
+				    bool cleanup)
+{
+	/*
+	 * if the managed cache is enabled, the refcount of workgroups
+	 * themselves could be < 0 (frozen). So there is no guarantee
+	 * that all refcounts are > 0 in that case.
+	 */
+	if (!erofs_workgroup_try_to_freeze(grp, 1))
+		return false;
+
+	/*
+	 * note that all cached pages should be unlinked
+	 * before the workgroup is deleted from the radix tree.
+	 * Otherwise some cached pages of an orphaned old workgroup
+	 * could still be linked after the new one becomes available.
+	 */
+	if (erofs_try_to_free_all_cached_pages(sbi, grp)) {
+		erofs_workgroup_unfreeze(grp, 1);
+		return false;
+	}
+
+	/* it is impossible to fail after we freeze the workgroup */
+	BUG_ON(xa_untag_pointer(radix_tree_delete(&sbi->workstn_tree,
+						  grp->index)) != grp);
+
+	/*
+	 * if the managed cache is enabled, the last refcount
+	 * is held by the related workstation.
+	 */
+	erofs_workgroup_unfreeze_final(grp);
+	return true;
+}
+
+#else
+/* for nocache case, no customized reclaim path at all */
+bool erofs_try_to_release_workgroup(struct erofs_sb_info *sbi,
+				    struct erofs_workgroup *grp,
+				    bool cleanup)
+{
+	int cnt = atomic_read(&grp->refcount);
+
+	DBG_BUGON(cnt <= 0);
+	DBG_BUGON(cleanup && cnt != 1);
+
+	if (cnt > 1)
+		return false;
+
+	if (xa_untag_pointer(radix_tree_delete(&sbi->workstn_tree,
+					       grp->index)) != grp)
+		return false;
+
+	/* (rarely) could be grabbed again when freeing */
+	erofs_workgroup_put(grp);
+	return true;
+}
+
+#endif
+
 unsigned long erofs_shrink_workstation(struct erofs_sb_info *sbi,
 				       unsigned long nr_shrink,
 				       bool cleanup)
@@ -126,41 +207,13 @@ unsigned long erofs_shrink_workstation(struct erofs_sb_info *sbi,
 		batch, first_index, PAGEVEC_SIZE);
 
 	for (i = 0; i < found; ++i) {
-		int cnt;
 		struct erofs_workgroup *grp = xa_untag_pointer(batch[i]);
 
 		first_index = grp->index + 1;
 
-		cnt = atomic_read(&grp->refcount);
-		BUG_ON(cnt <= 0);
-
-		if (cleanup)
-			BUG_ON(cnt != 1);
-
-#ifndef EROFS_FS_HAS_MANAGED_CACHE
-		else if (cnt > 1)
-#else
-		if (!erofs_workgroup_try_to_freeze(grp, 1))
-#endif
-			continue;
-
-		if (xa_untag_pointer(radix_tree_delete(&sbi->workstn_tree,
-			grp->index)) != grp) {
-#ifdef EROFS_FS_HAS_MANAGED_CACHE
-skip:
-			erofs_workgroup_unfreeze(grp, 1);
-#endif
+		/* try to shrink each valid workgroup */
+		if (!erofs_try_to_release_workgroup(sbi, grp, cleanup))
 			continue;
-		}
-
-#ifdef EROFS_FS_HAS_MANAGED_CACHE
-		if (erofs_try_to_free_all_cached_pages(sbi, grp))
-			goto skip;
-
-		erofs_workgroup_unfreeze(grp, 1);
-#endif
-		/* (rarely) grabbed again when freeing */
-		erofs_workgroup_put(grp);
 
 		++freed;
 		if (unlikely(!--nr_shrink))
-- 
2.14.4


* [PATCH v3 chao/erofs-dev 03/14] staging: erofs: atomic_cond_read_relaxed on ref-locked workgroup
From: Gao Xiang @ 2018-11-20 14:14 UTC (permalink / raw)


It's better to use atomic_cond_read_relaxed instead of open-coded
busy waiting: on ARM64 it is currently implemented with hardware
wait-for-event instructions that monitor the variable for changes.
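
In effect this replaces a cpu_relax() loop with the generic helper;
a condensed before/after view of the diff below:

    /* before: open-coded busy waiting */
    while (atomic_read(&grp->refcount) == EROFS_LOCKED_MAGIC)
        cpu_relax();

    /* after: may compile to a wait-for-event sequence on ARM64 */
    o = atomic_cond_read_relaxed(&grp->refcount,
                                 VAL != EROFS_LOCKED_MAGIC);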

Reviewed-by: Chao Yu <yuchao0 at huawei.com>
Signed-off-by: Gao Xiang <gaoxiang25 at huawei.com>
---
 drivers/staging/erofs/internal.h | 30 ++++++++++++++++++------------
 1 file changed, 18 insertions(+), 12 deletions(-)

diff --git a/drivers/staging/erofs/internal.h b/drivers/staging/erofs/internal.h
index 89dbd0888e53..eb80ba44d072 100644
--- a/drivers/staging/erofs/internal.h
+++ b/drivers/staging/erofs/internal.h
@@ -221,23 +221,29 @@ static inline void erofs_workgroup_unfreeze(
 	preempt_enable();
 }
 
+#if defined(CONFIG_SMP)
+static inline int erofs_wait_on_workgroup_freezed(struct erofs_workgroup *grp)
+{
+	return atomic_cond_read_relaxed(&grp->refcount,
+					VAL != EROFS_LOCKED_MAGIC);
+}
+#else
+static inline int erofs_wait_on_workgroup_freezed(struct erofs_workgroup *grp)
+{
+	int v = atomic_read(&grp->refcount);
+
+	/* workgroup is never frozen on uniprocessor systems */
+	DBG_BUGON(v == EROFS_LOCKED_MAGIC);
+	return v;
+}
+#endif
+
 static inline bool erofs_workgroup_get(struct erofs_workgroup *grp, int *ocnt)
 {
-	const int locked = (int)EROFS_LOCKED_MAGIC;
 	int o;
 
 repeat:
-	o = atomic_read(&grp->refcount);
-
-	/* spin if it is temporarily locked at the reclaim path */
-	if (unlikely(o == locked)) {
-#if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_SPINLOCK)
-		do
-			cpu_relax();
-		while (atomic_read(&grp->refcount) == locked);
-#endif
-		goto repeat;
-	}
+	o = erofs_wait_on_workgroup_freezed(grp);
 
 	if (unlikely(o <= 0))
 		return -1;
-- 
2.14.4


* [PATCH v3 chao/erofs-dev 04/14] staging: erofs: fix `erofs_workgroup_{try_to_freeze, unfreeze}'
From: Gao Xiang @ 2018-11-20 14:14 UTC (permalink / raw)


There are two minor issues in the current freeze interface:

   1) The freeze interfaces are unrelated to CONFIG_DEBUG_SPINLOCK,
      so fix the incorrect preprocessor conditions;

   2) On SMP platforms, preemption should also be disabled before
      doing atomic_cmpxchg; otherwise a higher-priority task could
      preempt between atomic_cmpxchg and preempt_disable and then
      spin on the locked refcount, as sketched below.
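
Condensed from the diff below, the orderings before and after:

    /* before: a preemption window exists between the two steps */
    if (v != atomic_cmpxchg(&grp->refcount, v, EROFS_LOCKED_MAGIC))
        return false;
    /* a higher-priority task preempting right here will spin on
     * the locked refcount until this task gets to run again */
    preempt_disable();

    /* after: close the window by disabling preemption first */
    preempt_disable();
    if (v != atomic_cmpxchg(&grp->refcount, v, EROFS_LOCKED_MAGIC)) {
        preempt_enable();
        return false;
    }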

Reviewed-by: Chao Yu <yuchao0 at huawei.com>
Signed-off-by: Gao Xiang <gaoxiang25 at huawei.com>
---
 drivers/staging/erofs/internal.h | 41 ++++++++++++++++++++++++----------------
 1 file changed, 25 insertions(+), 16 deletions(-)

diff --git a/drivers/staging/erofs/internal.h b/drivers/staging/erofs/internal.h
index eb80ba44d072..2e0ef92c138b 100644
--- a/drivers/staging/erofs/internal.h
+++ b/drivers/staging/erofs/internal.h
@@ -194,40 +194,49 @@ struct erofs_workgroup {
 
 #define EROFS_LOCKED_MAGIC     (INT_MIN | 0xE0F510CCL)
 
-static inline bool erofs_workgroup_try_to_freeze(
-	struct erofs_workgroup *grp, int v)
+#if defined(CONFIG_SMP)
+static inline bool erofs_workgroup_try_to_freeze(struct erofs_workgroup *grp,
+						 int val)
 {
-#if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_SPINLOCK)
-	if (v != atomic_cmpxchg(&grp->refcount,
-		v, EROFS_LOCKED_MAGIC))
-		return false;
 	preempt_disable();
-#else
-	preempt_disable();
-	if (atomic_read(&grp->refcount) != v) {
+	if (val != atomic_cmpxchg(&grp->refcount, val, EROFS_LOCKED_MAGIC)) {
 		preempt_enable();
 		return false;
 	}
-#endif
 	return true;
 }
 
-static inline void erofs_workgroup_unfreeze(
-	struct erofs_workgroup *grp, int v)
+static inline void erofs_workgroup_unfreeze(struct erofs_workgroup *grp,
+					    int orig_val)
 {
-#if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_SPINLOCK)
-	atomic_set(&grp->refcount, v);
-#endif
+	atomic_set(&grp->refcount, orig_val);
 	preempt_enable();
 }
 
-#if defined(CONFIG_SMP)
 static inline int erofs_wait_on_workgroup_freezed(struct erofs_workgroup *grp)
 {
 	return atomic_cond_read_relaxed(&grp->refcount,
 					VAL != EROFS_LOCKED_MAGIC);
 }
 #else
+static inline bool erofs_workgroup_try_to_freeze(struct erofs_workgroup *grp,
+						 int val)
+{
+	preempt_disable();
+	/* no need to spin on UP platforms, let's just disable preemption. */
+	if (val != atomic_read(&grp->refcount)) {
+		preempt_enable();
+		return false;
+	}
+	return true;
+}
+
+static inline void erofs_workgroup_unfreeze(struct erofs_workgroup *grp,
+					    int orig_val)
+{
+	preempt_enable();
+}
+
 static inline int erofs_wait_on_workgroup_freezed(struct erofs_workgroup *grp)
 {
 	int v = atomic_read(&grp->refcount);
-- 
2.14.4


* [PATCH v3 chao/erofs-dev 05/14] staging: erofs: add a full barrier in erofs_workgroup_unfreeze
From: Gao Xiang @ 2018-11-20 14:14 UTC (permalink / raw)


Just like other generic locks, insert a full memory barrier
to guard against memory reordering.
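
Here unfreeze acts as the unlock side of the freeze/unfreeze pair:
writes made while the workgroup was frozen must not be reordered
after the refcount store that republishes it, hence (see the diff):

    /* ... modify the workgroup while it is frozen ... */
    smp_mb();                             /* order the writes above */
    atomic_set(&grp->refcount, orig_val); /* ... before republishing */
    preempt_enable();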

Reviewed-by: Chao Yu <yuchao0 at huawei.com>
Signed-off-by: Gao Xiang <gaoxiang25 at huawei.com>
---
 drivers/staging/erofs/internal.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/staging/erofs/internal.h b/drivers/staging/erofs/internal.h
index 2e0ef92c138b..f77653d33633 100644
--- a/drivers/staging/erofs/internal.h
+++ b/drivers/staging/erofs/internal.h
@@ -209,6 +209,7 @@ static inline bool erofs_workgroup_try_to_freeze(struct erofs_workgroup *grp,
 static inline void erofs_workgroup_unfreeze(struct erofs_workgroup *grp,
 					    int orig_val)
 {
+	smp_mb();
 	atomic_set(&grp->refcount, orig_val);
 	preempt_enable();
 }
-- 
2.14.4


* [PATCH v3 chao/erofs-dev 06/14] staging: erofs: fix the definition of DBG_BUGON
From: Gao Xiang @ 2018-11-20 14:14 UTC (permalink / raw)


It's better not to BUG_ON the kernel outright; however, developers
need a way to locate issues as soon as possible.

DBG_BUGON is introduced so that it can only crash the kernel when
EROFS_FS_DEBUG (an EROFS development feature) is on, which helps
developers find and fix bugs quickly.

Previously, DBG_BUGON was defined as ((void)0) if EROFS_FS_DEBUG is
off, but that could trigger unused variable warnings such as:

drivers/staging/erofs/unzip_vle.c: In function 'init_always':
drivers/staging/erofs/unzip_vle.c:61:33: warning: unused variable 'work' [-Wunused-variable]
  struct z_erofs_vle_work *const work =
                                 ^~~~

Fix it to #define DBG_BUGON(x) ((void)(x)).
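
With the new definition the argument is still evaluated as an
expression (with no side effects for a plain variable), so the
compiler treats the variable as used; a minimal illustration
(the debug branch is shown only for context):

    #ifdef EROFS_FS_DEBUG
    #define DBG_BUGON(x)    BUG_ON(x)
    #else
    #define DBG_BUGON(x)    ((void)(x))  /* `x' counts as used */
    #endif

    /* no -Wunused-variable for `work' in either configuration */
    DBG_BUGON(work->nr_pages);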

Reviewed-by: Chao Yu <yuchao0 at huawei.com>
Signed-off-by: Gao Xiang <gaoxiang25 at huawei.com>
---
 drivers/staging/erofs/internal.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/staging/erofs/internal.h b/drivers/staging/erofs/internal.h
index f77653d33633..0aa2a41b9885 100644
--- a/drivers/staging/erofs/internal.h
+++ b/drivers/staging/erofs/internal.h
@@ -39,7 +39,7 @@
 #define debugln(x, ...)         ((void)0)
 
 #define dbg_might_sleep()       ((void)0)
-#define DBG_BUGON(...)          ((void)0)
+#define DBG_BUGON(x)            ((void)(x))
 #endif
 
 enum {
-- 
2.14.4


* [PATCH v3 chao/erofs-dev 07/14] staging: erofs: separate into init_once / always
From: Gao Xiang @ 2018-11-20 14:14 UTC (permalink / raw)


`z_erofs_vle_workgroup' is allocated heavily during decompression;
for example, 32 bytes are reset redundantly on 64-bit platforms
even though only Z_EROFS_VLE_INLINE_PAGEVECS +
Z_EROFS_CLUSTER_MAX_PAGES (4 by default) pages are stored in
`z_erofs_vle_workgroup'.

As another example, `struct mutex' takes 72 bytes on our kirin
64-bit platforms; there is no need to reset it and then initialize
it again on every allocation.

Let's avoid filling the whole `z_erofs_vle_workgroup' with 0 at
allocation time, since most fields are reinitialized to meaningful
values later anyway, and the pagevec does not need to be
initialized at all.
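
This follows the standard SLAB constructor split: state that
survives free/alloc cycles (the initialized mutex, the zeroed
fields) is set up once in the init_once() constructor passed to
kmem_cache_create(), while per-allocation state goes into
init_always(); in sketch form:

    cachep = kmem_cache_create("erofs_compress",
                               Z_EROFS_WORKGROUP_SIZE, 0,
                               SLAB_RECLAIM_ACCOUNT, init_once);

    /* on each allocation: no kmem_cache_zalloc(), no memset */
    grp = kmem_cache_alloc(cachep, GFP_NOFS);
    init_always(grp);   /* only fields that differ per allocation */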

Reviewed-by: Chao Yu <yuchao0 at huawei.com>
Signed-off-by: Gao Xiang <gaoxiang25 at huawei.com>
---
 drivers/staging/erofs/unzip_vle.c | 34 +++++++++++++++++++++++++++++-----
 1 file changed, 29 insertions(+), 5 deletions(-)

diff --git a/drivers/staging/erofs/unzip_vle.c b/drivers/staging/erofs/unzip_vle.c
index ede3383ac601..4e5843e8ee35 100644
--- a/drivers/staging/erofs/unzip_vle.c
+++ b/drivers/staging/erofs/unzip_vle.c
@@ -43,12 +43,38 @@ static inline int init_unzip_workqueue(void)
 	return z_erofs_workqueue ? 0 : -ENOMEM;
 }
 
+static void init_once(void *ptr)
+{
+	struct z_erofs_vle_workgroup *grp = ptr;
+	struct z_erofs_vle_work *const work =
+		z_erofs_vle_grab_primary_work(grp);
+	unsigned int i;
+
+	mutex_init(&work->lock);
+	work->nr_pages = 0;
+	work->vcnt = 0;
+	for (i = 0; i < Z_EROFS_CLUSTER_MAX_PAGES; ++i)
+		grp->compressed_pages[i] = NULL;
+}
+
+static void init_always(struct z_erofs_vle_workgroup *grp)
+{
+	struct z_erofs_vle_work *const work =
+		z_erofs_vle_grab_primary_work(grp);
+
+	atomic_set(&grp->obj.refcount, 1);
+	grp->flags = 0;
+
+	DBG_BUGON(work->nr_pages);
+	DBG_BUGON(work->vcnt);
+}
+
 int __init z_erofs_init_zip_subsystem(void)
 {
 	z_erofs_workgroup_cachep =
 		kmem_cache_create("erofs_compress",
 				  Z_EROFS_WORKGROUP_SIZE, 0,
-				  SLAB_RECLAIM_ACCOUNT, NULL);
+				  SLAB_RECLAIM_ACCOUNT, init_once);
 
 	if (z_erofs_workgroup_cachep) {
 		if (!init_unzip_workqueue())
@@ -370,10 +396,11 @@ z_erofs_vle_work_register(const struct z_erofs_vle_work_finder *f,
 	BUG_ON(grp);
 
 	/* no available workgroup, let's allocate one */
-	grp = kmem_cache_zalloc(z_erofs_workgroup_cachep, GFP_NOFS);
+	grp = kmem_cache_alloc(z_erofs_workgroup_cachep, GFP_NOFS);
 	if (unlikely(!grp))
 		return ERR_PTR(-ENOMEM);
 
+	init_always(grp);
 	grp->obj.index = f->idx;
 	grp->llen = map->m_llen;
 
@@ -381,7 +408,6 @@ z_erofs_vle_work_register(const struct z_erofs_vle_work_finder *f,
 		(map->m_flags & EROFS_MAP_ZIPPED) ?
 			Z_EROFS_VLE_WORKGRP_FMT_LZ4 :
 			Z_EROFS_VLE_WORKGRP_FMT_PLAIN);
-	atomic_set(&grp->obj.refcount, 1);
 
 	/* new workgrps have been claimed as type 1 */
 	WRITE_ONCE(grp->next, *f->owned_head);
@@ -394,8 +420,6 @@ z_erofs_vle_work_register(const struct z_erofs_vle_work_finder *f,
 	work = z_erofs_vle_grab_primary_work(grp);
 	work->pageofs = f->pageofs;
 
-	mutex_init(&work->lock);
-
 	if (gnew) {
 		int err = erofs_register_workgroup(f->sb, &grp->obj, 0);
 
-- 
2.14.4


* [PATCH v3 chao/erofs-dev 08/14] staging: erofs: locked before registering for all new workgroups
From: Gao Xiang @ 2018-11-20 14:14 UTC (permalink / raw)


Let's make sure that the one registering a workgroup also takes
the primary work lock first, for two reasons:
  1) There's no need to introduce such a race window (and the
     resulting overhead) between registering and locking; other
     tasks could break in by chance, and the race brings no
     benefit at all;

  2) It's better to hold the primary work lock when a workgroup
     is registered, so that the managed cache policy can be
     applied; for example, if some other task breaks in, it could
     fall back to in-place decompression rather than cached
     decompression.
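
Note that the freshly allocated workgroup is invisible to everyone
else at this point, so taking its lock can neither contend nor
sleep; that is why the diff below can assert that mutex_trylock()
never fails here:

    /* nobody else can see `grp' yet: the trylock must succeed */
    if (unlikely(!mutex_trylock(&work->lock)))
        DBG_BUGON(1);

    /* only afterwards is the locked workgroup published */
    err = erofs_register_workgroup(f->sb, &grp->obj, 0);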

Reviewed-by: Chao Yu <yuchao0 at huawei.com>
Signed-off-by: Gao Xiang <gaoxiang25 at huawei.com>
---
 drivers/staging/erofs/unzip_vle.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/drivers/staging/erofs/unzip_vle.c b/drivers/staging/erofs/unzip_vle.c
index 4e5843e8ee35..a1376f3c6065 100644
--- a/drivers/staging/erofs/unzip_vle.c
+++ b/drivers/staging/erofs/unzip_vle.c
@@ -420,18 +420,22 @@ z_erofs_vle_work_register(const struct z_erofs_vle_work_finder *f,
 	work = z_erofs_vle_grab_primary_work(grp);
 	work->pageofs = f->pageofs;
 
+	/* lock all primary followed works before they become visible */
+	if (unlikely(!mutex_trylock(&work->lock)))
+		/* for a new workgroup, try_lock *never* fails */
+		DBG_BUGON(1);
+
 	if (gnew) {
 		int err = erofs_register_workgroup(f->sb, &grp->obj, 0);
 
 		if (err) {
+			mutex_unlock(&work->lock);
 			kmem_cache_free(z_erofs_workgroup_cachep, grp);
 			return ERR_PTR(-EAGAIN);
 		}
 	}
 
 	*f->owned_head = *f->grp_ret = grp;
-
-	mutex_lock(&work->lock);
 	return work;
 }
 
-- 
2.14.4


* [PATCH v3 chao/erofs-dev 09/14] staging: erofs: decompress asynchronously if PG_readahead page at first
From: Gao Xiang @ 2018-11-20 14:14 UTC (permalink / raw)


For the case of nr_to_read == lookahead_size, it is better to
decompress asynchronously as well, since no page will be needed immediately.
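
The check added below is simply a branch-free form of:

    /* the first page processed is already PG_readahead, so this
     * request is pure readahead: decompress asynchronously */
    if (PageReadahead(page) && !head)
        sync = false;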

Reviewed-by: Chao Yu <yuchao0 at huawei.com>
Signed-off-by: Gao Xiang <gaoxiang25 at huawei.com>
---
 drivers/staging/erofs/unzip_vle.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/drivers/staging/erofs/unzip_vle.c b/drivers/staging/erofs/unzip_vle.c
index a1376f3c6065..824d2c12c2f3 100644
--- a/drivers/staging/erofs/unzip_vle.c
+++ b/drivers/staging/erofs/unzip_vle.c
@@ -1344,8 +1344,8 @@ static int z_erofs_vle_normalaccess_readpages(struct file *filp,
 {
 	struct inode *const inode = mapping->host;
 	struct erofs_sb_info *const sbi = EROFS_I_SB(inode);
-	const bool sync = __should_decompress_synchronously(sbi, nr_pages);
 
+	bool sync = __should_decompress_synchronously(sbi, nr_pages);
 	struct z_erofs_vle_frontend f = VLE_FRONTEND_INIT(inode);
 	gfp_t gfp = mapping_gfp_constraint(mapping, GFP_KERNEL);
 	struct page *head = NULL;
@@ -1363,6 +1363,13 @@ static int z_erofs_vle_normalaccess_readpages(struct file *filp,
 		prefetchw(&page->flags);
 		list_del(&page->lru);
 
+		/*
+		 * A pure asynchronous readahead is indicated if
+		 * a PG_readahead marked page is hit first.
+		 * Let's also do asynchronous decompression for this case.
+		 */
+		sync &= !(PageReadahead(page) && !head);
+
 		if (add_to_page_cache_lru(page, mapping, page->index, gfp)) {
 			list_add(&page->lru, &pagepool);
 			continue;
-- 
2.14.4


* [PATCH v3 chao/erofs-dev 10/14] staging: erofs: rename strange variable names in z_erofs_vle_frontend
From: Gao Xiang @ 2018-11-20 14:14 UTC (permalink / raw)


Previously, two members called `initial' and `cachedzone_la' were
used to apply the caching policy (i.e. whether the workgroup is at
either end), but they are hard to understand; rename them to
`backmost' and `headoffset'.

Reviewed-by: Chao Yu <yuchao0 at huawei.com>
Signed-off-by: Gao Xiang <gaoxiang25 at huawei.com>
---
 drivers/staging/erofs/unzip_vle.c | 25 +++++++++++--------------
 1 file changed, 11 insertions(+), 14 deletions(-)

diff --git a/drivers/staging/erofs/unzip_vle.c b/drivers/staging/erofs/unzip_vle.c
index 824d2c12c2f3..1ef178e7ac39 100644
--- a/drivers/staging/erofs/unzip_vle.c
+++ b/drivers/staging/erofs/unzip_vle.c
@@ -586,10 +586,9 @@ struct z_erofs_vle_frontend {
 
 	z_erofs_vle_owned_workgrp_t owned_head;
 
-	bool initial;
-#if (EROFS_FS_ZIP_CACHE_LVL >= 2)
-	erofs_off_t cachedzone_la;
-#endif
+	/* used for applying cache strategy on the fly */
+	bool backmost;
+	erofs_off_t headoffset;
 };
 
 #define VLE_FRONTEND_INIT(__i) { \
@@ -600,7 +599,7 @@ struct z_erofs_vle_frontend {
 	}, \
 	.builder = VLE_WORK_BUILDER_INIT(), \
 	.owned_head = Z_EROFS_VLE_WORKGRP_TAIL, \
-	.initial = true, }
+	.backmost = true, }
 
 static int z_erofs_do_read_page(struct z_erofs_vle_frontend *fe,
 				struct page *page,
@@ -643,7 +642,7 @@ static int z_erofs_do_read_page(struct z_erofs_vle_frontend *fe,
 	debugln("%s: [out-of-range] pos %llu", __func__, offset + cur);
 
 	if (z_erofs_vle_work_iter_end(builder))
-		fe->initial = false;
+		fe->backmost = false;
 
 	map->m_la = offset + cur;
 	map->m_llen = 0;
@@ -669,8 +668,8 @@ static int z_erofs_do_read_page(struct z_erofs_vle_frontend *fe,
 		erofs_blknr(map->m_pa),
 		grp->compressed_pages, erofs_blknr(map->m_plen),
 		/* compressed page caching selection strategy */
-		fe->initial | (EROFS_FS_ZIP_CACHE_LVL >= 2 ?
-			map->m_la < fe->cachedzone_la : 0));
+		fe->backmost | (EROFS_FS_ZIP_CACHE_LVL >= 2 ?
+				map->m_la < fe->headoffset : 0));
 
 	if (noio_outoforder && builder_is_followed(builder))
 		builder->role = Z_EROFS_VLE_WORK_PRIMARY;
@@ -1316,9 +1315,8 @@ static int z_erofs_vle_normalaccess_readpage(struct file *file,
 
 	trace_erofs_readpage(page, false);
 
-#if (EROFS_FS_ZIP_CACHE_LVL >= 2)
-	f.cachedzone_la = (erofs_off_t)page->index << PAGE_SHIFT;
-#endif
+	f.headoffset = (erofs_off_t)page->index << PAGE_SHIFT;
+
 	err = z_erofs_do_read_page(&f, page, &pagepool);
 	(void)z_erofs_vle_work_iter_end(&f.builder);
 
@@ -1354,9 +1352,8 @@ static int z_erofs_vle_normalaccess_readpages(struct file *filp,
 	trace_erofs_readpages(mapping->host, lru_to_page(pages),
 			      nr_pages, false);
 
-#if (EROFS_FS_ZIP_CACHE_LVL >= 2)
-	f.cachedzone_la = (erofs_off_t)lru_to_page(pages)->index << PAGE_SHIFT;
-#endif
+	f.headoffset = (erofs_off_t)lru_to_page(pages)->index << PAGE_SHIFT;
+
 	for (; nr_pages; --nr_pages) {
 		struct page *page = lru_to_page(pages);
 
-- 
2.14.4


* [PATCH v3 chao/erofs-dev 11/14] staging: erofs: introduce MNGD_MAPPING helper
From: Gao Xiang @ 2018-11-20 14:14 UTC (permalink / raw)


This patch introduces the MNGD_MAPPING helper to wrap up
sbi->managed_cache->i_mapping; it evaluates to NULL when the
managed cache is disabled, so callers need fewer #ifdef blocks.

Signed-off-by: Gao Xiang <gaoxiang25 at huawei.com>
---
 drivers/staging/erofs/internal.h  |  4 ++++
 drivers/staging/erofs/unzip_vle.c | 29 +++++++++++++----------------
 2 files changed, 17 insertions(+), 16 deletions(-)

diff --git a/drivers/staging/erofs/internal.h b/drivers/staging/erofs/internal.h
index 0aa2a41b9885..06e72b6c1194 100644
--- a/drivers/staging/erofs/internal.h
+++ b/drivers/staging/erofs/internal.h
@@ -291,6 +291,10 @@ extern int erofs_try_to_free_all_cached_pages(struct erofs_sb_info *sbi,
 	struct erofs_workgroup *egrp);
 extern int erofs_try_to_free_cached_page(struct address_space *mapping,
 	struct page *page);
+
+#define MNGD_MAPPING(sbi)	((sbi)->managed_cache->i_mapping)
+#else
+#define MNGD_MAPPING(sbi)	(NULL)
 #endif
 
 #define DEFAULT_MAX_SYNC_DECOMPRESS_PAGES	3
diff --git a/drivers/staging/erofs/unzip_vle.c b/drivers/staging/erofs/unzip_vle.c
index 1ef178e7ac39..fbdf9483b860 100644
--- a/drivers/staging/erofs/unzip_vle.c
+++ b/drivers/staging/erofs/unzip_vle.c
@@ -165,7 +165,7 @@ int erofs_try_to_free_all_cached_pages(struct erofs_sb_info *sbi,
 {
 	struct z_erofs_vle_workgroup *const grp =
 		container_of(egrp, struct z_erofs_vle_workgroup, obj);
-	struct address_space *const mapping = sbi->managed_cache->i_mapping;
+	struct address_space *const mapping = MNGD_MAPPING(sbi);
 	const int clusterpages = erofs_clusterpages(sbi);
 	int i;
 
@@ -616,7 +616,7 @@ static int z_erofs_do_read_page(struct z_erofs_vle_frontend *fe,
 	struct z_erofs_vle_work *work = builder->work;
 
 #ifdef EROFS_FS_HAS_MANAGED_CACHE
-	struct address_space *const mngda = sbi->managed_cache->i_mapping;
+	struct address_space *const mc = MNGD_MAPPING(sbi);
 	struct z_erofs_vle_workgroup *grp;
 	bool noio_outoforder;
 #endif
@@ -664,7 +664,7 @@ static int z_erofs_do_read_page(struct z_erofs_vle_frontend *fe,
 	grp = fe->builder.grp;
 
 	/* let's do out-of-order decompression for noio */
-	noio_outoforder = grab_managed_cache_pages(mngda,
+	noio_outoforder = grab_managed_cache_pages(mc,
 		erofs_blknr(map->m_pa),
 		grp->compressed_pages, erofs_blknr(map->m_plen),
 		/* compressed page caching selection strategy */
@@ -758,7 +758,7 @@ static inline void z_erofs_vle_read_endio(struct bio *bio)
 	unsigned int i;
 	struct bio_vec *bvec;
 #ifdef EROFS_FS_HAS_MANAGED_CACHE
-	struct address_space *mngda = NULL;
+	struct address_space *mc = NULL;
 #endif
 
 	bio_for_each_segment_all(bvec, bio, i) {
@@ -769,18 +769,18 @@ static inline void z_erofs_vle_read_endio(struct bio *bio)
 		BUG_ON(!page->mapping);
 
 #ifdef EROFS_FS_HAS_MANAGED_CACHE
-		if (unlikely(!mngda && !z_erofs_is_stagingpage(page))) {
+		if (unlikely(!mc && !z_erofs_is_stagingpage(page))) {
 			struct inode *const inode = page->mapping->host;
 			struct super_block *const sb = inode->i_sb;
 
-			mngda = EROFS_SB(sb)->managed_cache->i_mapping;
+			mc = MNGD_MAPPING(EROFS_SB(sb));
 		}
 
 		/*
-		 * If mngda has not gotten, it equals NULL,
+		 * If mc has not gotten, it equals NULL,
 		 * however, page->mapping never be NULL if working properly.
 		 */
-		cachemngd = (page->mapping == mngda);
+		cachemngd = (page->mapping == mc);
 #endif
 
 		if (unlikely(err))
@@ -804,9 +804,6 @@ static int z_erofs_vle_unzip(struct super_block *sb,
 	struct list_head *page_pool)
 {
 	struct erofs_sb_info *const sbi = EROFS_SB(sb);
-#ifdef EROFS_FS_HAS_MANAGED_CACHE
-	struct address_space *const mngda = sbi->managed_cache->i_mapping;
-#endif
 	const unsigned int clusterpages = erofs_clusterpages(sbi);
 
 	struct z_erofs_pagevec_ctor ctor;
@@ -897,7 +894,7 @@ static int z_erofs_vle_unzip(struct super_block *sb,
 		if (z_erofs_is_stagingpage(page))
 			continue;
 #ifdef EROFS_FS_HAS_MANAGED_CACHE
-		else if (page->mapping == mngda) {
+		if (page->mapping == MNGD_MAPPING(sbi)) {
 			BUG_ON(PageLocked(page));
 			BUG_ON(!PageUptodate(page));
 			continue;
@@ -975,7 +972,7 @@ static int z_erofs_vle_unzip(struct super_block *sb,
 		page = compressed_pages[i];
 
 #ifdef EROFS_FS_HAS_MANAGED_CACHE
-		if (page->mapping == mngda)
+		if (page->mapping == MNGD_MAPPING(sbi))
 			continue;
 #endif
 		/* recycle all individual staging pages */
@@ -1108,7 +1105,7 @@ static bool z_erofs_vle_submit_all(struct super_block *sb,
 	const unsigned int clusterpages = erofs_clusterpages(sbi);
 	const gfp_t gfp = GFP_NOFS;
 #ifdef EROFS_FS_HAS_MANAGED_CACHE
-	struct address_space *const mngda = sbi->managed_cache->i_mapping;
+	struct address_space *const mc = MNGD_MAPPING(sbi);
 	struct z_erofs_vle_workgroup *lstgrp_noio = NULL, *lstgrp_io = NULL;
 #endif
 	struct z_erofs_vle_unzip_io *ios[1 + __FSIO_1];
@@ -1181,7 +1178,7 @@ static bool z_erofs_vle_submit_all(struct super_block *sb,
 			cachemngd = true;
 			goto do_allocpage;
 		} else if (page) {
-			if (page->mapping != mngda)
+			if (page->mapping != mc)
 				BUG_ON(PageUptodate(page));
 			else if (recover_managed_page(grp, page)) {
 				/* page is uptodate, skip io submission */
@@ -1204,7 +1201,7 @@ static bool z_erofs_vle_submit_all(struct super_block *sb,
 				goto repeat;
 #ifdef EROFS_FS_HAS_MANAGED_CACHE
 			} else if (cachemngd && !add_to_page_cache_lru(page,
-				mngda, first_index + i, gfp)) {
+				   mc, first_index + i, gfp)) {
 				set_page_private(page, (unsigned long)grp);
 				SetPagePrivate(page);
 #endif
-- 
2.14.4


* [PATCH v3 chao/erofs-dev 12/14] staging: erofs: fix compressed pages submission flow
From: Gao Xiang @ 2018-11-20 14:14 UTC (permalink / raw)


This patch fully closes the race between page reclaim and
compressed page submission, which could cause a reference leak
or a double free with very low probability.
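
The central trick is to store each compressed page slot as a tagged
pointer whose low bit marks the page as `justfound' (looked up in
the managed cache with an extra reference, but not yet re-verified
under the page lock), so the submission path can detect that it
raced with reclaim; conceptually:

    /* bit 0 = justfound, see z_erofs_ctptr_t in unzip_vle.h */
    t = tagptr_fold(z_erofs_ctptr_t, page, 1);  /* tag at lookup */
    /* ... published into grp->compressed_pages[] via cmpxchg ... */
    justfound = tagptr_unfold_tags(t);  /* re-checked at submission */
    page = tagptr_unfold_ptr(t);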

Signed-off-by: Gao Xiang <gaoxiang25 at huawei.com>
---
 drivers/staging/erofs/unzip_vle.c | 344 +++++++++++++++++++++++++-------------
 drivers/staging/erofs/unzip_vle.h |  15 ++
 2 files changed, 247 insertions(+), 112 deletions(-)

diff --git a/drivers/staging/erofs/unzip_vle.c b/drivers/staging/erofs/unzip_vle.c
index fbdf9483b860..aed426230860 100644
--- a/drivers/staging/erofs/unzip_vle.c
+++ b/drivers/staging/erofs/unzip_vle.c
@@ -15,6 +15,15 @@
 
 #include <trace/events/erofs.h>
 
+/* how to allocate cached pages for a workgroup */
+enum z_erofs_cache_alloctype {
+	DONTALLOC,	/* don't allocate any cached pages */
+	TRYALLOC,	/* minimal effort (w/o page reclaiming) */
+	DELAYEDALLOC,	/* delayed allocation (at the time of submitting io) */
+};
+
+#define PAGE_UNALLOCATED	((void *)0x5F0EF00D)
+
 static struct workqueue_struct *z_erofs_workqueue __read_mostly;
 static struct kmem_cache *z_erofs_workgroup_cachep __read_mostly;
 
@@ -125,38 +134,68 @@ struct z_erofs_vle_work_builder {
 	{ .work = NULL, .role = Z_EROFS_VLE_WORK_PRIMARY_FOLLOWED }
 
 #ifdef EROFS_FS_HAS_MANAGED_CACHE
-
-static bool grab_managed_cache_pages(struct address_space *mapping,
-				     erofs_blk_t start,
-				     struct page **compressed_pages,
-				     int clusterblks,
-				     bool reserve_allocation)
+static void preload_compressed_pages(struct z_erofs_vle_work_builder *bl,
+				     struct address_space *mc,
+				     pgoff_t index,
+				     unsigned int clusterpages,
+				     enum z_erofs_cache_alloctype type,
+				     struct list_head *pagepool,
+				     gfp_t gfp)
 {
-	bool noio = true;
-	unsigned int i;
+	struct page **const pages = bl->compressed_pages;
+	const unsigned int remaining = bl->compressed_deficit;
+	bool standalone = true;
+	unsigned int i, j = 0;
+
+	if (bl->role < Z_EROFS_VLE_WORK_PRIMARY_FOLLOWED)
+		return;
+
+	gfp = mapping_gfp_constraint(mc, gfp) & ~__GFP_RECLAIM;
 
-	/* TODO: optimize by introducing find_get_pages_range */
-	for (i = 0; i < clusterblks; ++i) {
-		struct page *page, *found;
+	index += clusterpages - remaining;
 
-		if (READ_ONCE(compressed_pages[i]))
+	for (i = 0; i < remaining; ++i) {
+		struct page *page, *newpage = NULL;
+		z_erofs_ctptr_t t;
+
+		/* the compressed page was loaded before */
+		if (READ_ONCE(pages[i]))
 			continue;
 
-		page = found = find_get_page(mapping, start + i);
-		if (!found) {
-			noio = false;
-			if (!reserve_allocation)
+		page = find_get_page(mc, index + i);
+
+		if (page) {
+			t = z_erofs_ctptr_tag_justfound(page);
+		} else if (type == DELAYEDALLOC) {
+			t = tagptr_init(z_erofs_ctptr_t, PAGE_UNALLOCATED);
+		} else if (type == TRYALLOC) {
+			newpage = erofs_allocpage(pagepool, gfp);
+
+			if (!newpage)
 				continue;
-			page = EROFS_UNALLOCATED_CACHED_PAGE;
+			newpage->mapping = Z_EROFS_MAPPING_PREALLOCATED;
+			t = z_erofs_ctptr_tag_justfound(newpage);
+		} else {			/* DONTALLOC */
+			if (standalone)
+				j = i;
+			standalone = false;
+			continue;
 		}
 
-		if (!cmpxchg(compressed_pages + i, NULL, page))
+		if (!cmpxchg(&pages[i], NULL, tagptr_cast_ptr(t)))
 			continue;
 
-		if (found)
-			put_page(found);
+		if (page)
+			put_page(page);
+		else if (newpage)
+			/* someone just allocated this page, drop our attempt */
+			list_add(&newpage->lru, pagepool);
 	}
-	return noio;
+	bl->compressed_pages += j;
+	bl->compressed_deficit = remaining - j;
+
+	if (standalone)
+		bl->role = Z_EROFS_VLE_WORK_PRIMARY;
 }
 
 /* called by erofs_shrinker to get rid of all compressed_pages */
@@ -228,6 +267,17 @@ int erofs_try_to_free_cached_page(struct address_space *mapping,
 	}
 	return ret;
 }
+#else
+static void preload_compressed_pages(struct z_erofs_vle_work_builder *bl,
+				     struct address_space *mc,
+				     pgoff_t index,
+				     unsigned int clusterpages,
+				     enum z_erofs_cache_alloctype type,
+				     struct list_head *pagepool,
+				     gfp_t gfp)
+{
+	/* nowhere to load compressed pages from */
+}
 #endif
 
 /* page_type must be Z_EROFS_PAGE_TYPE_EXCLUSIVE */
@@ -601,6 +651,26 @@ struct z_erofs_vle_frontend {
 	.owned_head = Z_EROFS_VLE_WORKGRP_TAIL, \
 	.backmost = true, }
 
+#ifdef EROFS_FS_HAS_MANAGED_CACHE
+static inline bool
+should_alloc_managed_pages(struct z_erofs_vle_frontend *fe, erofs_off_t la)
+{
+	if (fe->backmost)
+		return true;
+
+	if (EROFS_FS_ZIP_CACHE_LVL >= 2)
+		return la < fe->headoffset;
+
+	return false;
+}
+#else
+static inline bool
+should_alloc_managed_pages(struct z_erofs_vle_frontend *fe, erofs_off_t la)
+{
+	return false;
+}
+#endif
+
 static int z_erofs_do_read_page(struct z_erofs_vle_frontend *fe,
 				struct page *page,
 				struct list_head *page_pool)
@@ -615,12 +685,7 @@ static int z_erofs_do_read_page(struct z_erofs_vle_frontend *fe,
 	bool tight = builder_is_followed(builder);
 	struct z_erofs_vle_work *work = builder->work;
 
-#ifdef EROFS_FS_HAS_MANAGED_CACHE
-	struct address_space *const mc = MNGD_MAPPING(sbi);
-	struct z_erofs_vle_workgroup *grp;
-	bool noio_outoforder;
-#endif
-
+	enum z_erofs_cache_alloctype cache_strategy;
 	enum z_erofs_page_type page_type;
 	unsigned int cur, end, spiltted, index;
 	int err = 0;
@@ -660,20 +725,16 @@ static int z_erofs_do_read_page(struct z_erofs_vle_frontend *fe,
 	if (unlikely(err))
 		goto err_out;
 
-#ifdef EROFS_FS_HAS_MANAGED_CACHE
-	grp = fe->builder.grp;
-
-	/* let's do out-of-order decompression for noio */
-	noio_outoforder = grab_managed_cache_pages(mc,
-		erofs_blknr(map->m_pa),
-		grp->compressed_pages, erofs_blknr(map->m_plen),
-		/* compressed page caching selection strategy */
-		fe->backmost | (EROFS_FS_ZIP_CACHE_LVL >= 2 ?
-				map->m_la < fe->headoffset : 0));
-
-	if (noio_outoforder && builder_is_followed(builder))
-		builder->role = Z_EROFS_VLE_WORK_PRIMARY;
-#endif
+	/* preload all compressed pages and downgrade role if necessary */
+	if (should_alloc_managed_pages(fe, map->m_la))
+		cache_strategy = DELAYEDALLOC;
+	else
+		cache_strategy = DONTALLOC;
+
+	preload_compressed_pages(builder, MNGD_MAPPING(sbi),
+				 map->m_pa / PAGE_SIZE,
+				 map->m_plen / PAGE_SIZE,
+				 cache_strategy, page_pool, GFP_KERNEL);
 
 	tight &= builder_is_followed(builder);
 	work = builder->work;
@@ -1035,6 +1096,124 @@ static void z_erofs_vle_unzip_wq(struct work_struct *work)
 	kvfree(iosb);
 }
 
+static struct page *
+pickup_page_for_submission(struct z_erofs_vle_workgroup *grp,
+			   unsigned int nr,
+			   struct list_head *pagepool,
+			   struct address_space *mc,
+			   gfp_t gfp)
+{
+	/* determined at compile time to avoid using macros. */
+	const bool nocache = __builtin_constant_p(mc) ? !mc : false;
+	const pgoff_t index = grp->obj.index;
+	bool tocache = false;
+
+	struct address_space *mapping;
+	struct page *oldpage, *page;
+
+	z_erofs_ctptr_t t;
+	int justfound;
+
+repeat:
+	page = READ_ONCE(grp->compressed_pages[nr]);
+	oldpage = page;
+
+	if (!page)
+		goto out_allocpage;
+
+	if (!nocache) {
+		if (page == PAGE_UNALLOCATED) {
+			tocache = true;
+			goto out_allocpage;
+		}
+
+		if (z_erofs_is_preallocatedpage(page))
+			goto out_add_to_managed_cache;
+	}
+
+	/* process the target tagged pointer */
+	t = tagptr_init(z_erofs_ctptr_t, page);
+	justfound = tagptr_unfold_tags(t);
+	page = tagptr_unfold_ptr(t);
+
+	mapping = READ_ONCE(page->mapping);
+
+	if (nocache) {
+		/* `justfound' is impossible if managed cache is disabled */
+		DBG_BUGON(justfound);
+
+		/* and it should be locked, not uptodate, and not truncated */
+		DBG_BUGON(!PageLocked(page));
+		DBG_BUGON(PageUptodate(page));
+		DBG_BUGON(!mapping);
+		goto out;
+	}
+
+	/*
+	 * unmanaged pages are all locked,
+	 * therefore it is impossible for `mapping' to be NULL.
+	 */
+	if (mapping && mapping != mc)
+		/* ought to be unmanaged pages */
+		goto out;
+
+	lock_page(page);
+	/* only true if page reclaim goes wrong, should never happen */
+	DBG_BUGON(justfound && PagePrivate(page));
+
+	if (page->mapping == mc) {
+		WRITE_ONCE(grp->compressed_pages[nr], page);
+
+		if (!PagePrivate(page)) {
+			/*
+			 * impossible to be !PagePrivate(page) for the current
+			 * implementation as well if the page is already in
+			 * compressed_pages[].
+			 */
+			DBG_BUGON(!justfound);
+
+			justfound = 0;
+			set_page_private(page, (unsigned long)grp);
+			SetPagePrivate(page);
+		}
+
+		/* no need to submit bio if the page is already up-to-date */
+		if (PageUptodate(page)) {
+			unlock_page(page);
+			page = NULL;
+		}
+		goto out;
+	}
+
+	/* and for the truncation case (page is still locked) */
+	DBG_BUGON(page->mapping);
+	/* currently, truncation only happens after being disconnected */
+	DBG_BUGON(!justfound);
+
+	tocache = true;
+	unlock_page(page);
+	put_page(page);
+out_allocpage:
+	page = __stagingpage_alloc(pagepool, gfp);
+	if (oldpage != cmpxchg(&grp->compressed_pages[nr], oldpage, page)) {
+		list_add(&page->lru, pagepool);
+		cpu_relax();
+		goto repeat;
+	}
+	if (nocache || !tocache)
+		goto out;
+out_add_to_managed_cache:
+	if (add_to_page_cache_lru(page, mc, index + nr, gfp)) {
+		page->mapping = Z_EROFS_MAPPING_STAGING;
+		goto out;
+	}
+
+	set_page_private(page, (unsigned long)grp);
+	SetPagePrivate(page);
+out:	/* the only exit (for tracing and debugging) */
+	return page;
+}
+
 static inline struct z_erofs_vle_unzip_io *
 prepare_io_handler(struct super_block *sb,
 		   struct z_erofs_vle_unzip_io *io,
@@ -1070,26 +1249,6 @@ prepare_io_handler(struct super_block *sb,
 }
 
 #ifdef EROFS_FS_HAS_MANAGED_CACHE
-/* true - unlocked (noio), false - locked (need submit io) */
-static inline bool recover_managed_page(struct z_erofs_vle_workgroup *grp,
-					struct page *page)
-{
-	wait_on_page_locked(page);
-	if (PagePrivate(page) && PageUptodate(page))
-		return true;
-
-	lock_page(page);
-	if (unlikely(!PagePrivate(page))) {
-		set_page_private(page, (unsigned long)grp);
-		SetPagePrivate(page);
-	}
-	if (unlikely(PageUptodate(page))) {
-		unlock_page(page);
-		return true;
-	}
-	return false;
-}
-
 #define __FSIO_1 1
 #else
 #define __FSIO_1 0
@@ -1105,7 +1264,6 @@ static bool z_erofs_vle_submit_all(struct super_block *sb,
 	const unsigned int clusterpages = erofs_clusterpages(sbi);
 	const gfp_t gfp = GFP_NOFS;
 #ifdef EROFS_FS_HAS_MANAGED_CACHE
-	struct address_space *const mc = MNGD_MAPPING(sbi);
 	struct z_erofs_vle_workgroup *lstgrp_noio = NULL, *lstgrp_io = NULL;
 #endif
 	struct z_erofs_vle_unzip_io *ios[1 + __FSIO_1];
@@ -1144,13 +1302,9 @@ static bool z_erofs_vle_submit_all(struct super_block *sb,
 
 	do {
 		struct z_erofs_vle_workgroup *grp;
-		struct page **compressed_pages, *oldpage, *page;
 		pgoff_t first_index;
-		unsigned int i = 0;
-#ifdef EROFS_FS_HAS_MANAGED_CACHE
-		unsigned int noio = 0;
-		bool cachemngd;
-#endif
+		struct page *page;
+		unsigned int i = 0, nr_uptodate = 0;
 		int err;
 
 		/* no possible 'owned_head' equals the following */
@@ -1161,51 +1315,19 @@ static bool z_erofs_vle_submit_all(struct super_block *sb,
 
 		/* close the main owned chain at first */
 		owned_head = cmpxchg(&grp->next, Z_EROFS_VLE_WORKGRP_TAIL,
-			Z_EROFS_VLE_WORKGRP_TAIL_CLOSED);
+				     Z_EROFS_VLE_WORKGRP_TAIL_CLOSED);
 
 		first_index = grp->obj.index;
-		compressed_pages = grp->compressed_pages;
-
 		force_submit |= (first_index != last_index + 1);
-repeat:
-		/* fulfill all compressed pages */
-		oldpage = page = READ_ONCE(compressed_pages[i]);
-
-#ifdef EROFS_FS_HAS_MANAGED_CACHE
-		cachemngd = false;
-
-		if (page == EROFS_UNALLOCATED_CACHED_PAGE) {
-			cachemngd = true;
-			goto do_allocpage;
-		} else if (page) {
-			if (page->mapping != mc)
-				BUG_ON(PageUptodate(page));
-			else if (recover_managed_page(grp, page)) {
-				/* page is uptodate, skip io submission */
-				force_submit = true;
-				++noio;
-				goto skippage;
-			}
-		} else {
-do_allocpage:
-#else
-		if (page)
-			BUG_ON(PageUptodate(page));
-		else {
-#endif
-			page = __stagingpage_alloc(pagepool, gfp);
 
-			if (oldpage != cmpxchg(compressed_pages + i,
-				oldpage, page)) {
-				list_add(&page->lru, pagepool);
-				goto repeat;
-#ifdef EROFS_FS_HAS_MANAGED_CACHE
-			} else if (cachemngd && !add_to_page_cache_lru(page,
-				   mc, first_index + i, gfp)) {
-				set_page_private(page, (unsigned long)grp);
-				SetPagePrivate(page);
-#endif
-			}
+		/* fulfill all compressed pages */
+repeat:
+		page = pickup_page_for_submission(grp, i, pagepool,
+						  MNGD_MAPPING(sbi), gfp);
+		if (!page) {
+			force_submit = true;
+			++nr_uptodate;
+			goto skippage;
 		}
 
 		if (bio && force_submit) {
@@ -1228,14 +1350,12 @@ static bool z_erofs_vle_submit_all(struct super_block *sb,
 
 		force_submit = false;
 		last_index = first_index + i;
-#ifdef EROFS_FS_HAS_MANAGED_CACHE
 skippage:
-#endif
 		if (++i < clusterpages)
 			goto repeat;
 
 #ifdef EROFS_FS_HAS_MANAGED_CACHE
-		if (noio < clusterpages) {
+		if (nr_uptodate < clusterpages) {
 			lstgrp_io = grp;
 		} else {
 			z_erofs_vle_owned_workgrp_t iogrp_next =
diff --git a/drivers/staging/erofs/unzip_vle.h b/drivers/staging/erofs/unzip_vle.h
index 3316bc36965d..6f4c7440aeb1 100644
--- a/drivers/staging/erofs/unzip_vle.h
+++ b/drivers/staging/erofs/unzip_vle.h
@@ -36,6 +36,15 @@ static inline bool z_erofs_gather_if_stagingpage(struct list_head *page_pool,
 	return false;
 }
 
+/*
+ *  - 0x6A110C8D ('pallocated', Z_EROFS_MAPPING_PREALLOCATED) -
+ * preallocated cached pages, will be added into managed cache later
+ */
+#define Z_EROFS_MAPPING_PREALLOCATED	((void *)0x6A110C8D)
+
+#define z_erofs_is_preallocatedpage(page)	\
+	((page)->mapping == Z_EROFS_MAPPING_PREALLOCATED)
+
 /*
  * Structure fields follow one of the following exclusion rules.
  *
@@ -69,6 +78,12 @@ struct z_erofs_vle_work {
 
 typedef struct z_erofs_vle_workgroup *z_erofs_vle_owned_workgrp_t;
 
+/* compressed page tagptr (bit 0 - justfound, with an extra reference) */
+typedef tagptr1_t z_erofs_ctptr_t;
+
+#define z_erofs_ctptr_tag_justfound(page) \
+	tagptr_fold(z_erofs_ctptr_t, page, 1)
+
 struct z_erofs_vle_workgroup {
 	struct erofs_workgroup obj;
 	struct z_erofs_vle_work work;
-- 
2.14.4


* [PATCH v3 chao/erofs-dev 13/14] staging: erofs: redefine where `owned_workgrp_t' points
From: Gao Xiang @ 2018-11-20 14:14 UTC (permalink / raw)


By design, workgroups are queued in the form of a linked list.

Previously, `next' pointed to the next `z_erofs_vle_workgroup',
which isn't flexible enough to simplify the
`z_erofs_vle_submit_all' logic.

Let's make it point to the next `owned_workgrp_t' instead and use
container_of to get the corresponding `z_erofs_vle_workgroup'.
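
This is the common kernel pattern of chaining objects through an
embedded member and recovering the container with container_of();
applied here:

    /* the chain now stores the address of the `next' member ... */
    *owned_head = &grp->next;

    /* ... and consumers convert it back to the workgroup */
    grp = container_of(owned_head,
                       struct z_erofs_vle_workgroup, next);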

Signed-off-by: Gao Xiang <gaoxiang25 at huawei.com>
---
 drivers/staging/erofs/unzip_vle.c | 12 +++++++-----
 drivers/staging/erofs/unzip_vle.h |  4 ++--
 2 files changed, 9 insertions(+), 7 deletions(-)

diff --git a/drivers/staging/erofs/unzip_vle.c b/drivers/staging/erofs/unzip_vle.c
index aed426230860..f0f2367c1131 100644
--- a/drivers/staging/erofs/unzip_vle.c
+++ b/drivers/staging/erofs/unzip_vle.c
@@ -331,7 +331,7 @@ static inline bool try_to_claim_workgroup(
 			    *owned_head) != Z_EROFS_VLE_WORKGRP_NIL)
 			goto retry;
 
-		*owned_head = grp;
+		*owned_head = &grp->next;
 		*hosted = true;
 	} else if (grp->next == Z_EROFS_VLE_WORKGRP_TAIL) {
 		/*
@@ -485,7 +485,8 @@ z_erofs_vle_work_register(const struct z_erofs_vle_work_finder *f,
 		}
 	}
 
-	*f->owned_head = *f->grp_ret = grp;
+	*f->owned_head = &grp->next;
+	*f->grp_ret = grp;
 	return work;
 }
 
@@ -1076,7 +1077,7 @@ static void z_erofs_vle_unzip_all(struct super_block *sb,
 		/* no possible that 'owned' equals NULL */
 		DBG_BUGON(owned == Z_EROFS_VLE_WORKGRP_NIL);
 
-		grp = owned;
+		grp = container_of(owned, struct z_erofs_vle_workgroup, next);
 		owned = READ_ONCE(grp->next);
 
 		z_erofs_vle_unzip(sb, grp, page_pool);
@@ -1311,7 +1312,8 @@ static bool z_erofs_vle_submit_all(struct super_block *sb,
 		DBG_BUGON(owned_head == Z_EROFS_VLE_WORKGRP_TAIL_CLOSED);
 		DBG_BUGON(owned_head == Z_EROFS_VLE_WORKGRP_NIL);
 
-		grp = owned_head;
+		grp = container_of(owned_head,
+				   struct z_erofs_vle_workgroup, next);
 
 		/* close the main owned chain at first */
 		owned_head = cmpxchg(&grp->next, Z_EROFS_VLE_WORKGRP_TAIL,
@@ -1369,7 +1371,7 @@ static bool z_erofs_vle_submit_all(struct super_block *sb,
 				WRITE_ONCE(lstgrp_io->next, iogrp_next);
 
 			if (!lstgrp_noio)
-				ios[0]->head = grp;
+				ios[0]->head = &grp->next;
 			else
 				WRITE_ONCE(lstgrp_noio->next, grp);
 
diff --git a/drivers/staging/erofs/unzip_vle.h b/drivers/staging/erofs/unzip_vle.h
index 6f4c7440aeb1..3fb44f1bd8f0 100644
--- a/drivers/staging/erofs/unzip_vle.h
+++ b/drivers/staging/erofs/unzip_vle.h
@@ -76,7 +76,7 @@ struct z_erofs_vle_work {
 #define Z_EROFS_VLE_WORKGRP_FMT_LZ4          1
 #define Z_EROFS_VLE_WORKGRP_FMT_MASK         1
 
-typedef struct z_erofs_vle_workgroup *z_erofs_vle_owned_workgrp_t;
+typedef void *z_erofs_vle_owned_workgrp_t;
 
 /* compressed page tagptr (bit 0 - justfound, with an extra reference) */
 typedef tagptr1_t z_erofs_ctptr_t;
@@ -88,7 +88,7 @@ struct z_erofs_vle_workgroup {
 	struct erofs_workgroup obj;
 	struct z_erofs_vle_work work;
 
-	/* next owned workgroup */
+	/* point to next owned_workgrp_t */
 	z_erofs_vle_owned_workgrp_t next;
 
 	/* compressed pages (including multi-usage pages) */
-- 
2.14.4


* [PATCH v3 chao/erofs-dev 14/14] staging: erofs: simplify `z_erofs_vle_submit_all'
From: Gao Xiang @ 2018-11-20 14:14 UTC (permalink / raw)


Previously, there were too many hacks, such as `__FSIO_1',
`lstgrp_noio' and `lstgrp_io', in `z_erofs_vle_submit_all'.

Rework the whole process by properly introducing the jobqueue idea:
each type of queued workgroup gets its own jobqueue, and two
independent jobqueues are used when the managed cache is enabled.
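
With the rework, submission walks the owned chain once and sorts
each workgroup into one of (at most) two queues instead of juggling
`lstgrp_*' pointers by hand; see jobqueueset_init() and
move_to_bypass_jobqueue() below:

    enum {
    #ifdef EROFS_FS_HAS_MANAGED_CACHE
            JQ_BYPASS,      /* all pages already up-to-date, no I/O */
    #endif
            JQ_SUBMIT,      /* needs actual bio submission */
            NR_JOBQUEUES,
    };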

Signed-off-by: Gao Xiang <gaoxiang25 at huawei.com>
---
 drivers/staging/erofs/unzip_vle.c | 195 ++++++++++++++++++++++----------------
 1 file changed, 113 insertions(+), 82 deletions(-)

diff --git a/drivers/staging/erofs/unzip_vle.c b/drivers/staging/erofs/unzip_vle.c
index f0f2367c1131..053ebe5c7d7c 100644
--- a/drivers/staging/erofs/unzip_vle.c
+++ b/drivers/staging/erofs/unzip_vle.c
@@ -1215,33 +1215,28 @@ pickup_page_for_submission(struct z_erofs_vle_workgroup *grp,
 	return page;
 }
 
-static inline struct z_erofs_vle_unzip_io *
-prepare_io_handler(struct super_block *sb,
-		   struct z_erofs_vle_unzip_io *io,
-		   bool background)
+static struct z_erofs_vle_unzip_io *
+jobqueue_init(struct super_block *sb,
+	      struct z_erofs_vle_unzip_io *io,
+	      bool foreground)
 {
 	struct z_erofs_vle_unzip_io_sb *iosb;
 
-	if (!background) {
+	if (foreground) {
 		/* waitqueue available for foreground io */
-		BUG_ON(!io);
+		DBG_BUGON(!io);
 
 		init_waitqueue_head(&io->u.wait);
 		atomic_set(&io->pending_bios, 0);
 		goto out;
 	}
 
-	if (io)
-		BUG();
-	else {
-		/* allocate extra io descriptor for background io */
-		iosb = kvzalloc(sizeof(struct z_erofs_vle_unzip_io_sb),
+	iosb = kvzalloc(sizeof(struct z_erofs_vle_unzip_io_sb),
 			GFP_KERNEL | __GFP_NOFAIL);
-		BUG_ON(!iosb);
-
-		io = &iosb->io;
-	}
+	DBG_BUGON(!iosb);
 
+	/* initialize fields in the allocated descriptor */
+	io = &iosb->io;
 	iosb->sb = sb;
 	INIT_WORK(&io->u.work, z_erofs_vle_unzip_wq);
 out:
@@ -1249,27 +1244,105 @@ prepare_io_handler(struct super_block *sb,
 	return io;
 }
 
+/* define workgroup jobqueue types */
+enum {
+#ifdef EROFS_FS_HAS_MANAGED_CACHE
+	JQ_BYPASS,
+#endif
+	JQ_SUBMIT,
+	NR_JOBQUEUES,
+};
+
+static void *jobqueueset_init(struct super_block *sb,
+			      z_erofs_vle_owned_workgrp_t qtail[],
+			      struct z_erofs_vle_unzip_io *q[],
+			      struct z_erofs_vle_unzip_io *fgq,
+			      bool forcefg)
+{
 #ifdef EROFS_FS_HAS_MANAGED_CACHE
-#define __FSIO_1 1
+	/*
+	 * if managed cache is enabled, bypass jobqueue is needed,
+	 * no need to read from device for all workgroups in this queue.
+	 */
+	q[JQ_BYPASS] = jobqueue_init(sb, fgq + JQ_BYPASS, true);
+	qtail[JQ_BYPASS] = &q[JQ_BYPASS]->head;
+#endif
+
+	q[JQ_SUBMIT] = jobqueue_init(sb, fgq + JQ_SUBMIT, forcefg);
+	qtail[JQ_SUBMIT] = &q[JQ_SUBMIT]->head;
+
+	return tagptr_cast_ptr(tagptr_fold(tagptr1_t, q[JQ_SUBMIT], !forcefg));
+}
+
+#ifdef EROFS_FS_HAS_MANAGED_CACHE
+static void move_to_bypass_jobqueue(struct z_erofs_vle_workgroup *grp,
+				    z_erofs_vle_owned_workgrp_t qtail[],
+				    z_erofs_vle_owned_workgrp_t owned_head)
+{
+	z_erofs_vle_owned_workgrp_t *const submit_qtail = qtail[JQ_SUBMIT];
+	z_erofs_vle_owned_workgrp_t *const bypass_qtail = qtail[JQ_BYPASS];
+
+	DBG_BUGON(owned_head == Z_EROFS_VLE_WORKGRP_TAIL_CLOSED);
+	if (owned_head == Z_EROFS_VLE_WORKGRP_TAIL)
+		owned_head = Z_EROFS_VLE_WORKGRP_TAIL_CLOSED;
+
+	WRITE_ONCE(grp->next, Z_EROFS_VLE_WORKGRP_TAIL_CLOSED);
+
+	WRITE_ONCE(*submit_qtail, owned_head);
+	WRITE_ONCE(*bypass_qtail, &grp->next);
+
+	qtail[JQ_BYPASS] = &grp->next;
+}
+
+static bool postsubmit_is_all_bypassed(struct z_erofs_vle_unzip_io *q[],
+				       unsigned int nr_bios,
+				       bool force_fg)
+{
+	/*
+	 * although background is preferred, no one is pending for submission.
+	 * don't issue workqueue for decompression but drop it directly instead.
+	 */
+	if (force_fg || nr_bios)
+		return false;
+
+	kvfree(container_of(q[JQ_SUBMIT],
+			    struct z_erofs_vle_unzip_io_sb,
+			    io));
+	return true;
+}
 #else
-#define __FSIO_1 0
+static void move_to_bypass_jobqueue(struct z_erofs_vle_workgroup *grp,
+				    z_erofs_vle_owned_workgrp_t qtail[],
+				    z_erofs_vle_owned_workgrp_t owned_head)
+{
+	/* impossible to bypass submission for managed cache disabled */
+	DBG_BUGON(1);
+}
+
+static bool postsubmit_is_all_bypassed(struct z_erofs_vle_unzip_io *q[],
+				       unsigned int nr_bios,
+				       bool force_fg)
+{
+	/* bios should be >0 if managed cache is disabled */
+	DBG_BUGON(!nr_bios);
+	return false;
+}
 #endif
 
 static bool z_erofs_vle_submit_all(struct super_block *sb,
 				   z_erofs_vle_owned_workgrp_t owned_head,
 				   struct list_head *pagepool,
-				   struct z_erofs_vle_unzip_io *fg_io,
+				   struct z_erofs_vle_unzip_io *fgq,
 				   bool force_fg)
 {
 	struct erofs_sb_info *const sbi = EROFS_SB(sb);
 	const unsigned int clusterpages = erofs_clusterpages(sbi);
 	const gfp_t gfp = GFP_NOFS;
-#ifdef EROFS_FS_HAS_MANAGED_CACHE
-	struct z_erofs_vle_workgroup *lstgrp_noio = NULL, *lstgrp_io = NULL;
-#endif
-	struct z_erofs_vle_unzip_io *ios[1 + __FSIO_1];
+
+	z_erofs_vle_owned_workgrp_t qtail[NR_JOBQUEUES];
+	struct z_erofs_vle_unzip_io *q[NR_JOBQUEUES];
 	struct bio *bio;
-	tagptr1_t bi_private;
+	void *bi_private;
 	/* since bio will be NULL, no need to initialize last_index */
 	pgoff_t uninitialized_var(last_index);
 	bool force_submit = false;
@@ -1278,28 +1351,13 @@ static bool z_erofs_vle_submit_all(struct super_block *sb,
 	if (unlikely(owned_head == Z_EROFS_VLE_WORKGRP_TAIL))
 		return false;
 
-	/*
-	 * force_fg == 1, (io, fg_io[0]) no io, (io, fg_io[1]) need submit io
-	 * force_fg == 0, (io, fg_io[0]) no io; (io[1], bg_io) need submit io
-	 */
-#ifdef EROFS_FS_HAS_MANAGED_CACHE
-	ios[0] = prepare_io_handler(sb, fg_io + 0, false);
-#endif
-
-	if (force_fg) {
-		ios[__FSIO_1] = prepare_io_handler(sb, fg_io + __FSIO_1, false);
-		bi_private = tagptr_fold(tagptr1_t, ios[__FSIO_1], 0);
-	} else {
-		ios[__FSIO_1] = prepare_io_handler(sb, NULL, true);
-		bi_private = tagptr_fold(tagptr1_t, ios[__FSIO_1], 1);
-	}
-
-	nr_bios = 0;
 	force_submit = false;
 	bio = NULL;
+	nr_bios = 0;
+	bi_private = jobqueueset_init(sb, qtail, q, fgq, force_fg);
 
 	/* by default, all need io submission */
-	ios[__FSIO_1]->head = owned_head;
+	q[JQ_SUBMIT]->head = owned_head;
 
 	do {
 		struct z_erofs_vle_workgroup *grp;
@@ -1340,8 +1398,9 @@ static bool z_erofs_vle_submit_all(struct super_block *sb,
 
 		if (!bio) {
 			bio = erofs_grab_bio(sb, first_index + i,
-				BIO_MAX_PAGES, z_erofs_vle_read_endio, true);
-			bio->bi_private = tagptr_cast_ptr(bi_private);
+					     BIO_MAX_PAGES,
+					     z_erofs_vle_read_endio, true);
+			bio->bi_private = bi_private;
 
 			++nr_bios;
 		}
@@ -1356,47 +1415,19 @@ static bool z_erofs_vle_submit_all(struct super_block *sb,
 		if (++i < clusterpages)
 			goto repeat;
 
-#ifdef EROFS_FS_HAS_MANAGED_CACHE
-		if (nr_uptodate < clusterpages) {
-			lstgrp_io = grp;
-		} else {
-			z_erofs_vle_owned_workgrp_t iogrp_next =
-				owned_head == Z_EROFS_VLE_WORKGRP_TAIL ?
-				Z_EROFS_VLE_WORKGRP_TAIL_CLOSED :
-				owned_head;
-
-			if (!lstgrp_io)
-				ios[1]->head = iogrp_next;
-			else
-				WRITE_ONCE(lstgrp_io->next, iogrp_next);
-
-			if (!lstgrp_noio)
-				ios[0]->head = &grp->next;
-			else
-				WRITE_ONCE(lstgrp_noio->next, grp);
-
-			lstgrp_noio = grp;
-		}
-#endif
+		if (nr_uptodate < clusterpages)
+			qtail[JQ_SUBMIT] = &grp->next;
+		else
+			move_to_bypass_jobqueue(grp, qtail, owned_head);
 	} while (owned_head != Z_EROFS_VLE_WORKGRP_TAIL);
 
 	if (bio)
 		__submit_bio(bio, REQ_OP_READ, 0);
 
-#ifndef EROFS_FS_HAS_MANAGED_CACHE
-	BUG_ON(!nr_bios);
-#else
-	if (lstgrp_noio)
-		WRITE_ONCE(lstgrp_noio->next, Z_EROFS_VLE_WORKGRP_TAIL_CLOSED);
-
-	if (!force_fg && !nr_bios) {
-		kvfree(container_of(ios[1],
-			struct z_erofs_vle_unzip_io_sb, io));
+	if (postsubmit_is_all_bypassed(q, nr_bios, force_fg))
 		return true;
-	}
-#endif
 
-	z_erofs_vle_unzip_kickoff(tagptr_cast_ptr(bi_private), nr_bios);
+	z_erofs_vle_unzip_kickoff(bi_private, nr_bios);
 	return true;
 }
 
@@ -1405,23 +1436,23 @@ static void z_erofs_submit_and_unzip(struct z_erofs_vle_frontend *f,
 				     bool force_fg)
 {
 	struct super_block *sb = f->inode->i_sb;
-	struct z_erofs_vle_unzip_io io[1 + __FSIO_1];
+	struct z_erofs_vle_unzip_io io[NR_JOBQUEUES];
 
 	if (!z_erofs_vle_submit_all(sb, f->owned_head, pagepool, io, force_fg))
 		return;
 
 #ifdef EROFS_FS_HAS_MANAGED_CACHE
-	z_erofs_vle_unzip_all(sb, &io[0], pagepool);
+	z_erofs_vle_unzip_all(sb, &io[JQ_BYPASS], pagepool);
 #endif
 	if (!force_fg)
 		return;
 
 	/* wait until all bios are completed */
-	wait_event(io[__FSIO_1].u.wait,
-		!atomic_read(&io[__FSIO_1].pending_bios));
+	wait_event(io[JQ_SUBMIT].u.wait,
+		   !atomic_read(&io[JQ_SUBMIT].pending_bios));
 
 	/* let's synchronous decompression */
-	z_erofs_vle_unzip_all(sb, &io[__FSIO_1], pagepool);
+	z_erofs_vle_unzip_all(sb, &io[JQ_SUBMIT], pagepool);
 }
 
 static int z_erofs_vle_normalaccess_readpage(struct file *file,
-- 
2.14.4

