From: Gao Xiang <gaoxiang25@huawei.com>
Subject: [PATCH v3 chao/erofs-dev 07/14] staging: erofs: separate into init_once / always
Date: Tue, 20 Nov 2018 22:14:41 +0800	[thread overview]
Message-ID: <20181120141448.29483-7-gaoxiang25@huawei.com> (raw)
In-Reply-To: <20181120141448.29483-1-gaoxiang25@huawei.com>

`z_erofs_vle_workgroup' is heavily allocated during decompression.
For example, 32 bytes are redundantly zeroed on 64-bit platforms
even though only Z_EROFS_VLE_INLINE_PAGEVECS + Z_EROFS_CLUSTER_MAX_PAGES,
4 by default, pages are stored in `z_erofs_vle_workgroup'.

As another example, `struct mutex' takes 72 bytes on our kirin
64-bit platforms; it is unnecessary to zero it first and then
initialize it on every allocation.

Let's avoid zeroing the whole `z_erofs_vle_workgroup' up front,
since most fields are reinitialized to meaningful values later anyway,
and the pagevec does not need to be initialized at all.
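
To illustrate the pattern this patch relies on, here is a minimal
standalone sketch (hypothetical names such as `demo_obj', not the actual
erofs code): a kmem_cache constructor runs only when an object is first
set up in a newly allocated slab page, not on every kmem_cache_alloc(),
so fields that survive a free/alloc cycle (e.g. an initialized mutex)
are set up there, while fields that must be fresh on every allocation
are set explicitly after kmem_cache_alloc() instead of paying for
kmem_cache_zalloc():

  #include <linux/slab.h>
  #include <linux/mutex.h>
  #include <linux/atomic.h>

  struct demo_obj {
  	struct mutex lock;	/* constructed once per slab object */
  	atomic_t refcount;	/* must be fresh on every allocation */
  };

  static struct kmem_cache *demo_cachep;

  /* runs once when the object is constructed in a new slab page */
  static void demo_init_once(void *ptr)
  {
  	struct demo_obj *obj = ptr;

  	mutex_init(&obj->lock);
  }

  static int __init demo_cache_init(void)
  {
  	demo_cachep = kmem_cache_create("demo_obj", sizeof(struct demo_obj),
  					0, SLAB_RECLAIM_ACCOUNT,
  					demo_init_once);
  	return demo_cachep ? 0 : -ENOMEM;
  }

  /* per-allocation ("always") init, without zeroing the whole object */
  static struct demo_obj *demo_alloc(void)
  {
  	struct demo_obj *obj = kmem_cache_alloc(demo_cachep, GFP_NOFS);

  	if (obj)
  		atomic_set(&obj->refcount, 1);
  	return obj;
  }

The caveat of this pattern is that objects must be returned to the cache
in their constructed state, which is why the patch checks work->nr_pages
and work->vcnt with DBG_BUGON in init_always().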

Reviewed-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Gao Xiang <gaoxiang25@huawei.com>
---
 drivers/staging/erofs/unzip_vle.c | 34 +++++++++++++++++++++++++++++-----
 1 file changed, 29 insertions(+), 5 deletions(-)

diff --git a/drivers/staging/erofs/unzip_vle.c b/drivers/staging/erofs/unzip_vle.c
index ede3383ac601..4e5843e8ee35 100644
--- a/drivers/staging/erofs/unzip_vle.c
+++ b/drivers/staging/erofs/unzip_vle.c
@@ -43,12 +43,38 @@ static inline int init_unzip_workqueue(void)
 	return z_erofs_workqueue ? 0 : -ENOMEM;
 }
 
+static void init_once(void *ptr)
+{
+	struct z_erofs_vle_workgroup *grp = ptr;
+	struct z_erofs_vle_work *const work =
+		z_erofs_vle_grab_primary_work(grp);
+	unsigned int i;
+
+	mutex_init(&work->lock);
+	work->nr_pages = 0;
+	work->vcnt = 0;
+	for (i = 0; i < Z_EROFS_CLUSTER_MAX_PAGES; ++i)
+		grp->compressed_pages[i] = NULL;
+}
+
+static void init_always(struct z_erofs_vle_workgroup *grp)
+{
+	struct z_erofs_vle_work *const work =
+		z_erofs_vle_grab_primary_work(grp);
+
+	atomic_set(&grp->obj.refcount, 1);
+	grp->flags = 0;
+
+	DBG_BUGON(work->nr_pages);
+	DBG_BUGON(work->vcnt);
+}
+
 int __init z_erofs_init_zip_subsystem(void)
 {
 	z_erofs_workgroup_cachep =
 		kmem_cache_create("erofs_compress",
 				  Z_EROFS_WORKGROUP_SIZE, 0,
-				  SLAB_RECLAIM_ACCOUNT, NULL);
+				  SLAB_RECLAIM_ACCOUNT, init_once);
 
 	if (z_erofs_workgroup_cachep) {
 		if (!init_unzip_workqueue())
@@ -370,10 +396,11 @@ z_erofs_vle_work_register(const struct z_erofs_vle_work_finder *f,
 	BUG_ON(grp);
 
 	/* no available workgroup, let's allocate one */
-	grp = kmem_cache_zalloc(z_erofs_workgroup_cachep, GFP_NOFS);
+	grp = kmem_cache_alloc(z_erofs_workgroup_cachep, GFP_NOFS);
 	if (unlikely(!grp))
 		return ERR_PTR(-ENOMEM);
 
+	init_always(grp);
 	grp->obj.index = f->idx;
 	grp->llen = map->m_llen;
 
@@ -381,7 +408,6 @@ z_erofs_vle_work_register(const struct z_erofs_vle_work_finder *f,
 		(map->m_flags & EROFS_MAP_ZIPPED) ?
 			Z_EROFS_VLE_WORKGRP_FMT_LZ4 :
 			Z_EROFS_VLE_WORKGRP_FMT_PLAIN);
-	atomic_set(&grp->obj.refcount, 1);
 
 	/* new workgrps have been claimed as type 1 */
 	WRITE_ONCE(grp->next, *f->owned_head);
@@ -394,8 +420,6 @@ z_erofs_vle_work_register(const struct z_erofs_vle_work_finder *f,
 	work = z_erofs_vle_grab_primary_work(grp);
 	work->pageofs = f->pageofs;
 
-	mutex_init(&work->lock);
-
 	if (gnew) {
 		int err = erofs_register_workgroup(f->sb, &grp->obj, 0);
 
-- 
2.14.4

Thread overview: 14+ messages
2018-11-20 14:14 [PATCH v3 chao/erofs-dev 01/14] staging: erofs: fix `trace_erofs_readpage' position Gao Xiang
2018-11-20 14:14 ` [PATCH v3 chao/erofs-dev 02/14] staging: erofs: fix race when the managed cache is enabled Gao Xiang
2018-11-20 14:14 ` [PATCH v3 chao/erofs-dev 03/14] staging: erofs: atomic_cond_read_relaxed on ref-locked workgroup Gao Xiang
2018-11-20 14:14 ` [PATCH v3 chao/erofs-dev 04/14] staging: erofs: fix `erofs_workgroup_{try_to_freeze, unfreeze}' Gao Xiang
2018-11-20 14:14 ` [PATCH v3 chao/erofs-dev 05/14] staging: erofs: add a full barrier in erofs_workgroup_unfreeze Gao Xiang
2018-11-20 14:14 ` [PATCH v3 chao/erofs-dev 06/14] staging: erofs: fix the definition of DBG_BUGON Gao Xiang
2018-11-20 14:14 ` Gao Xiang [this message]
2018-11-20 14:14 ` [PATCH v3 chao/erofs-dev 08/14] staging: erofs: locked before registering for all new workgroups Gao Xiang
2018-11-20 14:14 ` [PATCH v3 chao/erofs-dev 09/14] staging: erofs: decompress asynchronously if PG_readahead page at first Gao Xiang
2018-11-20 14:14 ` [PATCH v3 chao/erofs-dev 10/14] staging: erofs: rename strange variable names in z_erofs_vle_frontend Gao Xiang
2018-11-20 14:14 ` [PATCH v3 chao/erofs-dev 11/14] staging: erofs: introduce MNGD_MAPPING helper Gao Xiang
2018-11-20 14:14 ` [PATCH v3 chao/erofs-dev 12/14] staging: erofs: fix compressed pages submission flow Gao Xiang
2018-11-20 14:14 ` [PATCH v3 chao/erofs-dev 13/14] staging: erofs: redefine where `owned_workgrp_t' points Gao Xiang
2018-11-20 14:14 ` [PATCH v3 chao/erofs-dev 14/14] staging: erofs: simplify `z_erofs_vle_submit_all' Gao Xiang