From: Gao Xiang
To: Greg Kroah-Hartman, Chao Yu
Cc: LKML, Miao Xie, Du Wei, Gao Xiang
Subject: [PATCH 3/6] staging: erofs: drop multiref support temporarily
Date: Wed, 19 Sep 2018 13:49:07 +0800
Message-ID: <1537336150-90604-3-git-send-email-gaoxiang25@huawei.com>
In-Reply-To: <1537336150-90604-1-git-send-email-gaoxiang25@huawei.com>
References: <1537336150-90604-1-git-send-email-gaoxiang25@huawei.com>

Multiref support means that a compressed page can have more than one
reference, which is designed for on-disk data deduplication. However,
mkfs does not support this mode at the moment, and the in-kernel
implementation is broken as well. Let's drop multiref support for now;
this patch can be reverted once the feature is fully implemented.
Signed-off-by: Gao Xiang
---
 drivers/staging/erofs/unzip_vle.c | 36 +++++-------------------------------
 drivers/staging/erofs/unzip_vle.h | 12 +-----------
 2 files changed, 6 insertions(+), 42 deletions(-)

diff --git a/drivers/staging/erofs/unzip_vle.c b/drivers/staging/erofs/unzip_vle.c
index e820490..9fe18cd3 100644
--- a/drivers/staging/erofs/unzip_vle.c
+++ b/drivers/staging/erofs/unzip_vle.c
@@ -301,12 +301,9 @@ struct z_erofs_vle_work_finder {
 	grp = container_of(egrp, struct z_erofs_vle_workgroup, obj);
 	*f->grp_ret = grp;
 
-#ifndef CONFIG_EROFS_FS_ZIP_MULTIREF
 	work = z_erofs_vle_grab_work(grp, f->pageofs);
+	/* if multiref is disabled, `primary' is always true */
 	primary = true;
-#else
-	BUG();
-#endif
 
 	DBG_BUGON(work->pageofs != f->pageofs);
 
@@ -368,12 +365,9 @@ struct z_erofs_vle_work_finder {
 	struct z_erofs_vle_workgroup *grp = *f->grp_ret;
 	struct z_erofs_vle_work *work;
 
-#ifndef CONFIG_EROFS_FS_ZIP_MULTIREF
+	/* if multiref is disabled, grp should never be nullptr */
 	BUG_ON(grp != NULL);
-#else
-	if (grp != NULL)
-		goto skip;
-#endif
+
 	/* no available workgroup, let's allocate one */
 	grp = kmem_cache_zalloc(z_erofs_workgroup_cachep, GFP_NOFS);
 	if (unlikely(grp == NULL))
@@ -396,13 +390,7 @@ struct z_erofs_vle_work_finder {
 	*f->hosted = true;
 	gnew = true;
 
-#ifdef CONFIG_EROFS_FS_ZIP_MULTIREF
-skip:
-	/* currently unimplemented */
-	BUG();
-#else
 	work = z_erofs_vle_grab_primary_work(grp);
-#endif
 	work->pageofs = f->pageofs;
 
 	mutex_init(&work->lock);
@@ -797,9 +785,7 @@ static int z_erofs_vle_unzip(struct super_block *sb,
 
 	struct z_erofs_pagevec_ctor ctor;
 	unsigned int nr_pages;
-#ifndef CONFIG_EROFS_FS_ZIP_MULTIREF
 	unsigned int sparsemem_pages = 0;
-#endif
 	struct page *pages_onstack[Z_EROFS_VLE_VMAP_ONSTACK_PAGES];
 	struct page **pages, **compressed_pages, *page;
 	unsigned int i, llen;
@@ -811,11 +797,7 @@ static int z_erofs_vle_unzip(struct super_block *sb,
 	int err;
 
 	might_sleep();
-#ifndef CONFIG_EROFS_FS_ZIP_MULTIREF
 	work = z_erofs_vle_grab_primary_work(grp);
-#else
-	BUG();
-#endif
 	BUG_ON(!READ_ONCE(work->nr_pages));
 
 	mutex_lock(&work->lock);
@@ -866,13 +848,11 @@ static int z_erofs_vle_unzip(struct super_block *sb,
 		pagenr = z_erofs_onlinepage_index(page);
 
 		BUG_ON(pagenr >= nr_pages);
-
-#ifndef CONFIG_EROFS_FS_ZIP_MULTIREF
 		BUG_ON(pages[pagenr] != NULL);
-		++sparsemem_pages;
-#endif
+
 		pages[pagenr] = page;
 	}
+	sparsemem_pages = i;
 
 	z_erofs_pagevec_ctor_exit(&ctor, true);
 
@@ -902,10 +882,8 @@ static int z_erofs_vle_unzip(struct super_block *sb,
 		pagenr = z_erofs_onlinepage_index(page);
 
 		BUG_ON(pagenr >= nr_pages);
-#ifndef CONFIG_EROFS_FS_ZIP_MULTIREF
 		BUG_ON(pages[pagenr] != NULL);
 		++sparsemem_pages;
-#endif
 		pages[pagenr] = page;
 
 		overlapped = true;
@@ -931,12 +909,10 @@ static int z_erofs_vle_unzip(struct super_block *sb,
 	if (err != -ENOTSUPP)
 		goto out_percpu;
 
-#ifndef CONFIG_EROFS_FS_ZIP_MULTIREF
 	if (sparsemem_pages >= nr_pages) {
 		BUG_ON(sparsemem_pages > nr_pages);
 		goto skip_allocpage;
 	}
-#endif
 
 	for (i = 0; i < nr_pages; ++i) {
 		if (pages[i] != NULL)
@@ -945,9 +921,7 @@ static int z_erofs_vle_unzip(struct super_block *sb,
 		pages[i] = __stagingpage_alloc(page_pool, GFP_NOFS);
 	}
 
-#ifndef CONFIG_EROFS_FS_ZIP_MULTIREF
 skip_allocpage:
-#endif
 	vout = erofs_vmap(pages, nr_pages);
 
 	err = z_erofs_vle_unzip_vmap(compressed_pages,
diff --git a/drivers/staging/erofs/unzip_vle.h b/drivers/staging/erofs/unzip_vle.h
index 3939985..3316bc3 100644
--- a/drivers/staging/erofs/unzip_vle.h
+++ b/drivers/staging/erofs/unzip_vle.h
@@ -47,13 +47,6 @@ static inline bool z_erofs_gather_if_stagingpage(struct list_head *page_pool,
 #define Z_EROFS_VLE_INLINE_PAGEVECS	3
 
 struct z_erofs_vle_work {
-	/* struct z_erofs_vle_work *left, *right; */
-
-#ifdef CONFIG_EROFS_FS_ZIP_MULTIREF
-	struct list_head list;
-
-	atomic_t refcount;
-#endif
 	struct mutex lock;
 
 	/* I: decompression offset in page */
@@ -107,10 +100,8 @@ static inline void z_erofs_vle_set_workgrp_fmt(
 	grp->flags = fmt | (grp->flags & ~Z_EROFS_VLE_WORKGRP_FMT_MASK);
 }
 
-#ifdef CONFIG_EROFS_FS_ZIP_MULTIREF
-#error multiref decompression is unimplemented yet
-#else
+/* definitions if multiref is disabled */
 #define z_erofs_vle_grab_primary_work(grp)	(&(grp)->work)
 #define z_erofs_vle_grab_work(grp, pageofs)	(&(grp)->work)
 #define z_erofs_vle_work_workgroup(wrk, primary)	\
@@ -118,7 +109,6 @@ static inline void z_erofs_vle_set_workgrp_fmt(
 	((primary) ? container_of(wrk, struct z_erofs_vle_workgroup, work) : \
 	({ BUG(); (void *)NULL; }))
-#endif
 
 #define Z_EROFS_WORKGROUP_SIZE	sizeof(struct z_erofs_vle_workgroup)
 
-- 
1.9.1
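
A note on what the single-work assumption above boils down to: with multiref
disabled, every z_erofs_vle_workgroup embeds exactly one (primary)
z_erofs_vle_work, so grabbing the work and mapping it back to its owning
workgroup is a plain container_of() round trip, which is what the macros kept
in unzip_vle.h rely on. The userspace sketch below is illustration only; the
struct layouts and helper names are simplified stand-ins, not the real kernel
definitions.

/*
 * Illustrative sketch (not part of the patch): one embedded work per
 * workgroup, converted back and forth with container_of().
 */
#include <stdio.h>
#include <stddef.h>

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct vle_work {			/* simplified stand-in for z_erofs_vle_work */
	unsigned short pageofs;
};

struct vle_workgroup {			/* simplified stand-in for z_erofs_vle_workgroup */
	unsigned long index;
	struct vle_work work;		/* the single (primary) work */
};

/* mirrors the idea of z_erofs_vle_grab_primary_work(): the only work is the embedded one */
static struct vle_work *grab_primary_work(struct vle_workgroup *grp)
{
	return &grp->work;
}

/* mirrors the idea of z_erofs_vle_work_workgroup() for the primary (and only) work */
static struct vle_workgroup *work_workgroup(struct vle_work *work)
{
	return container_of(work, struct vle_workgroup, work);
}

int main(void)
{
	struct vle_workgroup grp = { .index = 42 };
	struct vle_work *work = grab_primary_work(&grp);

	/* round trip: the work maps back to its owning workgroup */
	printf("same workgroup: %d\n", work_workgroup(work) == &grp);
	return 0;
}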