From: Gao Xiang <gaoxiang25@huawei.com>
To: Chao Yu <yuchao0@huawei.com>, <linux-erofs@lists.ozlabs.org>
Cc: Miao Xie <miaoxie@huawei.com>, LKML <linux-kernel@vger.kernel.org>
Subject: [PATCH for-next 3/5] erofs: get rid of __stagingpage_alloc helper
Date: Tue, 8 Oct 2019 20:56:14 +0800 [thread overview]
Message-ID: <20191008125616.183715-3-gaoxiang25@huawei.com> (raw)
In-Reply-To: <20191008125616.183715-1-gaoxiang25@huawei.com>
After the previous cleanups, __stagingpage_alloc() is a trivial wrapper around
erofs_allocpage() that only tags the new page as a staging page. Open-code it
at its two call sites instead; the result is cleaner.
Signed-off-by: Gao Xiang <gaoxiang25@huawei.com>
---
fs/erofs/zdata.c | 19 +++++--------------
1 file changed, 5 insertions(+), 14 deletions(-)
diff --git a/fs/erofs/zdata.c b/fs/erofs/zdata.c
index 93f8bc1a64f6..e2a89aa921b1 100644
--- a/fs/erofs/zdata.c
+++ b/fs/erofs/zdata.c
@@ -546,15 +546,6 @@ static bool z_erofs_collector_end(struct z_erofs_collector *clt)
return true;
}
-static inline struct page *__stagingpage_alloc(struct list_head *pagepool,
- gfp_t gfp)
-{
- struct page *page = erofs_allocpage(pagepool, gfp, true);
-
- page->mapping = Z_EROFS_MAPPING_STAGING;
- return page;
-}
-
static bool should_alloc_managed_pages(struct z_erofs_decompress_frontend *fe,
unsigned int cachestrategy,
erofs_off_t la)
@@ -661,8 +652,9 @@ static int z_erofs_do_read_page(struct z_erofs_decompress_frontend *fe,
/* should allocate an additional staging page for pagevec */
if (err == -EAGAIN) {
struct page *const newpage =
- __stagingpage_alloc(pagepool, GFP_NOFS);
+ erofs_allocpage(pagepool, GFP_NOFS, true);
+ newpage->mapping = Z_EROFS_MAPPING_STAGING;
err = z_erofs_attach_page(clt, newpage,
Z_EROFS_PAGE_TYPE_EXCLUSIVE);
if (!err)
@@ -1079,15 +1071,14 @@ static struct page *pickup_page_for_submission(struct z_erofs_pcluster *pcl,
unlock_page(page);
put_page(page);
out_allocpage:
- page = __stagingpage_alloc(pagepool, gfp);
+ page = erofs_allocpage(pagepool, gfp, true);
if (oldpage != cmpxchg(&pcl->compressed_pages[nr], oldpage, page)) {
list_add(&page->lru, pagepool);
cpu_relax();
goto repeat;
}
- if (!tocache)
- goto out;
- if (add_to_page_cache_lru(page, mc, index + nr, gfp)) {
+
+ if (!tocache || add_to_page_cache_lru(page, mc, index + nr, gfp)) {
page->mapping = Z_EROFS_MAPPING_STAGING;
goto out;
}
--
2.17.1
Thread overview: 10+ messages
2019-10-08 12:56 [PATCH for-next 1/5] erofs: clean up collection handling routines Gao Xiang
2019-10-08 12:56 ` [PATCH for-next 2/5] erofs: remove dead code since managed cache is now built-in Gao Xiang
2019-10-10 7:40 ` Chao Yu
2019-10-08 12:56 ` Gao Xiang [this message]
2019-10-10 7:40 ` [PATCH for-next 3/5] erofs: get rid of __stagingpage_alloc helper Chao Yu
2019-10-08 12:56 ` [PATCH for-next 4/5] erofs: clean up decompress queue stuffs Gao Xiang
2019-10-10 7:41 ` Chao Yu
2019-10-08 12:56 ` [PATCH for-next 5/5] erofs: set iowait for sync decompression Gao Xiang
2019-10-10 7:41 ` Chao Yu
2019-10-10 7:40 ` [PATCH for-next 1/5] erofs: clean up collection handling routines Chao Yu