From: Vasily Averin <vvs@virtuozzo.com>
To: Miklos Szeredi <miklos@szeredi.hu>
Cc: linux-fsdevel@vger.kernel.org, Maxim Patlasov <maximvp@gmail.com>,
Kirill Tkhai <ktkhai@virtuozzo.com>,
LKML <linux-kernel@vger.kernel.org>
Subject: [PATCH] fuse_writepages_fill() optimization to avoid WARN_ON in tree_insert
Date: Thu, 25 Jun 2020 12:02:53 +0300
Message-ID: <d6e8ef46-c311-b993-909c-4ae2823e2237@virtuozzo.com>
In-Reply-To: <2733b41a-b4c6-be94-0118-a1a8d6f26eec@virtuozzo.com>
In the current implementation fuse_writepages_fill() tries to share code:
for a new wpa it calls tree_insert() with num_pages == 0, then falls
through to the common code, which uses the unmodified num_pages and
increments it only at the very end.
However, this triggers WARN_ON(!wpa->ia.ap.num_pages) in tree_insert():
WARNING: CPU: 1 PID: 17211 at fs/fuse/file.c:1728 tree_insert+0xab/0xc0 [fuse]
RIP: 0010:tree_insert+0xab/0xc0 [fuse]
Call Trace:
fuse_writepages_fill+0x5da/0x6a0 [fuse]
write_cache_pages+0x171/0x470
fuse_writepages+0x8a/0x100 [fuse]
do_writepages+0x43/0xe0
This patch reworks fuse_writepages_fill() to call tree_insert() with
num_pages == 1, avoiding both the subsequent increment and an extra
spin_lock(&fi->lock) for a newly added wpa.
Fixes: 6b2fb79963fb ("fuse: optimize writepages search")
Reported-by: kernel test robot <rong.a.chen@intel.com>
Signed-off-by: Vasily Averin <vvs@virtuozzo.com>
---
fs/fuse/file.c | 56 +++++++++++++++++++++++++++++---------------------------
1 file changed, 29 insertions(+), 27 deletions(-)
diff --git a/fs/fuse/file.c b/fs/fuse/file.c
index e573b0c..cf267bd 100644
--- a/fs/fuse/file.c
+++ b/fs/fuse/file.c
@@ -1966,10 +1966,9 @@ static bool fuse_writepage_in_flight(struct fuse_writepage_args *new_wpa,
struct fuse_writepage_args *old_wpa;
struct fuse_args_pages *new_ap = &new_wpa->ia.ap;
- WARN_ON(new_ap->num_pages != 0);
+ WARN_ON(new_ap->num_pages != 1);
spin_lock(&fi->lock);
- rb_erase(&new_wpa->writepages_entry, &fi->writepages);
old_wpa = fuse_find_writeback(fi, page->index, page->index);
if (!old_wpa) {
tree_insert(&fi->writepages, new_wpa);
@@ -1977,7 +1976,6 @@ static bool fuse_writepage_in_flight(struct fuse_writepage_args *new_wpa,
return false;
}
- new_ap->num_pages = 1;
for (tmp = old_wpa->next; tmp; tmp = tmp->next) {
pgoff_t curr_index;
@@ -2020,7 +2018,7 @@ static int fuse_writepages_fill(struct page *page,
struct fuse_conn *fc = get_fuse_conn(inode);
struct page *tmp_page;
bool is_writeback;
- int err;
+ int index, err;
if (!data->ff) {
err = -EIO;
@@ -2083,44 +2081,48 @@ static int fuse_writepages_fill(struct page *page,
wpa->next = NULL;
ap->args.in_pages = true;
ap->args.end = fuse_writepage_end;
- ap->num_pages = 0;
+ ap->num_pages = 1;
wpa->inode = inode;
-
- spin_lock(&fi->lock);
- tree_insert(&fi->writepages, wpa);
- spin_unlock(&fi->lock);
-
+ index = 0;
data->wpa = wpa;
+ } else {
+ index = ap->num_pages;
}
set_page_writeback(page);
copy_highpage(tmp_page, page);
- ap->pages[ap->num_pages] = tmp_page;
- ap->descs[ap->num_pages].offset = 0;
- ap->descs[ap->num_pages].length = PAGE_SIZE;
+ ap->pages[index] = tmp_page;
+ ap->descs[index].offset = 0;
+ ap->descs[index].length = PAGE_SIZE;
inc_wb_stat(&inode_to_bdi(inode)->wb, WB_WRITEBACK);
inc_node_page_state(tmp_page, NR_WRITEBACK_TEMP);
err = 0;
- if (is_writeback && fuse_writepage_in_flight(wpa, page)) {
- end_page_writeback(page);
- data->wpa = NULL;
- goto out_unlock;
+ if (is_writeback) {
+ if (fuse_writepage_in_flight(wpa, page)) {
+ end_page_writeback(page);
+ data->wpa = NULL;
+ goto out_unlock;
+ }
+ } else if (!index) {
+ spin_lock(&fi->lock);
+ tree_insert(&fi->writepages, wpa);
+ spin_unlock(&fi->lock);
}
- data->orig_pages[ap->num_pages] = page;
-
- /*
- * Protected by fi->lock against concurrent access by
- * fuse_page_is_writeback().
- */
- spin_lock(&fi->lock);
- ap->num_pages++;
- spin_unlock(&fi->lock);
+ data->orig_pages[index] = page;
+ if (index) {
+ /*
+ * Protected by fi->lock against concurrent access by
+ * fuse_page_is_writeback().
+ */
+ spin_lock(&fi->lock);
+ ap->num_pages++;
+ spin_unlock(&fi->lock);
+ }
out_unlock:
unlock_page(page);
-
return err;
}
--
1.8.3.1