From: "Lu, Davina" <davinalu@amazon.com>
To: "linux-ext4@vger.kernel.org" <linux-ext4@vger.kernel.org>,
	"tytso@mit.edu" <tytso@mit.edu>,
	"adilger.kernel@dilger.ca" <adilger.kernel@dilger.ca>,
	"regressions@lists.linux.dev" <regressions@lists.linux.dev>,
	"stable@vger.kernel.org" <stable@vger.kernel.org>,
	"Mohamed Abuelfotoh, Hazem" <abuehaze@amazon.com>,
	hazem ahmed mohamed <hazem.ahmed.abuelfotoh@gmail.com>,
	"Ritesh Harjani (IBM)" <ritesh.list@gmail.com>
Cc: "Kiselev, Oleg" <okiselev@amazon.com>,
	"Liu, Frank" <franklmz@amazon.com>
Subject: significant drop in fio IOPS performance on v5.10
Date: Wed, 28 Sep 2022 06:07:34 +0000	[thread overview]
Message-ID: <5c819c9d6190452f9b10bb78a72cb47f@amazon.com> (raw)
In-Reply-To: <1cdc68e6a98d4e93a95be5d887bcc75d@amazon.com>


Hello,

I was profiling the 5.10 kernel and comparing it to 4.14. On a system with 64 virtual CPUs and 256 GiB of RAM, I am observing a significant drop in IO performance. Running fio via the attached script ("sudo ftest_write.sh <dev_name>") with the parameters below, I saw the fio IOPS result drop from 22K to less than 1K.
The script simply mounts a 16 GiB ext4 volume (provisioned with max IOPS 64000K) with the mount options "-o noatime,nodiratime,data=ordered", then runs fio with 2048 write threads and a file size of 28800000, using { --name=16kb_rand_write_only_2048_jobs --directory=/rdsdbdata1 --rw=randwrite --ioengine=sync --buffered=1 --bs=16k --max-jobs=2048 --numjobs=2048 --runtime=60 --time_based --thread --filesize=28800000 --fsync=1 --group_reporting }.

My analysis is that the degradation is introduced by commit 244adf6426ee31a83f397b700d964cff12a247d3 and that the issue is contention on rsv_conversion_wq. The simplest option is to increase the journal size, but that introduces more operational complexity. Another option is the following change, attached as "allow more ext4-rsv-conversion workqueue.patch":

From 27e1b0e14275a281b3529f6a60c7b23a81356751 Mon Sep 17 00:00:00 2001
From: davinalu <davinalu@amazon.com>
Date: Fri, 23 Sep 2022 00:43:53 +0000
Subject: [PATCH] allow more ext4-rsv-conversion workqueue to speed up fio writing
---
 fs/ext4/super.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/ext4/super.c b/fs/ext4/super.c
index a0af833f7da7..6b34298cdc3b 100644
--- a/fs/ext4/super.c
+++ b/fs/ext4/super.c
@@ -4963,7 +4963,7 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent)
         * concurrency isn't really necessary.  Limit it to 1.
         */
        EXT4_SB(sb)->rsv_conversion_wq =
-               alloc_workqueue("ext4-rsv-conversion", WQ_MEM_RECLAIM | WQ_UNBOUND, 1);
+               alloc_workqueue("ext4-rsv-conversion", WQ_MEM_RECLAIM | WQ_UNBOUND | __WQ_ORDERED, 0);
        if (!EXT4_SB(sb)->rsv_conversion_wq) {
                printk(KERN_ERR "EXT4-fs: failed to create workqueue\n");
                ret = -ENOMEM;

My thought is: based on alloc_workqueue(), if max_active is 1 and WQ_UNBOUND is set, the workqueue is treated as ordered (__WQ_ORDERED), so I carried that flag over explicitly when raising max_active.
I am not sure whether we actually need __WQ_ORDERED or not. Without __WQ_ORDERED it also appears to work on my testbed, and I kept it since there is not much difference in fio throughput on my testbed with or without __WQ_ORDERED.
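
To make that reasoning concrete, here is a minimal sketch contrasting the two calls. make_rsv_wq() is a hypothetical helper for illustration only; the comments reflect my reading of include/linux/workqueue.h and kernel/workqueue.c around v5.10, so please correct me if I have the semantics wrong:

#include <linux/workqueue.h>

static struct workqueue_struct *make_rsv_wq(bool patched)
{
	if (!patched)
		/*
		 * Mainline today: unbound with max_active == 1.  As I read
		 * alloc_workqueue(), an unbound queue with max_active == 1 is
		 * implicitly promoted to ordered (__WQ_ORDERED), so only one
		 * conversion work item runs at a time.
		 */
		return alloc_workqueue("ext4-rsv-conversion",
				       WQ_MEM_RECLAIM | WQ_UNBOUND, 1);

	/*
	 * Attached patch: max_active == 0 requests the default in-flight
	 * limit instead of 1, which is what allows more concurrency here
	 * (and matches the fio improvement I measured); __WQ_ORDERED is
	 * carried over explicitly, though I am not sure it is still needed.
	 */
	return alloc_workqueue("ext4-rsv-conversion",
			       WQ_MEM_RECLAIM | WQ_UNBOUND | __WQ_ORDERED, 0);
}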

From my understanding and observation: with dioread_nolock and delalloc both enabled, bio_endio() and ext4_writepages() queue work on this workqueue, which ends up in ext4_do_flush_completed_IO(). The workqueue appears to process the conversions one by one: in the ext4 extents code, the io_end->list_vec list only holds one io_end_vec at a time. So if the block layer is fast and we have only one thread doing the ext4 flush, that single worker becomes the bottleneck. As I understand it, the "ext4-rsv-conversion" workqueue is mainly there to convert EXT4_IO_END_UNWRITTEN extent blocks (which only exist when the dioread_nolock and delalloc options are set) and to update the extent status. Am I correct?
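
To illustrate the serialization I am describing, here is a simplified paraphrase of the queuing side of fs/ext4/page-io.c as I read it in a v5.10 tree. The _sketch name is mine; the field and function names are what I see in the source, but this is from memory, not verbatim, so please correct me if I am misreading it:

#include "ext4.h"	/* ext4_io_end_t, EXT4_I(), EXT4_SB() */

/*
 * Simplified sketch: completed unwritten-extent io_ends are batched on a
 * per-inode list, and one work item per inode is queued on
 * sbi->rsv_conversion_wq.  The conversion itself then happens in
 * ext4_end_io_rsv_work() -> ext4_do_flush_completed_IO() on that workqueue.
 */
static void ext4_add_complete_io_sketch(ext4_io_end_t *io_end)
{
	struct ext4_inode_info *ei = EXT4_I(io_end->inode);
	struct ext4_sb_info *sbi = EXT4_SB(io_end->inode->i_sb);
	unsigned long flags;

	/* Only unwritten-extent conversions should land here. */
	WARN_ON(!(io_end->flag & EXT4_IO_END_UNWRITTEN));

	spin_lock_irqsave(&ei->i_completed_io_lock, flags);
	if (list_empty(&ei->i_rsv_conversion_list))
		queue_work(sbi->rsv_conversion_wq, &ei->i_rsv_conversion_work);
	list_add_tail(&io_end->list, &ei->i_rsv_conversion_list);
	spin_unlock_irqrestore(&ei->i_completed_io_lock, flags);
}

Since rsv_conversion_wq is created with max_active == 1, all of these queued conversions drain through a single worker no matter how many CPUs are completing bios, which is where I believe the bottleneck is.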

This works on my test system and passes xfstests, but will it cause any corruption in ext4 extent block updates? I am not sure about the journal transaction updates either.
Can you tell me what I would break if this change is made?

Thanks
Davina



Thread overview: 6+ messages
     [not found] <357ace228adf4e859df5e9f3f4f18b49@amazon.com>
     [not found] ` <1cdc68e6a98d4e93a95be5d887bcc75d@amazon.com>
2022-09-28  6:07   ` Lu, Davina [this message]
2022-09-28 22:36     ` significant drop in fio IOPS performance on v5.10 Theodore Ts'o
2022-10-05  7:24       ` Lu, Davina
2022-11-09  1:02       ` Lu, Davina
2022-09-29  7:03     ` significant drop in fio IOPS performance on v5.10 #forregzbot Thorsten Leemhuis
2022-09-29 11:36       ` Theodore Ts'o
