From: David Sterba <dsterba@suse.cz>
To: Anand Jain <anand.jain@oracle.com>
Cc: linux-btrfs@vger.kernel.org
Subject: Re: [PATCH v2] btrfs: cleanup btrfs_async_submit_limit to return the final limit value
Date: Mon, 6 Nov 2017 16:34:32 +0100
Message-ID: <20171106153432.GK28789@twin.jikos.cz>
In-Reply-To: <20171102060353.7103-1-anand.jain@oracle.com>
On Thu, Nov 02, 2017 at 02:03:53PM +0800, Anand Jain wrote:
> We wait for IO progress when the number of pending async submissions
> exceeds 2/3 of the limit obtained from btrfs_async_submit_limit();
> this makes the writing process wait and lets the async submission
> threads make progress.
>
> In general the device/transport queue depth is 256, and
> btrfs_async_submit_limit() returns 256 per device, which was
> originally introduced by [1]. But 256 at the device level covers all
> types of IO (read/write, sync/async), so it was possible for async
> writes to occupy the entire 256; a later patch [2] therefore took
> only 2/3 of 256, which seemed to work well.
>
> [1]
> cb03c743c648
> Btrfs: Change the congestion functions to meter the number of async submits as well
>
> [2]
> 4854ddd0ed0a
> Btrfs: Wait for kernel threads to make progress during async submission
>
> This is a cleanup patch with no functional changes. Since we only
> ever use 2/3 of the limit (256), btrfs_async_submit_limit() can
> return 170 directly.
>
> Signed-off-by: Anand Jain <anand.jain@oracle.com>
> ---
> IMO:
> 1. If the pdflush issue is fixed, we should go back to the bdi
> congestion method, as the block layer is better placed to tell
> accurately when the device is congested. A device queue depth of
> 256 is very generic.
> 2. For RAID1 across devices of different speeds (e.g. an SSD and an
> iSCSI LUN), I am not sure whether this approach throttles FS-layer
> IO to the speed of the slowest device; I wonder how to test that
> reliably.
>
> fs/btrfs/disk-io.c | 6 ++++--
> fs/btrfs/volumes.c | 1 -
> 2 files changed, 4 insertions(+), 3 deletions(-)
>
> diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
> index dfdab849037b..12702e292007 100644
> --- a/fs/btrfs/disk-io.c
> +++ b/fs/btrfs/disk-io.c
> @@ -861,7 +861,10 @@ unsigned long btrfs_async_submit_limit(struct btrfs_fs_info *info)
> unsigned long limit = min_t(unsigned long,
> info->thread_pool_size,
> info->fs_devices->open_devices);
> - return 256 * limit;
> + /*
> + * limit:170 is computed as 2/3 * 256.
> + */
> + return 170 * limit;
Please keep it open-coded: the compiler will calculate the constant,
but for code clarity it's better written as 2 / 3, which is
self-documenting, so you can drop the comment.
Thread overview: 8+ messages
2017-10-31 12:59 [PATCH] btrfs: cleanup btrfs_async_submit_limit to return the final limit value Anand Jain
2017-10-31 14:18 ` Nikolay Borisov
2017-11-02 5:55 ` Anand Jain
2017-11-02 6:03 ` [PATCH v2] " Anand Jain
2017-11-06 15:34 ` David Sterba [this message]
2017-11-06 15:38 ` David Sterba
2017-11-07 2:22 ` Anand Jain
2017-11-07 2:17 ` [PATCH v3] " Anand Jain