From: Christoph Hellwig <hch@infradead.org>
To: Brian Foster <bfoster@redhat.com>
Cc: Christoph Hellwig <hch@infradead.org>,
Dave Chinner <david@fromorbit.com>,
Ritesh Harjani <riteshh@linux.ibm.com>,
Anju T Sudhakar <anju@linux.vnet.ibm.com>,
darrick.wong@oracle.com, linux-xfs@vger.kernel.org,
linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
willy@infradead.org
Subject: Re: [PATCH] iomap: Fix the write_count in iomap_add_to_ioend().
Date: Mon, 24 Aug 2020 16:04:17 +0100
Message-ID: <20200824150417.GA12258@infradead.org>
In-Reply-To: <20200824142823.GA295033@bfoster>

On Mon, Aug 24, 2020 at 10:28:23AM -0400, Brian Foster wrote:
> Do I understand the current code (__bio_try_merge_page() ->
> page_is_mergeable()) correctly in that we're checking for physical page
> contiguity and not necessarily requiring a new bio_vec per physical
> page?

Yes.

> With regard to Dave's earlier point around seeing excessively sized bio
> chains.. If I set up a large memory box with high dirty mem ratios and
> do contiguous buffered overwrites over a 32GB range followed by fsync, I
> can see upwards of 1GB per bio and thus chains on the order of 32+ bios
> for the entire write. If I play games with how the buffered overwrite is
> submitted (i.e., in reverse) however, then I can occasionally reproduce
> a ~32GB chain of ~32k bios, which I think is what leads to problems in
> I/O completion on some systems. Granted, I don't reproduce soft lockup
> issues on my system with that behavior, so perhaps there's more to that
> particular issue.
>
> Regardless, it seems reasonable to me to at least have a conservative
> limit on the length of an ioend bio chain. Would anybody object to
> iomap_ioend growing a chain counter and perhaps forcing into a new ioend
> if we chain something like more than 1k bios at once?

So what exactly is the problem with processing one long chain in the
workqueue vs multiple small chains? Maybe we need a cond_resched()
here and there, but I don't see how we'd substantially change behavior.