From: Brian Foster <bfoster@redhat.com>
To: Christoph Hellwig <hch@infradead.org>
Cc: Dave Chinner <david@fromorbit.com>,
	Ritesh Harjani <riteshh@linux.ibm.com>,
	Anju T Sudhakar <anju@linux.vnet.ibm.com>,
	darrick.wong@oracle.com, linux-xfs@vger.kernel.org,
	linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
	willy@infradead.org
Subject: Re: [PATCH] iomap: Fix the write_count in iomap_add_to_ioend().
Date: Mon, 24 Aug 2020 10:28:23 -0400
Message-ID: <20200824142823.GA295033@bfoster>
In-Reply-To: <20200822131312.GA17997@infradead.org>

On Sat, Aug 22, 2020 at 02:13:12PM +0100, Christoph Hellwig wrote:
> On Sat, Aug 22, 2020 at 07:53:58AM +1000, Dave Chinner wrote:
> > but iomap only allows BIO_MAX_PAGES when creating the bio. And:
> > 
> > #define BIO_MAX_PAGES 256
> > 
> > So even on a 64k page machine, we should not be building a bio with
> > more than 16MB of data in it. So how are we getting 4GB of data into
> > it?
> 
> BIO_MAX_PAGES is the number of bio_vecs in the bio; it has no
> direct implication on the size, as each entry can fit up to UINT_MAX
> bytes.
> 

Do I understand the current code (__bio_try_merge_page() ->
page_is_mergeable()) correctly, in that we check for physical page
contiguity and don't necessarily require a new bio_vec per physical
page?
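
FWIW, the way I read that path is something like the following
(paraphrased and trimmed from block/bio.c circa v5.8, with the Xen and
cloned-bio special cases dropped, so treat it as a simplified sketch
rather than the verbatim code):

/*
 * Simplified paraphrase of page_is_mergeable() and
 * __bio_try_merge_page() from block/bio.c.
 */
static bool page_is_mergeable(const struct bio_vec *bv, struct page *page,
		unsigned int len, unsigned int off, bool *same_page)
{
	size_t bv_end = bv->bv_offset + bv->bv_len;
	phys_addr_t vec_end_addr = page_to_phys(bv->bv_page) + bv_end - 1;
	phys_addr_t page_addr = page_to_phys(page);

	/* the new data must be physically contiguous with the tail vec */
	if (vec_end_addr + 1 != page_addr + off)
		return false;

	*same_page = ((vec_end_addr & PAGE_MASK) == page_addr);
	if (*same_page)
		return true;
	/* struct pages must be contiguous too for a cross-page merge */
	return (bv->bv_page + bv_end / PAGE_SIZE) == (page + off / PAGE_SIZE);
}

bool __bio_try_merge_page(struct bio *bio, struct page *page,
		unsigned int len, unsigned int off, bool *same_page)
{
	if (bio->bi_vcnt > 0) {
		struct bio_vec *bv = &bio->bi_io_vec[bio->bi_vcnt - 1];

		if (page_is_mergeable(bv, page, len, off, same_page)) {
			/* bi_size is a u32, so the whole bio caps at UINT_MAX */
			if (bio->bi_iter.bi_size > UINT_MAX - len)
				return false;
			/* grow the tail vec; no new bio_vec is consumed */
			bv->bv_len += len;
			bio->bi_iter.bi_size += len;
			return true;
		}
	}
	return false;
}

IOW, a physically contiguous run of dirty pages just keeps growing
bv_len on the tail bio_vec, which is how a 256-vec bio can approach
UINT_MAX bytes.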

With regard to Dave's earlier point about seeing excessively sized bio
chains: if I set up a large memory box with high dirty mem ratios and
do contiguous buffered overwrites over a 32GB range followed by fsync, I
can see upwards of 1GB per bio and thus chains on the order of 32+ bios
for the entire write. If I play games with how the buffered overwrite is
submitted (i.e., in reverse), however, then I can occasionally reproduce
a ~32GB chain of ~32k bios (presumably because without physical
contiguity each bio caps out at BIO_MAX_PAGES pages, i.e. ~1MB with 4k
pages), which I think is what leads to problems in I/O completion on
some systems. Granted, I don't reproduce soft lockup issues on my system
with that behavior, so perhaps there's more to that particular issue.

Regardless, it seems reasonable to me to at least have a conservative
limit on the length of an ioend bio chain. Would anybody object to
iomap_ioend growing a chain counter and perhaps forcing a switch to a
new ioend if we'd otherwise chain something like more than 1k bios at
once?
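
For reference, a completely untested sketch of what I mean (the field
name, cap value, and placement are invented for illustration, not a
real patch):

/* arbitrary cap on bios per ioend chain; the right value needs data */
#define IOMAP_MAX_IOEND_BIOS	1024

struct iomap_ioend {
	/* ... existing fields ... */
	unsigned int		io_bio_count;	/* bios chained to this ioend */
};

/*
 * In iomap_add_to_ioend(), bump the counter wherever we chain a new
 * bio via iomap_chain_bio():
 *
 *	wpc->ioend->io_bio = iomap_chain_bio(wpc->ioend->io_bio);
 *	wpc->ioend->io_bio_count++;
 *
 * ... and have iomap_can_add_to_ioend() force a new ioend once the
 * chain gets too long:
 *
 *	if (wpc->ioend->io_bio_count >= IOMAP_MAX_IOEND_BIOS)
 *		return false;
 */

That would bound the amount of work I/O completion has to do per ioend
without changing merging behavior in the common case.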

Brian


Thread overview: 27+ messages
2020-08-19 10:28 [PATCH] iomap: Fix the write_count in iomap_add_to_ioend() Anju T Sudhakar
2020-08-20 23:11 ` Dave Chinner
2020-08-21  4:45   ` Ritesh Harjani
2020-08-21  6:00     ` Christoph Hellwig
2020-08-21  9:09       ` Ritesh Harjani
2020-08-21 21:53     ` Dave Chinner
2020-08-22 13:13       ` Christoph Hellwig
2020-08-24 14:28         ` Brian Foster [this message]
2020-08-24 15:04           ` Christoph Hellwig
2020-08-24 15:48             ` Brian Foster
2020-08-25  0:42               ` Dave Chinner
2020-08-25 14:49                 ` Brian Foster
2020-08-31  4:01                   ` Ming Lei
2020-08-31 14:35                     ` Brian Foster
2020-09-16  0:12                   ` Darrick J. Wong
2020-09-16  8:45                     ` Christoph Hellwig
2020-09-16 13:07                       ` Brian Foster
2020-09-17  8:04                         ` Christoph Hellwig
2020-09-17 10:42                           ` Brian Foster
2020-09-17 14:48                             ` Christoph Hellwig
2020-09-17 21:33                               ` Darrick J. Wong
2020-09-17 23:13                           ` Ming Lei
2020-08-21  6:01   ` Christoph Hellwig
2020-08-21  6:07 ` Christoph Hellwig
2020-08-21  8:53   ` Ritesh Harjani
2020-08-21 14:49   ` Jens Axboe
2020-08-21 13:31 ` Matthew Wilcox
