From: Matthew Wilcox <willy@infradead.org>
To: Brian Foster <bfoster@redhat.com>
Cc: "Darrick J. Wong" <djwong@kernel.org>,
	"Darrick J. Wong" <darrick.wong@oracle.com>,
	linux-fsdevel@vger.kernel.org, linux-xfs@vger.kernel.org
Subject: Re: [PATCH v2 2/2] xfs: kick extra large ioends to completion workqueue
Date: Fri, 7 May 2021 15:40:39 +0100	[thread overview]
Message-ID: <YJVRZ1Bg1gan2BrW@casper.infradead.org>
In-Reply-To: <YJVJZzld5ucxnlAH@bfoster>

On Fri, May 07, 2021 at 10:06:31AM -0400, Brian Foster wrote:
> > <nod> So I guess I'm saying that my resistance to /this/ part of the
> > changes is melting away.  For a 2GB+ write IO, I guess the extra overhead
> > of poking a workqueue can be amortized over the sheer number of pages.
> 
> I think the main question is what a suitable size threshold is for
> kicking an ioend over to the workqueue. Looking back, I think this
> patch just picked 256k arbitrarily to propose the idea. ISTM there
> could be a potentially large window between the point where I/O
> latency starts to dominate (over the extra context switch for wq
> processing) and the point where the softlockup warning will eventually
> trigger due to having too many pages. I think that means we could
> probably use a more conservative value; I'm just not sure what that
> value should be (10MB, 100MB, 1GB?). If you have a reproducer, it
> might be interesting to experiment with that.
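> 
> As a rough sketch of why a huge ioend is a softlockup risk (the struct
> and iterator below are hypothetical, not the actual iomap code):
> 
> 	/*
> 	 * Completing an ioend means ending writeback on every page it
> 	 * covers.  For a multi-GB ioend this loop can run long enough in
> 	 * a single scheduling slice to trip the softlockup detector,
> 	 * which is why large ioends want to be completed from a
> 	 * workqueue, or at least cond_resched() along the way.
> 	 */
> 	static void example_ioend_complete(struct example_ioend *ioend)
> 	{
> 		struct page *page;
> 
> 		for_each_example_ioend_page(ioend, page) {	/* hypothetical iterator */
> 			end_page_writeback(page);
> 			cond_resched();		/* only safe in process context */
> 		}
> 	}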

To my mind, there are four main types of I/Os.

1. Small, dependent reads -- maybe reading a B-tree block so we can get
the next pointer.  Those need low latency above all.

2. Readahead.  These tend to be large I/Os, and latency is not a concern.

3. Page writeback, which tends to be large and can afford the extra latency.

4. Log writes.  These tend to be small, and I'm not sure what increasing
their latency would do.  Probably bad.

I like 256kB as a threshold.  I think I could get behind anything from
128kB to 1MB.  I don't think playing with it is going to be very
interesting, because most I/Os are going to be far below or far above
that threshold.
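
The check itself is trivial -- something along these lines (the constant
and the struct/field names here are made up for illustration, not the
actual patch):

	#define EXAMPLE_IOEND_PUNT_BYTES	(256 * 1024)	/* the 256kB above */

	static void example_end_bio(struct bio *bio)
	{
		struct example_ioend *ioend = bio->bi_private;

		/* Large ioends get deferred to the completion workqueue. */
		if (ioend->io_size >= EXAMPLE_IOEND_PUNT_BYTES)
			queue_work(ioend->io_wq, &ioend->io_work);
		else
			example_ioend_complete(ioend);	/* small: complete inline */
	}

Whatever the exact value, the point is that almost every I/O falls
clearly on one side of it or the other.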



Thread overview: 16+ messages
2020-10-02 15:33 [PATCH 0/2] iomap: avoid soft lockup warnings on large ioends Brian Foster
2020-10-02 15:33 ` [PATCH 1/2] iomap: resched ioend completion when in non-atomic context Brian Foster
2020-10-02 15:33 ` [PATCH 2/2] xfs: kick extra large ioends to completion workqueue Brian Foster
2020-10-02 16:19   ` Darrick J. Wong
2020-10-02 16:38     ` Brian Foster
2020-10-03  0:26   ` kernel test robot
2020-10-05 15:21   ` [PATCH v2 " Brian Foster
2020-10-06  3:55     ` Darrick J. Wong
2020-10-06 12:44       ` Brian Foster
2021-05-06 19:31         ` Darrick J. Wong
2021-05-07 14:06           ` Brian Foster
2021-05-07 14:40             ` Matthew Wilcox [this message]
2021-05-10  2:45               ` Dave Chinner
2020-10-06 14:07       ` Matthew Wilcox
2021-05-06 19:34         ` Darrick J. Wong
2021-05-06 19:45           ` Matthew Wilcox
