From: Matthew Wilcox <willy@infradead.org>
To: Dave Chinner <david@fromorbit.com>
Cc: "Darrick J. Wong" <darrick.wong@oracle.com>,
Zhengyuan Liu <liuzhengyuang521@gmail.com>,
linux-xfs@vger.kernel.org,
Zhengyuan Liu <liuzhengyuan@kylinos.cn>,
Christoph Hellwig <hch@infradead.org>
Subject: Re: [Question] About XFS random buffer write performance
Date: Fri, 31 Jul 2020 03:37:29 +0100 [thread overview]
Message-ID: <20200731023729.GN23808@casper.infradead.org> (raw)
In-Reply-To: <20200731020558.GE2005@dread.disaster.area>

On Fri, Jul 31, 2020 at 12:05:58PM +1000, Dave Chinner wrote:
> On Fri, Jul 31, 2020 at 12:45:17AM +0100, Matthew Wilcox wrote:
> > Essentially, we do as you thought it worked: we read the entire page
> > (or at least the portion of it that isn't going to be overwritten).
> > Once all the bytes have been transferred, we can mark the page
> > Uptodate.  We'll need to wait for the transfer to happen if the
> > write overlaps a block boundary, but we do that right now.
>
> Right, we can do that, but it would be an entire page read, I think,
> because I see little point in doing two small IOs with a seek in
> between them when a single IO will do the entire thing much faster
> than two small IOs and put less IOP load on the disk.  We still have
> to think about the impact of IOs on spinning disks, unfortunately...

Heh, maybe don't read the existing code because we actually do that today
if, say, you have a write that spans bytes 800-3000 of a 4kB page. Worse,
we wait for each one individually before submitting the next, so the
drive doesn't even get the chance to see that we're doing read-seek-read.

I think we can profitably skip reading portions of the page if the
write overlaps either the beginning or the end of the page, but it's
not worth breaking up an I/O just to skip reading 2-3kB.

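
To make the tradeoff concrete, here is a userspace sketch of that
decision: read nothing when the write covers the whole page, read only
the uncovered tail or head when the write touches a page edge, and
otherwise issue one whole-page read rather than two small reads with a
seek between them.  The names (`read_range_for_write`,
`struct read_range`) are invented for illustration and ignore
filesystem-block alignment entirely:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct read_range {
	size_t start, end;	/* byte range within the page to read */
};

/*
 * For a write of [wstart, wend) into a page of page_size bytes, pick
 * the single read needed to bring the rest of the page uptodate.
 * Returns false if no read is needed at all.
 */
bool read_range_for_write(size_t wstart, size_t wend, size_t page_size,
			  struct read_range *r)
{
	if (wstart == 0 && wend == page_size)
		return false;		/* write overwrites the whole page */
	if (wstart == 0) {
		/* Write overlaps the start of the page: read the tail. */
		r->start = wend;
		r->end = page_size;
	} else if (wend == page_size) {
		/* Write overlaps the end of the page: read the head. */
		r->start = 0;
		r->end = wstart;
	} else {
		/*
		 * Interior write (e.g. bytes 800-3000 of a 4kB page):
		 * one whole-page read beats two small reads with a
		 * seek between them.
		 */
		r->start = 0;
		r->end = page_size;
	}
	return true;
}
```

For the bytes 800-3000 example above, this picks a single read of the
full 0-4096 range instead of the two reads (0-800 and 3000-4096) that
the current code submits and waits for one at a time.
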
The readahead window expands up to 256kB, so clearly we're comfortable
with doing potentially-unnecessary reads of at least that much.  I
start to wonder whether it might be worth skipping part of the page if
you do a 1MB write to the middle of a 2MB page, but the THP patchset
doesn't even try to allocate large pages in the write path yet, so the
question is moot for now.
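
If the write path ever does allocate large pages, the same decision
could grow a threshold for when skipping the written span is worth a
second IO.  This is purely speculative: `SKIP_THRESHOLD` is an
invented knob, pegged to the 256kB readahead window mentioned above
only for illustration:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical cutoff: we already tolerate over-reads up to the 256kB
 * readahead window, so only skip written spans larger than that. */
#define SKIP_THRESHOLD	(256 * 1024)

/*
 * For a write of [wstart, wend) into a large page of page_size bytes,
 * decide whether to split the read into head + tail reads around the
 * written span, or just read the whole page in one IO.
 */
bool split_reads_for_write(size_t wstart, size_t wend, size_t page_size)
{
	/* Only an interior write leaves both a head and a tail. */
	if (wstart == 0 || wend == page_size)
		return false;
	/* A 1MB write to the middle of a 2MB page skips enough to be
	 * worth the extra IOP; a 2kB write to a 4kB page does not. */
	return wend - wstart > SKIP_THRESHOLD;
}
```
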
Thread overview: 18+ messages
2020-07-28 11:34 [Question] About XFS random buffer write performance Zhengyuan Liu
2020-07-28 15:34 ` Darrick J. Wong
2020-07-28 15:47 ` Matthew Wilcox
2020-07-29 1:54 ` Dave Chinner
2020-07-29 2:12 ` Matthew Wilcox
2020-07-29 5:19 ` Dave Chinner
2020-07-29 18:50 ` Matthew Wilcox
2020-07-29 23:05 ` Dave Chinner
2020-07-30 13:50 ` Matthew Wilcox
2020-07-30 22:08 ` Dave Chinner
2020-07-30 23:45 ` Matthew Wilcox
2020-07-31 2:05 ` Dave Chinner
2020-07-31 2:37 ` Matthew Wilcox [this message]
2020-07-31 20:47 ` Matthew Wilcox
2020-07-31 22:13 ` Dave Chinner
2020-08-21 2:39 ` Zhengyuan Liu
2020-07-31 6:55 ` Christoph Hellwig
2020-07-29 13:02 ` Zhengyuan Liu