linux-xfs.vger.kernel.org archive mirror
From: Matthew Wilcox <willy@infradead.org>
To: Dave Chinner <david@fromorbit.com>
Cc: "Darrick J. Wong" <darrick.wong@oracle.com>,
	Zhengyuan Liu <liuzhengyuang521@gmail.com>,
	linux-xfs@vger.kernel.org,
	Zhengyuan Liu <liuzhengyuan@kylinos.cn>,
	Christoph Hellwig <hch@infradead.org>
Subject: Re: [Question] About XFS random buffer write performance
Date: Thu, 30 Jul 2020 14:50:40 +0100
Message-ID: <20200730135040.GD23808@casper.infradead.org>
In-Reply-To: <20200729230503.GA2005@dread.disaster.area>

On Thu, Jul 30, 2020 at 09:05:03AM +1000, Dave Chinner wrote:
> On Wed, Jul 29, 2020 at 07:50:35PM +0100, Matthew Wilcox wrote:
> > I had a bit of a misunderstanding.  Let's discard that proposal
> > and discuss what we want to optimise for, ignoring THPs.  We don't
> > need to track any per-block state, of course.  We could implement
> > __iomap_write_begin() by reading in the entire page (skipping the last
> > few blocks if they lie outside i_size, of course) and then marking the
> > entire page Uptodate.
> 
> __iomap_write_begin() already does read-around for sub-page writes.
> And, if necessary, it does zeroing of unwritten extents, newly
> allocated ranges and ranges beyond EOF and marks them uptodate
> appropriately.

But it doesn't read in the entire page, just the blocks in the page which
will be touched by the write.

> > Buffer heads track several bits of information about each block:
> >  - Uptodate (contents of cache at least as recent as storage)
> >  - Dirty (contents of cache more recent than storage)
> >  - ... er, I think all the rest are irrelevant for iomap
> 
> 
> Yes, they are. And we optimised out the dirty tracking by just using
> the single dirty bit in the page.
> 
> > I think I just talked myself into what you were arguing for -- that we
> > change the ->uptodate bit array into a ->dirty bit array.
> > 
> > That implies that we lose the current optimisation that we can write at
> > a blocksize alignment into the page cache and not read from storage.
> 
> iomap does not do that. It always reads the entire page in, even for
> block aligned sub-page writes. IIRC, we even allocate on page
> granularity for sub-page block size filesystems so that we fill
> holes and can do full page writes in writeback because this tends to
> significantly reduce worst case file fragmentation for random sparse
> writes...

That isn't what __iomap_write_begin() does today.

Consider a 1kB block size filesystem and a 4kB page size host.  Trace through
writing 1kB at a 2kB offset into the file.
We call iomap_write_begin() with pos of 2048, len 1024.
Allocate a new page
Call __iomap_write_begin(2048, 1024)
block_start = 2048
block_end = 3072
iomap_adjust_read_range() sets poff and plen to 2048 & 1024
from == 2048, to == 3072, so we continue
block_start + plen == block_end so the loop terminates.
We didn't read anything.

> Modern SSDs really don't care about runs of zeros being written.
> They compress and/or deduplicate such things on the fly as part of
> their internal write-amplification reduction strategies. Pretty much
> all SSDs on the market these days - consumer or enterprise - do this
> sort of thing in their FTLs and so writing more than the exact
> changed data really doesn't make a difference.

You're clearly talking to different SSD people than I am.



Thread overview: 18+ messages
2020-07-28 11:34 [Question] About XFS random buffer write performance Zhengyuan Liu
2020-07-28 15:34 ` Darrick J. Wong
2020-07-28 15:47   ` Matthew Wilcox
2020-07-29  1:54     ` Dave Chinner
2020-07-29  2:12       ` Matthew Wilcox
2020-07-29  5:19         ` Dave Chinner
2020-07-29 18:50           ` Matthew Wilcox
2020-07-29 23:05             ` Dave Chinner
2020-07-30 13:50               ` Matthew Wilcox [this message]
2020-07-30 22:08                 ` Dave Chinner
2020-07-30 23:45                   ` Matthew Wilcox
2020-07-31  2:05                     ` Dave Chinner
2020-07-31  2:37                       ` Matthew Wilcox
2020-07-31 20:47                     ` Matthew Wilcox
2020-07-31 22:13                       ` Dave Chinner
2020-08-21  2:39                         ` Zhengyuan Liu
2020-07-31  6:55                   ` Christoph Hellwig
2020-07-29 13:02       ` Zhengyuan Liu
