From: Christoph Hellwig <hch@infradead.org>
To: Dave Chinner <david@fromorbit.com>
Cc: Matthew Wilcox <willy@infradead.org>,
	"Darrick J. Wong" <darrick.wong@oracle.com>,
	Zhengyuan Liu <liuzhengyuang521@gmail.com>,
	linux-xfs@vger.kernel.org,
	Zhengyuan Liu <liuzhengyuan@kylinos.cn>,
	Christoph Hellwig <hch@infradead.org>
Subject: Re: [Question] About XFS random buffer write performance
Date: Fri, 31 Jul 2020 07:55:25 +0100
Message-ID: <20200731065525.GC25674@infradead.org>
In-Reply-To: <20200730220857.GD2005@dread.disaster.area>

[delayed and partial response because I'm on vacation, but still
 feeling like I should chime in]

On Fri, Jul 31, 2020 at 08:08:57AM +1000, Dave Chinner wrote:
> In which case, you just identified why the uptodate array is
> necessary and can't be removed. If we do a sub-page write() the page
> is not fully initialised, and so if we then mmap it readpage needs
> to know what part of the page requires initialisation to bring the
> page uptodate before it is exposed to userspace.
> 
> But that also means the behaviour of the 4kB write on 64kB page size
> benchmark is unexplained, because that should only be marking the
> written pages of the page up to date, and so it should be behaving
> exactly like ext4 and only writing back individual uptodate chunks
> on the dirty page....

We have two different cases here:  a file read in through read or mmap,
or just writing to an uncached file.  In the former case readpage reads
the whole page in, and the whole page will also be written out.  If
OTOH write only read in parts of the page, only those parts will be
written out.
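
To make the two cases concrete, here is a toy userspace model.  This is
not the iomap code; the struct and function names are made up purely
for illustration, with per-block uptodate state and page-granularity
dirty state:

/* toy model: 64k page, 4k blocks, per-block uptodate tracking */
#include <stdbool.h>
#include <stdio.h>

#define PAGE_SIZE	65536
#define BLOCK_SIZE	4096
#define BLOCKS_PER_PAGE	(PAGE_SIZE / BLOCK_SIZE)

struct toy_page {
	bool uptodate[BLOCKS_PER_PAGE];	/* per 4k block */
	bool dirty;			/* dirtiness tracked per page here */
};

/* read or mmap fault: readpage brings the whole page uptodate */
static void toy_readpage(struct toy_page *p)
{
	for (int i = 0; i < BLOCKS_PER_PAGE; i++)
		p->uptodate[i] = true;
}

/* sub-page buffered write: only the written blocks become uptodate */
static void toy_write(struct toy_page *p, unsigned off, unsigned len)
{
	unsigned end = (off + len + BLOCK_SIZE - 1) / BLOCK_SIZE;

	for (unsigned i = off / BLOCK_SIZE; i < end; i++)
		p->uptodate[i] = true;
	p->dirty = true;
}

/* writeback: in this model all uptodate blocks of a dirty page go out */
static void toy_writeback(const struct toy_page *p)
{
	unsigned nr = 0;

	if (!p->dirty)
		return;
	for (int i = 0; i < BLOCKS_PER_PAGE; i++)
		if (p->uptodate[i])
			nr++;
	printf("writing back %u of %u blocks\n", nr, BLOCKS_PER_PAGE);
}

int main(void)
{
	struct toy_page a = { { false } }, b = { { false } };

	/* case 1: page was read (or faulted in) before the write */
	toy_readpage(&a);
	toy_write(&a, 0, 4096);
	toy_writeback(&a);		/* 16 of 16 blocks */

	/* case 2: write to an uncached file, no read first */
	toy_write(&b, 0, 4096);
	toy_writeback(&b);		/* 1 of 16 blocks */

	return 0;
}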

> > You're clearly talking to different SSD people than I am.
> 
> Perhaps so.
> 
> But it was pretty clear way back in the days of early sandforce SSD
> controllers that compression and zero detection at the FTL level
> resulted in massive reductions in write amplification right down at
> the hardware level. The next generation of controllers all did this
> so they could compete on performance. They still do this, which is
> why industry benchmarks test performance with incompressible data so
> that they expose the flash write performance, not just the rate at
> which the drive can detect and elide runs of zeros...

I don't know of any modern SSDs doing zero detection.
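
For reference, what "detect and elide runs of zeros" boils down to is
a check like the one below before programming flash.  This is only a
host-side sketch of the idea, not actual controller firmware:

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define LBA_SIZE	4096

/* Is this logical block entirely zero?  Checking buf[0] plus an
 * overlapping memcmp is the usual trick to avoid a second buffer. */
static bool block_is_zero(const unsigned char *buf, size_t len)
{
	return buf[0] == 0 && !memcmp(buf, buf + 1, len - 1);
}

int main(void)
{
	static unsigned char blk[LBA_SIZE];	/* zero-initialised */

	/* An FTL doing zero detection could map such an LBA as unwritten
	 * instead of programming flash, so it costs no NAND writes. */
	printf("all zero: %d\n", block_is_zero(blk, sizeof(blk)));
	blk[100] = 0xff;
	printf("all zero: %d\n", block_is_zero(blk, sizeof(blk)));
	return 0;
}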

> IOWs, showing that even high end devices end up bandwidth limited
> under common workloads using default configurations is a much more
> convincing argument...

Not every SSD is a high end device.  If you have an enterprise SSD
with a non-volatile write cache and a full blown PCIe interface,
bandwidth is not going to be a limitation.  If on the other hand you
have an el-cheapo ATA SSD or a 2x gen3 PCIe consumer drive with very
few flash channels, it very much can be.
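
Back-of-the-envelope numbers for the link ceilings (my own arithmetic,
not measurements): PCIe gen3 runs 8 GT/s per lane with 128b/130b
encoding, so a x2 link tops out just under 2 GB/s before protocol
overhead, while a x4 link has roughly twice that:

#include <stdio.h>

int main(void)
{
	/* 8 GT/s per lane, 128b/130b encoding, 8 bits per byte */
	const double gen3_lane_mbs = 8000.0 * 128 / 130 / 8;	/* ~984.6 MB/s */

	printf("x2 consumer link:   ~%.1f GB/s\n", 2 * gen3_lane_mbs / 1000);
	printf("x4 enterprise link: ~%.1f GB/s\n", 4 * gen3_lane_mbs / 1000);
	return 0;
}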


Thread overview: 18+ messages
2020-07-28 11:34 [Question] About XFS random buffer write performance Zhengyuan Liu
2020-07-28 15:34 ` Darrick J. Wong
2020-07-28 15:47   ` Matthew Wilcox
2020-07-29  1:54     ` Dave Chinner
2020-07-29  2:12       ` Matthew Wilcox
2020-07-29  5:19         ` Dave Chinner
2020-07-29 18:50           ` Matthew Wilcox
2020-07-29 23:05             ` Dave Chinner
2020-07-30 13:50               ` Matthew Wilcox
2020-07-30 22:08                 ` Dave Chinner
2020-07-30 23:45                   ` Matthew Wilcox
2020-07-31  2:05                     ` Dave Chinner
2020-07-31  2:37                       ` Matthew Wilcox
2020-07-31 20:47                     ` Matthew Wilcox
2020-07-31 22:13                       ` Dave Chinner
2020-08-21  2:39                         ` Zhengyuan Liu
2020-07-31  6:55                   ` Christoph Hellwig [this message]
2020-07-29 13:02       ` Zhengyuan Liu
