From: Matthew Wilcox <willy@infradead.org>
To: Steve French <smfrench@gmail.com>
Cc: Jeff Layton <jlayton@redhat.com>,
	David Howells <dhowells@redhat.com>,
	Trond Myklebust <trondmy@hammerspace.com>,
	Anna Schumaker <anna.schumaker@netapp.com>,
	Steve French <sfrench@samba.org>,
	Dominique Martinet <asmadeus@codewreck.org>,
	CIFS <linux-cifs@vger.kernel.org>,
	ceph-devel@vger.kernel.org, linux-cachefs@redhat.com,
	Alexander Viro <viro@zeniv.linux.org.uk>,
	linux-mm <linux-mm@kvack.org>,
	linux-afs@lists.infradead.org,
	v9fs-developer@lists.sourceforge.net,
	Christoph Hellwig <hch@lst.de>,
	linux-fsdevel <linux-fsdevel@vger.kernel.org>,
	linux-nfs <linux-nfs@vger.kernel.org>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	David Wysochanski <dwysocha@redhat.com>,
	LKML <linux-kernel@vger.kernel.org>,
	William Kucharski <william.kucharski@oracle.com>,
	Jaegeuk Kim <jaegeuk@kernel.org>, Chao Yu <yuchao0@huawei.com>,
	linux-f2fs-devel@lists.sourceforge.net
Subject: Re: [PATCH 00/33] Network fs helper library & fscache kiocb API [ver #3]
Date: Tue, 23 Feb 2021 20:27:42 +0000
Message-ID: <20210223202742.GM2858050@casper.infradead.org>
In-Reply-To: <CAH2r5mv+AdiODH1TSL+SOQ5qpZ25n7Ysrp+iYxauX9sD8ehhVQ@mail.gmail.com>

On Mon, Feb 15, 2021 at 11:22:20PM -0600, Steve French wrote:
> On Mon, Feb 15, 2021 at 8:10 PM Matthew Wilcox <willy@infradead.org> wrote:
> > The switch from readpages to readahead does help in a couple of corner
> > cases.  For example, if you have two processes reading the same file at
> > the same time, one will now block on the other (due to the page lock)
> > rather than submitting a mess of overlapping and partial reads.
> 
> Do you have a simple repro example of this we could try (fio, dbench, iozone
> etc) to get some objective perf data?

I don't.  The problem was noted by the f2fs people, so maybe they have a
reproducer.
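
If someone wants to poke at it, a minimal sketch might be two buffered
readers hammering the same file with fio (filename and sizes are made
up here):

    # Two processes doing buffered sequential reads of the same file.
    # Per the above, the old ->readpages() path could submit overlapping
    # and partial reads; with ->readahead() one reader blocks on the
    # page lock instead.
    fio --name=dual-read --filename=/mnt/test/file --rw=read \
        --bs=1M --size=1G --numjobs=2 --ioengine=psync --invalidate=1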

> My biggest worry is making sure that the switch to netfs doesn't degrade
> performance (which might be a low bar now, since current network file copy
> perf seems to significantly lag at least Windows), and in some easy-to-understand
> scenarios I want to make sure it actually helps perf.

I had a question about that ... you've mentioned having 4x4MB reads
outstanding as being the way to get optimum performance.  Is there a
significant performance difference between 4x4MB, 16x1MB and 64x256kB?
I'm concerned about having "too large" an I/O on the wire at a given time.
For example, with a 2Gbps link, you get 250MB/s.  That's a minimum
latency of 16us for a 4kB page, but 16ms for a 4MB page.
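
Spelling it out (transfer size divided by 250MB/s, rounded):

    4kB   / 250MB/s ~=  16us
    256kB / 250MB/s ~=   1ms
    4MB   / 250MB/s ~=  16ms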

"For very simple tasks, people can perceive latencies down to 2 ms or less"
(https://danluu.com/input-lag/)
so going all the way to 4MB I/Os takes us well into the perceptible
latency range, whereas a 256kB I/O is only about 1ms.

So could you do some experiments with fio doing direct I/O to see if
it takes significantly longer to do, say, 1TB of I/O in 4MB chunks vs
256kB chunks?  Obviously use threads to keep lots of I/Os outstanding.
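
Something like the following would do it (path, sizes and queue depths
are illustrative; the point is to keep the bytes in flight equal across
the two runs):

    # 1TB in 4MB chunks: 4 jobs x 256GB, 4 I/Os outstanding per job
    fio --name=big --filename=/mnt/test/file --rw=read --direct=1 \
        --ioengine=libaio --bs=4M --iodepth=4 --numjobs=4 --size=256G
    # same 1TB in 256kB chunks, 64 I/Os outstanding per job
    fio --name=small --filename=/mnt/test/file --rw=read --direct=1 \
        --ioengine=libaio --bs=256k --iodepth=64 --numjobs=4 --size=256G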


Thread overview: 81+ messages
2021-02-15 15:44 [PATCH 00/33] Network fs helper library & fscache kiocb API [ver #3] David Howells
2021-02-15 15:44 ` [PATCH 01/33] iov_iter: Add ITER_XARRAY David Howells
2021-02-15 15:44 ` [PATCH 02/33] mm: Add an unlock function for PG_private_2/PG_fscache David Howells
2021-02-16 10:26   ` Christoph Hellwig
2021-02-15 15:44 ` [PATCH 03/33] mm: Implement readahead_control pageset expansion David Howells
2021-02-16 10:32   ` Christoph Hellwig
2021-02-16 13:22     ` Matthew Wilcox
2021-02-17 14:36       ` Mike Marshall
2021-02-17 15:42       ` David Howells
2021-02-17 16:59         ` Mike Marshall
2021-02-17 22:20         ` David Howells
2021-02-16 11:48   ` David Howells
2021-02-17 16:13   ` Matthew Wilcox
2021-02-17 22:34   ` David Howells
2021-02-17 22:49     ` Matthew Wilcox
2021-02-18 17:47   ` David Howells
2021-02-15 15:45 ` [PATCH 04/33] vfs: Export rw_verify_area() for use by cachefiles David Howells
2021-02-16 10:26   ` Christoph Hellwig
2021-02-16 11:55   ` David Howells
2021-02-15 15:45 ` [PATCH 05/33] netfs: Make a netfs helper module David Howells
2021-02-15 15:45 ` [PATCH 06/33] netfs, mm: Move PG_fscache helper funcs to linux/netfs.h David Howells
2021-02-15 15:45 ` [PATCH 07/33] netfs, mm: Add unlock_page_fscache() and wait_on_page_fscache() David Howells
2021-02-15 15:45 ` [PATCH 08/33] netfs: Provide readahead and readpage netfs helpers David Howells
2021-02-15 15:45 ` [PATCH 09/33] netfs: Add tracepoints David Howells
2021-02-15 15:46 ` [PATCH 10/33] netfs: Gather stats David Howells
2021-02-15 15:46 ` [PATCH 11/33] netfs: Add write_begin helper David Howells
2021-02-15 15:46 ` [PATCH 12/33] netfs: Define an interface to talk to a cache David Howells
2021-02-15 15:46 ` [PATCH 13/33] netfs: Hold a ref on a page when PG_private_2 is set David Howells
2021-02-15 15:47 ` [PATCH 14/33] fscache, cachefiles: Add alternate API to use kiocb for read/write to cache David Howells
2021-02-16 10:49   ` Christoph Hellwig
2021-02-16 15:08   ` David Howells
2021-02-15 15:47 ` [PATCH 15/33] afs: Disable use of the fscache I/O routines David Howells
2021-02-15 15:47 ` [PATCH 16/33] afs: Pass page into dirty region helpers to provide THP size David Howells
2021-02-15 15:47 ` [PATCH 17/33] afs: Print the operation debug_id when logging an unexpected data version David Howells
2021-02-15 15:47 ` [PATCH 18/33] afs: Move key to afs_read struct David Howells
2021-02-15 15:47 ` [PATCH 19/33] afs: Don't truncate iter during data fetch David Howells
2021-02-15 15:48 ` [PATCH 20/33] afs: Log remote unmarshalling errors David Howells
2021-02-15 15:48 ` [PATCH 21/33] afs: Set up the iov_iter before calling afs_extract_data() David Howells
2021-02-15 15:48 ` [PATCH 22/33] afs: Use ITER_XARRAY for writing David Howells
2021-02-15 15:48 ` [PATCH 23/33] afs: Wait on PG_fscache before modifying/releasing a page David Howells
2021-02-15 15:49 ` [PATCH 24/33] afs: Extract writeback extension into its own function David Howells
2021-02-15 15:49 ` [PATCH 25/33] afs: Prepare for use of THPs David Howells
2021-02-15 15:49 ` [PATCH 26/33] afs: Use the fs operation ops to handle FetchData completion David Howells
2021-02-15 15:49 ` [PATCH 27/33] afs: Use new fscache read helper API David Howells
2021-02-15 15:49 ` [PATCH 28/33] ceph: disable old fscache readpage handling David Howells
2021-02-15 15:50 ` [PATCH 29/33] ceph: rework PageFsCache handling David Howells
2021-02-15 15:50 ` [PATCH 30/33] ceph: fix fscache invalidation David Howells
2021-02-15 15:50 ` [PATCH 31/33] ceph: convert readpage to fscache read helper David Howells
2021-02-15 15:50 ` [PATCH 32/33] ceph: plug write_begin into " David Howells
2021-02-15 15:51 ` [PATCH 33/33] ceph: convert ceph_readpages to ceph_readahead David Howells
2021-02-15 18:05 ` [PATCH 00/33] Network fs helper library & fscache kiocb API [ver #3] Jeff Layton
2021-02-16  0:40   ` Steve French
2021-02-16  2:10     ` Matthew Wilcox
2021-02-16  5:18       ` Steve French
2021-02-16  5:22       ` Steve French
2021-02-23 20:27         ` Matthew Wilcox [this message]
2021-02-24  4:57           ` Steve French
2021-02-24 13:32       ` David Howells
2021-02-24 15:51         ` Matthew Wilcox
2021-02-16 11:01     ` Jeff Layton
2021-02-15 22:46 ` [PATCH 34/33] netfs: Use in_interrupt() not in_softirq() David Howells
2021-02-16  8:42   ` Christoph Hellwig
2021-02-16  9:06     ` Sebastian Andrzej Siewior
2021-02-16  9:29   ` David Howells
2021-02-16  9:30     ` Christoph Hellwig
2021-02-18 14:02     ` [PATCH 34/33] netfs: Pass flag rather than use in_softirq() David Howells
2021-02-18 15:06       ` Marc Dionne
2021-02-18 15:16       ` Marc Dionne
2021-02-19  9:01       ` Sebastian Andrzej Siewior
