From: "Darrick J. Wong" <djwong@kernel.org>
To: Amir Goldstein <amir73il@gmail.com>
Cc: Jan Kara <jack@suse.cz>, Dave Chinner <david@fromorbit.com>,
	Christoph Hellwig <hch@lst.de>,
	"Darrick J . Wong" <darrick.wong@oracle.com>,
	Matthew Wilcox <willy@infradead.org>,
	linux-xfs <linux-xfs@vger.kernel.org>,
	linux-fsdevel <linux-fsdevel@vger.kernel.org>
Subject: Re: [POC][PATCH] xfs: reduce ilock contention on buffered randrw workload
Date: Wed, 14 Sep 2022 10:39:45 -0700
Message-ID: <YyIR4XmDYkYIK2ad@magnolia>
In-Reply-To: <CAOQ4uxhFJWW-ykyzomHCUWfWvbJNEmetw0G5mUYjFGoYJBb7NA@mail.gmail.com>

On Wed, Sep 14, 2022 at 07:29:15PM +0300, Amir Goldstein wrote:
> > > Dave, Christoph,
> > >
> > > I know you said that changing the atomic buffered read semantics
> > > is out of the question, and that you also objected to a mount option
> > > (which nobody would know how to use); I accept that.
> > >
> > > Given that a performant range-lock implementation is not trivial
> > > to accomplish (Dave, please correct me if I am wrong),
> > > and given the massive performance impact of XFS_IOLOCK_SHARED
> > > on this workload,
> > > what do you think about a POSIX_FADV_TORN_RW hint that a specific
> > > application could use to opt out of atomic buffered read semantics?
> > >
> > > The specific application that I want to modify to use this hint is Samba.
> > > Samba uses IO threads by default to issue pread/pwrite on the server
> > > for IO requested by the SMB client. The IO size is normally larger than
> > > the xfs block size and the range may not be block aligned.
> > >
> > > The SMB protocol has explicit byte range locks and the server implements
> > > them, so it is pretty safe to assume that a client that did not request
> > > range locks does not need xfs to do the implicit range locking for it.
> > >
> > > For this reason and because of the huge performance win,
> > > I would like to implement POSIX_FADV_TORN_RW in xfs and
> > > have Samba try to set this hint when supported.
> > >
> > > It is quite possible that NFSv4 servers (user and kernel)
> > > would also want to set this hint for very similar reasons.
> > >
> > > Thoughts?
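
(For concreteness, here's a rough sketch of how a server thread might
issue the proposed hint from userspace.  POSIX_FADV_TORN_RW doesn't
exist anywhere yet, so the advice value below is invented purely for
illustration:

	#include <fcntl.h>
	#include <stdio.h>

	#ifndef POSIX_FADV_TORN_RW
	#define POSIX_FADV_TORN_RW	8	/* assumed value, illustration only */
	#endif

	/* Opt this fd out of atomic buffered read semantics, if supported. */
	static void relax_read_atomicity(int fd)
	{
		/* len == 0 applies the advice from offset to end of file. */
		if (posix_fadvise(fd, 0, 0, POSIX_FADV_TORN_RW) != 0)
			fprintf(stderr, "torn-rw hint unsupported; keeping atomic reads\n");
	}

A server would call this once per open file whose client never asked
for byte-range locks, and fall back silently on older kernels.)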
> >
> > How about range locks for i_rwsem and invalidate_lock?  That could
> > reduce contention on VM farms, though I can only assume that, given that
> > I don't have a reference implementation to play with...
> >
> 
> If you are asking whether I have the bandwidth to work on range locks,
> then the answer is that I do not.
> 
> IIRC, Dave had a WIP and ran some benchmarks with range locks,
> but I do not know what state that work is in.

Yeah, that's what I was getting at -- I really wish Dave would post that
as an RFC.  The last time I talked to him about it, he was worried that
the extra complexity of the range lock structure would lead to more
memory traffic and overhead.
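
To make that concern concrete: a range lock has to track held ranges in
some shared structure (IIRC Dave's prototype was built on an interval
tree), so every lock/unlock cycle dirties shared cachelines that the
plain rwsem fast path never touches.  A purely hypothetical interface
sketch -- the names and layout here are my guesses, not his code:

	/* Sketch only; types come from linux/interval_tree.h. */
	struct range_lock_tree {
		spinlock_t		lock;	/* serializes tree updates */
		struct rb_root_cached	ranges;	/* interval tree of held ranges */
	};

	struct range_lock {
		struct interval_tree_node node;	/* [start, last] in bytes */
		struct task_struct	*owner;
		bool			shared;	/* read vs. write intent */
	};

	/* Sleep until [start, last] overlaps no conflicting holder. */
	void range_lock(struct range_lock_tree *tree, struct range_lock *rl,
			loff_t start, loff_t last, bool shared);
	void range_unlock(struct range_lock_tree *tree, struct range_lock *rl);

Note that even uncontended callers have to take tree->lock twice per IO,
which is exactly the extra memory traffic being worried about.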

I /know/ there are a lot of cloud vendors that would appreciate the
speedup that range locking might provide.  I'm also fairly sure there
are people who want maximum single-threaded iops and will /not/
like range locks, but I think we ought to let kernel distributors choose
which one they want.

Recently I've been playing around with static keys, because certain
parts of xfs online fsck need to hook into libxfs.  The hooks have some
overhead, so I'd want to reduce their cost to that of the instruction
prefetcher skipping over a nop sled when fsck isn't running.
I sorta suspect this is a way out -- the distributor selects a default
locking implementation at kbuild time, and we allow a kernel command
line parameter to switch (if desired) during early boot.  That only
works if the compiler supports asm goto (iirc), but that's not /so/
uncommon.
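
Something like this, using the existing static-branch machinery -- the
xfs_* names and the boot parameter are invented for illustration:

	#include <linux/init.h>
	#include <linux/jump_label.h>

	DEFINE_STATIC_KEY_FALSE(xfs_use_range_locks);	/* default: plain rwsem */

	/* Hypothetical boot parameter to flip the branch during early boot. */
	static int __init xfs_rangelocks_setup(char *str)
	{
		static_branch_enable(&xfs_use_range_locks);
		return 1;
	}
	__setup("xfs_rangelocks", xfs_rangelocks_setup);

	static inline void xfs_iolock_shared(struct xfs_inode *ip,
					     loff_t start, loff_t end)
	{
		if (static_branch_unlikely(&xfs_use_range_locks))
			xfs_range_lock_shared(ip, start, end);	/* hypothetical */
		else
			xfs_ilock(ip, XFS_IOLOCK_SHARED);	/* today's path */
	}

With asm goto, the static_branch_unlikely() test compiles down to a
single nop in the default case, so whichever path the distributor turns
off costs nothing at runtime.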

I'll try to prod Dave about this later today; maybe we can find someone
to work on it if he'd post the prototype.

--D

> The question is, if application developers know (or believe)
> that their application does not care about torn reads, are we
> refusing to let them opt out of atomic buffered reads
> (which they do not need) because no one has the time to
> work on range locks?
> 
> If that is the final decision, then when customers come to me to
> complain about this workload, my response will be:
> 
> If this workload is important for your application, either:
> - contribute developer resources to work on range locks,
> - carry a patch in your kernel, or
> - switch to another filesystem for this workload.
> 
> Thanks,
> Amir.

