From: Davidlohr Bueso <dbueso@suse.com>
To: Dave Chinner <david@fromorbit.com>
Cc: Jan Kara <jack@suse.cz>, Amir Goldstein <amir73il@gmail.com>,
	"Darrick J. Wong" <darrick.wong@oracle.com>,
	Christoph Hellwig <hch@lst.de>, Matthew Wilcox <willy@infradead.org>,
	linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org
Subject: Re: [POC][PATCH] xfs: reduce ilock contention on buffered randrw workload
Date: Thu, 18 Apr 2019 11:21:34 -0700
Message-ID: <1555611694.18313.12.camel@suse.com>
In-Reply-To: <20190418031013.GX29573@dread.disaster.area>

On Thu, 2019-04-18 at 13:10 +1000, Dave Chinner wrote:
> Now the stuff I've been working on has the same interface as
> Davidlohr's patch, so I can swap and change them without thinking
> about it. It's still completely unoptimised, but:
>
>                  IOPS read/write (direct IO)
> processes    rwsem           DB rangelock    XFS rangelock
>     1         78k /  78k      75k /  75k      72k /  72k
>     2        131k / 131k     123k / 123k     133k / 133k
>     4        267k / 267k     183k / 183k     237k / 237k
>     8        372k / 372k     177k / 177k     265k / 265k
>    16        315k / 315k     135k / 135k     228k / 228k
>
> It's still substantially faster than the interval tree code.

In general, another big difference between rangelocks and rwsems (when
comparing them over full ranges) is that the latter do writer optimistic
spinning, so saving a context switch under the right scenarios provides
major wins for rwsems -- I'm not sure whether that applies to your fio
tests, though. And pretty soon readers will do this as well, hence rwsem
will become a try-hard-not-to-sleep lock.

One of the reasons I was hesitant about btrees was the fact that
insertion requires memory allocation, something I wanted to avoid...
per your numbers, sacrificing tree depth was the wrong choice. Thanks
for sharing these numbers.

> BTW, if I take away the rwsem serialisation altogether, this
> test tops out at just under 500k/500k at 8 threads, and at 16
> threads has started dropping off (~440k/440k). So the rwsem is
> a scalability limitation at just 8 threads....
>
> /me goes off and thinks more about adding optimistic lock coupling
> to the XFS iext btree to get rid of the need for tree-wide
> locking altogether

I was not aware of this code.

Thanks,
Davidlohr
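[The "saved context switch" from optimistic spinning mentioned above comes
from spinning briefly on a contended lock instead of sleeping right away.
A minimal userspace sketch of that shape, using a plain pthread mutex and
an illustrative helper name -- not the kernel's rwsem code, which is more
involved and only keeps spinning while the lock owner is actually running
on a CPU:]

    #include <pthread.h>

    /*
     * Illustrative spin-then-sleep acquisition: retry a non-blocking
     * acquire a bounded number of times in the hope that the holder
     * drops the lock soon, and only fall back to a blocking (and thus
     * context-switching) wait if that fails.
     */
    #define SPIN_ATTEMPTS 1024

    static void lock_spin_then_sleep(pthread_mutex_t *m)
    {
            int i;

            for (i = 0; i < SPIN_ATTEMPTS; i++) {
                    if (pthread_mutex_trylock(m) == 0)
                            return;         /* fast path: no sleep needed */
            }
            pthread_mutex_lock(m);          /* slow path: block until free */
    }

[Under short hold times the fast path usually wins, so a writer can take
the rwsem without ever scheduling out -- the win Davidlohr describes.]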