From: Amir Goldstein <amir73il@gmail.com>
To: Jan Kara <jack@suse.cz>
Cc: Matthew Wilcox <willy@infradead.org>,
	"Darrick J . Wong" <darrick.wong@oracle.com>,
	Dave Chinner <david@fromorbit.com>,
	Christoph Hellwig <hch@lst.de>,
	linux-xfs <linux-xfs@vger.kernel.org>,
	linux-fsdevel <linux-fsdevel@vger.kernel.org>
Subject: Re: [POC][PATCH] xfs: reduce ilock contention on buffered randrw workload
Date: Wed, 22 Jun 2022 19:26:48 +0300
Message-ID: <CAOQ4uxiGNHp55AU-dP1W13Qc=WAi3FTephzus+QtWgJuR24Cjw@mail.gmail.com>
In-Reply-To: <20220622093403.hvsk2zmlw7o37phe@quack3.lan>

> > I am going to go find a machine with slow disk to test the random rw
> > workload again on both xfs and ext4 pre and post invalidate_lock and
> > to try out the pre-warm page cache solution.
> >
> > The results could be:
> > a) ext4 random rw performance has been degraded by invalidate_lock
> > b) pre-warm page cache before taking IOLOCK is going to improve
> >     xfs random rw performance
> > c) A little bit of both

The correct answer is b. :)
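
To see where the contention comes from: on XFS the buffered read path
takes the IOLOCK shared while the buffered write path takes it
exclusive, so a cache-cold read holds the lock across disk I/O and
stalls every writer queued behind it. Roughly (simplified from
fs/xfs/xfs_file.c, error handling elided):

	/* Buffered read: shared IOLOCK, held across cache-miss disk I/O */
	xfs_ilock(ip, XFS_IOLOCK_SHARED);
	ret = generic_file_read_iter(iocb, to);
	xfs_iunlock(ip, XFS_IOLOCK_SHARED);

	/*
	 * Buffered write: exclusive IOLOCK, so one slow reader stalls
	 * all writers (and the queued writers in turn starve readers).
	 */
	xfs_ilock(ip, XFS_IOLOCK_EXCL);
	ret = iomap_file_buffered_write(iocb, from,
			&xfs_buffered_write_iomap_ops);
	xfs_iunlock(ip, XFS_IOLOCK_EXCL);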

>
> Well, numbers always beat the theory so I'm all for measuring it but let me
> say our kernel performance testing within SUSE didn't show significant hit
> being introduced by invalidate_lock for any major filesystem.
>

Here are the numbers produced on v5.10.109, on v5.19-rc3,
and on v5.19-rc3+, which includes the pre-warm test patch [1].

The numbers were produced by a filebench workload [2] that runs
8 random reader threads and 8 random writer threads for 60 seconds
against a cold-cache, preallocated 5GB file.

Note that the machine I tested on has much faster storage than
the one used 3 years ago, but the performance impact of the IOLOCK
is still very clear; in fact, it is even larger in this test.
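
To summarize the full filebench output pasted below
(ops/s, test #1 / test #2):

              rand-read1          rand-write1
  ext4 5.10:  528988 / 536228     50020 / 51381
  ext4 5.19:  616542 / 648118     47159 / 46067
  xfs  5.10:     555 /    589     17485 / 18788
  xfs  5.19:    1446 /   2783     54920 / 54106
  xfs+ 5.19:  634990 / 642874     50899 / 45053

The page cache warmup brings xfs random read throughput up to
ext4's level without hurting write throughput.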

If there are no other objections to the pre-warm concept,
I will go on to write and test a proper patch.
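
For concreteness, here is a rough sketch of the concept; it is
illustrative only, not the test patch [1] verbatim:

static ssize_t
xfs_file_buffered_read(
	struct kiocb		*iocb,
	struct iov_iter		*to)
{
	struct file		*file = iocb->ki_filp;
	struct xfs_inode	*ip = XFS_I(file_inode(file));
	pgoff_t			index = iocb->ki_pos >> PAGE_SHIFT;
	pgoff_t			last = (iocb->ki_pos +
					iov_iter_count(to) - 1) >> PAGE_SHIFT;
	ssize_t			ret;

	/*
	 * Warm up the page cache for the requested range before taking
	 * the IOLOCK, so the copy below mostly finds uptodate pages and
	 * the shared lock is no longer held across cache-miss disk I/O.
	 */
	if (iov_iter_count(to))
		page_cache_sync_readahead(file->f_mapping, &file->f_ra,
					  file, index, last - index + 1);

	ret = xfs_ilock_iocb(iocb, XFS_IOLOCK_SHARED);
	if (ret)
		return ret;
	ret = generic_file_read_iter(iocb, to);
	xfs_iunlock(ip, XFS_IOLOCK_SHARED);

	return ret;
}

The readahead is only a hint: a page can still be reclaimed before
the copy runs and will then be read in under the IOLOCK as before,
so correctness is unchanged; only the common case avoids doing I/O
under the lock.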

Thanks,
Amir.

[1] https://github.com/amir73il/linux/commit/70e94f3471739c442b1110ee46e8b59e5d5f5042
[2] https://github.com/amir73il/filebench/blob/overlayfs-devel/workloads/randomrw.f

--- EXT4 5.10 ---
 filebench randomrw (8 read threads, 8 write threads)
 kernel 5.10.109, ext4

Test #1:
rand-write1          3002127ops    50020ops/s 390.8mb/s    0.156ms/op [0.002ms - 213.755ms]
rand-read1           31749234ops   528988ops/s 4132.7mb/s  0.010ms/op [0.001ms - 68.884ms]

Test #2:
rand-write1          3083679ops    51381ops/s 401.4mb/s    0.152ms/op [0.002ms - 181.368ms]
rand-read1           32182118ops   536228ops/s 4189.3mb/s  0.010ms/op [0.001ms - 61.158ms]

--- EXT4 5.19 ---
 filebench randomrw (8 read threads, 8 write threads)
 kernel 5.19-rc3, ext4

Test #1:
rand-write1          2829917ops    47159ops/s 368.4mb/s    0.160ms/op [0.002ms - 4709.167ms]
rand-read1           36997540ops   616542ops/s 4816.7mb/s  0.009ms/op [0.001ms - 4704.105ms]

Test #2:
rand-write1          2764486ops    46067ops/s 359.9mb/s    0.170ms/op [0.002ms - 5042.597ms]
rand-read1           38893279ops   648118ops/s 5063.4mb/s  0.008ms/op [0.001ms - 5004.069ms]

--- XFS 5.10 ---
 filebench randomrw (8 read threads, 8 write threads)
 kernel 5.10.109, xfs

Test #1:
rand-write1          1049278ops    17485ops/s 136.6mb/s    0.456ms/op [0.002ms - 224.062ms]
rand-read1           33325ops      555ops/s   4.3mb/s     14.392ms/op [0.007ms - 224.833ms]

Test #2:
rand-write1          1127497ops    18788ops/s 146.8mb/s    0.424ms/op [0.003ms - 445.810ms]
rand-read1           35341ops      589ops/s   4.6mb/s     13.566ms/op [0.005ms - 445.529ms]

--- XFS 5.19 ---
 filebench randomrw (8 read threads, 8 write threads)
 kernel 5.19-rc3, xfs

Test #1:
rand-write1          3295934ops    54920ops/s 429.1mb/s    0.144ms/op [0.003ms - 109.703ms]
rand-read1           86768ops      1446ops/s  11.3mb/s     5.520ms/op [0.003ms - 372.000ms]

Test #2:
rand-write1          3246935ops    54106ops/s 422.7mb/s    0.146ms/op [0.002ms - 103.505ms]
rand-read1           167018ops     2783ops/s  21.7mb/s     2.867ms/op [0.003ms - 101.105ms]

--- XFS+ 5.19 ---
 filebench randomrw (8 read threads, 8 write threads)
 kernel 5.19-rc3+ (xfs page cache warmup patch)

Test #1:
rand-write1          3054567ops    50899ops/s 397.6mb/s    0.154ms/op [0.002ms - 201.531ms]
rand-read1           38107333ops   634990ops/s 4960.9mb/s  0.008ms/op [0.001ms - 60.027ms]

Test #2:
rand-write1          2704416ops    45053ops/s 352.0mb/s    0.174ms/op [0.002ms - 287.079ms]
rand-read1           38589737ops   642874ops/s 5022.4mb/s  0.008ms/op [0.001ms - 60.741ms]
