linux-btrfs.vger.kernel.org archive mirror
* [PATCH 00/12] Convert btree locking to an rwsem
@ 2020-08-20 15:45 Josef Bacik
From: Josef Bacik @ 2020-08-20 15:45 UTC (permalink / raw)
  To: linux-btrfs, kernel-team

v1->v2:
- broke the lockdep fixes out into their own series.
- added a new fix where we need to handle double splitting leaves in some cases.

----- Original email -------
tl;dr:
- Cleaner code.
- dbench 200 3x as fast.
- fio with < nr_cpus is 10% slower.
- fio with 2 * nr_cpus is 10% faster.
- fs_mark is the same ish (results have a 10% variance between runs).

When Chris originally wrote the locking code for Btrfs, the performance of rwsem
was atrocious compared to his locking scheme.  In btrfs we can usually spin on
most accesses, because generally we won't need to block.  If we do need to block
we can simply flip over to a blocking lock.  This flexibility has meant that our
locking scheme generally outperforms the standard locking primitives.  We also
have a unique use case where in one specific spot (the space cache) we need to
be able to take a tree lock recursively.  This is safe because we're just
reading, and we already hold the write lock, but it is not something the
generic locking infrastructure provides.
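The spin-then-block pattern described above can be sketched in userspace C.  This
is purely illustrative, not the btrfs code (which lives in fs/btrfs/locking.c and
is kernel code); the function name and the spin budget of 1000 are invented for
the example:

```c
/* Illustrative userspace sketch of "spin first, block if contended".
 * The real implementation is kernel code in fs/btrfs/locking.c; the
 * function name and spin budget here are invented for the example. */
#include <assert.h>
#include <errno.h>
#include <pthread.h>

void demo_lock_acquire(pthread_mutex_t *lk)
{
	int i;

	/* Fast path: spin briefly, hoping the holder releases soon. */
	for (i = 0; i < 1000; i++)
		if (pthread_mutex_trylock(lk) == 0)
			return;
	/* Slow path: stop burning CPU and sleep until the lock is free. */
	pthread_mutex_lock(lk);
}
```

The point of the trick is that the common, uncontended acquisition never pays the
cost of going to sleep.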

There are a few downsides:

1) The code is complicated.  This generally isn't an issue because we don't
really have to touch the code that often.

2) We lose lockdep help.  As you can see the first 4 patches in this series are
fixing deadlocks that exist that were caught by having lockdep properly working
with our tree locking.

3) We actually perform really poorly in the highly contended case.  Because we
don't do any of the things proper locks do, like dealing with writer/reader
starvation, we have a series of waitqueues that all get woken up when we unlock
a blocking lock.  This means our context switch counts can be stupidly high, as
we wake everybody up and whoever wins the race gets the lock.  Implementing
proper exclusive waitqueues is a possibility, but it would further complicate
the code, and likely introduce problems we'd spend the next 6 months figuring
out.
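The mass-wakeup cost can be demonstrated with a small userspace analogue, where
a POSIX condition variable stands in for the kernel waitqueues and all the names
are invented for the illustration: a broadcast wakes every waiter, every waiter
pays a context switch, but only one of them can actually win the lock.

```c
/* Userspace analogue of the mass wakeup: NWAITERS threads park on one
 * condition variable (standing in for the btrfs waitqueues), a single
 * broadcast wakes them all, and only one of them can "win" the lock.
 * All names here are invented for the illustration. */
#include <assert.h>
#include <pthread.h>
#include <sched.h>

#define NWAITERS 4

static pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
static int parked;	/* waiters currently blocked on the condvar */
static int released;	/* the "blocking unlock" has happened */
static int token = 1;	/* the lock itself: only one waiter gets it */
static int woken, winners;

static void *waiter(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&mtx);
	parked++;
	while (!released)
		pthread_cond_wait(&cond, &mtx);
	woken++;		/* every waiter pays the wakeup... */
	if (token) {		/* ...but only one wins the race */
		token = 0;
		winners++;
	}
	pthread_mutex_unlock(&mtx);
	return NULL;
}

void run_mass_wakeup(int *woken_out, int *winners_out)
{
	pthread_t t[NWAITERS];
	int i;

	for (i = 0; i < NWAITERS; i++)
		pthread_create(&t[i], NULL, waiter, NULL);

	/* Wait until every waiter is parked, then wake them all at once. */
	pthread_mutex_lock(&mtx);
	while (parked < NWAITERS) {
		pthread_mutex_unlock(&mtx);
		sched_yield();
		pthread_mutex_lock(&mtx);
	}
	released = 1;
	pthread_cond_broadcast(&cond);
	pthread_mutex_unlock(&mtx);

	for (i = 0; i < NWAITERS; i++)
		pthread_join(t[i], NULL);
	*woken_out = woken;
	*winners_out = winners;
}
```

All NWAITERS wakeups are paid for a single successful acquisition; the losers go
right back to sleep, which is where the context switch overhead comes from.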

It's #3 that actually brought me to this patch series, as switching to an rwsem
significantly improved a use case I was investigating.  However the rwsem isn't
without drawbacks, namely that in less contended cases we can be slower than
baseline, around 9-10% on my test box.  That said, in the very highly contended
cases (> nr_cpus threads) we are the same, if not better.

Filipe and I have discussed this a bit over the last few days; we're both on the
fence about it currently.  On one hand we could likely make up the lost
performance in other areas, and there are certainly things that could be
investigated to see why exactly we're slower with the rwsem, and maybe a way to
fix that.  We also gain a lot with this new code.  But 9-10% isn't nothing, so
it needs to be taken into consideration.

The following is the set of benchmarks that I've run on my 80 CPU, 256 GiB of
RAM, 2 TiB NVMe drive machine.  If there's anything else people would like to
see, I can easily run A/B tests.

fio ran N threads against 1 GiB files with a 64 KiB block size and an fsync
after every write.  fs_mark just created empty files with 16 threads.
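The actual job file isn't included in this letter, but a fio job approximating
that description might look like the following (the job name, mount point, and
numjobs value are hypothetical; numjobs varied between runs):

```ini
; Illustrative fio job approximating the description above.
[btrfs-fsync-test]
rw=write
bs=64k
size=1g
numjobs=64
fsync=1
directory=/mnt/btrfs-test   ; hypothetical mount point
group_reporting
```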

                        PATCHED         UNPATCHED       % DIFF
dbench 200              699.011 MB/s    213.568 MB/s    +227%
fs_mark                 223 seconds     197 seconds     -11.65%
fio 64 threads          562 MB/s        624 MB/s        -9.9%
fio 100 threads         566 MB/s        566 MB/s        0.0%
fio 160 threads         593 MB/s        576 MB/s        +2.95%

Another thing not shown by the raw numbers is the number of context switches.
Because of the nature of our locking we constantly wake up every waiter, so the
context switch counts for the fio jobs are consistently 2-3x higher with the old
scheme than with the rwsem.  This is why my particular workload performed so
poorly; the context switching was actually quite painful for it.
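For anyone reproducing the numbers, one way to observe this from userspace is
getrusage(), which reports voluntary and involuntary context switch counts for
the calling process (the helper name here is made up; `perf stat -e
context-switches` or `pidstat -w` against the fio processes works too):

```c
/* Illustrative helper: read the calling process's context switch
 * counters.  Voluntary switches (ru_nvcsw) are the ones a sleeping
 * lock scheme racks up; involuntary switches (ru_nivcsw) come from
 * preemption. */
#include <assert.h>
#include <sys/resource.h>

void get_ctx_switches(long *voluntary, long *involuntary)
{
	struct rusage ru;

	assert(getrusage(RUSAGE_SELF, &ru) == 0);
	*voluntary = ru.ru_nvcsw;
	*involuntary = ru.ru_nivcsw;
}
```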

The diffstat is as follows:

 fs/btrfs/backref.c            |  13 +-
 fs/btrfs/ctree.c              | 172 +++++--------
 fs/btrfs/ctree.h              |  10 +-
 fs/btrfs/delayed-inode.c      |  11 -
 fs/btrfs/dir-item.c           |   1 -
 fs/btrfs/disk-io.c            |  13 +-
 fs/btrfs/export.c             |   1 -
 fs/btrfs/extent-tree.c        |  39 +--
 fs/btrfs/extent_io.c          |  16 +-
 fs/btrfs/extent_io.h          |  21 +-
 fs/btrfs/file-item.c          |   4 -
 fs/btrfs/file.c               |   3 +-
 fs/btrfs/free-space-tree.c    |   2 -
 fs/btrfs/inode-item.c         |   6 -
 fs/btrfs/inode.c              |  13 +-
 fs/btrfs/ioctl.c              |   4 +-
 fs/btrfs/locking.c            | 469 ++++++----------------------------
 fs/btrfs/locking.h            |  92 ++++++-
 fs/btrfs/print-tree.c         |  11 +-
 fs/btrfs/qgroup.c             |  11 +-
 fs/btrfs/ref-verify.c         |   6 +-
 fs/btrfs/reflink.c            |   3 -
 fs/btrfs/relocation.c         |  15 +-
 fs/btrfs/super.c              |   2 -
 fs/btrfs/tests/qgroup-tests.c |   4 -
 fs/btrfs/transaction.c        |   7 +-
 fs/btrfs/tree-defrag.c        |   1 -
 fs/btrfs/tree-log.c           |   3 -
 28 files changed, 273 insertions(+), 680 deletions(-)

Thanks,

Josef


2020-08-20 15:45 [PATCH 00/12] Convert btree locking to an rwsem Josef Bacik
2020-08-20 15:46 ` [PATCH 01/12] btrfs: rename eb->lock_nested to eb->lock_recursed Josef Bacik
2020-08-20 15:46 ` [PATCH 02/12] btrfs: introduce path->recurse Josef Bacik
2020-08-20 15:46 ` [PATCH 03/12] btrfs: add nesting tags to the locking helpers Josef Bacik
2020-08-20 15:46 ` [PATCH 04/12] btrfs: introduce BTRFS_NESTING_COW for cow'ing blocks Josef Bacik
2020-08-20 15:46 ` [PATCH 05/12] btrfs: introduce BTRFS_NESTING_LEFT/BTRFS_NESTING_RIGHT Josef Bacik
2020-08-20 15:46 ` [PATCH 06/12] btrfs: introduce BTRFS_NESTING_LEFT/RIGHT_COW Josef Bacik
2020-08-20 15:46 ` [PATCH 07/12] btrfs: introduce BTRFS_NESTING_SPLIT for split blocks Josef Bacik
2020-08-20 15:46 ` [PATCH 08/12] btrfs: introduce BTRFS_NESTING_NEW_ROOT for adding new roots Josef Bacik
2020-08-20 15:46 ` [PATCH 09/12] btrfs: use BTRFS_NESTED_NEW_ROOT for double splits Josef Bacik
2020-08-20 15:46 ` [PATCH 10/12] btrfs: change our extent buffer lock to a rw_semaphore Josef Bacik
2020-08-20 15:46 ` [PATCH 11/12] btrfs: remove all of the blocking helpers Josef Bacik
2020-08-20 15:46 ` [PATCH 12/12] btrfs: rip out path->leave_spinning Josef Bacik
2020-08-26 14:58 ` [PATCH 00/12] Convert btree locking to an rwsem David Sterba
2020-08-31 22:41   ` David Sterba
2020-11-03 20:59   ` David Sterba
