From: Dave Chinner <david@fromorbit.com>
To: linux-xfs@vger.kernel.org
Subject: [PATCH 0/5 v2] xfs: fix a couple of performance issues
Date: Tue, 12 May 2020 19:28:06 +1000	[thread overview]
Message-ID: <20200512092811.1846252-1-david@fromorbit.com> (raw)

Hi folks,

To follow up on the interesting performance gain I found, there are
three RFC patches here following on from the two I posted earlier.
These get rid of the CIL xc_cil_lock entirely by moving the whole
CIL list and its accounting over to percpu structures.

The result is that I'm topping out at about 1.12M transactions/s
and bottlenecking on VFS spinlocks in the dentry cache path walk
code and the superblock inode list lock. The XFS CIL commit path
mostly disappears from the profiles when creating about 600,000
inodes/s:


-   73.42%     0.12%  [kernel]               [k] path_openat
   - 11.29% path_openat
      - 7.12% xfs_vn_create
         - 7.18% xfs_vn_mknod
            - 7.30% xfs_generic_create
               - 6.73% xfs_create
                  - 2.69% xfs_dir_ialloc
                     - 2.98% xfs_ialloc
                        - 1.26% xfs_dialloc
                           - 1.04% xfs_dialloc_ag
                        - 1.02% xfs_setup_inode
                           - 0.90% inode_sb_list_add
>>>>>                         - 1.09% _raw_spin_lock
                                 - 4.47% do_raw_spin_lock
                                      4.05% __pv_queued_spin_lock_slowpath
                        - 0.75% xfs_iget
                  - 2.43% xfs_trans_commit
                     - 3.47% __xfs_trans_commit
                        - 7.47% xfs_log_commit_cil
                             1.60% memcpy_erms
                           - 1.35% xfs_buf_item_size
                                0.99% xfs_buf_item_size_segment.isra.0
                             1.30% xfs_buf_item_format
                  - 1.44% xfs_dir_createname
                     - 1.60% xfs_dir2_node_addname
                        - 1.08% xfs_dir2_leafn_add
                             0.79% xfs_dir3_leaf_check_int
      - 1.09% terminate_walk
         - 1.09% dput
>>>>>>      - 1.42% _raw_spin_lock
               - 7.75% do_raw_spin_lock
                    7.19% __pv_queued_spin_lock_slowpath
      - 0.99% xfs_vn_lookup
         - 0.96% xfs_lookup
            - 1.01% xfs_dir_lookup
               - 1.24% xfs_dir2_node_lookup
                  - 1.09% xfs_da3_node_lookup_int
      - 0.90% unlazy_walk
         - 0.87% legitimize_root
            - 0.94% __legitimize_path.isra.0
               - 0.91% lockref_get_not_dead
>>>>>>>           - 1.28% _raw_spin_lock
                     - 6.85% do_raw_spin_lock
                          6.29% __pv_queued_spin_lock_slowpath
      - 0.82% d_lookup
           __d_lookup
.....
+   39.21%     6.76%  [kernel]               [k] do_raw_spin_lock
+   35.07%     0.16%  [kernel]               [k] _raw_spin_lock
+   32.35%    32.13%  [kernel]               [k] __pv_queued_spin_lock_slowpath

So we're going 3-4x faster on this machine than without these
patches, yet we're still burning about 40% of the CPU consumed by
the workload on spinlocks.  IOWs, the XFS code is running 3-4x
faster consuming half the CPU, and we're bashing on other locks
now...

There's still more work to do to make these patches production
ready, but I figured people might want to comment on how much it
hurts their brain and whether there might be better ways to
aggregate all this percpu functionality into a neater package...

Cheers,

Dave.




Thread overview: 25+ messages
2020-05-12  9:28 Dave Chinner [this message]
2020-05-12  9:28 ` [PATCH 1/5] xfs: separate read-only variables in struct xfs_mount Dave Chinner
2020-05-12 12:30   ` Brian Foster
2020-05-12 16:09     ` Darrick J. Wong
2020-05-12 21:43       ` Dave Chinner
2020-05-12 21:53     ` Dave Chinner
2020-05-12  9:28 ` [PATCH 2/5] xfs: convert m_active_trans counter to per-cpu Dave Chinner
2020-05-12 12:31   ` Brian Foster
2020-05-12  9:28 ` [PATCH 3/5] [RFC] xfs: use percpu counters for CIL context counters Dave Chinner
2020-05-12 14:05   ` Brian Foster
2020-05-12 23:36     ` Dave Chinner
2020-05-13 12:09       ` Brian Foster
2020-05-13 21:52         ` Dave Chinner
2020-05-14  1:50           ` Dave Chinner
2020-05-14  2:49             ` Dave Chinner
2020-05-14 13:43           ` Brian Foster
2020-05-12  9:28 ` [PATCH 4/5] [RFC] xfs: per-cpu CIL lists Dave Chinner
2020-05-13 17:02   ` Brian Foster
2020-05-13 23:33     ` Dave Chinner
2020-05-14 13:44       ` Brian Foster
2020-05-14 22:46         ` Dave Chinner
2020-05-15 17:26           ` Brian Foster
2020-05-18  0:30             ` Dave Chinner
2020-05-12  9:28 ` [PATCH 5/5] [RFC] xfs: make CIL busy extent lists per-cpu Dave Chinner
2020-05-12 10:25 ` [PATCH 0/5 v2] xfs: fix a couple of performance issues Dave Chinner
