From: Eric Sandeen <sandeen@redhat.com>
To: fsdevel <linux-fsdevel@vger.kernel.org>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Subject: [PATCH 2/2] fs: call fsnotify_sb_delete after evict_inodes
Date: Thu, 7 Nov 2019 13:48:48 -0600
Message-ID: <541385e5-e186-65b2-e38e-54dfb4c98e47@redhat.com>
In-Reply-To: <cb77c8d5-e894-4e9d-bf6f-fc1be14c5423@redhat.com>

When a filesystem is unmounted, we currently call fsnotify_sb_delete()
before evict_inodes(), which means that fsnotify_unmount_inodes()
must iterate over every inode on the superblock, even though it can
only act on inodes with a nonzero refcount.  This is inefficient and
can lead to livelocks because the walk must pass over a potentially
huge number of unreferenced, cached inodes.
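
To make the cost concrete, here is a simplified userspace model of the
shape of that walk (the struct and function names are illustrative,
not the kernel's): every inode on the per-superblock list is visited,
even though only the refcounted ones are acted on, so a cache full of
clean, unreferenced inodes turns the walk into pure overhead.

#include <stdio.h>

struct inode {
	int i_count;		/* reference count */
	struct inode *next;	/* next inode on the sb's inode list */
};

static void unmount_walk(const struct inode *head)
{
	const struct inode *inode;
	long visited = 0, acted_on = 0;

	for (inode = head; inode; inode = inode->next) {
		visited++;
		if (inode->i_count == 0)
			continue;	/* nothing to do, but the visit is still paid for */
		acted_on++;		/* the real walk would drop fsnotify marks here */
	}
	printf("visited %ld inodes, acted on %ld\n", visited, acted_on);
}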

However, since fsnotify_sb_delete() and evict_inodes() are working
on orthogonal sets of inodes (fsnotify_sb_delete() only cares about
nonzero refcount, and evict_inodes() only cares about zero refcount),
we can swap the order of the calls.  The fsnotify call then has a much
smaller list to walk: only the inodes that still hold a refcount.
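
Roughly, the two passes use complementary skip conditions; paraphrased
here against the toy struct inode from the sketch above (not the exact
kernel source):

static int evict_inodes_cares(const struct inode *inode)
{
	/* evict_inodes() only evicts inodes nobody still references */
	return inode->i_count == 0;
}

static int fsnotify_unmount_cares(const struct inode *inode)
{
	/* fsnotify_unmount_inodes() only needs the still-referenced ones */
	return inode->i_count != 0;
}

Because the two sets are disjoint, running evict_inodes() first strips
the zero-refcount inodes off the list, and the fsnotify walk that
follows only ever sees inodes it actually has to examine.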

This should speed things up overall and avoid livelocks in
fsnotify_unmount_inodes().

Signed-off-by: Eric Sandeen <sandeen@redhat.com>
Reviewed-by: Jan Kara <jack@suse.cz>
---

diff --git a/fs/notify/fsnotify.c b/fs/notify/fsnotify.c
index ac9eb273e28c..426f03b6e660 100644
--- a/fs/notify/fsnotify.c
+++ b/fs/notify/fsnotify.c
@@ -57,6 +57,10 @@ static void fsnotify_unmount_inodes(struct super_block *sb)
 		 * doing an __iget/iput with SB_ACTIVE clear would actually
 		 * evict all inodes with zero i_count from icache which is
 		 * unnecessarily violent and may in fact be illegal to do.
+		 *
+		 * However, we should have been called /after/ evict_inodes
+		 * removed all zero refcount inodes, in any case.  Test to
+		 * be sure.
 		 */
 		if (!atomic_read(&inode->i_count)) {
 			spin_unlock(&inode->i_lock);
diff --git a/fs/super.c b/fs/super.c
index cfadab2cbf35..cd352530eca9 100644
--- a/fs/super.c
+++ b/fs/super.c
@@ -448,10 +448,12 @@ void generic_shutdown_super(struct super_block *sb)
 		sync_filesystem(sb);
 		sb->s_flags &= ~SB_ACTIVE;
 
-		fsnotify_sb_delete(sb);
 		cgroup_writeback_umount();
 
+		/* evict all inodes with zero refcount */
 		evict_inodes(sb);
+		/* only nonzero refcount inodes can have marks */
+		fsnotify_sb_delete(sb);
 
 		if (sb->s_dio_done_wq) {
 			destroy_workqueue(sb->s_dio_done_wq);



