* [PATCH 0/4] read-only remount race fix v10
@ 2011-11-21 11:11 Miklos Szeredi
2011-11-21 11:11 ` [PATCH 1/4] vfs: keep list of mounts for each superblock Miklos Szeredi
` (4 more replies)
0 siblings, 5 replies; 15+ messages in thread
From: Miklos Szeredi @ 2011-11-21 11:11 UTC (permalink / raw)
To: viro
Cc: hch, linux-fsdevel, linux-kernel, jack, akpm, toshi.okajima, mszeredi
Al,
Please apply the following patches that fix read-only remount races.
Git tree is here:
git://git.kernel.org/pub/scm/linux/kernel/git/mszeredi/vfs.git read-only-remount-fixes.v10
Thanks,
Miklos
---
Miklos Szeredi (4):
vfs: keep list of mounts for each superblock
vfs: protect remounting superblock read-only
vfs: count unlinked inodes
vfs: prevent remount read-only if pending removes
---
fs/file_table.c | 23 -------------
fs/inode.c | 85 +++++++++++++++++++++++++++++++++++++++++++++++++
fs/internal.h | 1 +
fs/namespace.c | 57 ++++++++++++++++++++++++++++++++-
fs/super.c | 20 +++++++++--
include/linux/fs.h | 67 ++++++--------------------------------
include/linux/mount.h | 1 +
7 files changed, 170 insertions(+), 84 deletions(-)
^ permalink raw reply [flat|nested] 15+ messages in thread
* [PATCH 1/4] vfs: keep list of mounts for each superblock
2011-11-21 11:11 [PATCH 0/4] read-only remount race fix v10 Miklos Szeredi
@ 2011-11-21 11:11 ` Miklos Szeredi
2011-11-21 11:11 ` [PATCH 2/4] vfs: protect remounting superblock read-only Miklos Szeredi
` (3 subsequent siblings)
4 siblings, 0 replies; 15+ messages in thread
From: Miklos Szeredi @ 2011-11-21 11:11 UTC (permalink / raw)
To: viro
Cc: hch, linux-fsdevel, linux-kernel, jack, akpm, toshi.okajima, mszeredi
From: Miklos Szeredi <mszeredi@suse.cz>
Keep track of the vfsmounts belonging to a superblock. The list is
protected by vfsmount_lock.
Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Tested-by: Toshiyuki Okajima <toshi.okajima@jp.fujitsu.com>
---
fs/namespace.c | 10 ++++++++++
fs/super.c | 2 ++
include/linux/fs.h | 1 +
include/linux/mount.h | 1 +
4 files changed, 14 insertions(+), 0 deletions(-)
diff --git a/fs/namespace.c b/fs/namespace.c
index e5e1c7d..70a0748 100644
--- a/fs/namespace.c
+++ b/fs/namespace.c
@@ -696,6 +696,11 @@ vfs_kern_mount(struct file_system_type *type, int flags, const char *name, void
mnt->mnt_sb = root->d_sb;
mnt->mnt_mountpoint = mnt->mnt_root;
mnt->mnt_parent = mnt;
+
+ br_write_lock(vfsmount_lock);
+ list_add_tail(&mnt->mnt_instance, &mnt->mnt_sb->s_mounts);
+ br_write_unlock(vfsmount_lock);
+
return mnt;
}
EXPORT_SYMBOL_GPL(vfs_kern_mount);
@@ -745,6 +750,10 @@ static struct vfsmount *clone_mnt(struct vfsmount *old, struct dentry *root,
if (!list_empty(&old->mnt_expire))
list_add(&mnt->mnt_expire, &old->mnt_expire);
}
+
+ br_write_lock(vfsmount_lock);
+ list_add_tail(&mnt->mnt_instance, &mnt->mnt_sb->s_mounts);
+ br_write_unlock(vfsmount_lock);
}
return mnt;
@@ -805,6 +814,7 @@ put_again:
acct_auto_close_mnt(mnt);
goto put_again;
}
+ list_del(&mnt->mnt_instance);
br_write_unlock(vfsmount_lock);
mntfree(mnt);
}
diff --git a/fs/super.c b/fs/super.c
index afd0f1a..74ab2c8 100644
--- a/fs/super.c
+++ b/fs/super.c
@@ -142,6 +142,7 @@ static struct super_block *alloc_super(struct file_system_type *type)
INIT_LIST_HEAD(&s->s_dentry_lru);
INIT_LIST_HEAD(&s->s_inode_lru);
spin_lock_init(&s->s_inode_lru_lock);
+ INIT_LIST_HEAD(&s->s_mounts);
init_rwsem(&s->s_umount);
mutex_init(&s->s_lock);
lockdep_set_class(&s->s_umount, &type->s_umount_key);
@@ -200,6 +201,7 @@ static inline void destroy_super(struct super_block *s)
free_percpu(s->s_files);
#endif
security_sb_free(s);
+ WARN_ON(!list_empty(&s->s_mounts));
kfree(s->s_subtype);
kfree(s->s_options);
kfree(s);
diff --git a/include/linux/fs.h b/include/linux/fs.h
index 0c4df26..cbae78e 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -1428,6 +1428,7 @@ struct super_block {
#else
struct list_head s_files;
#endif
+ struct list_head s_mounts; /* list of mounts */
/* s_dentry_lru, s_nr_dentry_unused protected by dcache.c lru locks */
struct list_head s_dentry_lru; /* unused dentry lru */
int s_nr_dentry_unused; /* # of dentry on lru */
diff --git a/include/linux/mount.h b/include/linux/mount.h
index 33fe53d..f88c726 100644
--- a/include/linux/mount.h
+++ b/include/linux/mount.h
@@ -67,6 +67,7 @@ struct vfsmount {
#endif
struct list_head mnt_mounts; /* list of children, anchored here */
struct list_head mnt_child; /* and going through their mnt_child */
+ struct list_head mnt_instance; /* mount instance on sb->s_mounts */
int mnt_flags;
/* 4 bytes hole on 64bits arches without fsnotify */
#ifdef CONFIG_FSNOTIFY
--
1.7.7
* [PATCH 2/4] vfs: protect remounting superblock read-only
2011-11-21 11:11 [PATCH 0/4] read-only remount race fix v10 Miklos Szeredi
2011-11-21 11:11 ` [PATCH 1/4] vfs: keep list of mounts for each superblock Miklos Szeredi
@ 2011-11-21 11:11 ` Miklos Szeredi
2011-11-21 11:11 ` [PATCH 3/4] vfs: count unlinked inodes Miklos Szeredi
` (2 subsequent siblings)
4 siblings, 0 replies; 15+ messages in thread
From: Miklos Szeredi @ 2011-11-21 11:11 UTC (permalink / raw)
To: viro
Cc: hch, linux-fsdevel, linux-kernel, jack, akpm, toshi.okajima, mszeredi
From: Miklos Szeredi <mszeredi@suse.cz>
Currently, remounting a superblock read-only is racy in a major way.
With the per-mount read-only infrastructure it is now possible to
prevent most races, which this patch attempts.
Before starting the remount read-only, iterate through all mounts
belonging to the superblock and if none of them have any pending
writes, set sb->s_readonly_remount. This indicates that remount is in
progress and no further write requests are allowed. If the remount
succeeds set MS_RDONLY and reset s_readonly_remount.
If the remount is unsuccessful, just reset s_readonly_remount.
This can result in transient EROFS errors, despite the fact that the
remount failed. Unfortunately, holding off writes is difficult as
remount itself may touch the filesystem (e.g. through load_nls()),
which would deadlock.
A later patch deals with delayed writes due to nlink going to zero.
Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Tested-by: Toshiyuki Okajima <toshi.okajima@jp.fujitsu.com>
---
fs/internal.h | 1 +
fs/namespace.c | 40 +++++++++++++++++++++++++++++++++++++++-
fs/super.c | 22 ++++++++++++++++++----
include/linux/fs.h | 3 +++
4 files changed, 61 insertions(+), 5 deletions(-)
diff --git a/fs/internal.h b/fs/internal.h
index fe327c2..f925271 100644
--- a/fs/internal.h
+++ b/fs/internal.h
@@ -74,6 +74,7 @@ extern int finish_automount(struct vfsmount *, struct path *);
extern void mnt_make_longterm(struct vfsmount *);
extern void mnt_make_shortterm(struct vfsmount *);
+extern int sb_prepare_remount_readonly(struct super_block *);
extern void __init mnt_init(void);
diff --git a/fs/namespace.c b/fs/namespace.c
index 70a0748..f296790 100644
--- a/fs/namespace.c
+++ b/fs/namespace.c
@@ -311,6 +311,15 @@ static unsigned int mnt_get_writers(struct vfsmount *mnt)
#endif
}
+static int mnt_is_readonly(struct vfsmount *mnt)
+{
+ if (mnt->mnt_sb->s_readonly_remount)
+ return 1;
+ /* Order wrt setting s_flags/s_readonly_remount in do_remount() */
+ smp_rmb();
+ return __mnt_is_readonly(mnt);
+}
+
/*
* Most r/o checks on a fs are for operations that take
* discrete amounts of time, like a write() or unlink().
@@ -349,7 +358,7 @@ int mnt_want_write(struct vfsmount *mnt)
* MNT_WRITE_HOLD is cleared.
*/
smp_rmb();
- if (__mnt_is_readonly(mnt)) {
+ if (mnt_is_readonly(mnt)) {
mnt_dec_writers(mnt);
ret = -EROFS;
goto out;
@@ -466,6 +475,35 @@ static void __mnt_unmake_readonly(struct vfsmount *mnt)
br_write_unlock(vfsmount_lock);
}
+int sb_prepare_remount_readonly(struct super_block *sb)
+{
+ struct vfsmount *mnt;
+ int err = 0;
+
+ br_write_lock(vfsmount_lock);
+ list_for_each_entry(mnt, &sb->s_mounts, mnt_instance) {
+ if (!(mnt->mnt_flags & MNT_READONLY)) {
+ mnt->mnt_flags |= MNT_WRITE_HOLD;
+ smp_mb();
+ if (mnt_get_writers(mnt) > 0) {
+ err = -EBUSY;
+ break;
+ }
+ }
+ }
+ if (!err) {
+ sb->s_readonly_remount = 1;
+ smp_wmb();
+ }
+ list_for_each_entry(mnt, &sb->s_mounts, mnt_instance) {
+ if (mnt->mnt_flags & MNT_WRITE_HOLD)
+ mnt->mnt_flags &= ~MNT_WRITE_HOLD;
+ }
+ br_write_unlock(vfsmount_lock);
+
+ return err;
+}
+
static void free_vfsmnt(struct vfsmount *mnt)
{
kfree(mnt->mnt_devname);
diff --git a/fs/super.c b/fs/super.c
index 74ab2c8..027e02d 100644
--- a/fs/super.c
+++ b/fs/super.c
@@ -721,23 +721,33 @@ int do_remount_sb(struct super_block *sb, int flags, void *data, int force)
/* If we are remounting RDONLY and current sb is read/write,
make sure there are no rw files opened */
if (remount_ro) {
- if (force)
+ if (force) {
mark_files_ro(sb);
- else if (!fs_may_remount_ro(sb))
- return -EBUSY;
+ } else {
+ retval = sb_prepare_remount_readonly(sb);
+ if (retval)
+ return retval;
+
+ retval = -EBUSY;
+ if (!fs_may_remount_ro(sb))
+ goto cancel_readonly;
+ }
}
if (sb->s_op->remount_fs) {
retval = sb->s_op->remount_fs(sb, &flags, data);
if (retval) {
if (!force)
- return retval;
+ goto cancel_readonly;
/* If forced remount, go ahead despite any errors */
WARN(1, "forced remount of a %s fs returned %i\n",
sb->s_type->name, retval);
}
}
sb->s_flags = (sb->s_flags & ~MS_RMT_MASK) | (flags & MS_RMT_MASK);
+ /* Needs to be ordered wrt mnt_is_readonly() */
+ smp_wmb();
+ sb->s_readonly_remount = 0;
/*
* Some filesystems modify their metadata via some other path than the
@@ -750,6 +760,10 @@ int do_remount_sb(struct super_block *sb, int flags, void *data, int force)
if (remount_ro && sb->s_bdev)
invalidate_bdev(sb->s_bdev);
return 0;
+
+cancel_readonly:
+ sb->s_readonly_remount = 0;
+ return retval;
}
static void do_emergency_remount(struct work_struct *work)
diff --git a/include/linux/fs.h b/include/linux/fs.h
index cbae78e..58f50f2 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -1482,6 +1482,9 @@ struct super_block {
int cleancache_poolid;
struct shrinker s_shrink; /* per-sb shrinker handle */
+
+ /* Being remounted read-only */
+ int s_readonly_remount;
};
/* superblock cache pruning functions */
--
1.7.7
* [PATCH 3/4] vfs: count unlinked inodes
2011-11-21 11:11 [PATCH 0/4] read-only remount race fix v10 Miklos Szeredi
2011-11-21 11:11 ` [PATCH 1/4] vfs: keep list of mounts for each superblock Miklos Szeredi
2011-11-21 11:11 ` [PATCH 2/4] vfs: protect remounting superblock read-only Miklos Szeredi
@ 2011-11-21 11:11 ` Miklos Szeredi
2011-11-21 11:34 ` Christoph Hellwig
2011-12-17 7:36 ` Al Viro
2011-11-21 11:11 ` [PATCH 4/4] vfs: prevent remount read-only if pending removes Miklos Szeredi
2011-11-28 9:39 ` [PATCH 0/4] read-only remount race fix v10 Miklos Szeredi
4 siblings, 2 replies; 15+ messages in thread
From: Miklos Szeredi @ 2011-11-21 11:11 UTC (permalink / raw)
To: viro
Cc: hch, linux-fsdevel, linux-kernel, jack, akpm, toshi.okajima, mszeredi
From: Miklos Szeredi <mszeredi@suse.cz>
Add a new counter to the superblock that keeps track of unlinked but
not yet deleted inodes.
Do not WARN_ON if set_nlink is called with a zero count; just do a
ratelimited printk. This happens on xfs and probably other
filesystems after an unclean shutdown when the filesystem reads inodes
which already have zero i_nlink. Reported by Christoph Hellwig.
Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Tested-by: Toshiyuki Okajima <toshi.okajima@jp.fujitsu.com>
---
fs/inode.c | 85 ++++++++++++++++++++++++++++++++++++++++++++++++++++
include/linux/fs.h | 61 ++++---------------------------------
2 files changed, 92 insertions(+), 54 deletions(-)
diff --git a/fs/inode.c b/fs/inode.c
index ee4e66b..5f8fd6b 100644
--- a/fs/inode.c
+++ b/fs/inode.c
@@ -26,6 +26,7 @@
#include <linux/ima.h>
#include <linux/cred.h>
#include <linux/buffer_head.h> /* for inode_has_buffers */
+#include <linux/ratelimit.h>
#include "internal.h"
/*
@@ -241,6 +242,11 @@ void __destroy_inode(struct inode *inode)
BUG_ON(inode_has_buffers(inode));
security_inode_free(inode);
fsnotify_inode_delete(inode);
+ if (!inode->i_nlink) {
+ WARN_ON(atomic_long_read(&inode->i_sb->s_remove_count) == 0);
+ atomic_long_dec(&inode->i_sb->s_remove_count);
+ }
+
#ifdef CONFIG_FS_POSIX_ACL
if (inode->i_acl && inode->i_acl != ACL_NOT_CACHED)
posix_acl_release(inode->i_acl);
@@ -268,6 +274,85 @@ static void destroy_inode(struct inode *inode)
call_rcu(&inode->i_rcu, i_callback);
}
+/**
+ * drop_nlink - directly drop an inode's link count
+ * @inode: inode
+ *
+ * This is a low-level filesystem helper to replace any
+ * direct filesystem manipulation of i_nlink. In cases
+ * where we are attempting to track writes to the
+ * filesystem, a decrement to zero means an imminent
+ * write when the file is truncated and actually unlinked
+ * on the filesystem.
+ */
+void drop_nlink(struct inode *inode)
+{
+ WARN_ON(inode->i_nlink == 0);
+ inode->__i_nlink--;
+ if (!inode->i_nlink)
+ atomic_long_inc(&inode->i_sb->s_remove_count);
+}
+EXPORT_SYMBOL(drop_nlink);
+
+/**
+ * clear_nlink - directly zero an inode's link count
+ * @inode: inode
+ *
+ * This is a low-level filesystem helper to replace any
+ * direct filesystem manipulation of i_nlink. See
+ * drop_nlink() for why we care about i_nlink hitting zero.
+ */
+void clear_nlink(struct inode *inode)
+{
+ if (inode->i_nlink) {
+ inode->__i_nlink = 0;
+ atomic_long_inc(&inode->i_sb->s_remove_count);
+ }
+}
+EXPORT_SYMBOL(clear_nlink);
+
+/**
+ * set_nlink - directly set an inode's link count
+ * @inode: inode
+ * @nlink: new nlink (should be non-zero)
+ *
+ * This is a low-level filesystem helper to replace any
+ * direct filesystem manipulation of i_nlink.
+ */
+void set_nlink(struct inode *inode, unsigned int nlink)
+{
+ if (!nlink) {
+ printk_ratelimited(KERN_INFO
+ "set_nlink() clearing i_nlink on %s inode %li\n",
+ inode->i_sb->s_type->name, inode->i_ino);
+ clear_nlink(inode);
+ } else {
+ /* Yes, some filesystems do change nlink from zero to one */
+ if (inode->i_nlink == 0)
+ atomic_long_dec(&inode->i_sb->s_remove_count);
+
+ inode->__i_nlink = nlink;
+ }
+}
+EXPORT_SYMBOL(set_nlink);
+
+/**
+ * inc_nlink - directly increment an inode's link count
+ * @inode: inode
+ *
+ * This is a low-level filesystem helper to replace any
+ * direct filesystem manipulation of i_nlink. Currently,
+ * it is only here for parity with dec_nlink().
+ */
+void inc_nlink(struct inode *inode)
+{
+ if (WARN_ON(inode->i_nlink == 0))
+ atomic_long_dec(&inode->i_sb->s_remove_count);
+
+ inode->__i_nlink++;
+}
+EXPORT_SYMBOL(inc_nlink);
+
void address_space_init_once(struct address_space *mapping)
{
memset(mapping, 0, sizeof(*mapping));
diff --git a/include/linux/fs.h b/include/linux/fs.h
index 58f50f2..f4636c3 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -1483,6 +1483,9 @@ struct super_block {
struct shrinker s_shrink; /* per-sb shrinker handle */
+ /* Number of inodes with nlink == 0 but still referenced */
+ atomic_long_t s_remove_count;
+
/* Being remounted read-only */
int s_readonly_remount;
};
@@ -1768,31 +1771,10 @@ static inline void mark_inode_dirty_sync(struct inode *inode)
__mark_inode_dirty(inode, I_DIRTY_SYNC);
}
-/**
- * set_nlink - directly set an inode's link count
- * @inode: inode
- * @nlink: new nlink (should be non-zero)
- *
- * This is a low-level filesystem helper to replace any
- * direct filesystem manipulation of i_nlink.
- */
-static inline void set_nlink(struct inode *inode, unsigned int nlink)
-{
- inode->__i_nlink = nlink;
-}
-
-/**
- * inc_nlink - directly increment an inode's link count
- * @inode: inode
- *
- * This is a low-level filesystem helper to replace any
- * direct filesystem manipulation of i_nlink. Currently,
- * it is only here for parity with dec_nlink().
- */
-static inline void inc_nlink(struct inode *inode)
-{
- inode->__i_nlink++;
-}
+extern void inc_nlink(struct inode *inode);
+extern void drop_nlink(struct inode *inode);
+extern void clear_nlink(struct inode *inode);
+extern void set_nlink(struct inode *inode, unsigned int nlink);
static inline void inode_inc_link_count(struct inode *inode)
{
@@ -1800,35 +1782,6 @@ static inline void inode_inc_link_count(struct inode *inode)
mark_inode_dirty(inode);
}
-/**
- * drop_nlink - directly drop an inode's link count
- * @inode: inode
- *
- * This is a low-level filesystem helper to replace any
- * direct filesystem manipulation of i_nlink. In cases
- * where we are attempting to track writes to the
- * filesystem, a decrement to zero means an imminent
- * write when the file is truncated and actually unlinked
- * on the filesystem.
- */
-static inline void drop_nlink(struct inode *inode)
-{
- inode->__i_nlink--;
-}
-
-/**
- * clear_nlink - directly zero an inode's link count
- * @inode: inode
- *
- * This is a low-level filesystem helper to replace any
- * direct filesystem manipulation of i_nlink. See
- * drop_nlink() for why we care about i_nlink hitting zero.
- */
-static inline void clear_nlink(struct inode *inode)
-{
- inode->__i_nlink = 0;
-}
-
static inline void inode_dec_link_count(struct inode *inode)
{
drop_nlink(inode);
--
1.7.7
* [PATCH 4/4] vfs: prevent remount read-only if pending removes
2011-11-21 11:11 [PATCH 0/4] read-only remount race fix v10 Miklos Szeredi
` (2 preceding siblings ...)
2011-11-21 11:11 ` [PATCH 3/4] vfs: count unlinked inodes Miklos Szeredi
@ 2011-11-21 11:11 ` Miklos Szeredi
2011-11-28 9:39 ` [PATCH 0/4] read-only remount race fix v10 Miklos Szeredi
4 siblings, 0 replies; 15+ messages in thread
From: Miklos Szeredi @ 2011-11-21 11:11 UTC (permalink / raw)
To: viro
Cc: hch, linux-fsdevel, linux-kernel, jack, akpm, toshi.okajima, mszeredi
From: Miklos Szeredi <mszeredi@suse.cz>
If there are any inodes on the super block that have been unlinked
(i_nlink == 0) but have not yet been deleted, then prevent remounting
the super block read-only.
Reported-by: Toshiyuki Okajima <toshi.okajima@jp.fujitsu.com>
Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Tested-by: Toshiyuki Okajima <toshi.okajima@jp.fujitsu.com>
---
fs/file_table.c | 23 -----------------------
fs/namespace.c | 7 +++++++
fs/super.c | 4 ----
include/linux/fs.h | 2 --
4 files changed, 7 insertions(+), 29 deletions(-)
diff --git a/fs/file_table.c b/fs/file_table.c
index c322794..20002e3 100644
--- a/fs/file_table.c
+++ b/fs/file_table.c
@@ -474,29 +474,6 @@ void file_sb_list_del(struct file *file)
#endif
-int fs_may_remount_ro(struct super_block *sb)
-{
- struct file *file;
- /* Check that no files are currently opened for writing. */
- lg_global_lock(files_lglock);
- do_file_list_for_each_entry(sb, file) {
- struct inode *inode = file->f_path.dentry->d_inode;
-
- /* File with pending delete? */
- if (inode->i_nlink == 0)
- goto too_bad;
-
- /* Writeable file? */
- if (S_ISREG(inode->i_mode) && (file->f_mode & FMODE_WRITE))
- goto too_bad;
- } while_file_list_for_each_entry;
- lg_global_unlock(files_lglock);
- return 1; /* Tis' cool bro. */
-too_bad:
- lg_global_unlock(files_lglock);
- return 0;
-}
-
/**
* mark_files_ro - mark all files read-only
* @sb: superblock in question
diff --git a/fs/namespace.c b/fs/namespace.c
index f296790..62684d3 100644
--- a/fs/namespace.c
+++ b/fs/namespace.c
@@ -480,6 +480,10 @@ int sb_prepare_remount_readonly(struct super_block *sb)
struct vfsmount *mnt;
int err = 0;
+ /* Racy optimization. Recheck the counter under MNT_WRITE_HOLD */
+ if (atomic_long_read(&sb->s_remove_count))
+ return -EBUSY;
+
br_write_lock(vfsmount_lock);
list_for_each_entry(mnt, &sb->s_mounts, mnt_instance) {
if (!(mnt->mnt_flags & MNT_READONLY)) {
@@ -491,6 +495,9 @@ int sb_prepare_remount_readonly(struct super_block *sb)
}
}
}
+ if (!err && atomic_long_read(&sb->s_remove_count))
+ err = -EBUSY;
+
if (!err) {
sb->s_readonly_remount = 1;
smp_wmb();
diff --git a/fs/super.c b/fs/super.c
index 027e02d..e81baed 100644
--- a/fs/super.c
+++ b/fs/super.c
@@ -727,10 +727,6 @@ int do_remount_sb(struct super_block *sb, int flags, void *data, int force)
retval = sb_prepare_remount_readonly(sb);
if (retval)
return retval;
-
- retval = -EBUSY;
- if (!fs_may_remount_ro(sb))
- goto cancel_readonly;
}
}
diff --git a/include/linux/fs.h b/include/linux/fs.h
index f4636c3..963dd2a 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -2146,8 +2146,6 @@ extern const struct file_operations read_pipefifo_fops;
extern const struct file_operations write_pipefifo_fops;
extern const struct file_operations rdwr_pipefifo_fops;
-extern int fs_may_remount_ro(struct super_block *);
-
#ifdef CONFIG_BLOCK
/*
* return READ, READA, or WRITE
--
1.7.7
* Re: [PATCH 3/4] vfs: count unlinked inodes
2011-11-21 11:11 ` [PATCH 3/4] vfs: count unlinked inodes Miklos Szeredi
@ 2011-11-21 11:34 ` Christoph Hellwig
2011-11-21 11:51 ` Miklos Szeredi
2011-12-17 7:36 ` Al Viro
1 sibling, 1 reply; 15+ messages in thread
From: Christoph Hellwig @ 2011-11-21 11:34 UTC (permalink / raw)
To: Miklos Szeredi
Cc: viro, hch, linux-fsdevel, linux-kernel, jack, akpm,
toshi.okajima, mszeredi
On Mon, Nov 21, 2011 at 12:11:32PM +0100, Miklos Szeredi wrote:
> Do not WARN_ON if set_nlink is called with zero count, just do a
> ratelimited printk. This happens on xfs and probably other
> filesystems after an unclean shutdown when the filesystem reads inodes
> which already have zero i_nlink. Reported by Christoph Hellwig.
Given that this is part of the normal recovery process printing anything
seems like a bad idea. I also don't think the code for this actually
is correct.
Remember that when a filesystem recovers from unlinked but open inodes,
the following happens:
- we walk the list of unlinked but open inodes, and read them into
memory, remove the linkage and then iput it.
With the current code that won't ever increment s_remove_count, but
decrement it from __destroy_inode. I suspect the right fix is to
simply not warn for a set_nlink to zero, but rather simply increment
s_remove_count for that case.
* Re: [PATCH 3/4] vfs: count unlinked inodes
2011-11-21 11:34 ` Christoph Hellwig
@ 2011-11-21 11:51 ` Miklos Szeredi
0 siblings, 0 replies; 15+ messages in thread
From: Miklos Szeredi @ 2011-11-21 11:51 UTC (permalink / raw)
To: Christoph Hellwig
Cc: viro, linux-fsdevel, linux-kernel, jack, akpm, toshi.okajima, mszeredi
On Mon, Nov 21, 2011 at 12:34 PM, Christoph Hellwig <hch@infradead.org> wrote:
> On Mon, Nov 21, 2011 at 12:11:32PM +0100, Miklos Szeredi wrote:
>> Do not WARN_ON if set_nlink is called with zero count, just do a
>> ratelimited printk. This happens on xfs and probably other
>> filesystems after an unclean shutdown when the filesystem reads inodes
>> which already have zero i_nlink. Reported by Christoph Hellwig.
>
> Given that this is part of the normal recovery process printing anything
> seems like a bad idea. I also don't think the code for this actually
> is correct.
>
> Remember that when a filesystem recovers from unlinked but open inodes,
> the following happens:
>
> - we walk the list of unlinked but open inodes, and read them into
> memory, remove the linkage and then iput it.
>
> With the current code that won't ever increment s_remove_count,
It will increment s_remove_count
+void set_nlink(struct inode *inode, unsigned int nlink)
+{
+ if (!nlink) {
+ printk_ratelimited(KERN_INFO
+ "set_nlink() clearing i_nlink on %s inode %li\n",
+ inode->i_sb->s_type->name, inode->i_ino);
here:
+ clear_nlink(inode);
> but
> decrement it from __destroy_inode. I suspect the right fix is to
> simply not warn for a set_nlink to zero, but rather simply increment
> s_remove_count for that case.
I don't really care about the printk. Without the printk
clear_nlink() is just a shorthand for set_nlink(0), which is fine, but
that's not what the original intention was AFAIK.
Thanks,
Miklos
* Re: [PATCH 0/4] read-only remount race fix v10
2011-11-21 11:11 [PATCH 0/4] read-only remount race fix v10 Miklos Szeredi
` (3 preceding siblings ...)
2011-11-21 11:11 ` [PATCH 4/4] vfs: prevent remount read-only if pending removes Miklos Szeredi
@ 2011-11-28 9:39 ` Miklos Szeredi
2011-12-07 8:40 ` Miklos Szeredi
4 siblings, 1 reply; 15+ messages in thread
From: Miklos Szeredi @ 2011-11-28 9:39 UTC (permalink / raw)
To: viro
Cc: hch, linux-fsdevel, linux-kernel, jack, akpm, toshi.okajima, mszeredi
On Mon, Nov 21, 2011 at 12:11 PM, Miklos Szeredi <miklos@szeredi.hu> wrote:
> Al,
>
> Please apply the following patches that fix read-only remount races.
>
> Git tree is here:
>
> git://git.kernel.org/pub/scm/linux/kernel/git/mszeredi/vfs.git read-only-remount-fixes.v10
Ping?
>
> Thanks,
> Miklos
> ---
> Miklos Szeredi (4):
> vfs: keep list of mounts for each superblock
> vfs: protect remounting superblock read-only
> vfs: count unlinked inodes
> vfs: prevent remount read-only if pending removes
>
> ---
> fs/file_table.c | 23 -------------
> fs/inode.c | 85 +++++++++++++++++++++++++++++++++++++++++++++++++
> fs/internal.h | 1 +
> fs/namespace.c | 57 ++++++++++++++++++++++++++++++++-
> fs/super.c | 20 +++++++++--
> include/linux/fs.h | 67 ++++++--------------------------------
> include/linux/mount.h | 1 +
> 7 files changed, 170 insertions(+), 84 deletions(-)
>
>
* Re: [PATCH 0/4] read-only remount race fix v10
2011-11-28 9:39 ` [PATCH 0/4] read-only remount race fix v10 Miklos Szeredi
@ 2011-12-07 8:40 ` Miklos Szeredi
0 siblings, 0 replies; 15+ messages in thread
From: Miklos Szeredi @ 2011-12-07 8:40 UTC (permalink / raw)
To: viro
Cc: hch, linux-fsdevel, linux-kernel, jack, akpm, toshi.okajima, mszeredi
On Mon, Nov 28, 2011 at 10:39 AM, Miklos Szeredi <miklos@szeredi.hu> wrote:
> On Mon, Nov 21, 2011 at 12:11 PM, Miklos Szeredi <miklos@szeredi.hu> wrote:
>> Al,
>>
>> Please apply the following patches that fix read-only remount races.
>>
>> Git tree is here:
>>
>> git://git.kernel.org/pub/scm/linux/kernel/git/mszeredi/vfs.git read-only-remount-fixes.v10
Al, what's up with this? Am I on your block list? Or are bug fixes
for the VFS no longer interesting? Or what?
Thanks,
Miklos
>> ---
>> Miklos Szeredi (4):
>> vfs: keep list of mounts for each superblock
>> vfs: protect remounting superblock read-only
>> vfs: count unlinked inodes
>> vfs: prevent remount read-only if pending removes
>>
>> ---
>> fs/file_table.c | 23 -------------
>> fs/inode.c | 85 +++++++++++++++++++++++++++++++++++++++++++++++++
>> fs/internal.h | 1 +
>> fs/namespace.c | 57 ++++++++++++++++++++++++++++++++-
>> fs/super.c | 20 +++++++++--
>> include/linux/fs.h | 67 ++++++--------------------------------
>> include/linux/mount.h | 1 +
>> 7 files changed, 170 insertions(+), 84 deletions(-)
>>
>>
* Re: [PATCH 3/4] vfs: count unlinked inodes
2011-11-21 11:11 ` [PATCH 3/4] vfs: count unlinked inodes Miklos Szeredi
2011-11-21 11:34 ` Christoph Hellwig
@ 2011-12-17 7:36 ` Al Viro
2011-12-19 14:38 ` Steven Whitehouse
1 sibling, 1 reply; 15+ messages in thread
From: Al Viro @ 2011-12-17 7:36 UTC (permalink / raw)
To: Miklos Szeredi
Cc: hch, linux-fsdevel, linux-kernel, jack, akpm, toshi.okajima, mszeredi
On Mon, Nov 21, 2011 at 12:11:32PM +0100, Miklos Szeredi wrote:
> @@ -241,6 +242,11 @@ void __destroy_inode(struct inode *inode)
> BUG_ON(inode_has_buffers(inode));
> security_inode_free(inode);
> fsnotify_inode_delete(inode);
> + if (!inode->i_nlink) {
> + WARN_ON(atomic_long_read(&inode->i_sb->s_remove_count) == 0);
> + atomic_long_dec(&inode->i_sb->s_remove_count);
> + }
Umm... That relies on ->destroy_inode() doing nothing stupid; granted,
all work on actual file removal should've been done in ->evict_inode(),
leaving only (RCU'd) freeing of the in-core inode, but there are odd ones
that do strange things in ->destroy_inode() and I'm not sure that it's not
a Yet Another Remount Race(tm). OTOH, it's clearly not worse than what
we used to have; just something to keep in mind for future work.
Anyway, I'm mostly OK with that series; I still hate your per-superblock
list of vfsmounts, but at least on top of the vfsmount-guts series they
won't be a temptation for abuse - list goes through struct mount now,
so filesystems won't be able to do fun things like "iterate through all
places where I'm mounted" (and #include "../mounts.h" in any fs code
will be a shootable offense - at least that is easy to spot).
There is another thing I'm less than happy about - suppose you have a
corrupted fs and run into zero on-disk i_nlink. Sure, the inode will
get immediately evicted and __destroy_inode() will happen; however, for
the duration of that window you end up with bumped ->s_remove_count.
Transient EROFS is annoying, but tolerable - we only hit it if attempt
to remount r/o fails in ->remount_fs(). But this is something different -
it's a transient -EBUSY on attempt to remount r/o happening when nothing
actually is trying to do any kind of write access at all. As it is,
you have ->s_remove_count equal to the number of in-core inodes with
zero ->i_nlink that had not yet reached destroy_inode(). Hell knows...
Maybe we want two versions of set_nlink(); one doing what yours does,
another returning -EINVAL if asked to set i_nlink to 0. And assorted
foo_read_inode() would use the latter. Anyway, that's a separate work;
so's the analysis of what happens if directory entry points to on-disk
inode with zero i_nlink.
Applied, with rebase on top of vfsmount-guts. Will push the whole pile
into #for-next as soon as I finish sorting out conflicts in btrfs patches
versus btrfs tree.
* Re: [PATCH 3/4] vfs: count unlinked inodes
2011-12-17 7:36 ` Al Viro
@ 2011-12-19 14:38 ` Steven Whitehouse
2011-12-19 16:03 ` Miklos Szeredi
0 siblings, 1 reply; 15+ messages in thread
From: Steven Whitehouse @ 2011-12-19 14:38 UTC (permalink / raw)
To: Al Viro
Cc: Miklos Szeredi, hch, linux-fsdevel, linux-kernel, jack, akpm,
toshi.okajima, mszeredi
Hi,
On Sat, 2011-12-17 at 07:36 +0000, Al Viro wrote:
> On Mon, Nov 21, 2011 at 12:11:32PM +0100, Miklos Szeredi wrote:
> > @@ -241,6 +242,11 @@ void __destroy_inode(struct inode *inode)
> > BUG_ON(inode_has_buffers(inode));
> > security_inode_free(inode);
> > fsnotify_inode_delete(inode);
> > + if (!inode->i_nlink) {
> > + WARN_ON(atomic_long_read(&inode->i_sb->s_remove_count) == 0);
> > + atomic_long_dec(&inode->i_sb->s_remove_count);
> > + }
>
> Umm... That relies on ->destroy_inode() doing nothing stupid; granted,
> all work on actual file removal should've been done in ->evict_inode()
> leaving only (RCU'd) freeing of in-core, but there are odd ones that
> do strange things in ->destroy_inode() and I'm not sure that it's not
> a Yet Another Remount Race(tm). OTOH, it's clearly not worse than what
> we used to have; just something to keep in mind for future work.
>
GFS2 is one of those cases. The issue is that when we enter
->evict_inode() with i_nlink 0, we do not know whether any other node
still has the inode open. If it does, then we do not deallocate it in
->evict_inode() but instead just forget about it, just as if i_nlink
was > 0, leaving the remaining opener(s) to do the deallocation later.
Steve.
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH 3/4] vfs: count unlinked inodes
2011-12-19 14:38 ` Steven Whitehouse
@ 2011-12-19 16:03 ` Miklos Szeredi
0 siblings, 0 replies; 15+ messages in thread
From: Miklos Szeredi @ 2011-12-19 16:03 UTC (permalink / raw)
To: Steven Whitehouse
Cc: Al Viro, hch, linux-fsdevel, linux-kernel, jack, akpm,
toshi.okajima, mszeredi
On Mon, Dec 19, 2011 at 3:38 PM, Steven Whitehouse <swhiteho@redhat.com> wrote:
> Hi,
>
> On Sat, 2011-12-17 at 07:36 +0000, Al Viro wrote:
>> On Mon, Nov 21, 2011 at 12:11:32PM +0100, Miklos Szeredi wrote:
>> > @@ -241,6 +242,11 @@ void __destroy_inode(struct inode *inode)
>> > BUG_ON(inode_has_buffers(inode));
>> > security_inode_free(inode);
>> > fsnotify_inode_delete(inode);
>> > + if (!inode->i_nlink) {
>> > + WARN_ON(atomic_long_read(&inode->i_sb->s_remove_count) == 0);
>> > + atomic_long_dec(&inode->i_sb->s_remove_count);
>> > + }
>>
>> Umm... That relies on ->destroy_inode() doing nothing stupid; granted,
>> all work on actual file removal should've been done in ->evict_inode()
>> leaving only (RCU'd) freeing of in-core, but there are odd ones that
>> do strange things in ->destroy_inode() and I'm not sure that it's not
>> a Yet Another Remount Race(tm). OTOH, it's clearly not worse than what
>> we used to have; just something to keep in mind for future work.
>>
> GFS2 is one of those cases. The issue is that when we enter
> ->evict_inode() with i_nlink 0, we do not know whether any other node
> still has the inode open. If it does, then we do not deallocate it in
> ->evict_inode() but instead just forget about it, just as if i_nlink
> was > 0, leaving the remaining opener(s) to do the deallocation later.
And does GFS2 care about read-only remount races because of that?
I.e. if an unlinked file is still open on another node, should we
prevent remounting read-only until the file is released and
actually gone?
If that's not a requirement (and I don't see why it should be) then all is fine.
Thanks,
Miklos
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH 3/4] vfs: count unlinked inodes
2011-12-19 16:03 ` Miklos Szeredi
@ 2011-12-19 16:14 ` Steven Whitehouse
0 siblings, 0 replies; 15+ messages in thread
From: Steven Whitehouse @ 2011-12-19 16:14 UTC (permalink / raw)
To: Miklos Szeredi
Cc: Al Viro, hch, linux-fsdevel, linux-kernel, jack, akpm,
toshi.okajima, mszeredi
Hi,
On Mon, 2011-12-19 at 17:03 +0100, Miklos Szeredi wrote:
> On Mon, Dec 19, 2011 at 3:38 PM, Steven Whitehouse <swhiteho@redhat.com> wrote:
> > Hi,
> >
> > On Sat, 2011-12-17 at 07:36 +0000, Al Viro wrote:
> >> On Mon, Nov 21, 2011 at 12:11:32PM +0100, Miklos Szeredi wrote:
> >> > @@ -241,6 +242,11 @@ void __destroy_inode(struct inode *inode)
> >> > BUG_ON(inode_has_buffers(inode));
> >> > security_inode_free(inode);
> >> > fsnotify_inode_delete(inode);
> >> > + if (!inode->i_nlink) {
> >> > + WARN_ON(atomic_long_read(&inode->i_sb->s_remove_count) == 0);
> >> > + atomic_long_dec(&inode->i_sb->s_remove_count);
> >> > + }
> >>
> >> Umm... That relies on ->destroy_inode() doing nothing stupid; granted,
> >> all work on actual file removal should've been done in ->evict_inode()
> >> leaving only (RCU'd) freeing of in-core, but there are odd ones that
> >> do strange things in ->destroy_inode() and I'm not sure that it's not
> >> a Yet Another Remount Race(tm). OTOH, it's clearly not worse than what
> >> we used to have; just something to keep in mind for future work.
> >>
> > GFS2 is one of those cases. The issue is that when we enter
> > ->evict_inode() with i_nlink 0, we do not know whether any other node
> > still has the inode open. If it does, then we do not deallocate it in
> > ->evict_inode() but instead just forget about it, just as if i_nlink
> > was > 0, leaving the remaining opener(s) to do the deallocation later.
>
> And does GFS2 care about read-only remount races because of that?
> I.e. if an unlinked file is still open on another node, should we
> prevent remounting read-only until the file is released and
> actually gone?
>
> If that's not a requirement (and I don't see why it should be) then all is fine.
>
> Thanks,
> Miklos
Ok. Good, we don't need to worry about that. We can support any mix of
read-write and read-only nodes, with the caveat that a cluster with only
one read-write node will have no other node to perform recovery for it,
should it fail. Also, since read-only nodes cannot deallocate inodes
(even if they are the last openers of a file), they will simply
ignore such inodes and wait for the next read-write node to perform an
allocation in that resource group, whereupon the deallocation will be
completed.
So remounting read-only is a purely local operation so far as GFS2 is
concerned.
Steve.
^ permalink raw reply [flat|nested] 15+ messages in thread
end of thread, other threads:[~2011-12-19 16:14 UTC | newest]
Thread overview: 15+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2011-11-21 11:11 [PATCH 0/4] read-only remount race fix v10 Miklos Szeredi
2011-11-21 11:11 ` [PATCH 1/4] vfs: keep list of mounts for each superblock Miklos Szeredi
2011-11-21 11:11 ` [PATCH 2/4] vfs: protect remounting superblock read-only Miklos Szeredi
2011-11-21 11:11 ` [PATCH 3/4] vfs: count unlinked inodes Miklos Szeredi
2011-11-21 11:34 ` Christoph Hellwig
2011-11-21 11:51 ` Miklos Szeredi
2011-11-21 11:51 ` Miklos Szeredi
2011-12-17 7:36 ` Al Viro
2011-12-19 14:38 ` Steven Whitehouse
2011-12-19 16:03 ` Miklos Szeredi
2011-12-19 16:14 ` Steven Whitehouse
2011-11-21 11:11 ` [PATCH 4/4] vfs: prevent remount read-only if pending removes Miklos Szeredi
2011-11-28 9:39 ` [PATCH 0/4] read-only remount race fix v10 Miklos Szeredi
2011-12-07 8:40 ` Miklos Szeredi