* [PATCH 00/19] quota: RFC SMP improvements for generic quota V3
@ 2010-11-11 12:14 Dmitry Monakhov
  2010-11-11 12:14 ` [PATCH 01/19] quota: protect getfmt call with dqonoff_mutex lock Dmitry Monakhov
                   ` (19 more replies)
  0 siblings, 20 replies; 32+ messages in thread
From: Dmitry Monakhov @ 2010-11-11 12:14 UTC (permalink / raw)
  To: linux-fsdevel; +Cc: jack, hch, Dmitry Monakhov

 This patch set is my attempt to make the quota code more scalable.
 The main goal of this patch set is to split the global locking into
 per-sb locks. It consists of several parts:
 * Fixes: trivial fixes which I hope will be accepted without complaint.
 * Split up global locks: IMHO this part is clean and simple. I hope it
   is also a sane candidate for the for_testing branch.
 * More scalability for a single sb: Some of these patches were already
   submitted previously, some were not. This part is just my first
   vision of the direction we can move in. It results in a real
   speedup, but I'm not sure about the design decisions, so please
   don't be too hard on me if you don't like that direction.

 This patch set survived the following stress testing:
  * parallel quota{on,off}
  * fsstress
  * triggering ENOSPC

More info here: download.openvz.org/~dmonakhov/quota.html

Changes from V2
   * Move the getfmt call to dquot (suggested by hch@)
   * Use a global hash with per-bucket locks (suggested by viro@)
   * Protect dqget() with SRCU to prevent a race on the quota_info ptr
   * Add a dquot_init optimization
   * Remove data_lock for ocfs2 where possible
   * Drop the dq_count optimization patch because it was buggy and,
     in fact, belongs in another patch set
   * Bug fixes
   ** Fix a deadlock on dquot transfer due to a previous ENOSPC

Changes from V1
   * Random fixes according to Jan's comments
     + fix spelling
     + fix a deadlock in dquot_transfer, and lockdep issues
     - the list_lock patch split is still the same as before
   * Move quota data from the sb to a dedicated pointer
   * Basic improvements for per-sb scalability

Patches are against 2.6.36-rc6, linux-fs-2.6.git for_testing branch
<Out of tree patches from other developers>
      kernel: add bl_list
<Cleanups and Fixes>
      quota: protect getfmt call with dqonoff_mutex lock
      quota: Wrap common expression into a helper function
<Split-up global locks>
      quota: protect dqget() from parallel quotaoff via SRCU
      quota: move quota internals from sb to quota_info
      quota: Remove state_lock
      quota: add quota format lock
      quota: make dquot lists per-sb
      quota: optimize quota_initialize
      quota: use per-bucket hlist lock for dquot_hash
      quota: remove global dq_list_lock
<More scalability for single sb>
      quota: rename dq_lock
      quota: make per-sb dq_data_lock
      quota: protect dquot mem info with object's lock
      quota: drop dq_data_lock where possible
      quota: relax dq_data_lock dq_lock locking consistency
      quota: Some stylistic cleanup for dquot interface
      fs: add unlocked helpers
      quota: protect i_dquot with i_lock instead of dqptr_sem

Signed-off-by: Dmitry Monakhov <dmonakhov@openvz.org>
---
 Makefile |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/Makefile b/Makefile
index 471c49f..4e7602b 100644
--- a/Makefile
+++ b/Makefile
@@ -1,7 +1,7 @@
 VERSION = 2
 PATCHLEVEL = 6
 SUBLEVEL = 36
-EXTRAVERSION = -rc6
+EXTRAVERSION = -rc6-quota
 NAME = Sheep on Meth
 
 # *DOCUMENTATION*
-- 
1.6.5.2



* [PATCH 01/19] quota: protect getfmt call with dqonoff_mutex lock
  2010-11-11 12:14 [PATCH 00/19] quota: RFC SMP improvements for generic quota V3 Dmitry Monakhov
@ 2010-11-11 12:14 ` Dmitry Monakhov
  2010-11-11 13:36   ` Christoph Hellwig
  2010-11-22 19:35   ` Jan Kara
  2010-11-11 12:14 ` [PATCH 02/19] kernel: add bl_list Dmitry Monakhov
                   ` (18 subsequent siblings)
  19 siblings, 2 replies; 32+ messages in thread
From: Dmitry Monakhov @ 2010-11-11 12:14 UTC (permalink / raw)
  To: linux-fsdevel; +Cc: jack, hch, Dmitry Monakhov

dqptr_sem has nothing to do with quota files. Quota file loading is
protected by dqonoff_mutex, so that is the lock we have to hold when
reading the format info.

Signed-off-by: Dmitry Monakhov <dmonakhov@openvz.org>
---
 fs/ext3/super.c          |    1 +
 fs/ext4/super.c          |    1 +
 fs/ocfs2/super.c         |    1 +
 fs/quota/dquot.c         |   14 ++++++++++++++
 fs/quota/quota.c         |   16 ++++++----------
 fs/reiserfs/super.c      |    1 +
 include/linux/quota.h    |    1 +
 include/linux/quotaops.h |    1 +
 8 files changed, 26 insertions(+), 10 deletions(-)

diff --git a/fs/ext3/super.c b/fs/ext3/super.c
index e9fd676..e0f68f0 100644
--- a/fs/ext3/super.c
+++ b/fs/ext3/super.c
@@ -773,6 +773,7 @@ static const struct quotactl_ops ext3_qctl_operations = {
 	.quota_sync	= dquot_quota_sync,
 	.get_info	= dquot_get_dqinfo,
 	.set_info	= dquot_set_dqinfo,
+	.get_fmt	= dquot_get_dqfmt,
 	.get_dqblk	= dquot_get_dqblk,
 	.set_dqblk	= dquot_set_dqblk
 };
diff --git a/fs/ext4/super.c b/fs/ext4/super.c
index 4062fbe..a2e68f9 100644
--- a/fs/ext4/super.c
+++ b/fs/ext4/super.c
@@ -1150,6 +1150,7 @@ static const struct quotactl_ops ext4_qctl_operations = {
 	.quota_sync	= dquot_quota_sync,
 	.get_info	= dquot_get_dqinfo,
 	.set_info	= dquot_set_dqinfo,
+	.get_fmt	= dquot_get_dqfmt,
 	.get_dqblk	= dquot_get_dqblk,
 	.set_dqblk	= dquot_set_dqblk
 };
diff --git a/fs/ocfs2/super.c b/fs/ocfs2/super.c
index 26bd015..8e6a20c 100644
--- a/fs/ocfs2/super.c
+++ b/fs/ocfs2/super.c
@@ -986,6 +986,7 @@ static const struct quotactl_ops ocfs2_quotactl_ops = {
 	.quota_sync	= dquot_quota_sync,
 	.get_info	= dquot_get_dqinfo,
 	.set_info	= dquot_set_dqinfo,
+	.get_fmt	= dquot_get_dqfmt,
 	.get_dqblk	= dquot_get_dqblk,
 	.set_dqblk	= dquot_set_dqblk,
 };
diff --git a/fs/quota/dquot.c b/fs/quota/dquot.c
index 06157aa..d1d1c51 100644
--- a/fs/quota/dquot.c
+++ b/fs/quota/dquot.c
@@ -1890,6 +1890,19 @@ int dquot_file_open(struct inode *inode, struct file *file)
 }
 EXPORT_SYMBOL(dquot_file_open);
 
+int dquot_get_dqfmt(struct super_block *sb, int type, unsigned int *fmt)
+{
+	mutex_lock(&sb_dqopt(sb)->dqonoff_mutex);
+	if (!sb_has_quota_active(sb, type)) {
+		mutex_unlock(&sb_dqopt(sb)->dqonoff_mutex);
+		return -ESRCH;
+	}
+	*fmt = sb_dqopt(sb)->info[type].dqi_format->qf_fmt_id;
+	mutex_unlock(&sb_dqopt(sb)->dqonoff_mutex);
+	return 0;
+}
+EXPORT_SYMBOL(dquot_get_dqfmt);
+
 /*
  * Turn quota off on a device. type == -1 ==> quotaoff for all types (umount)
  */
@@ -2502,6 +2515,7 @@ const struct quotactl_ops dquot_quotactl_ops = {
 	.quota_sync	= dquot_quota_sync,
 	.get_info	= dquot_get_dqinfo,
 	.set_info	= dquot_set_dqinfo,
+	.get_fmt	= dquot_get_dqfmt,
 	.get_dqblk	= dquot_get_dqblk,
 	.set_dqblk	= dquot_set_dqblk
 };
diff --git a/fs/quota/quota.c b/fs/quota/quota.c
index b34bdb2..3b1d315 100644
--- a/fs/quota/quota.c
+++ b/fs/quota/quota.c
@@ -78,17 +78,13 @@ static int quota_quotaon(struct super_block *sb, int type, int cmd, qid_t id,
 static int quota_getfmt(struct super_block *sb, int type, void __user *addr)
 {
 	__u32 fmt;
-
-	down_read(&sb_dqopt(sb)->dqptr_sem);
-	if (!sb_has_quota_active(sb, type)) {
-		up_read(&sb_dqopt(sb)->dqptr_sem);
-		return -ESRCH;
-	}
-	fmt = sb_dqopt(sb)->info[type].dqi_format->qf_fmt_id;
-	up_read(&sb_dqopt(sb)->dqptr_sem);
-	if (copy_to_user(addr, &fmt, sizeof(fmt)))
+	int ret;
+	if (!sb->s_qcop->get_fmt)
+		return -ENOSYS;
+	ret = sb->s_qcop->get_fmt(sb, type, &fmt);
+	if (!ret && copy_to_user(addr, &fmt, sizeof(fmt)))
 		return -EFAULT;
-	return 0;
+	return ret;
 }
 
 static int quota_getinfo(struct super_block *sb, int type, void __user *addr)
diff --git a/fs/reiserfs/super.c b/fs/reiserfs/super.c
index 73c000f..d4d32df 100644
--- a/fs/reiserfs/super.c
+++ b/fs/reiserfs/super.c
@@ -644,6 +644,7 @@ static const struct quotactl_ops reiserfs_qctl_operations = {
 	.quota_sync = dquot_quota_sync,
 	.get_info = dquot_get_dqinfo,
 	.set_info = dquot_set_dqinfo,
+	.get_fmt =  dquot_get_dqfmt,
 	.get_dqblk = dquot_get_dqblk,
 	.set_dqblk = dquot_set_dqblk,
 };
diff --git a/include/linux/quota.h b/include/linux/quota.h
index 9a85412..2767e4c 100644
--- a/include/linux/quota.h
+++ b/include/linux/quota.h
@@ -331,6 +331,7 @@ struct quotactl_ops {
 	int (*quota_off)(struct super_block *, int);
 	int (*quota_sync)(struct super_block *, int, int);
 	int (*get_info)(struct super_block *, int, struct if_dqinfo *);
+	int (*get_fmt)(struct super_block*, int, unsigned int*);
 	int (*set_info)(struct super_block *, int, struct if_dqinfo *);
 	int (*get_dqblk)(struct super_block *, int, qid_t, struct fs_disk_quota *);
 	int (*set_dqblk)(struct super_block *, int, qid_t, struct fs_disk_quota *);
diff --git a/include/linux/quotaops.h b/include/linux/quotaops.h
index 9e09c9a..45ae255 100644
--- a/include/linux/quotaops.h
+++ b/include/linux/quotaops.h
@@ -85,6 +85,7 @@ int dquot_quota_off(struct super_block *sb, int type);
 int dquot_quota_sync(struct super_block *sb, int type, int wait);
 int dquot_get_dqinfo(struct super_block *sb, int type, struct if_dqinfo *ii);
 int dquot_set_dqinfo(struct super_block *sb, int type, struct if_dqinfo *ii);
+int dquot_get_dqfmt(struct super_block *sb, int type, unsigned int *fmt);
 int dquot_get_dqblk(struct super_block *sb, int type, qid_t id,
 		struct fs_disk_quota *di);
 int dquot_set_dqblk(struct super_block *sb, int type, qid_t id,
-- 
1.6.5.2



* [PATCH 02/19] kernel: add bl_list
  2010-11-11 12:14 [PATCH 00/19] quota: RFC SMP improvements for generic quota V3 Dmitry Monakhov
  2010-11-11 12:14 ` [PATCH 01/19] quota: protect getfmt call with dqonoff_mutex lock Dmitry Monakhov
@ 2010-11-11 12:14 ` Dmitry Monakhov
  2010-11-11 12:14 ` [PATCH 03/19] quota: Wrap common expression into a helper function Dmitry Monakhov
                   ` (17 subsequent siblings)
  19 siblings, 0 replies; 32+ messages in thread
From: Dmitry Monakhov @ 2010-11-11 12:14 UTC (permalink / raw)
  To: linux-fsdevel; +Cc: jack, hch, Nick Piggin, Dave Chinner

From: Nick Piggin <npiggin@suse.de>

Introduce a type of hlist that can support the use of the lowest bit
in the hlist_head pointer. This will subsequently be used to implement
a per-bucket bit spinlock for inode hashes.

Reviewed-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Dave Chinner <dchinner@redhat.com>
---
 include/linux/list_bl.h    |  127 +++++++++++++++++++++++++++++++++++++++++++
 include/linux/rculist_bl.h |  128 ++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 255 insertions(+), 0 deletions(-)
 create mode 100644 include/linux/list_bl.h
 create mode 100644 include/linux/rculist_bl.h

diff --git a/include/linux/list_bl.h b/include/linux/list_bl.h
new file mode 100644
index 0000000..cf8acfc
--- /dev/null
+++ b/include/linux/list_bl.h
@@ -0,0 +1,127 @@
+#ifndef _LINUX_LIST_BL_H
+#define _LINUX_LIST_BL_H
+
+#include <linux/list.h>
+#include <linux/bit_spinlock.h>
+
+/*
+ * Special version of lists, where head of the list has a bit spinlock
+ * in the lowest bit. This is useful for scalable hash tables without
+ * increasing memory footprint overhead.
+ *
+ * For modification operations, the 0 bit of hlist_bl_head->first
+ * pointer must be set.
+ */
+
+#if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_SPINLOCK)
+#define LIST_BL_LOCKMASK	1UL
+#else
+#define LIST_BL_LOCKMASK	0UL
+#endif
+
+#ifdef CONFIG_DEBUG_LIST
+#define LIST_BL_BUG_ON(x) BUG_ON(x)
+#else
+#define LIST_BL_BUG_ON(x)
+#endif
+
+
+struct hlist_bl_head {
+	struct hlist_bl_node *first;
+};
+
+struct hlist_bl_node {
+	struct hlist_bl_node *next, **pprev;
+};
+#define INIT_HLIST_BL_HEAD(ptr) \
+	((ptr)->first = NULL)
+
+static inline void INIT_HLIST_BL_NODE(struct hlist_bl_node *h)
+{
+	h->next = NULL;
+	h->pprev = NULL;
+}
+
+#define hlist_bl_entry(ptr, type, member) container_of(ptr,type,member)
+
+static inline int hlist_bl_unhashed(const struct hlist_bl_node *h)
+{
+	return !h->pprev;
+}
+
+static inline struct hlist_bl_node *hlist_bl_first(struct hlist_bl_head *h)
+{
+	return (struct hlist_bl_node *)
+		((unsigned long)h->first & ~LIST_BL_LOCKMASK);
+}
+
+static inline void hlist_bl_set_first(struct hlist_bl_head *h,
+					struct hlist_bl_node *n)
+{
+	LIST_BL_BUG_ON((unsigned long)n & LIST_BL_LOCKMASK);
+	LIST_BL_BUG_ON(!bit_spin_is_locked(0, (unsigned long *)&h->first));
+	h->first = (struct hlist_bl_node *)((unsigned long)n | LIST_BL_LOCKMASK);
+}
+
+static inline int hlist_bl_empty(const struct hlist_bl_head *h)
+{
+	return !((unsigned long)h->first & ~LIST_BL_LOCKMASK);
+}
+
+static inline void hlist_bl_add_head(struct hlist_bl_node *n,
+					struct hlist_bl_head *h)
+{
+	struct hlist_bl_node *first = hlist_bl_first(h);
+
+	n->next = first;
+	if (first)
+		first->pprev = &n->next;
+	n->pprev = &h->first;
+	hlist_bl_set_first(h, n);
+}
+
+static inline void __hlist_bl_del(struct hlist_bl_node *n)
+{
+	struct hlist_bl_node *next = n->next;
+	struct hlist_bl_node **pprev = n->pprev;
+
+	LIST_BL_BUG_ON((unsigned long)n & LIST_BL_LOCKMASK);
+
+	/* pprev may be `first`, so be careful not to lose the lock bit */
+	*pprev = (struct hlist_bl_node *)
+			((unsigned long)next |
+			 ((unsigned long)*pprev & LIST_BL_LOCKMASK));
+	if (next)
+		next->pprev = pprev;
+}
+
+static inline void hlist_bl_del(struct hlist_bl_node *n)
+{
+	__hlist_bl_del(n);
+	n->next = LIST_POISON1;
+	n->pprev = LIST_POISON2;
+}
+
+static inline void hlist_bl_del_init(struct hlist_bl_node *n)
+{
+	if (!hlist_bl_unhashed(n)) {
+		__hlist_bl_del(n);
+		INIT_HLIST_BL_NODE(n);
+	}
+}
+
+/**
+ * hlist_bl_for_each_entry	- iterate over list of given type
+ * @tpos:	the type * to use as a loop cursor.
+ * @pos:	the &struct hlist_node to use as a loop cursor.
+ * @head:	the head for your list.
+ * @member:	the name of the hlist_node within the struct.
+ *
+ */
+#define hlist_bl_for_each_entry(tpos, pos, head, member)		\
+	for (pos = hlist_bl_first(head);				\
+	     pos &&							\
+		({ tpos = hlist_bl_entry(pos, typeof(*tpos), member); 1;}); \
+	     pos = pos->next)
+
+#endif
diff --git a/include/linux/rculist_bl.h b/include/linux/rculist_bl.h
new file mode 100644
index 0000000..cdfb54e
--- /dev/null
+++ b/include/linux/rculist_bl.h
@@ -0,0 +1,128 @@
+#ifndef _LINUX_RCULIST_BL_H
+#define _LINUX_RCULIST_BL_H
+
+/*
+ * RCU-protected bl list version. See include/linux/list_bl.h.
+ */
+#include <linux/list_bl.h>
+#include <linux/rcupdate.h>
+#include <linux/bit_spinlock.h>
+
+static inline void hlist_bl_set_first_rcu(struct hlist_bl_head *h,
+					struct hlist_bl_node *n)
+{
+	LIST_BL_BUG_ON((unsigned long)n & LIST_BL_LOCKMASK);
+	LIST_BL_BUG_ON(!bit_spin_is_locked(0, (unsigned long *)&h->first));
+	rcu_assign_pointer(h->first,
+		(struct hlist_bl_node *)((unsigned long)n | LIST_BL_LOCKMASK));
+}
+
+static inline struct hlist_bl_node *hlist_bl_first_rcu(struct hlist_bl_head *h)
+{
+	return (struct hlist_bl_node *)
+		((unsigned long)rcu_dereference(h->first) & ~LIST_BL_LOCKMASK);
+}
+
+/**
+ * hlist_bl_del_init_rcu - deletes entry from hash list with re-initialization
+ * @n: the element to delete from the hash list.
+ *
+ * Note: hlist_bl_unhashed() on the node returns true after this. It is
+ * useful for RCU based read lockfree traversal if the writer side
+ * must know if the list entry is still hashed or already unhashed.
+ *
+ * In particular, it means that we can not poison the forward pointers
+ * that may still be used for walking the hash list and we can only
+ * zero the pprev pointer so list_unhashed() will return true after
+ * this.
+ *
+ * The caller must take whatever precautions are necessary (such as
+ * holding appropriate locks) to avoid racing with another
+ * list-mutation primitive, such as hlist_bl_add_head_rcu() or
+ * hlist_bl_del_rcu(), running on this same list.  However, it is
+ * perfectly legal to run concurrently with the _rcu list-traversal
+ * primitives, such as hlist_bl_for_each_entry_rcu().
+ */
+static inline void hlist_bl_del_init_rcu(struct hlist_bl_node *n)
+{
+	if (!hlist_bl_unhashed(n)) {
+		__hlist_bl_del(n);
+		n->pprev = NULL;
+	}
+}
+
+/**
+ * hlist_bl_del_rcu - deletes entry from hash list without re-initialization
+ * @n: the element to delete from the hash list.
+ *
+ * Note: hlist_bl_unhashed() on entry does not return true after this,
+ * the entry is in an undefined state. It is useful for RCU based
+ * lockfree traversal.
+ *
+ * In particular, it means that we can not poison the forward
+ * pointers that may still be used for walking the hash list.
+ *
+ * The caller must take whatever precautions are necessary
+ * (such as holding appropriate locks) to avoid racing
+ * with another list-mutation primitive, such as hlist_bl_add_head_rcu()
+ * or hlist_bl_del_rcu(), running on this same list.
+ * However, it is perfectly legal to run concurrently with
+ * the _rcu list-traversal primitives, such as
+ * hlist_bl_for_each_entry().
+ */
+static inline void hlist_bl_del_rcu(struct hlist_bl_node *n)
+{
+	__hlist_bl_del(n);
+	n->pprev = LIST_POISON2;
+}
+
+/**
+ * hlist_bl_add_head_rcu
+ * @n: the element to add to the hash list.
+ * @h: the list to add to.
+ *
+ * Description:
+ * Adds the specified element to the specified hlist_bl,
+ * while permitting racing traversals.
+ *
+ * The caller must take whatever precautions are necessary
+ * (such as holding appropriate locks) to avoid racing
+ * with another list-mutation primitive, such as hlist_bl_add_head_rcu()
+ * or hlist_bl_del_rcu(), running on this same list.
+ * However, it is perfectly legal to run concurrently with
+ * the _rcu list-traversal primitives, such as
+ * hlist_bl_for_each_entry_rcu(), used to prevent memory-consistency
+ * problems on Alpha CPUs.  Regardless of the type of CPU, the
+ * list-traversal primitive must be guarded by rcu_read_lock().
+ */
+static inline void hlist_bl_add_head_rcu(struct hlist_bl_node *n,
+					struct hlist_bl_head *h)
+{
+	struct hlist_bl_node *first;
+
+	/* don't need hlist_bl_first_rcu because we're under lock */
+	first = hlist_bl_first(h);
+
+	n->next = first;
+	if (first)
+		first->pprev = &n->next;
+	n->pprev = &h->first;
+
+	/* need _rcu because we can have concurrent lock free readers */
+	hlist_bl_set_first_rcu(h, n);
+}
+/**
+ * hlist_bl_for_each_entry_rcu - iterate over rcu list of given type
+ * @tpos:	the type * to use as a loop cursor.
+ * @pos:	the &struct hlist_bl_node to use as a loop cursor.
+ * @head:	the head for your list.
+ * @member:	the name of the hlist_bl_node within the struct.
+ *
+ */
+#define hlist_bl_for_each_entry_rcu(tpos, pos, head, member)		\
+	for (pos = hlist_bl_first_rcu(head);				\
+		pos &&							\
+		({ tpos = hlist_bl_entry(pos, typeof(*tpos), member); 1; }); \
+		pos = rcu_dereference_raw(pos->next))
+
+#endif
-- 
1.6.5.2



* [PATCH 03/19] quota: Wrap common expression into a helper function
  2010-11-11 12:14 [PATCH 00/19] quota: RFC SMP improvements for generic quota V3 Dmitry Monakhov
  2010-11-11 12:14 ` [PATCH 01/19] quota: protect getfmt call with dqonoff_mutex lock Dmitry Monakhov
  2010-11-11 12:14 ` [PATCH 02/19] kernel: add bl_list Dmitry Monakhov
@ 2010-11-11 12:14 ` Dmitry Monakhov
  2010-11-11 12:14 ` [PATCH 04/19] quota: protect dqget() from parallel quotaoff via SRCU Dmitry Monakhov
                   ` (16 subsequent siblings)
  19 siblings, 0 replies; 32+ messages in thread
From: Dmitry Monakhov @ 2010-11-11 12:14 UTC (permalink / raw)
  To: linux-fsdevel; +Cc: jack, hch, Dmitry Monakhov

- Rename sb_dqopt(sb) to dqopts(sb): returns the quota_info structure
  of the sb.
- Add a new sb_dqopts(dquot): returns the quota_info structure of the
  sb the dquot belongs to.

This makes the code more readable.

Signed-off-by: Dmitry Monakhov <dmonakhov@openvz.org>
---
 fs/ext2/super.c          |    4 +-
 fs/ext3/super.c          |    8 ++--
 fs/ext4/super.c          |    8 ++--
 fs/gfs2/ops_fstype.c     |    2 +-
 fs/jfs/super.c           |    4 +-
 fs/ocfs2/quota_global.c  |   12 ++--
 fs/ocfs2/quota_local.c   |   34 +++++++-------
 fs/ocfs2/super.c         |    6 +-
 fs/quota/dquot.c         |  114 +++++++++++++++++++++++-----------------------
 fs/quota/quota_tree.c    |    2 +-
 fs/quota/quota_v1.c      |   14 +++---
 fs/reiserfs/super.c      |    6 +-
 include/linux/quota.h    |    1 +
 include/linux/quotaops.h |   14 ++++--
 14 files changed, 117 insertions(+), 112 deletions(-)

diff --git a/fs/ext2/super.c b/fs/ext2/super.c
index 1ec6026..7727491 100644
--- a/fs/ext2/super.c
+++ b/fs/ext2/super.c
@@ -1371,7 +1371,7 @@ static int ext2_get_sb(struct file_system_type *fs_type,
 static ssize_t ext2_quota_read(struct super_block *sb, int type, char *data,
 			       size_t len, loff_t off)
 {
-	struct inode *inode = sb_dqopt(sb)->files[type];
+	struct inode *inode = dqopts(sb)->files[type];
 	sector_t blk = off >> EXT2_BLOCK_SIZE_BITS(sb);
 	int err = 0;
 	int offset = off & (sb->s_blocksize - 1);
@@ -1416,7 +1416,7 @@ static ssize_t ext2_quota_read(struct super_block *sb, int type, char *data,
 static ssize_t ext2_quota_write(struct super_block *sb, int type,
 				const char *data, size_t len, loff_t off)
 {
-	struct inode *inode = sb_dqopt(sb)->files[type];
+	struct inode *inode = dqopts(sb)->files[type];
 	sector_t blk = off >> EXT2_BLOCK_SIZE_BITS(sb);
 	int err = 0;
 	int offset = off & (sb->s_blocksize - 1);
diff --git a/fs/ext3/super.c b/fs/ext3/super.c
index e0f68f0..5cd148a 100644
--- a/fs/ext3/super.c
+++ b/fs/ext3/super.c
@@ -1530,7 +1530,7 @@ static void ext3_orphan_cleanup (struct super_block * sb,
 #ifdef CONFIG_QUOTA
 	/* Turn quotas off */
 	for (i = 0; i < MAXQUOTAS; i++) {
-		if (sb_dqopt(sb)->files[i])
+		if (dqopts(sb)->files[i])
 			dquot_quota_off(sb, i);
 	}
 #endif
@@ -2788,7 +2788,7 @@ static int ext3_statfs (struct dentry * dentry, struct kstatfs * buf)
 
 static inline struct inode *dquot_to_inode(struct dquot *dquot)
 {
-	return sb_dqopt(dquot->dq_sb)->files[dquot->dq_type];
+	return sb_dqopts(dquot)->files[dquot->dq_type];
 }
 
 static int ext3_write_dquot(struct dquot *dquot)
@@ -2931,7 +2931,7 @@ static int ext3_quota_on(struct super_block *sb, int type, int format_id,
 static ssize_t ext3_quota_read(struct super_block *sb, int type, char *data,
 			       size_t len, loff_t off)
 {
-	struct inode *inode = sb_dqopt(sb)->files[type];
+	struct inode *inode = dqopts(sb)->files[type];
 	sector_t blk = off >> EXT3_BLOCK_SIZE_BITS(sb);
 	int err = 0;
 	int offset = off & (sb->s_blocksize - 1);
@@ -2969,7 +2969,7 @@ static ssize_t ext3_quota_read(struct super_block *sb, int type, char *data,
 static ssize_t ext3_quota_write(struct super_block *sb, int type,
 				const char *data, size_t len, loff_t off)
 {
-	struct inode *inode = sb_dqopt(sb)->files[type];
+	struct inode *inode = dqopts(sb)->files[type];
 	sector_t blk = off >> EXT3_BLOCK_SIZE_BITS(sb);
 	int err = 0;
 	int offset = off & (sb->s_blocksize - 1);
diff --git a/fs/ext4/super.c b/fs/ext4/super.c
index a2e68f9..053aee4 100644
--- a/fs/ext4/super.c
+++ b/fs/ext4/super.c
@@ -2116,7 +2116,7 @@ static void ext4_orphan_cleanup(struct super_block *sb,
 #ifdef CONFIG_QUOTA
 	/* Turn quotas off */
 	for (i = 0; i < MAXQUOTAS; i++) {
-		if (sb_dqopt(sb)->files[i])
+		if (dqopts(sb)->files[i])
 			dquot_quota_off(sb, i);
 	}
 #endif
@@ -3969,7 +3969,7 @@ static int ext4_statfs(struct dentry *dentry, struct kstatfs *buf)
 
 static inline struct inode *dquot_to_inode(struct dquot *dquot)
 {
-	return sb_dqopt(dquot->dq_sb)->files[dquot->dq_type];
+	return sb_dqopts(dquot)->files[dquot->dq_type];
 }
 
 static int ext4_write_dquot(struct dquot *dquot)
@@ -4124,7 +4124,7 @@ static int ext4_quota_off(struct super_block *sb, int type)
 static ssize_t ext4_quota_read(struct super_block *sb, int type, char *data,
 			       size_t len, loff_t off)
 {
-	struct inode *inode = sb_dqopt(sb)->files[type];
+	struct inode *inode = dqopts(sb)->files[type];
 	ext4_lblk_t blk = off >> EXT4_BLOCK_SIZE_BITS(sb);
 	int err = 0;
 	int offset = off & (sb->s_blocksize - 1);
@@ -4162,7 +4162,7 @@ static ssize_t ext4_quota_read(struct super_block *sb, int type, char *data,
 static ssize_t ext4_quota_write(struct super_block *sb, int type,
 				const char *data, size_t len, loff_t off)
 {
-	struct inode *inode = sb_dqopt(sb)->files[type];
+	struct inode *inode = dqopts(sb)->files[type];
 	ext4_lblk_t blk = off >> EXT4_BLOCK_SIZE_BITS(sb);
 	int err = 0;
 	int offset = off & (sb->s_blocksize - 1);
diff --git a/fs/gfs2/ops_fstype.c b/fs/gfs2/ops_fstype.c
index 4d4b1e8..1e52207 100644
--- a/fs/gfs2/ops_fstype.c
+++ b/fs/gfs2/ops_fstype.c
@@ -1168,7 +1168,7 @@ static int fill_super(struct super_block *sb, struct gfs2_args *args, int silent
 	sb->s_export_op = &gfs2_export_ops;
 	sb->s_xattr = gfs2_xattr_handlers;
 	sb->s_qcop = &gfs2_quotactl_ops;
-	sb_dqopt(sb)->flags |= DQUOT_QUOTA_SYS_FILE;
+	dqopts(sb)->flags |= DQUOT_QUOTA_SYS_FILE;
 	sb->s_time_gran = 1;
 	sb->s_maxbytes = MAX_LFS_FILESIZE;
 
diff --git a/fs/jfs/super.c b/fs/jfs/super.c
index ec8c3e4..b612adf 100644
--- a/fs/jfs/super.c
+++ b/fs/jfs/super.c
@@ -655,7 +655,7 @@ static int jfs_show_options(struct seq_file *seq, struct vfsmount *vfs)
 static ssize_t jfs_quota_read(struct super_block *sb, int type, char *data,
 			      size_t len, loff_t off)
 {
-	struct inode *inode = sb_dqopt(sb)->files[type];
+	struct inode *inode = dqopts(sb)->files[type];
 	sector_t blk = off >> sb->s_blocksize_bits;
 	int err = 0;
 	int offset = off & (sb->s_blocksize - 1);
@@ -700,7 +700,7 @@ static ssize_t jfs_quota_read(struct super_block *sb, int type, char *data,
 static ssize_t jfs_quota_write(struct super_block *sb, int type,
 			       const char *data, size_t len, loff_t off)
 {
-	struct inode *inode = sb_dqopt(sb)->files[type];
+	struct inode *inode = dqopts(sb)->files[type];
 	sector_t blk = off >> sb->s_blocksize_bits;
 	int err = 0;
 	int offset = off & (sb->s_blocksize - 1);
diff --git a/fs/ocfs2/quota_global.c b/fs/ocfs2/quota_global.c
index 4607923..cdae8d1 100644
--- a/fs/ocfs2/quota_global.c
+++ b/fs/ocfs2/quota_global.c
@@ -610,7 +610,7 @@ static int ocfs2_sync_dquot_helper(struct dquot *dquot, unsigned long type)
 		mlog_errno(status);
 		goto out_ilock;
 	}
-	mutex_lock(&sb_dqopt(sb)->dqio_mutex);
+	mutex_lock(&dqopts(sb)->dqio_mutex);
 	status = ocfs2_sync_dquot(dquot);
 	if (status < 0)
 		mlog_errno(status);
@@ -618,7 +618,7 @@ static int ocfs2_sync_dquot_helper(struct dquot *dquot, unsigned long type)
 	status = ocfs2_local_write_dquot(dquot);
 	if (status < 0)
 		mlog_errno(status);
-	mutex_unlock(&sb_dqopt(sb)->dqio_mutex);
+	mutex_unlock(&dqopts(sb)->dqio_mutex);
 	ocfs2_commit_trans(osb, handle);
 out_ilock:
 	ocfs2_unlock_global_qf(oinfo, 1);
@@ -657,9 +657,9 @@ static int ocfs2_write_dquot(struct dquot *dquot)
 		mlog_errno(status);
 		goto out;
 	}
-	mutex_lock(&sb_dqopt(dquot->dq_sb)->dqio_mutex);
+	mutex_lock(&sb_dqopts(dquot)->dqio_mutex);
 	status = ocfs2_local_write_dquot(dquot);
-	mutex_unlock(&sb_dqopt(dquot->dq_sb)->dqio_mutex);
+	mutex_unlock(&sb_dqopts(dquot)->dqio_mutex);
 	ocfs2_commit_trans(osb, handle);
 out:
 	mlog_exit(status);
@@ -854,7 +854,7 @@ static int ocfs2_mark_dquot_dirty(struct dquot *dquot)
 		mlog_errno(status);
 		goto out_ilock;
 	}
-	mutex_lock(&sb_dqopt(sb)->dqio_mutex);
+	mutex_lock(&dqopts(sb)->dqio_mutex);
 	status = ocfs2_sync_dquot(dquot);
 	if (status < 0) {
 		mlog_errno(status);
@@ -863,7 +863,7 @@ static int ocfs2_mark_dquot_dirty(struct dquot *dquot)
 	/* Now write updated local dquot structure */
 	status = ocfs2_local_write_dquot(dquot);
 out_dlock:
-	mutex_unlock(&sb_dqopt(sb)->dqio_mutex);
+	mutex_unlock(&dqopts(sb)->dqio_mutex);
 	ocfs2_commit_trans(osb, handle);
 out_ilock:
 	ocfs2_unlock_global_qf(oinfo, 1);
diff --git a/fs/ocfs2/quota_local.c b/fs/ocfs2/quota_local.c
index dc78764..056cb24 100644
--- a/fs/ocfs2/quota_local.c
+++ b/fs/ocfs2/quota_local.c
@@ -173,7 +173,7 @@ static int ocfs2_local_check_quota_file(struct super_block *sb, int type)
 	unsigned int ino[MAXQUOTAS] = { USER_QUOTA_SYSTEM_INODE,
 					GROUP_QUOTA_SYSTEM_INODE };
 	struct buffer_head *bh = NULL;
-	struct inode *linode = sb_dqopt(sb)->files[type];
+	struct inode *linode = dqopts(sb)->files[type];
 	struct inode *ginode = NULL;
 	struct ocfs2_disk_dqheader *dqhead;
 	int status, ret = 0;
@@ -522,7 +522,7 @@ static int ocfs2_recover_local_quota_file(struct inode *lqinode,
 				mlog_errno(status);
 				goto out_drop_lock;
 			}
-			mutex_lock(&sb_dqopt(sb)->dqio_mutex);
+			mutex_lock(&dqopts(sb)->dqio_mutex);
 			spin_lock(&dq_data_lock);
 			/* Add usage from quota entry into quota changes
 			 * of our node. Auxiliary variables are important
@@ -555,7 +555,7 @@ static int ocfs2_recover_local_quota_file(struct inode *lqinode,
 			unlock_buffer(qbh);
 			ocfs2_journal_dirty(handle, qbh);
 out_commit:
-			mutex_unlock(&sb_dqopt(sb)->dqio_mutex);
+			mutex_unlock(&dqopts(sb)->dqio_mutex);
 			ocfs2_commit_trans(OCFS2_SB(sb), handle);
 out_drop_lock:
 			ocfs2_unlock_global_qf(oinfo, 1);
@@ -596,7 +596,7 @@ int ocfs2_finish_quota_recovery(struct ocfs2_super *osb,
 	unsigned int flags;
 
 	mlog(ML_NOTICE, "Finishing quota recovery in slot %u\n", slot_num);
-	mutex_lock(&sb_dqopt(sb)->dqonoff_mutex);
+	mutex_lock(&dqopts(sb)->dqonoff_mutex);
 	for (type = 0; type < MAXQUOTAS; type++) {
 		if (list_empty(&(rec->r_list[type])))
 			continue;
@@ -672,7 +672,7 @@ out_put:
 			break;
 	}
 out:
-	mutex_unlock(&sb_dqopt(sb)->dqonoff_mutex);
+	mutex_unlock(&dqopts(sb)->dqonoff_mutex);
 	kfree(rec);
 	return status;
 }
@@ -683,7 +683,7 @@ static int ocfs2_local_read_info(struct super_block *sb, int type)
 	struct ocfs2_local_disk_dqinfo *ldinfo;
 	struct mem_dqinfo *info = sb_dqinfo(sb, type);
 	struct ocfs2_mem_dqinfo *oinfo;
-	struct inode *lqinode = sb_dqopt(sb)->files[type];
+	struct inode *lqinode = dqopts(sb)->files[type];
 	int status;
 	struct buffer_head *bh = NULL;
 	struct ocfs2_quota_recovery *rec;
@@ -691,7 +691,7 @@ static int ocfs2_local_read_info(struct super_block *sb, int type)
 
 	/* We don't need the lock and we have to acquire quota file locks
 	 * which will later depend on this lock */
-	mutex_unlock(&sb_dqopt(sb)->dqio_mutex);
+	mutex_unlock(&dqopts(sb)->dqio_mutex);
 	info->dqi_maxblimit = 0x7fffffffffffffffLL;
 	info->dqi_maxilimit = 0x7fffffffffffffffLL;
 	oinfo = kmalloc(sizeof(struct ocfs2_mem_dqinfo), GFP_NOFS);
@@ -770,7 +770,7 @@ static int ocfs2_local_read_info(struct super_block *sb, int type)
 		goto out_err;
 	}
 
-	mutex_lock(&sb_dqopt(sb)->dqio_mutex);
+	mutex_lock(&dqopts(sb)->dqio_mutex);
 	return 0;
 out_err:
 	if (oinfo) {
@@ -784,7 +784,7 @@ out_err:
 		kfree(oinfo);
 	}
 	brelse(bh);
-	mutex_lock(&sb_dqopt(sb)->dqio_mutex);
+	mutex_lock(&dqopts(sb)->dqio_mutex);
 	return -1;
 }
 
@@ -796,7 +796,7 @@ static int ocfs2_local_write_info(struct super_block *sb, int type)
 						->dqi_libh;
 	int status;
 
-	status = ocfs2_modify_bh(sb_dqopt(sb)->files[type], bh, olq_update_info,
+	status = ocfs2_modify_bh(dqopts(sb)->files[type], bh, olq_update_info,
 				 info);
 	if (status < 0) {
 		mlog_errno(status);
@@ -849,7 +849,7 @@ static int ocfs2_local_free_info(struct super_block *sb, int type)
 
 	/* Mark local file as clean */
 	info->dqi_flags |= OLQF_CLEAN;
-	status = ocfs2_modify_bh(sb_dqopt(sb)->files[type],
+	status = ocfs2_modify_bh(dqopts(sb)->files[type],
 				 oinfo->dqi_libh,
 				 olq_update_info,
 				 info);
@@ -859,7 +859,7 @@ static int ocfs2_local_free_info(struct super_block *sb, int type)
 	}
 
 out:
-	ocfs2_inode_unlock(sb_dqopt(sb)->files[type], 1);
+	ocfs2_inode_unlock(dqopts(sb)->files[type], 1);
 	brelse(oinfo->dqi_libh);
 	brelse(oinfo->dqi_lqi_bh);
 	kfree(oinfo);
@@ -893,7 +893,7 @@ int ocfs2_local_write_dquot(struct dquot *dquot)
 	struct super_block *sb = dquot->dq_sb;
 	struct ocfs2_dquot *od = OCFS2_DQUOT(dquot);
 	struct buffer_head *bh;
-	struct inode *lqinode = sb_dqopt(sb)->files[dquot->dq_type];
+	struct inode *lqinode = dqopts(sb)->files[dquot->dq_type];
 	int status;
 
 	status = ocfs2_read_quota_phys_block(lqinode, od->dq_local_phys_blk,
@@ -962,7 +962,7 @@ static struct ocfs2_quota_chunk *ocfs2_local_quota_add_chunk(
 {
 	struct mem_dqinfo *info = sb_dqinfo(sb, type);
 	struct ocfs2_mem_dqinfo *oinfo = info->dqi_priv;
-	struct inode *lqinode = sb_dqopt(sb)->files[type];
+	struct inode *lqinode = dqopts(sb)->files[type];
 	struct ocfs2_quota_chunk *chunk = NULL;
 	struct ocfs2_local_disk_chunk *dchunk;
 	int status;
@@ -1094,7 +1094,7 @@ static struct ocfs2_quota_chunk *ocfs2_extend_local_quota_file(
 	struct mem_dqinfo *info = sb_dqinfo(sb, type);
 	struct ocfs2_mem_dqinfo *oinfo = info->dqi_priv;
 	struct ocfs2_quota_chunk *chunk;
-	struct inode *lqinode = sb_dqopt(sb)->files[type];
+	struct inode *lqinode = dqopts(sb)->files[type];
 	struct ocfs2_local_disk_chunk *dchunk;
 	int epb = ol_quota_entries_per_block(sb);
 	unsigned int chunk_blocks;
@@ -1215,7 +1215,7 @@ int ocfs2_create_local_dquot(struct dquot *dquot)
 {
 	struct super_block *sb = dquot->dq_sb;
 	int type = dquot->dq_type;
-	struct inode *lqinode = sb_dqopt(sb)->files[type];
+	struct inode *lqinode = dqopts(sb)->files[type];
 	struct ocfs2_quota_chunk *chunk;
 	struct ocfs2_dquot *od = OCFS2_DQUOT(dquot);
 	int offset;
@@ -1275,7 +1275,7 @@ int ocfs2_local_release_dquot(handle_t *handle, struct dquot *dquot)
 	int offset;
 
 	status = ocfs2_journal_access_dq(handle,
-			INODE_CACHE(sb_dqopt(sb)->files[type]),
+			INODE_CACHE(dqopts(sb)->files[type]),
 			od->dq_chunk->qc_headerbh, OCFS2_JOURNAL_ACCESS_WRITE);
 	if (status < 0) {
 		mlog_errno(status);
diff --git a/fs/ocfs2/super.c b/fs/ocfs2/super.c
index 8e6a20c..16065ae 100644
--- a/fs/ocfs2/super.c
+++ b/fs/ocfs2/super.c
@@ -907,7 +907,7 @@ static int ocfs2_enable_quotas(struct ocfs2_super *osb)
 	int status;
 	int type;
 
-	sb_dqopt(sb)->flags |= DQUOT_QUOTA_SYS_FILE | DQUOT_NEGATIVE_USAGE;
+	dqopts(sb)->flags |= DQUOT_QUOTA_SYS_FILE | DQUOT_NEGATIVE_USAGE;
 	for (type = 0; type < MAXQUOTAS; type++) {
 		if (!OCFS2_HAS_RO_COMPAT_FEATURE(sb, feature[type]))
 			continue;
@@ -949,7 +949,7 @@ static void ocfs2_disable_quotas(struct ocfs2_super *osb)
 		/* Cancel periodic syncing before we grab dqonoff_mutex */
 		oinfo = sb_dqinfo(sb, type)->dqi_priv;
 		cancel_delayed_work_sync(&oinfo->dqi_sync_work);
-		inode = igrab(sb->s_dquot.files[type]);
+		inode = igrab(dqopts(sb)->files[type]);
 		/* Turn off quotas. This will remove all dquot structures from
 		 * memory and so they will be automatically synced to global
 		 * quota files */
@@ -970,7 +970,7 @@ static int ocfs2_quota_on(struct super_block *sb, int type, int format_id)
 	if (!OCFS2_HAS_RO_COMPAT_FEATURE(sb, feature[type]))
 		return -EINVAL;
 
-	return dquot_enable(sb_dqopt(sb)->files[type], type,
+	return dquot_enable(dqopts(sb)->files[type], type,
 			    format_id, DQUOT_LIMITS_ENABLED);
 }
 
diff --git a/fs/quota/dquot.c b/fs/quota/dquot.c
index d1d1c51..748d744 100644
--- a/fs/quota/dquot.c
+++ b/fs/quota/dquot.c
@@ -346,7 +346,7 @@ int dquot_mark_dquot_dirty(struct dquot *dquot)
 
 	spin_lock(&dq_list_lock);
 	if (!test_and_set_bit(DQ_MOD_B, &dquot->dq_flags)) {
-		list_add(&dquot->dq_dirty, &sb_dqopt(dquot->dq_sb)->
+		list_add(&dquot->dq_dirty, &sb_dqopts(dquot)->
 				info[dquot->dq_type].dqi_dirty_list);
 		ret = 0;
 	}
@@ -390,7 +390,7 @@ static inline int clear_dquot_dirty(struct dquot *dquot)
 
 void mark_info_dirty(struct super_block *sb, int type)
 {
-	set_bit(DQF_INFO_DIRTY_B, &sb_dqopt(sb)->info[type].dqi_flags);
+	set_bit(DQF_INFO_DIRTY_B, &dqopts(sb)->info[type].dqi_flags);
 }
 EXPORT_SYMBOL(mark_info_dirty);
 
@@ -401,7 +401,7 @@ EXPORT_SYMBOL(mark_info_dirty);
 int dquot_acquire(struct dquot *dquot)
 {
 	int ret = 0, ret2 = 0;
-	struct quota_info *dqopt = sb_dqopt(dquot->dq_sb);
+	struct quota_info *dqopt = sb_dqopts(dquot);
 
 	mutex_lock(&dquot->dq_lock);
 	mutex_lock(&dqopt->dqio_mutex);
@@ -439,7 +439,7 @@ EXPORT_SYMBOL(dquot_acquire);
 int dquot_commit(struct dquot *dquot)
 {
 	int ret = 0, ret2 = 0;
-	struct quota_info *dqopt = sb_dqopt(dquot->dq_sb);
+	struct quota_info *dqopt = sb_dqopts(dquot);
 
 	mutex_lock(&dqopt->dqio_mutex);
 	spin_lock(&dq_list_lock);
@@ -471,7 +471,7 @@ EXPORT_SYMBOL(dquot_commit);
 int dquot_release(struct dquot *dquot)
 {
 	int ret = 0, ret2 = 0;
-	struct quota_info *dqopt = sb_dqopt(dquot->dq_sb);
+	struct quota_info *dqopt = sb_dqopts(dquot);
 
 	mutex_lock(&dquot->dq_lock);
 	/* Check whether we are not racing with some other dqget() */
@@ -568,7 +568,7 @@ int dquot_scan_active(struct super_block *sb,
 	struct dquot *dquot, *old_dquot = NULL;
 	int ret = 0;
 
-	mutex_lock(&sb_dqopt(sb)->dqonoff_mutex);
+	mutex_lock(&dqopts(sb)->dqonoff_mutex);
 	spin_lock(&dq_list_lock);
 	list_for_each_entry(dquot, &inuse_list, dq_inuse) {
 		if (!test_bit(DQ_ACTIVE_B, &dquot->dq_flags))
@@ -591,7 +591,7 @@ int dquot_scan_active(struct super_block *sb,
 	spin_unlock(&dq_list_lock);
 out:
 	dqput(old_dquot);
-	mutex_unlock(&sb_dqopt(sb)->dqonoff_mutex);
+	mutex_unlock(&dqopts(sb)->dqonoff_mutex);
 	return ret;
 }
 EXPORT_SYMBOL(dquot_scan_active);
@@ -600,7 +600,7 @@ int dquot_quota_sync(struct super_block *sb, int type, int wait)
 {
 	struct list_head *dirty;
 	struct dquot *dquot;
-	struct quota_info *dqopt = sb_dqopt(sb);
+	struct quota_info *dqopt = dqopts(sb);
 	int cnt;
 
 	mutex_lock(&dqopt->dqonoff_mutex);
@@ -639,7 +639,7 @@ int dquot_quota_sync(struct super_block *sb, int type, int wait)
 	dqstats_inc(DQST_SYNCS);
 	mutex_unlock(&dqopt->dqonoff_mutex);
 
-	if (!wait || (sb_dqopt(sb)->flags & DQUOT_QUOTA_SYS_FILE))
+	if (!wait || (dqopts(sb)->flags & DQUOT_QUOTA_SYS_FILE))
 		return 0;
 
 	/* This is not very clever (and fast) but currently I don't know about
@@ -653,18 +653,18 @@ int dquot_quota_sync(struct super_block *sb, int type, int wait)
 	 * Now when everything is written we can discard the pagecache so
 	 * that userspace sees the changes.
 	 */
-	mutex_lock(&sb_dqopt(sb)->dqonoff_mutex);
+	mutex_lock(&dqopts(sb)->dqonoff_mutex);
 	for (cnt = 0; cnt < MAXQUOTAS; cnt++) {
 		if (type != -1 && cnt != type)
 			continue;
 		if (!sb_has_quota_active(sb, cnt))
 			continue;
-		mutex_lock_nested(&sb_dqopt(sb)->files[cnt]->i_mutex,
+		mutex_lock_nested(&dqopts(sb)->files[cnt]->i_mutex,
 				  I_MUTEX_QUOTA);
-		truncate_inode_pages(&sb_dqopt(sb)->files[cnt]->i_data, 0);
-		mutex_unlock(&sb_dqopt(sb)->files[cnt]->i_mutex);
+		truncate_inode_pages(&dqopts(sb)->files[cnt]->i_data, 0);
+		mutex_unlock(&dqopts(sb)->files[cnt]->i_mutex);
 	}
-	mutex_unlock(&sb_dqopt(sb)->dqonoff_mutex);
+	mutex_unlock(&dqopts(sb)->dqonoff_mutex);
 
 	return 0;
 }
@@ -1033,9 +1033,9 @@ static void drop_dquot_ref(struct super_block *sb, int type)
 	LIST_HEAD(tofree_head);
 
 	if (sb->dq_op) {
-		down_write(&sb_dqopt(sb)->dqptr_sem);
+		down_write(&dqopts(sb)->dqptr_sem);
 		remove_dquot_ref(sb, type, &tofree_head);
-		up_write(&sb_dqopt(sb)->dqptr_sem);
+		up_write(&dqopts(sb)->dqptr_sem);
 		put_dquot_list(&tofree_head);
 	}
 }
@@ -1081,7 +1081,7 @@ void dquot_free_reserved_space(struct dquot *dquot, qsize_t number)
 
 static void dquot_decr_inodes(struct dquot *dquot, qsize_t number)
 {
-	if (sb_dqopt(dquot->dq_sb)->flags & DQUOT_NEGATIVE_USAGE ||
+	if (sb_dqopts(dquot)->flags & DQUOT_NEGATIVE_USAGE ||
 	    dquot->dq_dqb.dqb_curinodes >= number)
 		dquot->dq_dqb.dqb_curinodes -= number;
 	else
@@ -1093,7 +1093,7 @@ static void dquot_decr_inodes(struct dquot *dquot, qsize_t number)
 
 static void dquot_decr_space(struct dquot *dquot, qsize_t number)
 {
-	if (sb_dqopt(dquot->dq_sb)->flags & DQUOT_NEGATIVE_USAGE ||
+	if (sb_dqopts(dquot)->flags & DQUOT_NEGATIVE_USAGE ||
 	    dquot->dq_dqb.dqb_curspace >= number)
 		dquot->dq_dqb.dqb_curspace -= number;
 	else
@@ -1203,7 +1203,7 @@ static void flush_warnings(struct dquot *const *dquots, char *warntype)
 
 static int ignore_hardlimit(struct dquot *dquot)
 {
-	struct mem_dqinfo *info = &sb_dqopt(dquot->dq_sb)->info[dquot->dq_type];
+	struct mem_dqinfo *info = &sb_dqopts(dquot)->info[dquot->dq_type];
 
 	return capable(CAP_SYS_RESOURCE) &&
 	       (info->dqi_format->qf_fmt_id != QFMT_VFS_OLD ||
@@ -1241,7 +1241,7 @@ static int check_idq(struct dquot *dquot, qsize_t inodes, char *warntype)
 	    dquot->dq_dqb.dqb_itime == 0) {
 		*warntype = QUOTA_NL_ISOFTWARN;
 		dquot->dq_dqb.dqb_itime = get_seconds() +
-		    sb_dqopt(dquot->dq_sb)->info[dquot->dq_type].dqi_igrace;
+		    sb_dqopts(dquot)->info[dquot->dq_type].dqi_igrace;
 	}
 
 	return 0;
@@ -1285,7 +1285,7 @@ static int check_bdq(struct dquot *dquot, qsize_t space, int prealloc, char *war
 		if (!prealloc) {
 			*warntype = QUOTA_NL_BSOFTWARN;
 			dquot->dq_dqb.dqb_btime = get_seconds() +
-			    sb_dqopt(sb)->info[dquot->dq_type].dqi_bgrace;
+			    dqopts(sb)->info[dquot->dq_type].dqi_bgrace;
 		}
 		else
 			/*
@@ -1377,7 +1377,7 @@ static void __dquot_initialize(struct inode *inode, int type)
 		got[cnt] = dqget(sb, id, cnt);
 	}
 
-	down_write(&sb_dqopt(sb)->dqptr_sem);
+	down_write(&dqopts(sb)->dqptr_sem);
 	if (IS_NOQUOTA(inode))
 		goto out_err;
 	for (cnt = 0; cnt < MAXQUOTAS; cnt++) {
@@ -1402,7 +1402,7 @@ static void __dquot_initialize(struct inode *inode, int type)
 		}
 	}
 out_err:
-	up_write(&sb_dqopt(sb)->dqptr_sem);
+	up_write(&dqopts(sb)->dqptr_sem);
 	/* Drop unused references */
 	dqput_all(got);
 }
@@ -1421,12 +1421,12 @@ static void __dquot_drop(struct inode *inode)
 	int cnt;
 	struct dquot *put[MAXQUOTAS];
 
-	down_write(&sb_dqopt(inode->i_sb)->dqptr_sem);
+	down_write(&dqopts(inode->i_sb)->dqptr_sem);
 	for (cnt = 0; cnt < MAXQUOTAS; cnt++) {
 		put[cnt] = inode->i_dquot[cnt];
 		inode->i_dquot[cnt] = NULL;
 	}
-	up_write(&sb_dqopt(inode->i_sb)->dqptr_sem);
+	up_write(&dqopts(inode->i_sb)->dqptr_sem);
 	dqput_all(put);
 }
 
@@ -1550,7 +1550,7 @@ int __dquot_alloc_space(struct inode *inode, qsize_t number, int flags)
 		goto out;
 	}
 
-	down_read(&sb_dqopt(inode->i_sb)->dqptr_sem);
+	down_read(&dqopts(inode->i_sb)->dqptr_sem);
 	for (cnt = 0; cnt < MAXQUOTAS; cnt++)
 		warntype[cnt] = QUOTA_NL_NOWARN;
 
@@ -1581,7 +1581,7 @@ int __dquot_alloc_space(struct inode *inode, qsize_t number, int flags)
 	mark_all_dquot_dirty(inode->i_dquot);
 out_flush_warn:
 	flush_warnings(inode->i_dquot, warntype);
-	up_read(&sb_dqopt(inode->i_sb)->dqptr_sem);
+	up_read(&dqopts(inode->i_sb)->dqptr_sem);
 out:
 	return ret;
 }
@@ -1601,7 +1601,7 @@ int dquot_alloc_inode(const struct inode *inode)
 		return 0;
 	for (cnt = 0; cnt < MAXQUOTAS; cnt++)
 		warntype[cnt] = QUOTA_NL_NOWARN;
-	down_read(&sb_dqopt(inode->i_sb)->dqptr_sem);
+	down_read(&dqopts(inode->i_sb)->dqptr_sem);
 	spin_lock(&dq_data_lock);
 	for (cnt = 0; cnt < MAXQUOTAS; cnt++) {
 		if (!inode->i_dquot[cnt])
@@ -1622,7 +1622,7 @@ warn_put_all:
 	if (ret == 0)
 		mark_all_dquot_dirty(inode->i_dquot);
 	flush_warnings(inode->i_dquot, warntype);
-	up_read(&sb_dqopt(inode->i_sb)->dqptr_sem);
+	up_read(&dqopts(inode->i_sb)->dqptr_sem);
 	return ret;
 }
 EXPORT_SYMBOL(dquot_alloc_inode);
@@ -1639,7 +1639,7 @@ int dquot_claim_space_nodirty(struct inode *inode, qsize_t number)
 		return 0;
 	}
 
-	down_read(&sb_dqopt(inode->i_sb)->dqptr_sem);
+	down_read(&dqopts(inode->i_sb)->dqptr_sem);
 	spin_lock(&dq_data_lock);
 	/* Claim reserved quotas to allocated quotas */
 	for (cnt = 0; cnt < MAXQUOTAS; cnt++) {
@@ -1651,7 +1651,7 @@ int dquot_claim_space_nodirty(struct inode *inode, qsize_t number)
 	inode_claim_rsv_space(inode, number);
 	spin_unlock(&dq_data_lock);
 	mark_all_dquot_dirty(inode->i_dquot);
-	up_read(&sb_dqopt(inode->i_sb)->dqptr_sem);
+	up_read(&dqopts(inode->i_sb)->dqptr_sem);
 	return 0;
 }
 EXPORT_SYMBOL(dquot_claim_space_nodirty);
@@ -1672,7 +1672,7 @@ void __dquot_free_space(struct inode *inode, qsize_t number, int flags)
 		return;
 	}
 
-	down_read(&sb_dqopt(inode->i_sb)->dqptr_sem);
+	down_read(&dqopts(inode->i_sb)->dqptr_sem);
 	spin_lock(&dq_data_lock);
 	for (cnt = 0; cnt < MAXQUOTAS; cnt++) {
 		if (!inode->i_dquot[cnt])
@@ -1691,7 +1691,7 @@ void __dquot_free_space(struct inode *inode, qsize_t number, int flags)
 	mark_all_dquot_dirty(inode->i_dquot);
 out_unlock:
 	flush_warnings(inode->i_dquot, warntype);
-	up_read(&sb_dqopt(inode->i_sb)->dqptr_sem);
+	up_read(&dqopts(inode->i_sb)->dqptr_sem);
 }
 EXPORT_SYMBOL(__dquot_free_space);
 
@@ -1708,7 +1708,7 @@ void dquot_free_inode(const struct inode *inode)
 	if (!dquot_active(inode))
 		return;
 
-	down_read(&sb_dqopt(inode->i_sb)->dqptr_sem);
+	down_read(&dqopts(inode->i_sb)->dqptr_sem);
 	spin_lock(&dq_data_lock);
 	for (cnt = 0; cnt < MAXQUOTAS; cnt++) {
 		if (!inode->i_dquot[cnt])
@@ -1719,7 +1719,7 @@ void dquot_free_inode(const struct inode *inode)
 	spin_unlock(&dq_data_lock);
 	mark_all_dquot_dirty(inode->i_dquot);
 	flush_warnings(inode->i_dquot, warntype);
-	up_read(&sb_dqopt(inode->i_sb)->dqptr_sem);
+	up_read(&dqopts(inode->i_sb)->dqptr_sem);
 }
 EXPORT_SYMBOL(dquot_free_inode);
 
@@ -1750,9 +1750,9 @@ int __dquot_transfer(struct inode *inode, struct dquot **transfer_to)
 	/* Initialize the arrays */
 	for (cnt = 0; cnt < MAXQUOTAS; cnt++)
 		warntype_to[cnt] = QUOTA_NL_NOWARN;
-	down_write(&sb_dqopt(inode->i_sb)->dqptr_sem);
+	down_write(&dqopts(inode->i_sb)->dqptr_sem);
 	if (IS_NOQUOTA(inode)) {	/* File without quota accounting? */
-		up_write(&sb_dqopt(inode->i_sb)->dqptr_sem);
+		up_write(&dqopts(inode->i_sb)->dqptr_sem);
 		return 0;
 	}
 	spin_lock(&dq_data_lock);
@@ -1804,7 +1804,7 @@ int __dquot_transfer(struct inode *inode, struct dquot **transfer_to)
 		inode->i_dquot[cnt] = transfer_to[cnt];
 	}
 	spin_unlock(&dq_data_lock);
-	up_write(&sb_dqopt(inode->i_sb)->dqptr_sem);
+	up_write(&dqopts(inode->i_sb)->dqptr_sem);
 
 	mark_all_dquot_dirty(transfer_from);
 	mark_all_dquot_dirty(transfer_to);
@@ -1818,7 +1818,7 @@ int __dquot_transfer(struct inode *inode, struct dquot **transfer_to)
 	return 0;
 over_quota:
 	spin_unlock(&dq_data_lock);
-	up_write(&sb_dqopt(inode->i_sb)->dqptr_sem);
+	up_write(&dqopts(inode->i_sb)->dqptr_sem);
 	flush_warnings(transfer_to, warntype_to);
 	return ret;
 }
@@ -1853,7 +1853,7 @@ EXPORT_SYMBOL(dquot_transfer);
 int dquot_commit_info(struct super_block *sb, int type)
 {
 	int ret;
-	struct quota_info *dqopt = sb_dqopt(sb);
+	struct quota_info *dqopt = dqopts(sb);
 
 	mutex_lock(&dqopt->dqio_mutex);
 	ret = dqopt->ops[type]->write_file_info(sb, type);
@@ -1892,13 +1892,13 @@ EXPORT_SYMBOL(dquot_file_open);
 
 int dquot_get_dqfmt(struct super_block *sb, int type, unsigned int *fmt)
 {
-	mutex_lock(&sb_dqopt(sb)->dqonoff_mutex);
+	mutex_lock(&dqopts(sb)->dqonoff_mutex);
 	if (!sb_has_quota_active(sb, type)) {
-		mutex_unlock(&sb_dqopt(sb)->dqonoff_mutex);
+		mutex_unlock(&dqopts(sb)->dqonoff_mutex);
 		return -ESRCH;
 	}
-	*fmt = sb_dqopt(sb)->info[type].dqi_format->qf_fmt_id;
-	mutex_unlock(&sb_dqopt(sb)->dqonoff_mutex);
+	*fmt = dqopts(sb)->info[type].dqi_format->qf_fmt_id;
+	mutex_unlock(&dqopts(sb)->dqonoff_mutex);
 	return 0;
 }
 EXPORT_SYMBOL(dquot_get_dqfmt);
@@ -1909,7 +1909,7 @@ EXPORT_SYMBOL(dquot_get_dqfmt);
 int dquot_disable(struct super_block *sb, int type, unsigned int flags)
 {
 	int cnt, ret = 0;
-	struct quota_info *dqopt = sb_dqopt(sb);
+	struct quota_info *dqopt = dqopts(sb);
 	struct inode *toputinode[MAXQUOTAS];
 
 	/* Cannot turn off usage accounting without turning off limits, or
@@ -2058,7 +2058,7 @@ static int vfs_load_quota_inode(struct inode *inode, int type, int format_id,
 {
 	struct quota_format_type *fmt = find_quota_format(format_id);
 	struct super_block *sb = inode->i_sb;
-	struct quota_info *dqopt = sb_dqopt(sb);
+	struct quota_info *dqopt = dqopts(sb);
 	int error;
 	int oldflags = -1;
 
@@ -2164,7 +2164,7 @@ out_fmt:
 /* Reenable quotas on remount RW */
 int dquot_resume(struct super_block *sb, int type)
 {
-	struct quota_info *dqopt = sb_dqopt(sb);
+	struct quota_info *dqopt = dqopts(sb);
 	struct inode *inode;
 	int ret = 0, cnt;
 	unsigned int flags;
@@ -2224,7 +2224,7 @@ int dquot_enable(struct inode *inode, int type, int format_id,
 {
 	int ret = 0;
 	struct super_block *sb = inode->i_sb;
-	struct quota_info *dqopt = sb_dqopt(sb);
+	struct quota_info *dqopt = dqopts(sb);
 
 	/* Just unsuspend quotas? */
 	BUG_ON(flags & DQUOT_SUSPENDED);
@@ -2250,7 +2250,7 @@ int dquot_enable(struct inode *inode, int type, int format_id,
 			goto out_lock;
 		}
 		spin_lock(&dq_state_lock);
-		sb_dqopt(sb)->flags |= dquot_state_flag(flags, type);
+		dqopts(sb)->flags |= dquot_state_flag(flags, type);
 		spin_unlock(&dq_state_lock);
 out_lock:
 		mutex_unlock(&dqopt->dqonoff_mutex);
@@ -2352,7 +2352,7 @@ static int do_set_dqblk(struct dquot *dquot, struct fs_disk_quota *di)
 {
 	struct mem_dqblk *dm = &dquot->dq_dqb;
 	int check_blim = 0, check_ilim = 0;
-	struct mem_dqinfo *dqi = &sb_dqopt(dquot->dq_sb)->info[dquot->dq_type];
+	struct mem_dqinfo *dqi = &sb_dqopts(dquot)->info[dquot->dq_type];
 
 	if (di->d_fieldmask & ~VFS_FS_DQ_MASK)
 		return -EINVAL;
@@ -2462,19 +2462,19 @@ int dquot_get_dqinfo(struct super_block *sb, int type, struct if_dqinfo *ii)
 {
 	struct mem_dqinfo *mi;
   
-	mutex_lock(&sb_dqopt(sb)->dqonoff_mutex);
+	mutex_lock(&dqopts(sb)->dqonoff_mutex);
 	if (!sb_has_quota_active(sb, type)) {
-		mutex_unlock(&sb_dqopt(sb)->dqonoff_mutex);
+		mutex_unlock(&dqopts(sb)->dqonoff_mutex);
 		return -ESRCH;
 	}
-	mi = sb_dqopt(sb)->info + type;
+	mi = dqopts(sb)->info + type;
 	spin_lock(&dq_data_lock);
 	ii->dqi_bgrace = mi->dqi_bgrace;
 	ii->dqi_igrace = mi->dqi_igrace;
 	ii->dqi_flags = mi->dqi_flags & DQF_MASK;
 	ii->dqi_valid = IIF_ALL;
 	spin_unlock(&dq_data_lock);
-	mutex_unlock(&sb_dqopt(sb)->dqonoff_mutex);
+	mutex_unlock(&dqopts(sb)->dqonoff_mutex);
 	return 0;
 }
 EXPORT_SYMBOL(dquot_get_dqinfo);
@@ -2485,12 +2485,12 @@ int dquot_set_dqinfo(struct super_block *sb, int type, struct if_dqinfo *ii)
 	struct mem_dqinfo *mi;
 	int err = 0;
 
-	mutex_lock(&sb_dqopt(sb)->dqonoff_mutex);
+	mutex_lock(&dqopts(sb)->dqonoff_mutex);
 	if (!sb_has_quota_active(sb, type)) {
 		err = -ESRCH;
 		goto out;
 	}
-	mi = sb_dqopt(sb)->info + type;
+	mi = dqopts(sb)->info + type;
 	spin_lock(&dq_data_lock);
 	if (ii->dqi_valid & IIF_BGRACE)
 		mi->dqi_bgrace = ii->dqi_bgrace;
@@ -2504,7 +2504,7 @@ int dquot_set_dqinfo(struct super_block *sb, int type, struct if_dqinfo *ii)
 	/* Force write to disk */
 	sb->dq_op->write_info(sb, type);
 out:
-	mutex_unlock(&sb_dqopt(sb)->dqonoff_mutex);
+	mutex_unlock(&dqopts(sb)->dqonoff_mutex);
 	return err;
 }
 EXPORT_SYMBOL(dquot_set_dqinfo);
diff --git a/fs/quota/quota_tree.c b/fs/quota/quota_tree.c
index 9e48874..c0917f4 100644
--- a/fs/quota/quota_tree.c
+++ b/fs/quota/quota_tree.c
@@ -596,7 +596,7 @@ int qtree_read_dquot(struct qtree_mem_dqinfo *info, struct dquot *dquot)
 
 #ifdef __QUOTA_QT_PARANOIA
 	/* Invalidated quota? */
-	if (!sb_dqopt(dquot->dq_sb)->files[type]) {
+	if (!sb_dqopts(dquot)->files[type]) {
 		quota_error(sb, "Quota invalidated while reading!");
 		return -EIO;
 	}
diff --git a/fs/quota/quota_v1.c b/fs/quota/quota_v1.c
index 34b37a6..cab3ca3 100644
--- a/fs/quota/quota_v1.c
+++ b/fs/quota/quota_v1.c
@@ -57,7 +57,7 @@ static int v1_read_dqblk(struct dquot *dquot)
 	int type = dquot->dq_type;
 	struct v1_disk_dqblk dqblk;
 
-	if (!sb_dqopt(dquot->dq_sb)->files[type])
+	if (!sb_dqopts(dquot)->files[type])
 		return -EINVAL;
 
 	/* Set structure to 0s in case read fails/is after end of file */
@@ -85,12 +85,12 @@ static int v1_commit_dqblk(struct dquot *dquot)
 	v1_mem2disk_dqblk(&dqblk, &dquot->dq_dqb);
 	if (dquot->dq_id == 0) {
 		dqblk.dqb_btime =
-			sb_dqopt(dquot->dq_sb)->info[type].dqi_bgrace;
+			sb_dqopts(dquot)->info[type].dqi_bgrace;
 		dqblk.dqb_itime =
-			sb_dqopt(dquot->dq_sb)->info[type].dqi_igrace;
+			sb_dqopts(dquot)->info[type].dqi_igrace;
 	}
 	ret = 0;
-	if (sb_dqopt(dquot->dq_sb)->files[type])
+	if (sb_dqopts(dquot)->files[type])
 		ret = dquot->dq_sb->s_op->quota_write(dquot->dq_sb, type,
 			(char *)&dqblk, sizeof(struct v1_disk_dqblk),
 			v1_dqoff(dquot->dq_id));
@@ -122,7 +122,7 @@ struct v2_disk_dqheader {
 
 static int v1_check_quota_file(struct super_block *sb, int type)
 {
-	struct inode *inode = sb_dqopt(sb)->files[type];
+	struct inode *inode = dqopts(sb)->files[type];
 	ulong blocks;
 	size_t off; 
 	struct v2_disk_dqheader dqhead;
@@ -154,7 +154,7 @@ static int v1_check_quota_file(struct super_block *sb, int type)
 
 static int v1_read_file_info(struct super_block *sb, int type)
 {
-	struct quota_info *dqopt = sb_dqopt(sb);
+	struct quota_info *dqopt = dqopts(sb);
 	struct v1_disk_dqblk dqblk;
 	int ret;
 
@@ -179,7 +179,7 @@ out:
 
 static int v1_write_file_info(struct super_block *sb, int type)
 {
-	struct quota_info *dqopt = sb_dqopt(sb);
+	struct quota_info *dqopt = dqopts(sb);
 	struct v1_disk_dqblk dqblk;
 	int ret;
 
diff --git a/fs/reiserfs/super.c b/fs/reiserfs/super.c
index d4d32df..a07f30a 100644
--- a/fs/reiserfs/super.c
+++ b/fs/reiserfs/super.c
@@ -312,7 +312,7 @@ static int finish_unfinished(struct super_block *s)
 #ifdef CONFIG_QUOTA
 	/* Turn quotas off */
 	for (i = 0; i < MAXQUOTAS; i++) {
-		if (sb_dqopt(s)->files[i] && quota_enabled[i])
+		if (dqopts(s)->files[i] && quota_enabled[i])
 			dquot_quota_off(s, i);
 	}
 	if (ms_active_set)
@@ -2106,7 +2106,7 @@ out:
 static ssize_t reiserfs_quota_read(struct super_block *sb, int type, char *data,
 				   size_t len, loff_t off)
 {
-	struct inode *inode = sb_dqopt(sb)->files[type];
+	struct inode *inode = dqopts(sb)->files[type];
 	unsigned long blk = off >> sb->s_blocksize_bits;
 	int err = 0, offset = off & (sb->s_blocksize - 1), tocopy;
 	size_t toread;
@@ -2151,7 +2151,7 @@ static ssize_t reiserfs_quota_read(struct super_block *sb, int type, char *data,
 static ssize_t reiserfs_quota_write(struct super_block *sb, int type,
 				    const char *data, size_t len, loff_t off)
 {
-	struct inode *inode = sb_dqopt(sb)->files[type];
+	struct inode *inode = dqopts(sb)->files[type];
 	unsigned long blk = off >> sb->s_blocksize_bits;
 	int err = 0, offset = off & (sb->s_blocksize - 1), tocopy;
 	int journal_quota = REISERFS_SB(sb)->s_qf_names[type] != NULL;
diff --git a/include/linux/quota.h b/include/linux/quota.h
index 2767e4c..bc495d0 100644
--- a/include/linux/quota.h
+++ b/include/linux/quota.h
@@ -182,6 +182,7 @@ enum {
 
 #include <asm/atomic.h>
 
+
 typedef __kernel_uid32_t qid_t; /* Type in which we store ids in memory */
 typedef long long qsize_t;	/* Type in which we store sizes */
 
diff --git a/include/linux/quotaops.h b/include/linux/quotaops.h
index 45ae255..9750d86 100644
--- a/include/linux/quotaops.h
+++ b/include/linux/quotaops.h
@@ -13,10 +13,14 @@
 #define DQUOT_SPACE_RESERVE	0x2
 #define DQUOT_SPACE_NOFAIL	0x4
 
-static inline struct quota_info *sb_dqopt(struct super_block *sb)
+static inline struct quota_info *dqopts(struct super_block *sb)
 {
 	return &sb->s_dquot;
 }
+static inline struct quota_info* sb_dqopts(struct dquot *dq)
+{
+	return dqopts(dq->dq_sb);
+}
 
 /* i_mutex must being held */
 static inline bool is_quota_modification(struct inode *inode, struct iattr *ia)
@@ -96,7 +100,7 @@ int dquot_transfer(struct inode *inode, struct iattr *iattr);
 
 static inline struct mem_dqinfo *sb_dqinfo(struct super_block *sb, int type)
 {
-	return sb_dqopt(sb)->info + type;
+	return dqopts(sb)->info + type;
 }
 
 /*
@@ -105,19 +109,19 @@ static inline struct mem_dqinfo *sb_dqinfo(struct super_block *sb, int type)
 
 static inline bool sb_has_quota_usage_enabled(struct super_block *sb, int type)
 {
-	return sb_dqopt(sb)->flags &
+	return dqopts(sb)->flags &
 				dquot_state_flag(DQUOT_USAGE_ENABLED, type);
 }
 
 static inline bool sb_has_quota_limits_enabled(struct super_block *sb, int type)
 {
-	return sb_dqopt(sb)->flags &
+	return dqopts(sb)->flags &
 				dquot_state_flag(DQUOT_LIMITS_ENABLED, type);
 }
 
 static inline bool sb_has_quota_suspended(struct super_block *sb, int type)
 {
-	return sb_dqopt(sb)->flags &
+	return dqopts(sb)->flags &
 				dquot_state_flag(DQUOT_SUSPENDED, type);
 }
 
-- 
1.6.5.2



* [PATCH 04/19] quota: protect dqget() from parallel quotaoff via SRCU
  2010-11-11 12:14 [PATCH 00/19] quota: RFC SMP improvements for generic quota V3 Dmitry Monakhov
                   ` (2 preceding siblings ...)
  2010-11-11 12:14 ` [PATCH 03/19] quota: Wrap common expression to helper function Dmitry Monakhov
@ 2010-11-11 12:14 ` Dmitry Monakhov
  2010-11-22 21:21   ` Jan Kara
  2010-11-11 12:14 ` [PATCH 05/19] quota: mode quota internals from sb to quota_info Dmitry Monakhov
                   ` (15 subsequent siblings)
  19 siblings, 1 reply; 32+ messages in thread
From: Dmitry Monakhov @ 2010-11-11 12:14 UTC (permalink / raw)
  To: linux-fsdevel; +Cc: jack, hch, Dmitry Monakhov

In order to hide quota internals behind a dedicated structure pointer,
we have to serialize that object's lifetime against dqget() and the
charge/uncharge functions.
quota_info construction/destruction will be protected via ->dq_srcu.
The SRCU counter is temporarily placed inside the sb, but will be moved
inside the quota object pointer in the next patch.

Signed-off-by: Dmitry Monakhov <dmonakhov@openvz.org>
---
 fs/quota/dquot.c      |  113 ++++++++++++++++++++++++++++++++++++++----------
 fs/super.c            |    9 ++++
 include/linux/quota.h |    2 +
 3 files changed, 100 insertions(+), 24 deletions(-)
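Note: the read-side pattern this patch repeats in dqget() and the charge/uncharge paths — check the active flag under rcu_read_lock(), pin quota_info with srcu_read_lock() before dropping RCU, do the work, then srcu_read_unlock() — can be sketched as a userspace model. Everything below (quota_info_model, model_dqget_enter, etc.) is an illustrative stand-in, not kernel API; the atomic reader count only models the role SRCU plays here, not its grace-period machinery.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Toy model of the dqget() guard: a reader may act on the
 * "quota is active" check only after pinning quota_info, so
 * quotaoff cannot free the object under it. */
struct quota_info_model {
	atomic_bool active;       /* stands in for sb_has_quota_active() */
	atomic_int  srcu_readers; /* stands in for the dq_srcu read side */
};

/* Returns true and pins the object, or false if quota is off.
 * Mirrors: rcu_read_lock(); check; srcu_read_lock(); rcu_read_unlock(); */
static bool model_dqget_enter(struct quota_info_model *q)
{
	if (!atomic_load(&q->active))
		return false;
	atomic_fetch_add(&q->srcu_readers, 1); /* srcu_read_lock() */
	return true;
}

/* Mirrors srcu_read_unlock() at the end of the operation. */
static void model_dqget_exit(struct quota_info_model *q)
{
	atomic_fetch_sub(&q->srcu_readers, 1);
}

/* quotaoff side: clear the flag first, then quota_info may be torn
 * down only once all pinned readers have drained (the role that
 * synchronize_srcu() plays in the real code). */
static bool model_quotaoff_can_free(struct quota_info_model *q)
{
	atomic_store(&q->active, false);
	return atomic_load(&q->srcu_readers) == 0;
}
```

The ordering is the point: new readers are turned away by the flag, while readers that already pinned the object keep it alive until they unlock.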

diff --git a/fs/quota/dquot.c b/fs/quota/dquot.c
index 748d744..7e937b0 100644
--- a/fs/quota/dquot.c
+++ b/fs/quota/dquot.c
@@ -805,7 +805,7 @@ static struct dquot *get_empty_dquot(struct super_block *sb, int type)
 /*
  * Get reference to dquot
  *
- * Locking is slightly tricky here. We are guarded from parallel quotaoff()
+ * We are guarded from parallel quotaoff() by holding srcu_read_lock
  * destroying our dquot by:
  *   a) checking for quota flags under dq_list_lock and
  *   b) getting a reference to dquot before we release dq_list_lock
@@ -814,9 +814,15 @@ struct dquot *dqget(struct super_block *sb, unsigned int id, int type)
 {
 	unsigned int hashent = hashfn(sb, id, type);
 	struct dquot *dquot = NULL, *empty = NULL;
+	int idx;
 
-        if (!sb_has_quota_active(sb, type))
+	rcu_read_lock();
+	if (!sb_has_quota_active(sb, type)) {
+		rcu_read_unlock();
 		return NULL;
+	}
+	idx = srcu_read_lock(&dqopts(sb)->dq_srcu);
+	rcu_read_unlock();
 we_slept:
 	spin_lock(&dq_list_lock);
 	spin_lock(&dq_state_lock);
@@ -867,6 +873,7 @@ we_slept:
 	BUG_ON(!dquot->dq_sb);	/* Has somebody invalidated entry under us? */
 #endif
 out:
+	srcu_read_unlock(&dqopts(sb)->dq_srcu, idx);
 	if (empty)
 		do_destroy_dquot(empty);
 
@@ -1351,16 +1358,20 @@ static int dquot_active(const struct inode *inode)
 static void __dquot_initialize(struct inode *inode, int type)
 {
 	unsigned int id = 0;
-	int cnt;
+	int cnt, idx;
 	struct dquot *got[MAXQUOTAS];
 	struct super_block *sb = inode->i_sb;
 	qsize_t rsv;
 
 	/* First test before acquiring mutex - solves deadlocks when we
          * re-enter the quota code and are already holding the mutex */
-	if (!dquot_active(inode))
+	rcu_read_lock();
+	if (!dquot_active(inode)) {
+		rcu_read_unlock();
 		return;
-
+	}
+	idx = srcu_read_lock(&dqopts(sb)->dq_srcu);
+	rcu_read_unlock();
 	/* First get references to structures we might need. */
 	for (cnt = 0; cnt < MAXQUOTAS; cnt++) {
 		got[cnt] = NULL;
@@ -1403,6 +1414,7 @@ static void __dquot_initialize(struct inode *inode, int type)
 	}
 out_err:
 	up_write(&dqopts(sb)->dqptr_sem);
+	srcu_read_unlock(&dqopts(sb)->dq_srcu, idx);
 	/* Drop unused references */
 	dqput_all(got);
 }
@@ -1432,11 +1444,10 @@ static void __dquot_drop(struct inode *inode)
 
 void dquot_drop(struct inode *inode)
 {
-	int cnt;
+	int cnt, idx;
 
 	if (IS_NOQUOTA(inode))
 		return;
-
 	/*
 	 * Test before calling to rule out calls from proc and such
 	 * where we are not allowed to block. Note that this is
@@ -1444,13 +1455,19 @@ void dquot_drop(struct inode *inode)
 	 * must assure that nobody can come after the DQUOT_DROP and
 	 * add quota pointers back anyway.
 	 */
+	rcu_read_lock();
 	for (cnt = 0; cnt < MAXQUOTAS; cnt++) {
 		if (inode->i_dquot[cnt])
 			break;
 	}
-
-	if (cnt < MAXQUOTAS)
-		__dquot_drop(inode);
+	if (cnt == MAXQUOTAS) {
+		rcu_read_unlock();
+		return;
+	}
+	idx = srcu_read_lock(&dqopts(inode->i_sb)->dq_srcu);
+	rcu_read_unlock();
+	__dquot_drop(inode);
+	srcu_read_unlock(&dqopts(inode->i_sb)->dq_srcu, idx);
 }
 EXPORT_SYMBOL(dquot_drop);
 
@@ -1535,7 +1552,7 @@ static void inode_decr_space(struct inode *inode, qsize_t number, int reserve)
  */
 int __dquot_alloc_space(struct inode *inode, qsize_t number, int flags)
 {
-	int cnt, ret = 0;
+	int cnt, idx, ret = 0;
 	char warntype[MAXQUOTAS];
 	int warn = flags & DQUOT_SPACE_WARN;
 	int reserve = flags & DQUOT_SPACE_RESERVE;
@@ -1545,11 +1562,14 @@ int __dquot_alloc_space(struct inode *inode, qsize_t number, int flags)
 	 * First test before acquiring mutex - solves deadlocks when we
 	 * re-enter the quota code and are already holding the mutex
 	 */
+	rcu_read_lock();
 	if (!dquot_active(inode)) {
 		inode_incr_space(inode, number, reserve);
+		rcu_read_unlock();
 		goto out;
 	}
-
+	idx = srcu_read_lock(&dqopts(inode->i_sb)->dq_srcu);
+	rcu_read_unlock();
 	down_read(&dqopts(inode->i_sb)->dqptr_sem);
 	for (cnt = 0; cnt < MAXQUOTAS; cnt++)
 		warntype[cnt] = QUOTA_NL_NOWARN;
@@ -1582,6 +1602,7 @@ int __dquot_alloc_space(struct inode *inode, qsize_t number, int flags)
 out_flush_warn:
 	flush_warnings(inode->i_dquot, warntype);
 	up_read(&dqopts(inode->i_sb)->dqptr_sem);
+	srcu_read_unlock(&dqopts(inode->i_sb)->dq_srcu, idx);
 out:
 	return ret;
 }
@@ -1592,13 +1613,19 @@ EXPORT_SYMBOL(__dquot_alloc_space);
  */
 int dquot_alloc_inode(const struct inode *inode)
 {
-	int cnt, ret = 0;
+	int cnt, idx, ret = 0;
 	char warntype[MAXQUOTAS];
 
 	/* First test before acquiring mutex - solves deadlocks when we
          * re-enter the quota code and are already holding the mutex */
-	if (!dquot_active(inode))
+	rcu_read_lock();
+	if (!dquot_active(inode)) {
+		rcu_read_unlock();
 		return 0;
+	}
+	idx = srcu_read_lock(&dqopts(inode->i_sb)->dq_srcu);
+	rcu_read_unlock();
+
 	for (cnt = 0; cnt < MAXQUOTAS; cnt++)
 		warntype[cnt] = QUOTA_NL_NOWARN;
 	down_read(&dqopts(inode->i_sb)->dqptr_sem);
@@ -1623,6 +1650,7 @@ warn_put_all:
 		mark_all_dquot_dirty(inode->i_dquot);
 	flush_warnings(inode->i_dquot, warntype);
 	up_read(&dqopts(inode->i_sb)->dqptr_sem);
+	srcu_read_unlock(&dqopts(inode->i_sb)->dq_srcu, idx);
 	return ret;
 }
 EXPORT_SYMBOL(dquot_alloc_inode);
@@ -1632,13 +1660,16 @@ EXPORT_SYMBOL(dquot_alloc_inode);
  */
 int dquot_claim_space_nodirty(struct inode *inode, qsize_t number)
 {
-	int cnt;
+	int cnt, idx;
 
+	rcu_read_lock();
 	if (!dquot_active(inode)) {
 		inode_claim_rsv_space(inode, number);
+		rcu_read_unlock();
 		return 0;
 	}
-
+	idx = srcu_read_lock(&dqopts(inode->i_sb)->dq_srcu);
+	rcu_read_unlock();
 	down_read(&dqopts(inode->i_sb)->dqptr_sem);
 	spin_lock(&dq_data_lock);
 	/* Claim reserved quotas to allocated quotas */
@@ -1652,6 +1683,7 @@ int dquot_claim_space_nodirty(struct inode *inode, qsize_t number)
 	spin_unlock(&dq_data_lock);
 	mark_all_dquot_dirty(inode->i_dquot);
 	up_read(&dqopts(inode->i_sb)->dqptr_sem);
+	srcu_read_unlock(&dqopts(inode->i_sb)->dq_srcu, idx);
 	return 0;
 }
 EXPORT_SYMBOL(dquot_claim_space_nodirty);
@@ -1661,17 +1693,21 @@ EXPORT_SYMBOL(dquot_claim_space_nodirty);
  */
 void __dquot_free_space(struct inode *inode, qsize_t number, int flags)
 {
-	unsigned int cnt;
+	unsigned int cnt, idx;
 	char warntype[MAXQUOTAS];
 	int reserve = flags & DQUOT_SPACE_RESERVE;
 
 	/* First test before acquiring mutex - solves deadlocks when we
          * re-enter the quota code and are already holding the mutex */
+	rcu_read_lock();
 	if (!dquot_active(inode)) {
 		inode_decr_space(inode, number, reserve);
+		rcu_read_unlock();
 		return;
 	}
 
+	idx = srcu_read_lock(&dqopts(inode->i_sb)->dq_srcu);
+	rcu_read_unlock();
 	down_read(&dqopts(inode->i_sb)->dqptr_sem);
 	spin_lock(&dq_data_lock);
 	for (cnt = 0; cnt < MAXQUOTAS; cnt++) {
@@ -1692,6 +1728,7 @@ void __dquot_free_space(struct inode *inode, qsize_t number, int flags)
 out_unlock:
 	flush_warnings(inode->i_dquot, warntype);
 	up_read(&dqopts(inode->i_sb)->dqptr_sem);
+	srcu_read_unlock(&dqopts(inode->i_sb)->dq_srcu, idx);
 }
 EXPORT_SYMBOL(__dquot_free_space);
 
@@ -1700,14 +1737,18 @@ EXPORT_SYMBOL(__dquot_free_space);
  */
 void dquot_free_inode(const struct inode *inode)
 {
-	unsigned int cnt;
+	unsigned int cnt, idx;
 	char warntype[MAXQUOTAS];
 
 	/* First test before acquiring mutex - solves deadlocks when we
          * re-enter the quota code and are already holding the mutex */
-	if (!dquot_active(inode))
+	rcu_read_lock();
+	if (!dquot_active(inode)) {
+		rcu_read_unlock();
 		return;
-
+	}
+	idx = srcu_read_lock(&dqopts(inode->i_sb)->dq_srcu);
+	rcu_read_unlock();
 	down_read(&dqopts(inode->i_sb)->dqptr_sem);
 	spin_lock(&dq_data_lock);
 	for (cnt = 0; cnt < MAXQUOTAS; cnt++) {
@@ -1720,6 +1761,7 @@ void dquot_free_inode(const struct inode *inode)
 	mark_all_dquot_dirty(inode->i_dquot);
 	flush_warnings(inode->i_dquot, warntype);
 	up_read(&dqopts(inode->i_sb)->dqptr_sem);
+	srcu_read_unlock(&dqopts(inode->i_sb)->dq_srcu, idx);
 }
 EXPORT_SYMBOL(dquot_free_inode);
 
@@ -1738,21 +1780,28 @@ int __dquot_transfer(struct inode *inode, struct dquot **transfer_to)
 	qsize_t space, cur_space;
 	qsize_t rsv_space = 0;
 	struct dquot *transfer_from[MAXQUOTAS] = {};
-	int cnt, ret = 0;
+	int cnt, idx, ret = 0;
 	char is_valid[MAXQUOTAS] = {};
 	char warntype_to[MAXQUOTAS];
 	char warntype_from_inodes[MAXQUOTAS], warntype_from_space[MAXQUOTAS];
 
 	/* First test before acquiring mutex - solves deadlocks when we
          * re-enter the quota code and are already holding the mutex */
-	if (IS_NOQUOTA(inode))
+	rcu_read_lock();
+	if (!dquot_active(inode)) {
+		rcu_read_unlock();
 		return 0;
+	}
 	/* Initialize the arrays */
 	for (cnt = 0; cnt < MAXQUOTAS; cnt++)
 		warntype_to[cnt] = QUOTA_NL_NOWARN;
+
+	idx = srcu_read_lock(&dqopts(inode->i_sb)->dq_srcu);
+	rcu_read_unlock();
 	down_write(&dqopts(inode->i_sb)->dqptr_sem);
 	if (IS_NOQUOTA(inode)) {	/* File without quota accounting? */
 		up_write(&dqopts(inode->i_sb)->dqptr_sem);
+		srcu_read_unlock(&dqopts(inode->i_sb)->dq_srcu, idx);
 		return 0;
 	}
 	spin_lock(&dq_data_lock);
@@ -1805,7 +1854,7 @@ int __dquot_transfer(struct inode *inode, struct dquot **transfer_to)
 	}
 	spin_unlock(&dq_data_lock);
 	up_write(&dqopts(inode->i_sb)->dqptr_sem);
-
+	srcu_read_unlock(&dqopts(inode->i_sb)->dq_srcu, idx);
 	mark_all_dquot_dirty(transfer_from);
 	mark_all_dquot_dirty(transfer_to);
 	flush_warnings(transfer_to, warntype_to);
@@ -1819,6 +1868,7 @@ int __dquot_transfer(struct inode *inode, struct dquot **transfer_to)
 over_quota:
 	spin_unlock(&dq_data_lock);
 	up_write(&dqopts(inode->i_sb)->dqptr_sem);
+	srcu_read_unlock(&dqopts(inode->i_sb)->dq_srcu, idx);
 	flush_warnings(transfer_to, warntype_to);
 	return ret;
 }
@@ -1963,6 +2013,22 @@ int dquot_disable(struct super_block *sb, int type, unsigned int flags)
 		if (sb_has_quota_loaded(sb, cnt) && !(flags & DQUOT_SUSPENDED))
 			continue;
 
+		toputinode[cnt] = dqopt->files[cnt];
+	}
+	/*
+	 * Wait for all dqget() callers to finish.
+	 */
+	synchronize_rcu();
+	synchronize_srcu(&dqopt->dq_srcu);
+
+	/*
+	 * At this point all quota functions are disabled; it is now safe to
+	 * perform final cleanup.
+	 */
+	for (cnt = 0; cnt < MAXQUOTAS; cnt++) {
+
+		if (!toputinode[cnt])
+			continue;
 		/* Note: these are blocking operations */
 		drop_dquot_ref(sb, cnt);
 		invalidate_dquots(sb, cnt);
@@ -1976,7 +2042,6 @@ int dquot_disable(struct super_block *sb, int type, unsigned int flags)
 			dqopt->ops[cnt]->free_file_info(sb, cnt);
 		put_quota_format(dqopt->info[cnt].dqi_format);
 
-		toputinode[cnt] = dqopt->files[cnt];
 		if (!sb_has_quota_loaded(sb, cnt))
 			dqopt->files[cnt] = NULL;
 		dqopt->info[cnt].dqi_flags = 0;
diff --git a/fs/super.c b/fs/super.c
index 8819e3a..473bdf6 100644
--- a/fs/super.c
+++ b/fs/super.c
@@ -54,9 +54,16 @@ static struct super_block *alloc_super(struct file_system_type *type)
 			s = NULL;
 			goto out;
 		}
+		if (init_srcu_struct(&s->s_dquot.dq_srcu)) {
+			security_sb_free(s);
+			kfree(s);
+			s = NULL;
+			goto out;
+		}
 #ifdef CONFIG_SMP
 		s->s_files = alloc_percpu(struct list_head);
 		if (!s->s_files) {
+			cleanup_srcu_struct(&s->s_dquot.dq_srcu);
 			security_sb_free(s);
 			kfree(s);
 			s = NULL;
@@ -106,6 +113,7 @@ static struct super_block *alloc_super(struct file_system_type *type)
 		mutex_init(&s->s_dquot.dqio_mutex);
 		mutex_init(&s->s_dquot.dqonoff_mutex);
 		init_rwsem(&s->s_dquot.dqptr_sem);
+
 		init_waitqueue_head(&s->s_wait_unfrozen);
 		s->s_maxbytes = MAX_NON_LFS;
 		s->s_op = &default_op;
@@ -126,6 +134,7 @@ static inline void destroy_super(struct super_block *s)
 #ifdef CONFIG_SMP
 	free_percpu(s->s_files);
 #endif
+	cleanup_srcu_struct(&s->s_dquot.dq_srcu);
 	security_sb_free(s);
 	kfree(s->s_subtype);
 	kfree(s->s_options);
diff --git a/include/linux/quota.h b/include/linux/quota.h
index bc495d0..7e859eb 100644
--- a/include/linux/quota.h
+++ b/include/linux/quota.h
@@ -175,6 +175,7 @@ enum {
 #include <linux/spinlock.h>
 #include <linux/wait.h>
 #include <linux/percpu_counter.h>
+#include <linux/srcu.h>
 
 #include <linux/dqblk_xfs.h>
 #include <linux/dqblk_v1.h>
@@ -402,6 +403,7 @@ struct quota_info {
 	struct inode *files[MAXQUOTAS];		/* inodes of quotafiles */
 	struct mem_dqinfo info[MAXQUOTAS];	/* Information for each quota type */
 	const struct quota_format_ops *ops[MAXQUOTAS];	/* Operations for each type */
+	struct srcu_struct dq_srcu;
 };
 
 int register_quota_format(struct quota_format_type *fmt);
-- 
1.6.5.2
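
For readers skimming the series, the read-side locking pattern that each converted
helper above follows can be condensed to the sketch below. This is illustrative
only: `quota_op_example` is a made-up name, while the other identifiers come from
the patch itself.

```c
/* Condensed read-side pattern from the patch; quota_op_example is
 * a made-up name used only for illustration. */
static int quota_op_example(struct inode *inode)
{
	int idx;

	/* Plain RCU pins the quota state long enough for the check... */
	rcu_read_lock();
	if (!dquot_active(inode)) {
		rcu_read_unlock();
		return 0;		/* fast path: quota not enabled */
	}
	/* ...then SRCU keeps quota_info alive across the blocking part. */
	idx = srcu_read_lock(&dqopts(inode->i_sb)->dq_srcu);
	rcu_read_unlock();

	down_read(&dqopts(inode->i_sb)->dqptr_sem);
	/* ... charge or release quota here ... */
	up_read(&dqopts(inode->i_sb)->dqptr_sem);

	srcu_read_unlock(&dqopts(inode->i_sb)->dq_srcu, idx);
	return 0;
}
```

On the write side, quotaoff waits with synchronize_rcu() followed by
synchronize_srcu() before tearing the quota state down, as the dquot_disable()
hunk above shows.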



* [PATCH 05/19] quota: move quota internals from sb to quota_info
From: Dmitry Monakhov @ 2010-11-11 12:14 UTC (permalink / raw)
  To: linux-fsdevel; +Cc: jack, hch, Dmitry Monakhov

Most super_blocks don't use quota, so it is reasonable to hide the quota
internals behind an sb pointer. The only fields still on the sb are:
*flags      indicates state, checked without locks
*dqonoff_mutex
*dq_op
*qcop
We could hide dq_op/qcop behind the pointer too, but IMHO it is not necessary.
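
The resulting split can be pictured roughly as below. This is an illustrative
excerpt, not the full definitions; the field and helper names are taken from the
patch.

```c
/* Illustrative excerpt of the split, not the full definitions. */
struct quota_ctl_info {			/* stays embedded in struct super_block */
	unsigned int flags;		/* state bits, checked without locks */
	struct mutex dqonoff_mutex;
	const struct dquot_operations *dq_op;
	const struct quotactl_ops *qcop;
	struct quota_info *dq_opt;	/* allocated only while quota is in use */
};

static inline struct quota_ctl_info *dqctl(struct super_block *sb)
{
	return &sb->s_dquot;
}

static inline struct quota_info *dqopts(struct super_block *sb)
{
	/* valid only while quota is loaded (dqonoff_mutex or SRCU held) */
	return dqctl(sb)->dq_opt;
}
```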

Signed-off-by: Dmitry Monakhov <dmonakhov@openvz.org>
---
 fs/ext2/super.c              |    4 +-
 fs/ext3/super.c              |    4 +-
 fs/ext4/super.c              |    4 +-
 fs/gfs2/ops_fstype.c         |    4 +-
 fs/jfs/super.c               |    4 +-
 fs/ocfs2/quota_local.c       |    4 +-
 fs/ocfs2/super.c             |    6 +-
 fs/quota/dquot.c             |  217 ++++++++++++++++++++++++++----------------
 fs/quota/quota.c             |   58 ++++++------
 fs/reiserfs/super.c          |    4 +-
 fs/super.c                   |   10 --
 fs/sync.c                    |    4 +-
 fs/xfs/linux-2.6/xfs_super.c |    2 +-
 include/linux/fs.h           |    4 +-
 include/linux/quota.h        |   21 +++--
 include/linux/quotaops.h     |   15 ++-
 16 files changed, 210 insertions(+), 155 deletions(-)

diff --git a/fs/ext2/super.c b/fs/ext2/super.c
index 7727491..de8d2c4 100644
--- a/fs/ext2/super.c
+++ b/fs/ext2/super.c
@@ -1055,8 +1055,8 @@ static int ext2_fill_super(struct super_block *sb, void *data, int silent)
 	sb->s_xattr = ext2_xattr_handlers;
 
 #ifdef CONFIG_QUOTA
-	sb->dq_op = &dquot_operations;
-	sb->s_qcop = &dquot_quotactl_ops;
+	dqctl(sb)->dq_op = &dquot_operations;
+	dqctl(sb)->qcop = &dquot_quotactl_ops;
 #endif
 
 	root = ext2_iget(sb, EXT2_ROOT_INO);
diff --git a/fs/ext3/super.c b/fs/ext3/super.c
index 5cd148a..9ed767d 100644
--- a/fs/ext3/super.c
+++ b/fs/ext3/super.c
@@ -1930,8 +1930,8 @@ static int ext3_fill_super (struct super_block *sb, void *data, int silent)
 	sb->s_export_op = &ext3_export_ops;
 	sb->s_xattr = ext3_xattr_handlers;
 #ifdef CONFIG_QUOTA
-	sb->s_qcop = &ext3_qctl_operations;
-	sb->dq_op = &ext3_quota_operations;
+	dqctl(sb)->qcop = &ext3_qctl_operations;
+	dqctl(sb)->dq_op = &ext3_quota_operations;
 #endif
 	INIT_LIST_HEAD(&sbi->s_orphan); /* unlinked but open files */
 	mutex_init(&sbi->s_orphan_lock);
diff --git a/fs/ext4/super.c b/fs/ext4/super.c
index 053aee4..698dc6d 100644
--- a/fs/ext4/super.c
+++ b/fs/ext4/super.c
@@ -2939,8 +2939,8 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent)
 	sb->s_export_op = &ext4_export_ops;
 	sb->s_xattr = ext4_xattr_handlers;
 #ifdef CONFIG_QUOTA
-	sb->s_qcop = &ext4_qctl_operations;
-	sb->dq_op = &ext4_quota_operations;
+	dqctl(sb)->qcop = &ext4_qctl_operations;
+	dqctl(sb)->dq_op = &ext4_quota_operations;
 #endif
 	INIT_LIST_HEAD(&sbi->s_orphan); /* unlinked but open files */
 	mutex_init(&sbi->s_orphan_lock);
diff --git a/fs/gfs2/ops_fstype.c b/fs/gfs2/ops_fstype.c
index 1e52207..43d0a24 100644
--- a/fs/gfs2/ops_fstype.c
+++ b/fs/gfs2/ops_fstype.c
@@ -1167,8 +1167,8 @@ static int fill_super(struct super_block *sb, struct gfs2_args *args, int silent
 	sb->s_op = &gfs2_super_ops;
 	sb->s_export_op = &gfs2_export_ops;
 	sb->s_xattr = gfs2_xattr_handlers;
-	sb->s_qcop = &gfs2_quotactl_ops;
-	dqopts(sb)->flags |= DQUOT_QUOTA_SYS_FILE;
+	dqctl(sb)->qcop = &gfs2_quotactl_ops;
+	dqctl(sb)->flags |= DQUOT_QUOTA_SYS_FILE;
 	sb->s_time_gran = 1;
 	sb->s_maxbytes = MAX_LFS_FILESIZE;
 
diff --git a/fs/jfs/super.c b/fs/jfs/super.c
index b612adf..a8a94e6 100644
--- a/fs/jfs/super.c
+++ b/fs/jfs/super.c
@@ -477,8 +477,8 @@ static int jfs_fill_super(struct super_block *sb, void *data, int silent)
 	sb->s_op = &jfs_super_operations;
 	sb->s_export_op = &jfs_export_operations;
 #ifdef CONFIG_QUOTA
-	sb->dq_op = &dquot_operations;
-	sb->s_qcop = &dquot_quotactl_ops;
+	dqctl(sb)->dq_op = &dquot_operations;
+	dqctl(sb)->qcop = &dquot_quotactl_ops;
 #endif
 
 	/*
diff --git a/fs/ocfs2/quota_local.c b/fs/ocfs2/quota_local.c
index 056cb24..7c30ba3 100644
--- a/fs/ocfs2/quota_local.c
+++ b/fs/ocfs2/quota_local.c
@@ -596,7 +596,7 @@ int ocfs2_finish_quota_recovery(struct ocfs2_super *osb,
 	unsigned int flags;
 
 	mlog(ML_NOTICE, "Finishing quota recovery in slot %u\n", slot_num);
-	mutex_lock(&dqopts(sb)->dqonoff_mutex);
+	mutex_lock(&dqctl(sb)->dqonoff_mutex);
 	for (type = 0; type < MAXQUOTAS; type++) {
 		if (list_empty(&(rec->r_list[type])))
 			continue;
@@ -672,7 +672,7 @@ out_put:
 			break;
 	}
 out:
-	mutex_unlock(&dqopts(sb)->dqonoff_mutex);
+	mutex_unlock(&dqctl(sb)->dqonoff_mutex);
 	kfree(rec);
 	return status;
 }
diff --git a/fs/ocfs2/super.c b/fs/ocfs2/super.c
index 16065ae..b6dcb65 100644
--- a/fs/ocfs2/super.c
+++ b/fs/ocfs2/super.c
@@ -907,7 +907,7 @@ static int ocfs2_enable_quotas(struct ocfs2_super *osb)
 	int status;
 	int type;
 
-	dqopts(sb)->flags |= DQUOT_QUOTA_SYS_FILE | DQUOT_NEGATIVE_USAGE;
+	dqctl(sb)->flags |= DQUOT_QUOTA_SYS_FILE | DQUOT_NEGATIVE_USAGE;
 	for (type = 0; type < MAXQUOTAS; type++) {
 		if (!OCFS2_HAS_RO_COMPAT_FEATURE(sb, feature[type]))
 			continue;
@@ -2015,8 +2015,8 @@ static int ocfs2_initialize_super(struct super_block *sb,
 	sb->s_fs_info = osb;
 	sb->s_op = &ocfs2_sops;
 	sb->s_export_op = &ocfs2_export_ops;
-	sb->s_qcop = &ocfs2_quotactl_ops;
-	sb->dq_op = &ocfs2_quota_operations;
+	dqctl(sb)->qcop = &ocfs2_quotactl_ops;
+	dqctl(sb)->dq_op = &ocfs2_quota_operations;
 	sb->s_xattr = ocfs2_xattr_handlers;
 	sb->s_time_gran = 1;
 	sb->s_flags |= MS_NOATIME;
diff --git a/fs/quota/dquot.c b/fs/quota/dquot.c
index 7e937b0..78e48f3 100644
--- a/fs/quota/dquot.c
+++ b/fs/quota/dquot.c
@@ -332,7 +332,7 @@ static inline int dquot_dirty(struct dquot *dquot)
 
 static inline int mark_dquot_dirty(struct dquot *dquot)
 {
-	return dquot->dq_sb->dq_op->mark_dirty(dquot);
+	return dqctl(dquot->dq_sb)->dq_op->mark_dirty(dquot);
 }
 
 /* Mark dquot dirty in atomic manner, and return it's old dirty flag state */
@@ -406,16 +406,16 @@ int dquot_acquire(struct dquot *dquot)
 	mutex_lock(&dquot->dq_lock);
 	mutex_lock(&dqopt->dqio_mutex);
 	if (!test_bit(DQ_READ_B, &dquot->dq_flags))
-		ret = dqopt->ops[dquot->dq_type]->read_dqblk(dquot);
+		ret = dqopt->fmt_ops[dquot->dq_type]->read_dqblk(dquot);
 	if (ret < 0)
 		goto out_iolock;
 	set_bit(DQ_READ_B, &dquot->dq_flags);
 	/* Instantiate dquot if needed */
 	if (!test_bit(DQ_ACTIVE_B, &dquot->dq_flags) && !dquot->dq_off) {
-		ret = dqopt->ops[dquot->dq_type]->commit_dqblk(dquot);
+		ret = dqopt->fmt_ops[dquot->dq_type]->commit_dqblk(dquot);
 		/* Write the info if needed */
 		if (info_dirty(&dqopt->info[dquot->dq_type])) {
-			ret2 = dqopt->ops[dquot->dq_type]->write_file_info(
+			ret2 = dqopt->fmt_ops[dquot->dq_type]->write_file_info(
 						dquot->dq_sb, dquot->dq_type);
 		}
 		if (ret < 0)
@@ -451,9 +451,9 @@ int dquot_commit(struct dquot *dquot)
 	/* Inactive dquot can be only if there was error during read/init
 	 * => we have better not writing it */
 	if (test_bit(DQ_ACTIVE_B, &dquot->dq_flags)) {
-		ret = dqopt->ops[dquot->dq_type]->commit_dqblk(dquot);
+		ret = dqopt->fmt_ops[dquot->dq_type]->commit_dqblk(dquot);
 		if (info_dirty(&dqopt->info[dquot->dq_type])) {
-			ret2 = dqopt->ops[dquot->dq_type]->write_file_info(
+			ret2 = dqopt->fmt_ops[dquot->dq_type]->write_file_info(
 						dquot->dq_sb, dquot->dq_type);
 		}
 		if (ret >= 0)
@@ -478,11 +478,11 @@ int dquot_release(struct dquot *dquot)
 	if (atomic_read(&dquot->dq_count) > 1)
 		goto out_dqlock;
 	mutex_lock(&dqopt->dqio_mutex);
-	if (dqopt->ops[dquot->dq_type]->release_dqblk) {
-		ret = dqopt->ops[dquot->dq_type]->release_dqblk(dquot);
+	if (dqopt->fmt_ops[dquot->dq_type]->release_dqblk) {
+		ret = dqopt->fmt_ops[dquot->dq_type]->release_dqblk(dquot);
 		/* Write the info */
 		if (info_dirty(&dqopt->info[dquot->dq_type])) {
-			ret2 = dqopt->ops[dquot->dq_type]->write_file_info(
+			ret2 = dqopt->fmt_ops[dquot->dq_type]->write_file_info(
 						dquot->dq_sb, dquot->dq_type);
 		}
 		if (ret >= 0)
@@ -504,7 +504,7 @@ EXPORT_SYMBOL(dquot_destroy);
 
 static inline void do_destroy_dquot(struct dquot *dquot)
 {
-	dquot->dq_sb->dq_op->destroy_dquot(dquot);
+	dqctl(dquot->dq_sb)->dq_op->destroy_dquot(dquot);
 }
 
 /* Invalidate all dquots on the list. Note that this function is called after
@@ -568,7 +568,7 @@ int dquot_scan_active(struct super_block *sb,
 	struct dquot *dquot, *old_dquot = NULL;
 	int ret = 0;
 
-	mutex_lock(&dqopts(sb)->dqonoff_mutex);
+	mutex_lock(&dqctl(sb)->dqonoff_mutex);
 	spin_lock(&dq_list_lock);
 	list_for_each_entry(dquot, &inuse_list, dq_inuse) {
 		if (!test_bit(DQ_ACTIVE_B, &dquot->dq_flags))
@@ -591,7 +591,7 @@ int dquot_scan_active(struct super_block *sb,
 	spin_unlock(&dq_list_lock);
 out:
 	dqput(old_dquot);
-	mutex_unlock(&dqopts(sb)->dqonoff_mutex);
+	mutex_unlock(&dqctl(sb)->dqonoff_mutex);
 	return ret;
 }
 EXPORT_SYMBOL(dquot_scan_active);
@@ -600,10 +600,11 @@ int dquot_quota_sync(struct super_block *sb, int type, int wait)
 {
 	struct list_head *dirty;
 	struct dquot *dquot;
-	struct quota_info *dqopt = dqopts(sb);
+	struct quota_info *dqopt;
 	int cnt;
 
-	mutex_lock(&dqopt->dqonoff_mutex);
+	mutex_lock(&dqctl(sb)->dqonoff_mutex);
+	dqopt = dqopts(sb);
 	for (cnt = 0; cnt < MAXQUOTAS; cnt++) {
 		if (type != -1 && cnt != type)
 			continue;
@@ -625,7 +626,7 @@ int dquot_quota_sync(struct super_block *sb, int type, int wait)
 			atomic_inc(&dquot->dq_count);
 			spin_unlock(&dq_list_lock);
 			dqstats_inc(DQST_LOOKUPS);
-			sb->dq_op->write_dquot(dquot);
+			dqctl(sb)->dq_op->write_dquot(dquot);
 			dqput(dquot);
 			spin_lock(&dq_list_lock);
 		}
@@ -635,11 +636,11 @@ int dquot_quota_sync(struct super_block *sb, int type, int wait)
 	for (cnt = 0; cnt < MAXQUOTAS; cnt++)
 		if ((cnt == type || type == -1) && sb_has_quota_active(sb, cnt)
 		    && info_dirty(&dqopt->info[cnt]))
-			sb->dq_op->write_info(sb, cnt);
+			dqctl(sb)->dq_op->write_info(sb, cnt);
 	dqstats_inc(DQST_SYNCS);
-	mutex_unlock(&dqopt->dqonoff_mutex);
+	mutex_unlock(&dqctl(sb)->dqonoff_mutex);
 
-	if (!wait || (dqopts(sb)->flags & DQUOT_QUOTA_SYS_FILE))
+	if (!wait || (dqctl(sb)->flags & DQUOT_QUOTA_SYS_FILE))
 		return 0;
 
 	/* This is not very clever (and fast) but currently I don't know about
@@ -653,18 +654,19 @@ int dquot_quota_sync(struct super_block *sb, int type, int wait)
 	 * Now when everything is written we can discard the pagecache so
 	 * that userspace sees the changes.
 	 */
-	mutex_lock(&dqopts(sb)->dqonoff_mutex);
+	mutex_lock(&dqctl(sb)->dqonoff_mutex);
+	dqopt = dqopts(sb);
 	for (cnt = 0; cnt < MAXQUOTAS; cnt++) {
 		if (type != -1 && cnt != type)
 			continue;
 		if (!sb_has_quota_active(sb, cnt))
 			continue;
-		mutex_lock_nested(&dqopts(sb)->files[cnt]->i_mutex,
+		mutex_lock_nested(&dqopt->files[cnt]->i_mutex,
 				  I_MUTEX_QUOTA);
-		truncate_inode_pages(&dqopts(sb)->files[cnt]->i_data, 0);
-		mutex_unlock(&dqopts(sb)->files[cnt]->i_mutex);
+		truncate_inode_pages(&dqopt->files[cnt]->i_data, 0);
+		mutex_unlock(&dqopt->files[cnt]->i_mutex);
 	}
-	mutex_unlock(&dqopts(sb)->dqonoff_mutex);
+	mutex_unlock(&dqctl(sb)->dqonoff_mutex);
 
 	return 0;
 }
@@ -743,7 +745,7 @@ we_slept:
 	if (test_bit(DQ_ACTIVE_B, &dquot->dq_flags) && dquot_dirty(dquot)) {
 		spin_unlock(&dq_list_lock);
 		/* Commit dquot before releasing */
-		ret = dquot->dq_sb->dq_op->write_dquot(dquot);
+		ret = dqctl(dquot->dq_sb)->dq_op->write_dquot(dquot);
 		if (ret < 0) {
 			quota_error(dquot->dq_sb, "Can't write quota structure"
 				    " (error %d). Quota may get out of sync!",
@@ -762,7 +764,7 @@ we_slept:
 	clear_dquot_dirty(dquot);
 	if (test_bit(DQ_ACTIVE_B, &dquot->dq_flags)) {
 		spin_unlock(&dq_list_lock);
-		dquot->dq_sb->dq_op->release_dquot(dquot);
+		dqctl(dquot->dq_sb)->dq_op->release_dquot(dquot);
 		goto we_slept;
 	}
 	atomic_dec(&dquot->dq_count);
@@ -785,7 +787,7 @@ static struct dquot *get_empty_dquot(struct super_block *sb, int type)
 {
 	struct dquot *dquot;
 
-	dquot = sb->dq_op->alloc_dquot(sb, type);
+	dquot = dqctl(sb)->dq_op->alloc_dquot(sb, type);
 	if(!dquot)
 		return NULL;
 
@@ -864,7 +866,7 @@ we_slept:
 	wait_on_dquot(dquot);
 	/* Read the dquot / allocate space in quota file */
 	if (!test_bit(DQ_ACTIVE_B, &dquot->dq_flags) &&
-	    sb->dq_op->acquire_dquot(dquot) < 0) {
+	    dqctl(sb)->dq_op->acquire_dquot(dquot) < 0) {
 		dqput(dquot);
 		dquot = NULL;
 		goto out;
@@ -1039,7 +1041,7 @@ static void drop_dquot_ref(struct super_block *sb, int type)
 {
 	LIST_HEAD(tofree_head);
 
-	if (sb->dq_op) {
+	if (dqctl(sb)->dq_op) {
 		down_write(&dqopts(sb)->dqptr_sem);
 		remove_dquot_ref(sb, type, &tofree_head);
 		up_write(&dqopts(sb)->dqptr_sem);
@@ -1088,7 +1090,7 @@ void dquot_free_reserved_space(struct dquot *dquot, qsize_t number)
 
 static void dquot_decr_inodes(struct dquot *dquot, qsize_t number)
 {
-	if (sb_dqopts(dquot)->flags & DQUOT_NEGATIVE_USAGE ||
+	if (dqctl(dquot->dq_sb)->flags & DQUOT_NEGATIVE_USAGE ||
 	    dquot->dq_dqb.dqb_curinodes >= number)
 		dquot->dq_dqb.dqb_curinodes -= number;
 	else
@@ -1100,7 +1102,7 @@ static void dquot_decr_inodes(struct dquot *dquot, qsize_t number)
 
 static void dquot_decr_space(struct dquot *dquot, qsize_t number)
 {
-	if (sb_dqopts(dquot)->flags & DQUOT_NEGATIVE_USAGE ||
+	if (dqctl(dquot->dq_sb)->flags & DQUOT_NEGATIVE_USAGE ||
 	    dquot->dq_dqb.dqb_curspace >= number)
 		dquot->dq_dqb.dqb_curspace -= number;
 	else
@@ -1479,8 +1481,8 @@ static qsize_t *inode_reserved_space(struct inode * inode)
 {
 	/* Filesystem must explicitly define it's own method in order to use
 	 * quota reservation interface */
-	BUG_ON(!inode->i_sb->dq_op->get_reserved_space);
-	return inode->i_sb->dq_op->get_reserved_space(inode);
+	BUG_ON(!dqctl(inode->i_sb)->dq_op->get_reserved_space);
+	return dqctl(inode->i_sb)->dq_op->get_reserved_space(inode);
 }
 
 void inode_add_rsv_space(struct inode *inode, qsize_t number)
@@ -1512,7 +1514,7 @@ static qsize_t inode_get_rsv_space(struct inode *inode)
 {
 	qsize_t ret;
 
-	if (!inode->i_sb->dq_op->get_reserved_space)
+	if (!dqctl(inode->i_sb)->dq_op->get_reserved_space)
 		return 0;
 	spin_lock(&inode->i_lock);
 	ret = *inode_reserved_space(inode);
@@ -1906,7 +1908,7 @@ int dquot_commit_info(struct super_block *sb, int type)
 	struct quota_info *dqopt = dqopts(sb);
 
 	mutex_lock(&dqopt->dqio_mutex);
-	ret = dqopt->ops[type]->write_file_info(sb, type);
+	ret = dqopt->fmt_ops[type]->write_file_info(sb, type);
 	mutex_unlock(&dqopt->dqio_mutex);
 	return ret;
 }
@@ -1942,24 +1944,55 @@ EXPORT_SYMBOL(dquot_file_open);
 
 int dquot_get_dqfmt(struct super_block *sb, int type, unsigned int *fmt)
 {
-	mutex_lock(&dqopts(sb)->dqonoff_mutex);
+	mutex_lock(&dqctl(sb)->dqonoff_mutex);
 	if (!sb_has_quota_active(sb, type)) {
-		mutex_unlock(&dqopts(sb)->dqonoff_mutex);
+		mutex_unlock(&dqctl(sb)->dqonoff_mutex);
 		return -ESRCH;
 	}
 	*fmt = dqopts(sb)->info[type].dqi_format->qf_fmt_id;
-	mutex_unlock(&dqopts(sb)->dqonoff_mutex);
+	mutex_unlock(&dqctl(sb)->dqonoff_mutex);
 	return 0;
 }
 EXPORT_SYMBOL(dquot_get_dqfmt);
 
+/* The next two helpers are called with dqonoff_mutex held */
+static int alloc_quota_info(struct quota_ctl_info *dqctl)
+{
+	int err = -ENOMEM;
+	struct quota_info *dqopt;
+	BUG_ON(dqctl->dq_opt);
+
+	dqopt = kzalloc(sizeof(*dqopt), GFP_NOFS);
+	if (!dqopt)
+		return err;
+
+	err = init_srcu_struct(&dqopt->dq_srcu);
+	if (err) {
+		kfree(dqopt);
+		return err;
+	}
+	mutex_init(&dqopt->dqio_mutex);
+	init_rwsem(&dqopt->dqptr_sem);
+	dqctl->dq_opt = dqopt;
+	return 0;
+}
+
+static void free_quota_info(struct quota_ctl_info *dqctl)
+{
+	if (dqctl->dq_opt) {
+		cleanup_srcu_struct(&dqctl->dq_opt->dq_srcu);
+		kfree(dqctl->dq_opt);
+		dqctl->dq_opt = NULL;
+	}
+}
+
 /*
  * Turn quota off on a device. type == -1 ==> quotaoff for all types (umount)
  */
 int dquot_disable(struct super_block *sb, int type, unsigned int flags)
 {
 	int cnt, ret = 0;
-	struct quota_info *dqopt = dqopts(sb);
+	struct quota_ctl_info *qctl = dqctl(sb);
+	struct quota_info *dqopt;
 	struct inode *toputinode[MAXQUOTAS];
 
 	/* Cannot turn off usage accounting without turning off limits, or
@@ -1970,15 +2003,15 @@ int dquot_disable(struct super_block *sb, int type, unsigned int flags)
 		return -EINVAL;
 
 	/* We need to serialize quota_off() for device */
-	mutex_lock(&dqopt->dqonoff_mutex);
-
+	mutex_lock(&qctl->dqonoff_mutex);
+	dqopt = dqopts(sb);
 	/*
 	 * Skip everything if there's nothing to do. We have to do this because
 	 * sometimes we are called when fill_super() failed and calling
 	 * sync_fs() in such cases does no good.
 	 */
 	if (!sb_any_quota_loaded(sb)) {
-		mutex_unlock(&dqopt->dqonoff_mutex);
+		mutex_unlock(&qctl->dqonoff_mutex);
 		return 0;
 	}
 	for (cnt = 0; cnt < MAXQUOTAS; cnt++) {
@@ -1990,16 +2023,16 @@ int dquot_disable(struct super_block *sb, int type, unsigned int flags)
 
 		if (flags & DQUOT_SUSPENDED) {
 			spin_lock(&dq_state_lock);
-			dqopt->flags |=
+			qctl->flags |=
 				dquot_state_flag(DQUOT_SUSPENDED, cnt);
 			spin_unlock(&dq_state_lock);
 		} else {
 			spin_lock(&dq_state_lock);
-			dqopt->flags &= ~dquot_state_flag(flags, cnt);
+			qctl->flags &= ~dquot_state_flag(flags, cnt);
 			/* Turning off suspended quotas? */
 			if (!sb_has_quota_loaded(sb, cnt) &&
 			    sb_has_quota_suspended(sb, cnt)) {
-				dqopt->flags &=	~dquot_state_flag(
+				qctl->flags &=	~dquot_state_flag(
 							DQUOT_SUSPENDED, cnt);
 				spin_unlock(&dq_state_lock);
 				iput(dqopt->files[cnt]);
@@ -2037,9 +2070,9 @@ int dquot_disable(struct super_block *sb, int type, unsigned int flags)
 		 * should be only users of the info. No locks needed.
 		 */
 		if (info_dirty(&dqopt->info[cnt]))
-			sb->dq_op->write_info(sb, cnt);
-		if (dqopt->ops[cnt]->free_file_info)
-			dqopt->ops[cnt]->free_file_info(sb, cnt);
+			qctl->dq_op->write_info(sb, cnt);
+		if (dqopt->fmt_ops[cnt]->free_file_info)
+			dqopt->fmt_ops[cnt]->free_file_info(sb, cnt);
 		put_quota_format(dqopt->info[cnt].dqi_format);
 
 		if (!sb_has_quota_loaded(sb, cnt))
@@ -2047,12 +2080,15 @@ int dquot_disable(struct super_block *sb, int type, unsigned int flags)
 		dqopt->info[cnt].dqi_flags = 0;
 		dqopt->info[cnt].dqi_igrace = 0;
 		dqopt->info[cnt].dqi_bgrace = 0;
-		dqopt->ops[cnt] = NULL;
+		dqopt->fmt_ops[cnt] = NULL;
 	}
-	mutex_unlock(&dqopt->dqonoff_mutex);
+	if (!sb_any_quota_loaded(sb))
+		free_quota_info(qctl);
+
+	mutex_unlock(&qctl->dqonoff_mutex);
 
 	/* Skip syncing and setting flags if quota files are hidden */
-	if (dqopt->flags & DQUOT_QUOTA_SYS_FILE)
+	if (qctl->flags & DQUOT_QUOTA_SYS_FILE)
 		goto put_inodes;
 
 	/* Sync the superblock so that buffers with quota data are written to
@@ -2067,7 +2103,7 @@ int dquot_disable(struct super_block *sb, int type, unsigned int flags)
 	 * changes done by userspace on the next quotaon() */
 	for (cnt = 0; cnt < MAXQUOTAS; cnt++)
 		if (toputinode[cnt]) {
-			mutex_lock(&dqopt->dqonoff_mutex);
+			mutex_lock(&qctl->dqonoff_mutex);
 			/* If quota was reenabled in the meantime, we have
 			 * nothing to do */
 			if (!sb_has_quota_loaded(sb, cnt)) {
@@ -2080,7 +2116,7 @@ int dquot_disable(struct super_block *sb, int type, unsigned int flags)
 				mutex_unlock(&toputinode[cnt]->i_mutex);
 				mark_inode_dirty_sync(toputinode[cnt]);
 			}
-			mutex_unlock(&dqopt->dqonoff_mutex);
+			mutex_unlock(&qctl->dqonoff_mutex);
 		}
 	if (sb->s_bdev)
 		invalidate_bdev(sb->s_bdev);
@@ -2123,7 +2159,7 @@ static int vfs_load_quota_inode(struct inode *inode, int type, int format_id,
 {
 	struct quota_format_type *fmt = find_quota_format(format_id);
 	struct super_block *sb = inode->i_sb;
-	struct quota_info *dqopt = dqopts(sb);
+	struct quota_info *dqopt;
 	int error;
 	int oldflags = -1;
 
@@ -2147,7 +2183,7 @@ static int vfs_load_quota_inode(struct inode *inode, int type, int format_id,
 		goto out_fmt;
 	}
 
-	if (!(dqopt->flags & DQUOT_QUOTA_SYS_FILE)) {
+	if (!(dqctl(sb)->flags & DQUOT_QUOTA_SYS_FILE)) {
 		/* As we bypass the pagecache we must now flush all the
 		 * dirty data and invalidate caches so that kernel sees
 		 * changes from userspace. It is not enough to just flush
@@ -2157,13 +2193,14 @@ static int vfs_load_quota_inode(struct inode *inode, int type, int format_id,
 		sync_filesystem(sb);
 		invalidate_bdev(sb->s_bdev);
 	}
-	mutex_lock(&dqopt->dqonoff_mutex);
+	mutex_lock(&dqctl(sb)->dqonoff_mutex);
+	dqopt = dqopts(sb);
 	if (sb_has_quota_loaded(sb, type)) {
 		error = -EBUSY;
 		goto out_lock;
 	}
 
-	if (!(dqopt->flags & DQUOT_QUOTA_SYS_FILE)) {
+	if (!(dqctl(sb)->flags & DQUOT_QUOTA_SYS_FILE)) {
 		/* We don't want quota and atime on quota files (deadlocks
 		 * possible) Also nobody should write to the file - we use
 		 * special IO operations which ignore the immutable bit. */
@@ -2187,23 +2224,23 @@ static int vfs_load_quota_inode(struct inode *inode, int type, int format_id,
 	if (!fmt->qf_ops->check_quota_file(sb, type))
 		goto out_file_init;
 
-	dqopt->ops[type] = fmt->qf_ops;
+	dqopt->fmt_ops[type] = fmt->qf_ops;
 	dqopt->info[type].dqi_format = fmt;
 	dqopt->info[type].dqi_fmt_id = format_id;
 	INIT_LIST_HEAD(&dqopt->info[type].dqi_dirty_list);
 	mutex_lock(&dqopt->dqio_mutex);
-	error = dqopt->ops[type]->read_file_info(sb, type);
+	error = dqopt->fmt_ops[type]->read_file_info(sb, type);
 	if (error < 0) {
 		mutex_unlock(&dqopt->dqio_mutex);
 		goto out_file_init;
 	}
 	mutex_unlock(&dqopt->dqio_mutex);
 	spin_lock(&dq_state_lock);
-	dqopt->flags |= dquot_state_flag(flags, type);
+	dqctl(sb)->flags |= dquot_state_flag(flags, type);
 	spin_unlock(&dq_state_lock);
 
 	add_dquot_ref(sb, type);
-	mutex_unlock(&dqopt->dqonoff_mutex);
+	mutex_unlock(&dqctl(sb)->dqonoff_mutex);
 
 	return 0;
 
@@ -2219,7 +2256,15 @@ out_lock:
 		inode->i_flags |= oldflags;
 		mutex_unlock(&inode->i_mutex);
 	}
-	mutex_unlock(&dqopt->dqonoff_mutex);
+	/* We have failed to enable quota, so the quota flags are unchanged.
+	 * If all quota is disabled, then it was already disabled before the
+	 * call, so there are no quota_info users left and we can skip the
+	 * synchronization stage.
+	 */
+	if (!sb_any_quota_loaded(sb))
+		free_quota_info(dqctl(sb));
+
+	mutex_unlock(&dqctl(sb)->dqonoff_mutex);
 out_fmt:
 	put_quota_format(fmt);
 
@@ -2229,7 +2274,7 @@ out_fmt:
 /* Reenable quotas on remount RW */
 int dquot_resume(struct super_block *sb, int type)
 {
-	struct quota_info *dqopt = dqopts(sb);
+	struct quota_ctl_info *qctl = dqctl(sb);
 	struct inode *inode;
 	int ret = 0, cnt;
 	unsigned int flags;
@@ -2238,24 +2283,24 @@ int dquot_resume(struct super_block *sb, int type)
 		if (type != -1 && cnt != type)
 			continue;
 
-		mutex_lock(&dqopt->dqonoff_mutex);
+		mutex_lock(&qctl->dqonoff_mutex);
 		if (!sb_has_quota_suspended(sb, cnt)) {
-			mutex_unlock(&dqopt->dqonoff_mutex);
+			mutex_unlock(&qctl->dqonoff_mutex);
 			continue;
 		}
-		inode = dqopt->files[cnt];
-		dqopt->files[cnt] = NULL;
+		inode = qctl->dq_opt->files[cnt];
+		qctl->dq_opt->files[cnt] = NULL;
 		spin_lock(&dq_state_lock);
-		flags = dqopt->flags & dquot_state_flag(DQUOT_USAGE_ENABLED |
+		flags = qctl->flags & dquot_state_flag(DQUOT_USAGE_ENABLED |
 							DQUOT_LIMITS_ENABLED,
 							cnt);
-		dqopt->flags &= ~dquot_state_flag(DQUOT_STATE_FLAGS, cnt);
+		qctl->flags &= ~dquot_state_flag(DQUOT_STATE_FLAGS, cnt);
 		spin_unlock(&dq_state_lock);
-		mutex_unlock(&dqopt->dqonoff_mutex);
+		mutex_unlock(&qctl->dqonoff_mutex);
 
 		flags = dquot_generic_flag(flags, cnt);
 		ret = vfs_load_quota_inode(inode, cnt,
-				dqopt->info[cnt].dqi_fmt_id, flags);
+				dqopts(sb)->info[cnt].dqi_fmt_id, flags);
 		iput(inode);
 	}
 
@@ -2266,9 +2311,18 @@ EXPORT_SYMBOL(dquot_resume);
 int dquot_quota_on(struct super_block *sb, int type, int format_id,
 		   struct path *path)
 {
+	struct quota_ctl_info *qctl = dqctl(sb);
 	int error = security_quota_on(path->dentry);
 	if (error)
 		return error;
+
+	mutex_lock(&qctl->dqonoff_mutex);
+	if (!sb_any_quota_loaded(sb))
+		error = alloc_quota_info(qctl);
+	mutex_unlock(&qctl->dqonoff_mutex);
+	if (error)
+		goto out;
+
 	/* Quota file not on the same filesystem? */
 	if (path->mnt->mnt_sb != sb)
 		error = -EXDEV;
@@ -2276,6 +2330,7 @@ int dquot_quota_on(struct super_block *sb, int type, int format_id,
 		error = vfs_load_quota_inode(path->dentry->d_inode, type,
 					     format_id, DQUOT_USAGE_ENABLED |
 					     DQUOT_LIMITS_ENABLED);
+out:
 	return error;
 }
 EXPORT_SYMBOL(dquot_quota_on);
@@ -2289,7 +2344,7 @@ int dquot_enable(struct inode *inode, int type, int format_id,
 {
 	int ret = 0;
 	struct super_block *sb = inode->i_sb;
-	struct quota_info *dqopt = dqopts(sb);
+	struct quota_ctl_info *qctl = dqctl(sb);
 
 	/* Just unsuspend quotas? */
 	BUG_ON(flags & DQUOT_SUSPENDED);
@@ -2298,10 +2353,10 @@ int dquot_enable(struct inode *inode, int type, int format_id,
 		return 0;
 	/* Just updating flags needed? */
 	if (sb_has_quota_loaded(sb, type)) {
-		mutex_lock(&dqopt->dqonoff_mutex);
+		mutex_lock(&qctl->dqonoff_mutex);
 		/* Now do a reliable test... */
 		if (!sb_has_quota_loaded(sb, type)) {
-			mutex_unlock(&dqopt->dqonoff_mutex);
+			mutex_unlock(&qctl->dqonoff_mutex);
 			goto load_quota;
 		}
 		if (flags & DQUOT_USAGE_ENABLED &&
@@ -2315,10 +2370,10 @@ int dquot_enable(struct inode *inode, int type, int format_id,
 			goto out_lock;
 		}
 		spin_lock(&dq_state_lock);
-		dqopts(sb)->flags |= dquot_state_flag(flags, type);
+		qctl->flags |= dquot_state_flag(flags, type);
 		spin_unlock(&dq_state_lock);
 out_lock:
-		mutex_unlock(&dqopt->dqonoff_mutex);
+		mutex_unlock(&qctl->dqonoff_mutex);
 		return ret;
 	}
 
@@ -2527,9 +2582,9 @@ int dquot_get_dqinfo(struct super_block *sb, int type, struct if_dqinfo *ii)
 {
 	struct mem_dqinfo *mi;
   
-	mutex_lock(&dqopts(sb)->dqonoff_mutex);
+	mutex_lock(&dqctl(sb)->dqonoff_mutex);
 	if (!sb_has_quota_active(sb, type)) {
-		mutex_unlock(&dqopts(sb)->dqonoff_mutex);
+		mutex_unlock(&dqctl(sb)->dqonoff_mutex);
 		return -ESRCH;
 	}
 	mi = dqopts(sb)->info + type;
@@ -2539,7 +2594,7 @@ int dquot_get_dqinfo(struct super_block *sb, int type, struct if_dqinfo *ii)
 	ii->dqi_flags = mi->dqi_flags & DQF_MASK;
 	ii->dqi_valid = IIF_ALL;
 	spin_unlock(&dq_data_lock);
-	mutex_unlock(&dqopts(sb)->dqonoff_mutex);
+	mutex_unlock(&dqctl(sb)->dqonoff_mutex);
 	return 0;
 }
 EXPORT_SYMBOL(dquot_get_dqinfo);
@@ -2550,7 +2605,7 @@ int dquot_set_dqinfo(struct super_block *sb, int type, struct if_dqinfo *ii)
 	struct mem_dqinfo *mi;
 	int err = 0;
 
-	mutex_lock(&dqopts(sb)->dqonoff_mutex);
+	mutex_lock(&dqctl(sb)->dqonoff_mutex);
 	if (!sb_has_quota_active(sb, type)) {
 		err = -ESRCH;
 		goto out;
@@ -2567,9 +2622,9 @@ int dquot_set_dqinfo(struct super_block *sb, int type, struct if_dqinfo *ii)
 	spin_unlock(&dq_data_lock);
 	mark_info_dirty(sb, type);
 	/* Force write to disk */
-	sb->dq_op->write_info(sb, type);
+	dqctl(sb)->dq_op->write_info(sb, type);
 out:
-	mutex_unlock(&dqopts(sb)->dqonoff_mutex);
+	mutex_unlock(&dqctl(sb)->dqonoff_mutex);
 	return err;
 }
 EXPORT_SYMBOL(dquot_set_dqinfo);
diff --git a/fs/quota/quota.c b/fs/quota/quota.c
index 3b1d315..4fc38d0 100644
--- a/fs/quota/quota.c
+++ b/fs/quota/quota.c
@@ -47,8 +47,8 @@ static int check_quotactl_permission(struct super_block *sb, int type, int cmd,
 
 static void quota_sync_one(struct super_block *sb, void *arg)
 {
-	if (sb->s_qcop && sb->s_qcop->quota_sync)
-		sb->s_qcop->quota_sync(sb, *(int *)arg, 1);
+	if (dqctl(sb)->qcop && dqctl(sb)->qcop->quota_sync)
+		dqctl(sb)->qcop->quota_sync(sb, *(int *)arg, 1);
 }
 
 static int quota_sync_all(int type)
@@ -66,22 +66,22 @@ static int quota_sync_all(int type)
 static int quota_quotaon(struct super_block *sb, int type, int cmd, qid_t id,
 		         struct path *path)
 {
-	if (!sb->s_qcop->quota_on && !sb->s_qcop->quota_on_meta)
+	if (!dqctl(sb)->qcop->quota_on && !dqctl(sb)->qcop->quota_on_meta)
 		return -ENOSYS;
-	if (sb->s_qcop->quota_on_meta)
-		return sb->s_qcop->quota_on_meta(sb, type, id);
+	if (dqctl(sb)->qcop->quota_on_meta)
+		return dqctl(sb)->qcop->quota_on_meta(sb, type, id);
 	if (IS_ERR(path))
 		return PTR_ERR(path);
-	return sb->s_qcop->quota_on(sb, type, id, path);
+	return dqctl(sb)->qcop->quota_on(sb, type, id, path);
 }
 
 static int quota_getfmt(struct super_block *sb, int type, void __user *addr)
 {
 	__u32 fmt;
 	int ret;
-	if (!sb->s_qcop->get_fmt)
+	if (!dqctl(sb)->qcop->get_fmt)
 		return -ENOSYS;
-	ret = sb->s_qcop->get_fmt(sb, type, &fmt);
+	ret = dqctl(sb)->qcop->get_fmt(sb, type, &fmt);
 	if (!ret && copy_to_user(addr, &fmt, sizeof(fmt)))
 		return -EFAULT;
 	return ret;
@@ -92,9 +92,9 @@ static int quota_getinfo(struct super_block *sb, int type, void __user *addr)
 	struct if_dqinfo info;
 	int ret;
 
-	if (!sb->s_qcop->get_info)
+	if (!dqctl(sb)->qcop->get_info)
 		return -ENOSYS;
-	ret = sb->s_qcop->get_info(sb, type, &info);
+	ret = dqctl(sb)->qcop->get_info(sb, type, &info);
 	if (!ret && copy_to_user(addr, &info, sizeof(info)))
 		return -EFAULT;
 	return ret;
@@ -106,9 +106,9 @@ static int quota_setinfo(struct super_block *sb, int type, void __user *addr)
 
 	if (copy_from_user(&info, addr, sizeof(info)))
 		return -EFAULT;
-	if (!sb->s_qcop->set_info)
+	if (!dqctl(sb)->qcop->set_info)
 		return -ENOSYS;
-	return sb->s_qcop->set_info(sb, type, &info);
+	return dqctl(sb)->qcop->set_info(sb, type, &info);
 }
 
 static void copy_to_if_dqblk(struct if_dqblk *dst, struct fs_disk_quota *src)
@@ -131,9 +131,9 @@ static int quota_getquota(struct super_block *sb, int type, qid_t id,
 	struct if_dqblk idq;
 	int ret;
 
-	if (!sb->s_qcop->get_dqblk)
+	if (!dqctl(sb)->qcop->get_dqblk)
 		return -ENOSYS;
-	ret = sb->s_qcop->get_dqblk(sb, type, id, &fdq);
+	ret = dqctl(sb)->qcop->get_dqblk(sb, type, id, &fdq);
 	if (ret)
 		return ret;
 	copy_to_if_dqblk(&idq, &fdq);
@@ -176,10 +176,10 @@ static int quota_setquota(struct super_block *sb, int type, qid_t id,
 
 	if (copy_from_user(&idq, addr, sizeof(idq)))
 		return -EFAULT;
-	if (!sb->s_qcop->set_dqblk)
+	if (!dqctl(sb)->qcop->set_dqblk)
 		return -ENOSYS;
 	copy_from_if_dqblk(&fdq, &idq);
-	return sb->s_qcop->set_dqblk(sb, type, id, &fdq);
+	return dqctl(sb)->qcop->set_dqblk(sb, type, id, &fdq);
 }
 
 static int quota_setxstate(struct super_block *sb, int cmd, void __user *addr)
@@ -188,9 +188,9 @@ static int quota_setxstate(struct super_block *sb, int cmd, void __user *addr)
 
 	if (copy_from_user(&flags, addr, sizeof(flags)))
 		return -EFAULT;
-	if (!sb->s_qcop->set_xstate)
+	if (!dqctl(sb)->qcop->set_xstate)
 		return -ENOSYS;
-	return sb->s_qcop->set_xstate(sb, flags, cmd);
+	return dqctl(sb)->qcop->set_xstate(sb, flags, cmd);
 }
 
 static int quota_getxstate(struct super_block *sb, void __user *addr)
@@ -198,9 +198,9 @@ static int quota_getxstate(struct super_block *sb, void __user *addr)
 	struct fs_quota_stat fqs;
 	int ret;
 
-	if (!sb->s_qcop->get_xstate)
+	if (!dqctl(sb)->qcop->get_xstate)
 		return -ENOSYS;
-	ret = sb->s_qcop->get_xstate(sb, &fqs);
+	ret = dqctl(sb)->qcop->get_xstate(sb, &fqs);
 	if (!ret && copy_to_user(addr, &fqs, sizeof(fqs)))
 		return -EFAULT;
 	return ret;
@@ -213,9 +213,9 @@ static int quota_setxquota(struct super_block *sb, int type, qid_t id,
 
 	if (copy_from_user(&fdq, addr, sizeof(fdq)))
 		return -EFAULT;
-	if (!sb->s_qcop->set_dqblk)
+	if (!dqctl(sb)->qcop->set_dqblk)
 		return -ENOSYS;
-	return sb->s_qcop->set_dqblk(sb, type, id, &fdq);
+	return dqctl(sb)->qcop->set_dqblk(sb, type, id, &fdq);
 }
 
 static int quota_getxquota(struct super_block *sb, int type, qid_t id,
@@ -224,9 +224,9 @@ static int quota_getxquota(struct super_block *sb, int type, qid_t id,
 	struct fs_disk_quota fdq;
 	int ret;
 
-	if (!sb->s_qcop->get_dqblk)
+	if (!dqctl(sb)->qcop->get_dqblk)
 		return -ENOSYS;
-	ret = sb->s_qcop->get_dqblk(sb, type, id, &fdq);
+	ret = dqctl(sb)->qcop->get_dqblk(sb, type, id, &fdq);
 	if (!ret && copy_to_user(addr, &fdq, sizeof(fdq)))
 		return -EFAULT;
 	return ret;
@@ -240,7 +240,7 @@ static int do_quotactl(struct super_block *sb, int type, int cmd, qid_t id,
 
 	if (type >= (XQM_COMMAND(cmd) ? XQM_MAXQUOTAS : MAXQUOTAS))
 		return -EINVAL;
-	if (!sb->s_qcop)
+	if (!dqctl(sb)->qcop)
 		return -ENOSYS;
 
 	ret = check_quotactl_permission(sb, type, cmd, id);
@@ -251,9 +251,9 @@ static int do_quotactl(struct super_block *sb, int type, int cmd, qid_t id,
 	case Q_QUOTAON:
 		return quota_quotaon(sb, type, cmd, id, path);
 	case Q_QUOTAOFF:
-		if (!sb->s_qcop->quota_off)
+		if (!dqctl(sb)->qcop->quota_off)
 			return -ENOSYS;
-		return sb->s_qcop->quota_off(sb, type);
+		return dqctl(sb)->qcop->quota_off(sb, type);
 	case Q_GETFMT:
 		return quota_getfmt(sb, type, addr);
 	case Q_GETINFO:
@@ -265,9 +265,9 @@ static int do_quotactl(struct super_block *sb, int type, int cmd, qid_t id,
 	case Q_SETQUOTA:
 		return quota_setquota(sb, type, id, addr);
 	case Q_SYNC:
-		if (!sb->s_qcop->quota_sync)
+		if (!dqctl(sb)->qcop->quota_sync)
 			return -ENOSYS;
-		return sb->s_qcop->quota_sync(sb, type, 1);
+		return dqctl(sb)->qcop->quota_sync(sb, type, 1);
 	case Q_XQUOTAON:
 	case Q_XQUOTAOFF:
 	case Q_XQUOTARM:
diff --git a/fs/reiserfs/super.c b/fs/reiserfs/super.c
index a07f30a..1fbd0d9 100644
--- a/fs/reiserfs/super.c
+++ b/fs/reiserfs/super.c
@@ -1408,8 +1408,8 @@ static int read_super_block(struct super_block *s, int offset)
 	s->s_op = &reiserfs_sops;
 	s->s_export_op = &reiserfs_export_ops;
 #ifdef CONFIG_QUOTA
-	s->s_qcop = &reiserfs_qctl_operations;
-	s->dq_op = &reiserfs_quota_operations;
+	dqctl(s)->qcop = &reiserfs_qctl_operations;
+	dqctl(s)->dq_op = &reiserfs_quota_operations;
 #endif
 
 	/* new format is limited by the 32 bit wide i_blocks field, want to
diff --git a/fs/super.c b/fs/super.c
index 473bdf6..60edc1f 100644
--- a/fs/super.c
+++ b/fs/super.c
@@ -54,16 +54,9 @@ static struct super_block *alloc_super(struct file_system_type *type)
 			s = NULL;
 			goto out;
 		}
-		if (init_srcu_struct(&s->s_dquot.dq_srcu)) {
-			security_sb_free(s);
-			kfree(s);
-			s = NULL;
-			goto out;
-		}
 #ifdef CONFIG_SMP
 		s->s_files = alloc_percpu(struct list_head);
 		if (!s->s_files) {
-			cleanup_srcu_struct(&s->s_dquot.dq_srcu);
 			security_sb_free(s);
 			kfree(s);
 			s = NULL;
@@ -110,9 +103,7 @@ static struct super_block *alloc_super(struct file_system_type *type)
 		atomic_set(&s->s_active, 1);
 		mutex_init(&s->s_vfs_rename_mutex);
 		lockdep_set_class(&s->s_vfs_rename_mutex, &type->s_vfs_rename_key);
-		mutex_init(&s->s_dquot.dqio_mutex);
 		mutex_init(&s->s_dquot.dqonoff_mutex);
-		init_rwsem(&s->s_dquot.dqptr_sem);
 
 		init_waitqueue_head(&s->s_wait_unfrozen);
 		s->s_maxbytes = MAX_NON_LFS;
@@ -134,7 +125,6 @@ static inline void destroy_super(struct super_block *s)
 #ifdef CONFIG_SMP
 	free_percpu(s->s_files);
 #endif
-	cleanup_srcu_struct(&s->s_dquot.dq_srcu);
 	security_sb_free(s);
 	kfree(s->s_subtype);
 	kfree(s->s_options);
diff --git a/fs/sync.c b/fs/sync.c
index ba76b96..891e8ef 100644
--- a/fs/sync.c
+++ b/fs/sync.c
@@ -36,8 +36,8 @@ static int __sync_filesystem(struct super_block *sb, int wait)
 	if (!sb->s_bdi || sb->s_bdi == &noop_backing_dev_info)
 		return 0;
 
-	if (sb->s_qcop && sb->s_qcop->quota_sync)
-		sb->s_qcop->quota_sync(sb, -1, wait);
+	if (sb->s_dquot.qcop && sb->s_dquot.qcop->quota_sync)
+		sb->s_dquot.qcop->quota_sync(sb, -1, wait);
 
 	if (wait)
 		sync_inodes_sb(sb);
diff --git a/fs/xfs/linux-2.6/xfs_super.c b/fs/xfs/linux-2.6/xfs_super.c
index a4e0797..4e59a08 100644
--- a/fs/xfs/linux-2.6/xfs_super.c
+++ b/fs/xfs/linux-2.6/xfs_super.c
@@ -1510,7 +1510,7 @@ xfs_fs_fill_super(
 	sb->s_xattr = xfs_xattr_handlers;
 	sb->s_export_op = &xfs_export_operations;
 #ifdef CONFIG_XFS_QUOTA
-	sb->s_qcop = &xfs_quotactl_operations;
+	sb->s_dquot.qcop = &xfs_quotactl_operations;
 #endif
 	sb->s_op = &xfs_super_operations;
 
diff --git a/include/linux/fs.h b/include/linux/fs.h
index 63d069b..e87694a 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -1324,8 +1324,6 @@ struct super_block {
 	loff_t			s_maxbytes;	/* Max file size */
 	struct file_system_type	*s_type;
 	const struct super_operations	*s_op;
-	const struct dquot_operations	*dq_op;
-	const struct quotactl_ops	*s_qcop;
 	const struct export_operations *s_export_op;
 	unsigned long		s_flags;
 	unsigned long		s_magic;
@@ -1354,7 +1352,7 @@ struct super_block {
 	struct backing_dev_info *s_bdi;
 	struct mtd_info		*s_mtd;
 	struct list_head	s_instances;
-	struct quota_info	s_dquot;	/* Diskquota specific options */
+	struct quota_ctl_info	s_dquot;	/* Diskquota specific options */
 
 	int			s_frozen;
 	wait_queue_head_t	s_wait_unfrozen;
diff --git a/include/linux/quota.h b/include/linux/quota.h
index 7e859eb..7170730 100644
--- a/include/linux/quota.h
+++ b/include/linux/quota.h
@@ -325,7 +325,7 @@ struct dquot_operations {
 };
 
 struct path;
-
+struct quota_info;
 /* Operations handling requests from userspace */
 struct quotactl_ops {
 	int (*quota_on)(struct super_block *, int, int, struct path *);
@@ -394,16 +394,23 @@ static inline void quota_send_warning(short type, unsigned int id, dev_t dev,
 	return;
 }
 #endif /* CONFIG_QUOTA_NETLINK_INTERFACE */
+struct quota_ctl_info {
+	unsigned int flags;			/* Flags for diskquotas on this device */
+
+	struct mutex dqonoff_mutex;		/* Serialize quotaon & quotaoff */
+	const struct quotactl_ops *qcop;
+	const struct dquot_operations *dq_op;
+	struct quota_info *dq_opt;
+};
 
 struct quota_info {
-	unsigned int flags;			/* Flags for diskquotas on this device */
 	struct mutex dqio_mutex;		/* lock device while I/O in progress */
-	struct mutex dqonoff_mutex;		/* Serialize quotaon & quotaoff */
-	struct rw_semaphore dqptr_sem;		/* serialize ops using quota_info struct, pointers from inode to dquots */
-	struct inode *files[MAXQUOTAS];		/* inodes of quotafiles */
 	struct mem_dqinfo info[MAXQUOTAS];	/* Information for each quota type */
-	const struct quota_format_ops *ops[MAXQUOTAS];	/* Operations for each type */
-	struct srcu_struct dq_srcu;
+	struct inode *files[MAXQUOTAS];	/* inodes of quotafiles */
+	const struct quota_format_ops *fmt_ops[MAXQUOTAS];	/* Operations for each type */
+	struct srcu_struct dq_srcu;	/* use count read lock */
+	struct rw_semaphore dqptr_sem;	/* serialize ops using quota_info struct, pointers from inode to dquots */
+
 };
 
 int register_quota_format(struct quota_format_type *fmt);
diff --git a/include/linux/quotaops.h b/include/linux/quotaops.h
index 9750d86..68ceef5 100644
--- a/include/linux/quotaops.h
+++ b/include/linux/quotaops.h
@@ -13,15 +13,20 @@
 #define DQUOT_SPACE_RESERVE	0x2
 #define DQUOT_SPACE_NOFAIL	0x4
 
-static inline struct quota_info *dqopts(struct super_block *sb)
+static inline struct quota_ctl_info* dqctl( struct super_block *sb)
 {
 	return &sb->s_dquot;
 }
-static inline struct quota_info* sb_dqopts(struct dquot *dq)
+static inline struct quota_info *dqopts(const struct super_block *sb)
+{
+	return sb->s_dquot.dq_opt;
+}
+static inline struct quota_info* sb_dqopts(const struct dquot *dq)
 {
 	return dqopts(dq->dq_sb);
 }
 
+
 /* i_mutex must being held */
 static inline bool is_quota_modification(struct inode *inode, struct iattr *ia)
 {
@@ -109,19 +114,19 @@ static inline struct mem_dqinfo *sb_dqinfo(struct super_block *sb, int type)
 
 static inline bool sb_has_quota_usage_enabled(struct super_block *sb, int type)
 {
-	return dqopts(sb)->flags &
+	return dqctl(sb)->flags &
 				dquot_state_flag(DQUOT_USAGE_ENABLED, type);
 }
 
 static inline bool sb_has_quota_limits_enabled(struct super_block *sb, int type)
 {
-	return dqopts(sb)->flags &
+	return dqctl(sb)->flags &
 				dquot_state_flag(DQUOT_LIMITS_ENABLED, type);
 }
 
 static inline bool sb_has_quota_suspended(struct super_block *sb, int type)
 {
-	return dqopts(sb)->flags &
+	return dqctl(sb)->flags &
 				dquot_state_flag(DQUOT_SUSPENDED, type);
 }
 
-- 
1.6.5.2


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH 06/19] quota: Remove state_lock
  2010-11-11 12:14 [PATCH 00/19] quota: RFC SMP improvements for generic quota V3 Dmitry Monakhov
                   ` (4 preceding siblings ...)
  2010-11-11 12:14 ` [PATCH 05/19] quota: mode quota internals from sb to quota_info Dmitry Monakhov
@ 2010-11-11 12:14 ` Dmitry Monakhov
  2010-11-22 21:12   ` Jan Kara
  2010-11-11 12:14 ` [PATCH 07/19] quota: add quota format lock Dmitry Monakhov
                   ` (13 subsequent siblings)
  19 siblings, 1 reply; 32+ messages in thread
From: Dmitry Monakhov @ 2010-11-11 12:14 UTC (permalink / raw)
  To: linux-fsdevel; +Cc: jack, hch, Dmitry Monakhov

The only reader that uses state_lock is dqget(), and it is already serialized
against quota_disable() via SRCU, so state_lock guarantees nothing in that
case. All methods that modify quota flags are already protected by
dqonoff_mutex. Get rid of the useless state_lock.
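The invariant this relies on can be sketched in userspace (hypothetical names, pthread mutex standing in for the kernel mutex, and no SRCU modeling): every writer of the flags word already holds dqonoff_mutex, so an inner spinlock around the flag update excludes nobody extra, and a reader that merely samples the flags would not be made any more current by taking it:

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

/* Hypothetical stand-in for struct quota_ctl_info. */
struct qctl {
	pthread_mutex_t dqonoff_mutex;	/* serializes all state changes */
	unsigned int flags;
};

#define DQUOT_USAGE_ENABLED	0x01
#define DQUOT_SUSPENDED		0x02

static void quota_enable(struct qctl *q, unsigned int f)
{
	pthread_mutex_lock(&q->dqonoff_mutex);
	/* No inner spinlock: the mutex already excludes concurrent writers. */
	q->flags |= f;
	pthread_mutex_unlock(&q->dqonoff_mutex);
}

static void quota_disable(struct qctl *q, unsigned int f)
{
	pthread_mutex_lock(&q->dqonoff_mutex);
	q->flags &= ~f;
	pthread_mutex_unlock(&q->dqonoff_mutex);
}

/* Reader analogous to dqget(): samples the flags without the mutex.
 * A momentarily stale read is tolerated because, in the kernel, teardown
 * is serialized against such readers by SRCU, not by a flags spinlock. */
static bool quota_active(struct qctl *q, unsigned int f)
{
	return q->flags & f;
}
```

This is only a model of the locking argument, not the kernel code itself.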

Signed-off-by: Dmitry Monakhov <dmonakhov@openvz.org>
---
 fs/quota/dquot.c |   24 ++----------------------
 1 files changed, 2 insertions(+), 22 deletions(-)

diff --git a/fs/quota/dquot.c b/fs/quota/dquot.c
index 78e48f3..a1efacd 100644
--- a/fs/quota/dquot.c
+++ b/fs/quota/dquot.c
@@ -86,12 +86,9 @@
  * dq_data_lock protects data from dq_dqb and also mem_dqinfo structures and
  * also guards consistency of dquot->dq_dqb with inode->i_blocks, i_bytes.
  * i_blocks and i_bytes updates itself are guarded by i_lock acquired directly
- * in inode_add_bytes() and inode_sub_bytes(). dq_state_lock protects
- * modifications of quota state (on quotaon and quotaoff) and readers who care
- * about latest values take it as well.
+ * in inode_add_bytes() and inode_sub_bytes().
  *
- * The spinlock ordering is hence: dq_data_lock > dq_list_lock > i_lock,
- *   dq_list_lock > dq_state_lock
+ * The spinlock ordering is hence: dq_data_lock > dq_list_lock > i_lock.
  *
  * Note that some things (eg. sb pointer, type, id) doesn't change during
  * the life of the dquot structure and so needn't to be protected by a lock
@@ -128,7 +125,6 @@
  */
 
 static __cacheline_aligned_in_smp DEFINE_SPINLOCK(dq_list_lock);
-static __cacheline_aligned_in_smp DEFINE_SPINLOCK(dq_state_lock);
 __cacheline_aligned_in_smp DEFINE_SPINLOCK(dq_data_lock);
 EXPORT_SYMBOL(dq_data_lock);
 
@@ -827,14 +823,10 @@ struct dquot *dqget(struct super_block *sb, unsigned int id, int type)
 	rcu_read_unlock();
 we_slept:
 	spin_lock(&dq_list_lock);
-	spin_lock(&dq_state_lock);
 	if (!sb_has_quota_active(sb, type)) {
-		spin_unlock(&dq_state_lock);
 		spin_unlock(&dq_list_lock);
 		goto out;
 	}
-	spin_unlock(&dq_state_lock);
-
 	dquot = find_dquot(hashent, sb, id, type);
 	if (!dquot) {
 		if (!empty) {
@@ -2022,24 +2014,19 @@ int dquot_disable(struct super_block *sb, int type, unsigned int flags)
 			continue;
 
 		if (flags & DQUOT_SUSPENDED) {
-			spin_lock(&dq_state_lock);
 			qctl->flags |=
 				dquot_state_flag(DQUOT_SUSPENDED, cnt);
-			spin_unlock(&dq_state_lock);
 		} else {
-			spin_lock(&dq_state_lock);
 			qctl->flags &= ~dquot_state_flag(flags, cnt);
 			/* Turning off suspended quotas? */
 			if (!sb_has_quota_loaded(sb, cnt) &&
 			    sb_has_quota_suspended(sb, cnt)) {
 				qctl->flags &=	~dquot_state_flag(
 							DQUOT_SUSPENDED, cnt);
-				spin_unlock(&dq_state_lock);
 				iput(dqopt->files[cnt]);
 				dqopt->files[cnt] = NULL;
 				continue;
 			}
-			spin_unlock(&dq_state_lock);
 		}
 
 		/* We still have to keep quota loaded? */
@@ -2235,10 +2222,7 @@ static int vfs_load_quota_inode(struct inode *inode, int type, int format_id,
 		goto out_file_init;
 	}
 	mutex_unlock(&dqopt->dqio_mutex);
-	spin_lock(&dq_state_lock);
 	dqctl(sb)->flags |= dquot_state_flag(flags, type);
-	spin_unlock(&dq_state_lock);
-
 	add_dquot_ref(sb, type);
 	mutex_unlock(&dqctl(sb)->dqonoff_mutex);
 
@@ -2290,12 +2274,10 @@ int dquot_resume(struct super_block *sb, int type)
 		}
 		inode = qctl->dq_opt->files[cnt];
 		qctl->dq_opt->files[cnt] = NULL;
-		spin_lock(&dq_state_lock);
 		flags = qctl->flags & dquot_state_flag(DQUOT_USAGE_ENABLED |
 							DQUOT_LIMITS_ENABLED,
 							cnt);
 		qctl->flags &= ~dquot_state_flag(DQUOT_STATE_FLAGS, cnt);
-		spin_unlock(&dq_state_lock);
 		mutex_unlock(&qctl->dqonoff_mutex);
 
 		flags = dquot_generic_flag(flags, cnt);
@@ -2369,9 +2351,7 @@ int dquot_enable(struct inode *inode, int type, int format_id,
 			ret = -EBUSY;
 			goto out_lock;
 		}
-		spin_lock(&dq_state_lock);
 		qctl->flags |= dquot_state_flag(flags, type);
-		spin_unlock(&dq_state_lock);
 out_lock:
 		mutex_unlock(&qctl->dqonoff_mutex);
 		return ret;
-- 
1.6.5.2



* [PATCH 07/19] quota: add quota format lock
  2010-11-11 12:14 [PATCH 00/19] quota: RFC SMP improvements for generic quota V3 Dmitry Monakhov
                   ` (5 preceding siblings ...)
  2010-11-11 12:14 ` [PATCH 06/19] quota: Remove state_lock Dmitry Monakhov
@ 2010-11-11 12:14 ` Dmitry Monakhov
  2010-11-11 12:14 ` [PATCH 08/19] quota: make dquot lists per-sb Dmitry Monakhov
                   ` (12 subsequent siblings)
  19 siblings, 0 replies; 32+ messages in thread
From: Dmitry Monakhov @ 2010-11-11 12:14 UTC (permalink / raw)
  To: linux-fsdevel; +Cc: jack, hch, Dmitry Monakhov

Currently dq_list_lock is responsible for quota format protection, which is
counterproductive. Introduce a dedicated lock.
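A userspace sketch of the pattern the patch applies (a pthread mutex stands in for the dq_fmt_lock spinlock; the list layout mirrors quota_formats from the diff below, but this is a model, not the kernel code): a rarely-touched registration list gets its own lock, so walking or updating it never contends with the hot dquot lists:

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

/* Minimal model of a quota format descriptor. */
struct quota_format_type {
	int qf_fmt_id;
	struct quota_format_type *qf_next;
};

static struct quota_format_type *quota_formats;
static pthread_mutex_t dq_fmt_lock = PTHREAD_MUTEX_INITIALIZER;

static void register_quota_format(struct quota_format_type *fmt)
{
	pthread_mutex_lock(&dq_fmt_lock);
	fmt->qf_next = quota_formats;	/* push onto singly linked list */
	quota_formats = fmt;
	pthread_mutex_unlock(&dq_fmt_lock);
}

static void unregister_quota_format(struct quota_format_type *fmt)
{
	struct quota_format_type **actqf;

	pthread_mutex_lock(&dq_fmt_lock);
	for (actqf = &quota_formats; *actqf && *actqf != fmt;
	     actqf = &(*actqf)->qf_next)
		;
	if (*actqf)
		*actqf = (*actqf)->qf_next;	/* unlink */
	pthread_mutex_unlock(&dq_fmt_lock);
}

static struct quota_format_type *find_quota_format(int id)
{
	struct quota_format_type *actqf;

	pthread_mutex_lock(&dq_fmt_lock);
	for (actqf = quota_formats; actqf && actqf->qf_fmt_id != id;
	     actqf = actqf->qf_next)
		;
	pthread_mutex_unlock(&dq_fmt_lock);
	return actqf;
}
```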

Signed-off-by: Dmitry Monakhov <dmonakhov@openvz.org>
---
 fs/quota/dquot.c |   18 +++++++++---------
 1 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/fs/quota/dquot.c b/fs/quota/dquot.c
index a1efacd..f719a6f 100644
--- a/fs/quota/dquot.c
+++ b/fs/quota/dquot.c
@@ -82,7 +82,6 @@
 
 /*
  * There are three quota SMP locks. dq_list_lock protects all lists with quotas
- * and quota formats.
  * dq_data_lock protects data from dq_dqb and also mem_dqinfo structures and
  * also guards consistency of dquot->dq_dqb with inode->i_blocks, i_bytes.
  * i_blocks and i_bytes updates itself are guarded by i_lock acquired directly
@@ -125,6 +124,7 @@
  */
 
 static __cacheline_aligned_in_smp DEFINE_SPINLOCK(dq_list_lock);
+static __cacheline_aligned_in_smp DEFINE_SPINLOCK(dq_fmt_lock);
 __cacheline_aligned_in_smp DEFINE_SPINLOCK(dq_data_lock);
 EXPORT_SYMBOL(dq_data_lock);
 
@@ -155,10 +155,10 @@ static struct kmem_cache *dquot_cachep;
 
 int register_quota_format(struct quota_format_type *fmt)
 {
-	spin_lock(&dq_list_lock);
+	spin_lock(&dq_fmt_lock);
 	fmt->qf_next = quota_formats;
 	quota_formats = fmt;
-	spin_unlock(&dq_list_lock);
+	spin_unlock(&dq_fmt_lock);
 	return 0;
 }
 EXPORT_SYMBOL(register_quota_format);
@@ -167,13 +167,13 @@ void unregister_quota_format(struct quota_format_type *fmt)
 {
 	struct quota_format_type **actqf;
 
-	spin_lock(&dq_list_lock);
+	spin_lock(&dq_fmt_lock);
 	for (actqf = &quota_formats; *actqf && *actqf != fmt;
 	     actqf = &(*actqf)->qf_next)
 		;
 	if (*actqf)
 		*actqf = (*actqf)->qf_next;
-	spin_unlock(&dq_list_lock);
+	spin_unlock(&dq_fmt_lock);
 }
 EXPORT_SYMBOL(unregister_quota_format);
 
@@ -181,14 +181,14 @@ static struct quota_format_type *find_quota_format(int id)
 {
 	struct quota_format_type *actqf;
 
-	spin_lock(&dq_list_lock);
+	spin_lock(&dq_fmt_lock);
 	for (actqf = quota_formats; actqf && actqf->qf_fmt_id != id;
 	     actqf = actqf->qf_next)
 		;
 	if (!actqf || !try_module_get(actqf->qf_owner)) {
 		int qm;
 
-		spin_unlock(&dq_list_lock);
+		spin_unlock(&dq_fmt_lock);
 		
 		for (qm = 0; module_names[qm].qm_fmt_id &&
 			     module_names[qm].qm_fmt_id != id; qm++)
@@ -197,14 +197,14 @@ static struct quota_format_type *find_quota_format(int id)
 		    request_module(module_names[qm].qm_mod_name))
 			return NULL;
 
-		spin_lock(&dq_list_lock);
+		spin_lock(&dq_fmt_lock);
 		for (actqf = quota_formats; actqf && actqf->qf_fmt_id != id;
 		     actqf = actqf->qf_next)
 			;
 		if (actqf && !try_module_get(actqf->qf_owner))
 			actqf = NULL;
 	}
-	spin_unlock(&dq_list_lock);
+	spin_unlock(&dq_fmt_lock);
 	return actqf;
 }
 
-- 
1.6.5.2



* [PATCH 08/19] quota: make dquot lists per-sb
  2010-11-11 12:14 [PATCH 00/19] quota: RFC SMP improvements for generic quota V3 Dmitry Monakhov
                   ` (6 preceding siblings ...)
  2010-11-11 12:14 ` [PATCH 07/19] quota: add quota format lock Dmitry Monakhov
@ 2010-11-11 12:14 ` Dmitry Monakhov
  2010-11-22 21:37   ` Jan Kara
  2010-11-11 12:14 ` [PATCH 09/19] quota: optimize quota_initialize Dmitry Monakhov
                   ` (11 subsequent siblings)
  19 siblings, 1 reply; 32+ messages in thread
From: Dmitry Monakhov @ 2010-11-11 12:14 UTC (permalink / raw)
  To: linux-fsdevel; +Cc: jack, hch, Dmitry Monakhov

Currently the quota lists are global, which is very bad for scalability.
* inuse_list  -> sb->s_dquot->dq_inuse_list
* free_dquots -> sb->s_dquot->dq_free_list
* Add a per-sb lock protecting the quota lists

dq_list_lock is not removed; it is now used only for protecting quota_hash.
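The shape of the split can be sketched in userspace (pthread mutex in place of the per-sb spinlock, and a minimal hand-rolled list_head instead of the kernel's; names mirror the diff below but this is a hypothetical model): the free and in-use list heads live in struct quota_info, each superblock guarded by its own dq_list_lock, so two filesystems never contend on list operations:

```c
#include <assert.h>
#include <pthread.h>

/* Minimal doubly linked list, modeled on the kernel's list_head. */
struct list_head { struct list_head *next, *prev; };

static void list_init(struct list_head *h) { h->next = h->prev = h; }
static int  list_empty(const struct list_head *h) { return h->next == h; }

static void list_add_tail(struct list_head *n, struct list_head *h)
{
	n->prev = h->prev;
	n->next = h;
	h->prev->next = n;
	h->prev = n;
}

static void list_del(struct list_head *n)
{
	n->prev->next = n->next;
	n->next->prev = n->prev;
	n->next = n->prev = n;
}

/* Per-sb quota state: lists and their lock travel together. */
struct quota_info {
	pthread_mutex_t dq_list_lock;	/* per-sb, not a global lock */
	struct list_head dq_inuse_list;
	struct list_head dq_free_list;
};

struct dquot {
	struct list_head dq_inuse;
	struct list_head dq_free;
};

/* Add a dquot to the tail of this sb's free list. */
static void put_dquot_last(struct quota_info *dqopt, struct dquot *dq)
{
	pthread_mutex_lock(&dqopt->dq_list_lock);
	list_add_tail(&dq->dq_free, &dqopt->dq_free_list);
	pthread_mutex_unlock(&dqopt->dq_list_lock);
}

static void remove_free_dquot(struct quota_info *dqopt, struct dquot *dq)
{
	pthread_mutex_lock(&dqopt->dq_list_lock);
	list_del(&dq->dq_free);
	pthread_mutex_unlock(&dqopt->dq_list_lock);
}
```

In the real patch the global dq_list_lock is still taken around these operations for the hash; only the list membership moves under the per-sb lock.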

Signed-off-by: Dmitry Monakhov <dmonakhov@openvz.org>
---
 fs/quota/dquot.c      |   88 +++++++++++++++++++++++++++++++++++++++---------
 include/linux/quota.h |    3 ++
 2 files changed, 74 insertions(+), 17 deletions(-)

diff --git a/fs/quota/dquot.c b/fs/quota/dquot.c
index f719a6f..d7ec471 100644
--- a/fs/quota/dquot.c
+++ b/fs/quota/dquot.c
@@ -87,8 +87,8 @@
  * i_blocks and i_bytes updates itself are guarded by i_lock acquired directly
  * in inode_add_bytes() and inode_sub_bytes().
  *
- * The spinlock ordering is hence: dq_data_lock > dq_list_lock > i_lock.
- *
+ * The spinlock ordering is hence: dq_data_lock > dq_list_lock > i_lock,
+ * dq_list_lock > sb->s_dquot->dq_list_lock
  * Note that some things (eg. sb pointer, type, id) doesn't change during
  * the life of the dquot structure and so needn't to be protected by a lock
  *
@@ -233,8 +233,6 @@ static void put_quota_format(struct quota_format_type *fmt)
  * mechanism to locate a specific dquot.
  */
 
-static LIST_HEAD(inuse_list);
-static LIST_HEAD(free_dquots);
 static unsigned int dq_hash_bits, dq_hash_mask;
 static struct hlist_head *dquot_hash;
 
@@ -286,7 +284,7 @@ static struct dquot *find_dquot(unsigned int hashent, struct super_block *sb,
 /* Add a dquot to the tail of the free list */
 static inline void put_dquot_last(struct dquot *dquot)
 {
-	list_add_tail(&dquot->dq_free, &free_dquots);
+	list_add_tail(&dquot->dq_free, &sb_dqopts(dquot)->dq_free_list);
 	dqstats_inc(DQST_FREE_DQUOTS);
 }
 
@@ -302,7 +300,7 @@ static inline void put_inuse(struct dquot *dquot)
 {
 	/* We add to the back of inuse list so we don't have to restart
 	 * when traversing this list and we block */
-	list_add_tail(&dquot->dq_inuse, &inuse_list);
+	list_add_tail(&dquot->dq_inuse, &sb_dqopts(dquot)->dq_inuse_list);
 	dqstats_inc(DQST_ALLOC_DQUOTS);
 }
 
@@ -335,17 +333,20 @@ static inline int mark_dquot_dirty(struct dquot *dquot)
 int dquot_mark_dquot_dirty(struct dquot *dquot)
 {
 	int ret = 1;
+	struct quota_info *dqopt = sb_dqopts(dquot);
 
 	/* If quota is dirty already, we don't have to acquire dq_list_lock */
 	if (test_bit(DQ_MOD_B, &dquot->dq_flags))
 		return 1;
 
 	spin_lock(&dq_list_lock);
+	spin_lock(&dqopt->dq_list_lock);
 	if (!test_and_set_bit(DQ_MOD_B, &dquot->dq_flags)) {
-		list_add(&dquot->dq_dirty, &sb_dqopts(dquot)->
-				info[dquot->dq_type].dqi_dirty_list);
+		list_add(&dquot->dq_dirty,
+			&dqopt->info[dquot->dq_type].dqi_dirty_list);
 		ret = 0;
 	}
+	spin_unlock(&dqopt->dq_list_lock);
 	spin_unlock(&dq_list_lock);
 	return ret;
 }
@@ -439,10 +440,13 @@ int dquot_commit(struct dquot *dquot)
 
 	mutex_lock(&dqopt->dqio_mutex);
 	spin_lock(&dq_list_lock);
+	spin_lock(&dqopt->dq_list_lock);
 	if (!clear_dquot_dirty(dquot)) {
+		spin_unlock(&dqopt->dq_list_lock);
 		spin_unlock(&dq_list_lock);
 		goto out_sem;
 	}
+	spin_unlock(&dqopt->dq_list_lock);
 	spin_unlock(&dq_list_lock);
 	/* Inactive dquot can be only if there was error during read/init
 	 * => we have better not writing it */
@@ -512,10 +516,12 @@ static inline void do_destroy_dquot(struct dquot *dquot)
 static void invalidate_dquots(struct super_block *sb, int type)
 {
 	struct dquot *dquot, *tmp;
+	struct quota_info *dqopt = dqopts(sb);
 
 restart:
 	spin_lock(&dq_list_lock);
-	list_for_each_entry_safe(dquot, tmp, &inuse_list, dq_inuse) {
+	spin_lock(&dqopt->dq_list_lock);
+	list_for_each_entry_safe(dquot, tmp, &dqopt->dq_inuse_list, dq_inuse) {
 		if (dquot->dq_sb != sb)
 			continue;
 		if (dquot->dq_type != type)
@@ -527,6 +533,7 @@ restart:
 			atomic_inc(&dquot->dq_count);
 			prepare_to_wait(&dquot->dq_wait_unused, &wait,
 					TASK_UNINTERRUPTIBLE);
+			spin_unlock(&dqopt->dq_list_lock);
 			spin_unlock(&dq_list_lock);
 			/* Once dqput() wakes us up, we know it's time to free
 			 * the dquot.
@@ -553,6 +560,7 @@ restart:
 		remove_inuse(dquot);
 		do_destroy_dquot(dquot);
 	}
+	spin_unlock(&dqopt->dq_list_lock);
 	spin_unlock(&dq_list_lock);
 }
 
@@ -562,17 +570,21 @@ int dquot_scan_active(struct super_block *sb,
 		      unsigned long priv)
 {
 	struct dquot *dquot, *old_dquot = NULL;
+	struct quota_info *dqopt;
 	int ret = 0;
 
 	mutex_lock(&dqctl(sb)->dqonoff_mutex);
+	dqopt = dqopts(sb);
 	spin_lock(&dq_list_lock);
-	list_for_each_entry(dquot, &inuse_list, dq_inuse) {
+	spin_lock(&dqopt->dq_list_lock);
+	list_for_each_entry(dquot, &dqopt->dq_inuse_list, dq_inuse) {
 		if (!test_bit(DQ_ACTIVE_B, &dquot->dq_flags))
 			continue;
 		if (dquot->dq_sb != sb)
 			continue;
 		/* Now we have active dquot so we can just increase use count */
 		atomic_inc(&dquot->dq_count);
+		spin_unlock(&dqopt->dq_list_lock);
 		spin_unlock(&dq_list_lock);
 		dqstats_inc(DQST_LOOKUPS);
 		dqput(old_dquot);
@@ -581,9 +593,11 @@ int dquot_scan_active(struct super_block *sb,
 		if (ret < 0)
 			goto out;
 		spin_lock(&dq_list_lock);
+		spin_lock(&dqopt->dq_list_lock);
 		/* We are safe to continue now because our dquot could not
 		 * be moved out of the inuse list while we hold the reference */
 	}
+	spin_unlock(&dqopt->dq_list_lock);
 	spin_unlock(&dq_list_lock);
 out:
 	dqput(old_dquot);
@@ -607,6 +621,7 @@ int dquot_quota_sync(struct super_block *sb, int type, int wait)
 		if (!sb_has_quota_active(sb, cnt))
 			continue;
 		spin_lock(&dq_list_lock);
+		spin_lock(&dqopt->dq_list_lock);
 		dirty = &dqopt->info[cnt].dqi_dirty_list;
 		while (!list_empty(dirty)) {
 			dquot = list_first_entry(dirty, struct dquot,
@@ -620,12 +635,15 @@ int dquot_quota_sync(struct super_block *sb, int type, int wait)
  			 * holding reference so we can safely just increase
 			 * use count */
 			atomic_inc(&dquot->dq_count);
+			spin_unlock(&dqopt->dq_list_lock);
 			spin_unlock(&dq_list_lock);
 			dqstats_inc(DQST_LOOKUPS);
 			dqctl(sb)->dq_op->write_dquot(dquot);
 			dqput(dquot);
+			spin_lock(&dqopt->dq_list_lock);
 			spin_lock(&dq_list_lock);
 		}
+		spin_unlock(&dqopt->dq_list_lock);
 		spin_unlock(&dq_list_lock);
 	}
 
@@ -669,23 +687,36 @@ int dquot_quota_sync(struct super_block *sb, int type, int wait)
 EXPORT_SYMBOL(dquot_quota_sync);
 
 /* Free unused dquots from cache */
-static void prune_dqcache(int count)
+static void prune_one_sb_dqcache(struct super_block *sb, void *arg)
 {
 	struct list_head *head;
 	struct dquot *dquot;
+	struct quota_info *dqopt = dqopts(sb);
+	int count = *(int*) arg;
 
-	head = free_dquots.prev;
-	while (head != &free_dquots && count) {
+	mutex_lock(&dqctl(sb)->dqonoff_mutex);
+	if (!sb_any_quota_loaded(sb)) {
+		mutex_unlock(&dqctl(sb)->dqonoff_mutex);
+		return;
+	}
+	spin_lock(&dqopt->dq_list_lock);
+	head = dqopt->dq_free_list.prev;
+	while (head != &dqopt->dq_free_list && count) {
 		dquot = list_entry(head, struct dquot, dq_free);
 		remove_dquot_hash(dquot);
 		remove_free_dquot(dquot);
 		remove_inuse(dquot);
 		do_destroy_dquot(dquot);
 		count--;
-		head = free_dquots.prev;
+		head = dqopt->dq_free_list.prev;
 	}
+	spin_unlock(&dqopt->dq_list_lock);
+	mutex_unlock(&dqctl(sb)->dqonoff_mutex);
+}
+static void prune_dqcache(int count)
+{
+	iterate_supers(prune_one_sb_dqcache, &count);
 }
-
 /*
  * This is called from kswapd when we think we need some
  * more memory
@@ -714,6 +745,7 @@ static struct shrinker dqcache_shrinker = {
 void dqput(struct dquot *dquot)
 {
 	int ret;
+	struct quota_info *dqopt;
 
 	if (!dquot)
 		return;
@@ -724,9 +756,11 @@ void dqput(struct dquot *dquot)
 		BUG();
 	}
 #endif
+	dqopt = sb_dqopts(dquot);
 	dqstats_inc(DQST_DROPS);
 we_slept:
 	spin_lock(&dq_list_lock);
+	spin_lock(&dqopt->dq_list_lock);
 	if (atomic_read(&dquot->dq_count) > 1) {
 		/* We have more than one user... nothing to do */
 		atomic_dec(&dquot->dq_count);
@@ -734,11 +768,13 @@ we_slept:
 		if (!sb_has_quota_active(dquot->dq_sb, dquot->dq_type) &&
 		    atomic_read(&dquot->dq_count) == 1)
 			wake_up(&dquot->dq_wait_unused);
+		spin_unlock(&dqopt->dq_list_lock);
 		spin_unlock(&dq_list_lock);
 		return;
 	}
 	/* Need to release dquot? */
 	if (test_bit(DQ_ACTIVE_B, &dquot->dq_flags) && dquot_dirty(dquot)) {
+		spin_unlock(&dqopt->dq_list_lock);
 		spin_unlock(&dq_list_lock);
 		/* Commit dquot before releasing */
 		ret = dqctl(dquot->dq_sb)->dq_op->write_dquot(dquot);
@@ -751,7 +787,9 @@ we_slept:
 			 * infinite loop here
 			 */
 			spin_lock(&dq_list_lock);
+			spin_lock(&dqopt->dq_list_lock);
 			clear_dquot_dirty(dquot);
+			spin_unlock(&dqopt->dq_list_lock);
 			spin_unlock(&dq_list_lock);
 		}
 		goto we_slept;
@@ -759,6 +797,7 @@ we_slept:
 	/* Clear flag in case dquot was inactive (something bad happened) */
 	clear_dquot_dirty(dquot);
 	if (test_bit(DQ_ACTIVE_B, &dquot->dq_flags)) {
+		spin_unlock(&dqopt->dq_list_lock);
 		spin_unlock(&dq_list_lock);
 		dqctl(dquot->dq_sb)->dq_op->release_dquot(dquot);
 		goto we_slept;
@@ -769,6 +808,7 @@ we_slept:
 	BUG_ON(!list_empty(&dquot->dq_free));
 #endif
 	put_dquot_last(dquot);
+	spin_unlock(&dqopt->dq_list_lock);
 	spin_unlock(&dq_list_lock);
 }
 EXPORT_SYMBOL(dqput);
@@ -812,6 +852,7 @@ struct dquot *dqget(struct super_block *sb, unsigned int id, int type)
 {
 	unsigned int hashent = hashfn(sb, id, type);
 	struct dquot *dquot = NULL, *empty = NULL;
+	struct quota_info *dqopt;
 	int idx;
 
 	rcu_read_lock();
@@ -819,17 +860,21 @@ struct dquot *dqget(struct super_block *sb, unsigned int id, int type)
 		rcu_read_unlock();
 		return NULL;
 	}
-	idx = srcu_read_lock(&dqopts(sb)->dq_srcu);
+	dqopt = dqopts(sb);
+	idx = srcu_read_lock(&dqopt->dq_srcu);
 	rcu_read_unlock();
 we_slept:
 	spin_lock(&dq_list_lock);
+	spin_lock(&dqopt->dq_list_lock);
 	if (!sb_has_quota_active(sb, type)) {
+		spin_unlock(&dqopt->dq_list_lock);
 		spin_unlock(&dq_list_lock);
 		goto out;
 	}
 	dquot = find_dquot(hashent, sb, id, type);
 	if (!dquot) {
 		if (!empty) {
+			spin_unlock(&dqopt->dq_list_lock);
 			spin_unlock(&dq_list_lock);
 			empty = get_empty_dquot(sb, type);
 			if (!empty)
@@ -843,12 +888,14 @@ we_slept:
 		put_inuse(dquot);
 		/* hash it first so it can be found */
 		insert_dquot_hash(dquot);
+		spin_unlock(&dqopt->dq_list_lock);
 		spin_unlock(&dq_list_lock);
 		dqstats_inc(DQST_LOOKUPS);
 	} else {
 		if (!atomic_read(&dquot->dq_count))
 			remove_free_dquot(dquot);
 		atomic_inc(&dquot->dq_count);
+		spin_unlock(&dqopt->dq_list_lock);
 		spin_unlock(&dq_list_lock);
 		dqstats_inc(DQST_CACHE_HITS);
 		dqstats_inc(DQST_LOOKUPS);
@@ -867,7 +914,7 @@ we_slept:
 	BUG_ON(!dquot->dq_sb);	/* Has somebody invalidated entry under us? */
 #endif
 out:
-	srcu_read_unlock(&dqopts(sb)->dq_srcu, idx);
+	srcu_read_unlock(&dqopt->dq_srcu, idx);
 	if (empty)
 		do_destroy_dquot(empty);
 
@@ -955,6 +1002,7 @@ static int remove_inode_dquot_ref(struct inode *inode, int type,
 				  struct list_head *tofree_head)
 {
 	struct dquot *dquot = inode->i_dquot[type];
+	struct quota_info *dqopt = dqopts(inode->i_sb);
 
 	inode->i_dquot[type] = NULL;
 	if (dquot) {
@@ -966,9 +1014,11 @@ static int remove_inode_dquot_ref(struct inode *inode, int type,
 					    atomic_read(&dquot->dq_count));
 #endif
 			spin_lock(&dq_list_lock);
+			spin_lock(&dqopt->dq_list_lock);
 			/* As dquot must have currently users it can't be on
 			 * the free list... */
 			list_add(&dquot->dq_free, tofree_head);
+			spin_unlock(&dqopt->dq_list_lock);
 			spin_unlock(&dq_list_lock);
 			return 1;
 		}
@@ -1964,6 +2014,10 @@ static int alloc_quota_info(struct quota_ctl_info *dqctl) {
 	}
 	mutex_init(&dqopt->dqio_mutex);
 	init_rwsem(&dqopt->dqptr_sem);
+	spin_lock_init(&dqopt->dq_list_lock);
+	INIT_LIST_HEAD(&dqopt->dq_inuse_list);
+	INIT_LIST_HEAD(&dqopt->dq_free_list);
+
 	dqctl->dq_opt = dqopt;
 	return 0;
 }
diff --git a/include/linux/quota.h b/include/linux/quota.h
index 7170730..4ca03aa 100644
--- a/include/linux/quota.h
+++ b/include/linux/quota.h
@@ -406,6 +406,9 @@ struct quota_ctl_info {
 struct quota_info {
 	struct mutex dqio_mutex;		/* lock device while I/O in progress */
 	struct mem_dqinfo info[MAXQUOTAS];	/* Information for each quota type */
+	spinlock_t dq_list_lock;		/* protect lists */
+	struct list_head dq_inuse_list;		/* list of in-use dquots */
+	struct list_head dq_free_list;		/* list of free dquotas */
 	struct inode *files[MAXQUOTAS];	/* inodes of quotafiles */
 	const struct quota_format_ops *fmt_ops[MAXQUOTAS];	/* Operations for each type */
 	struct srcu_struct dq_srcu;	/* use count read lock */
-- 
1.6.5.2


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH 09/19] quota: optimize quota_initialize
  2010-11-11 12:14 [PATCH 00/19] quota: RFC SMP improvements for generic quota V3 Dmitry Monakhov
                   ` (7 preceding siblings ...)
  2010-11-11 12:14 ` [PATCH 08/19] quota: make dquot lists per-sb Dmitry Monakhov
@ 2010-11-11 12:14 ` Dmitry Monakhov
  2010-11-11 12:14 ` [PATCH 10/19] quota: use per-bucket hlist lock for dquot_hash Dmitry Monakhov
                   ` (10 subsequent siblings)
  19 siblings, 0 replies; 32+ messages in thread
From: Dmitry Monakhov @ 2010-11-11 12:14 UTC (permalink / raw)
  To: linux-fsdevel; +Cc: jack, hch, Dmitry Monakhov

Currently we perform the full initialization procedure regardless of
whether quota was already initialized for the given inode. In fact
dquot_initialize() is called many times during an inode's lifetime,
which results in many useless quota lookup/dqput actions. It is
reasonable to optimize the case where the inode already has quota initialized.

We can avoid locking here because:
* Serialization between dquot_initialize() and dquot_drop() is guaranteed
  by the caller.
* Other races (quota_off/quota_on) result in incorrect quota regardless of
  locking in dquot_initialize().

Signed-off-by: Dmitry Monakhov <dmonakhov@openvz.org>
---
 fs/quota/dquot.c |   17 +++++++++++++++--
 1 files changed, 15 insertions(+), 2 deletions(-)

diff --git a/fs/quota/dquot.c b/fs/quota/dquot.c
index d7ec471..c06f969 100644
--- a/fs/quota/dquot.c
+++ b/fs/quota/dquot.c
@@ -1462,10 +1462,23 @@ out_err:
 	/* Drop unused references */
 	dqput_all(got);
 }
-
 void dquot_initialize(struct inode *inode)
 {
-	__dquot_initialize(inode, -1);
+	int cnt;
+	/*
+	 * We can check the inode's quota pointers without a lock, because
+	 * serialization between dquot_init/dquot_drop is guaranteed by the caller.
+	 */
+	if (IS_NOQUOTA(inode))
+		return;
+	for (cnt = 0; cnt < MAXQUOTAS; cnt++) {
+		if (!sb_has_quota_active(inode->i_sb, cnt))
+			continue;
+		if (!inode->i_dquot[cnt])
+			break;
+	}
+	if (cnt < MAXQUOTAS)
+		__dquot_initialize(inode, -1);
 }
 EXPORT_SYMBOL(dquot_initialize);
 
-- 
1.6.5.2


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH 10/19] quota: use per-bucket hlist lock for dquot_hash
  2010-11-11 12:14 [PATCH 00/19] quota: RFC SMP improvements for generic quota V3 Dmitry Monakhov
                   ` (8 preceding siblings ...)
  2010-11-11 12:14 ` [PATCH 09/19] quota: optimize quota_initialize Dmitry Monakhov
@ 2010-11-11 12:14 ` Dmitry Monakhov
  2010-11-11 12:14 ` [PATCH 11/19] quota: remove global dq_list_lock Dmitry Monakhov
                   ` (9 subsequent siblings)
  19 siblings, 0 replies; 32+ messages in thread
From: Dmitry Monakhov @ 2010-11-11 12:14 UTC (permalink / raw)
  To: linux-fsdevel; +Cc: jack, hch, Dmitry Monakhov

This patch simply transforms hlist_head into hlist_bl_head, which allows
us to remove dq_list_lock. Later the hash lookup will be converted into a
lockless procedure with the help of RCU.

Signed-off-by: Dmitry Monakhov <dmonakhov@openvz.org>
---
 fs/quota/dquot.c        |   56 ++++++++++++++++++++++------------------------
 include/linux/list_bl.h |   10 ++++++++
 include/linux/quota.h   |    3 +-
 3 files changed, 39 insertions(+), 30 deletions(-)

diff --git a/fs/quota/dquot.c b/fs/quota/dquot.c
index c06f969..99dc7a3 100644
--- a/fs/quota/dquot.c
+++ b/fs/quota/dquot.c
@@ -88,7 +88,7 @@
  * in inode_add_bytes() and inode_sub_bytes().
  *
  * The spinlock ordering is hence: dq_data_lock > dq_list_lock > i_lock,
- * dq_list_lock > sb->s_dquot->dq_list_lock
+ * dq_list_lock > sb->s_dquot->dq_list_lock > hlist_bl_head
  * Note that some things (eg. sb pointer, type, id) doesn't change during
  * the life of the dquot structure and so needn't to be protected by a lock
  *
@@ -234,7 +234,7 @@ static void put_quota_format(struct quota_format_type *fmt)
  */
 
 static unsigned int dq_hash_bits, dq_hash_mask;
-static struct hlist_head *dquot_hash;
+static struct hlist_bl_head *dquot_hash;
 
 struct dqstats dqstats;
 EXPORT_SYMBOL(dqstats);
@@ -251,29 +251,28 @@ hashfn(const struct super_block *sb, unsigned int id, int type)
 	return (tmp + (tmp >> dq_hash_bits)) & dq_hash_mask;
 }
 
-/*
- * Following list functions expect dq_list_lock to be held
- */
-static inline void insert_dquot_hash(struct dquot *dquot)
+static inline void insert_dquot_hash(struct dquot *dquot, struct hlist_bl_head *blh)
 {
-	struct hlist_head *head;
-	head = dquot_hash + hashfn(dquot->dq_sb, dquot->dq_id, dquot->dq_type);
-	hlist_add_head(&dquot->dq_hash, head);
+	hlist_bl_add_head(&dquot->dq_hash, blh);
 }
 
 static inline void remove_dquot_hash(struct dquot *dquot)
 {
-	hlist_del_init(&dquot->dq_hash);
+	struct hlist_bl_head *blh;
+	blh = dquot_hash + hashfn(dquot->dq_sb, dquot->dq_id, dquot->dq_type);
+	hlist_bl_lock(blh);
+	hlist_bl_del_init(&dquot->dq_hash);
+	hlist_bl_unlock(blh);
 }
 
-static struct dquot *find_dquot(unsigned int hashent, struct super_block *sb,
-				unsigned int id, int type)
+static struct dquot *find_dquot(struct hlist_bl_head *blh,
+				struct super_block *sb, unsigned int id,
+				int type)
 {
-	struct hlist_node *node;
+	struct hlist_bl_node *node;
 	struct dquot *dquot;
 
-	hlist_for_each (node, dquot_hash+hashent) {
-		dquot = hlist_entry(node, struct dquot, dq_hash);
+	hlist_bl_for_each_entry(dquot, node, blh, dq_hash) {
 		if (dquot->dq_sb == sb && dquot->dq_id == id &&
 		    dquot->dq_type == type)
 			return dquot;
@@ -830,8 +829,8 @@ static struct dquot *get_empty_dquot(struct super_block *sb, int type)
 	mutex_init(&dquot->dq_lock);
 	INIT_LIST_HEAD(&dquot->dq_free);
 	INIT_LIST_HEAD(&dquot->dq_inuse);
-	INIT_HLIST_NODE(&dquot->dq_hash);
 	INIT_LIST_HEAD(&dquot->dq_dirty);
+	INIT_HLIST_BL_NODE(&dquot->dq_hash);
 	init_waitqueue_head(&dquot->dq_wait_unused);
 	dquot->dq_sb = sb;
 	dquot->dq_type = type;
@@ -850,7 +849,7 @@ static struct dquot *get_empty_dquot(struct super_block *sb, int type)
  */
 struct dquot *dqget(struct super_block *sb, unsigned int id, int type)
 {
-	unsigned int hashent = hashfn(sb, id, type);
+	struct hlist_bl_head * blh = dquot_hash + hashfn(sb, id, type);
 	struct dquot *dquot = NULL, *empty = NULL;
 	struct quota_info *dqopt;
 	int idx;
@@ -871,9 +870,11 @@ we_slept:
 		spin_unlock(&dq_list_lock);
 		goto out;
 	}
-	dquot = find_dquot(hashent, sb, id, type);
+	hlist_bl_lock(blh);
+	dquot = find_dquot(blh, sb, id, type);
 	if (!dquot) {
 		if (!empty) {
+			hlist_bl_unlock(blh);
 			spin_unlock(&dqopt->dq_list_lock);
 			spin_unlock(&dq_list_lock);
 			empty = get_empty_dquot(sb, type);
@@ -884,10 +885,11 @@ we_slept:
 		dquot = empty;
 		empty = NULL;
 		dquot->dq_id = id;
+		/* hash it first so it can be found */
+		insert_dquot_hash(dquot, blh);
+		hlist_bl_unlock(blh);
 		/* all dquots go on the inuse_list */
 		put_inuse(dquot);
-		/* hash it first so it can be found */
-		insert_dquot_hash(dquot);
 		spin_unlock(&dqopt->dq_list_lock);
 		spin_unlock(&dq_list_lock);
 		dqstats_inc(DQST_LOOKUPS);
@@ -895,6 +897,7 @@ we_slept:
 		if (!atomic_read(&dquot->dq_count))
 			remove_free_dquot(dquot);
 		atomic_inc(&dquot->dq_count);
+		hlist_bl_unlock(blh);
 		spin_unlock(&dqopt->dq_list_lock);
 		spin_unlock(&dq_list_lock);
 		dqstats_inc(DQST_CACHE_HITS);
@@ -2802,7 +2805,7 @@ static int __init dquot_init(void)
 			NULL);
 
 	order = 0;
-	dquot_hash = (struct hlist_head *)__get_free_pages(GFP_ATOMIC, order);
+	dquot_hash = (struct hlist_bl_head *)__get_free_pages(GFP_ATOMIC, order);
 	if (!dquot_hash)
 		panic("Cannot create dquot hash table");
 
@@ -2813,17 +2816,12 @@ static int __init dquot_init(void)
 	}
 
 	/* Find power-of-two hlist_heads which can fit into allocation */
-	nr_hash = (1UL << order) * PAGE_SIZE / sizeof(struct hlist_head);
-	dq_hash_bits = 0;
-	do {
-		dq_hash_bits++;
-	} while (nr_hash >> dq_hash_bits);
-	dq_hash_bits--;
-
+	nr_hash = (1UL << order) * PAGE_SIZE / sizeof(*dquot_hash);
+	dq_hash_bits = ilog2(nr_hash);
 	nr_hash = 1UL << dq_hash_bits;
 	dq_hash_mask = nr_hash - 1;
 	for (i = 0; i < nr_hash; i++)
-		INIT_HLIST_HEAD(dquot_hash + i);
+		INIT_HLIST_BL_HEAD(dquot_hash + i);
 
 	printk("Dquot-cache hash table entries: %ld (order %ld, %ld bytes)\n",
 			nr_hash, order, (PAGE_SIZE << order));
diff --git a/include/linux/list_bl.h b/include/linux/list_bl.h
index cf8acfc..bea750f 100644
--- a/include/linux/list_bl.h
+++ b/include/linux/list_bl.h
@@ -110,6 +110,16 @@ static inline void hlist_bl_del_init(struct hlist_bl_node *n)
 	}
 }
 
+static inline void hlist_bl_lock(struct hlist_bl_head *b)
+{
+	bit_spin_lock(0, (unsigned long *)b);
+}
+
+static inline void hlist_bl_unlock(struct hlist_bl_head *b)
+{
+	__bit_spin_unlock(0, (unsigned long *)b);
+}
+
 /**
  * hlist_bl_for_each_entry	- iterate over list of given type
  * @tpos:	the type * to use as a loop cursor.
diff --git a/include/linux/quota.h b/include/linux/quota.h
index 4ca03aa..1661afa 100644
--- a/include/linux/quota.h
+++ b/include/linux/quota.h
@@ -170,6 +170,7 @@ enum {
 
 #ifdef __KERNEL__
 #include <linux/list.h>
+#include <linux/list_bl.h>
 #include <linux/mutex.h>
 #include <linux/rwsem.h>
 #include <linux/spinlock.h>
@@ -284,7 +285,7 @@ static inline void dqstats_dec(unsigned int type)
 				 * clear them when it sees fit. */
 
 struct dquot {
-	struct hlist_node dq_hash;	/* Hash list in memory */
+	struct hlist_bl_node dq_hash;	/* Hash list in memory */
 	struct list_head dq_inuse;	/* List of all quotas */
 	struct list_head dq_free;	/* Free list element */
 	struct list_head dq_dirty;	/* List of dirty dquots */
-- 
1.6.5.2


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH 11/19] quota: remove global dq_list_lock
  2010-11-11 12:14 [PATCH 00/19] quota: RFC SMP improvements for generic quota V3 Dmitry Monakhov
                   ` (9 preceding siblings ...)
  2010-11-11 12:14 ` [PATCH 10/19] quota: use per-bucket hlist lock for dquot_hash Dmitry Monakhov
@ 2010-11-11 12:14 ` Dmitry Monakhov
  2010-11-11 12:14 ` [PATCH 12/19] quota: rename dq_lock Dmitry Monakhov
                   ` (8 subsequent siblings)
  19 siblings, 0 replies; 32+ messages in thread
From: Dmitry Monakhov @ 2010-11-11 12:14 UTC (permalink / raw)
  To: linux-fsdevel; +Cc: jack, hch, Dmitry Monakhov

dq_list_lock is no longer responsible for any synchronization, so get
rid of it completely.

Signed-off-by: Dmitry Monakhov <dmonakhov@openvz.org>
---
 fs/quota/dquot.c |   36 ++----------------------------------
 1 files changed, 2 insertions(+), 34 deletions(-)

diff --git a/fs/quota/dquot.c b/fs/quota/dquot.c
index 99dc7a3..2aa8faf 100644
--- a/fs/quota/dquot.c
+++ b/fs/quota/dquot.c
@@ -88,7 +88,8 @@
  * in inode_add_bytes() and inode_sub_bytes().
  *
  * The spinlock ordering is hence: dq_data_lock > dq_list_lock > i_lock,
- * dq_list_lock > sb->s_dquot->dq_list_lock > hlist_bl_head
+ * dq_list_lock > hlist_bl_head
+ *
  * Note that some things (eg. sb pointer, type, id) doesn't change during
  * the life of the dquot structure and so needn't to be protected by a lock
  *
@@ -123,7 +124,6 @@
  * i_mutex on quota files is special (it's below dqio_mutex)
  */
 
-static __cacheline_aligned_in_smp DEFINE_SPINLOCK(dq_list_lock);
 static __cacheline_aligned_in_smp DEFINE_SPINLOCK(dq_fmt_lock);
 __cacheline_aligned_in_smp DEFINE_SPINLOCK(dq_data_lock);
 EXPORT_SYMBOL(dq_data_lock);
@@ -338,7 +338,6 @@ int dquot_mark_dquot_dirty(struct dquot *dquot)
 	if (test_bit(DQ_MOD_B, &dquot->dq_flags))
 		return 1;
 
-	spin_lock(&dq_list_lock);
 	spin_lock(&dqopt->dq_list_lock);
 	if (!test_and_set_bit(DQ_MOD_B, &dquot->dq_flags)) {
 		list_add(&dquot->dq_dirty,
@@ -346,7 +345,6 @@ int dquot_mark_dquot_dirty(struct dquot *dquot)
 		ret = 0;
 	}
 	spin_unlock(&dqopt->dq_list_lock);
-	spin_unlock(&dq_list_lock);
 	return ret;
 }
 EXPORT_SYMBOL(dquot_mark_dquot_dirty);
@@ -438,15 +436,12 @@ int dquot_commit(struct dquot *dquot)
 	struct quota_info *dqopt = sb_dqopts(dquot);
 
 	mutex_lock(&dqopt->dqio_mutex);
-	spin_lock(&dq_list_lock);
 	spin_lock(&dqopt->dq_list_lock);
 	if (!clear_dquot_dirty(dquot)) {
 		spin_unlock(&dqopt->dq_list_lock);
-		spin_unlock(&dq_list_lock);
 		goto out_sem;
 	}
 	spin_unlock(&dqopt->dq_list_lock);
-	spin_unlock(&dq_list_lock);
 	/* Inactive dquot can be only if there was error during read/init
 	 * => we have better not writing it */
 	if (test_bit(DQ_ACTIVE_B, &dquot->dq_flags)) {
@@ -518,7 +513,6 @@ static void invalidate_dquots(struct super_block *sb, int type)
 	struct quota_info *dqopt = dqopts(sb);
 
 restart:
-	spin_lock(&dq_list_lock);
 	spin_lock(&dqopt->dq_list_lock);
 	list_for_each_entry_safe(dquot, tmp, &dqopt->dq_inuse_list, dq_inuse) {
 		if (dquot->dq_sb != sb)
@@ -533,7 +527,6 @@ restart:
 			prepare_to_wait(&dquot->dq_wait_unused, &wait,
 					TASK_UNINTERRUPTIBLE);
 			spin_unlock(&dqopt->dq_list_lock);
-			spin_unlock(&dq_list_lock);
 			/* Once dqput() wakes us up, we know it's time to free
 			 * the dquot.
 			 * IMPORTANT: we rely on the fact that there is always
@@ -560,7 +553,6 @@ restart:
 		do_destroy_dquot(dquot);
 	}
 	spin_unlock(&dqopt->dq_list_lock);
-	spin_unlock(&dq_list_lock);
 }
 
 /* Call callback for every active dquot on given filesystem */
@@ -574,7 +566,6 @@ int dquot_scan_active(struct super_block *sb,
 
 	mutex_lock(&dqctl(sb)->dqonoff_mutex);
 	dqopt = dqopts(sb);
-	spin_lock(&dq_list_lock);
 	spin_lock(&dqopt->dq_list_lock);
 	list_for_each_entry(dquot, &dqopt->dq_inuse_list, dq_inuse) {
 		if (!test_bit(DQ_ACTIVE_B, &dquot->dq_flags))
@@ -584,20 +575,17 @@ int dquot_scan_active(struct super_block *sb,
 		/* Now we have active dquot so we can just increase use count */
 		atomic_inc(&dquot->dq_count);
 		spin_unlock(&dqopt->dq_list_lock);
-		spin_unlock(&dq_list_lock);
 		dqstats_inc(DQST_LOOKUPS);
 		dqput(old_dquot);
 		old_dquot = dquot;
 		ret = fn(dquot, priv);
 		if (ret < 0)
 			goto out;
-		spin_lock(&dq_list_lock);
 		spin_lock(&dqopt->dq_list_lock);
 		/* We are safe to continue now because our dquot could not
 		 * be moved out of the inuse list while we hold the reference */
 	}
 	spin_unlock(&dqopt->dq_list_lock);
-	spin_unlock(&dq_list_lock);
 out:
 	dqput(old_dquot);
 	mutex_unlock(&dqctl(sb)->dqonoff_mutex);
@@ -619,7 +607,6 @@ int dquot_quota_sync(struct super_block *sb, int type, int wait)
 			continue;
 		if (!sb_has_quota_active(sb, cnt))
 			continue;
-		spin_lock(&dq_list_lock);
 		spin_lock(&dqopt->dq_list_lock);
 		dirty = &dqopt->info[cnt].dqi_dirty_list;
 		while (!list_empty(dirty)) {
@@ -635,15 +622,12 @@ int dquot_quota_sync(struct super_block *sb, int type, int wait)
 			 * use count */
 			atomic_inc(&dquot->dq_count);
 			spin_unlock(&dqopt->dq_list_lock);
-			spin_unlock(&dq_list_lock);
 			dqstats_inc(DQST_LOOKUPS);
 			dqctl(sb)->dq_op->write_dquot(dquot);
 			dqput(dquot);
 			spin_lock(&dqopt->dq_list_lock);
-			spin_lock(&dq_list_lock);
 		}
 		spin_unlock(&dqopt->dq_list_lock);
-		spin_unlock(&dq_list_lock);
 	}
 
 	for (cnt = 0; cnt < MAXQUOTAS; cnt++)
@@ -723,9 +707,7 @@ static void prune_dqcache(int count)
 static int shrink_dqcache_memory(struct shrinker *shrink, int nr, gfp_t gfp_mask)
 {
 	if (nr) {
-		spin_lock(&dq_list_lock);
 		prune_dqcache(nr);
-		spin_unlock(&dq_list_lock);
 	}
 	return ((unsigned)
 		percpu_counter_read_positive(&dqstats.counter[DQST_FREE_DQUOTS])
@@ -758,7 +740,6 @@ void dqput(struct dquot *dquot)
 	dqopt = sb_dqopts(dquot);
 	dqstats_inc(DQST_DROPS);
 we_slept:
-	spin_lock(&dq_list_lock);
 	spin_lock(&dqopt->dq_list_lock);
 	if (atomic_read(&dquot->dq_count) > 1) {
 		/* We have more than one user... nothing to do */
@@ -768,13 +749,11 @@ we_slept:
 		    atomic_read(&dquot->dq_count) == 1)
 			wake_up(&dquot->dq_wait_unused);
 		spin_unlock(&dqopt->dq_list_lock);
-		spin_unlock(&dq_list_lock);
 		return;
 	}
 	/* Need to release dquot? */
 	if (test_bit(DQ_ACTIVE_B, &dquot->dq_flags) && dquot_dirty(dquot)) {
 		spin_unlock(&dqopt->dq_list_lock);
-		spin_unlock(&dq_list_lock);
 		/* Commit dquot before releasing */
 		ret = dqctl(dquot->dq_sb)->dq_op->write_dquot(dquot);
 		if (ret < 0) {
@@ -785,11 +764,9 @@ we_slept:
 			 * We clear dirty bit anyway, so that we avoid
 			 * infinite loop here
 			 */
-			spin_lock(&dq_list_lock);
 			spin_lock(&dqopt->dq_list_lock);
 			clear_dquot_dirty(dquot);
 			spin_unlock(&dqopt->dq_list_lock);
-			spin_unlock(&dq_list_lock);
 		}
 		goto we_slept;
 	}
@@ -797,7 +774,6 @@ we_slept:
 	clear_dquot_dirty(dquot);
 	if (test_bit(DQ_ACTIVE_B, &dquot->dq_flags)) {
 		spin_unlock(&dqopt->dq_list_lock);
-		spin_unlock(&dq_list_lock);
 		dqctl(dquot->dq_sb)->dq_op->release_dquot(dquot);
 		goto we_slept;
 	}
@@ -808,7 +784,6 @@ we_slept:
 #endif
 	put_dquot_last(dquot);
 	spin_unlock(&dqopt->dq_list_lock);
-	spin_unlock(&dq_list_lock);
 }
 EXPORT_SYMBOL(dqput);
 
@@ -863,11 +838,9 @@ struct dquot *dqget(struct super_block *sb, unsigned int id, int type)
 	idx = srcu_read_lock(&dqopt->dq_srcu);
 	rcu_read_unlock();
 we_slept:
-	spin_lock(&dq_list_lock);
 	spin_lock(&dqopt->dq_list_lock);
 	if (!sb_has_quota_active(sb, type)) {
 		spin_unlock(&dqopt->dq_list_lock);
-		spin_unlock(&dq_list_lock);
 		goto out;
 	}
 	hlist_bl_lock(blh);
@@ -876,7 +849,6 @@ we_slept:
 		if (!empty) {
 			hlist_bl_unlock(blh);
 			spin_unlock(&dqopt->dq_list_lock);
-			spin_unlock(&dq_list_lock);
 			empty = get_empty_dquot(sb, type);
 			if (!empty)
 				schedule();	/* Try to wait for a moment... */
@@ -891,7 +863,6 @@ we_slept:
 		/* all dquots go on the inuse_list */
 		put_inuse(dquot);
 		spin_unlock(&dqopt->dq_list_lock);
-		spin_unlock(&dq_list_lock);
 		dqstats_inc(DQST_LOOKUPS);
 	} else {
 		if (!atomic_read(&dquot->dq_count))
@@ -899,7 +870,6 @@ we_slept:
 		atomic_inc(&dquot->dq_count);
 		hlist_bl_unlock(blh);
 		spin_unlock(&dqopt->dq_list_lock);
-		spin_unlock(&dq_list_lock);
 		dqstats_inc(DQST_CACHE_HITS);
 		dqstats_inc(DQST_LOOKUPS);
 	}
@@ -1016,13 +986,11 @@ static int remove_inode_dquot_ref(struct inode *inode, int type,
 					    "dq_count %d to dispose list",
 					    atomic_read(&dquot->dq_count));
 #endif
-			spin_lock(&dq_list_lock);
 			spin_lock(&dqopt->dq_list_lock);
 			/* As dquot must have currently users it can't be on
 			 * the free list... */
 			list_add(&dquot->dq_free, tofree_head);
 			spin_unlock(&dqopt->dq_list_lock);
-			spin_unlock(&dq_list_lock);
 			return 1;
 		}
 		else
-- 
1.6.5.2


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH 12/19] quota: rename dq_lock
  2010-11-11 12:14 [PATCH 00/19] quota: RFC SMP improvements for generic quota V3 Dmitry Monakhov
                   ` (10 preceding siblings ...)
  2010-11-11 12:14 ` [PATCH 11/19] quota: remove global dq_list_lock Dmitry Monakhov
@ 2010-11-11 12:14 ` Dmitry Monakhov
  2010-11-11 12:14 ` [PATCH 13/19] quota: make per-sb dq_data_lock Dmitry Monakhov
                   ` (7 subsequent siblings)
  19 siblings, 0 replies; 32+ messages in thread
From: Dmitry Monakhov @ 2010-11-11 12:14 UTC (permalink / raw)
  To: linux-fsdevel; +Cc: jack, hch, Dmitry Monakhov

Give the dquot mutex a more appropriate name.

Signed-off-by: Dmitry Monakhov <dmonakhov@openvz.org>
---
 fs/ocfs2/quota_global.c |   14 +++++++-------
 fs/quota/dquot.c        |   26 +++++++++++++-------------
 fs/quota/quota_tree.c   |    2 +-
 include/linux/quota.h   |    2 +-
 4 files changed, 22 insertions(+), 22 deletions(-)

diff --git a/fs/ocfs2/quota_global.c b/fs/ocfs2/quota_global.c
index cdae8d1..b464947 100644
--- a/fs/ocfs2/quota_global.c
+++ b/fs/ocfs2/quota_global.c
@@ -32,7 +32,7 @@
  * Locking of quotas with OCFS2 is rather complex. Here are rules that
  * should be obeyed by all the functions:
  * - any write of quota structure (either to local or global file) is protected
- *   by dqio_mutex or dquot->dq_lock.
+ *   by dqio_mutex or dquot->dq_mutex.
  * - any modification of global quota file holds inode cluster lock, i_mutex,
  *   and ip_alloc_sem of the global quota file (achieved by
  *   ocfs2_lock_global_qf). It also has to hold qinfo_lock.
@@ -47,13 +47,13 @@
  *     write to gf
  *						       -> write to lf
  * Acquire dquot for the first time:
- *   dq_lock -> ocfs2_lock_global_qf -> qinfo_lock -> read from gf
+ *   dq_mutex -> ocfs2_lock_global_qf -> qinfo_lock -> read from gf
  *				     -> alloc space for gf
  *				     -> start_trans -> qinfo_lock -> write to gf
  *	     -> ip_alloc_sem of lf -> alloc space for lf
  *	     -> write to lf
  * Release last reference to dquot:
- *   dq_lock -> ocfs2_lock_global_qf -> start_trans -> qinfo_lock -> write to gf
+ *   dq_mutex -> ocfs2_lock_global_qf -> start_trans -> qinfo_lock -> write to gf
  *	     -> write to lf
  * Note that all the above operations also hold the inode cluster lock of lf.
  * Recovery:
@@ -690,7 +690,7 @@ static int ocfs2_release_dquot(struct dquot *dquot)
 
 	mlog_entry("id=%u, type=%d", dquot->dq_id, dquot->dq_type);
 
-	mutex_lock(&dquot->dq_lock);
+	mutex_lock(&dquot->dq_mutex);
 	/* Check whether we are not racing with some other dqget() */
 	if (atomic_read(&dquot->dq_count) > 1)
 		goto out;
@@ -723,7 +723,7 @@ out_trans:
 out_ilock:
 	ocfs2_unlock_global_qf(oinfo, 1);
 out:
-	mutex_unlock(&dquot->dq_lock);
+	mutex_unlock(&dquot->dq_mutex);
 	mlog_exit(status);
 	return status;
 }
@@ -746,7 +746,7 @@ static int ocfs2_acquire_dquot(struct dquot *dquot)
 	handle_t *handle;
 
 	mlog_entry("id=%u, type=%d", dquot->dq_id, type);
-	mutex_lock(&dquot->dq_lock);
+	mutex_lock(&dquot->dq_mutex);
 	/*
 	 * We need an exclusive lock, because we're going to update use count
 	 * and instantiate possibly new dquot structure
@@ -810,7 +810,7 @@ out_dq:
 		goto out;
 	set_bit(DQ_ACTIVE_B, &dquot->dq_flags);
 out:
-	mutex_unlock(&dquot->dq_lock);
+	mutex_unlock(&dquot->dq_mutex);
 	mlog_exit(status);
 	return status;
 }
diff --git a/fs/quota/dquot.c b/fs/quota/dquot.c
index 2aa8faf..2317a3b 100644
--- a/fs/quota/dquot.c
+++ b/fs/quota/dquot.c
@@ -103,17 +103,17 @@
  * sure they cannot race with quotaon which first sets S_NOQUOTA flag and
  * then drops all pointers to dquots from an inode.
  *
- * Each dquot has its dq_lock mutex. Locked dquots might not be referenced
- * from inodes (dquot_alloc_space() and such don't check the dq_lock).
+ * Each dquot has its dq_mutex mutex. Locked dquots might not be referenced
+ * from inodes (dquot_alloc_space() and such don't check the dq_mutex).
  * Currently dquot is locked only when it is being read to memory (or space for
  * it is being allocated) on the first dqget() and when it is being released on
  * the last dqput(). The allocation and release oparations are serialized by
- * the dq_lock and by checking the use count in dquot_release().  Write
- * operations on dquots don't hold dq_lock as they copy data under dq_data_lock
+ * the dq_mutex and by checking the use count in dquot_release().  Write
+ * operations on dquots don't hold dq_mutex as they copy data under dq_data_lock
  * spinlock to internal buffers before writing.
  *
  * Lock ordering (including related VFS locks) is the following:
- *   i_mutex > dqonoff_sem > journal_lock > dqptr_sem > dquot->dq_lock >
+ *   i_mutex > dqonoff_sem > journal_lock > dqptr_sem > dquot->dq_mutex >
  *   dqio_mutex
  * The lock ordering of dqptr_sem imposed by quota code is only dqonoff_sem >
  * dqptr_sem. But filesystem has to count with the fact that functions such as
@@ -314,8 +314,8 @@ static inline void remove_inuse(struct dquot *dquot)
 
 static void wait_on_dquot(struct dquot *dquot)
 {
-	mutex_lock(&dquot->dq_lock);
-	mutex_unlock(&dquot->dq_lock);
+	mutex_lock(&dquot->dq_mutex);
+	mutex_unlock(&dquot->dq_mutex);
 }
 
 static inline int dquot_dirty(struct dquot *dquot)
@@ -397,7 +397,7 @@ int dquot_acquire(struct dquot *dquot)
 	int ret = 0, ret2 = 0;
 	struct quota_info *dqopt = sb_dqopts(dquot);
 
-	mutex_lock(&dquot->dq_lock);
+	mutex_lock(&dquot->dq_mutex);
 	mutex_lock(&dqopt->dqio_mutex);
 	if (!test_bit(DQ_READ_B, &dquot->dq_flags))
 		ret = dqopt->fmt_ops[dquot->dq_type]->read_dqblk(dquot);
@@ -422,7 +422,7 @@ int dquot_acquire(struct dquot *dquot)
 	set_bit(DQ_ACTIVE_B, &dquot->dq_flags);
 out_iolock:
 	mutex_unlock(&dqopt->dqio_mutex);
-	mutex_unlock(&dquot->dq_lock);
+	mutex_unlock(&dquot->dq_mutex);
 	return ret;
 }
 EXPORT_SYMBOL(dquot_acquire);
@@ -467,7 +467,7 @@ int dquot_release(struct dquot *dquot)
 	int ret = 0, ret2 = 0;
 	struct quota_info *dqopt = sb_dqopts(dquot);
 
-	mutex_lock(&dquot->dq_lock);
+	mutex_lock(&dquot->dq_mutex);
 	/* Check whether we are not racing with some other dqget() */
 	if (atomic_read(&dquot->dq_count) > 1)
 		goto out_dqlock;
@@ -485,7 +485,7 @@ int dquot_release(struct dquot *dquot)
 	clear_bit(DQ_ACTIVE_B, &dquot->dq_flags);
 	mutex_unlock(&dqopt->dqio_mutex);
 out_dqlock:
-	mutex_unlock(&dquot->dq_lock);
+	mutex_unlock(&dquot->dq_mutex);
 	return ret;
 }
 EXPORT_SYMBOL(dquot_release);
@@ -801,7 +801,7 @@ static struct dquot *get_empty_dquot(struct super_block *sb, int type)
 	if(!dquot)
 		return NULL;
 
-	mutex_init(&dquot->dq_lock);
+	mutex_init(&dquot->dq_mutex);
 	INIT_LIST_HEAD(&dquot->dq_free);
 	INIT_LIST_HEAD(&dquot->dq_inuse);
 	INIT_LIST_HEAD(&dquot->dq_dirty);
@@ -873,7 +873,7 @@ we_slept:
 		dqstats_inc(DQST_CACHE_HITS);
 		dqstats_inc(DQST_LOOKUPS);
 	}
-	/* Wait for dq_lock - after this we know that either dquot_release() is
+	/* Wait for dq_mutex - after this we know that either dquot_release() is
 	 * already finished or it will be canceled due to dq_count > 1 test */
 	wait_on_dquot(dquot);
 	/* Read the dquot / allocate space in quota file */
diff --git a/fs/quota/quota_tree.c b/fs/quota/quota_tree.c
index c0917f4..21a4a6a 100644
--- a/fs/quota/quota_tree.c
+++ b/fs/quota/quota_tree.c
@@ -647,7 +647,7 @@ out:
 EXPORT_SYMBOL(qtree_read_dquot);
 
 /* Check whether dquot should not be deleted. We know we are
- * the only one operating on dquot (thanks to dq_lock) */
+ * the only one operating on dquot (thanks to dq_mutex) */
 int qtree_release_dquot(struct qtree_mem_dqinfo *info, struct dquot *dquot)
 {
 	if (test_bit(DQ_FAKE_B, &dquot->dq_flags) &&
diff --git a/include/linux/quota.h b/include/linux/quota.h
index 1661afa..d07094b 100644
--- a/include/linux/quota.h
+++ b/include/linux/quota.h
@@ -289,7 +289,7 @@ struct dquot {
 	struct list_head dq_inuse;	/* List of all quotas */
 	struct list_head dq_free;	/* Free list element */
 	struct list_head dq_dirty;	/* List of dirty dquots */
-	struct mutex dq_lock;		/* dquot IO lock */
+	struct mutex dq_mutex;		/* dquot IO mutex */
 	atomic_t dq_count;		/* Use count */
 	wait_queue_head_t dq_wait_unused;	/* Wait queue for dquot to become unused */
 	struct super_block *dq_sb;	/* superblock this applies to */
-- 
1.6.5.2


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH 13/19] quota: make per-sb dq_data_lock
  2010-11-11 12:14 [PATCH 00/19] quota: RFC SMP improvements for generic quota V3 Dmitry Monakhov
                   ` (11 preceding siblings ...)
  2010-11-11 12:14 ` [PATCH 12/19] quota: rename dq_lock Dmitry Monakhov
@ 2010-11-11 12:14 ` Dmitry Monakhov
  2010-11-11 12:14 ` [PATCH 14/19] quota: protect dquot mem info with object's lock Dmitry Monakhov
                   ` (6 subsequent siblings)
  19 siblings, 0 replies; 32+ messages in thread
From: Dmitry Monakhov @ 2010-11-11 12:14 UTC (permalink / raw)
  To: linux-fsdevel; +Cc: jack, hch, Dmitry Monakhov

Currently dq_data_lock is global, which is bad for scalability. In fact
different super_blocks share no quota data, so we can simply convert the
global lock into per-sb locks.

Signed-off-by: Dmitry Monakhov <dmonakhov@openvz.org>
---
 fs/ocfs2/quota_global.c |   29 ++++++++++++++-------------
 fs/ocfs2/quota_local.c  |   13 ++++++-----
 fs/quota/dquot.c        |   49 ++++++++++++++++++++++++-----------------------
 fs/quota/quota_tree.c   |    8 +++---
 fs/quota/quota_v2.c     |    4 +-
 include/linux/quota.h   |    3 +-
 6 files changed, 54 insertions(+), 52 deletions(-)

diff --git a/fs/ocfs2/quota_global.c b/fs/ocfs2/quota_global.c
index b464947..d65d18a 100644
--- a/fs/ocfs2/quota_global.c
+++ b/fs/ocfs2/quota_global.c
@@ -296,21 +296,22 @@ int ocfs2_lock_global_qf(struct ocfs2_mem_dqinfo *oinfo, int ex)
 {
 	int status;
 	struct buffer_head *bh = NULL;
+	struct inode *inode = oinfo->dqi_gqinode;
 
-	status = ocfs2_inode_lock(oinfo->dqi_gqinode, &bh, ex);
+	status = ocfs2_inode_lock(inode, &bh, ex);
 	if (status < 0)
 		return status;
-	spin_lock(&dq_data_lock);
+	spin_lock(&dqopts(inode->i_sb)->dq_data_lock);
 	if (!oinfo->dqi_gqi_count++)
 		oinfo->dqi_gqi_bh = bh;
 	else
 		WARN_ON(bh != oinfo->dqi_gqi_bh);
-	spin_unlock(&dq_data_lock);
+	spin_unlock(&dqopts(inode->i_sb)->dq_data_lock);
 	if (ex) {
-		mutex_lock(&oinfo->dqi_gqinode->i_mutex);
-		down_write(&OCFS2_I(oinfo->dqi_gqinode)->ip_alloc_sem);
+		mutex_lock(&inode->i_mutex);
+		down_write(&OCFS2_I(inode)->ip_alloc_sem);
 	} else {
-		down_read(&OCFS2_I(oinfo->dqi_gqinode)->ip_alloc_sem);
+		down_read(&OCFS2_I(inode)->ip_alloc_sem);
 	}
 	return 0;
 }
@@ -325,10 +326,10 @@ void ocfs2_unlock_global_qf(struct ocfs2_mem_dqinfo *oinfo, int ex)
 	}
 	ocfs2_inode_unlock(oinfo->dqi_gqinode, ex);
 	brelse(oinfo->dqi_gqi_bh);
-	spin_lock(&dq_data_lock);
+	spin_lock(&dqopts(oinfo->dqi_gqinode->i_sb)->dq_data_lock);
 	if (!--oinfo->dqi_gqi_count)
 		oinfo->dqi_gqi_bh = NULL;
-	spin_unlock(&dq_data_lock);
+	spin_unlock(&dqopts(oinfo->dqi_gqinode->i_sb)->dq_data_lock);
 }
 
 /* Read information header from global quota file */
@@ -421,11 +422,11 @@ static int __ocfs2_global_write_info(struct super_block *sb, int type)
 	struct ocfs2_global_disk_dqinfo dinfo;
 	ssize_t size;
 
-	spin_lock(&dq_data_lock);
+	spin_lock(&dqopts(sb)->dq_data_lock);
 	info->dqi_flags &= ~DQF_INFO_DIRTY;
 	dinfo.dqi_bgrace = cpu_to_le32(info->dqi_bgrace);
 	dinfo.dqi_igrace = cpu_to_le32(info->dqi_igrace);
-	spin_unlock(&dq_data_lock);
+	spin_unlock(&dqopts(sb)->dq_data_lock);
 	dinfo.dqi_syncms = cpu_to_le32(oinfo->dqi_syncms);
 	dinfo.dqi_blocks = cpu_to_le32(oinfo->dqi_gi.dqi_blocks);
 	dinfo.dqi_free_blk = cpu_to_le32(oinfo->dqi_gi.dqi_free_blk);
@@ -502,7 +503,7 @@ int __ocfs2_sync_dquot(struct dquot *dquot, int freeing)
 	/* Update space and inode usage. Get also other information from
 	 * global quota file so that we don't overwrite any changes there.
 	 * We are */
-	spin_lock(&dq_data_lock);
+	spin_lock(&dqopts(sb)->dq_data_lock);
 	spacechange = dquot->dq_dqb.dqb_curspace -
 					OCFS2_DQUOT(dquot)->dq_origspace;
 	inodechange = dquot->dq_dqb.dqb_curinodes -
@@ -556,7 +557,7 @@ int __ocfs2_sync_dquot(struct dquot *dquot, int freeing)
 	__clear_bit(DQ_LASTSET_B + QIF_ITIME_B, &dquot->dq_flags);
 	OCFS2_DQUOT(dquot)->dq_origspace = dquot->dq_dqb.dqb_curspace;
 	OCFS2_DQUOT(dquot)->dq_originodes = dquot->dq_dqb.dqb_curinodes;
-	spin_unlock(&dq_data_lock);
+	spin_unlock(&dqopts(sb)->dq_data_lock);
 	err = ocfs2_qinfo_lock(info, freeing);
 	if (err < 0) {
 		mlog(ML_ERROR, "Failed to lock quota info, loosing quota write"
@@ -835,10 +836,10 @@ static int ocfs2_mark_dquot_dirty(struct dquot *dquot)
 
 	/* In case user set some limits, sync dquot immediately to global
 	 * quota file so that information propagates quicker */
-	spin_lock(&dq_data_lock);
+	spin_lock(&dqopts(sb)->dq_data_lock);
 	if (dquot->dq_flags & mask)
 		sync = 1;
-	spin_unlock(&dq_data_lock);
+	spin_unlock(&dqopts(sb)->dq_data_lock);
 	/* This is a slight hack but we can't afford getting global quota
 	 * lock if we already have a transaction started. */
 	if (!sync || journal_current_handle()) {
diff --git a/fs/ocfs2/quota_local.c b/fs/ocfs2/quota_local.c
index 7c30ba3..2d2e981 100644
--- a/fs/ocfs2/quota_local.c
+++ b/fs/ocfs2/quota_local.c
@@ -288,14 +288,15 @@ static void olq_update_info(struct buffer_head *bh, void *private)
 	struct mem_dqinfo *info = private;
 	struct ocfs2_mem_dqinfo *oinfo = info->dqi_priv;
 	struct ocfs2_local_disk_dqinfo *ldinfo;
+	struct quota_info *dqopt = dqopts(oinfo->dqi_gqinode->i_sb);
 
 	ldinfo = (struct ocfs2_local_disk_dqinfo *)(bh->b_data +
 						OCFS2_LOCAL_INFO_OFF);
-	spin_lock(&dq_data_lock);
+	spin_lock(&dqopt->dq_data_lock);
 	ldinfo->dqi_flags = cpu_to_le32(info->dqi_flags & DQF_MASK);
 	ldinfo->dqi_chunks = cpu_to_le32(oinfo->dqi_chunks);
 	ldinfo->dqi_blocks = cpu_to_le32(oinfo->dqi_blocks);
-	spin_unlock(&dq_data_lock);
+	spin_unlock(&dqopt->dq_data_lock);
 }
 
 static int ocfs2_add_recovery_chunk(struct super_block *sb,
@@ -523,7 +524,7 @@ static int ocfs2_recover_local_quota_file(struct inode *lqinode,
 				goto out_drop_lock;
 			}
 			mutex_lock(&dqopts(sb)->dqio_mutex);
-			spin_lock(&dq_data_lock);
+			spin_lock(&dqopts(sb)->dq_data_lock);
 			/* Add usage from quota entry into quota changes
 			 * of our node. Auxiliary variables are important
 			 * due to signedness */
@@ -531,7 +532,7 @@ static int ocfs2_recover_local_quota_file(struct inode *lqinode,
 			inodechange = le64_to_cpu(dqblk->dqb_inodemod);
 			dquot->dq_dqb.dqb_curspace += spacechange;
 			dquot->dq_dqb.dqb_curinodes += inodechange;
-			spin_unlock(&dq_data_lock);
+			spin_unlock(&dqopts(sb)->dq_data_lock);
 			/* We want to drop reference held by the crashed
 			 * node. Since we have our own reference we know
 			 * global structure actually won't be freed. */
@@ -876,12 +877,12 @@ static void olq_set_dquot(struct buffer_head *bh, void *private)
 		+ ol_dqblk_block_offset(sb, od->dq_local_off));
 
 	dqblk->dqb_id = cpu_to_le64(od->dq_dquot.dq_id);
-	spin_lock(&dq_data_lock);
+	spin_lock(&dqopts(sb)->dq_data_lock);
 	dqblk->dqb_spacemod = cpu_to_le64(od->dq_dquot.dq_dqb.dqb_curspace -
 					  od->dq_origspace);
 	dqblk->dqb_inodemod = cpu_to_le64(od->dq_dquot.dq_dqb.dqb_curinodes -
 					  od->dq_originodes);
-	spin_unlock(&dq_data_lock);
+	spin_unlock(&dqopts(sb)->dq_data_lock);
 	mlog(0, "Writing local dquot %u space %lld inodes %lld\n",
 	     od->dq_dquot.dq_id, (long long)le64_to_cpu(dqblk->dqb_spacemod),
 	     (long long)le64_to_cpu(dqblk->dqb_inodemod));
diff --git a/fs/quota/dquot.c b/fs/quota/dquot.c
index 2317a3b..0dcf61e 100644
--- a/fs/quota/dquot.c
+++ b/fs/quota/dquot.c
@@ -125,8 +125,6 @@
  */
 
 static __cacheline_aligned_in_smp DEFINE_SPINLOCK(dq_fmt_lock);
-__cacheline_aligned_in_smp DEFINE_SPINLOCK(dq_data_lock);
-EXPORT_SYMBOL(dq_data_lock);
 
 void __quota_error(struct super_block *sb, const char *func,
 		  const char *fmt, ...)
@@ -1406,6 +1404,7 @@ static void __dquot_initialize(struct inode *inode, int type)
 	down_write(&dqopts(sb)->dqptr_sem);
 	if (IS_NOQUOTA(inode))
 		goto out_err;
+	spin_lock(&dqopts(sb)->dq_data_lock);
 	for (cnt = 0; cnt < MAXQUOTAS; cnt++) {
 		if (type != -1 && cnt != type)
 			continue;
@@ -1427,6 +1426,7 @@ static void __dquot_initialize(struct inode *inode, int type)
 				dquot_resv_space(inode->i_dquot[cnt], rsv);
 		}
 	}
+	spin_unlock(&dqopts(sb)->dq_data_lock);
 out_err:
 	up_write(&dqopts(sb)->dqptr_sem);
 	srcu_read_unlock(&dqopts(sb)->dq_srcu, idx);
@@ -1602,14 +1602,14 @@ int __dquot_alloc_space(struct inode *inode, qsize_t number, int flags)
 	for (cnt = 0; cnt < MAXQUOTAS; cnt++)
 		warntype[cnt] = QUOTA_NL_NOWARN;
 
-	spin_lock(&dq_data_lock);
+	spin_lock(&dqopts(inode->i_sb)->dq_data_lock);
 	for (cnt = 0; cnt < MAXQUOTAS; cnt++) {
 		if (!inode->i_dquot[cnt])
 			continue;
 		ret = check_bdq(inode->i_dquot[cnt], number, !warn,
 				warntype+cnt);
 		if (ret && !nofail) {
-			spin_unlock(&dq_data_lock);
+			spin_unlock(&dqopts(inode->i_sb)->dq_data_lock);
 			goto out_flush_warn;
 		}
 	}
@@ -1622,7 +1622,7 @@ int __dquot_alloc_space(struct inode *inode, qsize_t number, int flags)
 			dquot_incr_space(inode->i_dquot[cnt], number);
 	}
 	inode_incr_space(inode, number, reserve);
-	spin_unlock(&dq_data_lock);
+	spin_unlock(&dqopts(inode->i_sb)->dq_data_lock);
 
 	if (reserve)
 		goto out_flush_warn;
@@ -1657,7 +1657,7 @@ int dquot_alloc_inode(const struct inode *inode)
 	for (cnt = 0; cnt < MAXQUOTAS; cnt++)
 		warntype[cnt] = QUOTA_NL_NOWARN;
 	down_read(&dqopts(inode->i_sb)->dqptr_sem);
-	spin_lock(&dq_data_lock);
+	spin_lock(&dqopts(inode->i_sb)->dq_data_lock);
 	for (cnt = 0; cnt < MAXQUOTAS; cnt++) {
 		if (!inode->i_dquot[cnt])
 			continue;
@@ -1673,7 +1673,7 @@ int dquot_alloc_inode(const struct inode *inode)
 	}
 
 warn_put_all:
-	spin_unlock(&dq_data_lock);
+	spin_unlock(&dqopts(inode->i_sb)->dq_data_lock);
 	if (ret == 0)
 		mark_all_dquot_dirty(inode->i_dquot);
 	flush_warnings(inode->i_dquot, warntype);
@@ -1699,7 +1699,7 @@ int dquot_claim_space_nodirty(struct inode *inode, qsize_t number)
 	idx = srcu_read_lock(&dqopts(inode->i_sb)->dq_srcu);
 	rcu_read_unlock();
 	down_read(&dqopts(inode->i_sb)->dqptr_sem);
-	spin_lock(&dq_data_lock);
+	spin_lock(&dqopts(inode->i_sb)->dq_data_lock);
 	/* Claim reserved quotas to allocated quotas */
 	for (cnt = 0; cnt < MAXQUOTAS; cnt++) {
 		if (inode->i_dquot[cnt])
@@ -1708,7 +1708,7 @@ int dquot_claim_space_nodirty(struct inode *inode, qsize_t number)
 	}
 	/* Update inode bytes */
 	inode_claim_rsv_space(inode, number);
-	spin_unlock(&dq_data_lock);
+	spin_unlock(&dqopts(inode->i_sb)->dq_data_lock);
 	mark_all_dquot_dirty(inode->i_dquot);
 	up_read(&dqopts(inode->i_sb)->dqptr_sem);
 	srcu_read_unlock(&dqopts(inode->i_sb)->dq_srcu, idx);
@@ -1737,7 +1737,7 @@ void __dquot_free_space(struct inode *inode, qsize_t number, int flags)
 	idx = srcu_read_lock(&dqopts(inode->i_sb)->dq_srcu);
 	rcu_read_unlock();
 	down_read(&dqopts(inode->i_sb)->dqptr_sem);
-	spin_lock(&dq_data_lock);
+	spin_lock(&dqopts(inode->i_sb)->dq_data_lock);
 	for (cnt = 0; cnt < MAXQUOTAS; cnt++) {
 		if (!inode->i_dquot[cnt])
 			continue;
@@ -1748,7 +1748,7 @@ void __dquot_free_space(struct inode *inode, qsize_t number, int flags)
 			dquot_decr_space(inode->i_dquot[cnt], number);
 	}
 	inode_decr_space(inode, number, reserve);
-	spin_unlock(&dq_data_lock);
+	spin_unlock(&dqopts(inode->i_sb)->dq_data_lock);
 
 	if (reserve)
 		goto out_unlock;
@@ -1778,14 +1778,14 @@ void dquot_free_inode(const struct inode *inode)
 	idx = srcu_read_lock(&dqopts(inode->i_sb)->dq_srcu);
 	rcu_read_unlock();
 	down_read(&dqopts(inode->i_sb)->dqptr_sem);
-	spin_lock(&dq_data_lock);
+	spin_lock(&dqopts(inode->i_sb)->dq_data_lock);
 	for (cnt = 0; cnt < MAXQUOTAS; cnt++) {
 		if (!inode->i_dquot[cnt])
 			continue;
 		warntype[cnt] = info_idq_free(inode->i_dquot[cnt], 1);
 		dquot_decr_inodes(inode->i_dquot[cnt], 1);
 	}
-	spin_unlock(&dq_data_lock);
+	spin_unlock(&dqopts(inode->i_sb)->dq_data_lock);
 	mark_all_dquot_dirty(inode->i_dquot);
 	flush_warnings(inode->i_dquot, warntype);
 	up_read(&dqopts(inode->i_sb)->dqptr_sem);
@@ -1832,7 +1832,7 @@ int __dquot_transfer(struct inode *inode, struct dquot **transfer_to)
 		srcu_read_unlock(&dqopts(inode->i_sb)->dq_srcu, idx);
 		return 0;
 	}
-	spin_lock(&dq_data_lock);
+	spin_lock(&dqopts(inode->i_sb)->dq_data_lock);
 	cur_space = inode_get_bytes(inode);
 	rsv_space = inode_get_rsv_space(inode);
 	space = cur_space + rsv_space;
@@ -1880,7 +1880,7 @@ int __dquot_transfer(struct inode *inode, struct dquot **transfer_to)
 
 		inode->i_dquot[cnt] = transfer_to[cnt];
 	}
-	spin_unlock(&dq_data_lock);
+	spin_unlock(&dqopts(inode->i_sb)->dq_data_lock);
 	up_write(&dqopts(inode->i_sb)->dqptr_sem);
 	srcu_read_unlock(&dqopts(inode->i_sb)->dq_srcu, idx);
 	mark_all_dquot_dirty(transfer_from);
@@ -1894,7 +1894,7 @@ int __dquot_transfer(struct inode *inode, struct dquot **transfer_to)
 			transfer_to[cnt] = transfer_from[cnt];
 	return 0;
 over_quota:
-	spin_unlock(&dq_data_lock);
+	spin_unlock(&dqopts(inode->i_sb)->dq_data_lock);
 	up_write(&dqopts(inode->i_sb)->dqptr_sem);
 	srcu_read_unlock(&dqopts(inode->i_sb)->dq_srcu, idx);
 	flush_warnings(transfer_to, warntype_to);
@@ -1999,6 +1999,7 @@ static int alloc_quota_info(struct quota_ctl_info *dqctl) {
 	mutex_init(&dqopt->dqio_mutex);
 	init_rwsem(&dqopt->dqptr_sem);
 	spin_lock_init(&dqopt->dq_list_lock);
+	spin_lock_init(&dqopt->dq_data_lock);
 	INIT_LIST_HEAD(&dqopt->dq_inuse_list);
 	INIT_LIST_HEAD(&dqopt->dq_free_list);
 
@@ -2453,7 +2454,7 @@ static void do_get_dqblk(struct dquot *dquot, struct fs_disk_quota *di)
 			FS_USER_QUOTA : FS_GROUP_QUOTA;
 	di->d_id = dquot->dq_id;
 
-	spin_lock(&dq_data_lock);
+	spin_lock(&sb_dqopts(dquot)->dq_data_lock);
 	di->d_blk_hardlimit = stoqb(dm->dqb_bhardlimit);
 	di->d_blk_softlimit = stoqb(dm->dqb_bsoftlimit);
 	di->d_ino_hardlimit = dm->dqb_ihardlimit;
@@ -2462,7 +2463,7 @@ static void do_get_dqblk(struct dquot *dquot, struct fs_disk_quota *di)
 	di->d_icount = dm->dqb_curinodes;
 	di->d_btimer = dm->dqb_btime;
 	di->d_itimer = dm->dqb_itime;
-	spin_unlock(&dq_data_lock);
+	spin_unlock(&sb_dqopts(dquot)->dq_data_lock);
 }
 
 int dquot_get_dqblk(struct super_block *sb, int type, qid_t id,
@@ -2505,7 +2506,7 @@ static int do_set_dqblk(struct dquot *dquot, struct fs_disk_quota *di)
 	     (di->d_ino_hardlimit > dqi->dqi_maxilimit)))
 		return -ERANGE;
 
-	spin_lock(&dq_data_lock);
+	spin_lock(&sb_dqopts(dquot)->dq_data_lock);
 	if (di->d_fieldmask & FS_DQ_BCOUNT) {
 		dm->dqb_curspace = di->d_bcount - dm->dqb_rsvspace;
 		check_blim = 1;
@@ -2571,7 +2572,7 @@ static int do_set_dqblk(struct dquot *dquot, struct fs_disk_quota *di)
 		clear_bit(DQ_FAKE_B, &dquot->dq_flags);
 	else
 		set_bit(DQ_FAKE_B, &dquot->dq_flags);
-	spin_unlock(&dq_data_lock);
+	spin_unlock(&sb_dqopts(dquot)->dq_data_lock);
 	mark_dquot_dirty(dquot);
 
 	return 0;
@@ -2606,12 +2607,12 @@ int dquot_get_dqinfo(struct super_block *sb, int type, struct if_dqinfo *ii)
 		return -ESRCH;
 	}
 	mi = dqopts(sb)->info + type;
-	spin_lock(&dq_data_lock);
+	spin_lock(&dqopts(sb)->dq_data_lock);
 	ii->dqi_bgrace = mi->dqi_bgrace;
 	ii->dqi_igrace = mi->dqi_igrace;
 	ii->dqi_flags = mi->dqi_flags & DQF_MASK;
 	ii->dqi_valid = IIF_ALL;
-	spin_unlock(&dq_data_lock);
+	spin_unlock(&dqopts(sb)->dq_data_lock);
 	mutex_unlock(&dqctl(sb)->dqonoff_mutex);
 	return 0;
 }
@@ -2629,7 +2630,7 @@ int dquot_set_dqinfo(struct super_block *sb, int type, struct if_dqinfo *ii)
 		goto out;
 	}
 	mi = dqopts(sb)->info + type;
-	spin_lock(&dq_data_lock);
+	spin_lock(&dqopts(sb)->dq_data_lock);
 	if (ii->dqi_valid & IIF_BGRACE)
 		mi->dqi_bgrace = ii->dqi_bgrace;
 	if (ii->dqi_valid & IIF_IGRACE)
@@ -2637,7 +2638,7 @@ int dquot_set_dqinfo(struct super_block *sb, int type, struct if_dqinfo *ii)
 	if (ii->dqi_valid & IIF_FLAGS)
 		mi->dqi_flags = (mi->dqi_flags & ~DQF_MASK) |
 				(ii->dqi_flags & DQF_MASK);
-	spin_unlock(&dq_data_lock);
+	spin_unlock(&dqopts(sb)->dq_data_lock);
 	mark_info_dirty(sb, type);
 	/* Force write to disk */
 	dqctl(sb)->dq_op->write_info(sb, type);
diff --git a/fs/quota/quota_tree.c b/fs/quota/quota_tree.c
index 21a4a6a..a089c70 100644
--- a/fs/quota/quota_tree.c
+++ b/fs/quota/quota_tree.c
@@ -375,9 +375,9 @@ int qtree_write_dquot(struct qtree_mem_dqinfo *info, struct dquot *dquot)
 			return ret;
 		}
 	}
-	spin_lock(&dq_data_lock);
+	spin_lock(&dqopts(sb)->dq_data_lock);
 	info->dqi_ops->mem2disk_dqblk(ddquot, dquot);
-	spin_unlock(&dq_data_lock);
+	spin_unlock(&dqopts(sb)->dq_data_lock);
 	ret = sb->s_op->quota_write(sb, type, ddquot, info->dqi_entry_size,
 				    dquot->dq_off);
 	if (ret != info->dqi_entry_size) {
@@ -631,14 +631,14 @@ int qtree_read_dquot(struct qtree_mem_dqinfo *info, struct dquot *dquot)
 		kfree(ddquot);
 		goto out;
 	}
-	spin_lock(&dq_data_lock);
+	spin_lock(&dqopts(sb)->dq_data_lock);
 	info->dqi_ops->disk2mem_dqblk(dquot, ddquot);
 	if (!dquot->dq_dqb.dqb_bhardlimit &&
 	    !dquot->dq_dqb.dqb_bsoftlimit &&
 	    !dquot->dq_dqb.dqb_ihardlimit &&
 	    !dquot->dq_dqb.dqb_isoftlimit)
 		set_bit(DQ_FAKE_B, &dquot->dq_flags);
-	spin_unlock(&dq_data_lock);
+	spin_unlock(&dqopts(sb)->dq_data_lock);
 	kfree(ddquot);
 out:
 	dqstats_inc(DQST_READS);
diff --git a/fs/quota/quota_v2.c b/fs/quota/quota_v2.c
index 65444d2..e4ef8de 100644
--- a/fs/quota/quota_v2.c
+++ b/fs/quota/quota_v2.c
@@ -153,12 +153,12 @@ static int v2_write_file_info(struct super_block *sb, int type)
 	struct qtree_mem_dqinfo *qinfo = info->dqi_priv;
 	ssize_t size;
 
-	spin_lock(&dq_data_lock);
+	spin_lock(&dqopts(sb)->dq_data_lock);
 	info->dqi_flags &= ~DQF_INFO_DIRTY;
 	dinfo.dqi_bgrace = cpu_to_le32(info->dqi_bgrace);
 	dinfo.dqi_igrace = cpu_to_le32(info->dqi_igrace);
 	dinfo.dqi_flags = cpu_to_le32(info->dqi_flags & DQF_MASK);
-	spin_unlock(&dq_data_lock);
+	spin_unlock(&dqopts(sb)->dq_data_lock);
 	dinfo.dqi_blocks = cpu_to_le32(qinfo->dqi_blocks);
 	dinfo.dqi_free_blk = cpu_to_le32(qinfo->dqi_free_blk);
 	dinfo.dqi_free_entry = cpu_to_le32(qinfo->dqi_free_entry);
diff --git a/include/linux/quota.h b/include/linux/quota.h
index d07094b..7693b18 100644
--- a/include/linux/quota.h
+++ b/include/linux/quota.h
@@ -188,8 +188,6 @@ enum {
 typedef __kernel_uid32_t qid_t; /* Type in which we store ids in memory */
 typedef long long qsize_t;	/* Type in which we store sizes */
 
-extern spinlock_t dq_data_lock;
-
 /* Maximal numbers of writes for quota operation (insert/delete/update)
  * (over VFS all formats) */
 #define DQUOT_INIT_ALLOC max(V1_INIT_ALLOC, V2_INIT_ALLOC)
@@ -407,6 +405,7 @@ struct quota_ctl_info {
 struct quota_info {
 	struct mutex dqio_mutex;		/* lock device while I/O in progress */
 	struct mem_dqinfo info[MAXQUOTAS];	/* Information for each quota type */
+	spinlock_t dq_data_lock;		/* protect in memory data */
 	spinlock_t dq_list_lock;		/* protect lists */
 	struct list_head dq_inuse_list;		/* list of inused dquotas */
 	struct list_head dq_free_list;		/* list of free dquotas */
-- 
1.6.5.2



* [PATCH 14/19] quota: protect dquot mem info with object's lock
  2010-11-11 12:14 [PATCH 00/19] quota: RFC SMP improvements for generic quota V3 Dmitry Monakhov
                   ` (12 preceding siblings ...)
  2010-11-11 12:14 ` [PATCH 13/19] quota: make per-sb dq_data_lock Dmitry Monakhov
@ 2010-11-11 12:14 ` Dmitry Monakhov
  2010-11-11 12:14 ` [PATCH 15/19] quota: drop dq_data_lock where possible Dmitry Monakhov
                   ` (5 subsequent siblings)
  19 siblings, 0 replies; 32+ messages in thread
From: Dmitry Monakhov @ 2010-11-11 12:14 UTC (permalink / raw)
  To: linux-fsdevel; +Cc: jack, hch, Dmitry Monakhov

Currently ->dq_data_lock is responsible for protecting three things:
1) dquot->dq_dqb info consistency
2) synchronization between ->dq_dqb and ->i_bytes
3) mem_dqinfo (per-sb data),
  3b) and consistency between mem_dqinfo and dq_dqb for the following fields:
      dqi_bgrace <=> dqb_btime
      dqi_igrace <=> dqb_itime

In fact, (1) and (2) are conceptually different from (3).
By introducing a per-dquot data lock we can later split (1) and (2) from (3).
This patch simply introduces the new lock, without changing ->dq_data_lock.

Signed-off-by: Dmitry Monakhov <dmonakhov@openvz.org>
---
 fs/ocfs2/quota_global.c |    4 ++
 fs/ocfs2/quota_local.c  |    4 ++
 fs/quota/dquot.c        |  119 ++++++++++++++++++++++++++++++++++++++++++-----
 fs/quota/quota_tree.c   |    4 ++
 include/linux/quota.h   |   13 +++++
 5 files changed, 132 insertions(+), 12 deletions(-)

diff --git a/fs/ocfs2/quota_global.c b/fs/ocfs2/quota_global.c
index d65d18a..c005693 100644
--- a/fs/ocfs2/quota_global.c
+++ b/fs/ocfs2/quota_global.c
@@ -504,6 +504,7 @@ int __ocfs2_sync_dquot(struct dquot *dquot, int freeing)
 	 * global quota file so that we don't overwrite any changes there.
 	 * We are */
 	spin_lock(&dqopts(sb)->dq_data_lock);
+	spin_lock(&dquot->dq_lock);
 	spacechange = dquot->dq_dqb.dqb_curspace -
 					OCFS2_DQUOT(dquot)->dq_origspace;
 	inodechange = dquot->dq_dqb.dqb_curinodes -
@@ -557,6 +558,7 @@ int __ocfs2_sync_dquot(struct dquot *dquot, int freeing)
 	__clear_bit(DQ_LASTSET_B + QIF_ITIME_B, &dquot->dq_flags);
 	OCFS2_DQUOT(dquot)->dq_origspace = dquot->dq_dqb.dqb_curspace;
 	OCFS2_DQUOT(dquot)->dq_originodes = dquot->dq_dqb.dqb_curinodes;
+	spin_unlock(&dquot->dq_lock);
 	spin_unlock(&dqopts(sb)->dq_data_lock);
 	err = ocfs2_qinfo_lock(info, freeing);
 	if (err < 0) {
@@ -837,8 +839,10 @@ static int ocfs2_mark_dquot_dirty(struct dquot *dquot)
 	/* In case user set some limits, sync dquot immediately to global
 	 * quota file so that information propagates quicker */
 	spin_lock(&dqopts(sb)->dq_data_lock);
+	spin_lock(&dquot->dq_lock);
 	if (dquot->dq_flags & mask)
 		sync = 1;
+	spin_unlock(&dquot->dq_lock);
 	spin_unlock(&dqopts(sb)->dq_data_lock);
 	/* This is a slight hack but we can't afford getting global quota
 	 * lock if we already have a transaction started. */
diff --git a/fs/ocfs2/quota_local.c b/fs/ocfs2/quota_local.c
index 2d2e981..1490cb0 100644
--- a/fs/ocfs2/quota_local.c
+++ b/fs/ocfs2/quota_local.c
@@ -525,6 +525,7 @@ static int ocfs2_recover_local_quota_file(struct inode *lqinode,
 			}
 			mutex_lock(&dqopts(sb)->dqio_mutex);
 			spin_lock(&dqopts(sb)->dq_data_lock);
+			spin_lock(&dquot->dq_lock);
 			/* Add usage from quota entry into quota changes
 			 * of our node. Auxiliary variables are important
 			 * due to signedness */
@@ -532,6 +533,7 @@ static int ocfs2_recover_local_quota_file(struct inode *lqinode,
 			inodechange = le64_to_cpu(dqblk->dqb_inodemod);
 			dquot->dq_dqb.dqb_curspace += spacechange;
 			dquot->dq_dqb.dqb_curinodes += inodechange;
+			spin_unlock(&dquot->dq_lock);
 			spin_unlock(&dqopts(sb)->dq_data_lock);
 			/* We want to drop reference held by the crashed
 			 * node. Since we have our own reference we know
@@ -878,10 +880,12 @@ static void olq_set_dquot(struct buffer_head *bh, void *private)
 
 	dqblk->dqb_id = cpu_to_le64(od->dq_dquot.dq_id);
 	spin_lock(&dqopts(sb)->dq_data_lock);
+	spin_lock(&od->dq_dquot.dq_lock);
 	dqblk->dqb_spacemod = cpu_to_le64(od->dq_dquot.dq_dqb.dqb_curspace -
 					  od->dq_origspace);
 	dqblk->dqb_inodemod = cpu_to_le64(od->dq_dquot.dq_dqb.dqb_curinodes -
 					  od->dq_originodes);
+	spin_unlock(&od->dq_dquot.dq_lock);
 	spin_unlock(&dqopts(sb)->dq_data_lock);
 	mlog(0, "Writing local dquot %u space %lld inodes %lld\n",
 	     od->dq_dquot.dq_id, (long long)le64_to_cpu(dqblk->dqb_spacemod),
diff --git a/fs/quota/dquot.c b/fs/quota/dquot.c
index 0dcf61e..5898b46 100644
--- a/fs/quota/dquot.c
+++ b/fs/quota/dquot.c
@@ -82,14 +82,17 @@
 
 /*
  * There are three quota SMP locks. dq_list_lock protects all lists with quotas
- * dq_data_lock protects data from dq_dqb and also mem_dqinfo structures and
- * also guards consistency of dquot->dq_dqb with inode->i_blocks, i_bytes.
+ * dq_data_lock protects mem_dqinfo structures and consistency of
+ * mem_dqinfo with dq_dqb.
+ * dq_lock protects dquot->dq_dqb and also guards consistency of
+ * dquot->dq_dqb with inode->i_blocks, i_bytes.
  * i_blocks and i_bytes updates itself are guarded by i_lock acquired directly
  * in inode_add_bytes() and inode_sub_bytes().
  *
- * The spinlock ordering is hence: dq_data_lock > dq_list_lock > i_lock,
+ * The spinlock ordering is hence:
+ * dq_data_lock > dq_lock > dq_list_lock > i_lock,
  * dq_list_lock > hlist_bl_head
-
+ *
  * Note that some things (eg. sb pointer, type, id) doesn't change during
  * the life of the dquot structure and so needn't to be protected by a lock
  *
@@ -144,6 +147,9 @@ EXPORT_SYMBOL(__quota_error);
 
 #if defined(CONFIG_QUOTA_DEBUG) || defined(CONFIG_PRINT_QUOTA_WARNING)
 static char *quotatypes[] = INITQFNAMES;
+#define ASSERT_SPIN_LOCKED(lk) assert_spin_locked(lk)
+#else
+#define ASSERT_SPIN_LOCKED(lk)
 #endif
 static struct quota_format_type *quota_formats;	/* List of registered formats */
 static struct quota_module_name module_names[] = INIT_QUOTA_MODULE_NAMES;
@@ -371,6 +377,56 @@ static inline void dqput_all(struct dquot **dquot)
 		dqput(dquot[cnt]);
 }
 
+static void __lock_dquot_double(struct dquot * const dq1,
+				struct dquot * const dq2)
+{
+	if(dq1 < dq2) {
+		spin_lock_nested(&dq1->dq_lock, DQUOT_LOCK_CLASS(dq1));
+		spin_lock_nested(&dq2->dq_lock, DQUOT_LOCK_CLASS_NESTED(dq2));
+	} else {
+		spin_lock_nested(&dq2->dq_lock, DQUOT_LOCK_CLASS(dq2));
+		spin_lock_nested(&dq1->dq_lock,	DQUOT_LOCK_CLASS_NESTED(dq1));
+	}
+}
+
+/* This looks odd, but any combination of NULL/non-NULL dquots is possible */
+static inline void lock_dquot_double(struct dquot * const *dq1,
+				struct dquot * const *dq2)
+{
+	unsigned int cnt;
+
+	for (cnt = 0; cnt < MAXQUOTAS; cnt++) {
+		if (dq1[cnt]) {
+			if (likely(dq2[cnt]))
+				__lock_dquot_double(dq1[cnt], dq2[cnt]);
+			else
+				spin_lock_nested(&dq1[cnt]->dq_lock,
+						DQUOT_LOCK_CLASS(dq1[cnt]));
+		} else {
+			if (unlikely(dq2[cnt]))
+				spin_lock_nested(&dq2[cnt]->dq_lock,
+					DQUOT_LOCK_CLASS(dq2[cnt]));
+		}
+	}
+}
+static inline void lock_inode_dquots(struct dquot * const *dquot)
+{
+	unsigned int cnt;
+	for (cnt = 0; cnt < MAXQUOTAS; cnt++)
+		if (dquot[cnt])
+			spin_lock_nested(&dquot[cnt]->dq_lock,
+					DQUOT_LOCK_CLASS(dquot[cnt]));
+}
+
+static inline void unlock_inode_dquots(struct dquot * const *dquot)
+{
+	unsigned int cnt;
+
+	for (cnt = 0; cnt < MAXQUOTAS; cnt++)
+		if (dquot[cnt])
+			spin_unlock(&dquot[cnt]->dq_lock);
+}
+
 /* This function needs dq_list_lock */
 static inline int clear_dquot_dirty(struct dquot *dquot)
 {
@@ -805,6 +861,7 @@ static struct dquot *get_empty_dquot(struct super_block *sb, int type)
 	INIT_LIST_HEAD(&dquot->dq_dirty);
 	INIT_HLIST_BL_NODE(&dquot->dq_hash);
 	init_waitqueue_head(&dquot->dq_wait_unused);
+	spin_lock_init(&dquot->dq_lock);
 	dquot->dq_sb = sb;
 	dquot->dq_type = type;
 	atomic_set(&dquot->dq_count, 1);
@@ -1062,16 +1119,19 @@ static void drop_dquot_ref(struct super_block *sb, int type)
 
 static inline void dquot_incr_inodes(struct dquot *dquot, qsize_t number)
 {
+	ASSERT_SPIN_LOCKED(&dquot->dq_lock);
 	dquot->dq_dqb.dqb_curinodes += number;
 }
 
 static inline void dquot_incr_space(struct dquot *dquot, qsize_t number)
 {
+	ASSERT_SPIN_LOCKED(&dquot->dq_lock);
 	dquot->dq_dqb.dqb_curspace += number;
 }
 
 static inline void dquot_resv_space(struct dquot *dquot, qsize_t number)
 {
+	ASSERT_SPIN_LOCKED(&dquot->dq_lock);
 	dquot->dq_dqb.dqb_rsvspace += number;
 }
 
@@ -1080,6 +1140,7 @@ static inline void dquot_resv_space(struct dquot *dquot, qsize_t number)
  */
 static void dquot_claim_reserved_space(struct dquot *dquot, qsize_t number)
 {
+	ASSERT_SPIN_LOCKED(&dquot->dq_lock);
 	if (dquot->dq_dqb.dqb_rsvspace < number) {
 		WARN_ON_ONCE(1);
 		number = dquot->dq_dqb.dqb_rsvspace;
@@ -1091,6 +1152,7 @@ static void dquot_claim_reserved_space(struct dquot *dquot, qsize_t number)
 static inline
 void dquot_free_reserved_space(struct dquot *dquot, qsize_t number)
 {
+	ASSERT_SPIN_LOCKED(&dquot->dq_lock);
 	if (dquot->dq_dqb.dqb_rsvspace >= number)
 		dquot->dq_dqb.dqb_rsvspace -= number;
 	else {
@@ -1101,6 +1163,7 @@ void dquot_free_reserved_space(struct dquot *dquot, qsize_t number)
 
 static void dquot_decr_inodes(struct dquot *dquot, qsize_t number)
 {
+	ASSERT_SPIN_LOCKED(&dquot->dq_lock);
 	if (dqctl(dquot->dq_sb)->flags & DQUOT_NEGATIVE_USAGE ||
 	    dquot->dq_dqb.dqb_curinodes >= number)
 		dquot->dq_dqb.dqb_curinodes -= number;
@@ -1113,6 +1176,7 @@ static void dquot_decr_inodes(struct dquot *dquot, qsize_t number)
 
 static void dquot_decr_space(struct dquot *dquot, qsize_t number)
 {
+	ASSERT_SPIN_LOCKED(&dquot->dq_lock);
 	if (dqctl(dquot->dq_sb)->flags & DQUOT_NEGATIVE_USAGE ||
 	    dquot->dq_dqb.dqb_curspace >= number)
 		dquot->dq_dqb.dqb_curspace -= number;
@@ -1230,7 +1294,7 @@ static int ignore_hardlimit(struct dquot *dquot)
 		!(info->dqi_flags & V1_DQF_RSQUASH));
 }
 
-/* needs dq_data_lock */
+/* needs dq_data_lock,  ->dq_lock */
 static int check_idq(struct dquot *dquot, qsize_t inodes, char *warntype)
 {
 	qsize_t newinodes = dquot->dq_dqb.dqb_curinodes + inodes;
@@ -1267,7 +1331,7 @@ static int check_idq(struct dquot *dquot, qsize_t inodes, char *warntype)
 	return 0;
 }
 
-/* needs dq_data_lock */
+/* needs dq_data_lock, ->dq_lock */
 static int check_bdq(struct dquot *dquot, qsize_t space, int prealloc, char *warntype)
 {
 	qsize_t tspace;
@@ -1422,8 +1486,11 @@ static void __dquot_initialize(struct inode *inode, int type)
 			 * did a write before quota was turned on
 			 */
 			rsv = inode_get_rsv_space(inode);
-			if (unlikely(rsv))
+			if (unlikely(rsv)) {
+				spin_lock(&inode->i_dquot[cnt]->dq_lock);
 				dquot_resv_space(inode->i_dquot[cnt], rsv);
+				spin_unlock(&inode->i_dquot[cnt]->dq_lock);
+			}
 		}
 	}
 	spin_unlock(&dqopts(sb)->dq_data_lock);
@@ -1603,12 +1670,14 @@ int __dquot_alloc_space(struct inode *inode, qsize_t number, int flags)
 		warntype[cnt] = QUOTA_NL_NOWARN;
 
 	spin_lock(&dqopts(inode->i_sb)->dq_data_lock);
+	lock_inode_dquots(inode->i_dquot);
 	for (cnt = 0; cnt < MAXQUOTAS; cnt++) {
 		if (!inode->i_dquot[cnt])
 			continue;
 		ret = check_bdq(inode->i_dquot[cnt], number, !warn,
-				warntype+cnt);
+				warntype + cnt);
 		if (ret && !nofail) {
+			unlock_inode_dquots(inode->i_dquot);
 			spin_unlock(&dqopts(inode->i_sb)->dq_data_lock);
 			goto out_flush_warn;
 		}
@@ -1622,6 +1691,7 @@ int __dquot_alloc_space(struct inode *inode, qsize_t number, int flags)
 			dquot_incr_space(inode->i_dquot[cnt], number);
 	}
 	inode_incr_space(inode, number, reserve);
+	unlock_inode_dquots(inode->i_dquot);
 	spin_unlock(&dqopts(inode->i_sb)->dq_data_lock);
 
 	if (reserve)
@@ -1658,6 +1728,7 @@ int dquot_alloc_inode(const struct inode *inode)
 		warntype[cnt] = QUOTA_NL_NOWARN;
 	down_read(&dqopts(inode->i_sb)->dqptr_sem);
 	spin_lock(&dqopts(inode->i_sb)->dq_data_lock);
+	lock_inode_dquots(inode->i_dquot);
 	for (cnt = 0; cnt < MAXQUOTAS; cnt++) {
 		if (!inode->i_dquot[cnt])
 			continue;
@@ -1673,6 +1744,7 @@ int dquot_alloc_inode(const struct inode *inode)
 	}
 
 warn_put_all:
+	unlock_inode_dquots(inode->i_dquot);
 	spin_unlock(&dqopts(inode->i_sb)->dq_data_lock);
 	if (ret == 0)
 		mark_all_dquot_dirty(inode->i_dquot);
@@ -1700,6 +1772,7 @@ int dquot_claim_space_nodirty(struct inode *inode, qsize_t number)
 	rcu_read_unlock();
 	down_read(&dqopts(inode->i_sb)->dqptr_sem);
 	spin_lock(&dqopts(inode->i_sb)->dq_data_lock);
+	lock_inode_dquots(inode->i_dquot);
 	/* Claim reserved quotas to allocated quotas */
 	for (cnt = 0; cnt < MAXQUOTAS; cnt++) {
 		if (inode->i_dquot[cnt])
@@ -1708,6 +1781,7 @@ int dquot_claim_space_nodirty(struct inode *inode, qsize_t number)
 	}
 	/* Update inode bytes */
 	inode_claim_rsv_space(inode, number);
+	unlock_inode_dquots(inode->i_dquot);
 	spin_unlock(&dqopts(inode->i_sb)->dq_data_lock);
 	mark_all_dquot_dirty(inode->i_dquot);
 	up_read(&dqopts(inode->i_sb)->dqptr_sem);
@@ -1738,6 +1812,7 @@ void __dquot_free_space(struct inode *inode, qsize_t number, int flags)
 	rcu_read_unlock();
 	down_read(&dqopts(inode->i_sb)->dqptr_sem);
 	spin_lock(&dqopts(inode->i_sb)->dq_data_lock);
+	lock_inode_dquots(inode->i_dquot);
 	for (cnt = 0; cnt < MAXQUOTAS; cnt++) {
 		if (!inode->i_dquot[cnt])
 			continue;
@@ -1748,6 +1823,7 @@ void __dquot_free_space(struct inode *inode, qsize_t number, int flags)
 			dquot_decr_space(inode->i_dquot[cnt], number);
 	}
 	inode_decr_space(inode, number, reserve);
+	unlock_inode_dquots(inode->i_dquot);
 	spin_unlock(&dqopts(inode->i_sb)->dq_data_lock);
 
 	if (reserve)
@@ -1779,12 +1855,14 @@ void dquot_free_inode(const struct inode *inode)
 	rcu_read_unlock();
 	down_read(&dqopts(inode->i_sb)->dqptr_sem);
 	spin_lock(&dqopts(inode->i_sb)->dq_data_lock);
+	lock_inode_dquots(inode->i_dquot);
 	for (cnt = 0; cnt < MAXQUOTAS; cnt++) {
 		if (!inode->i_dquot[cnt])
 			continue;
 		warntype[cnt] = info_idq_free(inode->i_dquot[cnt], 1);
 		dquot_decr_inodes(inode->i_dquot[cnt], 1);
 	}
+	unlock_inode_dquots(inode->i_dquot);
 	spin_unlock(&dqopts(inode->i_sb)->dq_data_lock);
 	mark_all_dquot_dirty(inode->i_dquot);
 	flush_warnings(inode->i_dquot, warntype);
@@ -1833,10 +1911,6 @@ int __dquot_transfer(struct inode *inode, struct dquot **transfer_to)
 		return 0;
 	}
 	spin_lock(&dqopts(inode->i_sb)->dq_data_lock);
-	cur_space = inode_get_bytes(inode);
-	rsv_space = inode_get_rsv_space(inode);
-	space = cur_space + rsv_space;
-	/* Build the transfer_from list and check the limits */
 	for (cnt = 0; cnt < MAXQUOTAS; cnt++) {
 		/*
 		 * Skip changes for same uid or gid or for turned off quota-type.
@@ -1846,8 +1920,21 @@ int __dquot_transfer(struct inode *inode, struct dquot **transfer_to)
 		/* Avoid races with quotaoff() */
 		if (!sb_has_quota_active(inode->i_sb, cnt))
 			continue;
+		/* Quota may be the same due to previous errors when
+		 * chown succeeded but inode still belongs to old dquot */
+		if (transfer_to[cnt] == inode->i_dquot[cnt])
+			continue;
 		is_valid[cnt] = 1;
 		transfer_from[cnt] = inode->i_dquot[cnt];
+	}
+	lock_dquot_double(transfer_from, transfer_to);
+	cur_space = inode_get_bytes(inode);
+	rsv_space = inode_get_rsv_space(inode);
+	space = cur_space + rsv_space;
+	/* Build the transfer_from list and check the limits */
+	for (cnt = 0; cnt < MAXQUOTAS; cnt++) {
+		if (!is_valid[cnt])
+			continue;
 		ret = check_idq(transfer_to[cnt], 1, warntype_to + cnt);
 		if (ret)
 			goto over_quota;
@@ -1880,6 +1967,8 @@ int __dquot_transfer(struct inode *inode, struct dquot **transfer_to)
 
 		inode->i_dquot[cnt] = transfer_to[cnt];
 	}
+	unlock_inode_dquots(transfer_to);
+	unlock_inode_dquots(transfer_from);
 	spin_unlock(&dqopts(inode->i_sb)->dq_data_lock);
 	up_write(&dqopts(inode->i_sb)->dqptr_sem);
 	srcu_read_unlock(&dqopts(inode->i_sb)->dq_srcu, idx);
@@ -1894,6 +1983,8 @@ int __dquot_transfer(struct inode *inode, struct dquot **transfer_to)
 			transfer_to[cnt] = transfer_from[cnt];
 	return 0;
 over_quota:
+	unlock_inode_dquots(transfer_to);
+	unlock_inode_dquots(transfer_from);
 	spin_unlock(&dqopts(inode->i_sb)->dq_data_lock);
 	up_write(&dqopts(inode->i_sb)->dqptr_sem);
 	srcu_read_unlock(&dqopts(inode->i_sb)->dq_srcu, idx);
@@ -2455,6 +2546,7 @@ static void do_get_dqblk(struct dquot *dquot, struct fs_disk_quota *di)
 	di->d_id = dquot->dq_id;
 
 	spin_lock(&sb_dqopts(dquot)->dq_data_lock);
+	spin_lock(&dquot->dq_lock);
 	di->d_blk_hardlimit = stoqb(dm->dqb_bhardlimit);
 	di->d_blk_softlimit = stoqb(dm->dqb_bsoftlimit);
 	di->d_ino_hardlimit = dm->dqb_ihardlimit;
@@ -2463,6 +2555,7 @@ static void do_get_dqblk(struct dquot *dquot, struct fs_disk_quota *di)
 	di->d_icount = dm->dqb_curinodes;
 	di->d_btimer = dm->dqb_btime;
 	di->d_itimer = dm->dqb_itime;
+	spin_unlock(&dquot->dq_lock);
 	spin_unlock(&sb_dqopts(dquot)->dq_data_lock);
 }
 
@@ -2507,6 +2600,7 @@ static int do_set_dqblk(struct dquot *dquot, struct fs_disk_quota *di)
 		return -ERANGE;
 
 	spin_lock(&sb_dqopts(dquot)->dq_data_lock);
+	spin_lock(&dquot->dq_lock);
 	if (di->d_fieldmask & FS_DQ_BCOUNT) {
 		dm->dqb_curspace = di->d_bcount - dm->dqb_rsvspace;
 		check_blim = 1;
@@ -2572,6 +2666,7 @@ static int do_set_dqblk(struct dquot *dquot, struct fs_disk_quota *di)
 		clear_bit(DQ_FAKE_B, &dquot->dq_flags);
 	else
 		set_bit(DQ_FAKE_B, &dquot->dq_flags);
+	spin_unlock(&dquot->dq_lock);
 	spin_unlock(&sb_dqopts(dquot)->dq_data_lock);
 	mark_dquot_dirty(dquot);
 
diff --git a/fs/quota/quota_tree.c b/fs/quota/quota_tree.c
index a089c70..e6307f6 100644
--- a/fs/quota/quota_tree.c
+++ b/fs/quota/quota_tree.c
@@ -376,7 +376,9 @@ int qtree_write_dquot(struct qtree_mem_dqinfo *info, struct dquot *dquot)
 		}
 	}
 	spin_lock(&dqopts(sb)->dq_data_lock);
+	spin_lock(&dquot->dq_lock);
 	info->dqi_ops->mem2disk_dqblk(ddquot, dquot);
+	spin_unlock(&dquot->dq_lock);
 	spin_unlock(&dqopts(sb)->dq_data_lock);
 	ret = sb->s_op->quota_write(sb, type, ddquot, info->dqi_entry_size,
 				    dquot->dq_off);
@@ -632,12 +634,14 @@ int qtree_read_dquot(struct qtree_mem_dqinfo *info, struct dquot *dquot)
 		goto out;
 	}
 	spin_lock(&dqopts(sb)->dq_data_lock);
+	spin_lock(&dquot->dq_lock);
 	info->dqi_ops->disk2mem_dqblk(dquot, ddquot);
 	if (!dquot->dq_dqb.dqb_bhardlimit &&
 	    !dquot->dq_dqb.dqb_bsoftlimit &&
 	    !dquot->dq_dqb.dqb_ihardlimit &&
 	    !dquot->dq_dqb.dqb_isoftlimit)
 		set_bit(DQ_FAKE_B, &dquot->dq_flags);
+	spin_unlock(&dquot->dq_lock);
 	spin_unlock(&dqopts(sb)->dq_data_lock);
 	kfree(ddquot);
 out:
diff --git a/include/linux/quota.h b/include/linux/quota.h
index 7693b18..c88a352 100644
--- a/include/linux/quota.h
+++ b/include/linux/quota.h
@@ -281,6 +281,18 @@ static inline void dqstats_dec(unsigned int type)
 				 * quotactl. They are set under dq_data_lock\
 				 * and the quota format handling dquot can\
 				 * clear them when it sees fit. */
+/*
+ * To make lockdep happy we have to place different dquot types in
+ * different lock classes.
+ */
+enum dquot_lock_class
+{
+	DQUOT_LOCK_NORMAL, /* implicitly used by plain spin_lock() APIs. */
+	DQUOT_LOCK_NESTED
+};
+#define DQUOT_LOCK_CLASS(dquot) (DQUOT_LOCK_NORMAL + (dquot)->dq_type * 2)
+#define DQUOT_LOCK_CLASS_NESTED(dquot) (DQUOT_LOCK_NESTED + \
+						(dquot)->dq_type * 2)
 
 struct dquot {
 	struct hlist_bl_node dq_hash;	/* Hash list in memory */
@@ -296,6 +308,7 @@ struct dquot {
 	unsigned long dq_flags;		/* See DQ_* */
 	short dq_type;			/* Type of quota */
 	struct mem_dqblk dq_dqb;	/* Diskquota usage */
+	spinlock_t dq_lock;		/* protects in-memory dqblk */
 };
 
 /* Operations which must be implemented by each quota format */
-- 
1.6.5.2



* [PATCH 15/19] quota: drop dq_data_lock where possible
  2010-11-11 12:14 [PATCH 00/19] quota: RFC SMP improvements for generic quota V3 Dmitry Monakhov
                   ` (13 preceding siblings ...)
  2010-11-11 12:14 ` [PATCH 14/19] quota: protect dquot mem info with object's lock Dmitry Monakhov
@ 2010-11-11 12:14 ` Dmitry Monakhov
  2010-11-11 12:14 ` [PATCH 16/19] quota: relax dq_data_lock dq_lock locking consistency Dmitry Monakhov
                   ` (4 subsequent siblings)
  19 siblings, 0 replies; 32+ messages in thread
From: Dmitry Monakhov @ 2010-11-11 12:14 UTC (permalink / raw)
  To: linux-fsdevel; +Cc: jack, hch, Dmitry Monakhov

dq_data_lock is no longer responsible for protecting dquot data (the
per-dquot dq_lock is), so drop it where only per-dquot fields are accessed.

Signed-off-by: Dmitry Monakhov <dmonakhov@openvz.org>
---
 fs/ocfs2/quota_global.c |    4 ----
 fs/ocfs2/quota_local.c  |    4 ----
 fs/quota/dquot.c        |    5 +----
 fs/quota/quota_tree.c   |    4 ----
 4 files changed, 1 insertions(+), 16 deletions(-)

diff --git a/fs/ocfs2/quota_global.c b/fs/ocfs2/quota_global.c
index c005693..3d2841c 100644
--- a/fs/ocfs2/quota_global.c
+++ b/fs/ocfs2/quota_global.c
@@ -503,7 +503,6 @@ int __ocfs2_sync_dquot(struct dquot *dquot, int freeing)
 	/* Update space and inode usage. Get also other information from
 	 * global quota file so that we don't overwrite any changes there.
 	 * We are */
-	spin_lock(&dqopts(sb)->dq_data_lock);
 	spin_lock(&dquot->dq_lock);
 	spacechange = dquot->dq_dqb.dqb_curspace -
 					OCFS2_DQUOT(dquot)->dq_origspace;
@@ -559,7 +558,6 @@ int __ocfs2_sync_dquot(struct dquot *dquot, int freeing)
 	OCFS2_DQUOT(dquot)->dq_origspace = dquot->dq_dqb.dqb_curspace;
 	OCFS2_DQUOT(dquot)->dq_originodes = dquot->dq_dqb.dqb_curinodes;
 	spin_unlock(&dquot->dq_lock);
-	spin_unlock(&dqopts(sb)->dq_data_lock);
 	err = ocfs2_qinfo_lock(info, freeing);
 	if (err < 0) {
 		mlog(ML_ERROR, "Failed to lock quota info, loosing quota write"
@@ -838,12 +836,10 @@ static int ocfs2_mark_dquot_dirty(struct dquot *dquot)
 
 	/* In case user set some limits, sync dquot immediately to global
 	 * quota file so that information propagates quicker */
-	spin_lock(&dqopts(sb)->dq_data_lock);
 	spin_lock(&dquot->dq_lock);
 	if (dquot->dq_flags & mask)
 		sync = 1;
 	spin_unlock(&dquot->dq_lock);
-	spin_unlock(&dqopts(sb)->dq_data_lock);
 	/* This is a slight hack but we can't afford getting global quota
 	 * lock if we already have a transaction started. */
 	if (!sync || journal_current_handle()) {
diff --git a/fs/ocfs2/quota_local.c b/fs/ocfs2/quota_local.c
index 1490cb0..9e68ce5 100644
--- a/fs/ocfs2/quota_local.c
+++ b/fs/ocfs2/quota_local.c
@@ -524,7 +524,6 @@ static int ocfs2_recover_local_quota_file(struct inode *lqinode,
 				goto out_drop_lock;
 			}
 			mutex_lock(&dqopts(sb)->dqio_mutex);
-			spin_lock(&dqopts(sb)->dq_data_lock);
 			spin_lock(&dquot->dq_lock);
 			/* Add usage from quota entry into quota changes
 			 * of our node. Auxiliary variables are important
@@ -534,7 +533,6 @@ static int ocfs2_recover_local_quota_file(struct inode *lqinode,
 			dquot->dq_dqb.dqb_curspace += spacechange;
 			dquot->dq_dqb.dqb_curinodes += inodechange;
 			spin_unlock(&dquot->dq_lock);
-			spin_unlock(&dqopts(sb)->dq_data_lock);
 			/* We want to drop reference held by the crashed
 			 * node. Since we have our own reference we know
 			 * global structure actually won't be freed. */
@@ -879,14 +877,12 @@ static void olq_set_dquot(struct buffer_head *bh, void *private)
 		+ ol_dqblk_block_offset(sb, od->dq_local_off));
 
 	dqblk->dqb_id = cpu_to_le64(od->dq_dquot.dq_id);
-	spin_lock(&dqopts(sb)->dq_data_lock);
 	spin_lock(&od->dq_dquot.dq_lock);
 	dqblk->dqb_spacemod = cpu_to_le64(od->dq_dquot.dq_dqb.dqb_curspace -
 					  od->dq_origspace);
 	dqblk->dqb_inodemod = cpu_to_le64(od->dq_dquot.dq_dqb.dqb_curinodes -
 					  od->dq_originodes);
 	spin_unlock(&od->dq_dquot.dq_lock);
-	spin_unlock(&dqopts(sb)->dq_data_lock);
 	mlog(0, "Writing local dquot %u space %lld inodes %lld\n",
 	     od->dq_dquot.dq_id, (long long)le64_to_cpu(dqblk->dqb_spacemod),
 	     (long long)le64_to_cpu(dqblk->dqb_inodemod));
diff --git a/fs/quota/dquot.c b/fs/quota/dquot.c
index 5898b46..794c486 100644
--- a/fs/quota/dquot.c
+++ b/fs/quota/dquot.c
@@ -1468,7 +1468,7 @@ static void __dquot_initialize(struct inode *inode, int type)
 	down_write(&dqopts(sb)->dqptr_sem);
 	if (IS_NOQUOTA(inode))
 		goto out_err;
-	spin_lock(&dqopts(sb)->dq_data_lock);
+
 	for (cnt = 0; cnt < MAXQUOTAS; cnt++) {
 		if (type != -1 && cnt != type)
 			continue;
@@ -1493,7 +1493,6 @@ static void __dquot_initialize(struct inode *inode, int type)
 			}
 		}
 	}
-	spin_unlock(&dqopts(sb)->dq_data_lock);
 out_err:
 	up_write(&dqopts(sb)->dqptr_sem);
 	srcu_read_unlock(&dqopts(sb)->dq_srcu, idx);
@@ -2545,7 +2544,6 @@ static void do_get_dqblk(struct dquot *dquot, struct fs_disk_quota *di)
 			FS_USER_QUOTA : FS_GROUP_QUOTA;
 	di->d_id = dquot->dq_id;
 
-	spin_lock(&sb_dqopts(dquot)->dq_data_lock);
 	spin_lock(&dquot->dq_lock);
 	di->d_blk_hardlimit = stoqb(dm->dqb_bhardlimit);
 	di->d_blk_softlimit = stoqb(dm->dqb_bsoftlimit);
@@ -2556,7 +2554,6 @@ static void do_get_dqblk(struct dquot *dquot, struct fs_disk_quota *di)
 	di->d_btimer = dm->dqb_btime;
 	di->d_itimer = dm->dqb_itime;
 	spin_unlock(&dquot->dq_lock);
-	spin_unlock(&sb_dqopts(dquot)->dq_data_lock);
 }
 
 int dquot_get_dqblk(struct super_block *sb, int type, qid_t id,
diff --git a/fs/quota/quota_tree.c b/fs/quota/quota_tree.c
index e6307f6..3af6d89 100644
--- a/fs/quota/quota_tree.c
+++ b/fs/quota/quota_tree.c
@@ -375,11 +375,9 @@ int qtree_write_dquot(struct qtree_mem_dqinfo *info, struct dquot *dquot)
 			return ret;
 		}
 	}
-	spin_lock(&dqopts(sb)->dq_data_lock);
 	spin_lock(&dquot->dq_lock);
 	info->dqi_ops->mem2disk_dqblk(ddquot, dquot);
 	spin_unlock(&dquot->dq_lock);
-	spin_unlock(&dqopts(sb)->dq_data_lock);
 	ret = sb->s_op->quota_write(sb, type, ddquot, info->dqi_entry_size,
 				    dquot->dq_off);
 	if (ret != info->dqi_entry_size) {
@@ -633,7 +631,6 @@ int qtree_read_dquot(struct qtree_mem_dqinfo *info, struct dquot *dquot)
 		kfree(ddquot);
 		goto out;
 	}
-	spin_lock(&dqopts(sb)->dq_data_lock);
 	spin_lock(&dquot->dq_lock);
 	info->dqi_ops->disk2mem_dqblk(dquot, ddquot);
 	if (!dquot->dq_dqb.dqb_bhardlimit &&
@@ -642,7 +639,6 @@ int qtree_read_dquot(struct qtree_mem_dqinfo *info, struct dquot *dquot)
 	    !dquot->dq_dqb.dqb_isoftlimit)
 		set_bit(DQ_FAKE_B, &dquot->dq_flags);
 	spin_unlock(&dquot->dq_lock);
-	spin_unlock(&dqopts(sb)->dq_data_lock);
 	kfree(ddquot);
 out:
 	dqstats_inc(DQST_READS);
-- 
1.6.5.2



* [PATCH 16/19] quota: relax dq_data_lock dq_lock locking consistency
  2010-11-11 12:14 [PATCH 00/19] quota: RFC SMP improvements for generic quota V3 Dmitry Monakhov
                   ` (14 preceding siblings ...)
  2010-11-11 12:14 ` [PATCH 15/19] quota: drop dq_data_lock where possible Dmitry Monakhov
@ 2010-11-11 12:14 ` Dmitry Monakhov
  2010-11-11 12:14 ` [PATCH 17/19] quota: Some stylistic cleanup for dquot interface Dmitry Monakhov
                   ` (3 subsequent siblings)
  19 siblings, 0 replies; 32+ messages in thread
From: Dmitry Monakhov @ 2010-11-11 12:14 UTC (permalink / raw)
  To: linux-fsdevel; +Cc: jack, hch, Dmitry Monakhov

Consistency between mem_info and dq_dqb is weak anyway: we just copy
data from dqi_{bi}grace to dqb_{bi}time, and until now dq_data_lock has
protected dqb_{bi}time from races with the quota_ctl call. Nothing
actually breaks if we relax this consistency requirement. Since
dqi_{bi}grace is an int, it can be read atomically without a lock.

Signed-off-by: Dmitry Monakhov <dmonakhov@openvz.org>
---
 fs/quota/dquot.c      |   16 ----------------
 include/linux/quota.h |    2 ++
 2 files changed, 2 insertions(+), 16 deletions(-)

diff --git a/fs/quota/dquot.c b/fs/quota/dquot.c
index 794c486..33dc32e 100644
--- a/fs/quota/dquot.c
+++ b/fs/quota/dquot.c
@@ -1668,7 +1668,6 @@ int __dquot_alloc_space(struct inode *inode, qsize_t number, int flags)
 	for (cnt = 0; cnt < MAXQUOTAS; cnt++)
 		warntype[cnt] = QUOTA_NL_NOWARN;
 
-	spin_lock(&dqopts(inode->i_sb)->dq_data_lock);
 	lock_inode_dquots(inode->i_dquot);
 	for (cnt = 0; cnt < MAXQUOTAS; cnt++) {
 		if (!inode->i_dquot[cnt])
@@ -1677,7 +1676,6 @@ int __dquot_alloc_space(struct inode *inode, qsize_t number, int flags)
 				warntype + cnt);
 		if (ret && !nofail) {
 			unlock_inode_dquots(inode->i_dquot);
-			spin_unlock(&dqopts(inode->i_sb)->dq_data_lock);
 			goto out_flush_warn;
 		}
 	}
@@ -1691,7 +1689,6 @@ int __dquot_alloc_space(struct inode *inode, qsize_t number, int flags)
 	}
 	inode_incr_space(inode, number, reserve);
 	unlock_inode_dquots(inode->i_dquot);
-	spin_unlock(&dqopts(inode->i_sb)->dq_data_lock);
 
 	if (reserve)
 		goto out_flush_warn;
@@ -1726,7 +1723,6 @@ int dquot_alloc_inode(const struct inode *inode)
 	for (cnt = 0; cnt < MAXQUOTAS; cnt++)
 		warntype[cnt] = QUOTA_NL_NOWARN;
 	down_read(&dqopts(inode->i_sb)->dqptr_sem);
-	spin_lock(&dqopts(inode->i_sb)->dq_data_lock);
 	lock_inode_dquots(inode->i_dquot);
 	for (cnt = 0; cnt < MAXQUOTAS; cnt++) {
 		if (!inode->i_dquot[cnt])
@@ -1744,7 +1740,6 @@ int dquot_alloc_inode(const struct inode *inode)
 
 warn_put_all:
 	unlock_inode_dquots(inode->i_dquot);
-	spin_unlock(&dqopts(inode->i_sb)->dq_data_lock);
 	if (ret == 0)
 		mark_all_dquot_dirty(inode->i_dquot);
 	flush_warnings(inode->i_dquot, warntype);
@@ -1770,7 +1765,6 @@ int dquot_claim_space_nodirty(struct inode *inode, qsize_t number)
 	idx = srcu_read_lock(&dqopts(inode->i_sb)->dq_srcu);
 	rcu_read_unlock();
 	down_read(&dqopts(inode->i_sb)->dqptr_sem);
-	spin_lock(&dqopts(inode->i_sb)->dq_data_lock);
 	lock_inode_dquots(inode->i_dquot);
 	/* Claim reserved quotas to allocated quotas */
 	for (cnt = 0; cnt < MAXQUOTAS; cnt++) {
@@ -1781,7 +1775,6 @@ int dquot_claim_space_nodirty(struct inode *inode, qsize_t number)
 	/* Update inode bytes */
 	inode_claim_rsv_space(inode, number);
 	unlock_inode_dquots(inode->i_dquot);
-	spin_unlock(&dqopts(inode->i_sb)->dq_data_lock);
 	mark_all_dquot_dirty(inode->i_dquot);
 	up_read(&dqopts(inode->i_sb)->dqptr_sem);
 	srcu_read_unlock(&dqopts(inode->i_sb)->dq_srcu, idx);
@@ -1810,7 +1803,6 @@ void __dquot_free_space(struct inode *inode, qsize_t number, int flags)
 	idx = srcu_read_lock(&dqopts(inode->i_sb)->dq_srcu);
 	rcu_read_unlock();
 	down_read(&dqopts(inode->i_sb)->dqptr_sem);
-	spin_lock(&dqopts(inode->i_sb)->dq_data_lock);
 	lock_inode_dquots(inode->i_dquot);
 	for (cnt = 0; cnt < MAXQUOTAS; cnt++) {
 		if (!inode->i_dquot[cnt])
@@ -1823,7 +1815,6 @@ void __dquot_free_space(struct inode *inode, qsize_t number, int flags)
 	}
 	inode_decr_space(inode, number, reserve);
 	unlock_inode_dquots(inode->i_dquot);
-	spin_unlock(&dqopts(inode->i_sb)->dq_data_lock);
 
 	if (reserve)
 		goto out_unlock;
@@ -1853,7 +1844,6 @@ void dquot_free_inode(const struct inode *inode)
 	idx = srcu_read_lock(&dqopts(inode->i_sb)->dq_srcu);
 	rcu_read_unlock();
 	down_read(&dqopts(inode->i_sb)->dqptr_sem);
-	spin_lock(&dqopts(inode->i_sb)->dq_data_lock);
 	lock_inode_dquots(inode->i_dquot);
 	for (cnt = 0; cnt < MAXQUOTAS; cnt++) {
 		if (!inode->i_dquot[cnt])
@@ -1862,7 +1852,6 @@ void dquot_free_inode(const struct inode *inode)
 		dquot_decr_inodes(inode->i_dquot[cnt], 1);
 	}
 	unlock_inode_dquots(inode->i_dquot);
-	spin_unlock(&dqopts(inode->i_sb)->dq_data_lock);
 	mark_all_dquot_dirty(inode->i_dquot);
 	flush_warnings(inode->i_dquot, warntype);
 	up_read(&dqopts(inode->i_sb)->dqptr_sem);
@@ -1909,7 +1898,6 @@ int __dquot_transfer(struct inode *inode, struct dquot **transfer_to)
 		srcu_read_unlock(&dqopts(inode->i_sb)->dq_srcu, idx);
 		return 0;
 	}
-	spin_lock(&dqopts(inode->i_sb)->dq_data_lock);
 	for (cnt = 0; cnt < MAXQUOTAS; cnt++) {
 		/*
 		 * Skip changes for same uid or gid or for turned off quota-type.
@@ -1968,7 +1956,6 @@ int __dquot_transfer(struct inode *inode, struct dquot **transfer_to)
 	}
 	unlock_inode_dquots(transfer_to);
 	unlock_inode_dquots(transfer_from);
-	spin_unlock(&dqopts(inode->i_sb)->dq_data_lock);
 	up_write(&dqopts(inode->i_sb)->dqptr_sem);
 	srcu_read_unlock(&dqopts(inode->i_sb)->dq_srcu, idx);
 	mark_all_dquot_dirty(transfer_from);
@@ -1984,7 +1971,6 @@ int __dquot_transfer(struct inode *inode, struct dquot **transfer_to)
 over_quota:
 	unlock_inode_dquots(transfer_to);
 	unlock_inode_dquots(transfer_from);
-	spin_unlock(&dqopts(inode->i_sb)->dq_data_lock);
 	up_write(&dqopts(inode->i_sb)->dqptr_sem);
 	srcu_read_unlock(&dqopts(inode->i_sb)->dq_srcu, idx);
 	flush_warnings(transfer_to, warntype_to);
@@ -2596,7 +2582,6 @@ static int do_set_dqblk(struct dquot *dquot, struct fs_disk_quota *di)
 	     (di->d_ino_hardlimit > dqi->dqi_maxilimit)))
 		return -ERANGE;
 
-	spin_lock(&sb_dqopts(dquot)->dq_data_lock);
 	spin_lock(&dquot->dq_lock);
 	if (di->d_fieldmask & FS_DQ_BCOUNT) {
 		dm->dqb_curspace = di->d_bcount - dm->dqb_rsvspace;
@@ -2664,7 +2649,6 @@ static int do_set_dqblk(struct dquot *dquot, struct fs_disk_quota *di)
 	else
 		set_bit(DQ_FAKE_B, &dquot->dq_flags);
 	spin_unlock(&dquot->dq_lock);
-	spin_unlock(&sb_dqopts(dquot)->dq_data_lock);
 	mark_dquot_dirty(dquot);
 
 	return 0;
diff --git a/include/linux/quota.h b/include/linux/quota.h
index c88a352..949347a 100644
--- a/include/linux/quota.h
+++ b/include/linux/quota.h
@@ -221,6 +221,8 @@ struct mem_dqinfo {
 				 * quotas on after remount RW */
 	struct list_head dqi_dirty_list;	/* List of dirty dquots */
 	unsigned long dqi_flags;
+	/* Readers are allowed to read the following two variables
+	   without ->dq_data_lock held */
 	unsigned int dqi_bgrace;
 	unsigned int dqi_igrace;
 	qsize_t dqi_maxblimit;
-- 
1.6.5.2



* [PATCH 17/19] quota: Some stylistic cleanup for dquot interface
  2010-11-11 12:14 [PATCH 00/19] quota: RFC SMP improvements for generic quota V3 Dmitry Monakhov
                   ` (15 preceding siblings ...)
  2010-11-11 12:14 ` [PATCH 16/19] quota: relax dq_data_lock dq_lock locking consistency Dmitry Monakhov
@ 2010-11-11 12:14 ` Dmitry Monakhov
  2010-11-23 11:37   ` Jan Kara
  2010-11-11 12:14 ` [PATCH 18/19] fs: add unlocked helpers Dmitry Monakhov
                   ` (2 subsequent siblings)
  19 siblings, 1 reply; 32+ messages in thread
From: Dmitry Monakhov @ 2010-11-11 12:14 UTC (permalink / raw)
  To: linux-fsdevel; +Cc: jack, hch, Dmitry Monakhov

This patch performs only stylistic cleanup; there are no changes in logic.
- Rename dqget() to find_get_dquot()
- Wrap the direct dq_count increment in a helper function

Some places still access dq_count directly, but that is required by the
reference counting algorithm. It will be changed in later patches.

Signed-off-by: Dmitry Monakhov <dmonakhov@openvz.org>
---
 fs/ocfs2/file.c          |    8 ++++----
 fs/ocfs2/quota_global.c  |    2 +-
 fs/ocfs2/quota_local.c   |    3 ++-
 fs/quota/dquot.c         |   42 ++++++++++++++++++++++++++----------------
 include/linux/quotaops.h |    3 ++-
 5 files changed, 35 insertions(+), 23 deletions(-)

diff --git a/fs/ocfs2/file.c b/fs/ocfs2/file.c
index 9a03c15..b7e7c9b 100644
--- a/fs/ocfs2/file.c
+++ b/fs/ocfs2/file.c
@@ -1205,8 +1205,8 @@ int ocfs2_setattr(struct dentry *dentry, struct iattr *attr)
 		if (attr->ia_valid & ATTR_UID && attr->ia_uid != inode->i_uid
 		    && OCFS2_HAS_RO_COMPAT_FEATURE(sb,
 		    OCFS2_FEATURE_RO_COMPAT_USRQUOTA)) {
-			transfer_to[USRQUOTA] = dqget(sb, attr->ia_uid,
-						      USRQUOTA);
+			transfer_to[USRQUOTA] =
+				find_get_dquot(sb, attr->ia_uid, USRQUOTA);
 			if (!transfer_to[USRQUOTA]) {
 				status = -ESRCH;
 				goto bail_unlock;
@@ -1215,8 +1215,8 @@ int ocfs2_setattr(struct dentry *dentry, struct iattr *attr)
 		if (attr->ia_valid & ATTR_GID && attr->ia_gid != inode->i_gid
 		    && OCFS2_HAS_RO_COMPAT_FEATURE(sb,
 		    OCFS2_FEATURE_RO_COMPAT_GRPQUOTA)) {
-			transfer_to[GRPQUOTA] = dqget(sb, attr->ia_gid,
-						      GRPQUOTA);
+			transfer_to[GRPQUOTA] =
+				find_get_dquot(sb, attr->ia_gid, GRPQUOTA);
 			if (!transfer_to[GRPQUOTA]) {
 				status = -ESRCH;
 				goto bail_unlock;
diff --git a/fs/ocfs2/quota_global.c b/fs/ocfs2/quota_global.c
index 3d2841c..cdf2a23 100644
--- a/fs/ocfs2/quota_global.c
+++ b/fs/ocfs2/quota_global.c
@@ -692,7 +692,7 @@ static int ocfs2_release_dquot(struct dquot *dquot)
 	mlog_entry("id=%u, type=%d", dquot->dq_id, dquot->dq_type);
 
 	mutex_lock(&dquot->dq_mutex);
-	/* Check whether we are not racing with some other dqget() */
+	/* Check whether we are not racing with some other find_get_dquot() */
 	if (atomic_read(&dquot->dq_count) > 1)
 		goto out;
 	status = ocfs2_lock_global_qf(oinfo, 1);
diff --git a/fs/ocfs2/quota_local.c b/fs/ocfs2/quota_local.c
index 9e68ce5..6e5c7e9 100644
--- a/fs/ocfs2/quota_local.c
+++ b/fs/ocfs2/quota_local.c
@@ -500,7 +500,8 @@ static int ocfs2_recover_local_quota_file(struct inode *lqinode,
 			}
 			dqblk = (struct ocfs2_local_disk_dqblk *)(qbh->b_data +
 				ol_dqblk_block_off(sb, chunk, bit));
-			dquot = dqget(sb, le64_to_cpu(dqblk->dqb_id), type);
+			dquot = find_get_dquot(sb, le64_to_cpu(dqblk->dqb_id),
+					type);
 			if (!dquot) {
 				status = -EIO;
 				mlog(ML_ERROR, "Failed to get quota structure "
diff --git a/fs/quota/dquot.c b/fs/quota/dquot.c
index 33dc32e..af3413e 100644
--- a/fs/quota/dquot.c
+++ b/fs/quota/dquot.c
@@ -16,7 +16,8 @@
  *		Revised list management to avoid races
  *		-- Bill Hawes, <whawes@star.net>, 9/98
  *
- *		Fixed races in dquot_transfer(), dqget() and dquot_alloc_...().
+ *		Fixed races in dquot_transfer(), find_get_dquot() and
+ *		dquot_alloc_...().
  *		As the consequence the locking was moved from dquot_decr_...(),
  *		dquot_incr_...() to calling functions.
  *		invalidate_dquots() now writes modified dquots.
@@ -109,8 +110,9 @@
  * Each dquot has its dq_mutex mutex. Locked dquots might not be referenced
  * from inodes (dquot_alloc_space() and such don't check the dq_mutex).
  * Currently dquot is locked only when it is being read to memory (or space for
- * it is being allocated) on the first dqget() and when it is being released on
- * the last dqput(). The allocation and release oparations are serialized by
+ * it is being allocated) on the first find_get_dquot() and when it is being
+ * released on the last dqput().
+ * The allocation and release oparations are serialized by
  * the dq_mutex and by checking the use count in dquot_release().  Write
  * operations on dquots don't hold dq_mutex as they copy data under dq_data_lock
  * spinlock to internal buffers before writing.
@@ -522,7 +524,7 @@ int dquot_release(struct dquot *dquot)
 	struct quota_info *dqopt = sb_dqopts(dquot);
 
 	mutex_lock(&dquot->dq_mutex);
-	/* Check whether we are not racing with some other dqget() */
+	/* Check whether we are not racing with some other find_get_dquot() */
 	if (atomic_read(&dquot->dq_count) > 1)
 		goto out_dqlock;
 	mutex_lock(&dqopt->dqio_mutex);
@@ -577,7 +579,7 @@ restart:
 		if (atomic_read(&dquot->dq_count)) {
 			DEFINE_WAIT(wait);
 
-			atomic_inc(&dquot->dq_count);
+			dqget(dquot);
 			prepare_to_wait(&dquot->dq_wait_unused, &wait,
 					TASK_UNINTERRUPTIBLE);
 			spin_unlock(&dqopt->dq_list_lock);
@@ -627,7 +629,7 @@ int dquot_scan_active(struct super_block *sb,
 		if (dquot->dq_sb != sb)
 			continue;
 		/* Now we have active dquot so we can just increase use count */
-		atomic_inc(&dquot->dq_count);
+		dqget(dquot);
 		spin_unlock(&dqopt->dq_list_lock);
 		dqstats_inc(DQST_LOOKUPS);
 		dqput(old_dquot);
@@ -674,7 +676,7 @@ int dquot_quota_sync(struct super_block *sb, int type, int wait)
 			/* Now we have active dquot from which someone is
  			 * holding reference so we can safely just increase
 			 * use count */
-			atomic_inc(&dquot->dq_count);
+			dqget(dquot);
 			spin_unlock(&dqopt->dq_list_lock);
 			dqstats_inc(DQST_LOOKUPS);
 			dqctl(sb)->dq_op->write_dquot(dquot);
@@ -869,6 +871,11 @@ static struct dquot *get_empty_dquot(struct super_block *sb, int type)
 	return dquot;
 }
 
+inline void dqget(struct dquot *dquot)
+{
+	atomic_inc(&dquot->dq_count);
+}
+
 /*
  * Get reference to dquot
  *
@@ -877,7 +884,7 @@ static struct dquot *get_empty_dquot(struct super_block *sb, int type)
  *   a) checking for quota flags under dq_list_lock and
  *   b) getting a reference to dquot before we release dq_list_lock
  */
-struct dquot *dqget(struct super_block *sb, unsigned int id, int type)
+struct dquot *find_get_dquot(struct super_block *sb, unsigned int id, int type)
 {
 	struct hlist_bl_head * blh = dquot_hash + hashfn(sb, id, type);
 	struct dquot *dquot = NULL, *empty = NULL;
@@ -922,7 +929,7 @@ we_slept:
 	} else {
 		if (!atomic_read(&dquot->dq_count))
 			remove_free_dquot(dquot);
-		atomic_inc(&dquot->dq_count);
+		dqget(dquot);
 		hlist_bl_unlock(blh);
 		spin_unlock(&dqopt->dq_list_lock);
 		dqstats_inc(DQST_CACHE_HITS);
@@ -948,7 +955,7 @@ out:
 
 	return dquot;
 }
-EXPORT_SYMBOL(dqget);
+EXPORT_SYMBOL(find_get_dquot);
 
 static int dqinit_needed(struct inode *inode, int type)
 {
@@ -1427,7 +1434,7 @@ static int dquot_active(const struct inode *inode)
  * Initialize quota pointers in inode
  *
  * We do things in a bit complicated way but by that we avoid calling
- * dqget() and thus filesystem callbacks under dqptr_sem.
+ * find_get_dquot() and thus filesystem callbacks under dqptr_sem.
  *
  * It is better to call this function outside of any transaction as it
  * might need a lot of space in journal for dquot structure allocation.
@@ -1462,7 +1469,7 @@ static void __dquot_initialize(struct inode *inode, int type)
 			id = inode->i_gid;
 			break;
 		}
-		got[cnt] = dqget(sb, id, cnt);
+		got[cnt] = find_get_dquot(sb, id, cnt);
 	}
 
 	down_write(&dqopts(sb)->dqptr_sem);
@@ -1991,9 +1998,12 @@ int dquot_transfer(struct inode *inode, struct iattr *iattr)
 		return 0;
 
 	if (iattr->ia_valid & ATTR_UID && iattr->ia_uid != inode->i_uid)
-		transfer_to[USRQUOTA] = dqget(sb, iattr->ia_uid, USRQUOTA);
+		transfer_to[USRQUOTA] = find_get_dquot(sb, iattr->ia_uid,
+						USRQUOTA);
+
 	if (iattr->ia_valid & ATTR_GID && iattr->ia_gid != inode->i_gid)
-		transfer_to[GRPQUOTA] = dqget(sb, iattr->ia_gid, GRPQUOTA);
+		transfer_to[GRPQUOTA] = find_get_dquot(sb, iattr->ia_gid,
+						GRPQUOTA);
 
 	ret = __dquot_transfer(inode, transfer_to);
 	dqput_all(transfer_to);
@@ -2547,7 +2557,7 @@ int dquot_get_dqblk(struct super_block *sb, int type, qid_t id,
 {
 	struct dquot *dquot;
 
-	dquot = dqget(sb, id, type);
+	dquot = find_get_dquot(sb, id, type);
 	if (!dquot)
 		return -ESRCH;
 	do_get_dqblk(dquot, di);
@@ -2660,7 +2670,7 @@ int dquot_set_dqblk(struct super_block *sb, int type, qid_t id,
 	struct dquot *dquot;
 	int rc;
 
-	dquot = dqget(sb, id, type);
+	dquot = find_get_dquot(sb, id, type);
 	if (!dquot) {
 		rc = -ESRCH;
 		goto out;
diff --git a/include/linux/quotaops.h b/include/linux/quotaops.h
index 68ceef5..93e39c6 100644
--- a/include/linux/quotaops.h
+++ b/include/linux/quotaops.h
@@ -52,7 +52,8 @@ void inode_sub_rsv_space(struct inode *inode, qsize_t number);
 
 void dquot_initialize(struct inode *inode);
 void dquot_drop(struct inode *inode);
-struct dquot *dqget(struct super_block *sb, unsigned int id, int type);
+struct dquot *find_get_dquot(struct super_block *sb, unsigned int id, int type);
+void dqget(struct dquot *dquot);
 void dqput(struct dquot *dquot);
 int dquot_scan_active(struct super_block *sb,
 		      int (*fn)(struct dquot *dquot, unsigned long priv),
-- 
1.6.5.2



* [PATCH 18/19] fs: add unlocked helpers
  2010-11-11 12:14 [PATCH 00/19] quota: RFC SMP improvements for generic quota V3 Dmitry Monakhov
                   ` (16 preceding siblings ...)
  2010-11-11 12:14 ` [PATCH 17/19] quota: Some stylistic cleanup for dquot interface Dmitry Monakhov
@ 2010-11-11 12:14 ` Dmitry Monakhov
  2010-11-11 12:14 ` [PATCH 19/19] quota: protect i_dquot with i_lock instead of dqptr_sem Dmitry Monakhov
  2010-11-19  5:44 ` [PATCH 00/19] quota: RFC SMP improvements for generic quota V3 Dmitry
  19 siblings, 0 replies; 32+ messages in thread
From: Dmitry Monakhov @ 2010-11-11 12:14 UTC (permalink / raw)
  To: linux-fsdevel; +Cc: jack, hch, Dmitry Monakhov

Add unlocked __inode_{add,sub,get}_bytes() helpers; they will be used by
the dquot code.

Signed-off-by: Dmitry Monakhov <dmonakhov@openvz.org>
---
 fs/quota/dquot.c   |   50 ++++++++++++++++++++++++++++++++++++++------------
 fs/stat.c          |   15 ++++++++++++---
 include/linux/fs.h |    2 ++
 3 files changed, 52 insertions(+), 15 deletions(-)

diff --git a/fs/quota/dquot.c b/fs/quota/dquot.c
index af3413e..b2cf04d 100644
--- a/fs/quota/dquot.c
+++ b/fs/quota/dquot.c
@@ -246,6 +246,7 @@ struct dqstats dqstats;
 EXPORT_SYMBOL(dqstats);
 
 static qsize_t inode_get_rsv_space(struct inode *inode);
+static qsize_t __inode_get_rsv_space(struct inode *inode);
 static void __dquot_initialize(struct inode *inode, int type);
 
 static inline unsigned int
@@ -1592,11 +1593,17 @@ void inode_add_rsv_space(struct inode *inode, qsize_t number)
 }
 EXPORT_SYMBOL(inode_add_rsv_space);
 
-void inode_claim_rsv_space(struct inode *inode, qsize_t number)
+inline void __inode_claim_rsv_space(struct inode *inode, qsize_t number)
 {
-	spin_lock(&inode->i_lock);
 	*inode_reserved_space(inode) -= number;
 	__inode_add_bytes(inode, number);
+
+}
+
+void inode_claim_rsv_space(struct inode *inode, qsize_t number)
+{
+	spin_lock(&inode->i_lock);
+	__inode_claim_rsv_space(inode, number);
 	spin_unlock(&inode->i_lock);
 }
 EXPORT_SYMBOL(inode_claim_rsv_space);
@@ -1609,33 +1616,52 @@ void inode_sub_rsv_space(struct inode *inode, qsize_t number)
 }
 EXPORT_SYMBOL(inode_sub_rsv_space);
 
-static qsize_t inode_get_rsv_space(struct inode *inode)
+static qsize_t __inode_get_rsv_space(struct inode *inode)
 {
-	qsize_t ret;
-
 	if (!dqctl(inode->i_sb)->dq_op->get_reserved_space)
 		return 0;
+	return *inode_reserved_space(inode);
+}
+
+static qsize_t inode_get_rsv_space(struct inode *inode)
+{
+	qsize_t ret;
 	spin_lock(&inode->i_lock);
-	ret = *inode_reserved_space(inode);
+	ret = __inode_get_rsv_space(inode);
 	spin_unlock(&inode->i_lock);
 	return ret;
 }
 
-static void inode_incr_space(struct inode *inode, qsize_t number,
+static void __inode_incr_space(struct inode *inode, qsize_t number,
 				int reserve)
 {
 	if (reserve)
-		inode_add_rsv_space(inode, number);
+		*inode_reserved_space(inode) += number;
 	else
-		inode_add_bytes(inode, number);
+		__inode_add_bytes(inode, number);
 }
 
-static void inode_decr_space(struct inode *inode, qsize_t number, int reserve)
+static void inode_incr_space(struct inode *inode, qsize_t number,
+				int reserve)
+{
+	spin_lock(&inode->i_lock);
+	__inode_incr_space(inode, number, reserve);
+	spin_unlock(&inode->i_lock);
+}
+
+
+static void __inode_decr_space(struct inode *inode, qsize_t number, int reserve)
 {
 	if (reserve)
-		inode_sub_rsv_space(inode, number);
+		*inode_reserved_space(inode) -= number;
 	else
-		inode_sub_bytes(inode, number);
+		__inode_sub_bytes(inode, number);
+}
+static void inode_decr_space(struct inode *inode, qsize_t number, int reserve)
+{
+	spin_lock(&inode->i_lock);
+	__inode_decr_space(inode, number, reserve);
+	spin_unlock(&inode->i_lock);
 }
 
 /*
diff --git a/fs/stat.c b/fs/stat.c
index 12e90e2..f2da983 100644
--- a/fs/stat.c
+++ b/fs/stat.c
@@ -429,9 +429,8 @@ void inode_add_bytes(struct inode *inode, loff_t bytes)
 
 EXPORT_SYMBOL(inode_add_bytes);
 
-void inode_sub_bytes(struct inode *inode, loff_t bytes)
+void __inode_sub_bytes(struct inode *inode, loff_t bytes)
 {
-	spin_lock(&inode->i_lock);
 	inode->i_blocks -= bytes >> 9;
 	bytes &= 511;
 	if (inode->i_bytes < bytes) {
@@ -439,17 +438,27 @@ void inode_sub_bytes(struct inode *inode, loff_t bytes)
 		inode->i_bytes += 512;
 	}
 	inode->i_bytes -= bytes;
+}
+
+void inode_sub_bytes(struct inode *inode, loff_t bytes)
+{
+	spin_lock(&inode->i_lock);
+	__inode_sub_bytes(inode, bytes);
 	spin_unlock(&inode->i_lock);
 }
 
 EXPORT_SYMBOL(inode_sub_bytes);
 
+inline loff_t __inode_get_bytes(struct inode *inode)
+{
+	return (((loff_t)inode->i_blocks) << 9) + inode->i_bytes;
+}
 loff_t inode_get_bytes(struct inode *inode)
 {
 	loff_t ret;
 
 	spin_lock(&inode->i_lock);
-	ret = (((loff_t)inode->i_blocks) << 9) + inode->i_bytes;
+	ret = __inode_get_bytes(inode);
 	spin_unlock(&inode->i_lock);
 	return ret;
 }
diff --git a/include/linux/fs.h b/include/linux/fs.h
index e87694a..3ef2ec1 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -2308,7 +2308,9 @@ extern void generic_fillattr(struct inode *, struct kstat *);
 extern int vfs_getattr(struct vfsmount *, struct dentry *, struct kstat *);
 void __inode_add_bytes(struct inode *inode, loff_t bytes);
 void inode_add_bytes(struct inode *inode, loff_t bytes);
+void __inode_sub_bytes(struct inode *inode, loff_t bytes);
 void inode_sub_bytes(struct inode *inode, loff_t bytes);
+loff_t __inode_get_bytes(struct inode *inode);
 loff_t inode_get_bytes(struct inode *inode);
 void inode_set_bytes(struct inode *inode, loff_t bytes);
 
-- 
1.6.5.2


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH 19/19] quota: protect i_dquot with i_lock instead of dqptr_sem
  2010-11-11 12:14 [PATCH 00/19] quota: RFC SMP improvements for generic quota V3 Dmitry Monakhov
                   ` (17 preceding siblings ...)
  2010-11-11 12:14 ` [PATCH 18/19] fs: add unlocked helpers Dmitry Monakhov
@ 2010-11-11 12:14 ` Dmitry Monakhov
  2010-11-19  5:44 ` [PATCH 00/19] quota: RFC SMP improvements for generic quota V3 Dmitry
  19 siblings, 0 replies; 32+ messages in thread
From: Dmitry Monakhov @ 2010-11-11 12:14 UTC (permalink / raw)
  To: linux-fsdevel; +Cc: jack, hch, Dmitry Monakhov

dqptr_sem is currently one of the most contended locks:
each dquot_initialize() and dquot_transfer() results in down_write(dqptr_sem).
Let's use inode->i_lock to protect the i_dquot pointers instead. All
users which modify i_dquot are simply converted to that lock. But users
which held dqptr_sem for read (the charge/uncharge methods) usually
look like the following:

down_read(&dqptr_sem)
___charge_quota()
make_quota_dirty(inode->i_dquot) --> may sleep
up_read(&dqptr_sem)

We must drop i_lock before make_quota_dirty or flush_warnings.
To protect the dquot from being freed, let's grab an extra reference
to it and drop it once we are done with the dquot object.

Signed-off-by: Dmitry Monakhov <dmonakhov@openvz.org>
---
 fs/quota/dquot.c         |  314 +++++++++++++++++----------------------------
 include/linux/quota.h    |    2 -
 include/linux/quotaops.h |    4 +-
 3 files changed, 121 insertions(+), 199 deletions(-)

diff --git a/fs/quota/dquot.c b/fs/quota/dquot.c
index b2cf04d..de3990f 100644
--- a/fs/quota/dquot.c
+++ b/fs/quota/dquot.c
@@ -91,19 +91,17 @@
  * in inode_add_bytes() and inode_sub_bytes().
  *
  * The spinlock ordering is hence:
- * dq_data_lock > dq_lock > dq_list_lock > i_lock,
+ * dq_data_lock > i_lock > dq_lock > dq_list_lock
  * dq_list_lock > hlist_bl_head
  *
  * Note that some things (eg. sb pointer, type, id) doesn't change during
  * the life of the dquot structure and so needn't to be protected by a lock
  *
- * Any operation working on dquots via inode pointers must hold dqptr_sem.  If
- * operation is just reading pointers from inode (or not using them at all) the
- * read lock is enough. If pointers are altered function must hold write lock.
+ * Any operation working on dquots via inode pointers must hold i_lock.
  * Special care needs to be taken about S_NOQUOTA inode flag (marking that
  * inode is a quota file). Functions adding pointers from inode to dquots have
- * to check this flag under dqptr_sem and then (if S_NOQUOTA is not set) they
- * have to do all pointer modifications before dropping dqptr_sem. This makes
+ * to check this flag under i_lock and then (if S_NOQUOTA is not set) they
+ * have to do all pointer modifications before dropping i_lock. This makes
  * sure they cannot race with quotaon which first sets S_NOQUOTA flag and
  * then drops all pointers to dquots from an inode.
  *
@@ -118,14 +116,7 @@
  * spinlock to internal buffers before writing.
  *
  * Lock ordering (including related VFS locks) is the following:
- *   i_mutex > dqonoff_sem > journal_lock > dqptr_sem > dquot->dq_mutex >
- *   dqio_mutex
- * The lock ordering of dqptr_sem imposed by quota code is only dqonoff_sem >
- * dqptr_sem. But filesystem has to count with the fact that functions such as
- * dquot_alloc_space() acquire dqptr_sem and they usually have to be called
- * from inside a transaction to keep filesystem consistency after a crash. Also
- * filesystems usually want to do some IO on dquot from ->mark_dirty which is
- * called with dqptr_sem held.
+ *   i_mutex > dqonoff_sem > journal_lock > dquot->dq_mutex > dqio_mutex
  * i_mutex on quota files is special (it's below dqio_mutex)
  */
 
@@ -778,7 +769,6 @@ static struct shrinker dqcache_shrinker = {
 
 /*
  * Put reference to dquot
- * NOTE: If you change this function please check whether dqput_blocks() works right...
  */
 void dqput(struct dquot *dquot)
 {
@@ -1019,50 +1009,6 @@ static void add_dquot_ref(struct super_block *sb, int type)
 }
 
 /*
- * Return 0 if dqput() won't block.
- * (note that 1 doesn't necessarily mean blocking)
- */
-static inline int dqput_blocks(struct dquot *dquot)
-{
-	if (atomic_read(&dquot->dq_count) <= 1)
-		return 1;
-	return 0;
-}
-
-/*
- * Remove references to dquots from inode and add dquot to list for freeing
- * if we have the last referece to dquot
- * We can't race with anybody because we hold dqptr_sem for writing...
- */
-static int remove_inode_dquot_ref(struct inode *inode, int type,
-				  struct list_head *tofree_head)
-{
-	struct dquot *dquot = inode->i_dquot[type];
-	struct quota_info *dqopt = dqopts(inode->i_sb);
-
-	inode->i_dquot[type] = NULL;
-	if (dquot) {
-		if (dqput_blocks(dquot)) {
-#ifdef CONFIG_QUOTA_DEBUG
-			if (atomic_read(&dquot->dq_count) != 1)
-				quota_error(inode->i_sb, "Adding dquot with "
-					    "dq_count %d to dispose list",
-					    atomic_read(&dquot->dq_count));
-#endif
-			spin_lock(&dqopt->dq_list_lock);
-			/* As dquot must have currently users it can't be on
-			 * the free list... */
-			list_add(&dquot->dq_free, tofree_head);
-			spin_unlock(&dqopt->dq_list_lock);
-			return 1;
-		}
-		else
-			dqput(dquot);   /* We have guaranteed we won't block */
-	}
-	return 0;
-}
-
-/*
  * Free list of dquots
  * Dquots are removed from inodes and no new references can be got so we are
  * the only ones holding reference
@@ -1087,20 +1033,35 @@ static void remove_dquot_ref(struct super_block *sb, int type,
 {
 	struct inode *inode;
 	int reserved = 0;
-
+	struct dquot *dquot;
 	spin_lock(&inode_lock);
 	list_for_each_entry(inode, &sb->s_inodes, i_sb_list) {
 		/*
 		 *  We have to scan also I_NEW inodes because they can already
 		 *  have quota pointer initialized. Luckily, we need to touch
 		 *  only quota pointers and these have separate locking
-		 *  (dqptr_sem).
+		 *  (i_lock).
 		 */
+		spin_lock(&inode->i_lock);
 		if (!IS_NOQUOTA(inode)) {
-			if (unlikely(inode_get_rsv_space(inode) > 0))
+			if (unlikely(__inode_get_rsv_space(inode) > 0))
 				reserved = 1;
-			remove_inode_dquot_ref(inode, type, tofree_head);
+			dquot = inode->i_dquot[type];
+			inode->i_dquot[type] = NULL;
+			/*
+			 * As the dquot currently has users it can't be on
+			 * the free list, so we can use ->dq_free here.
+			 * If the dquot is already in the list we have already
+			 * deferred one dqput() call, so dqput() cannot block.
+			 */
+			if (dquot) {
+				if (list_empty(&dquot->dq_free))
+					list_add(&dquot->dq_free, tofree_head);
+				else
+					dqput(dquot);
+			}
 		}
+		spin_unlock(&inode->i_lock);
 	}
 	spin_unlock(&inode_lock);
 #ifdef CONFIG_QUOTA_DEBUG
@@ -1118,9 +1079,7 @@ static void drop_dquot_ref(struct super_block *sb, int type)
 	LIST_HEAD(tofree_head);
 
 	if (dqctl(sb)->dq_op) {
-		down_write(&dqopts(sb)->dqptr_sem);
 		remove_dquot_ref(sb, type, &tofree_head);
-		up_write(&dqopts(sb)->dqptr_sem);
 		put_dquot_list(&tofree_head);
 	}
 }
@@ -1434,29 +1393,21 @@ static int dquot_active(const struct inode *inode)
 /*
  * Initialize quota pointers in inode
  *
- * We do things in a bit complicated way but by that we avoid calling
- * find_get_dquot() and thus filesystem callbacks under dqptr_sem.
- *
  * It is better to call this function outside of any transaction as it
  * might need a lot of space in journal for dquot structure allocation.
  */
 static void __dquot_initialize(struct inode *inode, int type)
 {
 	unsigned int id = 0;
-	int cnt, idx;
+	int cnt;
 	struct dquot *got[MAXQUOTAS];
 	struct super_block *sb = inode->i_sb;
 	qsize_t rsv;
 
 	/* First test before acquiring mutex - solves deadlocks when we
          * re-enter the quota code and are already holding the mutex */
-	rcu_read_lock();
-	if (!dquot_active(inode)) {
-		rcu_read_unlock();
+	if (!dquot_active(inode))
 		return;
-	}
-	idx = srcu_read_lock(&dqopts(sb)->dq_srcu);
-	rcu_read_unlock();
 	/* First get references to structures we might need. */
 	for (cnt = 0; cnt < MAXQUOTAS; cnt++) {
 		got[cnt] = NULL;
@@ -1473,7 +1424,8 @@ static void __dquot_initialize(struct inode *inode, int type)
 		got[cnt] = find_get_dquot(sb, id, cnt);
 	}
 
-	down_write(&dqopts(sb)->dqptr_sem);
+	spin_lock(&inode->i_lock);
+	rsv = __inode_get_rsv_space(inode);
 	if (IS_NOQUOTA(inode))
 		goto out_err;
 
@@ -1493,7 +1445,6 @@ static void __dquot_initialize(struct inode *inode, int type)
 			 * Make quota reservation system happy if someone
 			 * did a write before quota was turned on
 			 */
-			rsv = inode_get_rsv_space(inode);
 			if (unlikely(rsv)) {
 				spin_lock(&inode->i_dquot[cnt]->dq_lock);
 				dquot_resv_space(inode->i_dquot[cnt], rsv);
@@ -1502,8 +1453,7 @@ static void __dquot_initialize(struct inode *inode, int type)
 		}
 	}
 out_err:
-	up_write(&dqopts(sb)->dqptr_sem);
-	srcu_read_unlock(&dqopts(sb)->dq_srcu, idx);
+	spin_unlock(&inode->i_lock);
 	/* Drop unused references */
 	dqput_all(got);
 }
@@ -1535,12 +1485,12 @@ static void __dquot_drop(struct inode *inode)
 	int cnt;
 	struct dquot *put[MAXQUOTAS];
 
-	down_write(&dqopts(inode->i_sb)->dqptr_sem);
+	spin_lock(&inode->i_lock);
 	for (cnt = 0; cnt < MAXQUOTAS; cnt++) {
 		put[cnt] = inode->i_dquot[cnt];
 		inode->i_dquot[cnt] = NULL;
 	}
-	up_write(&dqopts(inode->i_sb)->dqptr_sem);
+	spin_unlock(&inode->i_lock);
 	dqput_all(put);
 }
 
@@ -1641,15 +1591,6 @@ static void __inode_incr_space(struct inode *inode, qsize_t number,
 		__inode_add_bytes(inode, number);
 }
 
-static void inode_incr_space(struct inode *inode, qsize_t number,
-				int reserve)
-{
-	spin_lock(&inode->i_lock);
-	__inode_incr_space(inode, number, reserve);
-	spin_unlock(&inode->i_lock);
-}
-
-
 static void __inode_decr_space(struct inode *inode, qsize_t number, int reserve)
 {
 	if (reserve)
@@ -1657,12 +1598,6 @@ static void __inode_decr_space(struct inode *inode, qsize_t number, int reserve)
 	else
 		__inode_sub_bytes(inode, number);
 }
-static void inode_decr_space(struct inode *inode, qsize_t number, int reserve)
-{
-	spin_lock(&inode->i_lock);
-	__inode_decr_space(inode, number, reserve);
-	spin_unlock(&inode->i_lock);
-}
 
 /*
  * This functions updates i_blocks+i_bytes fields and quota information
@@ -1679,25 +1614,19 @@ static void inode_decr_space(struct inode *inode, qsize_t number, int reserve)
  */
 int __dquot_alloc_space(struct inode *inode, qsize_t number, int flags)
 {
-	int cnt, idx, ret = 0;
+	int cnt, ret = 0;
 	char warntype[MAXQUOTAS];
 	int warn = flags & DQUOT_SPACE_WARN;
 	int reserve = flags & DQUOT_SPACE_RESERVE;
 	int nofail = flags & DQUOT_SPACE_NOFAIL;
+	struct dquot *dquot[MAXQUOTAS] = {};
 
-	/*
-	 * First test before acquiring mutex - solves deadlocks when we
-	 * re-enter the quota code and are already holding the mutex
-	 */
-	rcu_read_lock();
+	spin_lock(&inode->i_lock);
 	if (!dquot_active(inode)) {
-		inode_incr_space(inode, number, reserve);
-		rcu_read_unlock();
+		__inode_incr_space(inode, number, reserve);
+		spin_unlock(&inode->i_lock);
 		goto out;
 	}
-	idx = srcu_read_lock(&dqopts(inode->i_sb)->dq_srcu);
-	rcu_read_unlock();
-	down_read(&dqopts(inode->i_sb)->dqptr_sem);
 	for (cnt = 0; cnt < MAXQUOTAS; cnt++)
 		warntype[cnt] = QUOTA_NL_NOWARN;
 
@@ -1705,10 +1634,13 @@ int __dquot_alloc_space(struct inode *inode, qsize_t number, int flags)
 	for (cnt = 0; cnt < MAXQUOTAS; cnt++) {
 		if (!inode->i_dquot[cnt])
 			continue;
+		dquot[cnt] = inode->i_dquot[cnt];
+		dqget(dquot[cnt]);
 		ret = check_bdq(inode->i_dquot[cnt], number, !warn,
 				warntype + cnt);
 		if (ret && !nofail) {
 			unlock_inode_dquots(inode->i_dquot);
+			spin_unlock(&inode->i_lock);
 			goto out_flush_warn;
 		}
 	}
@@ -1720,16 +1652,15 @@ int __dquot_alloc_space(struct inode *inode, qsize_t number, int flags)
 		else
 			dquot_incr_space(inode->i_dquot[cnt], number);
 	}
-	inode_incr_space(inode, number, reserve);
+	__inode_incr_space(inode, number, reserve);
 	unlock_inode_dquots(inode->i_dquot);
-
-	if (reserve)
-		goto out_flush_warn;
-	mark_all_dquot_dirty(inode->i_dquot);
+	spin_unlock(&inode->i_lock);
+	if (!reserve)
+		mark_all_dquot_dirty(dquot);
 out_flush_warn:
-	flush_warnings(inode->i_dquot, warntype);
-	up_read(&dqopts(inode->i_sb)->dqptr_sem);
-	srcu_read_unlock(&dqopts(inode->i_sb)->dq_srcu, idx);
+
+	flush_warnings(dquot, warntype);
+	dqput_all(dquot);
 out:
 	return ret;
 }
@@ -1738,28 +1669,26 @@ EXPORT_SYMBOL(__dquot_alloc_space);
 /*
  * This operation can block, but only after everything is updated
  */
-int dquot_alloc_inode(const struct inode *inode)
+int dquot_alloc_inode(struct inode *inode)
 {
-	int cnt, idx, ret = 0;
+	int cnt, ret = 0;
 	char warntype[MAXQUOTAS];
+	struct dquot *dquot[MAXQUOTAS] = {};
 
-	/* First test before acquiring mutex - solves deadlocks when we
-         * re-enter the quota code and are already holding the mutex */
-	rcu_read_lock();
+	spin_lock(&inode->i_lock);
 	if (!dquot_active(inode)) {
-		rcu_read_unlock();
+		spin_unlock(&inode->i_lock);
 		return 0;
 	}
-	idx = srcu_read_lock(&dqopts(inode->i_sb)->dq_srcu);
-	rcu_read_unlock();
-
 	for (cnt = 0; cnt < MAXQUOTAS; cnt++)
 		warntype[cnt] = QUOTA_NL_NOWARN;
-	down_read(&dqopts(inode->i_sb)->dqptr_sem);
+
 	lock_inode_dquots(inode->i_dquot);
 	for (cnt = 0; cnt < MAXQUOTAS; cnt++) {
 		if (!inode->i_dquot[cnt])
 			continue;
+		dquot[cnt] = inode->i_dquot[cnt];
+		dqget(dquot[cnt]);
 		ret = check_idq(inode->i_dquot[cnt], 1, warntype + cnt);
 		if (ret)
 			goto warn_put_all;
@@ -1773,11 +1702,11 @@ int dquot_alloc_inode(const struct inode *inode)
 
 warn_put_all:
 	unlock_inode_dquots(inode->i_dquot);
+	spin_unlock(&inode->i_lock);
 	if (ret == 0)
-		mark_all_dquot_dirty(inode->i_dquot);
-	flush_warnings(inode->i_dquot, warntype);
-	up_read(&dqopts(inode->i_sb)->dqptr_sem);
-	srcu_read_unlock(&dqopts(inode->i_sb)->dq_srcu, idx);
+		mark_all_dquot_dirty(dquot);
+	flush_warnings(dquot, warntype);
+	dqput_all(dquot);
 	return ret;
 }
 EXPORT_SYMBOL(dquot_alloc_inode);
@@ -1787,30 +1716,30 @@ EXPORT_SYMBOL(dquot_alloc_inode);
  */
 int dquot_claim_space_nodirty(struct inode *inode, qsize_t number)
 {
-	int cnt, idx;
+	int cnt;
+	struct dquot *dquot[MAXQUOTAS] = {};
 
-	rcu_read_lock();
+	spin_lock(&inode->i_lock);
 	if (!dquot_active(inode)) {
-		inode_claim_rsv_space(inode, number);
-		rcu_read_unlock();
+		__inode_claim_rsv_space(inode, number);
+		spin_unlock(&inode->i_lock);
 		return 0;
 	}
-	idx = srcu_read_lock(&dqopts(inode->i_sb)->dq_srcu);
-	rcu_read_unlock();
-	down_read(&dqopts(inode->i_sb)->dqptr_sem);
 	lock_inode_dquots(inode->i_dquot);
 	/* Claim reserved quotas to allocated quotas */
 	for (cnt = 0; cnt < MAXQUOTAS; cnt++) {
-		if (inode->i_dquot[cnt])
-			dquot_claim_reserved_space(inode->i_dquot[cnt],
-							number);
+		if (inode->i_dquot[cnt]) {
+			dquot[cnt] = inode->i_dquot[cnt];
+			dqget(dquot[cnt]);
+			dquot_claim_reserved_space(inode->i_dquot[cnt], number);
+		}
 	}
 	/* Update inode bytes */
-	inode_claim_rsv_space(inode, number);
+	__inode_claim_rsv_space(inode, number);
 	unlock_inode_dquots(inode->i_dquot);
-	mark_all_dquot_dirty(inode->i_dquot);
-	up_read(&dqopts(inode->i_sb)->dqptr_sem);
-	srcu_read_unlock(&dqopts(inode->i_sb)->dq_srcu, idx);
+	spin_unlock(&inode->i_lock);
+	mark_all_dquot_dirty(dquot);
+	dqput_all(dquot);
 	return 0;
 }
 EXPORT_SYMBOL(dquot_claim_space_nodirty);
@@ -1820,75 +1749,68 @@ EXPORT_SYMBOL(dquot_claim_space_nodirty);
  */
 void __dquot_free_space(struct inode *inode, qsize_t number, int flags)
 {
-	unsigned int cnt, idx;
+	unsigned int cnt;
 	char warntype[MAXQUOTAS];
 	int reserve = flags & DQUOT_SPACE_RESERVE;
+	struct dquot *dquot[MAXQUOTAS] = {};
 
-	/* First test before acquiring mutex - solves deadlocks when we
-         * re-enter the quota code and are already holding the mutex */
-	rcu_read_lock();
+	spin_lock(&inode->i_lock);
 	if (!dquot_active(inode)) {
-		inode_decr_space(inode, number, reserve);
-		rcu_read_unlock();
+		__inode_decr_space(inode, number, reserve);
+		spin_unlock(&inode->i_lock);
 		return;
 	}
-
-	idx = srcu_read_lock(&dqopts(inode->i_sb)->dq_srcu);
-	rcu_read_unlock();
-	down_read(&dqopts(inode->i_sb)->dqptr_sem);
 	lock_inode_dquots(inode->i_dquot);
 	for (cnt = 0; cnt < MAXQUOTAS; cnt++) {
 		if (!inode->i_dquot[cnt])
 			continue;
+		dquot[cnt] = inode->i_dquot[cnt];
+		dqget(dquot[cnt]);
 		warntype[cnt] = info_bdq_free(inode->i_dquot[cnt], number);
 		if (reserve)
 			dquot_free_reserved_space(inode->i_dquot[cnt], number);
 		else
 			dquot_decr_space(inode->i_dquot[cnt], number);
 	}
-	inode_decr_space(inode, number, reserve);
+	__inode_decr_space(inode, number, reserve);
 	unlock_inode_dquots(inode->i_dquot);
-
-	if (reserve)
-		goto out_unlock;
-	mark_all_dquot_dirty(inode->i_dquot);
-out_unlock:
-	flush_warnings(inode->i_dquot, warntype);
-	up_read(&dqopts(inode->i_sb)->dqptr_sem);
-	srcu_read_unlock(&dqopts(inode->i_sb)->dq_srcu, idx);
+	spin_unlock(&inode->i_lock);
+	if (!reserve)
+		mark_all_dquot_dirty(dquot);
+	flush_warnings(dquot, warntype);
+	dqput_all(dquot);
 }
 EXPORT_SYMBOL(__dquot_free_space);
 
 /*
  * This operation can block, but only after everything is updated
  */
-void dquot_free_inode(const struct inode *inode)
+void dquot_free_inode(struct inode *inode)
 {
-	unsigned int cnt, idx;
+	unsigned int cnt;
 	char warntype[MAXQUOTAS];
+	struct dquot *dquot[MAXQUOTAS] = {};
 
-	/* First test before acquiring mutex - solves deadlocks when we
-         * re-enter the quota code and are already holding the mutex */
-	rcu_read_lock();
+	spin_lock(&inode->i_lock);
 	if (!dquot_active(inode)) {
-		rcu_read_unlock();
+		spin_unlock(&inode->i_lock);
 		return;
 	}
-	idx = srcu_read_lock(&dqopts(inode->i_sb)->dq_srcu);
-	rcu_read_unlock();
-	down_read(&dqopts(inode->i_sb)->dqptr_sem);
+
 	lock_inode_dquots(inode->i_dquot);
 	for (cnt = 0; cnt < MAXQUOTAS; cnt++) {
 		if (!inode->i_dquot[cnt])
 			continue;
+		dquot[cnt] = inode->i_dquot[cnt];
+		dqget(dquot[cnt]);
 		warntype[cnt] = info_idq_free(inode->i_dquot[cnt], 1);
 		dquot_decr_inodes(inode->i_dquot[cnt], 1);
 	}
 	unlock_inode_dquots(inode->i_dquot);
-	mark_all_dquot_dirty(inode->i_dquot);
-	flush_warnings(inode->i_dquot, warntype);
-	up_read(&dqopts(inode->i_sb)->dqptr_sem);
-	srcu_read_unlock(&dqopts(inode->i_sb)->dq_srcu, idx);
+	spin_unlock(&inode->i_lock);
+	mark_all_dquot_dirty(dquot);
+	flush_warnings(dquot, warntype);
+	dqput_all(dquot);
 }
 EXPORT_SYMBOL(dquot_free_inode);
 
@@ -1907,36 +1829,27 @@ int __dquot_transfer(struct inode *inode, struct dquot **transfer_to)
 	qsize_t space, cur_space;
 	qsize_t rsv_space = 0;
 	struct dquot *transfer_from[MAXQUOTAS] = {};
-	int cnt, idx, ret = 0;
+	int cnt, ret = 0;
 	char is_valid[MAXQUOTAS] = {};
 	char warntype_to[MAXQUOTAS];
 	char warntype_from_inodes[MAXQUOTAS], warntype_from_space[MAXQUOTAS];
 
-	/* First test before acquiring mutex - solves deadlocks when we
-         * re-enter the quota code and are already holding the mutex */
-	rcu_read_lock();
+	spin_lock(&inode->i_lock);
 	if (!dquot_active(inode)) {
-		rcu_read_unlock();
+		spin_unlock(&inode->i_lock);
 		return 0;
 	}
 	/* Initialize the arrays */
 	for (cnt = 0; cnt < MAXQUOTAS; cnt++)
 		warntype_to[cnt] = QUOTA_NL_NOWARN;
 
-	idx = srcu_read_lock(&dqopts(inode->i_sb)->dq_srcu);
-	rcu_read_unlock();
-	down_write(&dqopts(inode->i_sb)->dqptr_sem);
-	if (IS_NOQUOTA(inode)) {	/* File without quota accounting? */
-		up_write(&dqopts(inode->i_sb)->dqptr_sem);
-		srcu_read_unlock(&dqopts(inode->i_sb)->dq_srcu, idx);
-		return 0;
-	}
 	for (cnt = 0; cnt < MAXQUOTAS; cnt++) {
 		/*
 		 * Skip changes for same uid or gid or for turned off quota-type.
 		 */
 		if (!transfer_to[cnt])
 			continue;
+		dqget(transfer_to[cnt]);
 		/* Avoid races with quotaoff() */
 		if (!sb_has_quota_active(inode->i_sb, cnt))
 			continue;
@@ -1946,10 +1859,13 @@ int __dquot_transfer(struct inode *inode, struct dquot **transfer_to)
 			continue;
 		is_valid[cnt] = 1;
 		transfer_from[cnt] = inode->i_dquot[cnt];
+		if (transfer_from[cnt])
+			dqget(transfer_from[cnt]);
+
 	}
 	lock_dquot_double(transfer_from, transfer_to);
-	cur_space = inode_get_bytes(inode);
-	rsv_space = inode_get_rsv_space(inode);
+	cur_space = __inode_get_bytes(inode);
+	rsv_space = __inode_get_rsv_space(inode);
 	space = cur_space + rsv_space;
 	/* Build the transfer_from list and check the limits */
 	for (cnt = 0; cnt < MAXQUOTAS; cnt++) {
@@ -1989,13 +1905,15 @@ int __dquot_transfer(struct inode *inode, struct dquot **transfer_to)
 	}
 	unlock_inode_dquots(transfer_to);
 	unlock_inode_dquots(transfer_from);
-	up_write(&dqopts(inode->i_sb)->dqptr_sem);
-	srcu_read_unlock(&dqopts(inode->i_sb)->dq_srcu, idx);
+	spin_unlock(&inode->i_lock);
+
 	mark_all_dquot_dirty(transfer_from);
 	mark_all_dquot_dirty(transfer_to);
 	flush_warnings(transfer_to, warntype_to);
 	flush_warnings(transfer_from, warntype_from_inodes);
 	flush_warnings(transfer_from, warntype_from_space);
+	dqput_all(transfer_to);
+	dqput_all(transfer_from);
 	/* Pass back references to put */
 	for (cnt = 0; cnt < MAXQUOTAS; cnt++)
 		if (is_valid[cnt])
@@ -2004,9 +1922,10 @@ int __dquot_transfer(struct inode *inode, struct dquot **transfer_to)
 over_quota:
 	unlock_inode_dquots(transfer_to);
 	unlock_inode_dquots(transfer_from);
-	up_write(&dqopts(inode->i_sb)->dqptr_sem);
-	srcu_read_unlock(&dqopts(inode->i_sb)->dq_srcu, idx);
+	spin_unlock(&inode->i_lock);
 	flush_warnings(transfer_to, warntype_to);
+	dqput_all(transfer_to);
+	dqput_all(transfer_from);
 	return ret;
 }
 EXPORT_SYMBOL(__dquot_transfer);
@@ -2109,7 +2028,6 @@ static int alloc_quota_info(struct quota_ctl_info *dqctl) {
 		return err;
 	}
 	mutex_init(&dqopt->dqio_mutex);
-	init_rwsem(&dqopt->dqptr_sem);
 	spin_lock_init(&dqopt->dq_list_lock);
 	spin_lock_init(&dqopt->dq_data_lock);
 	INIT_LIST_HEAD(&dqopt->dq_inuse_list);
@@ -2247,8 +2165,10 @@ int dquot_disable(struct super_block *sb, int type, unsigned int flags)
 			if (!sb_has_quota_loaded(sb, cnt)) {
 				mutex_lock_nested(&toputinode[cnt]->i_mutex,
 						  I_MUTEX_QUOTA);
+				spin_lock(&toputinode[cnt]->i_lock);
 				toputinode[cnt]->i_flags &= ~(S_IMMUTABLE |
 				  S_NOATIME | S_NOQUOTA);
+				spin_unlock(&toputinode[cnt]->i_lock);
 				truncate_inode_pages(&toputinode[cnt]->i_data,
 						     0);
 				mutex_unlock(&toputinode[cnt]->i_mutex);
@@ -2343,9 +2263,11 @@ static int vfs_load_quota_inode(struct inode *inode, int type, int format_id,
 		 * possible) Also nobody should write to the file - we use
 		 * special IO operations which ignore the immutable bit. */
 		mutex_lock_nested(&inode->i_mutex, I_MUTEX_QUOTA);
+		spin_lock(&inode->i_lock);
 		oldflags = inode->i_flags & (S_NOATIME | S_IMMUTABLE |
 					     S_NOQUOTA);
 		inode->i_flags |= S_NOQUOTA | S_NOATIME | S_IMMUTABLE;
+		spin_unlock(&inode->i_lock);
 		mutex_unlock(&inode->i_mutex);
 		/*
 		 * When S_NOQUOTA is set, remove dquot references as no more
@@ -2387,8 +2309,10 @@ out_lock:
 		mutex_lock_nested(&inode->i_mutex, I_MUTEX_QUOTA);
 		/* Set the flags back (in the case of accidental quotaon()
 		 * on a wrong file we don't want to mess up the flags) */
+		spin_lock(&inode->i_lock);
 		inode->i_flags &= ~(S_NOATIME | S_NOQUOTA | S_IMMUTABLE);
 		inode->i_flags |= oldflags;
+		spin_unlock(&inode->i_lock);
 		mutex_unlock(&inode->i_mutex);
 	}
 	/* We have failed to enable quota, so quota flags doesn't changed.
diff --git a/include/linux/quota.h b/include/linux/quota.h
index 949347a..f39a756 100644
--- a/include/linux/quota.h
+++ b/include/linux/quota.h
@@ -427,8 +427,6 @@ struct quota_info {
 	struct inode *files[MAXQUOTAS];	/* inodes of quotafiles */
 	const struct quota_format_ops *fmt_ops[MAXQUOTAS];	/* Operations for each type */
 	struct srcu_struct dq_srcu;	/* use count read lock */
-	struct rw_semaphore dqptr_sem;	/* serialize ops using quota_info struct, pointers from inode to dquots */
-
 };
 
 int register_quota_format(struct quota_format_type *fmt);
diff --git a/include/linux/quotaops.h b/include/linux/quotaops.h
index 93e39c6..c19a904 100644
--- a/include/linux/quotaops.h
+++ b/include/linux/quotaops.h
@@ -64,10 +64,10 @@ void dquot_destroy(struct dquot *dquot);
 int __dquot_alloc_space(struct inode *inode, qsize_t number, int flags);
 void __dquot_free_space(struct inode *inode, qsize_t number, int flags);
 
-int dquot_alloc_inode(const struct inode *inode);
+int dquot_alloc_inode(struct inode *inode);
 
 int dquot_claim_space_nodirty(struct inode *inode, qsize_t number);
-void dquot_free_inode(const struct inode *inode);
+void dquot_free_inode(struct inode *inode);
 
 int dquot_disable(struct super_block *sb, int type, unsigned int flags);
 /* Suspend quotas on remount RO */
-- 
1.6.5.2


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* Re: [PATCH 01/19] quota: protect getfmt call with dqonoff_mutex lock
  2010-11-11 12:14 ` [PATCH 01/19] quota: protect getfmt call with dqonoff_mutex lock Dmitry Monakhov
@ 2010-11-11 13:36   ` Christoph Hellwig
  2010-11-22 19:35   ` Jan Kara
  1 sibling, 0 replies; 32+ messages in thread
From: Christoph Hellwig @ 2010-11-11 13:36 UTC (permalink / raw)
  To: Dmitry Monakhov; +Cc: linux-fsdevel, jack, hch

On Thu, Nov 11, 2010 at 03:14:20PM +0300, Dmitry Monakhov wrote:
> dqptr_sem hasn't any thing in common with quota files,
> quota file load  protected with dqonoff_mutex, so we have to use
> it for reading fmt info.
> 
> Signed-off-by: Dmitry Monakhov <dmonakhov@openvz.org>

Looks good, but the commit message doesn't mention adding the ->get_fmt
method.


Reviewed-by: Christoph Hellwig <hch@lst.de>


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 00/19] quota: RFC SMP improvements for generic quota V3
  2010-11-11 12:14 [PATCH 00/19] quota: RFC SMP improvements for generic quota V3 Dmitry Monakhov
                   ` (18 preceding siblings ...)
  2010-11-11 12:14 ` [PATCH 19/19] quota: protect i_dquot with i_lock instead of dqptr_sem Dmitry Monakhov
@ 2010-11-19  5:44 ` Dmitry
  19 siblings, 0 replies; 32+ messages in thread
From: Dmitry @ 2010-11-19  5:44 UTC (permalink / raw)
  To: linux-fsdevel; +Cc: jack

On Thu, 11 Nov 2010 15:14:19 +0300, Dmitry Monakhov <dmonakhov@openvz.org> wrote:
Ping. Jan, can you please take a look at the series?
>  This patch set is my attempt to make quota code more scalable.
>  Main goal of this patch-set is to split global locking to per-sb basis.
>  Actually it consists of several parts
>  * Fixes : trivial fixes which i hope will be accepted w/o any complain
>  * Splitup global locks: Imho this part clean and simple. I hope it is
>    also a sane candidate for for_testing branch.
>  * More scalability for single sb : Some of this patches was already
>    submitted previously, some wasn't. This part is just my first vision
>    of the way we can move. This way result in real speedup, but i'm not
>    shure about design solutions, please do not punch me too strong
>    if you dont like that direction.
> 
>  This patch-set survived after some stress testing
>   * parallel quota{on,off}
>   * fssress
>   * triggering ENOSPC
> 
> More info here: download.openvz.org/~dmonakhov/quota.html
> 
> Changes from V2
>    * Move getfmt call to dquot  (suggested by hch@)
>    * Use global hash with per backet lock (suggested by viro@)
>    * Protect dqget with SRCU to prevent race on quota_info ptr
>    * Add dquot_init optimization
>    * Remove data_lock for ocfs2 where possible.
>    * I've remove dq_count optimization patch because it was buggy,
>      and in fact it should belongs to another patch-set.
>    * Bug fixes
>    ** Fix deadlock on dquot transfer due to previous ENOSPC
> 
> Changes from V1
>    * random fixes according to Jan's comments
>      + fix spelling
>      + fix deadlock on dquot_transfer, and lock_dep issues
>      - list_lock patches split is still the same as before.
>    *  move quota data from sb to dedicated pointer.
>    *  Basic improvements fore per-sb scalability
> 
> patch against 2.6.36-rc5, linux-fs-2.6.git for_testing branch
> <Out of tree patches from other developers>
>       kernel: add bl_list
> <Cleanups and Fixes>
>       quota: protect getfmt call with dqonoff_mutex lock
>       quota: Wrap common expression to  helper function
> <Split-up global locks>
>       quota: protect dqget() from parallels quotaoff via SRCU
>       quota: mode quota internals from sb to quota_info
>       quota: Remove state_lock
>       quota: add quota format lock
>       quota: make dquot lists per-sb
>       quota: optimize quota_initialize
>       quota: user per-backet hlist lock for dquot_hash
>       quota: remove global dq_list_lock
> <More scalability for single sb>
>       quota: rename dq_lock
>       quota: make per-sb dq_data_lock
>       quota: protect dquot mem info with object's lock
>       quota: drop dq_data_lock where possible
>       quota: relax dq_data_lock dq_lock locking consistency
>       quota: Some stylistic cleanup for dquot interface
>       fs: add unlocked helpers
>       quota: protect i_dquot with i_lock instead of dqptr_sem
> 
> Signed-off-by: Dmitry Monakhov <dmonakhov@openvz.org>
> ---
>  Makefile |    2 +-
>  1 files changed, 1 insertions(+), 1 deletions(-)
> 
> diff --git a/Makefile b/Makefile
> index 471c49f..4e7602b 100644
> --- a/Makefile
> +++ b/Makefile
> @@ -1,7 +1,7 @@
>  VERSION = 2
>  PATCHLEVEL = 6
>  SUBLEVEL = 36
> -EXTRAVERSION = -rc6
> +EXTRAVERSION = -rc6-quota
>  NAME = Sheep on Meth
>  
>  # *DOCUMENTATION*
> -- 
> 1.6.5.2
> 

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 01/19] quota: protect getfmt call with dqonoff_mutex lock
  2010-11-11 12:14 ` [PATCH 01/19] quota: protect getfmt call with dqonoff_mutex lock Dmitry Monakhov
  2010-11-11 13:36   ` Christoph Hellwig
@ 2010-11-22 19:35   ` Jan Kara
  2010-12-02 11:40     ` Dmitry
  1 sibling, 1 reply; 32+ messages in thread
From: Jan Kara @ 2010-11-22 19:35 UTC (permalink / raw)
  To: Dmitry Monakhov; +Cc: linux-fsdevel, jack, hch

On Thu 11-11-10 15:14:20, Dmitry Monakhov wrote:
> dqptr_sem doesn't have anything in common with quota files; quota file
> loading is protected by dqonoff_mutex, so we have to use it for reading
> the format info.
  You missed ext2 and jfs in the conversion. Also you should mention that
you introduce a new ->get_fmt call. Other than that, just one nitpick:

> diff --git a/include/linux/quota.h b/include/linux/quota.h
> index 9a85412..2767e4c 100644
> --- a/include/linux/quota.h
> +++ b/include/linux/quota.h
> @@ -331,6 +331,7 @@ struct quotactl_ops {
>  	int (*quota_off)(struct super_block *, int);
>  	int (*quota_sync)(struct super_block *, int, int);
>  	int (*get_info)(struct super_block *, int, struct if_dqinfo *);
> +	int (*get_fmt)(struct super_block*, int, unsigned int*);
                                                             ^ space before *

									Honza
-- 
Jan Kara <jack@suse.cz>
SUSE Labs, CR


* Re: [PATCH 06/19] quota: Remove state_lock
  2010-11-11 12:14 ` [PATCH 06/19] quota: Remove state_lock Dmitry Monakhov
@ 2010-11-22 21:12   ` Jan Kara
  2010-11-22 21:31     ` Dmitry
  0 siblings, 1 reply; 32+ messages in thread
From: Jan Kara @ 2010-11-22 21:12 UTC (permalink / raw)
  To: Dmitry Monakhov; +Cc: linux-fsdevel, jack, hch

On Thu 11-11-10 15:14:25, Dmitry Monakhov wrote:
> The only reader which uses state_lock is dqget(), and it is serialized
> with quota_disable() via SRCU, so state_lock doesn't guarantee anything
> in that case. All methods which modify quota flags are already protected
> by dqonoff_mutex. Get rid of the useless state_lock.
  dq_state_lock has two properties you miss here:
a) It guarantees we read a consistent value from a quota state variable.
b) We read an up-to-date value from the quota state variable.

b) is kind of achieved by rcu_read_lock(), which implies a memory barrier,
but stores can still be reordered and it's not completely clear that this
does not matter. a) is simply not achieved by your code.

So I think you have to change the quota code to manipulate state flags in
an atomic manner, and at least add comments on why reordering the
state-flags store, e.g. during quotaon, does not matter...

								Honza

> Signed-off-by: Dmitry Monakhov <dmonakhov@openvz.org>
> ---
>  fs/quota/dquot.c |   24 ++----------------------
>  1 files changed, 2 insertions(+), 22 deletions(-)
> 
> diff --git a/fs/quota/dquot.c b/fs/quota/dquot.c
> index 78e48f3..a1efacd 100644
> --- a/fs/quota/dquot.c
> +++ b/fs/quota/dquot.c
> @@ -86,12 +86,9 @@
>   * dq_data_lock protects data from dq_dqb and also mem_dqinfo structures and
>   * also guards consistency of dquot->dq_dqb with inode->i_blocks, i_bytes.
>   * i_blocks and i_bytes updates itself are guarded by i_lock acquired directly
> - * in inode_add_bytes() and inode_sub_bytes(). dq_state_lock protects
> - * modifications of quota state (on quotaon and quotaoff) and readers who care
> - * about latest values take it as well.
> + * in inode_add_bytes() and inode_sub_bytes().
>   *
> - * The spinlock ordering is hence: dq_data_lock > dq_list_lock > i_lock,
> - *   dq_list_lock > dq_state_lock
> + * The spinlock ordering is hence: dq_data_lock > dq_list_lock > i_lock.
>   *
>   * Note that some things (eg. sb pointer, type, id) doesn't change during
>   * the life of the dquot structure and so needn't to be protected by a lock
> @@ -128,7 +125,6 @@
>   */
>  
>  static __cacheline_aligned_in_smp DEFINE_SPINLOCK(dq_list_lock);
> -static __cacheline_aligned_in_smp DEFINE_SPINLOCK(dq_state_lock);
>  __cacheline_aligned_in_smp DEFINE_SPINLOCK(dq_data_lock);
>  EXPORT_SYMBOL(dq_data_lock);
>  
> @@ -827,14 +823,10 @@ struct dquot *dqget(struct super_block *sb, unsigned int id, int type)
>  	rcu_read_unlock();
>  we_slept:
>  	spin_lock(&dq_list_lock);
> -	spin_lock(&dq_state_lock);
>  	if (!sb_has_quota_active(sb, type)) {
> -		spin_unlock(&dq_state_lock);
>  		spin_unlock(&dq_list_lock);
>  		goto out;
>  	}
> -	spin_unlock(&dq_state_lock);
> -
>  	dquot = find_dquot(hashent, sb, id, type);
>  	if (!dquot) {
>  		if (!empty) {
> @@ -2022,24 +2014,19 @@ int dquot_disable(struct super_block *sb, int type, unsigned int flags)
>  			continue;
>  
>  		if (flags & DQUOT_SUSPENDED) {
> -			spin_lock(&dq_state_lock);
>  			qctl->flags |=
>  				dquot_state_flag(DQUOT_SUSPENDED, cnt);
> -			spin_unlock(&dq_state_lock);
>  		} else {
> -			spin_lock(&dq_state_lock);
>  			qctl->flags &= ~dquot_state_flag(flags, cnt);
>  			/* Turning off suspended quotas? */
>  			if (!sb_has_quota_loaded(sb, cnt) &&
>  			    sb_has_quota_suspended(sb, cnt)) {
>  				qctl->flags &=	~dquot_state_flag(
>  							DQUOT_SUSPENDED, cnt);
> -				spin_unlock(&dq_state_lock);
>  				iput(dqopt->files[cnt]);
>  				dqopt->files[cnt] = NULL;
>  				continue;
>  			}
> -			spin_unlock(&dq_state_lock);
>  		}
>  
>  		/* We still have to keep quota loaded? */
> @@ -2235,10 +2222,7 @@ static int vfs_load_quota_inode(struct inode *inode, int type, int format_id,
>  		goto out_file_init;
>  	}
>  	mutex_unlock(&dqopt->dqio_mutex);
> -	spin_lock(&dq_state_lock);
>  	dqctl(sb)->flags |= dquot_state_flag(flags, type);
> -	spin_unlock(&dq_state_lock);
> -
>  	add_dquot_ref(sb, type);
>  	mutex_unlock(&dqctl(sb)->dqonoff_mutex);
>  
> @@ -2290,12 +2274,10 @@ int dquot_resume(struct super_block *sb, int type)
>  		}
>  		inode = qctl->dq_opt->files[cnt];
>  		qctl->dq_opt->files[cnt] = NULL;
> -		spin_lock(&dq_state_lock);
>  		flags = qctl->flags & dquot_state_flag(DQUOT_USAGE_ENABLED |
>  							DQUOT_LIMITS_ENABLED,
>  							cnt);
>  		qctl->flags &= ~dquot_state_flag(DQUOT_STATE_FLAGS, cnt);
> -		spin_unlock(&dq_state_lock);
>  		mutex_unlock(&qctl->dqonoff_mutex);
>  
>  		flags = dquot_generic_flag(flags, cnt);
> @@ -2369,9 +2351,7 @@ int dquot_enable(struct inode *inode, int type, int format_id,
>  			ret = -EBUSY;
>  			goto out_lock;
>  		}
> -		spin_lock(&dq_state_lock);
>  		qctl->flags |= dquot_state_flag(flags, type);
> -		spin_unlock(&dq_state_lock);
>  out_lock:
>  		mutex_unlock(&qctl->dqonoff_mutex);
>  		return ret;
> -- 
> 1.6.5.2
> 
-- 
Jan Kara <jack@suse.cz>
SUSE Labs, CR


* Re: [PATCH 04/19] quota: protect dqget() from parallels quotaoff via SRCU
  2010-11-11 12:14 ` [PATCH 04/19] quota: protect dqget() from parallels quotaoff via SRCU Dmitry Monakhov
@ 2010-11-22 21:21   ` Jan Kara
  2010-11-22 21:53     ` Dmitry
  0 siblings, 1 reply; 32+ messages in thread
From: Jan Kara @ 2010-11-22 21:21 UTC (permalink / raw)
  To: Dmitry Monakhov; +Cc: linux-fsdevel, jack, hch

On Thu 11-11-10 15:14:23, Dmitry Monakhov wrote:
> In order to hide quota internals behind a dedicated structure pointer,
> we have to serialize that object's lifetime with dqget() and the
> charge/uncharge functions.
> quota_info construction/destruction will be protected via ->dq_srcu.
> The SRCU counter is temporarily placed inside the sb, but will be moved
> inside the quota object pointer in the next patch.
  The changelog seems rather insufficient to me. Could you please write
down the new locking rules here in more detail? Which structures exactly
are protected by RCU? Which locks are you able to relax after these changes
(is it only dq_state_lock?)? The rules should also be placed in dquot.c
where the locking is described.

> diff --git a/fs/quota/dquot.c b/fs/quota/dquot.c
> index 748d744..7e937b0 100644
> --- a/fs/quota/dquot.c
> +++ b/fs/quota/dquot.c
> @@ -805,7 +805,7 @@ static struct dquot *get_empty_dquot(struct super_block *sb, int type)
>  /*
>   * Get reference to dquot
>   *
> - * Locking is slightly tricky here. We are guarded from parallel quotaoff()
> + * We are guarded from parallel quotaoff() by holding srcu_read_lock
  The comment does not make sense after your change anymore.

>   * destroying our dquot by:
>   *   a) checking for quota flags under dq_list_lock and
>   *   b) getting a reference to dquot before we release dq_list_lock
> @@ -814,9 +814,15 @@ struct dquot *dqget(struct super_block *sb, unsigned int id, int type)
>  {
>  	unsigned int hashent = hashfn(sb, id, type);
>  	struct dquot *dquot = NULL, *empty = NULL;
> +	int idx;
>  
> -        if (!sb_has_quota_active(sb, type))
> +	rcu_read_lock();
> +	if (!sb_has_quota_active(sb, type)) {
> +		rcu_read_unlock();
>  		return NULL;
> +	}
> +	idx = srcu_read_lock(&dqopts(sb)->dq_srcu);
> +	rcu_read_unlock();
  Ugh, I'm kind of puzzled by your combination of RCU and SRCU. What's the
point?

								Honza
-- 
Jan Kara <jack@suse.cz>
SUSE Labs, CR


* Re: [PATCH 06/19] quota: Remove state_lock
  2010-11-22 21:12   ` Jan Kara
@ 2010-11-22 21:31     ` Dmitry
  2010-11-23 10:55       ` Jan Kara
  0 siblings, 1 reply; 32+ messages in thread
From: Dmitry @ 2010-11-22 21:31 UTC (permalink / raw)
  To: Jan Kara; +Cc: linux-fsdevel, jack, hch

On Mon, 22 Nov 2010 22:12:09 +0100, Jan Kara <jack@suse.cz> wrote:
> On Thu 11-11-10 15:14:25, Dmitry Monakhov wrote:
> > The only reader which uses state_lock is dqget(), and it is serialized
> > with quota_disable() via SRCU, so state_lock doesn't guarantee anything
> > in that case. All methods which modify quota flags are already protected
> > by dqonoff_mutex. Get rid of the useless state_lock.
>   dq_state_lock has two properties you miss here:
> a) It guarantees we read a consistent value from a quota state variable.
  So what? It doesn't guarantee anything: quota may become disabled
  one nanosecond after the check.
  And with srcu_read_lock() we can guarantee that quota will be available
  for the caller, even in the case of a stale value.
> b) We read an uptodate value from quota state variable.
   All callers which modify the state already hold dqonoff_mutex, so
   implicit memory barriers are always guaranteed.
> 
> b) is kind of achieved by rcu_read_lock() which implies a memory barrier
> but still stores can be reordered and it's not completely clear it does not
> matter. a) is simply not achieved by your code.
> 
> So I think you have to change quota code to manipulate state flags in an
> atomic manner and at least add comments why reordering the state flags
> store e.g. during quotaon does not matter...
Ok, agreed, a comment would be reasonable here.
> 
> 								Honza
> 
> > Signed-off-by: Dmitry Monakhov <dmonakhov@openvz.org>
> > ---
> >  fs/quota/dquot.c |   24 ++----------------------
> >  1 files changed, 2 insertions(+), 22 deletions(-)
> > 
> > diff --git a/fs/quota/dquot.c b/fs/quota/dquot.c
> > index 78e48f3..a1efacd 100644
> > --- a/fs/quota/dquot.c
> > +++ b/fs/quota/dquot.c
> > @@ -86,12 +86,9 @@
> >   * dq_data_lock protects data from dq_dqb and also mem_dqinfo structures and
> >   * also guards consistency of dquot->dq_dqb with inode->i_blocks, i_bytes.
> >   * i_blocks and i_bytes updates itself are guarded by i_lock acquired directly
> > - * in inode_add_bytes() and inode_sub_bytes(). dq_state_lock protects
> > - * modifications of quota state (on quotaon and quotaoff) and readers who care
> > - * about latest values take it as well.
> > + * in inode_add_bytes() and inode_sub_bytes().
> >   *
> > - * The spinlock ordering is hence: dq_data_lock > dq_list_lock > i_lock,
> > - *   dq_list_lock > dq_state_lock
> > + * The spinlock ordering is hence: dq_data_lock > dq_list_lock > i_lock.
> >   *
> >   * Note that some things (eg. sb pointer, type, id) doesn't change during
> >   * the life of the dquot structure and so needn't to be protected by a lock
> > @@ -128,7 +125,6 @@
> >   */
> >  
> >  static __cacheline_aligned_in_smp DEFINE_SPINLOCK(dq_list_lock);
> > -static __cacheline_aligned_in_smp DEFINE_SPINLOCK(dq_state_lock);
> >  __cacheline_aligned_in_smp DEFINE_SPINLOCK(dq_data_lock);
> >  EXPORT_SYMBOL(dq_data_lock);
> >  
> > @@ -827,14 +823,10 @@ struct dquot *dqget(struct super_block *sb, unsigned int id, int type)
> >  	rcu_read_unlock();
> >  we_slept:
> >  	spin_lock(&dq_list_lock);
> > -	spin_lock(&dq_state_lock);
> >  	if (!sb_has_quota_active(sb, type)) {
> > -		spin_unlock(&dq_state_lock);
> >  		spin_unlock(&dq_list_lock);
> >  		goto out;
> >  	}
> > -	spin_unlock(&dq_state_lock);
> > -
> >  	dquot = find_dquot(hashent, sb, id, type);
> >  	if (!dquot) {
> >  		if (!empty) {
> > @@ -2022,24 +2014,19 @@ int dquot_disable(struct super_block *sb, int type, unsigned int flags)
> >  			continue;
> >  
> >  		if (flags & DQUOT_SUSPENDED) {
> > -			spin_lock(&dq_state_lock);
> >  			qctl->flags |=
> >  				dquot_state_flag(DQUOT_SUSPENDED, cnt);
> > -			spin_unlock(&dq_state_lock);
> >  		} else {
> > -			spin_lock(&dq_state_lock);
> >  			qctl->flags &= ~dquot_state_flag(flags, cnt);
> >  			/* Turning off suspended quotas? */
> >  			if (!sb_has_quota_loaded(sb, cnt) &&
> >  			    sb_has_quota_suspended(sb, cnt)) {
> >  				qctl->flags &=	~dquot_state_flag(
> >  							DQUOT_SUSPENDED, cnt);
> > -				spin_unlock(&dq_state_lock);
> >  				iput(dqopt->files[cnt]);
> >  				dqopt->files[cnt] = NULL;
> >  				continue;
> >  			}
> > -			spin_unlock(&dq_state_lock);
> >  		}
> >  
> >  		/* We still have to keep quota loaded? */
> > @@ -2235,10 +2222,7 @@ static int vfs_load_quota_inode(struct inode *inode, int type, int format_id,
> >  		goto out_file_init;
> >  	}
> >  	mutex_unlock(&dqopt->dqio_mutex);
> > -	spin_lock(&dq_state_lock);
> >  	dqctl(sb)->flags |= dquot_state_flag(flags, type);
> > -	spin_unlock(&dq_state_lock);
> > -
> >  	add_dquot_ref(sb, type);
> >  	mutex_unlock(&dqctl(sb)->dqonoff_mutex);
> >  
> > @@ -2290,12 +2274,10 @@ int dquot_resume(struct super_block *sb, int type)
> >  		}
> >  		inode = qctl->dq_opt->files[cnt];
> >  		qctl->dq_opt->files[cnt] = NULL;
> > -		spin_lock(&dq_state_lock);
> >  		flags = qctl->flags & dquot_state_flag(DQUOT_USAGE_ENABLED |
> >  							DQUOT_LIMITS_ENABLED,
> >  							cnt);
> >  		qctl->flags &= ~dquot_state_flag(DQUOT_STATE_FLAGS, cnt);
> > -		spin_unlock(&dq_state_lock);
> >  		mutex_unlock(&qctl->dqonoff_mutex);
> >  
> >  		flags = dquot_generic_flag(flags, cnt);
> > @@ -2369,9 +2351,7 @@ int dquot_enable(struct inode *inode, int type, int format_id,
> >  			ret = -EBUSY;
> >  			goto out_lock;
> >  		}
> > -		spin_lock(&dq_state_lock);
> >  		qctl->flags |= dquot_state_flag(flags, type);
> > -		spin_unlock(&dq_state_lock);
> >  out_lock:
> >  		mutex_unlock(&qctl->dqonoff_mutex);
> >  		return ret;
> > -- 
> > 1.6.5.2
> > 
> -- 
> Jan Kara <jack@suse.cz>
> SUSE Labs, CR
> --
> To unsubscribe from this list: send the line "unsubscribe linux-fsdevel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html


* Re: [PATCH 08/19] quota: make dquot lists per-sb
  2010-11-11 12:14 ` [PATCH 08/19] quota: make dquot lists per-sb Dmitry Monakhov
@ 2010-11-22 21:37   ` Jan Kara
  0 siblings, 0 replies; 32+ messages in thread
From: Jan Kara @ 2010-11-22 21:37 UTC (permalink / raw)
  To: Dmitry Monakhov; +Cc: linux-fsdevel, jack, hch

On Thu 11-11-10 15:14:27, Dmitry Monakhov wrote:
> Currently quota lists are global, which is very bad for scalability.
> * inuse_lists -> sb->s_dquot->dq_inuse_list
> * free_lists  -> sb->s_dquot->dq_free_lists
> * Add a per-sb lock to protect the quota lists
> 
> dq_list_lock is not removed; it is now used only for protecting quota_hash
> 
> Signed-off-by: Dmitry Monakhov <dmonakhov@openvz.org>
> ---
>  fs/quota/dquot.c      |   88 +++++++++++++++++++++++++++++++++++++++---------
>  include/linux/quota.h |    3 ++
>  2 files changed, 74 insertions(+), 17 deletions(-)
> 
> diff --git a/fs/quota/dquot.c b/fs/quota/dquot.c
> index f719a6f..d7ec471 100644
> --- a/fs/quota/dquot.c
> +++ b/fs/quota/dquot.c
...
> @@ -335,17 +333,20 @@ static inline int mark_dquot_dirty(struct dquot *dquot)
>  int dquot_mark_dquot_dirty(struct dquot *dquot)
>  {
>  	int ret = 1;
> +	struct quota_info *dqopt = sb_dqopts(dquot);
>  
>  	/* If quota is dirty already, we don't have to acquire dq_list_lock */
>  	if (test_bit(DQ_MOD_B, &dquot->dq_flags))
>  		return 1;
>  
>  	spin_lock(&dq_list_lock);
> +	spin_lock(&dqopt->dq_list_lock);
>  	if (!test_and_set_bit(DQ_MOD_B, &dquot->dq_flags)) {
> -		list_add(&dquot->dq_dirty, &sb_dqopts(dquot)->
> -				info[dquot->dq_type].dqi_dirty_list);
> +		list_add(&dquot->dq_dirty,
> +			&dqopt->info[dquot->dq_type].dqi_dirty_list);
>  		ret = 0;
>  	}
> +	spin_unlock(&dqopt->dq_list_lock);
>  	spin_unlock(&dq_list_lock);
  OK, but the above code does nothing with the hash so you can remove
dq_list_lock immediately, can't you? Not that it would matter too much
since you remove it eventually but I'm curious...

>  /* Free unused dquots from cache */
> -static void prune_dqcache(int count)
> +static void prune_one_sb_dqcache(struct super_block *sb, void *arg)
>  {
>  	struct list_head *head;
>  	struct dquot *dquot;
> +	struct quota_info *dqopt = dqopts(sb);
> +	int count = *(int*) arg;
>  
> -	head = free_dquots.prev;
> -	while (head != &free_dquots && count) {
> +	mutex_lock(&dqctl(sb)->dqonoff_mutex);
  You cannot call mutex_lock() because you already hold dq_list_lock from
shrink_dqcache_memory(). If we could get away without the mutex completely,
it would be really welcome. The code can be called from page reclaim,
possibly holding all sorts of locks, so if you cannot get rid of
dqonoff_mutex, you must bail out if the gfp_mask passed to the shrinker
does not have __GFP_FS set (which would be unfortunate).

> +	if (!sb_any_quota_loaded(sb)) {
> +		mutex_unlock(&dqctl(sb)->dqonoff_mutex);
> +		return;
> +	}
> +	spin_lock(&dqopt->dq_list_lock);
> +	head = dqopt->dq_free_list.prev;
> +	while (head != &dqopt->dq_free_list && count) {
>  		dquot = list_entry(head, struct dquot, dq_free);
>  		remove_dquot_hash(dquot);
>  		remove_free_dquot(dquot);
>  		remove_inuse(dquot);
>  		do_destroy_dquot(dquot);
>  		count--;
> -		head = free_dquots.prev;
> +		head = dqopt->dq_free_list.prev;
>  	}
> +	spin_unlock(&dqopt->dq_list_lock);
> +	mutex_unlock(&dqctl(sb)->dqonoff_mutex);
> +}
> +static void prune_dqcache(int count)
> +{
> +	iterate_supers(prune_one_sb_dqcache, &count);
>  }
> -
>  /*
>   * This is called from kswapd when we think we need some
>   * more memory

								Honza
-- 
Jan Kara <jack@suse.cz>
SUSE Labs, CR


* Re: [PATCH 04/19] quota: protect dqget() from parallels quotaoff via SRCU
  2010-11-22 21:21   ` Jan Kara
@ 2010-11-22 21:53     ` Dmitry
  0 siblings, 0 replies; 32+ messages in thread
From: Dmitry @ 2010-11-22 21:53 UTC (permalink / raw)
  To: Jan Kara; +Cc: linux-fsdevel, jack, hch

On Mon, 22 Nov 2010 22:21:21 +0100, Jan Kara <jack@suse.cz> wrote:
> On Thu 11-11-10 15:14:23, Dmitry Monakhov wrote:
> > In order to hide quota internals behind a dedicated structure pointer,
> > we have to serialize that object's lifetime with dqget() and the
> > charge/uncharge functions.
> > quota_info construction/destruction will be protected via ->dq_srcu.
> > The SRCU counter is temporarily placed inside the sb, but will be moved
> > inside the quota object pointer in the next patch.
>   The changelog seems rather insufficient to me. Could you please write
> down the new locking rules here in more detail? Which structures exactly
> are protected by RCU? Which locks are you able to relax after these changes
> (is it only dq_state_lock?)? The rules should also be placed in dquot.c
> where the locking is described.
Unfortunately you are right, the comments are not very descriptive; will redo.
> 
> > diff --git a/fs/quota/dquot.c b/fs/quota/dquot.c
> > index 748d744..7e937b0 100644
> > --- a/fs/quota/dquot.c
> > +++ b/fs/quota/dquot.c
> > @@ -805,7 +805,7 @@ static struct dquot *get_empty_dquot(struct super_block *sb, int type)
> >  /*
> >   * Get reference to dquot
> >   *
> > - * Locking is slightly tricky here. We are guarded from parallel quotaoff()
> > + * We are guarded from parallel quotaoff() by holding srcu_read_lock
>   The comment does not make sense after your change anymore.
> 
> >   * destroying our dquot by:
> >   *   a) checking for quota flags under dq_list_lock and
> >   *   b) getting a reference to dquot before we release dq_list_lock
> > @@ -814,9 +814,15 @@ struct dquot *dqget(struct super_block *sb, unsigned int id, int type)
> >  {
> >  	unsigned int hashent = hashfn(sb, id, type);
> >  	struct dquot *dquot = NULL, *empty = NULL;
> > +	int idx;
> >  
> > -        if (!sb_has_quota_active(sb, type))
> > +	rcu_read_lock();
> > +	if (!sb_has_quota_active(sb, type)) {
> > +		rcu_read_unlock();
> >  		return NULL;
> > +	}
> > +	idx = srcu_read_lock(&dqopts(sb)->dq_srcu);
> > +	rcu_read_unlock();
>   Ugh, I'm kind of puzzled by your combination of RCU and SRCU. What's the
> point?
   This is just a trick for non-static SRCU variables, to prevent
   races between use and free; see http://lwn.net/Articles/202847/
   In my case I implement it as follows:

   /* All readers do: */
   rcu_read_lock();
   /* Stage 1: at this moment we prevent dq_srcu from being freed */
   if (!sb_has_quota_active(sb, type)) {
       rcu_read_unlock();
       return NULL;
   }
   /* Stage 2: grab a reference to quota_info. */
   idx = srcu_read_lock(&dqopts(sb)->dq_srcu);
   rcu_read_unlock();


   /* quota_disable: the cleanup path looks like this */
   quota_clear_active(sb, type);
   /* Wait for all callers in stage 1 */
   synchronize_rcu();
   /* Wait for all callers in stage 2 */
   synchronize_srcu(&dqopts(sb)->dq_srcu);
   /* Now we can finally destroy the SRCU structure */
   cleanup_srcu_struct(&dqopts(sb)->dq_srcu);
>  
> 								Honza
> -- 
> Jan Kara <jack@suse.cz>
> SUSE Labs, CR


* Re: [PATCH 06/19] quota: Remove state_lock
  2010-11-22 21:31     ` Dmitry
@ 2010-11-23 10:55       ` Jan Kara
  2010-11-23 11:33         ` Jan Kara
  0 siblings, 1 reply; 32+ messages in thread
From: Jan Kara @ 2010-11-23 10:55 UTC (permalink / raw)
  To: Dmitry; +Cc: Jan Kara, linux-fsdevel, hch

On Tue 23-11-10 00:31:56, Dmitry wrote:
> On Mon, 22 Nov 2010 22:12:09 +0100, Jan Kara <jack@suse.cz> wrote:
> > On Thu 11-11-10 15:14:25, Dmitry Monakhov wrote:
> > > The only reader which uses state_lock is dqget(), and it is serialized
> > > with quota_disable() via SRCU, so state_lock doesn't guarantee anything
> > > in that case. All methods which modify quota flags are already protected
> > > by dqonoff_mutex. Get rid of the useless state_lock.
> >   dq_state_lock has two properties you miss here:
> > a) It guarantees we read a consistent value from a quota state variable.
>   So what? It doesn't guarantee anything: quota may become disabled
>   one nanosecond after the check.
>   And with srcu_read_lock() we can guarantee that quota will be available
>   for the caller, even in the case of a stale value.
  Well, that's not completely my point. The point is that when we
manipulate flags in a non-atomic way (and using &=, |=, ... is a
non-atomic way), we are not really guaranteed what values a reader will see
while the change happens. So in theory the reader could see that all quotas
are disabled during a transition from usrquota enabled to usrquota+grpquota
enabled (I admit a stupid example but you get the point).

> > b) We read an uptodate value from quota state variable.
>    All callers which modify the state already hold dqonoff_mutex, so
>    implicit memory barriers are always guaranteed.
  Yes, so eventually the quota state is guaranteed to reach a consistent
value, but that says nothing about when, or about what a reader not using
dqonoff_mutex sees.

								Honza
-- 
Jan Kara <jack@suse.cz>
SUSE Labs, CR


* Re: [PATCH 06/19] quota: Remove state_lock
  2010-11-23 10:55       ` Jan Kara
@ 2010-11-23 11:33         ` Jan Kara
  0 siblings, 0 replies; 32+ messages in thread
From: Jan Kara @ 2010-11-23 11:33 UTC (permalink / raw)
  To: Dmitry; +Cc: Jan Kara, linux-fsdevel, hch

On Tue 23-11-10 11:55:30, Jan Kara wrote:
> On Tue 23-11-10 00:31:56, Dmitry wrote:
> > On Mon, 22 Nov 2010 22:12:09 +0100, Jan Kara <jack@suse.cz> wrote:
> > > On Thu 11-11-10 15:14:25, Dmitry Monakhov wrote:
> > > > The only reader which uses state_lock is dqget(), and it is serialized
> > > > with quota_disable() via SRCU, so state_lock doesn't guarantee anything
> > > > in that case. All methods which modify quota flags are already protected
> > > > by dqonoff_mutex. Get rid of the useless state_lock.
> > >   dq_state_lock has two properties you miss here:
> > > a) It guarantees we read a consistent value from a quota state variable.
> >   So what? It doesn't guarantee anything: quota may become disabled
> >   one nanosecond after the check.
> >   And with srcu_read_lock() we can guarantee that quota will be available
> >   for the caller, even in the case of a stale value.
>   Well, that's not completely my point. The point is that when we
> manipulate flags in a non-atomic way (and using &=, |=, ... is a
> non-atomic way), we are not really guaranteed what values a reader will see
> while the change happens. So in theory the reader could see that all quotas
> are disabled during a transition from usrquota enabled to usrquota+grpquota
> enabled (I admit a stupid example but you get the point).
  Looking into the code once more I should add that this has been a
possible problem even before your patch. So it's not a problem you
introduced. You just made it more visible ;).

								Honza

-- 
Jan Kara <jack@suse.cz>
SUSE Labs, CR


* Re: [PATCH 17/19] quota: Some stylistic cleanup for dquot interface
  2010-11-11 12:14 ` [PATCH 17/19] quota: Some stylistic cleanup for dquot interface Dmitry Monakhov
@ 2010-11-23 11:37   ` Jan Kara
  0 siblings, 0 replies; 32+ messages in thread
From: Jan Kara @ 2010-11-23 11:37 UTC (permalink / raw)
  To: Dmitry Monakhov; +Cc: linux-fsdevel, jack, hch

On Thu 11-11-10 15:14:36, Dmitry Monakhov wrote:
> This patch performs only stylistic cleanup. No changes in logic at all.
> - Rename dqget() to find_get_dquot()
  This seems like a pointless exercise to me...

> - Wrap direct dq_count increment to helper function
  This makes sense, so if you need a name for it, why not dqgrab() - along
the lines of how iget() vs igrab() work?

> Some places still access dq_count directly, but this is because of the
> reference-counting algorithm. It will be changed in later patches.

									Honza
> 
> Signed-off-by: Dmitry Monakhov <dmonakhov@openvz.org>
> ---
>  fs/ocfs2/file.c          |    8 ++++----
>  fs/ocfs2/quota_global.c  |    2 +-
>  fs/ocfs2/quota_local.c   |    3 ++-
>  fs/quota/dquot.c         |   42 ++++++++++++++++++++++++++----------------
>  include/linux/quotaops.h |    3 ++-
>  5 files changed, 35 insertions(+), 23 deletions(-)
> 
> diff --git a/fs/ocfs2/file.c b/fs/ocfs2/file.c
> index 9a03c15..b7e7c9b 100644
> --- a/fs/ocfs2/file.c
> +++ b/fs/ocfs2/file.c
> @@ -1205,8 +1205,8 @@ int ocfs2_setattr(struct dentry *dentry, struct iattr *attr)
>  		if (attr->ia_valid & ATTR_UID && attr->ia_uid != inode->i_uid
>  		    && OCFS2_HAS_RO_COMPAT_FEATURE(sb,
>  		    OCFS2_FEATURE_RO_COMPAT_USRQUOTA)) {
> -			transfer_to[USRQUOTA] = dqget(sb, attr->ia_uid,
> -						      USRQUOTA);
> +			transfer_to[USRQUOTA] =
> +				find_get_dquot(sb, attr->ia_uid, USRQUOTA);
>  			if (!transfer_to[USRQUOTA]) {
>  				status = -ESRCH;
>  				goto bail_unlock;
> @@ -1215,8 +1215,8 @@ int ocfs2_setattr(struct dentry *dentry, struct iattr *attr)
>  		if (attr->ia_valid & ATTR_GID && attr->ia_gid != inode->i_gid
>  		    && OCFS2_HAS_RO_COMPAT_FEATURE(sb,
>  		    OCFS2_FEATURE_RO_COMPAT_GRPQUOTA)) {
> -			transfer_to[GRPQUOTA] = dqget(sb, attr->ia_gid,
> -						      GRPQUOTA);
> +			transfer_to[GRPQUOTA] =
> +				find_get_dquot(sb, attr->ia_gid, GRPQUOTA);
>  			if (!transfer_to[GRPQUOTA]) {
>  				status = -ESRCH;
>  				goto bail_unlock;
> diff --git a/fs/ocfs2/quota_global.c b/fs/ocfs2/quota_global.c
> index 3d2841c..cdf2a23 100644
> --- a/fs/ocfs2/quota_global.c
> +++ b/fs/ocfs2/quota_global.c
> @@ -692,7 +692,7 @@ static int ocfs2_release_dquot(struct dquot *dquot)
>  	mlog_entry("id=%u, type=%d", dquot->dq_id, dquot->dq_type);
>  
>  	mutex_lock(&dquot->dq_mutex);
> -	/* Check whether we are not racing with some other dqget() */
> +	/* Check whether we are not racing with some other find_get_dquot() */
>  	if (atomic_read(&dquot->dq_count) > 1)
>  		goto out;
>  	status = ocfs2_lock_global_qf(oinfo, 1);
> diff --git a/fs/ocfs2/quota_local.c b/fs/ocfs2/quota_local.c
> index 9e68ce5..6e5c7e9 100644
> --- a/fs/ocfs2/quota_local.c
> +++ b/fs/ocfs2/quota_local.c
> @@ -500,7 +500,8 @@ static int ocfs2_recover_local_quota_file(struct inode *lqinode,
>  			}
>  			dqblk = (struct ocfs2_local_disk_dqblk *)(qbh->b_data +
>  				ol_dqblk_block_off(sb, chunk, bit));
> -			dquot = dqget(sb, le64_to_cpu(dqblk->dqb_id), type);
> +			dquot = find_get_dquot(sb, le64_to_cpu(dqblk->dqb_id),
> +					type);
>  			if (!dquot) {
>  				status = -EIO;
>  				mlog(ML_ERROR, "Failed to get quota structure "
> diff --git a/fs/quota/dquot.c b/fs/quota/dquot.c
> index 33dc32e..af3413e 100644
> --- a/fs/quota/dquot.c
> +++ b/fs/quota/dquot.c
> @@ -16,7 +16,8 @@
>   *		Revised list management to avoid races
>   *		-- Bill Hawes, <whawes@star.net>, 9/98
>   *
> - *		Fixed races in dquot_transfer(), dqget() and dquot_alloc_...().
> + *		Fixed races in dquot_transfer(), find_get_dquot() and
> + *		dquot_alloc_...().
>   *		As the consequence the locking was moved from dquot_decr_...(),
>   *		dquot_incr_...() to calling functions.
>   *		invalidate_dquots() now writes modified dquots.
> @@ -109,8 +110,9 @@
>   * Each dquot has its dq_mutex mutex. Locked dquots might not be referenced
>   * from inodes (dquot_alloc_space() and such don't check the dq_mutex).
>   * Currently dquot is locked only when it is being read to memory (or space for
> - * it is being allocated) on the first dqget() and when it is being released on
> - * the last dqput(). The allocation and release oparations are serialized by
> + * it is being allocated) on the first find_get_dquot() and when it is being
> + * released on the last dqput().
> + * The allocation and release oparations are serialized by
>   * the dq_mutex and by checking the use count in dquot_release().  Write
>   * operations on dquots don't hold dq_mutex as they copy data under dq_data_lock
>   * spinlock to internal buffers before writing.
> @@ -522,7 +524,7 @@ int dquot_release(struct dquot *dquot)
>  	struct quota_info *dqopt = sb_dqopts(dquot);
>  
>  	mutex_lock(&dquot->dq_mutex);
> -	/* Check whether we are not racing with some other dqget() */
> +	/* Check whether we are not racing with some other find_get_dquot() */
>  	if (atomic_read(&dquot->dq_count) > 1)
>  		goto out_dqlock;
>  	mutex_lock(&dqopt->dqio_mutex);
> @@ -577,7 +579,7 @@ restart:
>  		if (atomic_read(&dquot->dq_count)) {
>  			DEFINE_WAIT(wait);
>  
> -			atomic_inc(&dquot->dq_count);
> +			dqget(dquot);
>  			prepare_to_wait(&dquot->dq_wait_unused, &wait,
>  					TASK_UNINTERRUPTIBLE);
>  			spin_unlock(&dqopt->dq_list_lock);
> @@ -627,7 +629,7 @@ int dquot_scan_active(struct super_block *sb,
>  		if (dquot->dq_sb != sb)
>  			continue;
>  		/* Now we have active dquot so we can just increase use count */
> -		atomic_inc(&dquot->dq_count);
> +		dqget(dquot);
>  		spin_unlock(&dqopt->dq_list_lock);
>  		dqstats_inc(DQST_LOOKUPS);
>  		dqput(old_dquot);
> @@ -674,7 +676,7 @@ int dquot_quota_sync(struct super_block *sb, int type, int wait)
>  			/* Now we have active dquot from which someone is
>   			 * holding reference so we can safely just increase
>  			 * use count */
> -			atomic_inc(&dquot->dq_count);
> +			dqget(dquot);
>  			spin_unlock(&dqopt->dq_list_lock);
>  			dqstats_inc(DQST_LOOKUPS);
>  			dqctl(sb)->dq_op->write_dquot(dquot);
> @@ -869,6 +871,11 @@ static struct dquot *get_empty_dquot(struct super_block *sb, int type)
>  	return dquot;
>  }
>  
> +inline void dqget(struct dquot *dquot)
> +{
> +	atomic_inc(&dquot->dq_count);
> +}
> +
>  /*
>   * Get reference to dquot
>   *
> @@ -877,7 +884,7 @@ static struct dquot *get_empty_dquot(struct super_block *sb, int type)
>   *   a) checking for quota flags under dq_list_lock and
>   *   b) getting a reference to dquot before we release dq_list_lock
>   */
> -struct dquot *dqget(struct super_block *sb, unsigned int id, int type)
> +struct dquot *find_get_dquot(struct super_block *sb, unsigned int id, int type)
>  {
>  	struct hlist_bl_head * blh = dquot_hash + hashfn(sb, id, type);
>  	struct dquot *dquot = NULL, *empty = NULL;
> @@ -922,7 +929,7 @@ we_slept:
>  	} else {
>  		if (!atomic_read(&dquot->dq_count))
>  			remove_free_dquot(dquot);
> -		atomic_inc(&dquot->dq_count);
> +		dqget(dquot);
>  		hlist_bl_unlock(blh);
>  		spin_unlock(&dqopt->dq_list_lock);
>  		dqstats_inc(DQST_CACHE_HITS);
> @@ -948,7 +955,7 @@ out:
>  
>  	return dquot;
>  }
> -EXPORT_SYMBOL(dqget);
> +EXPORT_SYMBOL(find_get_dquot);
>  
>  static int dqinit_needed(struct inode *inode, int type)
>  {
> @@ -1427,7 +1434,7 @@ static int dquot_active(const struct inode *inode)
>   * Initialize quota pointers in inode
>   *
>   * We do things in a bit complicated way but by that we avoid calling
> - * dqget() and thus filesystem callbacks under dqptr_sem.
> + * find_get_dquot() and thus filesystem callbacks under dqptr_sem.
>   *
>   * It is better to call this function outside of any transaction as it
>   * might need a lot of space in journal for dquot structure allocation.
> @@ -1462,7 +1469,7 @@ static void __dquot_initialize(struct inode *inode, int type)
>  			id = inode->i_gid;
>  			break;
>  		}
> -		got[cnt] = dqget(sb, id, cnt);
> +		got[cnt] = find_get_dquot(sb, id, cnt);
>  	}
>  
>  	down_write(&dqopts(sb)->dqptr_sem);
> @@ -1991,9 +1998,12 @@ int dquot_transfer(struct inode *inode, struct iattr *iattr)
>  		return 0;
>  
>  	if (iattr->ia_valid & ATTR_UID && iattr->ia_uid != inode->i_uid)
> -		transfer_to[USRQUOTA] = dqget(sb, iattr->ia_uid, USRQUOTA);
> +		transfer_to[USRQUOTA] = find_get_dquot(sb, iattr->ia_uid,
> +						USRQUOTA);
> +
>  	if (iattr->ia_valid & ATTR_GID && iattr->ia_gid != inode->i_gid)
> -		transfer_to[GRPQUOTA] = dqget(sb, iattr->ia_gid, GRPQUOTA);
> +		transfer_to[GRPQUOTA] = find_get_dquot(sb, iattr->ia_gid,
> +						GRPQUOTA);
>  
>  	ret = __dquot_transfer(inode, transfer_to);
>  	dqput_all(transfer_to);
> @@ -2547,7 +2557,7 @@ int dquot_get_dqblk(struct super_block *sb, int type, qid_t id,
>  {
>  	struct dquot *dquot;
>  
> -	dquot = dqget(sb, id, type);
> +	dquot = find_get_dquot(sb, id, type);
>  	if (!dquot)
>  		return -ESRCH;
>  	do_get_dqblk(dquot, di);
> @@ -2660,7 +2670,7 @@ int dquot_set_dqblk(struct super_block *sb, int type, qid_t id,
>  	struct dquot *dquot;
>  	int rc;
>  
> -	dquot = dqget(sb, id, type);
> +	dquot = find_get_dquot(sb, id, type);
>  	if (!dquot) {
>  		rc = -ESRCH;
>  		goto out;
> diff --git a/include/linux/quotaops.h b/include/linux/quotaops.h
> index 68ceef5..93e39c6 100644
> --- a/include/linux/quotaops.h
> +++ b/include/linux/quotaops.h
> @@ -52,7 +52,8 @@ void inode_sub_rsv_space(struct inode *inode, qsize_t number);
>  
>  void dquot_initialize(struct inode *inode);
>  void dquot_drop(struct inode *inode);
> -struct dquot *dqget(struct super_block *sb, unsigned int id, int type);
> +struct dquot *find_get_dquot(struct super_block *sb, unsigned int id, int type);
> +void dqget(struct dquot *dquot);
>  void dqput(struct dquot *dquot);
>  int dquot_scan_active(struct super_block *sb,
>  		      int (*fn)(struct dquot *dquot, unsigned long priv),
> -- 
> 1.6.5.2
> 
-- 
Jan Kara <jack@suse.cz>
SUSE Labs, CR


* Re: [PATCH 01/19] quota: protect getfmt call with dqonoff_mutex lock
  2010-11-22 19:35   ` Jan Kara
@ 2010-12-02 11:40     ` Dmitry
  0 siblings, 0 replies; 32+ messages in thread
From: Dmitry @ 2010-12-02 11:40 UTC (permalink / raw)
  To: Jan Kara; +Cc: linux-fsdevel, jack, hch

On Mon, 22 Nov 2010 20:35:02 +0100, Jan Kara <jack@suse.cz> wrote:
> On Thu 11-11-10 15:14:20, Dmitry Monakhov wrote:
> > dqptr_sem hasn't anything in common with quota files; quota file loading
> > is protected by dqonoff_mutex, so we have to use it for reading fmt info.
>   You missed ext2 and jfs from the conversion. Also you should mention that
Hm... I've rechecked that and it appears you are not right here:
ext2 and jfs use the default quota operations, dquot_quotactl_ops,
and this patch adds the ->get_fmt call to the default ops.
> you introduce a new ->get_fmt call. Other than that, just one nitpick:
> 
> > diff --git a/include/linux/quota.h b/include/linux/quota.h
> > index 9a85412..2767e4c 100644
> > --- a/include/linux/quota.h
> > +++ b/include/linux/quota.h
> > @@ -331,6 +331,7 @@ struct quotactl_ops {
> >  	int (*quota_off)(struct super_block *, int);
> >  	int (*quota_sync)(struct super_block *, int, int);
> >  	int (*get_info)(struct super_block *, int, struct if_dqinfo *);
> > +	int (*get_fmt)(struct super_block*, int, unsigned int*);
>                                                              ^ space before *
> 
> 									Honza
> -- 
> Jan Kara <jack@suse.cz>
> SUSE Labs, CR


Thread overview: 32+ messages
2010-11-11 12:14 [PATCH 00/19] quota: RFC SMP improvements for generic quota V3 Dmitry Monakhov
2010-11-11 12:14 ` [PATCH 01/19] quota: protect getfmt call with dqonoff_mutex lock Dmitry Monakhov
2010-11-11 13:36   ` Christoph Hellwig
2010-11-22 19:35   ` Jan Kara
2010-12-02 11:40     ` Dmitry
2010-11-11 12:14 ` [PATCH 02/19] kernel: add bl_list Dmitry Monakhov
2010-11-11 12:14 ` [PATCH 03/19] quota: Wrap common expression to helper function Dmitry Monakhov
2010-11-11 12:14 ` [PATCH 04/19] quota: protect dqget() from parallels quotaoff via SRCU Dmitry Monakhov
2010-11-22 21:21   ` Jan Kara
2010-11-22 21:53     ` Dmitry
2010-11-11 12:14 ` [PATCH 05/19] quota: mode quota internals from sb to quota_info Dmitry Monakhov
2010-11-11 12:14 ` [PATCH 06/19] quota: Remove state_lock Dmitry Monakhov
2010-11-22 21:12   ` Jan Kara
2010-11-22 21:31     ` Dmitry
2010-11-23 10:55       ` Jan Kara
2010-11-23 11:33         ` Jan Kara
2010-11-11 12:14 ` [PATCH 07/19] quota: add quota format lock Dmitry Monakhov
2010-11-11 12:14 ` [PATCH 08/19] quota: make dquot lists per-sb Dmitry Monakhov
2010-11-22 21:37   ` Jan Kara
2010-11-11 12:14 ` [PATCH 09/19] quota: optimize quota_initialize Dmitry Monakhov
2010-11-11 12:14 ` [PATCH 10/19] quota: user per-backet hlist lock for dquot_hash Dmitry Monakhov
2010-11-11 12:14 ` [PATCH 11/19] quota: remove global dq_list_lock Dmitry Monakhov
2010-11-11 12:14 ` [PATCH 12/19] quota: rename dq_lock Dmitry Monakhov
2010-11-11 12:14 ` [PATCH 13/19] quota: make per-sb dq_data_lock Dmitry Monakhov
2010-11-11 12:14 ` [PATCH 14/19] quota: protect dquot mem info with object's lock Dmitry Monakhov
2010-11-11 12:14 ` [PATCH 15/19] quota: drop dq_data_lock where possible Dmitry Monakhov
2010-11-11 12:14 ` [PATCH 16/19] quota: relax dq_data_lock dq_lock locking consistency Dmitry Monakhov
2010-11-11 12:14 ` [PATCH 17/19] quota: Some stylistic cleanup for dquot interface Dmitry Monakhov
2010-11-23 11:37   ` Jan Kara
2010-11-11 12:14 ` [PATCH 18/19] fs: add unlocked helpers Dmitry Monakhov
2010-11-11 12:14 ` [PATCH 19/19] quota: protect i_dquot with i_lock instead of dqptr_sem Dmitry Monakhov
2010-11-19  5:44 ` [PATCH 00/19] quota: RFC SMP improvements for generic quota V3 Dmitry
