* GFS2: Pre-pull patch posting (fixes)
@ 2010-02-04  9:46 Steven Whitehouse
  0 siblings, 0 replies; 24+ messages in thread
From: Steven Whitehouse @ 2010-02-04  9:46 UTC (permalink / raw)
  To: linux-kernel, cluster-devel

Hi,

Here are a couple of patches which between them fix a problem where
occasionally it was possible for the GFS2 module to be unloaded
before all the glocks were deallocated, which, needless to say, made
the slab allocator unhappy,

Steve.


^ permalink raw reply	[flat|nested] 24+ messages in thread


* [PATCH 1/2] GFS2: Wait for unlock completion on umount
  2010-02-04  9:46 ` GFS2: Pre-pull patch posting (fixes) Steven Whitehouse
@ 2010-02-04  9:46   ` Steven Whitehouse
  0 siblings, 0 replies; 24+ messages in thread
From: Steven Whitehouse @ 2010-02-04  9:46 UTC (permalink / raw)
  To: linux-kernel, cluster-devel; +Cc: Steven Whitehouse

This patch adds a wait on umount between the point at which we
dispose of all glocks and the point at which we unmount the
lock protocol. This ensures that we've received all the replies
to our unlock requests before we stop the locking.

Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
Reported-by: Fabio M. Di Nitto <fdinitto@redhat.com>
---
 fs/gfs2/incore.h     |    2 ++
 fs/gfs2/lock_dlm.c   |    7 ++++++-
 fs/gfs2/ops_fstype.c |    2 ++
 fs/gfs2/super.c      |    3 +++
 4 files changed, 13 insertions(+), 1 deletions(-)

diff --git a/fs/gfs2/incore.h b/fs/gfs2/incore.h
index 4792200..bc0ad15 100644
--- a/fs/gfs2/incore.h
+++ b/fs/gfs2/incore.h
@@ -544,6 +544,8 @@ struct gfs2_sbd {
 	struct gfs2_holder sd_live_gh;
 	struct gfs2_glock *sd_rename_gl;
 	struct gfs2_glock *sd_trans_gl;
+	wait_queue_head_t sd_glock_wait;
+	atomic_t sd_glock_disposal;
 
 	/* Inode Stuff */
 
diff --git a/fs/gfs2/lock_dlm.c b/fs/gfs2/lock_dlm.c
index 46df988..cdd0755 100644
--- a/fs/gfs2/lock_dlm.c
+++ b/fs/gfs2/lock_dlm.c
@@ -21,6 +21,7 @@ static void gdlm_ast(void *arg)
 {
 	struct gfs2_glock *gl = arg;
 	unsigned ret = gl->gl_state;
+	struct gfs2_sbd *sdp = gl->gl_sbd;
 
 	BUG_ON(gl->gl_lksb.sb_flags & DLM_SBF_DEMOTED);
 
@@ -30,6 +31,8 @@ static void gdlm_ast(void *arg)
 	switch (gl->gl_lksb.sb_status) {
 	case -DLM_EUNLOCK: /* Unlocked, so glock can be freed */
 		kmem_cache_free(gfs2_glock_cachep, gl);
+		if (atomic_dec_and_test(&sdp->sd_glock_disposal))
+			wake_up(&sdp->sd_glock_wait);
 		return;
 	case -DLM_ECANCEL: /* Cancel while getting lock */
 		ret |= LM_OUT_CANCELED;
@@ -167,7 +170,8 @@ static unsigned int gdlm_lock(struct gfs2_glock *gl,
 static void gdlm_put_lock(struct kmem_cache *cachep, void *ptr)
 {
 	struct gfs2_glock *gl = ptr;
-	struct lm_lockstruct *ls = &gl->gl_sbd->sd_lockstruct;
+	struct gfs2_sbd *sdp = gl->gl_sbd;
+	struct lm_lockstruct *ls = &sdp->sd_lockstruct;
 	int error;
 
 	if (gl->gl_lksb.sb_lkid == 0) {
@@ -183,6 +187,7 @@ static void gdlm_put_lock(struct kmem_cache *cachep, void *ptr)
 		       (unsigned long long)gl->gl_name.ln_number, error);
 		return;
 	}
+	atomic_inc(&sdp->sd_glock_disposal);
 }
 
 static void gdlm_cancel(struct gfs2_glock *gl)
diff --git a/fs/gfs2/ops_fstype.c b/fs/gfs2/ops_fstype.c
index edfee24..9390fc7 100644
--- a/fs/gfs2/ops_fstype.c
+++ b/fs/gfs2/ops_fstype.c
@@ -82,6 +82,8 @@ static struct gfs2_sbd *init_sbd(struct super_block *sb)
 
 	gfs2_tune_init(&sdp->sd_tune);
 
+	init_waitqueue_head(&sdp->sd_glock_wait);
+	atomic_set(&sdp->sd_glock_disposal, 0);
 	spin_lock_init(&sdp->sd_statfs_spin);
 
 	spin_lock_init(&sdp->sd_rindex_spin);
diff --git a/fs/gfs2/super.c b/fs/gfs2/super.c
index c282ad4..66242b3 100644
--- a/fs/gfs2/super.c
+++ b/fs/gfs2/super.c
@@ -21,6 +21,7 @@
 #include <linux/gfs2_ondisk.h>
 #include <linux/crc32.h>
 #include <linux/time.h>
+#include <linux/wait.h>
 
 #include "gfs2.h"
 #include "incore.h"
@@ -860,6 +861,8 @@ restart:
 	gfs2_jindex_free(sdp);
 	/*  Take apart glock structures and buffer lists  */
 	gfs2_gl_hash_clear(sdp);
+	/* Wait for dlm to reply to all our unlock requests */
+	wait_event(sdp->sd_glock_wait, atomic_read(&sdp->sd_glock_disposal) == 0);
 	/*  Unmount the locking protocol  */
 	gfs2_lm_unmount(sdp);
 
-- 
1.6.2.5


^ permalink raw reply related	[flat|nested] 24+ messages in thread
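
[Editor's note: the patch above uses a common completion-counting pattern:
each outstanding asynchronous unlock request bumps a counter, the DLM
completion callback drops it, and umount sleeps until it reaches zero.
Below is a minimal userspace sketch of that pattern using pthreads; it is
illustrative only, not GFS2 code, and all names (disposal_count, put_lock,
unlock_done) are hypothetical.]

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t wait_q = PTHREAD_COND_INITIALIZER; /* ~ sd_glock_wait */
static int disposal_count;                          /* ~ sd_glock_disposal */

/* Analogue of gdlm_put_lock(): issue an unlock and count it as pending. */
static void put_lock(void)
{
	pthread_mutex_lock(&lock);
	disposal_count++;
	pthread_mutex_unlock(&lock);
}

/* Analogue of gdlm_ast() on -DLM_EUNLOCK: drop the count, wake waiters. */
static void *unlock_done(void *arg)
{
	(void)arg;
	usleep(1000);		/* pretend the DLM reply takes a while */
	pthread_mutex_lock(&lock);
	if (--disposal_count == 0)		/* ~ atomic_dec_and_test() */
		pthread_cond_broadcast(&wait_q);	/* ~ wake_up() */
	pthread_mutex_unlock(&lock);
	return NULL;
}

int main(void)
{
	pthread_t t[8];

	for (int i = 0; i < 8; i++) {
		put_lock();
		pthread_create(&t[i], NULL, unlock_done, NULL);
	}

	/* Analogue of the wait_event() added in super.c: do not "unmount"
	 * until every unlock reply has arrived. */
	pthread_mutex_lock(&lock);
	while (disposal_count != 0)
		pthread_cond_wait(&wait_q, &lock);
	pthread_mutex_unlock(&lock);
	printf("all unlock replies received; safe to unmount\n");

	for (int i = 0; i < 8; i++)
		pthread_join(t[i], NULL);
	return 0;
}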


* [PATCH 2/2] GFS2: Extend umount wait coverage to full glock lifetime
  2010-02-04  9:46   ` [PATCH 1/2] GFS2: Wait for unlock completion on umount Steven Whitehouse
@ 2010-02-04  9:46     ` Steven Whitehouse
  0 siblings, 0 replies; 24+ messages in thread
From: Steven Whitehouse @ 2010-02-04  9:46 UTC (permalink / raw)
  To: linux-kernel, cluster-devel; +Cc: Steven Whitehouse

Although all glocks are, by the time of the umount glock wait,
scheduled for demotion, some of them haven't made it far
enough through the process for the original set of waiting
code to wait for them.

This extends the ref count to the whole glock lifetime in order
to ensure that the waiting does catch all glocks. It does make
it a bit more invasive, but it seems the only sensible solution
at the moment.

Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
---
 fs/gfs2/glock.c      |    4 ++++
 fs/gfs2/glock.h      |    2 +-
 fs/gfs2/lock_dlm.c   |    6 +++---
 fs/gfs2/ops_fstype.c |   10 +++++++++-
 fs/gfs2/super.c      |    2 --
 5 files changed, 17 insertions(+), 7 deletions(-)

diff --git a/fs/gfs2/glock.c b/fs/gfs2/glock.c
index f455a03..f426633 100644
--- a/fs/gfs2/glock.c
+++ b/fs/gfs2/glock.c
@@ -769,6 +769,7 @@ int gfs2_glock_get(struct gfs2_sbd *sdp, u64 number,
 	if (!gl)
 		return -ENOMEM;
 
+	atomic_inc(&sdp->sd_glock_disposal);
 	gl->gl_flags = 0;
 	gl->gl_name = name;
 	atomic_set(&gl->gl_ref, 1);
@@ -1538,6 +1539,9 @@ void gfs2_gl_hash_clear(struct gfs2_sbd *sdp)
 		up_write(&gfs2_umount_flush_sem);
 		msleep(10);
 	}
+	flush_workqueue(glock_workqueue);
+	wait_event(sdp->sd_glock_wait, atomic_read(&sdp->sd_glock_disposal) == 0);
+	gfs2_dump_lockstate(sdp);
 }
 
 void gfs2_glock_finish_truncate(struct gfs2_inode *ip)
diff --git a/fs/gfs2/glock.h b/fs/gfs2/glock.h
index 13f0bd2..c0262fa 100644
--- a/fs/gfs2/glock.h
+++ b/fs/gfs2/glock.h
@@ -123,7 +123,7 @@ struct lm_lockops {
 	int (*lm_mount) (struct gfs2_sbd *sdp, const char *fsname);
  	void (*lm_unmount) (struct gfs2_sbd *sdp);
 	void (*lm_withdraw) (struct gfs2_sbd *sdp);
-	void (*lm_put_lock) (struct kmem_cache *cachep, void *gl);
+	void (*lm_put_lock) (struct kmem_cache *cachep, struct gfs2_glock *gl);
 	unsigned int (*lm_lock) (struct gfs2_glock *gl,
 				 unsigned int req_state, unsigned int flags);
 	void (*lm_cancel) (struct gfs2_glock *gl);
diff --git a/fs/gfs2/lock_dlm.c b/fs/gfs2/lock_dlm.c
index cdd0755..0e5e0e7 100644
--- a/fs/gfs2/lock_dlm.c
+++ b/fs/gfs2/lock_dlm.c
@@ -167,15 +167,16 @@ static unsigned int gdlm_lock(struct gfs2_glock *gl,
 	return LM_OUT_ASYNC;
 }
 
-static void gdlm_put_lock(struct kmem_cache *cachep, void *ptr)
+static void gdlm_put_lock(struct kmem_cache *cachep, struct gfs2_glock *gl)
 {
-	struct gfs2_glock *gl = ptr;
 	struct gfs2_sbd *sdp = gl->gl_sbd;
 	struct lm_lockstruct *ls = &sdp->sd_lockstruct;
 	int error;
 
 	if (gl->gl_lksb.sb_lkid == 0) {
 		kmem_cache_free(cachep, gl);
+		if (atomic_dec_and_test(&sdp->sd_glock_disposal))
+			wake_up(&sdp->sd_glock_wait);
 		return;
 	}
 
@@ -187,7 +188,6 @@ static void gdlm_put_lock(struct kmem_cache *cachep, void *ptr)
 		       (unsigned long long)gl->gl_name.ln_number, error);
 		return;
 	}
-	atomic_inc(&sdp->sd_glock_disposal);
 }
 
 static void gdlm_cancel(struct gfs2_glock *gl)
diff --git a/fs/gfs2/ops_fstype.c b/fs/gfs2/ops_fstype.c
index 9390fc7..8a102f7 100644
--- a/fs/gfs2/ops_fstype.c
+++ b/fs/gfs2/ops_fstype.c
@@ -985,9 +985,17 @@ static const match_table_t nolock_tokens = {
 	{ Opt_err, NULL },
 };
 
+static void nolock_put_lock(struct kmem_cache *cachep, struct gfs2_glock *gl)
+{
+	struct gfs2_sbd *sdp = gl->gl_sbd;
+	kmem_cache_free(cachep, gl);
+	if (atomic_dec_and_test(&sdp->sd_glock_disposal))
+		wake_up(&sdp->sd_glock_wait);
+}
+
 static const struct lm_lockops nolock_ops = {
 	.lm_proto_name = "lock_nolock",
-	.lm_put_lock = kmem_cache_free,
+	.lm_put_lock = nolock_put_lock,
 	.lm_tokens = &nolock_tokens,
 };
 
diff --git a/fs/gfs2/super.c b/fs/gfs2/super.c
index 66242b3..b9dd3da 100644
--- a/fs/gfs2/super.c
+++ b/fs/gfs2/super.c
@@ -861,8 +861,6 @@ restart:
 	gfs2_jindex_free(sdp);
 	/*  Take apart glock structures and buffer lists  */
 	gfs2_gl_hash_clear(sdp);
-	/* Wait for dlm to reply to all our unlock requests */
-	wait_event(sdp->sd_glock_wait, atomic_read(&sdp->sd_glock_disposal) == 0);
 	/*  Unmount the locking protocol  */
 	gfs2_lm_unmount(sdp);
 
-- 
1.6.2.5


^ permalink raw reply related	[flat|nested] 24+ messages in thread
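
[Editor's note: the difference between the two patches is where the count
is taken. The following minimal sketch (illustrative only, hypothetical
names, not GFS2 code) shows the scheme after patch 2: every glock is
counted from creation, as in gfs2_glock_get(), and only uncounted when it
is finally freed, so a glock that never issued an unlock request
(sb_lkid == 0) can no longer slip past the umount wait.]

#include <stdio.h>
#include <stdlib.h>

static int disposal_count;		/* ~ sd_glock_disposal */

struct glock {
	int id;
};

/* Patch 2 moves the increment here: a glock counts as pending disposal
 * for its whole lifetime, from the moment it is created. */
static struct glock *glock_create(int id)
{
	struct glock *gl = malloc(sizeof(*gl));

	if (!gl)
		return NULL;
	gl->id = id;
	disposal_count++;
	return gl;
}

/* ...and the decrement happens only when the glock is actually freed
 * (compare the kmem_cache_free() paths in gdlm_put_lock()/gdlm_ast()). */
static void glock_free(struct glock *gl)
{
	free(gl);
	if (--disposal_count == 0)
		printf("last glock gone; the umount wait can complete\n");
}

int main(void)
{
	/* Before patch 2, a glock freed without ever sending an unlock
	 * request was never counted, so umount could race past it. */
	struct glock *a = glock_create(1);
	struct glock *b = glock_create(2);

	glock_free(a);
	glock_free(b);
	return 0;
}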


* GFS2: Pre-pull patch posting (fixes)
@ 2014-09-15  9:32 Steven Whitehouse
  0 siblings, 0 replies; 24+ messages in thread
From: Steven Whitehouse @ 2014-09-15  9:32 UTC (permalink / raw)
  To: linux-kernel, cluster-devel

Hi,

Here are a number of small fixes for GFS2. There is a fix for FIEMAP
on large sparse files, a negative dentry hashing fix, a fix for
flock, and a bug fix relating to d_splice_alias usage. There are
also a couple of updates (patches 1 and 5) which are less
critical, but small and low risk.

Steve.


^ permalink raw reply	[flat|nested] 24+ messages in thread

* GFS2: Pre-pull patch posting (fixes)
@ 2014-07-18 10:37 Steven Whitehouse
  0 siblings, 0 replies; 24+ messages in thread
From: Steven Whitehouse @ 2014-07-18 10:37 UTC (permalink / raw)
  To: linux-kernel, cluster-devel

Hi,

Here are the current set of small fixes relating to GFS2.

This patch set contains two minor docs/spelling fixes, some fixes for
flock, a change to use GFP_NOFS to avoid recursion on a rarely used code
path, and a fix for a race relating to the glock LRU,

Steve.


^ permalink raw reply	[flat|nested] 24+ messages in thread

* GFS2: Pre-pull patch posting (fixes)
@ 2014-01-02 12:28 Steven Whitehouse
  0 siblings, 0 replies; 24+ messages in thread
From: Steven Whitehouse @ 2014-01-02 12:28 UTC (permalink / raw)
  To: linux-kernel, cluster-devel

Hi,

Here is a set of small fixes for GFS2. There is a fix to drop
s_umount which is copied in from the core VFS, and two patches
relating to a hard-to-hit use-after-free and a memory leak.
Two patches relate to using DIO and buffered I/O on the same
file to ensure correct operation in relation to glock state
changes. The final patch adds an RCU read lock to ensure
correct locking on an error path,

Steve.



^ permalink raw reply	[flat|nested] 24+ messages in thread

* GFS2: Pre-pull patch posting (fixes)
@ 2013-11-22 10:35 Steven Whitehouse
  0 siblings, 0 replies; 24+ messages in thread
From: Steven Whitehouse @ 2013-11-22 10:35 UTC (permalink / raw)
  To: linux-kernel, cluster-devel

Hi,

Here are a couple of very small, but important, fixes,

Steve.


^ permalink raw reply	[flat|nested] 24+ messages in thread

* GFS2: Pre-pull patch posting (fixes)
@ 2013-08-19  8:48 Steven Whitehouse
  0 siblings, 0 replies; 24+ messages in thread
From: Steven Whitehouse @ 2013-08-19  8:48 UTC (permalink / raw)
  To: linux-kernel, cluster-devel

Hi,

Out of these five patches, the one ensuring that the number of
revokes is not exceeded and the one checking that the glock is not
already held in gfs2_getxattr are the two most important. The latter
can be triggered by selinux.

The other three patches are very small and fix mostly trivial
issues,

Steve.


^ permalink raw reply	[flat|nested] 24+ messages in thread

* GFS2: Pre-pull patch posting (fixes)
@ 2013-06-04 13:40 Steven Whitehouse
  0 siblings, 0 replies; 24+ messages in thread
From: Steven Whitehouse @ 2013-06-04 13:40 UTC (permalink / raw)
  To: linux-kernel, cluster-devel

There are four patches this time. The first fixes a problem where the
wrong descriptor type was being written into the log for journaled data
blocks. The second fixes a race relating to the deallocation of allocator
data. The third provides a fallback if kmalloc is unable to satisfy a
request to allocate a directory hash table. The fourth fixes the iopen
glock caching so that inodes are deleted in a more timely manner after
rmdir/unlink,

Steve.



^ permalink raw reply	[flat|nested] 24+ messages in thread

* GFS2: Pre-pull patch posting (fixes)
@ 2013-05-24 13:37 Steven Whitehouse
  0 siblings, 0 replies; 24+ messages in thread
From: Steven Whitehouse @ 2013-05-24 13:37 UTC (permalink / raw)
  To: linux-kernel, cluster-devel

This time there are just four fixes. There are a couple of minor updates
to the quota code, a fix for Kconfig to ensure that only valid combinations
including GFS2 can be built, and a fix for a typo affecting end I/O
processing when writing the journal.

Also, there is a temporary fix for a performance regression relating to
block reservations and directories. A longer fix will be applied in
due course, but this deals with the most immediate problem for now,

Steve.



^ permalink raw reply	[flat|nested] 24+ messages in thread

* GFS2: Pre-pull patch posting (fixes)
@ 2012-11-07 10:15 Steven Whitehouse
  0 siblings, 0 replies; 24+ messages in thread
From: Steven Whitehouse @ 2012-11-07 10:15 UTC (permalink / raw)
  To: linux-kernel, cluster-devel

Hi,

Here are a number of GFS2 bug fixes. There are three from Andy Price
which fix various issues spotted by automated code analysis. There are two
from Lukas Czerner fixing my mistaken assumptions as to how FITRIM
should work. Finally, Ben Marzinski has fixed a bug relating to mmap and
atime, and also a bug relating to a locking issue in the transaction code.

Steve.


^ permalink raw reply	[flat|nested] 24+ messages in thread

* GFS2: Pre-pull patch posting (fixes)
@ 2012-09-13  9:42 Steven Whitehouse
  0 siblings, 0 replies; 24+ messages in thread
From: Steven Whitehouse @ 2012-09-13  9:42 UTC (permalink / raw)
  To: linux-kernel, cluster-devel

Hi,

Here are three GFS2 fixes for the current kernel tree. These are all
related to the block reservation code which was added at the merge
window. That code will be getting an update at the forthcoming merge
window too. In the meantime, though, there are a few smaller issues
which should be fixed.

The first patch resolves an issue with write sizes greater than
32 bits in the size hinting code. The second ensures that the
allocation data structure is initialised when using xattrs, and the
third takes into account allocations which may have been made by
other nodes and which affect a reservation on the local node,

Steve.


^ permalink raw reply	[flat|nested] 24+ messages in thread
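
[Editor's note: the "write sizes greater than 32 bits" issue mentioned
above is a classic truncation bug: a 64-bit length stored in a 32-bit
variable loses its upper bits for writes of 4GiB and over. The demo below
illustrates that class of bug only, not the actual GFS2 size hinting code,
and assumes an LP64 system where size_t is 64 bits wide.]

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	size_t len = 5ULL << 30;	/* a 5GiB write request */
	unsigned int hint32 = len;	/* 32-bit "size hint": truncates */
	uint64_t hint64 = len;		/* 64-bit hint keeps the value */

	printf("requested:   %llu bytes\n", (unsigned long long)len);
	printf("32-bit hint: %u bytes (upper bits lost)\n", hint32);
	printf("64-bit hint: %llu bytes\n", (unsigned long long)hint64);
	return 0;
}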

* GFS2: Pre-pull patch posting (fixes)
@ 2012-04-11  8:56 Steven Whitehouse
  0 siblings, 0 replies; 24+ messages in thread
From: Steven Whitehouse @ 2012-04-11  8:56 UTC (permalink / raw)
  To: linux-kernel, cluster-devel

Hi,

Here are four fixes for issues that have come up since the last
merge window,

Steve.


^ permalink raw reply	[flat|nested] 24+ messages in thread

* GFS2: Pre-pull patch posting (fixes)
@ 2012-02-28 11:11 Steven Whitehouse
  0 siblings, 0 replies; 24+ messages in thread
From: Steven Whitehouse @ 2012-02-28 11:11 UTC (permalink / raw)
  To: linux-kernel, cluster-devel

Hi,

Here are four patches which provide fixes for bugs found in the
current upstream code. Please see individual patches for
descriptions,

Steve.


^ permalink raw reply	[flat|nested] 24+ messages in thread

* GFS2: Pre-pull patch posting (fixes)
@ 2011-07-14  8:21 Steven Whitehouse
  0 siblings, 0 replies; 24+ messages in thread
From: Steven Whitehouse @ 2011-07-14  8:21 UTC (permalink / raw)
  To: linux-kernel, cluster-devel

Hi,

The following three fixes should help ensure that 3.0 is the best
release yet, GFS2-wise at least. I've had the first two queued for
some time waiting for the final one of this set. All three are relatively
short, although the third is a bit longer than the others, mainly
due to the extra comments added describing the inode eviction process,

Steve.


^ permalink raw reply	[flat|nested] 24+ messages in thread

* GFS2: Pre-pull patch posting (fixes)
@ 2011-05-23 12:39 Steven Whitehouse
  0 siblings, 0 replies; 24+ messages in thread
From: Steven Whitehouse @ 2011-05-23 12:39 UTC (permalink / raw)
  To: linux-kernel, cluster-devel

Hi,

Here are a couple of small fixes which just missed the previous pull
request. Both fairly short and self-explanatory,

Steve.


^ permalink raw reply	[flat|nested] 24+ messages in thread

* GFS2: Pre-pull patch posting (fixes)
@ 2011-04-19  8:40 Steven Whitehouse
  0 siblings, 0 replies; 24+ messages in thread
From: Steven Whitehouse @ 2011-04-19  8:40 UTC (permalink / raw)
  To: linux-kernel, cluster-devel

Hi,

Here are four fixes for GFS2. See the individual patches for the detailed
descriptions,

Steve.



^ permalink raw reply	[flat|nested] 24+ messages in thread

* GFS2: Pre-pull patch posting (fixes)
@ 2010-07-15 13:57 Steven Whitehouse
  0 siblings, 0 replies; 24+ messages in thread
From: Steven Whitehouse @ 2010-07-15 13:57 UTC (permalink / raw)
  To: linux-kernel, cluster-devel

Hi,

Here are a few small fixes for GFS2,

Steve.



^ permalink raw reply	[flat|nested] 24+ messages in thread

* GFS2: Pre-pull patch posting (fixes)
@ 2010-05-25  8:21 Steven Whitehouse
  0 siblings, 0 replies; 24+ messages in thread
From: Steven Whitehouse @ 2010-05-25  8:21 UTC (permalink / raw)
  To: cluster-devel, linux-kernel

Hi,

These are three important, but relatively small, bug fixes for GFS2.
The first prevents a kernel BUG from triggering in a relatively unlikely
(but possible) scenario when a log flush caused by glock demotion
races with a log flush from some other initiator (e.g. fsync).

The second and third patches add extra conditions to two areas
of code. This makes their behaviour match ext3 in those cases,

Steve.



^ permalink raw reply	[flat|nested] 24+ messages in thread

* GFS2: Pre-pull patch posting (fixes)
@ 2010-01-11  9:11 Steven Whitehouse
  0 siblings, 0 replies; 24+ messages in thread
From: Steven Whitehouse @ 2010-01-11  9:11 UTC (permalink / raw)
  To: linux-kernel, cluster-devel

Here are four small fixes for GFS2. Assuming that nobody spots
any errors, I'll be sending a pull request for these shortly,

Steve.


^ permalink raw reply	[flat|nested] 24+ messages in thread

* GFS2: Pre-pull patch posting (fixes)
@ 2009-07-30 13:45 Steven Whitehouse
  0 siblings, 0 replies; 24+ messages in thread
From: Steven Whitehouse @ 2009-07-30 13:45 UTC (permalink / raw)
  To: cluster-devel, linux-kernel

Hi,

Here is the current content of the GFS2 -fixes git tree. Nothing
very exciting this time... some fixes for issues we've
had relating to flushing glocks/memory usage, plus a couple of
other fixes relating to statfs and the timely removal of inodes
which have been unlinked on a remote node,

Steve.


^ permalink raw reply	[flat|nested] 24+ messages in thread

Thread overview: 24+ messages
2010-02-04  9:46 GFS2: Pre-pull patch posting (fixes) Steven Whitehouse
2010-02-04  9:46 ` [PATCH 1/2] GFS2: Wait for unlock completion on umount Steven Whitehouse
2010-02-04  9:46   ` [PATCH 2/2] GFS2: Extend umount wait coverage to full glock lifetime Steven Whitehouse
  -- strict thread matches above, loose matches on Subject: below --
2014-09-15  9:32 GFS2: Pre-pull patch posting (fixes) Steven Whitehouse
2014-07-18 10:37 Steven Whitehouse
2014-01-02 12:28 Steven Whitehouse
2013-11-22 10:35 Steven Whitehouse
2013-08-19  8:48 Steven Whitehouse
2013-06-04 13:40 Steven Whitehouse
2013-05-24 13:37 Steven Whitehouse
2012-11-07 10:15 Steven Whitehouse
2012-09-13  9:42 Steven Whitehouse
2012-04-11  8:56 Steven Whitehouse
2012-02-28 11:11 Steven Whitehouse
2011-07-14  8:21 Steven Whitehouse
2011-05-23 12:39 Steven Whitehouse
2011-04-19  8:40 Steven Whitehouse
2010-07-15 13:57 Steven Whitehouse
2010-05-25  8:21 Steven Whitehouse
2010-01-11  9:11 Steven Whitehouse
2009-07-30 13:45 Steven Whitehouse
