* [PATCH 0/6] fixes for cephfs
@ 2012-11-19  2:49 Yan, Zheng
  2012-11-19  2:49 ` [PATCH 1/6] ceph: Don't update i_max_size when handling non-auth cap Yan, Zheng
                   ` (5 more replies)
  0 siblings, 6 replies; 7+ messages in thread
From: Yan, Zheng @ 2012-11-19  2:49 UTC (permalink / raw)
  To: ceph-devel, sage

Hi,

These patches fix cephfs kernel client bugs that show up when
running two MDSes with thrash_exports enabled.

These patches are also in:
  git://github.com/ukernel/ceph-client.git master

Regards
Yan, Zheng



* [PATCH 1/6] ceph: Don't update i_max_size when handling non-auth cap
  2012-11-19  2:49 [PATCH 0/6] fixes for cephfs Yan, Zheng
@ 2012-11-19  2:49 ` Yan, Zheng
  2012-11-19  2:49 ` [PATCH 2/6] ceph: Hold caps_list_lock when adjusting caps_{use,total}_count Yan, Zheng
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 7+ messages in thread
From: Yan, Zheng @ 2012-11-19  2:49 UTC (permalink / raw)
  To: ceph-devel, sage; +Cc: Yan, Zheng

From: "Yan, Zheng" <zheng.z.yan@intel.com>

The cap from a non-auth MDS doesn't have a meaningful max_size value.

Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
---
 fs/ceph/caps.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/ceph/caps.c b/fs/ceph/caps.c
index 3251e9c..c633d1d 100644
--- a/fs/ceph/caps.c
+++ b/fs/ceph/caps.c
@@ -2388,7 +2388,7 @@ static void handle_cap_grant(struct inode *inode, struct ceph_mds_caps *grant,
 			    &atime);
 
 	/* max size increase? */
-	if (max_size != ci->i_max_size) {
+	if (ci->i_auth_cap == cap && max_size != ci->i_max_size) {
 		dout("max_size %lld -> %llu\n", ci->i_max_size, max_size);
 		ci->i_max_size = max_size;
 		if (max_size >= ci->i_wanted_max_size) {
-- 
1.7.11.7
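
For illustration (a hypothetical helper, not part of the patch): only
the auth MDS tracks a meaningful max_size for an inode, so a grant
arriving via a non-auth cap may carry a zero or stale value that must
not clobber what the auth MDS granted. A sketch of the rule the
one-line fix enforces:

/* Sketch only; caller holds ci->i_ceph_lock, as in handle_cap_grant(). */
static void maybe_update_max_size(struct ceph_inode_info *ci,
				  struct ceph_cap *cap, u64 max_size)
{
	/* A non-auth grant could otherwise overwrite a valid
	 * i_max_size with 0 and could stall writers waiting for
	 * a size increase. */
	if (ci->i_auth_cap != cap)
		return;
	if (max_size != ci->i_max_size)
		ci->i_max_size = max_size;
}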



* [PATCH 2/6] ceph: Hold caps_list_lock when adjusting caps_{use,total}_count
  2012-11-19  2:49 [PATCH 0/6] fixes for cephfs Yan, Zheng
  2012-11-19  2:49 ` [PATCH 1/6] ceph: Don't update i_max_size when handling non-auth cap Yan, Zheng
@ 2012-11-19  2:49 ` Yan, Zheng
  2012-11-19  2:49 ` [PATCH 3/6] ceph: Fix infinite loop in __wake_requests Yan, Zheng
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 7+ messages in thread
From: Yan, Zheng @ 2012-11-19  2:49 UTC (permalink / raw)
  To: ceph-devel, sage; +Cc: Yan, Zheng

From: "Yan, Zheng" <zheng.z.yan@intel.com>

Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
---
 fs/ceph/caps.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/fs/ceph/caps.c b/fs/ceph/caps.c
index c633d1d..8072aef 100644
--- a/fs/ceph/caps.c
+++ b/fs/ceph/caps.c
@@ -236,8 +236,10 @@ static struct ceph_cap *get_cap(struct ceph_mds_client *mdsc,
 	if (!ctx) {
 		cap = kmem_cache_alloc(ceph_cap_cachep, GFP_NOFS);
 		if (cap) {
+			spin_lock(&mdsc->caps_list_lock);
 			mdsc->caps_use_count++;
 			mdsc->caps_total_count++;
+			spin_unlock(&mdsc->caps_list_lock);
 		}
 		return cap;
 	}
-- 
1.7.11.7
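
The bug here is an unlocked read-modify-write: the ++ operator is not
atomic, so two tasks allocating caps concurrently can both load the
old counter value and one increment is lost, skewing the counters for
good. A self-contained sketch of the pattern (hypothetical names):

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(counters_lock);
static int use_count, total_count;

/* All updates go through the lock, so no increment can be lost. */
static void account_new_cap(void)
{
	spin_lock(&counters_lock);
	use_count++;
	total_count++;
	spin_unlock(&counters_lock);
}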



* [PATCH 3/6] ceph: Fix infinite loop in __wake_requests
  2012-11-19  2:49 [PATCH 0/6] fixes for cephfs Yan, Zheng
  2012-11-19  2:49 ` [PATCH 1/6] ceph: Don't update i_max_size when handling non-auth cap Yan, Zheng
  2012-11-19  2:49 ` [PATCH 2/6] ceph: Hold caps_list_lock when adjusting caps_{use,total}_count Yan, Zheng
@ 2012-11-19  2:49 ` Yan, Zheng
  2012-11-19  2:49 ` [PATCH 4/6] ceph: Don't add dirty inode to dirty list if caps are in migration Yan, Zheng
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 7+ messages in thread
From: Yan, Zheng @ 2012-11-19  2:49 UTC (permalink / raw)
  To: ceph-devel, sage; +Cc: Yan, Zheng

From: "Yan, Zheng" <zheng.z.yan@intel.com>

__wake_requests() enters an infinite loop if we use it to wake
requests on the session->s_waiting list: __wake_requests() deletes
requests from the list while __do_request() adds them back, so the
walk never terminates.

Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
---
 fs/ceph/mds_client.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
index 1bcf712..0d9864f 100644
--- a/fs/ceph/mds_client.c
+++ b/fs/ceph/mds_client.c
@@ -1876,9 +1876,14 @@ finish:
 static void __wake_requests(struct ceph_mds_client *mdsc,
 			    struct list_head *head)
 {
-	struct ceph_mds_request *req, *nreq;
+	struct ceph_mds_request *req;
+	LIST_HEAD(tmp_list);
+
+	list_splice_init(head, &tmp_list);
 
-	list_for_each_entry_safe(req, nreq, head, r_wait) {
+	while (!list_empty(&tmp_list)) {
+		req = list_entry(tmp_list.next,
+				 struct ceph_mds_request, r_wait);
 		list_del_init(&req->r_wait);
 		__do_request(mdsc, req);
 	}
-- 
1.7.11.7
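
The fix applies the standard splice-to-private-list idiom: move the
whole list onto a local head first, so any request that __do_request()
re-queues lands on the original (now empty) head instead of the list
being walked. A self-contained sketch with hypothetical names:

#include <linux/list.h>

struct item {
	struct list_head node;
};

void process_item(struct item *it);	/* may re-queue onto @head */

/* Drain @head safely even if process_item() re-adds entries to it. */
static void drain(struct list_head *head)
{
	LIST_HEAD(tmp);

	list_splice_init(head, &tmp);
	while (!list_empty(&tmp)) {
		struct item *it = list_first_entry(&tmp, struct item, node);

		list_del_init(&it->node);
		process_item(it);	/* re-adds go to @head, not @tmp */
	}
}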



* [PATCH 4/6] ceph: Don't add dirty inode to dirty list if caps are in migration
  2012-11-19  2:49 [PATCH 0/6] fixes for cephfs Yan, Zheng
                   ` (2 preceding siblings ...)
  2012-11-19  2:49 ` [PATCH 3/6] ceph: Fix infinite loop in __wake_requests Yan, Zheng
@ 2012-11-19  2:49 ` Yan, Zheng
  2012-11-19  2:49 ` [PATCH 5/6] ceph: Fix __ceph_do_pending_vmtruncate Yan, Zheng
  2012-11-19  2:49 ` [PATCH 6/6] ceph: call handle_cap_grant() for cap import message Yan, Zheng
  5 siblings, 0 replies; 7+ messages in thread
From: Yan, Zheng @ 2012-11-19  2:49 UTC (permalink / raw)
  To: ceph-devel, sage; +Cc: Yan, Zheng

From: "Yan, Zheng" <zheng.z.yan@intel.com>

Add the dirty inode to the cap_dirty_migrating list instead; this
avoids ceph_flush_dirty_caps() entering an infinite loop.

Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
---
 fs/ceph/caps.c | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/fs/ceph/caps.c b/fs/ceph/caps.c
index 8072aef..5efa3f5 100644
--- a/fs/ceph/caps.c
+++ b/fs/ceph/caps.c
@@ -1351,11 +1351,15 @@ int __ceph_mark_dirty_caps(struct ceph_inode_info *ci, int mask)
 		if (!ci->i_head_snapc)
 			ci->i_head_snapc = ceph_get_snap_context(
 				ci->i_snap_realm->cached_context);
-		dout(" inode %p now dirty snapc %p\n", &ci->vfs_inode,
-			ci->i_head_snapc);
+		dout(" inode %p now dirty snapc %p auth cap %p\n",
+		     &ci->vfs_inode, ci->i_head_snapc, ci->i_auth_cap);
 		BUG_ON(!list_empty(&ci->i_dirty_item));
 		spin_lock(&mdsc->cap_dirty_lock);
-		list_add(&ci->i_dirty_item, &mdsc->cap_dirty);
+		if (ci->i_auth_cap)
+			list_add(&ci->i_dirty_item, &mdsc->cap_dirty);
+		else
+			list_add(&ci->i_dirty_item,
+				 &mdsc->cap_dirty_migrating);
 		spin_unlock(&mdsc->cap_dirty_lock);
 		if (ci->i_flushing_caps == 0) {
 			ihold(inode);
-- 
1.7.11.7
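
Why a second list helps: ceph_flush_dirty_caps() drains cap_dirty by
repeatedly taking the first entry, but an inode whose auth cap is
mid-migration cannot be flushed yet, so it would keep the walker from
making progress. Parking it on cap_dirty_migrating at mark-dirty time
(as this patch does) keeps it off the list being drained. A simplified,
hypothetical sketch of the live-lock and its cure:

#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/types.h>

struct dirty_item {
	struct list_head node;
	bool flushable;			/* false while caps migrate */
};

static void drain_dirty(spinlock_t *lock, struct list_head *dirty,
			struct list_head *parked)
{
	spin_lock(lock);
	while (!list_empty(dirty)) {
		struct dirty_item *it =
			list_first_entry(dirty, struct dirty_item, node);

		if (!it->flushable) {
			/* Leaving it on @dirty would spin forever. */
			list_move_tail(&it->node, parked);
			continue;
		}
		list_del_init(&it->node);
		/* ... flush the entry here ... */
	}
	spin_unlock(lock);
}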



* [PATCH 5/6] ceph: Fix __ceph_do_pending_vmtruncate
  2012-11-19  2:49 [PATCH 0/6] fixes for cephfs Yan, Zheng
                   ` (3 preceding siblings ...)
  2012-11-19  2:49 ` [PATCH 4/6] ceph: Don't add dirty inode to dirty list if caps are in migration Yan, Zheng
@ 2012-11-19  2:49 ` Yan, Zheng
  2012-11-19  2:49 ` [PATCH 6/6] ceph: call handle_cap_grant() for cap import message Yan, Zheng
  5 siblings, 0 replies; 7+ messages in thread
From: Yan, Zheng @ 2012-11-19  2:49 UTC (permalink / raw)
  To: ceph-devel, sage; +Cc: Yan, Zheng

From: "Yan, Zheng" <zheng.z.yan@intel.com>

We should set i_truncate_pending to 0 only after the page cache has
been truncated to i_truncate_size.

Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
---
 fs/ceph/inode.c | 15 +++++++++------
 1 file changed, 9 insertions(+), 6 deletions(-)

diff --git a/fs/ceph/inode.c b/fs/ceph/inode.c
index 4b5762e..81613bc 100644
--- a/fs/ceph/inode.c
+++ b/fs/ceph/inode.c
@@ -1466,7 +1466,7 @@ void __ceph_do_pending_vmtruncate(struct inode *inode)
 {
 	struct ceph_inode_info *ci = ceph_inode(inode);
 	u64 to;
-	int wrbuffer_refs, wake = 0;
+	int wrbuffer_refs, finish = 0;
 
 retry:
 	spin_lock(&ci->i_ceph_lock);
@@ -1498,15 +1498,18 @@ retry:
 	truncate_inode_pages(inode->i_mapping, to);
 
 	spin_lock(&ci->i_ceph_lock);
-	ci->i_truncate_pending--;
-	if (ci->i_truncate_pending == 0)
-		wake = 1;
+	if (to == ci->i_truncate_size) {
+		ci->i_truncate_pending = 0;
+		finish = 1;
+	}
 	spin_unlock(&ci->i_ceph_lock);
+	if (!finish)
+		goto retry;
 
 	if (wrbuffer_refs == 0)
 		ceph_check_caps(ci, CHECK_CAPS_AUTHONLY, NULL);
-	if (wake)
-		wake_up_all(&ci->i_cap_wq);
+
+	wake_up_all(&ci->i_cap_wq);
 }
 
 
-- 
1.7.11.7
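
The rework turns the function into a check-after-unlock retry loop:
completion is declared only if the size we just truncated to still
equals i_truncate_size; if another truncate raced in while i_ceph_lock
was dropped, we go around again. A standalone sketch of the pattern
(hypothetical names):

#include <linux/spinlock.h>
#include <linux/types.h>

/* Do work toward *target, which may change whenever @lock is dropped;
 * declare completion only if the finished work still matches. */
static void run_until_stable(spinlock_t *lock, u64 *target,
			     void (*work)(u64 to))
{
	u64 to;

retry:
	spin_lock(lock);
	to = *target;
	spin_unlock(lock);

	work(to);		/* e.g. truncate the page cache to @to */

	spin_lock(lock);
	if (to != *target) {
		spin_unlock(lock);
		goto retry;	/* target moved; our work is stale */
	}
	/* mark the pending work finished, e.g. i_truncate_pending = 0 */
	spin_unlock(lock);
}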



* [PATCH 6/6] ceph: call handle_cap_grant() for cap import message
  2012-11-19  2:49 [PATCH 0/6] fixes for cephfs Yan, Zheng
                   ` (4 preceding siblings ...)
  2012-11-19  2:49 ` [PATCH 5/6] ceph: Fix __ceph_do_pending_vmtruncate Yan, Zheng
@ 2012-11-19  2:49 ` Yan, Zheng
  5 siblings, 0 replies; 7+ messages in thread
From: Yan, Zheng @ 2012-11-19  2:49 UTC (permalink / raw)
  To: ceph-devel, sage; +Cc: Yan, Zheng

From: "Yan, Zheng" <zheng.z.yan@intel.com>

If the client sends a cap message requesting a new max size while its
caps are being exported, the exporting MDS drops the message quietly,
so the client may wait forever for the reply that updates the max
size. Calling handle_cap_grant() for the cap import message avoids
this issue.

Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
---
 fs/ceph/caps.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/fs/ceph/caps.c b/fs/ceph/caps.c
index 5efa3f5..a1d9bb3 100644
--- a/fs/ceph/caps.c
+++ b/fs/ceph/caps.c
@@ -2751,6 +2751,7 @@ static void handle_cap_import(struct ceph_mds_client *mdsc,
 
 	/* make sure we re-request max_size, if necessary */
 	spin_lock(&ci->i_ceph_lock);
+	ci->i_wanted_max_size = 0;  /* reset */
 	ci->i_requested_max_size = 0;
 	spin_unlock(&ci->i_ceph_lock);
 }
@@ -2846,8 +2847,6 @@ void ceph_handle_caps(struct ceph_mds_session *session,
 	case CEPH_CAP_OP_IMPORT:
 		handle_cap_import(mdsc, inode, h, session,
 				  snaptrace, snaptrace_len);
-		ceph_check_caps(ceph_inode(inode), 0, session);
-		goto done_unlocked;
 	}
 
 	/* the rest require a cap */
@@ -2864,6 +2863,7 @@ void ceph_handle_caps(struct ceph_mds_session *session,
 	switch (op) {
 	case CEPH_CAP_OP_REVOKE:
 	case CEPH_CAP_OP_GRANT:
+	case CEPH_CAP_OP_IMPORT:
 		handle_cap_grant(inode, h, session, cap, msg->middle);
 		goto done_unlocked;
 
-- 
1.7.11.7
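
The control-flow change is a deliberate fall-through: handle_cap_import()
no longer jumps out early, so an import message continues into the common
grant path, and with i_wanted_max_size/i_requested_max_size zeroed the
client can re-issue its max_size request to the new auth MDS instead of
waiting on the old one. A much-simplified sketch of the resulting dispatch
shape (collapsed to a single switch, hypothetical names):

enum cap_op { OP_GRANT, OP_REVOKE, OP_IMPORT };

static void dispatch_cap_op(enum cap_op op)
{
	switch (op) {
	case OP_IMPORT:
		/* import-specific bookkeeping first ... */
		/* fall through: the import is then processed as a grant */
	case OP_REVOKE:
	case OP_GRANT:
		/* shared handle_cap_grant()-style processing */
		break;
	}
}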


