* [PATCH 0/6] misc fixes for mds
From: Yan, Zheng @ 2013-07-17  8:28 UTC (permalink / raw)
  To: ceph-devel; +Cc: sage, Yan, Zheng

From: "Yan, Zheng" <zheng.z.yan@intel.com>

These patches are also in:
  git://github.com/ukernel/ceph.git wip-mds

Regards
Yan, Zheng


* [PATCH 1/6] mds: fix cap revoke confirmation
From: Yan, Zheng @ 2013-07-17  8:28 UTC (permalink / raw)
  To: ceph-devel; +Cc: sage, Yan, Zheng

From: "Yan, Zheng" <zheng.z.yan@intel.com>

There are several issues in Capability::confirm_receipt():

1. Receiving a client caps message with 'seq == last_sent' doesn't
   mean we have finished revoking caps. The client can send a caps
   message that only flushes dirty metadata.

2. When receiving a client caps message with 'seq == N', we should
   forget pending revocations whose seq numbers are less than N.
   This is because, when revoking caps, we create a revoke_info
   structure and set its seq number to 'last_sent', then increase
   'last_sent' (illustrated by the sketch below).

3. When the client actively releases caps (by request), the code only
   handles the 'seq == last_sent' case. If there are pending
   revocations, we should update them as if the release message had
   been received before we revoked the corresponding caps.
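
To make the seq arithmetic in point 2 concrete, here is a minimal,
self-contained sketch (a toy model with invented names such as ToyCap;
it is not the real Capability class and it models only the pruning
rule, not the reconciliation of actively released caps):

  #include <cassert>
  #include <cstdint>
  #include <list>

  struct ToyCap {
    struct revoke_info { unsigned before; uint32_t seq; };

    unsigned pending = 0;            // caps the MDS wants the client to have
    uint32_t last_sent = 0;          // seq of the last cap message sent
    std::list<revoke_info> revokes;  // outstanding revocations

    // issuing fewer bits than currently pending means revoking some caps
    void issue(unsigned caps) {
      if (~caps & pending)
        revokes.push_back({pending, last_sent}); // seq recorded before increment
      pending = caps;
      ++last_sent;                               // the cap message carries this seq
    }

    // client acked the cap message with sequence number 'seq'
    void confirm_receipt(uint32_t seq) {
      while (!revokes.empty() && revokes.front().seq < seq)  // '<', not '<='
        revokes.pop_front();
    }
  };

  int main() {
    ToyCap c;
    c.issue(0x7);          // grant: message seq 1
    c.issue(0x3);          // revoke 0x4: record {before=0x7, seq=1}, message seq 2
    c.confirm_receipt(1);  // ack of the grant: record.seq(1) is not < 1, keep it
    assert(c.revokes.size() == 1);
    c.confirm_receipt(2);  // ack of the revoke: record.seq(1) < 2, forget it
    assert(c.revokes.empty());
    return 0;
  }

An ack carrying seq 1 can only cover messages up to seq 1, while the
revoke itself was carried by message seq 2; that is why a record stored
with seq N may only be dropped once a message with seq greater than N
is confirmed.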

Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
---
 src/mds/Capability.h | 44 ++++++++++++++++++++++++++++++++++----------
 1 file changed, 34 insertions(+), 10 deletions(-)

diff --git a/src/mds/Capability.h b/src/mds/Capability.h
index fdecb90..d56bdc9 100644
--- a/src/mds/Capability.h
+++ b/src/mds/Capability.h
@@ -142,13 +142,11 @@ public:
       _pending = c;
       _issued |= c;
     } else if (~_pending & c) {
-      // adding bits only.  remove obsolete revocations?
+      // note prior caps if there are pending revocations
+      if (!_revokes.empty())
+	_revokes.push_back(revoke_info(_pending, last_sent, last_issue));
       _pending |= c;
       _issued |= c;
-      // drop old _revokes with no bits we don't have
-      while (!_revokes.empty() &&
-	     (_revokes.back().before & ~_pending) == 0)
-	_revokes.pop_back();
     } else {
       // no change.
       assert(_pending == c);
@@ -169,16 +167,42 @@ public:
     for (list<revoke_info>::iterator p = _revokes.begin(); p != _revokes.end(); ++p)
       _issued |= p->before;
   }
+  void _update_revokes(ceph_seq_t seq, unsigned caps) {
+    // can i forget any revocations?
+    while (!_revokes.empty() && _revokes.front().seq < seq)
+      _revokes.pop_front();
+
+    if (!_revokes.empty() && _revokes.front().seq == seq) {
+      list<revoke_info>::iterator p = _revokes.begin();
+      unsigned prev_pending = p->before;
+      p->before = caps;
+      // client actively released caps?
+      unsigned release = prev_pending & ~caps;
+      if (release) {
+	for (++p; p != _revokes.end(); ++p) {
+	  // we issued new caps to client?
+	  release &= prev_pending | ~(p->before);
+	  if (release == 0)
+	    break;
+	  prev_pending = p->before;
+	  p->before &= ~release;
+	}
+	if (release) {
+	  // we issued new caps to client?
+	  release &= prev_pending | ~_pending;
+	  _pending &= ~release;
+	}
+      }
+    }
+  }
   void confirm_receipt(ceph_seq_t seq, unsigned caps) {
     if (seq == last_sent) {
-      _pending = caps;
       _revokes.clear();
       _issued = caps;
+      // don't add bits
+      _pending &= caps;
     } else {
-      // can i forget any revocations?
-      while (!_revokes.empty() &&
-	     _revokes.front().seq <= seq)
-	_revokes.pop_front();
+      _update_revokes(seq, caps);
       _calc_issued();
     }
     //check_rdcaps_list();
-- 
1.8.1.4



* [PATCH 2/6] mds: revoke GSHARED cap when finishing xlock
From: Yan, Zheng @ 2013-07-17  8:28 UTC (permalink / raw)
  To: ceph-devel; +Cc: sage, Yan, Zheng

From: "Yan, Zheng" <zheng.z.yan@intel.com>

If the lock state is LOCK_XLOCKDONE, the xlocker can hold the GSHARED
cap. So when finishing the xlock, we may need to revoke the GSHARED cap.

In most cases Locker::_finish_xlock() directly sets the lock state to
LOCK_LOCK or LOCK_EXCL, which hides the issue. If 'num_rdlock > 0'
or 'num_wrlock > 0' when finishing the xlock, the issue shows up:
the lock gets stuck in LOCK_XLOCKDONE forever.
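
As a rough illustration of the new decision (a simplified toy with
invented names; the real _finish_xlock() also checks the dentry-lock
type and derives the loner from the CInode): jump straight to LOCK_EXCL
only when no rdlocks, wrlocks or client leases remain and there is a
loner client, otherwise let eval_gather() drive the transition so the
xlocker's GSHARED cap gets revoked.

  #include <cassert>

  enum ToyLockState { LOCK_XLOCKDONE, LOCK_LOCK, LOCK_EXCL };

  struct ToyLock {
    int rdlocks = 0, wrlocks = 0, leases = 0;
    bool has_loner = false;
    ToyLockState state = LOCK_XLOCKDONE;
  };

  // returns true when eval_gather() must finish the transition
  // (and revoke the xlocker's GSHARED cap on the way to LOCK_LOCK)
  bool finish_xlock(ToyLock &l) {
    if (l.rdlocks == 0 && l.wrlocks == 0 && l.leases == 0 && l.has_loner) {
      l.state = LOCK_EXCL;   // fast path: the loner client keeps its caps
      return false;
    }
    return true;             // slow path: let eval_gather() revoke GSHARED
  }

  int main() {
    ToyLock idle;
    idle.has_loner = true;
    assert(!finish_xlock(idle) && idle.state == LOCK_EXCL);

    ToyLock busy;
    busy.has_loner = true;
    busy.rdlocks = 1;        // a leftover rdlock used to leave the lock stuck
    assert(finish_xlock(busy));
    return 0;
  }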

Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
---
 src/mds/Locker.cc | 42 +++++++++++++++---------------------------
 1 file changed, 15 insertions(+), 27 deletions(-)

diff --git a/src/mds/Locker.cc b/src/mds/Locker.cc
index 30e014a..afaac1c 100644
--- a/src/mds/Locker.cc
+++ b/src/mds/Locker.cc
@@ -547,8 +547,6 @@ void Locker::cancel_locking(Mutation *mut, set<CInode*> *pneed_issue)
       bool need_issue = false;
       if (lock->get_state() == LOCK_PREXLOCK)
 	_finish_xlock(lock, &need_issue);
-      if (lock->is_stable())
-	eval(lock, &need_issue);
       if (need_issue)
 	pneed_issue->insert(static_cast<CInode *>(lock->get_parent()));
     }
@@ -1461,16 +1459,21 @@ bool Locker::xlock_start(SimpleLock *lock, MDRequest *mut)
 void Locker::_finish_xlock(SimpleLock *lock, bool *pneed_issue)
 {
   assert(!lock->is_stable());
-  if (lock->get_type() != CEPH_LOCK_DN && (static_cast<CInode*>(lock->get_parent())->get_loner()) >= 0)
+  if (lock->get_num_rdlocks() == 0 &&
+      lock->get_num_wrlocks() == 0 &&
+      lock->get_num_client_lease() == 0 &&
+      lock->get_type() != CEPH_LOCK_DN &&
+      (static_cast<CInode*>(lock->get_parent())->get_loner()) >= 0) {
     lock->set_state(LOCK_EXCL);
-  else
-    lock->set_state(LOCK_LOCK);
-  if (lock->get_type() == CEPH_LOCK_DN && lock->get_parent()->is_replicated() &&
-      !lock->is_waiter_for(SimpleLock::WAIT_WR))
-    simple_sync(lock, pneed_issue);
-  if (lock->get_cap_shift())
-    *pneed_issue = true;
-  lock->get_parent()->auth_unpin(lock);
+    lock->get_parent()->auth_unpin(lock);
+    lock->finish_waiters(SimpleLock::WAIT_STABLE|SimpleLock::WAIT_WR|SimpleLock::WAIT_RD);
+    if (lock->get_cap_shift())
+      *pneed_issue = true;
+  } else {
+    // the xlocker may have CEPH_CAP_GSHARED, need to revoke it
+    // if next state is LOCK_LOCK
+    eval_gather(lock, true, pneed_issue);
+  }
 }
 
 void Locker::xlock_finish(SimpleLock *lock, Mutation *mut, bool *pneed_issue)
@@ -1508,24 +1511,9 @@ void Locker::xlock_finish(SimpleLock *lock, Mutation *mut, bool *pneed_issue)
 			 SimpleLock::WAIT_WR | 
 			 SimpleLock::WAIT_RD, 0); 
   } else {
-    if (lock->get_num_xlocks() == 0 &&
-	lock->get_num_rdlocks() == 0 &&
-	lock->get_num_wrlocks() == 0 &&
-	lock->get_num_client_lease() == 0) {
+    if (lock->get_num_xlocks() == 0)
       _finish_xlock(lock, &do_issue);
-    }
-
-    // others waiting?
-    lock->finish_waiters(SimpleLock::WAIT_STABLE |
-			 SimpleLock::WAIT_WR | 
-			 SimpleLock::WAIT_RD, 0); 
   }
-    
-  // eval?
-  if (!lock->is_stable())
-    eval_gather(lock, false, &do_issue);
-  else if (lock->get_parent()->is_auth())
-    try_eval(lock, &do_issue);
   
   if (do_issue) {
     CInode *in = static_cast<CInode*>(lock->get_parent());
-- 
1.8.1.4



* [PATCH 3/6] mds: remove "type != CEPH_LOCK_DN" check in Locker::cancel_locking()
From: Yan, Zheng @ 2013-07-17  8:28 UTC (permalink / raw)
  To: ceph-devel; +Cc: sage, Yan, Zheng

From: "Yan, Zheng" <zheng.z.yan@intel.com>

For acquiring/cancelling an xlock, the lock state transitions for
dentry locks and other lock types are the same. So I think the
"type != CEPH_LOCK_DN" check doesn't make sense.

Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
---
 src/mds/Locker.cc | 12 +++++-------
 1 file changed, 5 insertions(+), 7 deletions(-)

diff --git a/src/mds/Locker.cc b/src/mds/Locker.cc
index afaac1c..aaf6590 100644
--- a/src/mds/Locker.cc
+++ b/src/mds/Locker.cc
@@ -543,13 +543,11 @@ void Locker::cancel_locking(Mutation *mut, set<CInode*> *pneed_issue)
   dout(10) << "cancel_locking " << *lock << " on " << *mut << dendl;
 
   if (lock->get_parent()->is_auth()) {
-    if (lock->get_type() != CEPH_LOCK_DN) {
-      bool need_issue = false;
-      if (lock->get_state() == LOCK_PREXLOCK)
-	_finish_xlock(lock, &need_issue);
-      if (need_issue)
-	pneed_issue->insert(static_cast<CInode *>(lock->get_parent()));
-    }
+    bool need_issue = false;
+    if (lock->get_state() == LOCK_PREXLOCK)
+      _finish_xlock(lock, &need_issue);
+    if (need_issue)
+      pneed_issue->insert(static_cast<CInode *>(lock->get_parent()));
   }
   mut->finish_locking(lock);
 }
-- 
1.8.1.4



* [PATCH 4/6] mds: handle "state == LOCK_LOCK_XLOCK" when cancelling xlock
From: Yan, Zheng @ 2013-07-17  8:28 UTC (permalink / raw)
  To: ceph-devel; +Cc: sage, Yan, Zheng

From: "Yan, Zheng" <zheng.z.yan@intel.com>

If we find the lock state is LOCK_LOCK_XLOCK when cancelling an xlock,
set the lock state to LOCK_XLOCKDONE and call Locker::eval_gather().
This makes sure the lock will eventually transition to a stable state
(LOCK_XLOCKDONE's next state is stable).

Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
---
 src/mds/Locker.cc | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/src/mds/Locker.cc b/src/mds/Locker.cc
index aaf6590..47ea6fc 100644
--- a/src/mds/Locker.cc
+++ b/src/mds/Locker.cc
@@ -544,8 +544,13 @@ void Locker::cancel_locking(Mutation *mut, set<CInode*> *pneed_issue)
 
   if (lock->get_parent()->is_auth()) {
     bool need_issue = false;
-    if (lock->get_state() == LOCK_PREXLOCK)
+    if (lock->get_state() == LOCK_PREXLOCK) {
       _finish_xlock(lock, &need_issue);
+    } else if (lock->get_state() == LOCK_LOCK_XLOCK &&
+	       lock->get_num_xlocks() == 0) {
+      lock->set_state(LOCK_XLOCKDONE);
+      eval_gather(lock, true, &need_issue);
+    }
     if (need_issue)
       pneed_issue->insert(static_cast<CInode *>(lock->get_parent()));
   }
@@ -1509,8 +1514,11 @@ void Locker::xlock_finish(SimpleLock *lock, Mutation *mut, bool *pneed_issue)
 			 SimpleLock::WAIT_WR | 
 			 SimpleLock::WAIT_RD, 0); 
   } else {
-    if (lock->get_num_xlocks() == 0)
+    if (lock->get_num_xlocks() == 0) {
+      if (lock->get_state() == LOCK_LOCK_XLOCK)
+	lock->set_state(LOCK_XLOCKDONE);
       _finish_xlock(lock, &do_issue);
+    }
   }
   
   if (do_issue) {
-- 
1.8.1.4



* [PATCH 5/6] mds: wake xlock waiter when xlock is done
From: Yan, Zheng @ 2013-07-17  8:28 UTC (permalink / raw)
  To: ceph-devel; +Cc: sage, Yan, Zheng

From: "Yan, Zheng" <zheng.z.yan@intel.com>

Once a lock is in the LOCK_XLOCKDONE state, the client that already
holds the xlock can acquire extra xlocks. So wake up xlock waiters
when we set the lock state to LOCK_XLOCKDONE.
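
For illustration, a minimal sketch of the waiter mechanism (a toy with
invented names, not the real SimpleLock): a retry callback parked under
WAIT_XLOCK is only woken when that same mask is finished, which is why
xlock_start() now waits on WAIT_XLOCK and set_xlocks_done() now
finishes it.

  #include <cassert>
  #include <functional>
  #include <utility>
  #include <vector>

  // toy waiter mechanism: callbacks park under a mask and are woken
  // only when one of those mask bits is finished
  struct ToyLock {
    enum { WAIT_WR = 1 << 0, WAIT_XLOCK = 1 << 1 };
    std::vector<std::pair<unsigned, std::function<void()>>> waiters;

    void add_waiter(unsigned mask, std::function<void()> fin) {
      waiters.push_back({mask, std::move(fin)});
    }
    void finish_waiters(unsigned mask) {
      for (auto it = waiters.begin(); it != waiters.end(); ) {
        if (it->first & mask) { it->second(); it = waiters.erase(it); }
        else ++it;
      }
    }
  };

  int main() {
    ToyLock l;
    bool retried = false;
    // a second xlock request from the xlocker parks itself under WAIT_XLOCK
    l.add_waiter(ToyLock::WAIT_XLOCK, [&] { retried = true; });
    // set_xlock_done() now finishes WAIT_XLOCK and the request is retried;
    // before the patch the retry was parked under WAIT_WR and nothing woke
    // it when the xlock was marked done
    l.finish_waiters(ToyLock::WAIT_XLOCK);
    assert(retried);
    return 0;
  }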

Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
---
 src/mds/Locker.cc | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/src/mds/Locker.cc b/src/mds/Locker.cc
index 47ea6fc..a4b50db 100644
--- a/src/mds/Locker.cc
+++ b/src/mds/Locker.cc
@@ -477,6 +477,7 @@ void Locker::set_xlocks_done(Mutation *mut, bool skip_dentry)
       continue;
     dout(10) << "set_xlocks_done on " << **p << " " << *(*p)->get_parent() << dendl;
     (*p)->set_xlock_done();
+    (*p)->finish_waiters(ScatterLock::WAIT_XLOCK);
   }
 }
 
@@ -1419,7 +1420,7 @@ bool Locker::xlock_start(SimpleLock *lock, MDRequest *mut)
       }
     }
     
-    lock->add_waiter(SimpleLock::WAIT_WR|SimpleLock::WAIT_STABLE, new C_MDS_RetryRequest(mdcache, mut));
+    lock->add_waiter(SimpleLock::WAIT_XLOCK|SimpleLock::WAIT_STABLE, new C_MDS_RetryRequest(mdcache, mut));
     nudge_log(lock);
     return false;
   } else {
-- 
1.8.1.4



* [PATCH 6/6] mds: change LOCK_SCAN to unstable state
From: Yan, Zheng @ 2013-07-17  8:28 UTC (permalink / raw)
  To: ceph-devel; +Cc: sage, Yan, Zheng

From: "Yan, Zheng" <zheng.z.yan@intel.com>

commit 0071b8e75b (mds: stay in SCAN state in file_eval) makes
Locker::file_eval() ignore locks in the LOCK_SCAN state. If no request
changes the lock state, the lock can be stuck in the LOCK_SCAN state
forever. This can cause client read/write hangs, because a lock in
the LOCK_SCAN state does not allow Frw caps.

The fix is to change LOCK_SCAN to an unstable state. Thanks to the
CInode::STATE_RECOVERING check in Locker::eval_gather(), the lock
stays in the SCAN state while the file is being recovered. The lock
will transition to a stable state once the recovery finishes.
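
For illustration, a minimal toy of the stable-vs-unstable distinction
this fix relies on (invented names, not the real SimpleLock/Locker
code): eval_gather() only advances locks in unstable states and waits
while the inode is still recovering, so an unstable LOCK_SCAN is
guaranteed to settle (in LOCK_LOCK, per the new filelock table entry)
once recovery completes.

  #include <cassert>

  enum ToyState { LOCK_SCAN, LOCK_LOCK };  // new table: SCAN's next state is LOCK

  struct ToyLock {
    ToyState state;
    bool stable;       // whether the current state counts as stable
    bool recovering;   // stands in for CInode::STATE_RECOVERING
  };

  // eval_gather() only touches unstable locks, and only once recovery is done
  void eval_gather(ToyLock &l) {
    if (l.stable)     return;   // stable states are left alone
    if (l.recovering) return;   // file still being recovered: stay in SCAN
    l.state = LOCK_LOCK;        // move to the next (stable) state
    l.stable = true;
  }

  int main() {
    // old behaviour: LOCK_SCAN was stable and file_eval() skipped it,
    // so nothing ever moved the lock on and Frw caps were never issued
    ToyLock before = {LOCK_SCAN, true, false};
    eval_gather(before);
    assert(before.state == LOCK_SCAN);

    // new behaviour: LOCK_SCAN is unstable, so finishing recovery unblocks it
    ToyLock after = {LOCK_SCAN, false, true};
    eval_gather(after);                  // still recovering: stays in SCAN
    assert(after.state == LOCK_SCAN);
    after.recovering = false;
    eval_gather(after);                  // recovery done: settles in LOCK_LOCK
    assert(after.state == LOCK_LOCK && after.stable);
    return 0;
  }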

Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
---
 src/mds/Locker.cc  | 18 +++++-------------
 src/mds/MDCache.cc |  1 +
 src/mds/locks.c    |  3 +--
 src/mds/locks.h    |  1 -
 4 files changed, 7 insertions(+), 16 deletions(-)

diff --git a/src/mds/Locker.cc b/src/mds/Locker.cc
index a4b50db..eef48f9 100644
--- a/src/mds/Locker.cc
+++ b/src/mds/Locker.cc
@@ -3435,7 +3435,6 @@ bool Locker::simple_sync(SimpleLock *lock, bool *need_issue)
 
     switch (lock->get_state()) {
     case LOCK_MIX: lock->set_state(LOCK_MIX_SYNC); break;
-    case LOCK_SCAN:
     case LOCK_LOCK: lock->set_state(LOCK_LOCK_SYNC); break;
     case LOCK_XSYN: lock->set_state(LOCK_XSYN_SYNC); break;
     case LOCK_EXCL: lock->set_state(LOCK_EXCL_SYNC); break;
@@ -3512,7 +3511,6 @@ void Locker::simple_excl(SimpleLock *lock, bool *need_issue)
     in = static_cast<CInode *>(lock->get_parent());
 
   switch (lock->get_state()) {
-  case LOCK_SCAN:
   case LOCK_LOCK: lock->set_state(LOCK_LOCK_EXCL); break;
   case LOCK_SYNC: lock->set_state(LOCK_SYNC_EXCL); break;
   case LOCK_XSYN: lock->set_state(LOCK_XSYN_EXCL); break;
@@ -3571,7 +3569,6 @@ void Locker::simple_lock(SimpleLock *lock, bool *need_issue)
   int old_state = lock->get_state();
 
   switch (lock->get_state()) {
-  case LOCK_SCAN: lock->set_state(LOCK_SCAN_LOCK); break;
   case LOCK_SYNC: lock->set_state(LOCK_SYNC_LOCK); break;
   case LOCK_XSYN:
     file_excl(static_cast<ScatterLock*>(lock), need_issue);
@@ -4157,10 +4154,6 @@ void Locker::file_eval(ScatterLock *lock, bool *need_issue)
   if (lock->get_parent()->is_freezing_or_frozen())
     return;
 
-  // wait for scan
-  if (lock->get_state() == LOCK_SCAN)
-    return;
-
   // excl -> *?
   if (lock->get_state() == LOCK_EXCL) {
     dout(20) << " is excl" << dendl;
@@ -4347,7 +4340,6 @@ void Locker::file_excl(ScatterLock *lock, bool *need_issue)
   switch (lock->get_state()) {
   case LOCK_SYNC: lock->set_state(LOCK_SYNC_EXCL); break;
   case LOCK_MIX: lock->set_state(LOCK_MIX_EXCL); break;
-  case LOCK_SCAN:
   case LOCK_LOCK: lock->set_state(LOCK_LOCK_EXCL); break;
   case LOCK_XSYN: lock->set_state(LOCK_XSYN_EXCL); break;
   default: assert(0);
@@ -4458,12 +4450,12 @@ void Locker::file_recover(ScatterLock *lock)
     issue_caps(in);
     gather++;
   }
-  if (gather) {
-    lock->get_parent()->auth_pin(lock);
-  } else {
-    lock->set_state(LOCK_SCAN);
+  
+  lock->set_state(LOCK_SCAN);
+  if (gather)
+    in->state_set(CInode::STATE_NEEDSRECOVER);
+  else
     mds->mdcache->queue_file_recover(in);
-  }
 }
 
 
diff --git a/src/mds/MDCache.cc b/src/mds/MDCache.cc
index e592dde..51efce8 100644
--- a/src/mds/MDCache.cc
+++ b/src/mds/MDCache.cc
@@ -5720,6 +5720,7 @@ void MDCache::identify_files_to_recover(vector<CInode*>& recover_q, vector<CInod
     }
 
     if (recover) {
+      in->auth_pin(&in->filelock);
       in->filelock.set_state(LOCK_PRE_SCAN);
       recover_q.push_back(in);
       
diff --git a/src/mds/locks.c b/src/mds/locks.c
index 9031087..37e3f5e 100644
--- a/src/mds/locks.c
+++ b/src/mds/locks.c
@@ -122,8 +122,7 @@ const struct sm_state_t filelock[LOCK_MAX] = {
     [LOCK_EXCL_XSYN] = { LOCK_XSYN, false, LOCK_LOCK, 0,    0,   XCL, 0,   0,   0,   0,   0,CEPH_CAP_GCACHE|CEPH_CAP_GBUFFER,0,0 },
 
     [LOCK_PRE_SCAN]  = { LOCK_SCAN, false, LOCK_LOCK, 0,    0,   0,   0,   0,   0,   0,   0,0,0,0 },
-    [LOCK_SCAN]      = { 0,         false, LOCK_LOCK, 0,    0,   0,   0,   0,   0,   0,   0,0,0,0 },
-    [LOCK_SCAN_LOCK] = { LOCK_LOCK, false, LOCK_LOCK, 0,    0,   0,   0,   0,   0,   0,   0,0,0,0 },
+    [LOCK_SCAN]      = { LOCK_LOCK, false, LOCK_LOCK, 0,    0,   0,   0,   0,   0,   0,   0,0,0,0 },
 };
 
 const struct sm_t sm_filelock = {
diff --git a/src/mds/locks.h b/src/mds/locks.h
index 2adcbf2..d1585ce 100644
--- a/src/mds/locks.h
+++ b/src/mds/locks.h
@@ -86,7 +86,6 @@ enum {
 
   LOCK_PRE_SCAN,
   LOCK_SCAN,
-  LOCK_SCAN_LOCK,
 
   LOCK_SNAP_SYNC,
 
-- 
1.8.1.4



* Re: [PATCH 0/6] misc fixes for mds
From: Yan, Zheng @ 2013-07-23  5:34 UTC (permalink / raw)
  To: sage; +Cc: ceph-devel

On 07/17/2013 04:28 PM, Yan, Zheng wrote:
> From: "Yan, Zheng" <zheng.z.yan@intel.com>
> 
> these patches are also in:
>   git://github.com/ukernel/ceph.git wip-mds
> 

I found a bug in these patches, please ignore them.



* [PATCH 0/6] misc fixes for mds
From: Yan, Zheng @ 2013-08-05  6:10 UTC (permalink / raw)
  To: ceph-devel; +Cc: sage, Yan, Zheng

From: "Yan, Zheng" <zheng.z.yan@intel.com>

These patches are also in:
  git://github.com/ukernel/ceph.git wip-mds

These patches, together with my kernel patches, have been tested for
about two weeks. They passed several 48+ hour stress tests; no request
hangs were found.

Regards
Yan, Zheng

