From: Changwei Ge <ge.changwei@h3c.com>
To: ocfs2-devel@oss.oracle.com
Subject: [Ocfs2-devel] [PATCH] ocfs2: wait for recovering done after direct unlock request
Date: Fri, 15 Feb 2019 06:29:17 +0000	[thread overview]
Message-ID: <63ADC13FD55D6546B7DECE290D39E3730127854170@H3CMLB14-EX.srv.huawei-3com.com> (raw)
In-Reply-To: 5C6659FA.2000100@huawei.com

Hi Jun,

On 2019/2/15 14:20, piaojun wrote:
> Hi Changwei,
> 
> The DLM process is a little bit complex, so I suggest pasting the code
> path. I wonder if the code path I pasted below is right?
> 
> Thanks,
> Jun
> 
> On 2019/2/14 14:14, Changwei Ge wrote:
>> There is a scenario that causes ocfs2 umount to hang when multiple hosts
>> are rebooting at the same time.
>>
>> NODE1                           NODE2               NODE3
>> send unlock request to NODE2
>>                                  dies
>>                                                      become recovery master
>>                                                      recover NODE2
>> find NODE2 dead
>> mark resource RECOVERING
>> directly remove lock from grant list
> dlm_do_local_recovery_cleanup
>    dlm_move_lockres_to_recovery_list
>      res->state |= DLM_LOCK_RES_RECOVERING;
>      list_add_tail(&res->recovering, &dlm->reco.resources);
> 
>> calculate usage but RECOVERING marked
>> **miss the window of purging
> dlmunlock
>    dlmunlock_remote
>      dlmunlock_common // unlock successfully directly
> 
>    dlm_lockres_calc_usage
>      __dlm_lockres_calc_usage
>        __dlm_lockres_unused
>          if (res->state & (DLM_LOCK_RES_RECOVERING| // won't purge lock as DLM_LOCK_RES_RECOVERING is set

True.
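For reference, the purge decision at the end of that trace looks roughly
like this (a simplified sketch of __dlm_lockres_unused(), trimmed to the
checks that matter here; field and helper names are from memory and may
differ slightly from the exact upstream code):

/* Caller holds res->spinlock.  Nonzero means the lockres is unused and
 * may be put on the purge list, then purged and unhashed. */
static int lockres_unused_sketch(struct dlm_lock_resource *res)
{
	/* still has granted/converting/blocked locks attached */
	if (__dlm_lockres_has_locks(res))
		return 0;

	/* locks are still being created against this resource */
	if (res->inflight_locks)
		return 0;

	/* resource is dirty or queued to be flushed */
	if (!list_empty(&res->dirty) || (res->state & DLM_LOCK_RES_DIRTY))
		return 0;

	/* the check that bites in the scenario above: the direct unlock
	 * already emptied the grant list, but RECOVERING is still set,
	 * so the resource is never treated as unused and never reaches
	 * the purge list */
	if (res->state & DLM_LOCK_RES_RECOVERING)
		return 0;

	return 1;
}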

> 
>> clear RECOVERING
> dlm_finish_local_lockres_recovery
>    res->state &= ~DLM_LOCK_RES_RECOVERING;
> 
> Could you help explain where it gets stuck?

Sure.
Since the dlm missed the window to purge the lock resource, that resource
can never be unhashed.

During umount:
dlm_unregister_domain
  dlm_migrate_all_locks -> there is always a lock resource left hashed,
  so dlm_migrate_all_locks() never returns and umount hangs there.
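Very roughly, that loop behaves like this (an illustrative sketch, not
the exact dlm_migrate_all_locks() code; the drain helper below is made
up just for the sketch):

/* made-up helper for this sketch only: try to migrate or purge every
 * lockres still hashed, and report how many remain */
static int sketch_drain_lockres_hash(struct dlm_ctxt *dlm);

static void migrate_all_locks_sketch(struct dlm_ctxt *dlm)
{
	int remaining;

	/* keep walking the lockres hash until every bucket is empty;
	 * only then does dlm_unregister_domain() move on */
	do {
		spin_lock(&dlm->spinlock);
		/* the lockres that missed its purge window is never
		 * "unused", so this never reaches zero and umount
		 * spins here */
		remaining = sketch_drain_lockres_hash(dlm);
		spin_unlock(&dlm->spinlock);
	} while (remaining);
}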

Thanks,
Changwei

> 
>>
>> To reproduce this issue, crash a host and then umount ocfs2
>> from another node.
>>
>> To solve this, just let the unlock process wait until recovery is done.
>>
>> Signed-off-by: Changwei Ge <ge.changwei@h3c.com>
>> ---
>>   fs/ocfs2/dlm/dlmunlock.c | 23 +++++++++++++++++++----
>>   1 file changed, 19 insertions(+), 4 deletions(-)
>>
>> diff --git a/fs/ocfs2/dlm/dlmunlock.c b/fs/ocfs2/dlm/dlmunlock.c
>> index 63d701c..c8e9b70 100644
>> --- a/fs/ocfs2/dlm/dlmunlock.c
>> +++ b/fs/ocfs2/dlm/dlmunlock.c
>> @@ -105,7 +105,8 @@ static enum dlm_status dlmunlock_common(struct dlm_ctxt *dlm,
>>   	enum dlm_status status;
>>   	int actions = 0;
>>   	int in_use;
>> -        u8 owner;
>> +	u8 owner;
>> +	int recovery_wait = 0;
>>   
>>   	mlog(0, "master_node = %d, valblk = %d\n", master_node,
>>   	     flags & LKM_VALBLK);
>> @@ -208,9 +209,12 @@ static enum dlm_status dlmunlock_common(struct dlm_ctxt *dlm,
>>   		}
>>   		if (flags & LKM_CANCEL)
>>   			lock->cancel_pending = 0;
>> -		else
>> -			lock->unlock_pending = 0;
>> -
>> +		else {
>> +			if (!lock->unlock_pending)
>> +				recovery_wait = 1;
>> +			else
>> +				lock->unlock_pending = 0;
>> +		}
>>   	}
>>   
>>   	/* get an extra ref on lock.  if we are just switching
>> @@ -244,6 +248,17 @@ static enum dlm_status dlmunlock_common(struct dlm_ctxt *dlm,
>>   	spin_unlock(&res->spinlock);
>>   	wake_up(&res->wq);
>>   
>> +	if (recovery_wait) {
>> +		spin_lock(&res->spinlock);
>> +		/* Unlock request will directly succeed after the owner dies,
>> +		 * and the lock is already removed from the grant list. We have
>> +		 * to wait for RECOVERING to be done, or we miss the chance to
>> +		 * purge it, since the removal is much faster than recovery.
>> +		 */
>> +		__dlm_wait_on_lockres_flags(res, DLM_LOCK_RES_RECOVERING);
>> +		spin_unlock(&res->spinlock);
>> +	}
>> +
>>   	/* let the caller's final dlm_lock_put handle the actual kfree */
>>   	if (actions & DLM_UNLOCK_FREE_LOCK) {
>>   		/* this should always be coupled with list removal */
>>
> 


Thread overview: 11+ messages
2019-02-14  6:14 [Ocfs2-devel] [PATCH] ocfs2: wait for recovering done after direct unlock request Changwei Ge
2019-02-15  6:19 ` piaojun
2019-02-15  6:29   ` Changwei Ge [this message]
2019-02-15  8:36     ` piaojun
2019-02-15  8:43       ` Changwei Ge
2019-02-18  9:46         ` Changwei Ge
2019-02-19  1:15           ` piaojun
2019-02-19  2:27             ` Changwei Ge
2019-02-19  8:35               ` piaojun
2019-02-21  6:23                 ` Changwei Ge
2019-09-17  1:09 ` Joseph Qi
