From: Changwei Ge <ge.changwei@h3c.com>
To: Joseph Qi <jiangqi903@gmail.com>, Larry Chen <lchen@suse.com>,
	"mark@fasheh.com" <mark@fasheh.com>,
	"jlbec@evilplan.org" <jlbec@evilplan.org>
Cc: "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"ocfs2-devel@oss.oracle.com" <ocfs2-devel@oss.oracle.com>,
	Andrew Morton <akpm@linux-foundation.org>
Subject: Re: [Ocfs2-devel] [PATCH V3] ocfs2: fix dead lock caused by ocfs2_defrag_extent
Date: Thu, 1 Nov 2018 12:34:58 +0000	[thread overview]
Message-ID: <63ADC13FD55D6546B7DECE290D39E37301277DE3FE@H3CMLB12-EX.srv.huawei-3com.com> (raw)
In-Reply-To: <4f96339a-6209-eee1-3792-4720eca35688@gmail.com>

Hello Joseph,

On 2018/11/1 20:16, Joseph Qi wrote:
> 
> 
> On 18/11/1 19:52, Changwei Ge wrote:
>> Hello Joseph,
>>
>> On 2018/11/1 17:01, Joseph Qi wrote:
>>> Hi Larry,
>>>
>>> On 18/11/1 15:14, Larry Chen wrote:
>>>> ocfs2_defrag_extent may fall into deadlock.
>>>>
>>>> ocfs2_ioctl_move_extents
>>>>   ocfs2_move_extents
>>>>     ocfs2_defrag_extent
>>>>       ocfs2_lock_allocators_move_extents
>>>>         ocfs2_reserve_clusters
>>>>           inode_lock GLOBAL_BITMAP_SYSTEM_INODE
>>>>       __ocfs2_flush_truncate_log
>>>>         inode_lock GLOBAL_BITMAP_SYSTEM_INODE
>>>>
>>>> As the backtrace above shows, ocfs2_reserve_clusters() will
>>>> inode_lock the global bitmap if the local allocator does not have
>>>> sufficient clusters. If the global bitmap can meet the demand,
>>>> ocfs2_reserve_clusters() returns success with the global bitmap
>>>> still locked.
>>>>
>>>> After ocfs2_reserve_clusters(), if the truncate log is full,
>>>> __ocfs2_flush_truncate_log() will deadlock, because it needs to
>>>> inode_lock the global bitmap, which is already locked.
>>>>
>>>> To fix this bug, remove the code that locks the global allocator
>>>> from ocfs2_lock_allocators_move_extents(), and move it to after
>>>> __ocfs2_flush_truncate_log().
>>>>
>>>> ocfs2_lock_allocators_move_extents() is called from two places: one
>>>> is here, and the other does not need the data allocator context, so
>>>> this patch does not affect that caller.
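
To make the lock ordering concrete, here is a minimal stand-alone
sketch (a user-space pthread model with hypothetical names, not the
actual ocfs2 code) of why reserving before flushing self-deadlocks,
and why the patched order is safe:

  #include <pthread.h>
  #include <stdio.h>

  /* Stands in for the GLOBAL_BITMAP_SYSTEM_INODE lock, which is not
   * recursive: taking it twice on one code path hangs forever. */
  static pthread_mutex_t global_bitmap_lock = PTHREAD_MUTEX_INITIALIZER;

  /* Like __ocfs2_flush_truncate_log(): takes and releases the lock. */
  static void flush_truncate_log(void)
  {
          pthread_mutex_lock(&global_bitmap_lock);
          /* ... flush the truncate log ... */
          pthread_mutex_unlock(&global_bitmap_lock);
  }

  /* Like ocfs2_reserve_clusters() falling back to the global bitmap:
   * it returns to the caller with the lock still held. */
  static void reserve_clusters(void)
  {
          pthread_mutex_lock(&global_bitmap_lock);
          /* ... reserve clusters; lock stays held for the caller ... */
  }

  int main(void)
  {
          /* Buggy order: reserve_clusters(); flush_truncate_log();
           * the flush would block on the lock reserve still holds. */

          /* Fixed order, as in this patch: flush first, then reserve. */
          flush_truncate_log();
          reserve_clusters();
          printf("no deadlock: flushed before reserving\n");
          pthread_mutex_unlock(&global_bitmap_lock);
          return 0;
  }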
>>>>
>>>> Change log:
>>>> 1. Correct the function comment.
>>>> 2. Remove unused argument from ocfs2_lock_meta_allocator_move_extents.
>>>>
>>>> Signed-off-by: Larry Chen <lchen@suse.com>
>>>> ---
>>>>    fs/ocfs2/move_extents.c | 47 ++++++++++++++++++++++++++---------------------
>>>>    1 file changed, 26 insertions(+), 21 deletions(-)
>>>>
>>
>>> IMO, here clusters_to_move is only for data_ac. Since we change this
>>> function to only handle meta_ac, I'm afraid the clusters_to_move
>>> related logic has to be changed correspondingly.
>>
>> I don't think we can remove *clusters_to_move* here, as clusters can
>> be reserved later, outside this function, while we still have to
>> reserve metadata (extent records) in advance.
>> So we need that argument.
>>
> I was not saying just remove it.
> IIUC, clusters_to_move is for reserving data clusters (for meta_ac, we

Um...
*clusters_to_move* is not only used for reserving data clusters.
It is also an input for calculating whether the existing extents still
have enough free records for later tree operations, such as attaching
clusters to extents.

Please refer to the code below:
  unsigned int max_recs_needed = 2 * extents_to_split + clusters_to_move;
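
As a stand-alone model of that check (hypothetical user-space names;
the real logic lives in fs/ocfs2/move_extents.c), clusters_to_move
bounds the worst-case number of extent records the move may add, which
is what decides whether metadata blocks must be reserved up front:

  #include <stdio.h>

  /* Worst case: every moved cluster may end up in its own extent
   * record, and each split can consume up to two extra records. */
  static unsigned int max_recs_needed(unsigned int extents_to_split,
                                      unsigned int clusters_to_move)
  {
          return 2 * extents_to_split + clusters_to_move;
  }

  int main(void)
  {
          unsigned int free_recs = 10;    /* free records in the tree */
          unsigned int need = max_recs_needed(1, 16);

          if (need > free_recs)
                  printf("reserve metadata: need %u records, %u free\n",
                         need, free_recs);
          return 0;
  }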



> mostly talk about blocks). Since we have now moved the data cluster
> reservation logic out of ocfs2_lock_allocators_move_extents(), leaving
> the clusters_to_move related logic here is odd.

As elaborated above, it is used to tell whether we need more extent
records. Anyway, I think we must keep *clusters_to_move* here, as
before. :-)

Thanks,
Changwei




> 
>>>>    					u32 extents_to_split,
>>>>    					struct ocfs2_alloc_context **meta_ac,
>>>> -					struct ocfs2_alloc_context **data_ac,
>>>>    					int extra_blocks,
>>>>    					int *credits)
>>>>    {
>>>> @@ -192,13 +188,6 @@ static int ocfs2_lock_allocators_move_extents(struct inode *inode,
>>>>    		goto out;
>>>>    	}
>>>>    
>>>> -	if (data_ac) {
>>>> -		ret = ocfs2_reserve_clusters(osb, clusters_to_move, data_ac);
>>>> -		if (ret) {
>>>> -			mlog_errno(ret);
>>>> -			goto out;
>>>> -		}
>>>> -	}
>>>>    
>>>>    	*credits += ocfs2_calc_extend_credits(osb->sb, et->et_root_el);
>>>>    
>>>> @@ -257,10 +246,10 @@ static int ocfs2_defrag_extent(struct ocfs2_move_extents_context *context,
>>>>    		}
>>>>    	}
>>>>    
>>>> -	ret = ocfs2_lock_allocators_move_extents(inode, &context->et, *len, 1,
>>>> -						 &context->meta_ac,
>>>> -						 &context->data_ac,
>>>> -						 extra_blocks, &credits);
>>>> +	ret = ocfs2_lock_meta_allocator_move_extents(inode, &context->et,
>>>> +						*len, 1,
>>>> +						&context->meta_ac,
>>>> +						extra_blocks, &credits);
>>>>    	if (ret) {
>>>>    		mlog_errno(ret);
>>>>    		goto out;
>>>> @@ -283,6 +272,21 @@ static int ocfs2_defrag_extent(struct ocfs2_move_extents_context *context,
>>>>    		}
>>>>    	}
>>>>    
>>>> +	/*
>>>> +	 * Make sure ocfs2_reserve_clusters() is called after
>>>> +	 * __ocfs2_flush_truncate_log(); otherwise a deadlock may happen.
>>>> +	 *
>>>> +	 * If ocfs2_reserve_clusters() is called before
>>>> +	 * __ocfs2_flush_truncate_log(), both will inode_lock the global
>>>> +	 * bitmap and a deadlock may happen.
>>>> +	 *
>>>> +	 */
>>>> +	ret = ocfs2_reserve_clusters(osb, *len, &context->data_ac);
>>>> +	if (ret) {
>>>> +		mlog_errno(ret);
>>>> +		goto out_unlock_mutex;
>>>> +	}
>>>> +
>>>>    	handle = ocfs2_start_trans(osb, credits);
>>>>    	if (IS_ERR(handle)) {
>>>>    		ret = PTR_ERR(handle);
>>>> @@ -600,9 +604,10 @@ static int ocfs2_move_extent(struct ocfs2_move_extents_context *context,
>>>>    		}
>>>>    	}
>>>>    
>>>> -	ret = ocfs2_lock_allocators_move_extents(inode, &context->et, len, 1,
>>>> -						 &context->meta_ac,
>>>> -						 NULL, extra_blocks, &credits);
>>>> +	ret = ocfs2_lock_meta_allocator_move_extents(inode, &context->et,
>>>> +						len, 1,
>>>> +						&context->meta_ac,
>>>> +						extra_blocks, &credits);
>>>>    	if (ret) {
>>>>    		mlog_errno(ret);
>>>>    		goto out;
>>>>
>>>
> 


Thread overview: 9+ messages
2018-11-01  7:14 [PATCH V3] ocfs2: fix dead lock caused by ocfs2_defrag_extent Larry Chen
2018-11-01  8:58 ` [Ocfs2-devel] " Joseph Qi
2018-11-01 11:52   ` Changwei Ge
2018-11-01 12:15     ` Joseph Qi
2018-11-01 12:34       ` Changwei Ge [this message]
2018-11-02  1:18         ` Joseph Qi
2018-11-01 12:39     ` Larry Chen
2018-11-01 12:48       ` Changwei Ge
2018-11-02  0:53 ` Changwei Ge
