* [Ocfs2-devel] [PATCH v2] ocfs2: retry once dlm_dispatch_assert_master failed with ENOMEM
@ 2014-04-08 10:47 Joseph Qi
  2014-04-09 21:44 ` Andrew Morton
  0 siblings, 1 reply; 5+ messages in thread
From: Joseph Qi @ 2014-04-08 10:47 UTC (permalink / raw)
  To: ocfs2-devel

When dlm_dispatch_assert_master fails in dlm_master_requery_handler,
the only possible error is ENOMEM.
Add retry logic so that a temporary memory shortage does not
trigger BUG().

Signed-off-by: Joseph Qi <joseph.qi@huawei.com>
---
 fs/ocfs2/dlm/dlmrecovery.c | 25 ++++++++++++++++++++-----
 1 file changed, 20 insertions(+), 5 deletions(-)

diff --git a/fs/ocfs2/dlm/dlmrecovery.c b/fs/ocfs2/dlm/dlmrecovery.c
index 7035af0..7db0465 100644
--- a/fs/ocfs2/dlm/dlmrecovery.c
+++ b/fs/ocfs2/dlm/dlmrecovery.c
@@ -1676,6 +1676,9 @@ int dlm_master_requery_handler(struct o2net_msg *msg, u32 len, void *data,
 	unsigned int hash;
 	int master = DLM_LOCK_RES_OWNER_UNKNOWN;
 	u32 flags = DLM_ASSERT_MASTER_REQUERY;
+	int ret, retries = 0;
+
+#define DISPATCH_ASSERT_RETRY_TIMES 3
 
 	if (!dlm_grab(dlm)) {
 		/* since the domain has gone away on this
@@ -1685,18 +1688,30 @@ int dlm_master_requery_handler(struct o2net_msg *msg, u32 len, void *data,
 
 	hash = dlm_lockid_hash(req->name, req->namelen);
 
+retry:
 	spin_lock(&dlm->spinlock);
 	res = __dlm_lookup_lockres(dlm, req->name, req->namelen, hash);
 	if (res) {
 		spin_lock(&res->spinlock);
 		master = res->owner;
 		if (master == dlm->node_num) {
-			int ret = dlm_dispatch_assert_master(dlm, res,
-							     0, 0, flags);
+			ret = dlm_dispatch_assert_master(dlm, res,
+					0, 0, flags);
 			if (ret < 0) {
-				mlog_errno(-ENOMEM);
-				/* retry!? */
-				BUG();
+				mlog_errno(ret);
+
+				/* ENOMEM returns, retry until
+				 * DISPATCH_ASSERT_RETRY_TIMES reached */
+				if (retries < DISPATCH_ASSERT_RETRY_TIMES) {
+					spin_unlock(&res->spinlock);
+					dlm_lockres_put(res);
+					spin_unlock(&dlm->spinlock);
+					msleep(50);
+					retries++;
+					goto retry;
+				} else {
+					BUG();
+				}
 			}
 		} else /* put.. incase we are not the master */
 			dlm_lockres_put(res);
-- 
1.8.4.3

^ permalink raw reply related	[flat|nested] 5+ messages in thread

* [Ocfs2-devel] [PATCH v2] ocfs2: retry once dlm_dispatch_assert_master failed with ENOMEM
  2014-04-08 10:47 [Ocfs2-devel] [PATCH v2] ocfs2: retry once dlm_dispatch_assert_master failed with ENOMEM Joseph Qi
@ 2014-04-09 21:44 ` Andrew Morton
  2014-04-10  1:16   ` Wengang
  0 siblings, 1 reply; 5+ messages in thread
From: Andrew Morton @ 2014-04-09 21:44 UTC (permalink / raw)
  To: ocfs2-devel

On Tue, 8 Apr 2014 18:47:04 +0800 Joseph Qi <joseph.qi@huawei.com> wrote:

> Once dlm_dispatch_assert_master failed in dlm_master_requery_handler,
> the only reason is ENOMEM.
> Add retry logic to avoid BUG() in case of not enough memory
> temporarily.
> 
> ...
>
> --- a/fs/ocfs2/dlm/dlmrecovery.c
> +++ b/fs/ocfs2/dlm/dlmrecovery.c
> @@ -1676,6 +1676,9 @@ int dlm_master_requery_handler(struct o2net_msg *msg, u32 len, void *data,
>  	unsigned int hash;
>  	int master = DLM_LOCK_RES_OWNER_UNKNOWN;
>  	u32 flags = DLM_ASSERT_MASTER_REQUERY;
> +	int ret, retries = 0;
> +
> +#define DISPATCH_ASSERT_RETRY_TIMES 3
>  
>  	if (!dlm_grab(dlm)) {
>  		/* since the domain has gone away on this
> @@ -1685,18 +1688,30 @@ int dlm_master_requery_handler(struct o2net_msg *msg, u32 len, void *data,
>  
>  	hash = dlm_lockid_hash(req->name, req->namelen);
>  
> +retry:
>  	spin_lock(&dlm->spinlock);
>  	res = __dlm_lookup_lockres(dlm, req->name, req->namelen, hash);
>  	if (res) {
>  		spin_lock(&res->spinlock);
>  		master = res->owner;
>  		if (master == dlm->node_num) {
> -			int ret = dlm_dispatch_assert_master(dlm, res,
> -							     0, 0, flags);
> +			ret = dlm_dispatch_assert_master(dlm, res,
> +					0, 0, flags);
>  			if (ret < 0) {
> -				mlog_errno(-ENOMEM);
> -				/* retry!? */
> -				BUG();
> +				mlog_errno(ret);
> +
> +				/* ENOMEM returns, retry until
> +				 * DISPATCH_ASSERT_RETRY_TIMES reached */
> +				if (retries < DISPATCH_ASSERT_RETRY_TIMES) {
> +					spin_unlock(&res->spinlock);
> +					dlm_lockres_put(res);
> +					spin_unlock(&dlm->spinlock);
> +					msleep(50);
> +					retries++;
> +					goto retry;
> +				} else {
> +					BUG();
> +				}

urgh, this is not good.  dlm_dispatch_assert_master() uses GFP_ATOMIC
and the chances of that failing are relatively high.  The msleep()
might save us, but what happens if we cannot get more memory until some
writeback occurs to this filesystem?

It would be much better to use GFP_KERNEL in
dlm_dispatch_assert_master().  That means preallocating the
dlm_work_item in the caller before taking the spinlock.
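
Andrew's preallocation idea can be sketched in userspace C roughly as follows. This is an illustrative sketch, not the real ocfs2 code: malloc stands in for a GFP_KERNEL kzalloc, comments mark where dlm->spinlock would sit, and all names are hypothetical.

```c
#include <stdlib.h>

/* Stand-in for struct dlm_work_item. */
struct work_item { int pending; };

static int dispatched_count;

static int requery_handler(int we_are_master)
{
	/* May-sleep allocation done up front, while no spinlock is
	 * held, so the critical section never needs GFP_ATOMIC. */
	struct work_item *item = malloc(sizeof(*item));
	if (!item)
		return -1;		/* would be -ENOMEM */
	item->pending = 1;

	/* --- spin_lock(&dlm->spinlock) would be taken here --- */
	if (we_are_master)
		dispatched_count++;	/* real code: queue item for the
					 * dlm worker thread */
	/* --- spin_unlock(&dlm->spinlock) --- */

	free(item);			/* sketch only: if queued, the
					 * worker would free it instead */
	return 0;
}
```

The point of the pattern is that the allocation either succeeds before any lock is taken, or the handler can fail cleanly; if the lookup then shows the preallocated item is unneeded, it is simply freed.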


* [Ocfs2-devel] [PATCH v2] ocfs2: retry once dlm_dispatch_assert_master failed with ENOMEM
  2014-04-09 21:44 ` Andrew Morton
@ 2014-04-10  1:16   ` Wengang
  2014-04-10 22:14     ` Andrew Morton
  0 siblings, 1 reply; 5+ messages in thread
From: Wengang @ 2014-04-10  1:16 UTC (permalink / raw)
  To: ocfs2-devel

On 2014/04/10 05:44, Andrew Morton wrote:
> On Tue, 8 Apr 2014 18:47:04 +0800 Joseph Qi <joseph.qi@huawei.com> wrote:
>
>> Once dlm_dispatch_assert_master failed in dlm_master_requery_handler,
>> the only reason is ENOMEM.
>> Add retry logic to avoid BUG() in case of not enough memory
>> temporarily.
>>
>> ...
>>
>> --- a/fs/ocfs2/dlm/dlmrecovery.c
>> +++ b/fs/ocfs2/dlm/dlmrecovery.c
>> @@ -1676,6 +1676,9 @@ int dlm_master_requery_handler(struct o2net_msg *msg, u32 len, void *data,
>>   	unsigned int hash;
>>   	int master = DLM_LOCK_RES_OWNER_UNKNOWN;
>>   	u32 flags = DLM_ASSERT_MASTER_REQUERY;
>> +	int ret, retries = 0;
>> +
>> +#define DISPATCH_ASSERT_RETRY_TIMES 3
>>   
>>   	if (!dlm_grab(dlm)) {
>>   		/* since the domain has gone away on this
>> @@ -1685,18 +1688,30 @@ int dlm_master_requery_handler(struct o2net_msg *msg, u32 len, void *data,
>>   
>>   	hash = dlm_lockid_hash(req->name, req->namelen);
>>   
>> +retry:
>>   	spin_lock(&dlm->spinlock);
>>   	res = __dlm_lookup_lockres(dlm, req->name, req->namelen, hash);
>>   	if (res) {
>>   		spin_lock(&res->spinlock);
>>   		master = res->owner;
>>   		if (master == dlm->node_num) {
>> -			int ret = dlm_dispatch_assert_master(dlm, res,
>> -							     0, 0, flags);
>> +			ret = dlm_dispatch_assert_master(dlm, res,
>> +					0, 0, flags);
>>   			if (ret < 0) {
>> -				mlog_errno(-ENOMEM);
>> -				/* retry!? */
>> -				BUG();
>> +				mlog_errno(ret);
>> +
>> +				/* ENOMEM returns, retry until
>> +				 * DISPATCH_ASSERT_RETRY_TIMES reached */
>> +				if (retries < DISPATCH_ASSERT_RETRY_TIMES) {
>> +					spin_unlock(&res->spinlock);
>> +					dlm_lockres_put(res);
>> +					spin_unlock(&dlm->spinlock);
>> +					msleep(50);
>> +					retries++;
>> +					goto retry;
>> +				} else {
>> +					BUG();
>> +				}
> urgh, this is not good.  dlm_dispatch_assert_master() uses GFP_ATOMIC
> and the chances of that failing are relatively high.  The msleep()
> might save us, but what happens if we cannot get more memory until some
> writeback occurs to this filesystem?
>
> It would be much better to use GFP_KERNEL in
> dlm_dispatch_assert_master().  That means preallocating the
> dlm_work_item in the caller before taking the spinlock.
>
Though using GFP_KERNEL is safe, it may block all o2cb network
message processing (I detailed this in a separate thread). I think the
original GFP_ATOMIC was chosen for that reason. So I still suggest we find
a way to return an EAGAIN-like error code to the peer when memory cannot
be allocated, and have the peer retry on receiving that error.

thanks
wengang
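
Wengang's alternative, pushing the retry out to the requesting peer, might look roughly like this userspace sketch. The names are illustrative, not the real o2net API, and a counter simulates transient ENOMEM:

```c
#include <errno.h>

static int allocs_until_success;	/* simulates transient ENOMEM */

/* Handler side: on allocation failure, report -EAGAIN to the peer
 * instead of retrying (or calling BUG()) locally. */
static int handle_requery(void)
{
	if (allocs_until_success > 0) {
		allocs_until_success--;
		return -EAGAIN;		/* tell the peer to try again */
	}
	return 0;			/* work item dispatched */
}

/* Peer side: resend the requery while the handler reports -EAGAIN. */
static int peer_requery_with_retry(int max_tries)
{
	int ret = -EAGAIN;
	for (int i = 0; i < max_tries && ret == -EAGAIN; i++)
		ret = handle_requery();	/* real code: resend o2net msg */
	return ret;
}
```

The trade-off versus local retry is that the handler never sleeps, at the cost of extra network round-trips and a protocol change on the wire.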


* [Ocfs2-devel] [PATCH v2] ocfs2: retry once dlm_dispatch_assert_master failed with ENOMEM
  2014-04-10  1:16   ` Wengang
@ 2014-04-10 22:14     ` Andrew Morton
  2014-04-11  1:32       ` Wengang
  0 siblings, 1 reply; 5+ messages in thread
From: Andrew Morton @ 2014-04-10 22:14 UTC (permalink / raw)
  To: ocfs2-devel

On Thu, 10 Apr 2014 09:16:13 +0800 Wengang <wen.gang.wang@oracle.com> wrote:

> >> @@ -1685,18 +1688,30 @@ int dlm_master_requery_handler(struct o2net_msg *msg, u32 len, void *data,
> >>   
> >>   	hash = dlm_lockid_hash(req->name, req->namelen);
> >>   
> >> +retry:
> >>   	spin_lock(&dlm->spinlock);
> >>   	res = __dlm_lookup_lockres(dlm, req->name, req->namelen, hash);
> >>   	if (res) {
> >>   		spin_lock(&res->spinlock);
> >>   		master = res->owner;
> >>   		if (master == dlm->node_num) {
> >> -			int ret = dlm_dispatch_assert_master(dlm, res,
> >> -							     0, 0, flags);
> >> +			ret = dlm_dispatch_assert_master(dlm, res,
> >> +					0, 0, flags);
> >>   			if (ret < 0) {
> >> -				mlog_errno(-ENOMEM);
> >> -				/* retry!? */
> >> -				BUG();
> >> +				mlog_errno(ret);
> >> +
> >> +				/* ENOMEM returns, retry until
> >> +				 * DISPATCH_ASSERT_RETRY_TIMES reached */
> >> +				if (retries < DISPATCH_ASSERT_RETRY_TIMES) {
> >> +					spin_unlock(&res->spinlock);
> >> +					dlm_lockres_put(res);
> >> +					spin_unlock(&dlm->spinlock);
> >> +					msleep(50);
> >> +					retries++;
> >> +					goto retry;
> >> +				} else {
> >> +					BUG();
> >> +				}
> > urgh, this is not good.  dlm_dispatch_assert_master() uses GFP_ATOMIC
> > and the chances of that failing are relatively high.  The msleep()
> > might save us, but what happens if we cannot get more memory until some
> > writeback occurs to this filesystem?
> >
> > It would be much better to use GFP_KERNEL in
> > dlm_dispatch_assert_master().  That means preallocating the
> > dlm_work_item in the caller before taking the spinlock.
> >
> Though to use GFP_KERNEL is safe, it may block the whole o2cb network 
> message processing(I detailed this in a separated thread).

The code you're proposing can stall for 100ms.  Does GFP_KERNEL ever
take longer than that?

> I think the original GFP_ATOMIC is for that purpose.

Well.  We shouldn't "think" - we should know.  And the way we know is
by adding code comments.  The comments in your proposed patch don't
improve things because they describe "what" the code is doing (which
was obvious anyway) but they do not explain "why".

> So I still persist we find a 
> way to return a EAGAIN-like error code to peer in case of no-mem, and 
> peer retries on receiving the EAGAIN-like error.

There are all sorts of ways around this.  For example, try the
allocation with GFP_ATOMIC|__GFP_NOWARN.  In the very rare case where
this fails, fall back to GFP_KERNEL.  This is probably equivalent to
using GFP_KERNEL|__GFP_HIGH.
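
The two-stage allocation Andrew describes can be sketched in userspace C as below. fast_alloc_fails simulates GFP_ATOMIC failing under memory pressure; malloc stands in for kzalloc, and the names are illustrative:

```c
#include <stdlib.h>

static int fast_alloc_fails;	/* simulates atomic-pool exhaustion */
static int slow_path_taken;

/* Stand-in for kzalloc(n, GFP_ATOMIC | __GFP_NOWARN): cheap,
 * non-sleeping, allowed to fail quietly. */
static void *alloc_atomic_nowarn(size_t n)
{
	if (fast_alloc_fails)
		return NULL;
	return malloc(n);
}

/* Stand-in for kzalloc(n, GFP_KERNEL): may sleep and reclaim. */
static void *alloc_kernel(size_t n)
{
	slow_path_taken = 1;
	return malloc(n);
}

static void *alloc_work_item(size_t n)
{
	void *p = alloc_atomic_nowarn(n);
	if (!p)
		p = alloc_kernel(n);	/* rare fallback */
	return p;
}
```

Note the fallback only works if the caller is in a context where sleeping is allowed, which is why it pairs with moving the allocation out from under the spinlock.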


* [Ocfs2-devel] [PATCH v2] ocfs2: retry once dlm_dispatch_assert_master failed with ENOMEM
  2014-04-10 22:14     ` Andrew Morton
@ 2014-04-11  1:32       ` Wengang
  0 siblings, 0 replies; 5+ messages in thread
From: Wengang @ 2014-04-11  1:32 UTC (permalink / raw)
  To: ocfs2-devel

On 2014/04/11 06:14, Andrew Morton wrote:
> On Thu, 10 Apr 2014 09:16:13 +0800 Wengang <wen.gang.wang@oracle.com> wrote:
>
>>>> @@ -1685,18 +1688,30 @@ int dlm_master_requery_handler(struct o2net_msg *msg, u32 len, void *data,
>>>>    
>>>>    	hash = dlm_lockid_hash(req->name, req->namelen);
>>>>    
>>>> +retry:
>>>>    	spin_lock(&dlm->spinlock);
>>>>    	res = __dlm_lookup_lockres(dlm, req->name, req->namelen, hash);
>>>>    	if (res) {
>>>>    		spin_lock(&res->spinlock);
>>>>    		master = res->owner;
>>>>    		if (master == dlm->node_num) {
>>>> -			int ret = dlm_dispatch_assert_master(dlm, res,
>>>> -							     0, 0, flags);
>>>> +			ret = dlm_dispatch_assert_master(dlm, res,
>>>> +					0, 0, flags);
>>>>    			if (ret < 0) {
>>>> -				mlog_errno(-ENOMEM);
>>>> -				/* retry!? */
>>>> -				BUG();
>>>> +				mlog_errno(ret);
>>>> +
>>>> +				/* ENOMEM returns, retry until
>>>> +				 * DISPATCH_ASSERT_RETRY_TIMES reached */
>>>> +				if (retries < DISPATCH_ASSERT_RETRY_TIMES) {
>>>> +					spin_unlock(&res->spinlock);
>>>> +					dlm_lockres_put(res);
>>>> +					spin_unlock(&dlm->spinlock);
>>>> +					msleep(50);
>>>> +					retries++;
>>>> +					goto retry;
>>>> +				} else {
>>>> +					BUG();
>>>> +				}
>>> urgh, this is not good.  dlm_dispatch_assert_master() uses GFP_ATOMIC
>>> and the chances of that failing are relatively high.  The msleep()
>>> might save us, but what happens if we cannot get more memory until some
>>> writeback occurs to this filesystem?
>>>
>>> It would be much better to use GFP_KERNEL in
>>> dlm_dispatch_assert_master().  That means preallocating the
>>> dlm_work_item in the caller before taking the spinlock.
>>>
>> Though to use GFP_KERNEL is safe, it may block the whole o2cb network
>> message processing(I detailed this in a separated thread).
> The code you're proposing can stall for 100ms.  Does GFP_KERNEL ever
> take longer than that?
>

I'm not sure how long an allocation with GFP_KERNEL would take, but on a
heavily loaded system, couldn't it take more than 100ms?

>> I think the original GFP_ATOMIC is for that purpose.
> Well.  We shouldn't "think" - we should know.  And the way we know is
> by adding code comments.  The comments in your proposed patch don't
> improve things because they describe "what" the code is doing (which
> was obvious anyway) but they do not explain "why".

This is not my patch; I was just giving my opinion, and I did not ACK 
it. I think that without making the peer retry, this patch doesn't 
look that meaningful.

>> So I still persist we find a
>> way to return a EAGAIN-like error code to peer in case of no-mem, and
>> peer retries on receiving the EAGAIN-like error.
> There are all sorts of ways around this.  For example, try the
> allocation with GFP_ATOMIC|__GFP_NOWARN.  In the very rare case where
> this fails, fall back to GFP_KERNEL.  This is probably equivalent to
> using GFP_KERNEL|__GFP_HIGH.
>
>
Sure, I was suggesting that to the patch owner. If we are sure GFP_KERNEL 
won't take longer than 100ms (or 200ms, or something like that, as long 
as it is not very long), I agree with GFP_KERNEL; if not, we'd better 
let the peer node retry.

I am also wondering whether there could be a timeout version of the 
kzalloc family, but a cheap one, so that we could fall back to it when 
a NOFS allocation fails.

thanks,
wengang

