From: "Benjamin Coddington" 
To: "Trond Myklebust" 
Cc: "anna.schumaker@netapp.com" , "linux-nfs@vger.kernel.org" 
Subject: Re: [PATCH v2] NFSv4: Don't add a new lock on an interrupted wait for LOCK
Date: Wed, 23 Aug 2017 16:25:42 -0400
Message-ID: 
In-Reply-To: <1503519306.78164.1.camel@primarydata.com>
References: <1501618320.4702.14.camel@redhat.com> <1503519306.78164.1.camel@primarydata.com>
MIME-Version: 1.0
Sender: linux-nfs-owner@vger.kernel.org
List-ID: 

On 23 Aug 2017, at 16:15, Trond Myklebust wrote:

> On Wed, 2017-08-23 at 16:11 -0400, Benjamin Coddington wrote:
>> Ping on this one as well -- it was buried in a thread:
>>
>> Ben
>>
>> On 2 Aug 2017, at 7:27, Benjamin Coddington wrote:
>>
>>> If the wait for a LOCK operation is interrupted, and then the file is
>>> closed, the locks cleanup code will assume that no new locks will be
>>> added to the inode after it has completed.  We already have a mechanism
>>> to detect if there was a signal, so let's use that to avoid recreating
>>> the local lock once the RPC completes.
>>>
>>> Signed-off-by: Benjamin Coddington 
>>> Reviewed-by: Jeff Layton 
>>> ---
>>>  fs/nfs/nfs4proc.c | 2 +-
>>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>>
>>> diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
>>> index dbfa18900e25..5256f429c268 100644
>>> --- a/fs/nfs/nfs4proc.c
>>> +++ b/fs/nfs/nfs4proc.c
>>> @@ -6100,7 +6100,7 @@ static void nfs4_lock_done(struct rpc_task *task, void *calldata)
>>>  	case 0:
>>>  		renew_lease(NFS_SERVER(d_inode(data->ctx->dentry)),
>>>  				data->timestamp);
>>> -		if (data->arg.new_lock) {
>>> +		if (data->arg.new_lock && !data->cancelled) {
>>>  			data->fl.fl_flags &= ~(FL_SLEEP | FL_ACCESS);
>>>  			if (locks_lock_inode_wait(lsp->ls_state->inode, &data->fl) < 0) {
>>>  				rpc_restart_call_prepare(task);
>>>
>
> Why do we want to special case '0'? Surely we don't want to restart the
> RPC call for any of the error cases if data->cancelled is set.

We don't want to add the local lock if data->cancelled is set.  It's
possible that the file has already been closed and the locks cleanup code
has removed all of the local locks, so if this path races in and adds a
lock, we end up with one left over.

I don't understand what you mean about restarting the RPC call -- we'd only
restart the RPC call here if there was an error adding the local lock.

Ben
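
(For clarity, here's roughly how the case 0 branch of nfs4_lock_done() reads
with the hunk above applied -- a condensed sketch rather than a verbatim copy
of the tree; the stateid handling at the end of the case is elided:)

	case 0:
		renew_lease(NFS_SERVER(d_inode(data->ctx->dentry)),
				data->timestamp);
		/*
		 * Skip recreating the local lock if the wait was interrupted
		 * (data->cancelled): the file may already have been closed
		 * and its locks cleaned up, so adding one now would leave a
		 * stray lock behind.
		 */
		if (data->arg.new_lock && !data->cancelled) {
			data->fl.fl_flags &= ~(FL_SLEEP | FL_ACCESS);
			if (locks_lock_inode_wait(lsp->ls_state->inode,
						  &data->fl) < 0) {
				rpc_restart_call_prepare(task);
				break;
			}
		}
		/* ... remainder of the case (stateid handling) elided ... */
		break;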