* [PATCH 0/4] Skipped unlocks @ 2017-02-17 18:37 Benjamin Coddington 2017-02-17 18:37 ` [PATCH 1/4] NFS4: remove a redundant lock range check Benjamin Coddington ` (3 more replies) 0 siblings, 4 replies; 14+ messages in thread From: Benjamin Coddington @ 2017-02-17 18:37 UTC (permalink / raw) To: Trond Myklebust, Anna Schumaker; +Cc: linux-nfs Well over a year ago I made other attempts to fix the problem of NFS failing to send an unlock when signalled. Those attempts were terrible. Here's another, smaller version that keeps two simple fixes and splits out the behavior for waiting on I/O completion to be different for the v3 and v4 case. For v4, we can easily ensure the unlock is always sent by having the rpc_task perform the wait. For v3, that approach would require significant changes to the way NLM works, so a simpler approach of skipping the wait is used as was suggested by Trond. I think this is probably a little late for 4.11. Comments and review are welcomed. Benjamin Coddington (4): NFS4: remove a redundant lock range check NFS: Move the flock open mode check into nfs_flock() locks: Set FL_CLOSE when removing flock locks on close() NFS: Always wait for I/O completion before unlock fs/locks.c | 2 +- fs/nfs/file.c | 33 +++++++++++++++++---------------- fs/nfs/nfs3proc.c | 13 +++++++++++++ fs/nfs/nfs4proc.c | 24 +++++++----------------- fs/nfs/pagelist.c | 1 + 5 files changed, 39 insertions(+), 34 deletions(-) -- 2.9.3 ^ permalink raw reply [flat|nested] 14+ messages in thread
* [PATCH 1/4] NFS4: remove a redundant lock range check 2017-02-17 18:37 [PATCH 0/4] Skipped unlocks Benjamin Coddington @ 2017-02-17 18:37 ` Benjamin Coddington 2017-02-17 18:37 ` [PATCH 2/4] NFS: Move the flock open mode check into nfs_flock() Benjamin Coddington ` (2 subsequent siblings) 3 siblings, 0 replies; 14+ messages in thread From: Benjamin Coddington @ 2017-02-17 18:37 UTC (permalink / raw) To: Trond Myklebust, Anna Schumaker; +Cc: linux-nfs flock64_to_posix_lock() is already doing this check Signed-off-by: Benjamin Coddington <bcodding@redhat.com> Reviewed-by: Jeff Layton <jeff.layton@primarydata.com> --- fs/nfs/nfs4proc.c | 3 --- 1 file changed, 3 deletions(-) diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c index 0a0eaecf9676..9388899e4050 100644 --- a/fs/nfs/nfs4proc.c +++ b/fs/nfs/nfs4proc.c @@ -6570,9 +6570,6 @@ nfs4_proc_lock(struct file *filp, int cmd, struct file_lock *request) ctx = nfs_file_open_context(filp); state = ctx->state; - if (request->fl_start < 0 || request->fl_end < 0) - return -EINVAL; - if (IS_GETLK(cmd)) { if (state != NULL) return nfs4_proc_getlk(state, F_GETLK, request); -- 2.9.3 ^ permalink raw reply related [flat|nested] 14+ messages in thread
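The redundancy the patch removes can be modeled in a few lines (illustrative stand-in code, not the kernel's definitions): flock64_to_posix_lock() rejects a lock whose computed start or end is negative at the VFS layer, before the filesystem's ->lock() operation (here nfs4_proc_lock) is ever called, so the deleted check could never trigger.

```c
#include <assert.h>

#define EINVAL 22 /* illustrative stand-in for the kernel's errno */

/* Simplified model of the range validation flock64_to_posix_lock()
 * performs before ->lock() runs: negative start or end is rejected
 * with -EINVAL, so nfs4_proc_lock() never sees such a request. */
static int vfs_posix_range_check(long long fl_start, long long fl_end)
{
	if (fl_start < 0 || fl_end < 0)
		return -EINVAL;
	return 0;
}
```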
* [PATCH 2/4] NFS: Move the flock open mode check into nfs_flock() 2017-02-17 18:37 [PATCH 0/4] Skipped unlocks Benjamin Coddington 2017-02-17 18:37 ` [PATCH 1/4] NFS4: remove a redundant lock range check Benjamin Coddington @ 2017-02-17 18:37 ` Benjamin Coddington 2017-02-17 18:37 ` [PATCH 3/4] locks: Set FL_CLOSE when removing flock locks on close() Benjamin Coddington 2017-02-17 18:37 ` [PATCH 4/4] NFS: Always wait for I/O completion before unlock Benjamin Coddington 3 siblings, 0 replies; 14+ messages in thread From: Benjamin Coddington @ 2017-02-17 18:37 UTC (permalink / raw) To: Trond Myklebust, Anna Schumaker; +Cc: linux-nfs We only need to check lock exclusive/shared types against open mode when flock() is used on NFS, so move it into the flock-specific path instead of checking it for all locks. Signed-off-by: Benjamin Coddington <bcodding@redhat.com> --- fs/nfs/file.c | 20 +++++++++++++++++--- fs/nfs/nfs4proc.c | 14 -------------- 2 files changed, 17 insertions(+), 17 deletions(-) diff --git a/fs/nfs/file.c b/fs/nfs/file.c index 26dbe8b0c10d..a1499f3e8f7a 100644 --- a/fs/nfs/file.c +++ b/fs/nfs/file.c @@ -820,9 +820,23 @@ int nfs_flock(struct file *filp, int cmd, struct file_lock *fl) if (NFS_SERVER(inode)->flags & NFS_MOUNT_LOCAL_FLOCK) is_local = 1; - /* We're simulating flock() locks using posix locks on the server */ - if (fl->fl_type == F_UNLCK) - return do_unlk(filp, cmd, fl, is_local); + /* + * VFS doesn't require the open mode to match a flock() lock's type. + * NFS, however, may simulate flock() locking with posix locking which + * requires the open mode to match the lock type. 
+ */ + switch (fl->fl_type) { + case F_UNLCK: + return do_unlk(filp, cmd, fl, is_local); + case F_RDLCK: + if (!(filp->f_mode & FMODE_READ)) + return -EBADF; + break; + case F_WRLCK: + if (!(filp->f_mode & FMODE_WRITE)) + return -EBADF; + } + return do_setlk(filp, cmd, fl, is_local); } EXPORT_SYMBOL_GPL(nfs_flock); diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c index 9388899e4050..91f88bfbbe79 100644 --- a/fs/nfs/nfs4proc.c +++ b/fs/nfs/nfs4proc.c @@ -6592,20 +6592,6 @@ nfs4_proc_lock(struct file *filp, int cmd, struct file_lock *request) !test_bit(NFS_STATE_POSIX_LOCKS, &state->flags)) return -ENOLCK; - /* - * Don't rely on the VFS having checked the file open mode, - * since it won't do this for flock() locks. - */ - switch (request->fl_type) { - case F_RDLCK: - if (!(filp->f_mode & FMODE_READ)) - return -EBADF; - break; - case F_WRLCK: - if (!(filp->f_mode & FMODE_WRITE)) - return -EBADF; - } - status = nfs4_set_lock_state(state, request); if (status != 0) return status; -- 2.9.3 ^ permalink raw reply related [flat|nested] 14+ messages in thread
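The mode check this patch moves into nfs_flock() can be sketched in isolation (constants are illustrative stand-ins, not the kernel headers' values): a shared lock requires the file to be open for reading, an exclusive lock requires it open for writing, and an unlock is always permitted.

```c
#include <assert.h>

/* Illustrative stand-ins for the kernel's definitions. */
#define FMODE_READ  0x1
#define FMODE_WRITE 0x2
#define F_RDLCK 0
#define F_WRLCK 1
#define F_UNLCK 2
#define EBADF   9

/* Sketch of the check nfs_flock() performs before simulating a
 * flock() lock with a posix lock on the server: the lock type must
 * be compatible with the open mode. */
static int flock_mode_check(int fl_type, int f_mode)
{
	switch (fl_type) {
	case F_UNLCK:
		return 0;
	case F_RDLCK:
		return (f_mode & FMODE_READ) ? 0 : -EBADF;
	case F_WRLCK:
		return (f_mode & FMODE_WRITE) ? 0 : -EBADF;
	}
	return 0;
}
```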
* [PATCH 3/4] locks: Set FL_CLOSE when removing flock locks on close() 2017-02-17 18:37 [PATCH 0/4] Skipped unlocks Benjamin Coddington 2017-02-17 18:37 ` [PATCH 1/4] NFS4: remove a redundant lock range check Benjamin Coddington 2017-02-17 18:37 ` [PATCH 2/4] NFS: Move the flock open mode check into nfs_flock() Benjamin Coddington @ 2017-02-17 18:37 ` Benjamin Coddington 2017-02-17 18:37 ` [PATCH 4/4] NFS: Always wait for I/O completion before unlock Benjamin Coddington 3 siblings, 0 replies; 14+ messages in thread From: Benjamin Coddington @ 2017-02-17 18:37 UTC (permalink / raw) To: Trond Myklebust, Anna Schumaker; +Cc: linux-nfs Set FL_CLOSE in fl_flags as in locks_remove_posix() when clearing locks. NFS will check for this flag to ensure an unlock is sent in a following patch. Signed-off-by: Benjamin Coddington <bcodding@redhat.com> --- fs/locks.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/fs/locks.c b/fs/locks.c index 26811321d39b..af2031a1fcff 100644 --- a/fs/locks.c +++ b/fs/locks.c @@ -2504,7 +2504,7 @@ locks_remove_flock(struct file *filp, struct file_lock_context *flctx) .fl_owner = filp, .fl_pid = current->tgid, .fl_file = filp, - .fl_flags = FL_FLOCK, + .fl_flags = FL_FLOCK | FL_CLOSE, .fl_type = F_UNLCK, .fl_end = OFFSET_MAX, }; -- 2.9.3 ^ permalink raw reply related [flat|nested] 14+ messages in thread
* [PATCH 4/4] NFS: Always wait for I/O completion before unlock 2017-02-17 18:37 [PATCH 0/4] Skipped unlocks Benjamin Coddington ` (2 preceding siblings ...) 2017-02-17 18:37 ` [PATCH 3/4] locks: Set FL_CLOSE when removing flock locks on close() Benjamin Coddington @ 2017-02-17 18:37 ` Benjamin Coddington 3 siblings, 0 replies; 14+ messages in thread From: Benjamin Coddington @ 2017-02-17 18:37 UTC (permalink / raw) To: Trond Myklebust, Anna Schumaker; +Cc: linux-nfs NFS attempts to wait for read and write completion before unlocking in order to ensure that the data returned was protected by the lock. When this waiting is interrupted by a signal, the unlock may never be sent, and messages similar to the following are seen in the kernel ring buffer: [20.167876] Leaked locks on dev=0x0:0x2b ino=0x8dd4c3: [20.168286] POSIX: fl_owner=ffff880078b06940 fl_flags=0x1 fl_type=0x0 fl_pid=20183 [20.168727] POSIX: fl_owner=ffff880078b06680 fl_flags=0x1 fl_type=0x0 fl_pid=20185 For NFSv3, the missing unlock will cause the server to refuse conflicting locks indefinitely. For NFSv4, the leftover lock will be removed by the server after the lease timeout. This patch fixes this for NFSv3 by skipping the wait in order to immediately send the unlock if the FL_CLOSE flag is set when signaled. For NFSv4, this approach may cause the server to see the I/O as arriving with an old stateid, so, for the v4 case the fix is different: the wait on I/O completion is moved into nfs4_locku_ops' rpc_call_prepare(). This will cause the sleep to happen in rpciod context, and a signal to the originally waiting process will not cause the unlock to be skipped. 
Signed-off-by: Benjamin Coddington <bcodding@redhat.com> --- fs/nfs/file.c | 13 ------------- fs/nfs/nfs3proc.c | 13 +++++++++++++ fs/nfs/nfs4proc.c | 7 +++++++ fs/nfs/pagelist.c | 1 + 4 files changed, 21 insertions(+), 13 deletions(-) diff --git a/fs/nfs/file.c b/fs/nfs/file.c index a1499f3e8f7a..4707a35a5dea 100644 --- a/fs/nfs/file.c +++ b/fs/nfs/file.c @@ -684,7 +684,6 @@ static int do_unlk(struct file *filp, int cmd, struct file_lock *fl, int is_local) { struct inode *inode = filp->f_mapping->host; - struct nfs_lock_context *l_ctx; int status; /* @@ -693,18 +692,6 @@ do_unlk(struct file *filp, int cmd, struct file_lock *fl, int is_local) */ vfs_fsync(filp, 0); - l_ctx = nfs_get_lock_context(nfs_file_open_context(filp)); - if (!IS_ERR(l_ctx)) { - status = nfs_iocounter_wait(l_ctx); - nfs_put_lock_context(l_ctx); - if (status < 0) - return status; - } - - /* NOTE: special case - * If we're signalled while cleaning up locks on process exit, we - * still need to complete the unlock. - */ /* * Use local locking if mounted with "-onolock" or with appropriate * "-olocal_lock=" diff --git a/fs/nfs/nfs3proc.c b/fs/nfs/nfs3proc.c index dc925b531f32..ec3f12571c82 100644 --- a/fs/nfs/nfs3proc.c +++ b/fs/nfs/nfs3proc.c @@ -869,6 +869,19 @@ static int nfs3_proc_lock(struct file *filp, int cmd, struct file_lock *fl) { struct inode *inode = file_inode(filp); + int status; + struct nfs_lock_context *l_ctx; + + /* For v3, always send the unlock on FL_CLOSE */ + if (fl->fl_type == F_UNLCK) { + l_ctx = nfs_get_lock_context(nfs_file_open_context(filp)); + if (!IS_ERR(l_ctx)) { + status = nfs_iocounter_wait(l_ctx); + nfs_put_lock_context(l_ctx); + if (status < 0 && !(fl->fl_flags & FL_CLOSE)) + return status; + } + } return nlmclnt_proc(NFS_SERVER(inode)->nlm_host, cmd, fl); } diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c index 91f88bfbbe79..052b97fd653d 100644 --- a/fs/nfs/nfs4proc.c +++ b/fs/nfs/nfs4proc.c @@ -5906,6 +5906,7 @@ struct nfs4_unlockdata { struct nfs_locku_res 
res; struct nfs4_lock_state *lsp; struct nfs_open_context *ctx; + struct nfs_lock_context *l_ctx; struct file_lock fl; struct nfs_server *server; unsigned long timestamp; @@ -5930,6 +5931,7 @@ static struct nfs4_unlockdata *nfs4_alloc_unlockdata(struct file_lock *fl, atomic_inc(&lsp->ls_count); /* Ensure we don't close file until we're done freeing locks! */ p->ctx = get_nfs_open_context(ctx); + p->l_ctx = nfs_get_lock_context(ctx); memcpy(&p->fl, fl, sizeof(p->fl)); p->server = NFS_SERVER(inode); return p; @@ -5988,6 +5990,11 @@ static void nfs4_locku_prepare(struct rpc_task *task, void *data) /* Note: exit _without_ running nfs4_locku_done */ goto out_no_action; } + + if (!IS_ERR(calldata->l_ctx)) { + nfs_iocounter_wait(calldata->l_ctx); + nfs_put_lock_context(calldata->l_ctx); + } calldata->timestamp = jiffies; if (nfs4_setup_sequence(calldata->server, &calldata->arg.seq_args, diff --git a/fs/nfs/pagelist.c b/fs/nfs/pagelist.c index 6e629b856a00..8bf3cefdaa96 100644 --- a/fs/nfs/pagelist.c +++ b/fs/nfs/pagelist.c @@ -114,6 +114,7 @@ nfs_iocounter_wait(struct nfs_lock_context *l_ctx) return wait_on_atomic_t(&l_ctx->io_count, nfs_wait_atomic_killable, TASK_KILLABLE); } +EXPORT_SYMBOL(nfs_iocounter_wait); /* * nfs_page_group_lock - lock the head of the page group -- 2.9.3 ^ permalink raw reply related [flat|nested] 14+ messages in thread
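The v3 decision this patch adds to nfs3_proc_lock() reduces to one predicate (flag and errno values below are illustrative stand-ins): an interrupted wait for I/O completion aborts the unlock only when the lock is not being dropped as part of close(); with FL_CLOSE set, the unlock is sent regardless, so the server is not left holding a leaked lock.

```c
#include <assert.h>

/* Illustrative stand-ins for the kernel's values. */
#define FL_CLOSE    0x40
#define ERESTARTSYS 512

/* Sketch of the nfs3_proc_lock() logic: abandon the unlock on an
 * interrupted nfs_iocounter_wait() only when FL_CLOSE is NOT set. */
static int abort_unlock(int wait_status, int fl_flags)
{
	return wait_status < 0 && !(fl_flags & FL_CLOSE);
}
```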
* [PATCH v2 0/4] Skipped unlocks @ 2017-02-17 18:46 Benjamin Coddington 2017-02-17 18:46 ` [PATCH 3/4] locks: Set FL_CLOSE when removing flock locks on close() Benjamin Coddington 0 siblings, 1 reply; 14+ messages in thread From: Benjamin Coddington @ 2017-02-17 18:46 UTC (permalink / raw) To: Trond Myklebust, Anna Schumaker; +Cc: linux-nfs Well over a year ago I made other attempts to fix the problem of NFS failing to send an unlock when signalled. Those attempts were terrible. Here's another, smaller version that keeps two simple fixes and splits out the behavior for waiting on I/O completion to be different for the v3 and v4 case. For v4, we can easily ensure the unlock is always sent by having the rpc_task perform the wait. For v3, that approach would require significant changes to the way NLM works, so a simpler approach of skipping the wait is used as was suggested by Trond. I think this is probably a little late for 4.11. Comments and review are welcomed. since v1: - add Christoph's reviewed-by on 1/4 and 2/4 and fixup switch indentation on 2/4 Benjamin Coddington (4): NFS4: remove a redundant lock range check NFS: Move the flock open mode check into nfs_flock() locks: Set FL_CLOSE when removing flock locks on close() NFS: Always wait for I/O completion before unlock fs/locks.c | 2 +- fs/nfs/file.c | 31 ++++++++++++++++--------------- fs/nfs/nfs3proc.c | 13 +++++++++++++ fs/nfs/nfs4proc.c | 24 +++++++----------------- fs/nfs/pagelist.c | 1 + 5 files changed, 38 insertions(+), 33 deletions(-) -- 2.9.3 ^ permalink raw reply [flat|nested] 14+ messages in thread
* [PATCH 3/4] locks: Set FL_CLOSE when removing flock locks on close() 2017-02-17 18:46 [PATCH v2 0/4] Skipped unlocks Benjamin Coddington @ 2017-02-17 18:46 ` Benjamin Coddington 0 siblings, 0 replies; 14+ messages in thread From: Benjamin Coddington @ 2017-02-17 18:46 UTC (permalink / raw) To: Trond Myklebust, Anna Schumaker; +Cc: linux-nfs Set FL_CLOSE in fl_flags as in locks_remove_posix() when clearing locks. NFS will check for this flag to ensure an unlock is sent in a following patch. Signed-off-by: Benjamin Coddington <bcodding@redhat.com> --- fs/locks.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/fs/locks.c b/fs/locks.c index 26811321d39b..af2031a1fcff 100644 --- a/fs/locks.c +++ b/fs/locks.c @@ -2504,7 +2504,7 @@ locks_remove_flock(struct file *filp, struct file_lock_context *flctx) .fl_owner = filp, .fl_pid = current->tgid, .fl_file = filp, - .fl_flags = FL_FLOCK, + .fl_flags = FL_FLOCK | FL_CLOSE, .fl_type = F_UNLCK, .fl_end = OFFSET_MAX, }; -- 2.9.3 ^ permalink raw reply related [flat|nested] 14+ messages in thread
* [PATCH v3 0/4] Skipped unlocks @ 2017-02-21 15:39 Benjamin Coddington 2017-02-21 15:39 ` [PATCH 3/4] locks: Set FL_CLOSE when removing flock locks on close() Benjamin Coddington 0 siblings, 1 reply; 14+ messages in thread From: Benjamin Coddington @ 2017-02-21 15:39 UTC (permalink / raw) To: Trond Myklebust, Anna Schumaker; +Cc: linux-nfs Well over a year ago I made other attempts to fix the problem of NFS failing to send an unlock when signalled. Those attempts were terrible. Here's another, smaller version that keeps two simple fixes and takes the approach of skipping the wait if FL_CLOSE is set as was suggested by Trond. I think this is probably a little late for 4.11. Comments and review are welcomed. since v1: - add Christoph's reviewed-by on 1/4 and 2/4 and fixup switch indentation on 2/4 since v2: - don't sleep in rpciod to wait for I/O completion, just send the unlock immediately for both v3 and v4. Benjamin Coddington (4): NFS4: remove a redundant lock range check NFS: Move the flock open mode check into nfs_flock() locks: Set FL_CLOSE when removing flock locks on close() NFS: Always send an unlock for FL_CLOSE fs/locks.c | 2 +- fs/nfs/file.c | 28 +++++++++++++++++++++------- fs/nfs/nfs4proc.c | 17 ----------------- 3 files changed, 22 insertions(+), 25 deletions(-) -- 2.9.3 ^ permalink raw reply [flat|nested] 14+ messages in thread
* [PATCH 3/4] locks: Set FL_CLOSE when removing flock locks on close() 2017-02-21 15:39 [PATCH v3 0/4] Skipped unlocks Benjamin Coddington @ 2017-02-21 15:39 ` Benjamin Coddington 2017-02-22 12:13 ` Jeff Layton 0 siblings, 1 reply; 14+ messages in thread From: Benjamin Coddington @ 2017-02-21 15:39 UTC (permalink / raw) To: Trond Myklebust, Anna Schumaker; +Cc: linux-nfs Set FL_CLOSE in fl_flags as in locks_remove_posix() when clearing locks. NFS will check for this flag to ensure an unlock is sent in a following patch. Signed-off-by: Benjamin Coddington <bcodding@redhat.com> --- fs/locks.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/fs/locks.c b/fs/locks.c index 26811321d39b..af2031a1fcff 100644 --- a/fs/locks.c +++ b/fs/locks.c @@ -2504,7 +2504,7 @@ locks_remove_flock(struct file *filp, struct file_lock_context *flctx) .fl_owner = filp, .fl_pid = current->tgid, .fl_file = filp, - .fl_flags = FL_FLOCK, + .fl_flags = FL_FLOCK | FL_CLOSE, .fl_type = F_UNLCK, .fl_end = OFFSET_MAX, }; -- 2.9.3 ^ permalink raw reply related [flat|nested] 14+ messages in thread
* Re: [PATCH 3/4] locks: Set FL_CLOSE when removing flock locks on close() 2017-02-21 15:39 ` [PATCH 3/4] locks: Set FL_CLOSE when removing flock locks on close() Benjamin Coddington @ 2017-02-22 12:13 ` Jeff Layton [not found] ` <1487765584.2886.8.camel-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> 0 siblings, 1 reply; 14+ messages in thread From: Jeff Layton @ 2017-02-22 12:13 UTC (permalink / raw) To: Benjamin Coddington, Trond Myklebust, Anna Schumaker; +Cc: linux-nfs On Tue, 2017-02-21 at 10:39 -0500, Benjamin Coddington wrote: > Set FL_CLOSE in fl_flags as in locks_remove_posix() when clearing locks. > NFS will check for this flag to ensure an unlock is sent in a following > patch. > > Signed-off-by: Benjamin Coddington <bcodding@redhat.com> > --- > fs/locks.c | 2 +- > 1 file changed, 1 insertion(+), 1 deletion(-) > > diff --git a/fs/locks.c b/fs/locks.c > index 26811321d39b..af2031a1fcff 100644 > --- a/fs/locks.c > +++ b/fs/locks.c > @@ -2504,7 +2504,7 @@ locks_remove_flock(struct file *filp, struct file_lock_context *flctx) > .fl_owner = filp, > .fl_pid = current->tgid, > .fl_file = filp, > - .fl_flags = FL_FLOCK, > + .fl_flags = FL_FLOCK | FL_CLOSE, > .fl_type = F_UNLCK, > .fl_end = OFFSET_MAX, > }; Looks good. I'm fine with merging this one via the NFS tree, btw... Reviewed-by: Jeff Layton <jlayton@redhat.com> ^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH 3/4] locks: Set FL_CLOSE when removing flock locks on close() @ 2017-02-22 12:25 ` Jeff Layton 0 siblings, 0 replies; 14+ messages in thread From: Jeff Layton @ 2017-02-22 12:25 UTC (permalink / raw) To: Benjamin Coddington, Trond Myklebust, Anna Schumaker Cc: linux-nfs, linux-fsdevel, Miklos Szeredi On Wed, 2017-02-22 at 07:13 -0500, Jeff Layton wrote: > On Tue, 2017-02-21 at 10:39 -0500, Benjamin Coddington wrote: > > Set FL_CLOSE in fl_flags as in locks_remove_posix() when clearing locks. > > NFS will check for this flag to ensure an unlock is sent in a following > > patch. > > > > Signed-off-by: Benjamin Coddington <bcodding@redhat.com> > > --- > > fs/locks.c | 2 +- > > 1 file changed, 1 insertion(+), 1 deletion(-) > > > > diff --git a/fs/locks.c b/fs/locks.c > > index 26811321d39b..af2031a1fcff 100644 > > --- a/fs/locks.c > > +++ b/fs/locks.c > > @@ -2504,7 +2504,7 @@ locks_remove_flock(struct file *filp, struct file_lock_context *flctx) > > .fl_owner = filp, > > .fl_pid = current->tgid, > > .fl_file = filp, > > - .fl_flags = FL_FLOCK, > > + .fl_flags = FL_FLOCK | FL_CLOSE, > > .fl_type = F_UNLCK, > > .fl_end = OFFSET_MAX, > > }; > > Looks good. I'm fine with merging this one via the NFS tree, btw... > > Reviewed-by: Jeff Layton <jlayton@redhat.com> > (cc'ing linux-fsdevel and Miklos) Hmm, that said, this potentially may affect other filesystems. If you do any more postings of this set, please cc linux-fsdevel. In particular, I'll note that fuse has this: /* Unlock on close is handled by the flush method */ if (fl->fl_flags & FL_CLOSE) return 0; I don't see any lock handling in fuse_flush (though it does pass down a lockowner). I guess the expectation is that the userland driver should close down POSIX locks when the flush method is called. Miklos, does this also apply to flock locks? Would Ben's patch cause any grief here? Thanks, -- Jeff Layton <jlayton@redhat.com> ^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH 3/4] locks: Set FL_CLOSE when removing flock locks on close() @ 2017-02-22 13:25 ` Miklos Szeredi 0 siblings, 0 replies; 14+ messages in thread From: Miklos Szeredi @ 2017-02-22 13:25 UTC (permalink / raw) To: Jeff Layton Cc: Benjamin Coddington, Trond Myklebust, Anna Schumaker, Linux NFS list, linux-fsdevel On Wed, Feb 22, 2017 at 1:25 PM, Jeff Layton <jlayton@redhat.com> wrote: > On Wed, 2017-02-22 at 07:13 -0500, Jeff Layton wrote: >> On Tue, 2017-02-21 at 10:39 -0500, Benjamin Coddington wrote: >> > Set FL_CLOSE in fl_flags as in locks_remove_posix() when clearing locks. >> > NFS will check for this flag to ensure an unlock is sent in a following >> > patch. >> > >> > Signed-off-by: Benjamin Coddington <bcodding@redhat.com> >> > --- >> > fs/locks.c | 2 +- >> > 1 file changed, 1 insertion(+), 1 deletion(-) >> > >> > diff --git a/fs/locks.c b/fs/locks.c >> > index 26811321d39b..af2031a1fcff 100644 >> > --- a/fs/locks.c >> > +++ b/fs/locks.c >> > @@ -2504,7 +2504,7 @@ locks_remove_flock(struct file *filp, struct file_lock_context *flctx) >> > .fl_owner = filp, >> > .fl_pid = current->tgid, >> > .fl_file = filp, >> > - .fl_flags = FL_FLOCK, >> > + .fl_flags = FL_FLOCK | FL_CLOSE, >> > .fl_type = F_UNLCK, >> > .fl_end = OFFSET_MAX, >> > }; >> >> Looks good. I'm fine with merging this one via the NFS tree, btw... >> >> Reviewed-by: Jeff Layton <jlayton@redhat.com> >> > > (cc'ing linux-fsdevel and Miklos) > > Hmm, that said, this potentially may affect other filesystems. If you do > any more postings of this set, please cc linux-fsdevel. > > In particular, I'll note that fuse has this: > > /* Unlock on close is handled by the flush method */ > if (fl->fl_flags & FL_CLOSE) > return 0; > > I don't see any lock handling in fuse_flush (though it does pass down a > lockowner). I guess the expectation is that the userland driver should > close down POSIX locks when the flush method is called. Right. > > Miklos, does this also apply to flock locks? 
Would Ben's patch cause any > grief here? Yes, it would make the final close of file not unlock flocks on that open file. Simple fix is to change that condition to #define FL_CLOSE_POSIX (FL_POSIX | FL_CLOSE) if (fl->fl_flags & FL_CLOSE_POSIX == FL_CLOSE_POSIX) return 0; Thanks, Miklos ^ permalink raw reply [flat|nested] 14+ messages in thread
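One thing to watch in the condition suggested above: in C, `==` binds more tightly than `&`, so written without parentheses the test parses as `fl_flags & (FL_CLOSE_POSIX == FL_CLOSE_POSIX)`, i.e. `fl_flags & 1`, which wrongly matches any FL_POSIX lock even without FL_CLOSE (GCC's -Wparentheses flags exactly this pattern). A small sketch contrasts the two readings; flag values are illustrative stand-ins, not the kernel's:

```c
#include <assert.h>

/* Illustrative stand-ins for the kernel's flag values. */
#define FL_POSIX 0x1
#define FL_CLOSE 0x40
#define FL_CLOSE_POSIX (FL_POSIX | FL_CLOSE)

/* Without parentheses, == binds first, so this reduces to
 * (fl_flags & 1) — true for any FL_POSIX lock. */
static int as_written(int fl_flags)
{
	return fl_flags & FL_CLOSE_POSIX == FL_CLOSE_POSIX;
}

/* The intended test needs explicit parentheses around the mask. */
static int as_intended(int fl_flags)
{
	return (fl_flags & FL_CLOSE_POSIX) == FL_CLOSE_POSIX;
}
```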
* Re: [PATCH 3/4] locks: Set FL_CLOSE when removing flock locks on close() @ 2017-02-22 13:27 ` Benjamin Coddington 0 siblings, 0 replies; 14+ messages in thread From: Benjamin Coddington @ 2017-02-22 13:27 UTC (permalink / raw) To: Miklos Szeredi, Jeff Layton Cc: Trond Myklebust, Anna Schumaker, Linux NFS list, linux-fsdevel On 22 Feb 2017, at 8:25, Miklos Szeredi wrote: > On Wed, Feb 22, 2017 at 1:25 PM, Jeff Layton <jlayton@redhat.com> > wrote: >> On Wed, 2017-02-22 at 07:13 -0500, Jeff Layton wrote: >>> On Tue, 2017-02-21 at 10:39 -0500, Benjamin Coddington wrote: >>>> Set FL_CLOSE in fl_flags as in locks_remove_posix() when clearing >>>> locks. >>>> NFS will check for this flag to ensure an unlock is sent in a >>>> following >>>> patch. >>>> >>>> Signed-off-by: Benjamin Coddington <bcodding@redhat.com> >>>> --- >>>> fs/locks.c | 2 +- >>>> 1 file changed, 1 insertion(+), 1 deletion(-) >>>> >>>> diff --git a/fs/locks.c b/fs/locks.c >>>> index 26811321d39b..af2031a1fcff 100644 >>>> --- a/fs/locks.c >>>> +++ b/fs/locks.c >>>> @@ -2504,7 +2504,7 @@ locks_remove_flock(struct file *filp, struct >>>> file_lock_context *flctx) >>>> .fl_owner = filp, >>>> .fl_pid = current->tgid, >>>> .fl_file = filp, >>>> - .fl_flags = FL_FLOCK, >>>> + .fl_flags = FL_FLOCK | FL_CLOSE, >>>> .fl_type = F_UNLCK, >>>> .fl_end = OFFSET_MAX, >>>> }; >>> >>> Looks good. I'm fine with merging this one via the NFS tree, btw... >>> >>> Reviewed-by: Jeff Layton <jlayton@redhat.com> >>> >> >> (cc'ing linux-fsdevel and Miklos) >> >> Hmm, that said, this potentially may affect other filesystems. If you >> do >> any more postings of this set, please cc linux-fsdevel. >> >> In particular, I'll note that fuse has this: >> >> /* Unlock on close is handled by the flush method */ >> if (fl->fl_flags & FL_CLOSE) >> return 0; >> >> I don't see any lock handling in fuse_flush (though it does pass down >> a >> lockowner). 
I guess the expectation >> should >> close down POSIX locks when the flush method is called. > Right. > >> >> Miklos, does this also apply to flock locks? Would Ben's patch cause >> any >> grief here? > > Yes, it would make the final close of file not unlock flocks on that > open file. > > Simple fix is to change that condition to > > #define FL_CLOSE_POSIX (FL_POSIX | FL_CLOSE) > > if (fl->fl_flags & FL_CLOSE_POSIX == FL_CLOSE_POSIX) > return 0; OK, I can do that in the next version. Thanks for that catch, Jeff. Ben ^ permalink raw reply [flat|nested] 14+ messages in thread