* [PATCH RFC v6 0/2] nfsd: Initial implementation of NFSv4 Courteous Server
@ 2021-12-06 17:59 Dai Ngo
  2021-12-06 17:59 ` [PATCH RFC v6 1/2] fs/lock: add new callback, lm_expire_lock, to lock_manager_operations Dai Ngo
  2021-12-06 17:59 ` [PATCH RFC v6 2/2] nfsd: Initial implementation of NFSv4 Courteous Server Dai Ngo
  0 siblings, 2 replies; 24+ messages in thread
From: Dai Ngo @ 2021-12-06 17:59 UTC (permalink / raw)
  To: bfields; +Cc: chuck.lever, linux-nfs, linux-fsdevel


Hi Bruce,

This series of patches implement the NFSv4 Courteous Server.

A server which does not immediately expunge the state on lease expiration
is known as a Courteous Server.  A Courteous Server continues to recognize
previously generated state tokens as valid until conflict arises between
the expired state and the requests from another client, or the server
reboots.

The v2 patch includes the following:

. add new callback, lm_expire_lock, to lock_manager_operations to
  allow the lock manager to take appropriate action on a conflicting
  lock.

. handle conflicts of NFSv4 locks with NFSv3/NLM and local locks.

. expire courtesy client after 24hr if client has not reconnected.

. do not allow expired client to become courtesy client if there are
  waiters for client's locks.

. modify client_info_show to show courtesy client and seconds from
  last renew.

. fix a problem with the NFSv4.1 server where it keeps returning
  SEQ4_STATUS_CB_PATH_DOWN in successful SEQUENCE replies after the
  courtesy client reconnects, causing the client to keep sending
  BCTS requests to the server.

The v3 patch includes the following:

. modified posix_test_lock to check and resolve conflicting locks
  in order to handle NLM TEST and NFSv4 LOCKT requests.

. separate out fix for back channel stuck in SEQ4_STATUS_CB_PATH_DOWN.

The v4 patch includes:

. rework nfsd_check_courtesy to avoid a deadlock between fl_lock and
  client_lock by asking the laundromat thread to destroy the courtesy
  client.

. handle NFSv4 share reservation conflicts with courtesy client. This
  includes conflicts between access mode and deny mode and vice versa.

. drop the patch for back channel stuck in SEQ4_STATUS_CB_PATH_DOWN.

The v5 patch includes:

. fix recursive locking of file_rwsem from posix_lock_file. 

. retest with LOCKDEP enabled.

The v6 patch includes:

. merge with 5.15-rc7

. fix a bug in nfs4_check_deny_bmap that did not check for a matching
  nfs4_file before checking for access/deny conflicts. This bug caused
  pynfs OPEN18 to fail because the server took too long to release the
  states of many non-conflicting clients.

. enhance the share reservation conflict handler to handle the case
  where a large number of conflicting courtesy clients need to be
  expired. The first 100 clients are expired synchronously and the
  rest are expired in the background by the laundromat, and
  NFS4ERR_DELAY is returned to the NFS client. This is needed to
  prevent the NFS client from timing out waiting for the reply.

. no regression with pynfs 4.0 and 4.1 tests.
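The synchronous-plus-background expiry strategy in the share reservation item above can be sketched as a small userspace model. All names and structures here are invented stand-ins for illustration (the kernel code operates on nfs4_client objects under nn->client_lock); only the return-value contract mirrors the description:

```c
#include <assert.h>
#include <stdbool.h>

#define SYNC_EXPIRE_LIMIT 100	/* synchronous-expiry threshold from the cover letter */

/* Simplified stand-in for an NFSv4 client; not the kernel's nfs4_client. */
struct client {
	bool courtesy;		/* expired but still holding state */
	bool destroyed;		/* expired synchronously */
	bool queued;		/* handed to the laundromat for background expiry */
};

/*
 * Model of the conflict handler's return contract:
 *    0  conflict with a non-courtesy client; caller fails the request
 *   >0  that many courtesy clients expired synchronously; caller retries
 *   -1  remainder deferred to the laundromat; caller returns
 *       NFS4ERR_DELAY so the NFS client does not time out waiting
 */
static int expire_conflicting_clients(struct client *clients, int n)
{
	int cnt = 0, async = 0;

	for (int i = 0; i < n; i++) {
		if (!clients[i].courtesy)
			return 0;	/* conflict with a live client: give up */
		if (cnt >= SYNC_EXPIRE_LIMIT) {
			clients[i].queued = true;	/* background expiry */
			async++;
			continue;
		}
		clients[i].destroyed = true;		/* synchronous expiry */
		cnt++;
	}
	return async ? -1 : cnt;
}
```

The -1 path is what keeps the NFS client from timing out: instead of blocking the OPEN while hundreds of clients are torn down, the server answers quickly with NFS4ERR_DELAY and lets the laundromat finish the work.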


* [PATCH RFC v6 1/2] fs/lock: add new callback, lm_expire_lock, to lock_manager_operations
  2021-12-06 17:59 [PATCH RFC v6 0/2] nfsd: Initial implementation of NFSv4 Courteous Server Dai Ngo
@ 2021-12-06 17:59 ` Dai Ngo
  2021-12-06 18:39   ` Chuck Lever III
  2021-12-06 17:59 ` [PATCH RFC v6 2/2] nfsd: Initial implementation of NFSv4 Courteous Server Dai Ngo
  1 sibling, 1 reply; 24+ messages in thread
From: Dai Ngo @ 2021-12-06 17:59 UTC (permalink / raw)
  To: bfields; +Cc: chuck.lever, linux-nfs, linux-fsdevel

Add a new callback, lm_expire_lock, to lock_manager_operations to allow
the lock manager to take appropriate action to resolve the lock conflict
if possible. The callback takes two arguments, the file_lock of the
blocker and a testonly flag:

testonly = 1  check and return true if the lock conflict can be
              resolved, else return false.
testonly = 0  resolve the conflict if possible; return true if the
              conflict was resolved, else return false.

A lock manager, such as the NFSv4 courteous server, uses this callback
to resolve a conflict by destroying the lock owner, or the NFSv4
courtesy client (a client whose lease has expired but which is allowed
to maintain its state) that owns the lock.
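As a rough illustration of the two-phase contract, here is a self-contained userspace sketch. The types are simplified stand-ins for the kernel's, and example_expire_lock is an invented callback for demonstration, not nfsd's actual implementation:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct file_lock;

/* Stand-in for the kernel's lock_manager_operations, reduced to the new hook. */
struct lock_manager_operations {
	bool (*lm_expire_lock)(struct file_lock *fl, bool testonly);
};

struct file_lock {
	const struct lock_manager_operations *fl_lmops;
	bool owner_is_courtesy_client;	/* stands in for NFSD4_COURTESY_CLIENT */
};

/* Example callback: phase 1 (testonly) only inspects state; phase 2 resolves. */
static bool example_expire_lock(struct file_lock *fl, bool testonly)
{
	if (testonly)
		return fl->owner_is_courtesy_client;
	if (!fl->owner_is_courtesy_client)
		return false;
	fl->owner_is_courtesy_client = false;	/* "destroy" the expired client */
	return true;
}

static const struct lock_manager_operations example_ops = {
	.lm_expire_lock = example_expire_lock,
};

/* Caller pattern from the patch: probe cheaply first, then resolve. */
static bool try_resolve_conflict(struct file_lock *blocker)
{
	if (blocker->fl_lmops && blocker->fl_lmops->lm_expire_lock &&
	    blocker->fl_lmops->lm_expire_lock(blocker, true))
		return blocker->fl_lmops->lm_expire_lock(blocker, false);
	return false;
}
```

The point of the split is locking: the caller can run the testonly probe while still holding flc_lock, and only drops the lock (and file_rwsem) for the resolution phase, which may block; after reacquiring the lock it must rescan the list, hence the retry label in the patch.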

Signed-off-by: Dai Ngo <dai.ngo@oracle.com>
---
 fs/locks.c         | 28 +++++++++++++++++++++++++---
 include/linux/fs.h |  1 +
 2 files changed, 26 insertions(+), 3 deletions(-)

diff --git a/fs/locks.c b/fs/locks.c
index 3d6fb4ae847b..0fef0a6322c7 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -954,6 +954,7 @@ posix_test_lock(struct file *filp, struct file_lock *fl)
 	struct file_lock *cfl;
 	struct file_lock_context *ctx;
 	struct inode *inode = locks_inode(filp);
+	bool ret;
 
 	ctx = smp_load_acquire(&inode->i_flctx);
 	if (!ctx || list_empty_careful(&ctx->flc_posix)) {
@@ -962,11 +963,20 @@ posix_test_lock(struct file *filp, struct file_lock *fl)
 	}
 
 	spin_lock(&ctx->flc_lock);
+retry:
 	list_for_each_entry(cfl, &ctx->flc_posix, fl_list) {
-		if (posix_locks_conflict(fl, cfl)) {
-			locks_copy_conflock(fl, cfl);
-			goto out;
+		if (!posix_locks_conflict(fl, cfl))
+			continue;
+		if (cfl->fl_lmops && cfl->fl_lmops->lm_expire_lock &&
+				cfl->fl_lmops->lm_expire_lock(cfl, 1)) {
+			spin_unlock(&ctx->flc_lock);
+			ret = cfl->fl_lmops->lm_expire_lock(cfl, 0);
+			spin_lock(&ctx->flc_lock);
+			if (ret)
+				goto retry;
 		}
+		locks_copy_conflock(fl, cfl);
+		goto out;
 	}
 	fl->fl_type = F_UNLCK;
 out:
@@ -1140,6 +1150,7 @@ static int posix_lock_inode(struct inode *inode, struct file_lock *request,
 	int error;
 	bool added = false;
 	LIST_HEAD(dispose);
+	bool ret;
 
 	ctx = locks_get_lock_context(inode, request->fl_type);
 	if (!ctx)
@@ -1166,9 +1177,20 @@ static int posix_lock_inode(struct inode *inode, struct file_lock *request,
 	 * blocker's list of waiters and the global blocked_hash.
 	 */
 	if (request->fl_type != F_UNLCK) {
+retry:
 		list_for_each_entry(fl, &ctx->flc_posix, fl_list) {
 			if (!posix_locks_conflict(request, fl))
 				continue;
+			if (fl->fl_lmops && fl->fl_lmops->lm_expire_lock &&
+					fl->fl_lmops->lm_expire_lock(fl, 1)) {
+				spin_unlock(&ctx->flc_lock);
+				percpu_up_read(&file_rwsem);
+				ret = fl->fl_lmops->lm_expire_lock(fl, 0);
+				percpu_down_read(&file_rwsem);
+				spin_lock(&ctx->flc_lock);
+				if (ret)
+					goto retry;
+			}
 			if (conflock)
 				locks_copy_conflock(conflock, fl);
 			error = -EAGAIN;
diff --git a/include/linux/fs.h b/include/linux/fs.h
index e7a633353fd2..1a76b6451398 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -1071,6 +1071,7 @@ struct lock_manager_operations {
 	int (*lm_change)(struct file_lock *, int, struct list_head *);
 	void (*lm_setup)(struct file_lock *, void **);
 	bool (*lm_breaker_owns_lease)(struct file_lock *);
+	bool (*lm_expire_lock)(struct file_lock *fl, bool testonly);
 };
 
 struct lock_manager {
-- 
2.9.5


* [PATCH RFC v6 2/2] nfsd: Initial implementation of NFSv4 Courteous Server
  2021-12-06 17:59 [PATCH RFC v6 0/2] nfsd: Initial implementation of NFSv4 Courteous Server Dai Ngo
  2021-12-06 17:59 ` [PATCH RFC v6 1/2] fs/lock: add new callback, lm_expire_lock, to lock_manager_operations Dai Ngo
@ 2021-12-06 17:59 ` Dai Ngo
  2021-12-06 19:55   ` Chuck Lever III
  1 sibling, 1 reply; 24+ messages in thread
From: Dai Ngo @ 2021-12-06 17:59 UTC (permalink / raw)
  To: bfields; +Cc: chuck.lever, linux-nfs, linux-fsdevel

Currently an NFSv4 client must maintain its lease by using at least one
of its state tokens or, if nothing else, by issuing a RENEW (4.0) or a
singleton SEQUENCE (4.1) at least once during each lease period. If the
client fails to renew the lease, for any reason, the Linux server
expunges the state tokens immediately upon detecting the failure to
renew the lease and begins returning NFS4ERR_EXPIRED if the client
reconnects and attempts to use the (now) expired state.
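A minimal sketch of the timing arithmetic described here, as a userspace model (the real server tracks cl_time in boottime seconds and does this comparison in the laundromat):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define NFSD_LEASE_SECONDS 90	/* Linux server default lease period */

/*
 * Classic (non-courteous) behavior: state is considered expired as soon
 * as the client's last renewal falls outside the lease window.
 */
static bool lease_expired(int64_t now, int64_t last_renewal)
{
	return now - last_renewal > NFSD_LEASE_SECONDS;
}

/* A typical client renews at half the lease period. */
static int64_t renew_interval(void)
{
	return NFSD_LEASE_SECONDS / 2;
}
```

With a 90-second lease, any network partition longer than about 90 seconds, however transient its cause, forces the classic server to expunge the client's state.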

The default lease period for the Linux server is 90 seconds.  The typical
client cuts that in half and will issue a lease renewing operation every
45 seconds. The 90 second lease period is very short considering the
potential for moderately long term network partitions.  A network partition
refers to any loss of network connectivity between the NFS client and the
NFS server, regardless of its root cause.  This includes NIC failures, NIC
driver bugs, network misconfigurations & administrative errors, routers &
switches crashing and/or having software updates applied, even down to
cables being physically pulled.  In most cases, these network failures are
transient, although the duration is unknown.

A server which does not immediately expunge the state on lease expiration
is known as a Courteous Server.  A Courteous Server continues to recognize
previously generated state tokens as valid until conflict arises between
the expired state and the requests from another client, or the server
reboots.

The initial implementation of the Courteous Server will do the following:

. when the laundromat thread detects an expired client that still has
established states on the Linux server and there are no waiters for the
client's locks, mark the client as a COURTESY_CLIENT and skip destroying
the client and all its states; otherwise destroy the client as usual.

. detects a conflict between an OPEN request and a COURTESY_CLIENT,
destroys the expired client and all its states, skips the delegation
recall, then allows the conflicting request to succeed.

. detects a conflict between LOCK/LOCKT, NLM LOCK and TEST, or local
lock requests and a COURTESY_CLIENT, destroys the expired client and
all its states, then allows the conflicting request to succeed.
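The laundromat rules above can be modeled as a small decision function. This is a userspace sketch under invented names (the real logic lives in nfs4_laundromat and uses cl_flags bits and mark_client_expired_locked):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define COURTESY_EXPIRY_SECONDS (24 * 60 * 60)	/* 24 hours, per the series */

enum laundromat_action { KEEP, MARK_COURTESY, EXPIRE };

/* Simplified stand-in for nfs4_client state relevant to the decision. */
struct clientmodel {
	bool lease_expired;
	bool has_states;
	bool has_lock_waiters;
	bool is_courtesy;
	int64_t courtesy_deadline;	/* boottime seconds */
};

static int64_t courtesy_deadline(int64_t now)
{
	return now + COURTESY_EXPIRY_SECONDS;
}

/*
 * Mirrors the rules above: an expired client with states and no lock
 * waiters becomes a courtesy client; a courtesy client past the 24-hour
 * deadline, or any other expired client, is destroyed.
 */
static enum laundromat_action laundromat_decide(const struct clientmodel *c,
						int64_t now)
{
	if (c->is_courtesy)
		return now >= c->courtesy_deadline ? EXPIRE : KEEP;
	if (!c->lease_expired)
		return KEEP;
	if (c->has_states && !c->has_lock_waiters)
		return MARK_COURTESY;
	return EXPIRE;
}
```

The conflict-driven paths (OPEN, LOCK, NLM, local locks) feed back into this by setting a destroy flag and kicking the laundromat, so the actual teardown is centralized in one thread.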

Signed-off-by: Dai Ngo <dai.ngo@oracle.com>
---
 fs/nfsd/nfs4state.c | 293 +++++++++++++++++++++++++++++++++++++++++++++++++++-
 fs/nfsd/state.h     |   3 +
 2 files changed, 293 insertions(+), 3 deletions(-)

diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index 3f4027a5de88..759f61dc6685 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -125,6 +125,11 @@ static void free_session(struct nfsd4_session *);
 static const struct nfsd4_callback_ops nfsd4_cb_recall_ops;
 static const struct nfsd4_callback_ops nfsd4_cb_notify_lock_ops;
 
+static struct workqueue_struct *laundry_wq;
+static void laundromat_main(struct work_struct *);
+
+static int courtesy_client_expiry = (24 * 60 * 60);	/* in secs */
+
 static bool is_session_dead(struct nfsd4_session *ses)
 {
 	return ses->se_flags & NFS4_SESSION_DEAD;
@@ -172,6 +177,7 @@ renew_client_locked(struct nfs4_client *clp)
 
 	list_move_tail(&clp->cl_lru, &nn->client_lru);
 	clp->cl_time = ktime_get_boottime_seconds();
+	clear_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags);
 }
 
 static void put_client_renew_locked(struct nfs4_client *clp)
@@ -2389,6 +2395,10 @@ static int client_info_show(struct seq_file *m, void *v)
 		seq_puts(m, "status: confirmed\n");
 	else
 		seq_puts(m, "status: unconfirmed\n");
+	seq_printf(m, "courtesy client: %s\n",
+		test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags) ? "yes" : "no");
+	seq_printf(m, "seconds from last renew: %lld\n",
+		ktime_get_boottime_seconds() - clp->cl_time);
 	seq_printf(m, "name: ");
 	seq_quote_mem(m, clp->cl_name.data, clp->cl_name.len);
 	seq_printf(m, "\nminor version: %d\n", clp->cl_minorversion);
@@ -4662,6 +4672,33 @@ static void nfsd_break_one_deleg(struct nfs4_delegation *dp)
 	nfsd4_run_cb(&dp->dl_recall);
 }
 
+/*
+ * This function is called when a file is opened and there is a
+ * delegation conflict with another client. If the other client
+ * is a courtesy client then kick start the laundromat to destroy
+ * it.
+ */
+static bool
+nfsd_check_courtesy_client(struct nfs4_delegation *dp)
+{
+	struct svc_rqst *rqst;
+	struct nfs4_client *clp = dp->dl_recall.cb_clp;
+	struct nfsd_net *nn = net_generic(clp->net, nfsd_net_id);
+
+	if (!i_am_nfsd())
+		goto out;
+	rqst = kthread_data(current);
+	if (rqst->rq_prog != NFS_PROGRAM || rqst->rq_vers < 4)
+		return false;
+out:
+	if (test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags)) {
+		set_bit(NFSD4_DESTROY_COURTESY_CLIENT, &clp->cl_flags);
+		mod_delayed_work(laundry_wq, &nn->laundromat_work, 0);
+		return true;
+	}
+	return false;
+}
+
 /* Called from break_lease() with i_lock held. */
 static bool
 nfsd_break_deleg_cb(struct file_lock *fl)
@@ -4670,6 +4707,8 @@ nfsd_break_deleg_cb(struct file_lock *fl)
 	struct nfs4_delegation *dp = (struct nfs4_delegation *)fl->fl_owner;
 	struct nfs4_file *fp = dp->dl_stid.sc_file;
 
+	if (nfsd_check_courtesy_client(dp))
+		return false;
 	trace_nfsd_cb_recall(&dp->dl_stid);
 
 	/*
@@ -4912,6 +4951,136 @@ nfsd4_truncate(struct svc_rqst *rqstp, struct svc_fh *fh,
 	return nfsd_setattr(rqstp, fh, &iattr, 0, (time64_t)0);
 }
 
+static bool
+__nfs4_check_deny_bmap(struct nfs4_ol_stateid *stp, u32 access,
+			bool share_access)
+{
+	if (share_access) {
+		if (!stp->st_deny_bmap)
+			return false;
+
+		if ((stp->st_deny_bmap & (1 << NFS4_SHARE_DENY_BOTH)) ||
+			(access & NFS4_SHARE_ACCESS_READ &&
+				stp->st_deny_bmap & (1 << NFS4_SHARE_DENY_READ)) ||
+			(access & NFS4_SHARE_ACCESS_WRITE &&
+				stp->st_deny_bmap & (1 << NFS4_SHARE_DENY_WRITE))) {
+			return true;
+		}
+		return false;
+	}
+	if ((access & NFS4_SHARE_DENY_BOTH) ||
+		(access & NFS4_SHARE_DENY_READ &&
+			stp->st_access_bmap & (1 << NFS4_SHARE_ACCESS_READ)) ||
+		(access & NFS4_SHARE_DENY_WRITE &&
+			stp->st_access_bmap & (1 << NFS4_SHARE_ACCESS_WRITE))) {
+		return true;
+	}
+	return false;
+}
+
+/*
+ * access: if share_access is true then check access mode else check deny mode
+ */
+static bool
+nfs4_check_deny_bmap(struct nfs4_client *clp, struct nfs4_file *fp,
+		struct nfs4_ol_stateid *st, u32 access, bool share_access)
+{
+	int i;
+	struct nfs4_openowner *oo;
+	struct nfs4_stateowner *so, *tmp;
+	struct nfs4_ol_stateid *stp, *stmp;
+
+	spin_lock(&clp->cl_lock);
+	for (i = 0; i < OWNER_HASH_SIZE; i++) {
+		list_for_each_entry_safe(so, tmp, &clp->cl_ownerstr_hashtbl[i],
+					so_strhash) {
+			if (!so->so_is_open_owner)
+				continue;
+			oo = openowner(so);
+			list_for_each_entry_safe(stp, stmp,
+				&oo->oo_owner.so_stateids, st_perstateowner) {
+				if (stp == st || stp->st_stid.sc_file != fp)
+					continue;
+				if (__nfs4_check_deny_bmap(stp, access,
+							share_access)) {
+					spin_unlock(&clp->cl_lock);
+					return true;
+				}
+			}
+		}
+	}
+	spin_unlock(&clp->cl_lock);
+	return false;
+}
+
+/*
+ * Function to check if the nfserr_share_denied error for 'fp' resulted
+ * from conflict with courtesy clients then release their state to resolve
+ * the conflict.
+ *
+ * Function returns:
+ *	 0 -  no conflict with courtesy clients
+ *	>0 -  conflict with courtesy clients resolved, try access/deny check again
+ *	-1 -  conflict with courtesy clients being resolved in background
+ *            return nfserr_jukebox to NFS client
+ */
+static int
+nfs4_destroy_clnts_with_sresv_conflict(struct svc_rqst *rqstp,
+			struct nfs4_file *fp, struct nfs4_ol_stateid *stp,
+			u32 access, bool share_access)
+{
+	int cnt = 0;
+	int async_cnt = 0;
+	bool no_retry = false;
+	struct nfs4_client *cl;
+	struct list_head *pos, *next, reaplist;
+	struct nfsd_net *nn = net_generic(SVC_NET(rqstp), nfsd_net_id);
+
+	INIT_LIST_HEAD(&reaplist);
+	spin_lock(&nn->client_lock);
+	list_for_each_safe(pos, next, &nn->client_lru) {
+		cl = list_entry(pos, struct nfs4_client, cl_lru);
+		/*
+		 * check all nfs4_ol_stateid of this client
+		 * for conflicts with 'access'mode.
+		 */
+		if (nfs4_check_deny_bmap(cl, fp, stp, access, share_access)) {
+			if (!test_bit(NFSD4_COURTESY_CLIENT, &cl->cl_flags)) {
+				/* conflict with non-courtesy client */
+				no_retry = true;
+				cnt = 0;
+				goto out;
+			}
+			/*
+			 * if too many to resolve synchronously
+			 * then do the rest in background
+			 */
+			if (cnt > 100) {
+				set_bit(NFSD4_DESTROY_COURTESY_CLIENT, &cl->cl_flags);
+				async_cnt++;
+				continue;
+			}
+			if (mark_client_expired_locked(cl))
+				continue;
+			cnt++;
+			list_add(&cl->cl_lru, &reaplist);
+		}
+	}
+out:
+	spin_unlock(&nn->client_lock);
+	list_for_each_safe(pos, next, &reaplist) {
+		cl = list_entry(pos, struct nfs4_client, cl_lru);
+		list_del_init(&cl->cl_lru);
+		expire_client(cl);
+	}
+	if (async_cnt) {
+		mod_delayed_work(laundry_wq, &nn->laundromat_work, 0);
+		if (!no_retry)
+			cnt = -1;
+	}
+	return cnt;
+}
+
 static __be32 nfs4_get_vfs_file(struct svc_rqst *rqstp, struct nfs4_file *fp,
 		struct svc_fh *cur_fh, struct nfs4_ol_stateid *stp,
 		struct nfsd4_open *open)
@@ -4921,6 +5090,7 @@ static __be32 nfs4_get_vfs_file(struct svc_rqst *rqstp, struct nfs4_file *fp,
 	int oflag = nfs4_access_to_omode(open->op_share_access);
 	int access = nfs4_access_to_access(open->op_share_access);
 	unsigned char old_access_bmap, old_deny_bmap;
+	int cnt = 0;
 
 	spin_lock(&fp->fi_lock);
 
@@ -4928,16 +5098,38 @@ static __be32 nfs4_get_vfs_file(struct svc_rqst *rqstp, struct nfs4_file *fp,
 	 * Are we trying to set a deny mode that would conflict with
 	 * current access?
 	 */
+chk_deny:
 	status = nfs4_file_check_deny(fp, open->op_share_deny);
 	if (status != nfs_ok) {
 		spin_unlock(&fp->fi_lock);
+		if (status != nfserr_share_denied)
+			goto out;
+		cnt = nfs4_destroy_clnts_with_sresv_conflict(rqstp, fp,
+				stp, open->op_share_deny, false);
+		if (cnt > 0) {
+			spin_lock(&fp->fi_lock);
+			goto chk_deny;
+		}
+		if (cnt == -1)
+			status = nfserr_jukebox;
 		goto out;
 	}
 
 	/* set access to the file */
+get_access:
 	status = nfs4_file_get_access(fp, open->op_share_access);
 	if (status != nfs_ok) {
 		spin_unlock(&fp->fi_lock);
+		if (status != nfserr_share_denied)
+			goto out;
+		cnt = nfs4_destroy_clnts_with_sresv_conflict(rqstp, fp,
+				stp, open->op_share_access, true);
+		if (cnt > 0) {
+			spin_lock(&fp->fi_lock);
+			goto get_access;
+		}
+		if (cnt == -1)
+			status = nfserr_jukebox;
 		goto out;
 	}
 
@@ -5289,6 +5481,22 @@ static void nfsd4_deleg_xgrade_none_ext(struct nfsd4_open *open,
 	 */
 }
 
+static bool
+nfs4_destroy_courtesy_client(struct nfs4_client *clp)
+{
+	struct nfsd_net *nn = net_generic(clp->net, nfsd_net_id);
+
+	spin_lock(&nn->client_lock);
+	if (!test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags) ||
+			mark_client_expired_locked(clp)) {
+		spin_unlock(&nn->client_lock);
+		return false;
+	}
+	spin_unlock(&nn->client_lock);
+	expire_client(clp);
+	return true;
+}
+
 __be32
 nfsd4_process_open2(struct svc_rqst *rqstp, struct svc_fh *current_fh, struct nfsd4_open *open)
 {
@@ -5572,6 +5780,47 @@ static void nfsd4_ssc_expire_umount(struct nfsd_net *nn)
 }
 #endif
 
+static
+bool nfs4_anylock_conflict(struct nfs4_client *clp)
+{
+	int i;
+	struct nfs4_stateowner *so, *tmp;
+	struct nfs4_lockowner *lo;
+	struct nfs4_ol_stateid *stp;
+	struct nfs4_file *nf;
+	struct inode *ino;
+	struct file_lock_context *ctx;
+	struct file_lock *fl;
+
+	for (i = 0; i < OWNER_HASH_SIZE; i++) {
+		/* scan each lock owner */
+		list_for_each_entry_safe(so, tmp, &clp->cl_ownerstr_hashtbl[i],
+				so_strhash) {
+			if (so->so_is_open_owner)
+				continue;
+
+			/* scan lock states of this lock owner */
+			lo = lockowner(so);
+			list_for_each_entry(stp, &lo->lo_owner.so_stateids,
+					st_perstateowner) {
+				nf = stp->st_stid.sc_file;
+				ino = nf->fi_inode;
+				ctx = ino->i_flctx;
+				if (!ctx)
+					continue;
+				/* check each lock belongs to this lock state */
+				list_for_each_entry(fl, &ctx->flc_posix, fl_list) {
+					if (fl->fl_owner != lo)
+						continue;
+					if (!list_empty(&fl->fl_blocked_requests))
+						return true;
+				}
+			}
+		}
+	}
+	return false;
+}
+
 static time64_t
 nfs4_laundromat(struct nfsd_net *nn)
 {
@@ -5587,7 +5836,9 @@ nfs4_laundromat(struct nfsd_net *nn)
 	};
 	struct nfs4_cpntf_state *cps;
 	copy_stateid_t *cps_t;
+	struct nfs4_stid *stid;
 	int i;
+	int id = 0;
 
 	if (clients_still_reclaiming(nn)) {
 		lt.new_timeo = 0;
@@ -5608,8 +5859,33 @@ nfs4_laundromat(struct nfsd_net *nn)
 	spin_lock(&nn->client_lock);
 	list_for_each_safe(pos, next, &nn->client_lru) {
 		clp = list_entry(pos, struct nfs4_client, cl_lru);
+		if (test_bit(NFSD4_DESTROY_COURTESY_CLIENT, &clp->cl_flags)) {
+			clear_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags);
+			goto exp_client;
+		}
+		if (test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags)) {
+			if (ktime_get_boottime_seconds() >= clp->courtesy_client_expiry)
+				goto exp_client;
+			/*
+			 * after umount, v4.0 client is still
+			 * around waiting to be expired
+			 */
+			if (clp->cl_minorversion)
+				continue;
+		}
 		if (!state_expired(&lt, clp->cl_time))
 			break;
+		spin_lock(&clp->cl_lock);
+		stid = idr_get_next(&clp->cl_stateids, &id);
+		spin_unlock(&clp->cl_lock);
+		if (stid && !nfs4_anylock_conflict(clp)) {
+			/* client still has states */
+			clp->courtesy_client_expiry =
+				ktime_get_boottime_seconds() + courtesy_client_expiry;
+			set_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags);
+			continue;
+		}
+exp_client:
 		if (mark_client_expired_locked(clp))
 			continue;
 		list_add(&clp->cl_lru, &reaplist);
@@ -5689,9 +5965,6 @@ nfs4_laundromat(struct nfsd_net *nn)
 	return max_t(time64_t, lt.new_timeo, NFSD_LAUNDROMAT_MINTIMEOUT);
 }
 
-static struct workqueue_struct *laundry_wq;
-static void laundromat_main(struct work_struct *);
-
 static void
 laundromat_main(struct work_struct *laundry)
 {
@@ -6496,6 +6769,19 @@ nfs4_transform_lock_offset(struct file_lock *lock)
 		lock->fl_end = OFFSET_MAX;
 }
 
+/* return true if lock was expired else return false */
+static bool
+nfsd4_fl_expire_lock(struct file_lock *fl, bool testonly)
+{
+	struct nfs4_lockowner *lo = (struct nfs4_lockowner *)fl->fl_owner;
+	struct nfs4_client *clp = lo->lo_owner.so_client;
+
+	if (testonly)
+		return test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags) ?
+			true : false;
+	return nfs4_destroy_courtesy_client(clp);
+}
+
 static fl_owner_t
 nfsd4_fl_get_owner(fl_owner_t owner)
 {
@@ -6543,6 +6829,7 @@ static const struct lock_manager_operations nfsd_posix_mng_ops  = {
 	.lm_notify = nfsd4_lm_notify,
 	.lm_get_owner = nfsd4_fl_get_owner,
 	.lm_put_owner = nfsd4_fl_put_owner,
+	.lm_expire_lock = nfsd4_fl_expire_lock,
 };
 
 static inline void
diff --git a/fs/nfsd/state.h b/fs/nfsd/state.h
index e73bdbb1634a..93e30b101578 100644
--- a/fs/nfsd/state.h
+++ b/fs/nfsd/state.h
@@ -345,6 +345,8 @@ struct nfs4_client {
 #define NFSD4_CLIENT_UPCALL_LOCK	(5)	/* upcall serialization */
 #define NFSD4_CLIENT_CB_FLAG_MASK	(1 << NFSD4_CLIENT_CB_UPDATE | \
 					 1 << NFSD4_CLIENT_CB_KILL)
+#define NFSD4_COURTESY_CLIENT		(6)	/* be nice to expired client */
+#define NFSD4_DESTROY_COURTESY_CLIENT	(7)
 	unsigned long		cl_flags;
 	const struct cred	*cl_cb_cred;
 	struct rpc_clnt		*cl_cb_client;
@@ -385,6 +387,7 @@ struct nfs4_client {
 	struct list_head	async_copies;	/* list of async copies */
 	spinlock_t		async_lock;	/* lock for async copies */
 	atomic_t		cl_cb_inflight;	/* Outstanding callbacks */
+	int			courtesy_client_expiry;
 };
 
 /* struct nfs4_client_reset
-- 
2.9.5


* Re: [PATCH RFC v6 1/2] fs/lock: add new callback, lm_expire_lock, to lock_manager_operations
  2021-12-06 17:59 ` [PATCH RFC v6 1/2] fs/lock: add new callback, lm_expire_lock, to lock_manager_operations Dai Ngo
@ 2021-12-06 18:39   ` Chuck Lever III
  2021-12-06 19:52     ` Trond Myklebust
  0 siblings, 1 reply; 24+ messages in thread
From: Chuck Lever III @ 2021-12-06 18:39 UTC (permalink / raw)
  To: Jeff Layton, Al Viro
  Cc: Bruce Fields, Linux NFS Mailing List, linux-fsdevel, Dai Ngo



> On Dec 6, 2021, at 12:59 PM, Dai Ngo <dai.ngo@oracle.com> wrote:
> 
> Add a new callback, lm_expire_lock, to lock_manager_operations to allow
> the lock manager to take appropriate action to resolve the lock conflict
> if possible. The callback takes two arguments, the file_lock of the
> blocker and a testonly flag:
> 
> testonly = 1  check and return true if the lock conflict can be
>               resolved, else return false.
> testonly = 0  resolve the conflict if possible; return true if the
>               conflict was resolved, else return false.
> 
> A lock manager, such as the NFSv4 courteous server, uses this callback
> to resolve a conflict by destroying the lock owner, or the NFSv4
> courtesy client (a client whose lease has expired but which is allowed
> to maintain its state) that owns the lock.
> 
> Signed-off-by: Dai Ngo <dai.ngo@oracle.com>

Al, Jeff, as co-maintainers of record for fs/locks.c, can you give
an Ack or Reviewed-by? I'd like to take this patch through the nfsd
tree for v5.17. Thanks for your time!


> ---
> fs/locks.c         | 28 +++++++++++++++++++++++++---
> include/linux/fs.h |  1 +
> 2 files changed, 26 insertions(+), 3 deletions(-)
> 
> diff --git a/fs/locks.c b/fs/locks.c
> index 3d6fb4ae847b..0fef0a6322c7 100644
> --- a/fs/locks.c
> +++ b/fs/locks.c
> @@ -954,6 +954,7 @@ posix_test_lock(struct file *filp, struct file_lock *fl)
> 	struct file_lock *cfl;
> 	struct file_lock_context *ctx;
> 	struct inode *inode = locks_inode(filp);
> +	bool ret;
> 
> 	ctx = smp_load_acquire(&inode->i_flctx);
> 	if (!ctx || list_empty_careful(&ctx->flc_posix)) {
> @@ -962,11 +963,20 @@ posix_test_lock(struct file *filp, struct file_lock *fl)
> 	}
> 
> 	spin_lock(&ctx->flc_lock);
> +retry:
> 	list_for_each_entry(cfl, &ctx->flc_posix, fl_list) {
> -		if (posix_locks_conflict(fl, cfl)) {
> -			locks_copy_conflock(fl, cfl);
> -			goto out;
> +		if (!posix_locks_conflict(fl, cfl))
> +			continue;
> +		if (cfl->fl_lmops && cfl->fl_lmops->lm_expire_lock &&
> +				cfl->fl_lmops->lm_expire_lock(cfl, 1)) {
> +			spin_unlock(&ctx->flc_lock);
> +			ret = cfl->fl_lmops->lm_expire_lock(cfl, 0);
> +			spin_lock(&ctx->flc_lock);
> +			if (ret)
> +				goto retry;
> 		}
> +		locks_copy_conflock(fl, cfl);
> +		goto out;
> 	}
> 	fl->fl_type = F_UNLCK;
> out:
> @@ -1140,6 +1150,7 @@ static int posix_lock_inode(struct inode *inode, struct file_lock *request,
> 	int error;
> 	bool added = false;
> 	LIST_HEAD(dispose);
> +	bool ret;
> 
> 	ctx = locks_get_lock_context(inode, request->fl_type);
> 	if (!ctx)
> @@ -1166,9 +1177,20 @@ static int posix_lock_inode(struct inode *inode, struct file_lock *request,
> 	 * blocker's list of waiters and the global blocked_hash.
> 	 */
> 	if (request->fl_type != F_UNLCK) {
> +retry:
> 		list_for_each_entry(fl, &ctx->flc_posix, fl_list) {
> 			if (!posix_locks_conflict(request, fl))
> 				continue;
> +			if (fl->fl_lmops && fl->fl_lmops->lm_expire_lock &&
> +					fl->fl_lmops->lm_expire_lock(fl, 1)) {
> +				spin_unlock(&ctx->flc_lock);
> +				percpu_up_read(&file_rwsem);
> +				ret = fl->fl_lmops->lm_expire_lock(fl, 0);
> +				percpu_down_read(&file_rwsem);
> +				spin_lock(&ctx->flc_lock);
> +				if (ret)
> +					goto retry;
> +			}
> 			if (conflock)
> 				locks_copy_conflock(conflock, fl);
> 			error = -EAGAIN;
> diff --git a/include/linux/fs.h b/include/linux/fs.h
> index e7a633353fd2..1a76b6451398 100644
> --- a/include/linux/fs.h
> +++ b/include/linux/fs.h
> @@ -1071,6 +1071,7 @@ struct lock_manager_operations {
> 	int (*lm_change)(struct file_lock *, int, struct list_head *);
> 	void (*lm_setup)(struct file_lock *, void **);
> 	bool (*lm_breaker_owns_lease)(struct file_lock *);
> +	bool (*lm_expire_lock)(struct file_lock *fl, bool testonly);
> };
> 
> struct lock_manager {
> -- 
> 2.9.5
> 

--
Chuck Lever




* Re: [PATCH RFC v6 1/2] fs/lock: add new callback, lm_expire_lock, to lock_manager_operations
  2021-12-06 18:39   ` Chuck Lever III
@ 2021-12-06 19:52     ` Trond Myklebust
  2021-12-06 20:05       ` bfields
  0 siblings, 1 reply; 24+ messages in thread
From: Trond Myklebust @ 2021-12-06 19:52 UTC (permalink / raw)
  To: jlayton, viro, chuck.lever; +Cc: bfields, linux-nfs, linux-fsdevel, dai.ngo

On Mon, 2021-12-06 at 18:39 +0000, Chuck Lever III wrote:
> 
> 
> > On Dec 6, 2021, at 12:59 PM, Dai Ngo <dai.ngo@oracle.com> wrote:
> > 
> > Add a new callback, lm_expire_lock, to lock_manager_operations to
> > allow the lock manager to take appropriate action to resolve the
> > lock conflict if possible. The callback takes two arguments, the
> > file_lock of the blocker and a testonly flag:
> > 
> > testonly = 1  check and return true if the lock conflict can be
> >               resolved, else return false.
> > testonly = 0  resolve the conflict if possible; return true if the
> >               conflict was resolved, else return false.
> > 
> > A lock manager, such as the NFSv4 courteous server, uses this
> > callback to resolve a conflict by destroying the lock owner, or the
> > NFSv4 courtesy client (a client whose lease has expired but which is
> > allowed to maintain its state) that owns the lock.
> > 
> > Signed-off-by: Dai Ngo <dai.ngo@oracle.com>
> 
> Al, Jeff, as co-maintainers of record for fs/locks.c, can you give
> an Ack or Reviewed-by? I'd like to take this patch through the nfsd
> tree for v5.17. Thanks for your time!
> 
> 
> > ---
> > fs/locks.c         | 28 +++++++++++++++++++++++++---
> > include/linux/fs.h |  1 +
> > 2 files changed, 26 insertions(+), 3 deletions(-)
> > 
> > diff --git a/fs/locks.c b/fs/locks.c
> > index 3d6fb4ae847b..0fef0a6322c7 100644
> > --- a/fs/locks.c
> > +++ b/fs/locks.c
> > @@ -954,6 +954,7 @@ posix_test_lock(struct file *filp, struct
> > file_lock *fl)
> >         struct file_lock *cfl;
> >         struct file_lock_context *ctx;
> >         struct inode *inode = locks_inode(filp);
> > +       bool ret;
> > 
> >         ctx = smp_load_acquire(&inode->i_flctx);
> >         if (!ctx || list_empty_careful(&ctx->flc_posix)) {
> > @@ -962,11 +963,20 @@ posix_test_lock(struct file *filp, struct
> > file_lock *fl)
> >         }
> > 
> >         spin_lock(&ctx->flc_lock);
> > +retry:
> >         list_for_each_entry(cfl, &ctx->flc_posix, fl_list) {
> > -               if (posix_locks_conflict(fl, cfl)) {
> > -                       locks_copy_conflock(fl, cfl);
> > -                       goto out;
> > +               if (!posix_locks_conflict(fl, cfl))
> > +                       continue;
> > +               if (cfl->fl_lmops && cfl->fl_lmops->lm_expire_lock
> > &&
> > +                               cfl->fl_lmops->lm_expire_lock(cfl,
> > 1)) {
> > +                       spin_unlock(&ctx->flc_lock);
> > +                       ret = cfl->fl_lmops->lm_expire_lock(cfl,
> > 0);
> > +                       spin_lock(&ctx->flc_lock);
> > +                       if (ret)
> > +                               goto retry;
> >                 }
> > +               locks_copy_conflock(fl, cfl);

How do you know 'cfl' still points to a valid object after you've
dropped the spin lock that was protecting the list?

> > +               goto out;
> >         }
> >         fl->fl_type = F_UNLCK;
> > out:
> > @@ -1140,6 +1150,7 @@ static int posix_lock_inode(struct inode
> > *inode, struct file_lock *request,
> >         int error;
> >         bool added = false;
> >         LIST_HEAD(dispose);
> > +       bool ret;
> > 
> >         ctx = locks_get_lock_context(inode, request->fl_type);
> >         if (!ctx)
> > @@ -1166,9 +1177,20 @@ static int posix_lock_inode(struct inode
> > *inode, struct file_lock *request,
> >          * blocker's list of waiters and the global blocked_hash.
> >          */
> >         if (request->fl_type != F_UNLCK) {
> > +retry:
> >                 list_for_each_entry(fl, &ctx->flc_posix, fl_list) {
> >                         if (!posix_locks_conflict(request, fl))
> >                                 continue;
> > +                       if (fl->fl_lmops && fl->fl_lmops-
> > >lm_expire_lock &&
> > +                                       fl->fl_lmops-
> > >lm_expire_lock(fl, 1)) {
> > +                               spin_unlock(&ctx->flc_lock);
> > +                               percpu_up_read(&file_rwsem);
> > +                               ret = fl->fl_lmops-
> > >lm_expire_lock(fl, 0);
> > +                               percpu_down_read(&file_rwsem);
> > +                               spin_lock(&ctx->flc_lock);
> > +                               if (ret)
> > +                                       goto retry;
> > +                       }

ditto.

> >                         if (conflock)
> >                                 locks_copy_conflock(conflock, fl);
> >                         error = -EAGAIN;
> > diff --git a/include/linux/fs.h b/include/linux/fs.h
> > index e7a633353fd2..1a76b6451398 100644
> > --- a/include/linux/fs.h
> > +++ b/include/linux/fs.h
> > @@ -1071,6 +1071,7 @@ struct lock_manager_operations {
> >         int (*lm_change)(struct file_lock *, int, struct list_head
> > *);
> >         void (*lm_setup)(struct file_lock *, void **);
> >         bool (*lm_breaker_owns_lease)(struct file_lock *);
> > +       bool (*lm_expire_lock)(struct file_lock *fl, bool
> > testonly);
> > };
> > 
> > struct lock_manager {
> > -- 
> > 2.9.5
> > 
> 
> --
> Chuck Lever
> 
> 
> 

-- 
Trond Myklebust
Linux NFS client maintainer, Hammerspace
trond.myklebust@hammerspace.com



^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH RFC v6 2/2] nfsd: Initial implementation of NFSv4 Courteous Server
  2021-12-06 17:59 ` [PATCH RFC v6 2/2] nfsd: Initial implementation of NFSv4 Courteous Server Dai Ngo
@ 2021-12-06 19:55   ` Chuck Lever III
  2021-12-06 21:44     ` dai.ngo
  2021-12-08 15:54     ` dai.ngo
  0 siblings, 2 replies; 24+ messages in thread
From: Chuck Lever III @ 2021-12-06 19:55 UTC (permalink / raw)
  To: Dai Ngo; +Cc: Bruce Fields, Linux NFS Mailing List, linux-fsdevel

Hi Dai-

Some comments and questions below:


> On Dec 6, 2021, at 12:59 PM, Dai Ngo <dai.ngo@oracle.com> wrote:
> 
> Currently an NFSv4 client must maintain its lease by using at least
> one of the state tokens or, if nothing else, by issuing a RENEW (4.0) or
> a singleton SEQUENCE (4.1) at least once during each lease period. If the
> client fails to renew the lease, for any reason, the Linux server expunges
> the state tokens immediately upon detection of the "failure to renew the
> lease" condition and begins returning NFS4ERR_EXPIRED if the client should
> reconnect and attempt to use the (now) expired state.
> 
> The default lease period for the Linux server is 90 seconds.  The typical
> client cuts that in half and will issue a lease renewing operation every
> 45 seconds. The 90 second lease period is very short considering the
> potential for moderately long term network partitions.  A network partition
> refers to any loss of network connectivity between the NFS client and the
> NFS server, regardless of its root cause.  This includes NIC failures, NIC
> driver bugs, network misconfigurations & administrative errors, routers &
> switches crashing and/or having software updates applied, even down to
> cables being physically pulled.  In most cases, these network failures are
> transient, although the duration is unknown.
> 
> A server which does not immediately expunge the state on lease expiration
> is known as a Courteous Server.  A Courteous Server continues to recognize
> previously generated state tokens as valid until conflict arises between
> the expired state and the requests from another client, or the server
> reboots.
> 
> The initial implementation of the Courteous Server will do the following:
> 
> . when the laundromat thread detects an expired client and if that client
> still has established states on the Linux server and there are no waiters
> for the client's locks, then mark the client as a COURTESY_CLIENT and skip
> destroying the client and all its states, otherwise destroy the client as
> usual.
> 
> . detects conflict of OPEN request with COURTESY_CLIENT, destroys the
> expired client and all its states, skips the delegation recall, then allows
> the conflicting request to succeed.
> 
> . detects conflict of LOCK/LOCKT, NLM LOCK and TEST, and local locks
> requests with COURTESY_CLIENT, destroys the expired client and all its
> states then allows the conflicting request to succeed.
> 
> Signed-off-by: Dai Ngo <dai.ngo@oracle.com>
> ---
> fs/nfsd/nfs4state.c | 293 +++++++++++++++++++++++++++++++++++++++++++++++++++-
> fs/nfsd/state.h     |   3 +
> 2 files changed, 293 insertions(+), 3 deletions(-)
> 
> diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
> index 3f4027a5de88..759f61dc6685 100644
> --- a/fs/nfsd/nfs4state.c
> +++ b/fs/nfsd/nfs4state.c
> @@ -125,6 +125,11 @@ static void free_session(struct nfsd4_session *);
> static const struct nfsd4_callback_ops nfsd4_cb_recall_ops;
> static const struct nfsd4_callback_ops nfsd4_cb_notify_lock_ops;
> 
> +static struct workqueue_struct *laundry_wq;
> +static void laundromat_main(struct work_struct *);
> +
> +static int courtesy_client_expiry = (24 * 60 * 60);	/* in secs */
> +
> static bool is_session_dead(struct nfsd4_session *ses)
> {
> 	return ses->se_flags & NFS4_SESSION_DEAD;
> @@ -172,6 +177,7 @@ renew_client_locked(struct nfs4_client *clp)
> 
> 	list_move_tail(&clp->cl_lru, &nn->client_lru);
> 	clp->cl_time = ktime_get_boottime_seconds();
> +	clear_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags);
> }
> 
> static void put_client_renew_locked(struct nfs4_client *clp)
> @@ -2389,6 +2395,10 @@ static int client_info_show(struct seq_file *m, void *v)
> 		seq_puts(m, "status: confirmed\n");
> 	else
> 		seq_puts(m, "status: unconfirmed\n");
> +	seq_printf(m, "courtesy client: %s\n",
> +		test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags) ? "yes" : "no");
> +	seq_printf(m, "seconds from last renew: %lld\n",
> +		ktime_get_boottime_seconds() - clp->cl_time);
> 	seq_printf(m, "name: ");
> 	seq_quote_mem(m, clp->cl_name.data, clp->cl_name.len);
> 	seq_printf(m, "\nminor version: %d\n", clp->cl_minorversion);
> @@ -4662,6 +4672,33 @@ static void nfsd_break_one_deleg(struct nfs4_delegation *dp)
> 	nfsd4_run_cb(&dp->dl_recall);
> }
> 
> +/*
> + * This function is called when a file is opened and there is a
> + * delegation conflict with another client. If the other client
> + * is a courtesy client then kick start the laundromat to destroy
> + * it.
> + */
> +static bool
> +nfsd_check_courtesy_client(struct nfs4_delegation *dp)
> +{
> +	struct svc_rqst *rqst;
> +	struct nfs4_client *clp = dp->dl_recall.cb_clp;
> +	struct nfsd_net *nn = net_generic(clp->net, nfsd_net_id);
> +
> +	if (!i_am_nfsd())
> +		goto out;
> +	rqst = kthread_data(current);
> +	if (rqst->rq_prog != NFS_PROGRAM || rqst->rq_vers < 4)
> +		return false;
> +out:
> +	if (test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags)) {
> +		set_bit(NFSD4_DESTROY_COURTESY_CLIENT, &clp->cl_flags);
> +		mod_delayed_work(laundry_wq, &nn->laundromat_work, 0);
> +		return true;
> +	}
> +	return false;
> +}
> +
> /* Called from break_lease() with i_lock held. */
> static bool
> nfsd_break_deleg_cb(struct file_lock *fl)
> @@ -4670,6 +4707,8 @@ nfsd_break_deleg_cb(struct file_lock *fl)
> 	struct nfs4_delegation *dp = (struct nfs4_delegation *)fl->fl_owner;
> 	struct nfs4_file *fp = dp->dl_stid.sc_file;
> 
> +	if (nfsd_check_courtesy_client(dp))
> +		return false;
> 	trace_nfsd_cb_recall(&dp->dl_stid);
> 
> 	/*
> @@ -4912,6 +4951,136 @@ nfsd4_truncate(struct svc_rqst *rqstp, struct svc_fh *fh,
> 	return nfsd_setattr(rqstp, fh, &iattr, 0, (time64_t)0);
> }
> 
> +static bool
> +__nfs4_check_deny_bmap(struct nfs4_ol_stateid *stp, u32 access,
> +			bool share_access)
> +{
> +	if (share_access) {
> +		if (!stp->st_deny_bmap)
> +			return false;
> +
> +		if ((stp->st_deny_bmap & (1 << NFS4_SHARE_DENY_BOTH)) ||

Aren't the NFS4_SHARE_DENY macros already bit masks?

NFS4_SHARE_DENY_BOTH is (NFS4_SHARE_DENY_READ | NFS4_SHARE_DENY_WRITE).
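For readers following this question: assuming the constants carry the RFC 7530 wire values (DENY_READ = 1, DENY_WRITE = 2, DENY_BOTH = 3), the two conventions disagree. Writing `& NFS4_SHARE_DENY_BOTH` reads the value as a mask, while `1 << NFS4_SHARE_DENY_BOTH` reads it as a bit index into a bitmap (the convention nfsd's st_deny_bmap appears to use). A userspace sketch with invented names, not the kernel's actual macros:

```c
#include <assert.h>

/* Assumed RFC 7530 share-deny wire values; illustrative names only. */
enum { DENY_NONE = 0, DENY_READ = 1, DENY_WRITE = 2, DENY_BOTH = 3 };

/* Mask view: DENY_BOTH is DENY_READ | DENY_WRITE, so testing it as a
 * mask already covers both sub-modes. */
static int mask_denies_read(int deny)
{
    return (deny & DENY_READ) != 0;
}

/* Bitmap view: each deny *value* owns its own bit, so DENY_BOTH gets a
 * bit of its own (1 << 3 == 8), distinct from DENY_READ's bit. */
static int bmap_bit(int deny)
{
    return 1 << deny;
}
```

Under the mask view, a BOTH value already implies READ and WRITE; under the bitmap view the three values occupy three separate bits, which is why the quoted code has to test the BOTH bit explicitly in addition to READ and WRITE.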


> +			(access & NFS4_SHARE_ACCESS_READ &&
> +				stp->st_deny_bmap & (1 << NFS4_SHARE_DENY_READ)) ||
> +			(access & NFS4_SHARE_ACCESS_WRITE &&
> +				stp->st_deny_bmap & (1 << NFS4_SHARE_DENY_WRITE))) {
> +			return true;
> +		}
> +		return false;
> +	}
> +	if ((access & NFS4_SHARE_DENY_BOTH) ||
> +		(access & NFS4_SHARE_DENY_READ &&
> +			stp->st_access_bmap & (1 << NFS4_SHARE_ACCESS_READ)) ||
> +		(access & NFS4_SHARE_DENY_WRITE &&
> +			stp->st_access_bmap & (1 << NFS4_SHARE_ACCESS_WRITE))) {
> +		return true;
> +	}
> +	return false;
> +}
> +
> +/*
> + * access: if share_access is true then check access mode else check deny mode
> + */
> +static bool
> +nfs4_check_deny_bmap(struct nfs4_client *clp, struct nfs4_file *fp,
> +		struct nfs4_ol_stateid *st, u32 access, bool share_access)
> +{
> +	int i;
> +	struct nfs4_openowner *oo;
> +	struct nfs4_stateowner *so, *tmp;
> +	struct nfs4_ol_stateid *stp, *stmp;
> +
> +	spin_lock(&clp->cl_lock);
> +	for (i = 0; i < OWNER_HASH_SIZE; i++) {
> +		list_for_each_entry_safe(so, tmp, &clp->cl_ownerstr_hashtbl[i],
> +					so_strhash) {
> +			if (!so->so_is_open_owner)
> +				continue;
> +			oo = openowner(so);
> +			list_for_each_entry_safe(stp, stmp,
> +				&oo->oo_owner.so_stateids, st_perstateowner) {
> +				if (stp == st || stp->st_stid.sc_file != fp)
> +					continue;
> +				if (__nfs4_check_deny_bmap(stp, access,
> +							share_access)) {
> +					spin_unlock(&clp->cl_lock);
> +					return true;
> +				}
> +			}
> +		}
> +	}
> +	spin_unlock(&clp->cl_lock);
> +	return false;
> +}
> +
> +/*
> + * Function to check if the nfserr_share_denied error for 'fp' resulted
> + * from conflict with courtesy clients then release their state to resolve
> + * the conflict.
> + *
> + * Function returns:
> + *	 0 -  no conflict with courtesy clients
> + *	>0 -  conflict with courtesy clients resolved, try access/deny check again
> + *	-1 -  conflict with courtesy clients being resolved in background
> + *            return nfserr_jukebox to NFS client
> + */
> +static int
> +nfs4_destroy_clnts_with_sresv_conflict(struct svc_rqst *rqstp,
> +			struct nfs4_file *fp, struct nfs4_ol_stateid *stp,
> +			u32 access, bool share_access)
> +{
> +	int cnt = 0;
> +	int async_cnt = 0;
> +	bool no_retry = false;
> +	struct nfs4_client *cl;
> +	struct list_head *pos, *next, reaplist;
> +	struct nfsd_net *nn = net_generic(SVC_NET(rqstp), nfsd_net_id);
> +
> +	INIT_LIST_HEAD(&reaplist);
> +	spin_lock(&nn->client_lock);
> +	list_for_each_safe(pos, next, &nn->client_lru) {
> +		cl = list_entry(pos, struct nfs4_client, cl_lru);
> +		/*
> +		 * check all nfs4_ol_stateid of this client
> +		 * for conflicts with 'access'mode.
> +		 */
> +		if (nfs4_check_deny_bmap(cl, fp, stp, access, share_access)) {
> +			if (!test_bit(NFSD4_COURTESY_CLIENT, &cl->cl_flags)) {
> +				/* conflict with non-courtesy client */
> +				no_retry = true;
> +				cnt = 0;
> +				goto out;
> +			}
> +			/*
> +			 * if too many to resolve synchronously
> +			 * then do the rest in background
> +			 */
> +			if (cnt > 100) {
> +				set_bit(NFSD4_DESTROY_COURTESY_CLIENT, &cl->cl_flags);
> +				async_cnt++;
> +				continue;
> +			}
> +			if (mark_client_expired_locked(cl))
> +				continue;
> +			cnt++;
> +			list_add(&cl->cl_lru, &reaplist);
> +		}
> +	}

Bruce suggested simply returning NFS4ERR_DELAY for all cases.
That would simplify this quite a bit for what is a rare edge
case.


> +out:
> +	spin_unlock(&nn->client_lock);
> +	list_for_each_safe(pos, next, &reaplist) {
> +		cl = list_entry(pos, struct nfs4_client, cl_lru);
> +		list_del_init(&cl->cl_lru);
> +		expire_client(cl);
> +	}

A slightly nicer construct here would be something like this:

	while ((pos = list_del_first(&reaplist)))
		expire_client(list_entry(pos, struct nfs4_client, cl_lru));


> +	if (async_cnt) {
> +		mod_delayed_work(laundry_wq, &nn->laundromat_work, 0);
> +		if (!no_retry)
> +			cnt = -1;
> +	}
> +	return cnt;
> +}
> +
> static __be32 nfs4_get_vfs_file(struct svc_rqst *rqstp, struct nfs4_file *fp,
> 		struct svc_fh *cur_fh, struct nfs4_ol_stateid *stp,
> 		struct nfsd4_open *open)
> @@ -4921,6 +5090,7 @@ static __be32 nfs4_get_vfs_file(struct svc_rqst *rqstp, struct nfs4_file *fp,
> 	int oflag = nfs4_access_to_omode(open->op_share_access);
> 	int access = nfs4_access_to_access(open->op_share_access);
> 	unsigned char old_access_bmap, old_deny_bmap;
> +	int cnt = 0;
> 
> 	spin_lock(&fp->fi_lock);
> 
> @@ -4928,16 +5098,38 @@ static __be32 nfs4_get_vfs_file(struct svc_rqst *rqstp, struct nfs4_file *fp,
> 	 * Are we trying to set a deny mode that would conflict with
> 	 * current access?
> 	 */
> +chk_deny:
> 	status = nfs4_file_check_deny(fp, open->op_share_deny);
> 	if (status != nfs_ok) {
> 		spin_unlock(&fp->fi_lock);
> +		if (status != nfserr_share_denied)
> +			goto out;
> +		cnt = nfs4_destroy_clnts_with_sresv_conflict(rqstp, fp,
> +				stp, open->op_share_deny, false);
> +		if (cnt > 0) {
> +			spin_lock(&fp->fi_lock);
> +			goto chk_deny;

I'm pondering whether a distributed set of clients can
cause this loop to never terminate.


> +		}
> +		if (cnt == -1)
> +			status = nfserr_jukebox;
> 		goto out;
> 	}
> 
> 	/* set access to the file */
> +get_access:
> 	status = nfs4_file_get_access(fp, open->op_share_access);
> 	if (status != nfs_ok) {
> 		spin_unlock(&fp->fi_lock);
> +		if (status != nfserr_share_denied)
> +			goto out;
> +		cnt = nfs4_destroy_clnts_with_sresv_conflict(rqstp, fp,
> +				stp, open->op_share_access, true);
> +		if (cnt > 0) {
> +			spin_lock(&fp->fi_lock);
> +			goto get_access;

Ditto.


> +		}
> +		if (cnt == -1)
> +			status = nfserr_jukebox;
> 		goto out;
> 	}
> 
> @@ -5289,6 +5481,22 @@ static void nfsd4_deleg_xgrade_none_ext(struct nfsd4_open *open,
> 	 */
> }
> 
> +static bool
> +nfs4_destroy_courtesy_client(struct nfs4_client *clp)
> +{
> +	struct nfsd_net *nn = net_generic(clp->net, nfsd_net_id);
> +
> +	spin_lock(&nn->client_lock);
> +	if (!test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags) ||
> +			mark_client_expired_locked(clp)) {
> +		spin_unlock(&nn->client_lock);
> +		return false;
> +	}
> +	spin_unlock(&nn->client_lock);
> +	expire_client(clp);
> +	return true;
> +}
> +

Perhaps nfs4_destroy_courtesy_client() could be merged into
nfsd4_fl_expire_lock(), it's only caller.


> __be32
> nfsd4_process_open2(struct svc_rqst *rqstp, struct svc_fh *current_fh, struct nfsd4_open *open)
> {
> @@ -5572,6 +5780,47 @@ static void nfsd4_ssc_expire_umount(struct nfsd_net *nn)
> }
> #endif
> 
> +static
> +bool nfs4_anylock_conflict(struct nfs4_client *clp)
> +{
> +	int i;
> +	struct nfs4_stateowner *so, *tmp;
> +	struct nfs4_lockowner *lo;
> +	struct nfs4_ol_stateid *stp;
> +	struct nfs4_file *nf;
> +	struct inode *ino;
> +	struct file_lock_context *ctx;
> +	struct file_lock *fl;
> +
> +	for (i = 0; i < OWNER_HASH_SIZE; i++) {
> +		/* scan each lock owner */
> +		list_for_each_entry_safe(so, tmp, &clp->cl_ownerstr_hashtbl[i],
> +				so_strhash) {
> +			if (so->so_is_open_owner)
> +				continue;

Isn't cl_lock needed to protect the cl_ownerstr_hashtbl lists?


> +
> +			/* scan lock states of this lock owner */
> +			lo = lockowner(so);
> +			list_for_each_entry(stp, &lo->lo_owner.so_stateids,
> +					st_perstateowner) {
> +				nf = stp->st_stid.sc_file;
> +				ino = nf->fi_inode;
> +				ctx = ino->i_flctx;
> +				if (!ctx)
> +					continue;
> +				/* check each lock belongs to this lock state */
> +				list_for_each_entry(fl, &ctx->flc_posix, fl_list) {
> +					if (fl->fl_owner != lo)
> +						continue;
> +					if (!list_empty(&fl->fl_blocked_requests))
> +						return true;
> +				}
> +			}
> +		}
> +	}
> +	return false;
> +}
> +
> static time64_t
> nfs4_laundromat(struct nfsd_net *nn)
> {
> @@ -5587,7 +5836,9 @@ nfs4_laundromat(struct nfsd_net *nn)
> 	};
> 	struct nfs4_cpntf_state *cps;
> 	copy_stateid_t *cps_t;
> +	struct nfs4_stid *stid;
> 	int i;
> +	int id = 0;
> 
> 	if (clients_still_reclaiming(nn)) {
> 		lt.new_timeo = 0;
> @@ -5608,8 +5859,33 @@ nfs4_laundromat(struct nfsd_net *nn)
> 	spin_lock(&nn->client_lock);
> 	list_for_each_safe(pos, next, &nn->client_lru) {
> 		clp = list_entry(pos, struct nfs4_client, cl_lru);
> +		if (test_bit(NFSD4_DESTROY_COURTESY_CLIENT, &clp->cl_flags)) {
> +			clear_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags);
> +			goto exp_client;
> +		}
> +		if (test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags)) {
> +			if (ktime_get_boottime_seconds() >= clp->courtesy_client_expiry)
> +				goto exp_client;
> +			/*
> +			 * after umount, v4.0 client is still
> +			 * around waiting to be expired
> +			 */
> +			if (clp->cl_minorversion)
> +				continue;
> +		}
> 		if (!state_expired(&lt, clp->cl_time))
> 			break;
> +		spin_lock(&clp->cl_lock);
> +		stid = idr_get_next(&clp->cl_stateids, &id);
> +		spin_unlock(&clp->cl_lock);
> +		if (stid && !nfs4_anylock_conflict(clp)) {
> +			/* client still has states */
> +			clp->courtesy_client_expiry =
> +				ktime_get_boottime_seconds() + courtesy_client_expiry;
> +			set_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags);
> +			continue;
> +		}
> +exp_client:
> 		if (mark_client_expired_locked(clp))
> 			continue;
> 		list_add(&clp->cl_lru, &reaplist);
> @@ -5689,9 +5965,6 @@ nfs4_laundromat(struct nfsd_net *nn)
> 	return max_t(time64_t, lt.new_timeo, NFSD_LAUNDROMAT_MINTIMEOUT);
> }
> 
> -static struct workqueue_struct *laundry_wq;
> -static void laundromat_main(struct work_struct *);
> -
> static void
> laundromat_main(struct work_struct *laundry)
> {
> @@ -6496,6 +6769,19 @@ nfs4_transform_lock_offset(struct file_lock *lock)
> 		lock->fl_end = OFFSET_MAX;
> }
> 
> +/* return true if lock was expired else return false */
> +static bool
> +nfsd4_fl_expire_lock(struct file_lock *fl, bool testonly)
> +{
> +	struct nfs4_lockowner *lo = (struct nfs4_lockowner *)fl->fl_owner;
> +	struct nfs4_client *clp = lo->lo_owner.so_client;
> +
> +	if (testonly)
> +		return test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags) ?
> +			true : false;

Hm. I know test_bit() returns an integer rather than a boolean, but
the ternary here is a bit unwieldy. How about just:

		return !!test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags);


> +	return nfs4_destroy_courtesy_client(clp);
> +}
> +
> static fl_owner_t
> nfsd4_fl_get_owner(fl_owner_t owner)
> {
> @@ -6543,6 +6829,7 @@ static const struct lock_manager_operations nfsd_posix_mng_ops  = {
> 	.lm_notify = nfsd4_lm_notify,
> 	.lm_get_owner = nfsd4_fl_get_owner,
> 	.lm_put_owner = nfsd4_fl_put_owner,
> +	.lm_expire_lock = nfsd4_fl_expire_lock,
> };
> 
> static inline void
> diff --git a/fs/nfsd/state.h b/fs/nfsd/state.h
> index e73bdbb1634a..93e30b101578 100644
> --- a/fs/nfsd/state.h
> +++ b/fs/nfsd/state.h
> @@ -345,6 +345,8 @@ struct nfs4_client {
> #define NFSD4_CLIENT_UPCALL_LOCK	(5)	/* upcall serialization */
> #define NFSD4_CLIENT_CB_FLAG_MASK	(1 << NFSD4_CLIENT_CB_UPDATE | \
> 					 1 << NFSD4_CLIENT_CB_KILL)
> +#define NFSD4_COURTESY_CLIENT		(6)	/* be nice to expired client */
> +#define NFSD4_DESTROY_COURTESY_CLIENT	(7)
> 	unsigned long		cl_flags;
> 	const struct cred	*cl_cb_cred;
> 	struct rpc_clnt		*cl_cb_client;
> @@ -385,6 +387,7 @@ struct nfs4_client {
> 	struct list_head	async_copies;	/* list of async copies */
> 	spinlock_t		async_lock;	/* lock for async copies */
> 	atomic_t		cl_cb_inflight;	/* Outstanding callbacks */
> +	int			courtesy_client_expiry;
> };
> 
> /* struct nfs4_client_reset
> -- 
> 2.9.5
> 

--
Chuck Lever





* Re: [PATCH RFC v6 1/2] fs/lock: add new callback, lm_expire_lock, to lock_manager_operations
  2021-12-06 19:52     ` Trond Myklebust
@ 2021-12-06 20:05       ` bfields
  2021-12-06 20:36         ` dai.ngo
  0 siblings, 1 reply; 24+ messages in thread
From: bfields @ 2021-12-06 20:05 UTC (permalink / raw)
  To: Trond Myklebust
  Cc: jlayton, viro, chuck.lever, linux-nfs, linux-fsdevel, dai.ngo

On Mon, Dec 06, 2021 at 07:52:29PM +0000, Trond Myklebust wrote:
> On Mon, 2021-12-06 at 18:39 +0000, Chuck Lever III wrote:
> > 
> > 
> > > On Dec 6, 2021, at 12:59 PM, Dai Ngo <dai.ngo@oracle.com> wrote:
> > > 
> > > Add new callback, lm_expire_lock, to lock_manager_operations to
> > > allow
> > > the lock manager to take appropriate action to resolve the lock
> > > conflict
> > > if possible. The callback takes 2 arguments, file_lock of the
> > > blocker
> > > and a testonly flag:
> > > 
> > > testonly = 1  check and return true if lock conflict can be
> > > resolved
> > >              else return false.
> > > testonly = 0  resolve the conflict if possible, return true if
> > > conflict
> > >              was resolved else return false.
> > > 
> > > Lock manager, such as NFSv4 courteous server, uses this callback to
> > > resolve conflict by destroying lock owner, or the NFSv4 courtesy
> > > client
> > > (client that has expired but allowed to maintains its states) that
> > > owns
> > > the lock.
> > > 
> > > Signed-off-by: Dai Ngo <dai.ngo@oracle.com>
> > 
> > Al, Jeff, as co-maintainers of record for fs/locks.c, can you give
> > an Ack or Reviewed-by? I'd like to take this patch through the nfsd
> > tree for v5.17. Thanks for your time!
> > 
> > 
> > > ---
> > > fs/locks.c         | 28 +++++++++++++++++++++++++---
> > > include/linux/fs.h |  1 +
> > > 2 files changed, 26 insertions(+), 3 deletions(-)
> > > 
> > > diff --git a/fs/locks.c b/fs/locks.c
> > > index 3d6fb4ae847b..0fef0a6322c7 100644
> > > --- a/fs/locks.c
> > > +++ b/fs/locks.c
> > > @@ -954,6 +954,7 @@ posix_test_lock(struct file *filp, struct
> > > file_lock *fl)
> > >         struct file_lock *cfl;
> > >         struct file_lock_context *ctx;
> > >         struct inode *inode = locks_inode(filp);
> > > +       bool ret;
> > > 
> > >         ctx = smp_load_acquire(&inode->i_flctx);
> > >         if (!ctx || list_empty_careful(&ctx->flc_posix)) {
> > > @@ -962,11 +963,20 @@ posix_test_lock(struct file *filp, struct
> > > file_lock *fl)
> > >         }
> > > 
> > >         spin_lock(&ctx->flc_lock);
> > > +retry:
> > >         list_for_each_entry(cfl, &ctx->flc_posix, fl_list) {
> > > -               if (posix_locks_conflict(fl, cfl)) {
> > > -                       locks_copy_conflock(fl, cfl);
> > > -                       goto out;
> > > +               if (!posix_locks_conflict(fl, cfl))
> > > +                       continue;
> > > +               if (cfl->fl_lmops && cfl->fl_lmops->lm_expire_lock
> > > &&
> > > +                               cfl->fl_lmops->lm_expire_lock(cfl,
> > > 1)) {
> > > +                       spin_unlock(&ctx->flc_lock);
> > > +                       ret = cfl->fl_lmops->lm_expire_lock(cfl,
> > > 0);
> > > +                       spin_lock(&ctx->flc_lock);
> > > +                       if (ret)
> > > +                               goto retry;
> > >                 }
> > > +               locks_copy_conflock(fl, cfl);
> 
> How do you know 'cfl' still points to a valid object after you've
> dropped the spin lock that was protecting the list?

Ugh, good point, I should have noticed that when I suggested this
approach....

Maybe the first call could instead return some reference-counted
object that a second call could wait on.

Better, maybe it could add itself to a list of such things and then we
could do this in one pass.
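One way to read the "one pass" idea, sketched here in userspace C with invented stand-in types (not the real nfs4_client or file_lock): while still holding the lock, link each expirable blocker onto a private reap list, taking a reference in real code so the entries stay valid, then drop the lock and expire everything on the list without revisiting flc_posix:

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stand-ins; not the real nfsd types. */
struct demo_client {
    int expired;
    struct demo_client *reap_next;
};

/* Pass 1: conceptually runs under flc_lock; link conflicting courtesy
 * clients onto a caller-owned list. Real code would take a reference on
 * each entry so it stays valid after the lock is dropped. */
static struct demo_client *collect_expirable(struct demo_client **conflicts,
                                             int n)
{
    struct demo_client *reap = NULL;

    for (int i = 0; i < n; i++) {
        conflicts[i]->reap_next = reap;
        reap = conflicts[i];
    }
    return reap;
}

/* Pass 2: runs after the lock is dropped; no retry loop is needed
 * because the reap list is private to this caller. */
static int expire_collected(struct demo_client *reap)
{
    int count = 0;

    for (; reap != NULL; reap = reap->reap_next) {
        reap->expired = 1;      /* stands in for expire_client() */
        count++;
    }
    return count;
}
```

The point of the two phases is that nothing on the reap list is reachable through the lock-protected list anymore, so the dangling-pointer question above never arises.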

--b.

> 
> > > +               goto out;
> > >         }
> > >         fl->fl_type = F_UNLCK;
> > > out:
> > > @@ -1140,6 +1150,7 @@ static int posix_lock_inode(struct inode
> > > *inode, struct file_lock *request,
> > >         int error;
> > >         bool added = false;
> > >         LIST_HEAD(dispose);
> > > +       bool ret;
> > > 
> > >         ctx = locks_get_lock_context(inode, request->fl_type);
> > >         if (!ctx)
> > > @@ -1166,9 +1177,20 @@ static int posix_lock_inode(struct inode
> > > *inode, struct file_lock *request,
> > >          * blocker's list of waiters and the global blocked_hash.
> > >          */
> > >         if (request->fl_type != F_UNLCK) {
> > > +retry:
> > >                 list_for_each_entry(fl, &ctx->flc_posix, fl_list) {
> > >                         if (!posix_locks_conflict(request, fl))
> > >                                 continue;
> > > +                       if (fl->fl_lmops && fl->fl_lmops-
> > > >lm_expire_lock &&
> > > +                                       fl->fl_lmops-
> > > >lm_expire_lock(fl, 1)) {
> > > +                               spin_unlock(&ctx->flc_lock);
> > > +                               percpu_up_read(&file_rwsem);
> > > +                               ret = fl->fl_lmops-
> > > >lm_expire_lock(fl, 0);
> > > +                               percpu_down_read(&file_rwsem);
> > > +                               spin_lock(&ctx->flc_lock);
> > > +                               if (ret)
> > > +                                       goto retry;
> > > +                       }
> 
> ditto.
> 
> > >                         if (conflock)
> > >                                 locks_copy_conflock(conflock, fl);
> > >                         error = -EAGAIN;
> > > diff --git a/include/linux/fs.h b/include/linux/fs.h
> > > index e7a633353fd2..1a76b6451398 100644
> > > --- a/include/linux/fs.h
> > > +++ b/include/linux/fs.h
> > > @@ -1071,6 +1071,7 @@ struct lock_manager_operations {
> > >         int (*lm_change)(struct file_lock *, int, struct list_head
> > > *);
> > >         void (*lm_setup)(struct file_lock *, void **);
> > >         bool (*lm_breaker_owns_lease)(struct file_lock *);
> > > +       bool (*lm_expire_lock)(struct file_lock *fl, bool
> > > testonly);
> > > };
> > > 
> > > struct lock_manager {
> > > -- 
> > > 2.9.5
> > > 
> > 
> > --
> > Chuck Lever
> > 
> > 
> > 
> 
> -- 
> Trond Myklebust
> Linux NFS client maintainer, Hammerspace
> trond.myklebust@hammerspace.com
> 
> 


* Re: [PATCH RFC v6 1/2] fs/lock: add new callback, lm_expire_lock, to lock_manager_operations
  2021-12-06 20:05       ` bfields
@ 2021-12-06 20:36         ` dai.ngo
  2021-12-06 22:05           ` Trond Myklebust
  0 siblings, 1 reply; 24+ messages in thread
From: dai.ngo @ 2021-12-06 20:36 UTC (permalink / raw)
  To: bfields, Trond Myklebust
  Cc: jlayton, viro, chuck.lever, linux-nfs, linux-fsdevel


On 12/6/21 12:05 PM, bfields@fieldses.org wrote:
> On Mon, Dec 06, 2021 at 07:52:29PM +0000, Trond Myklebust wrote:
>> On Mon, 2021-12-06 at 18:39 +0000, Chuck Lever III wrote:
>>>
>>>> On Dec 6, 2021, at 12:59 PM, Dai Ngo <dai.ngo@oracle.com> wrote:
>>>>
>>>> Add new callback, lm_expire_lock, to lock_manager_operations to
>>>> allow
>>>> the lock manager to take appropriate action to resolve the lock
>>>> conflict
>>>> if possible. The callback takes 2 arguments, file_lock of the
>>>> blocker
>>>> and a testonly flag:
>>>>
>>>> testonly = 1  check and return true if lock conflict can be
>>>> resolved
>>>>               else return false.
>>>> testonly = 0  resolve the conflict if possible, return true if
>>>> conflict
>>>>               was resolved else return false.
>>>>
>>>> Lock manager, such as NFSv4 courteous server, uses this callback to
>>>> resolve conflict by destroying lock owner, or the NFSv4 courtesy
>>>> client
>>>> (a client that has expired but is allowed to maintain its states) that
>>>> owns
>>>> the lock.
>>>>
>>>> Signed-off-by: Dai Ngo <dai.ngo@oracle.com>
>>> Al, Jeff, as co-maintainers of record for fs/locks.c, can you give
>>> an Ack or Reviewed-by? I'd like to take this patch through the nfsd
>>> tree for v5.17. Thanks for your time!
>>>
>>>
>>>> ---
>>>> fs/locks.c         | 28 +++++++++++++++++++++++++---
>>>> include/linux/fs.h |  1 +
>>>> 2 files changed, 26 insertions(+), 3 deletions(-)
>>>>
>>>> diff --git a/fs/locks.c b/fs/locks.c
>>>> index 3d6fb4ae847b..0fef0a6322c7 100644
>>>> --- a/fs/locks.c
>>>> +++ b/fs/locks.c
>>>> @@ -954,6 +954,7 @@ posix_test_lock(struct file *filp, struct
>>>> file_lock *fl)
>>>>          struct file_lock *cfl;
>>>>          struct file_lock_context *ctx;
>>>>          struct inode *inode = locks_inode(filp);
>>>> +       bool ret;
>>>>
>>>>          ctx = smp_load_acquire(&inode->i_flctx);
>>>>          if (!ctx || list_empty_careful(&ctx->flc_posix)) {
>>>> @@ -962,11 +963,20 @@ posix_test_lock(struct file *filp, struct
>>>> file_lock *fl)
>>>>          }
>>>>
>>>>          spin_lock(&ctx->flc_lock);
>>>> +retry:
>>>>          list_for_each_entry(cfl, &ctx->flc_posix, fl_list) {
>>>> -               if (posix_locks_conflict(fl, cfl)) {
>>>> -                       locks_copy_conflock(fl, cfl);
>>>> -                       goto out;
>>>> +               if (!posix_locks_conflict(fl, cfl))
>>>> +                       continue;
>>>> +               if (cfl->fl_lmops && cfl->fl_lmops->lm_expire_lock
>>>> &&
>>>> +                               cfl->fl_lmops->lm_expire_lock(cfl,
>>>> 1)) {
>>>> +                       spin_unlock(&ctx->flc_lock);
>>>> +                       ret = cfl->fl_lmops->lm_expire_lock(cfl,
>>>> 0);
>>>> +                       spin_lock(&ctx->flc_lock);
>>>> +                       if (ret)
>>>> +                               goto retry;
>>>>                  }
>>>> +               locks_copy_conflock(fl, cfl);
>> How do you know 'cfl' still points to a valid object after you've
>> dropped the spin lock that was protecting the list?
> Ugh, good point, I should have noticed that when I suggested this
> approach....
>
> Maybe the first call could instead return some reference-counted
> object that a second call could wait on.
>
> Better, maybe it could add itself to a list of such things and then we
> could do this in one pass.

I think we can adjust this logic a little bit to cover the race condition:

The 1st call to lm_expire_lock returns whether the client needs to be expired.

Before we make the 2nd call, we save 'lm_expire_lock' into a local
variable, then drop the spinlock and use the local variable to make the
2nd call so that we do not reference 'cfl'. The argument of the 2nd call
is the opaque return value from the 1st call.

nfsd4_fl_expire_lock also needs some adjustment to support the above.
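A minimal userspace sketch of that adjustment (the names and the split into two functions are invented for illustration): the test-only pass, made under the lock, returns an opaque handle rather than a bool; the second pass operates on the handle alone, so 'cfl' is never dereferenced after flc_lock is dropped:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Illustrative stand-ins; not the real file_lock / nfs4_client types. */
struct demo_client { int expired; };
struct demo_lock   { struct demo_client *owner; };

/* 1st call, under flc_lock: instead of returning bool, hand back an
 * opaque cookie that stays valid on its own (real code would bump a
 * refcount on the nfs4_client here). */
static void *expire_lock_test(struct demo_lock *cfl)
{
    return cfl->owner;
}

/* 2nd call, after flc_lock is dropped: works purely from the cookie. */
static bool expire_lock_resolve(void *cookie)
{
    struct demo_client *clp = cookie;

    clp->expired = 1;           /* stands in for expire_client() */
    return true;
}

static bool resolve_conflict(struct demo_lock *cfl)
{
    void *cookie;

    /* spin_lock(&ctx->flc_lock) conceptually held here */
    cookie = expire_lock_test(cfl);
    if (cookie == NULL)
        return false;
    /* spin_unlock(&ctx->flc_lock): 'cfl' may be freed from here on,
     * but only the cookie is touched below. */
    return expire_lock_resolve(cookie);
}
```

The design choice here is that the cookie, not the file_lock, carries the lifetime guarantee across the unlock, which is what the description above proposes.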

-Dai

>
> --b.
>
>>>> +               goto out;
>>>>          }
>>>>          fl->fl_type = F_UNLCK;
>>>> out:
>>>> @@ -1140,6 +1150,7 @@ static int posix_lock_inode(struct inode
>>>> *inode, struct file_lock *request,
>>>>          int error;
>>>>          bool added = false;
>>>>          LIST_HEAD(dispose);
>>>> +       bool ret;
>>>>
>>>>          ctx = locks_get_lock_context(inode, request->fl_type);
>>>>          if (!ctx)
>>>> @@ -1166,9 +1177,20 @@ static int posix_lock_inode(struct inode
>>>> *inode, struct file_lock *request,
>>>>           * blocker's list of waiters and the global blocked_hash.
>>>>           */
>>>>          if (request->fl_type != F_UNLCK) {
>>>> +retry:
>>>>                  list_for_each_entry(fl, &ctx->flc_posix, fl_list) {
>>>>                          if (!posix_locks_conflict(request, fl))
>>>>                                  continue;
>>>> +                       if (fl->fl_lmops && fl->fl_lmops-
>>>>> lm_expire_lock &&
>>>> +                                       fl->fl_lmops-
>>>>> lm_expire_lock(fl, 1)) {
>>>> +                               spin_unlock(&ctx->flc_lock);
>>>> +                               percpu_up_read(&file_rwsem);
>>>> +                               ret = fl->fl_lmops-
>>>>> lm_expire_lock(fl, 0);
>>>> +                               percpu_down_read(&file_rwsem);
>>>> +                               spin_lock(&ctx->flc_lock);
>>>> +                               if (ret)
>>>> +                                       goto retry;
>>>> +                       }
>> ditto.
>>
>>>>                          if (conflock)
>>>>                                  locks_copy_conflock(conflock, fl);
>>>>                          error = -EAGAIN;
>>>> diff --git a/include/linux/fs.h b/include/linux/fs.h
>>>> index e7a633353fd2..1a76b6451398 100644
>>>> --- a/include/linux/fs.h
>>>> +++ b/include/linux/fs.h
>>>> @@ -1071,6 +1071,7 @@ struct lock_manager_operations {
>>>>          int (*lm_change)(struct file_lock *, int, struct list_head
>>>> *);
>>>>          void (*lm_setup)(struct file_lock *, void **);
>>>>          bool (*lm_breaker_owns_lease)(struct file_lock *);
>>>> +       bool (*lm_expire_lock)(struct file_lock *fl, bool
>>>> testonly);
>>>> };
>>>>
>>>> struct lock_manager {
>>>> -- 
>>>> 2.9.5
>>>>
>>> --
>>> Chuck Lever
>>>
>>>
>>>
>> -- 
>> Trond Myklebust
>> Linux NFS client maintainer, Hammerspace
>> trond.myklebust@hammerspace.com
>>
>>

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH RFC v6 2/2] nfsd: Initial implementation of NFSv4 Courteous Server
  2021-12-06 19:55   ` Chuck Lever III
@ 2021-12-06 21:44     ` dai.ngo
  2021-12-06 22:30       ` Chuck Lever III
  2021-12-08 15:54     ` dai.ngo
  1 sibling, 1 reply; 24+ messages in thread
From: dai.ngo @ 2021-12-06 21:44 UTC (permalink / raw)
  To: Chuck Lever III; +Cc: Bruce Fields, Linux NFS Mailing List, linux-fsdevel


On 12/6/21 11:55 AM, Chuck Lever III wrote:
> Hi Dai-
>
> Some comments and questions below:
>
>
>> On Dec 6, 2021, at 12:59 PM, Dai Ngo <dai.ngo@oracle.com> wrote:
>>
>> Currently an NFSv4 client must maintain its lease by using at least
>> one of the state tokens or, if nothing else, by issuing a RENEW (4.0), or
>> a singleton SEQUENCE (4.1) at least once during each lease period. If the
>> client fails to renew the lease, for any reason, the Linux server expunges
>> the state tokens immediately upon detection of the "failure to renew the
>> lease" condition and begins returning NFS4ERR_EXPIRED if the client should
>> reconnect and attempt to use the (now) expired state.
>>
>> The default lease period for the Linux server is 90 seconds.  The typical
>> client cuts that in half and will issue a lease renewing operation every
>> 45 seconds. The 90 second lease period is very short considering the
>> potential for moderately long term network partitions.  A network partition
>> refers to any loss of network connectivity between the NFS client and the
>> NFS server, regardless of its root cause.  This includes NIC failures, NIC
>> driver bugs, network misconfigurations & administrative errors, routers &
>> switches crashing and/or having software updates applied, even down to
>> cables being physically pulled.  In most cases, these network failures are
>> transient, although the duration is unknown.
>>
>> A server which does not immediately expunge the state on lease expiration
>> is known as a Courteous Server.  A Courteous Server continues to recognize
>> previously generated state tokens as valid until conflict arises between
>> the expired state and the requests from another client, or the server
>> reboots.
>>
>> The initial implementation of the Courteous Server will do the following:
>>
>> . when the laundromat thread detects an expired client and if that client
>> still has established states on the Linux server and there are no waiters
>> for the client's locks, then mark the client as a COURTESY_CLIENT and skip
>> destroying the client and all its states, otherwise destroy the client as
>> usual.
>>
>> . detects conflict of OPEN request with COURTESY_CLIENT, destroys the
>> expired client and all its states, skips the delegation recall then allows
>> the conflicting request to succeed.
>>
>> . detects conflict of LOCK/LOCKT, NLM LOCK and TEST, and local locks
>> requests with COURTESY_CLIENT, destroys the expired client and all its
>> states then allows the conflicting request to succeed.
>>
>> Signed-off-by: Dai Ngo <dai.ngo@oracle.com>
>> ---
>> fs/nfsd/nfs4state.c | 293 +++++++++++++++++++++++++++++++++++++++++++++++++++-
>> fs/nfsd/state.h     |   3 +
>> 2 files changed, 293 insertions(+), 3 deletions(-)
>>
>> diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
>> index 3f4027a5de88..759f61dc6685 100644
>> --- a/fs/nfsd/nfs4state.c
>> +++ b/fs/nfsd/nfs4state.c
>> @@ -125,6 +125,11 @@ static void free_session(struct nfsd4_session *);
>> static const struct nfsd4_callback_ops nfsd4_cb_recall_ops;
>> static const struct nfsd4_callback_ops nfsd4_cb_notify_lock_ops;
>>
>> +static struct workqueue_struct *laundry_wq;
>> +static void laundromat_main(struct work_struct *);
>> +
>> +static int courtesy_client_expiry = (24 * 60 * 60);	/* in secs */
>> +
>> static bool is_session_dead(struct nfsd4_session *ses)
>> {
>> 	return ses->se_flags & NFS4_SESSION_DEAD;
>> @@ -172,6 +177,7 @@ renew_client_locked(struct nfs4_client *clp)
>>
>> 	list_move_tail(&clp->cl_lru, &nn->client_lru);
>> 	clp->cl_time = ktime_get_boottime_seconds();
>> +	clear_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags);
>> }
>>
>> static void put_client_renew_locked(struct nfs4_client *clp)
>> @@ -2389,6 +2395,10 @@ static int client_info_show(struct seq_file *m, void *v)
>> 		seq_puts(m, "status: confirmed\n");
>> 	else
>> 		seq_puts(m, "status: unconfirmed\n");
>> +	seq_printf(m, "courtesy client: %s\n",
>> +		test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags) ? "yes" : "no");
>> +	seq_printf(m, "seconds from last renew: %lld\n",
>> +		ktime_get_boottime_seconds() - clp->cl_time);
>> 	seq_printf(m, "name: ");
>> 	seq_quote_mem(m, clp->cl_name.data, clp->cl_name.len);
>> 	seq_printf(m, "\nminor version: %d\n", clp->cl_minorversion);
>> @@ -4662,6 +4672,33 @@ static void nfsd_break_one_deleg(struct nfs4_delegation *dp)
>> 	nfsd4_run_cb(&dp->dl_recall);
>> }
>>
>> +/*
>> + * This function is called when a file is opened and there is a
>> + * delegation conflict with another client. If the other client
>> + * is a courtesy client then kick start the laundromat to destroy
>> + * it.
>> + */
>> +static bool
>> +nfsd_check_courtesy_client(struct nfs4_delegation *dp)
>> +{
>> +	struct svc_rqst *rqst;
>> +	struct nfs4_client *clp = dp->dl_recall.cb_clp;
>> +	struct nfsd_net *nn = net_generic(clp->net, nfsd_net_id);
>> +
>> +	if (!i_am_nfsd())
>> +		goto out;
>> +	rqst = kthread_data(current);
>> +	if (rqst->rq_prog != NFS_PROGRAM || rqst->rq_vers < 4)
>> +		return false;
>> +out:
>> +	if (test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags)) {
>> +		set_bit(NFSD4_DESTROY_COURTESY_CLIENT, &clp->cl_flags);
>> +		mod_delayed_work(laundry_wq, &nn->laundromat_work, 0);
>> +		return true;
>> +	}
>> +	return false;
>> +}
>> +
>> /* Called from break_lease() with i_lock held. */
>> static bool
>> nfsd_break_deleg_cb(struct file_lock *fl)
>> @@ -4670,6 +4707,8 @@ nfsd_break_deleg_cb(struct file_lock *fl)
>> 	struct nfs4_delegation *dp = (struct nfs4_delegation *)fl->fl_owner;
>> 	struct nfs4_file *fp = dp->dl_stid.sc_file;
>>
>> +	if (nfsd_check_courtesy_client(dp))
>> +		return false;
>> 	trace_nfsd_cb_recall(&dp->dl_stid);
>>
>> 	/*
>> @@ -4912,6 +4951,136 @@ nfsd4_truncate(struct svc_rqst *rqstp, struct svc_fh *fh,
>> 	return nfsd_setattr(rqstp, fh, &iattr, 0, (time64_t)0);
>> }
>>
>> +static bool
>> +__nfs4_check_deny_bmap(struct nfs4_ol_stateid *stp, u32 access,
>> +			bool share_access)
>> +{
>> +	if (share_access) {
>> +		if (!stp->st_deny_bmap)
>> +			return false;
>> +
>> +		if ((stp->st_deny_bmap & (1 << NFS4_SHARE_DENY_BOTH)) ||
> Aren't the NFS4_SHARE_DENY macros already bit masks?
>
> NFS4_SHARE_DENY_BOTH is (NFS4_SHARE_DENY_READ | NFS4_SHARE_DENY_WRITE).

I think the protocol defines these as constants and nfsd uses them
to set the bitmap in st_deny_bmap. See set_deny().

>
>
>> +			(access & NFS4_SHARE_ACCESS_READ &&
>> +				stp->st_deny_bmap & (1 << NFS4_SHARE_DENY_READ)) ||
>> +			(access & NFS4_SHARE_ACCESS_WRITE &&
>> +				stp->st_deny_bmap & (1 << NFS4_SHARE_DENY_WRITE))) {
>> +			return true;
>> +		}
>> +		return false;
>> +	}
>> +	if ((access & NFS4_SHARE_DENY_BOTH) ||
>> +		(access & NFS4_SHARE_DENY_READ &&
>> +			stp->st_access_bmap & (1 << NFS4_SHARE_ACCESS_READ)) ||
>> +		(access & NFS4_SHARE_DENY_WRITE &&
>> +			stp->st_access_bmap & (1 << NFS4_SHARE_ACCESS_WRITE))) {
>> +		return true;
>> +	}
>> +	return false;
>> +}
>> +
>> +/*
>> + * access: if share_access is true then check access mode else check deny mode
>> + */
>> +static bool
>> +nfs4_check_deny_bmap(struct nfs4_client *clp, struct nfs4_file *fp,
>> +		struct nfs4_ol_stateid *st, u32 access, bool share_access)
>> +{
>> +	int i;
>> +	struct nfs4_openowner *oo;
>> +	struct nfs4_stateowner *so, *tmp;
>> +	struct nfs4_ol_stateid *stp, *stmp;
>> +
>> +	spin_lock(&clp->cl_lock);
>> +	for (i = 0; i < OWNER_HASH_SIZE; i++) {
>> +		list_for_each_entry_safe(so, tmp, &clp->cl_ownerstr_hashtbl[i],
>> +					so_strhash) {
>> +			if (!so->so_is_open_owner)
>> +				continue;
>> +			oo = openowner(so);
>> +			list_for_each_entry_safe(stp, stmp,
>> +				&oo->oo_owner.so_stateids, st_perstateowner) {
>> +				if (stp == st || stp->st_stid.sc_file != fp)
>> +					continue;
>> +				if (__nfs4_check_deny_bmap(stp, access,
>> +							share_access)) {
>> +					spin_unlock(&clp->cl_lock);
>> +					return true;
>> +				}
>> +			}
>> +		}
>> +	}
>> +	spin_unlock(&clp->cl_lock);
>> +	return false;
>> +}
>> +
>> +/*
>> + * Function to check if the nfserr_share_denied error for 'fp' resulted
>> + * from conflict with courtesy clients then release their state to resolve
>> + * the conflict.
>> + *
>> + * Function returns:
>> + *	 0 -  no conflict with courtesy clients
>> + *	>0 -  conflict with courtesy clients resolved, try access/deny check again
>> + *	-1 -  conflict with courtesy clients being resolved in background
>> + *            return nfserr_jukebox to NFS client
>> + */
>> +static int
>> +nfs4_destroy_clnts_with_sresv_conflict(struct svc_rqst *rqstp,
>> +			struct nfs4_file *fp, struct nfs4_ol_stateid *stp,
>> +			u32 access, bool share_access)
>> +{
>> +	int cnt = 0;
>> +	int async_cnt = 0;
>> +	bool no_retry = false;
>> +	struct nfs4_client *cl;
>> +	struct list_head *pos, *next, reaplist;
>> +	struct nfsd_net *nn = net_generic(SVC_NET(rqstp), nfsd_net_id);
>> +
>> +	INIT_LIST_HEAD(&reaplist);
>> +	spin_lock(&nn->client_lock);
>> +	list_for_each_safe(pos, next, &nn->client_lru) {
>> +		cl = list_entry(pos, struct nfs4_client, cl_lru);
>> +		/*
>> +		 * check all nfs4_ol_stateid of this client
>> +		 * for conflicts with 'access'mode.
>> +		 */
>> +		if (nfs4_check_deny_bmap(cl, fp, stp, access, share_access)) {
>> +			if (!test_bit(NFSD4_COURTESY_CLIENT, &cl->cl_flags)) {
>> +				/* conflict with non-courtesy client */
>> +				no_retry = true;
>> +				cnt = 0;
>> +				goto out;
>> +			}
>> +			/*
>> +			 * if too many to resolve synchronously
>> +			 * then do the rest in background
>> +			 */
>> +			if (cnt > 100) {
>> +				set_bit(NFSD4_DESTROY_COURTESY_CLIENT, &cl->cl_flags);
>> +				async_cnt++;
>> +				continue;
>> +			}
>> +			if (mark_client_expired_locked(cl))
>> +				continue;
>> +			cnt++;
>> +			list_add(&cl->cl_lru, &reaplist);
>> +		}
>> +	}
> Bruce suggested simply returning NFS4ERR_DELAY for all cases.
> That would simplify this quite a bit for what is a rare edge
> case.

If we always return NFS4ERR_DELAY then we have to modify pynfs' OPEN18
to handle NFS4ERR_DELAY. I don't think this code is overly complicated,
and in real usage nfsd should be able to resolve most share reservation
conflicts synchronously, avoiding the NFS4ERR_DELAY. It would also be
nice not to have to modify the pynfs test.

>
>
>> +out:
>> +	spin_unlock(&nn->client_lock);
>> +	list_for_each_safe(pos, next, &reaplist) {
>> +		cl = list_entry(pos, struct nfs4_client, cl_lru);
>> +		list_del_init(&cl->cl_lru);
>> +		expire_client(cl);
>> +	}
> A slightly nicer construct here would be something like this:
>
> 	while ((pos = list_del_first(&reaplist)))
> 		expire_client(list_entry(pos, struct nfs4_client, cl_lru));

You meant llist_del_first?
The above code follows the style used in nfs4_laundromat. Can I keep it
the same to avoid retesting?

>
>
>> +	if (async_cnt) {
>> +		mod_delayed_work(laundry_wq, &nn->laundromat_work, 0);
>> +		if (!no_retry)
>> +			cnt = -1;
>> +	}
>> +	return cnt;
>> +}
>> +
>> static __be32 nfs4_get_vfs_file(struct svc_rqst *rqstp, struct nfs4_file *fp,
>> 		struct svc_fh *cur_fh, struct nfs4_ol_stateid *stp,
>> 		struct nfsd4_open *open)
>> @@ -4921,6 +5090,7 @@ static __be32 nfs4_get_vfs_file(struct svc_rqst *rqstp, struct nfs4_file *fp,
>> 	int oflag = nfs4_access_to_omode(open->op_share_access);
>> 	int access = nfs4_access_to_access(open->op_share_access);
>> 	unsigned char old_access_bmap, old_deny_bmap;
>> +	int cnt = 0;
>>
>> 	spin_lock(&fp->fi_lock);
>>
>> @@ -4928,16 +5098,38 @@ static __be32 nfs4_get_vfs_file(struct svc_rqst *rqstp, struct nfs4_file *fp,
>> 	 * Are we trying to set a deny mode that would conflict with
>> 	 * current access?
>> 	 */
>> +chk_deny:
>> 	status = nfs4_file_check_deny(fp, open->op_share_deny);
>> 	if (status != nfs_ok) {
>> 		spin_unlock(&fp->fi_lock);
>> +		if (status != nfserr_share_denied)
>> +			goto out;
>> +		cnt = nfs4_destroy_clnts_with_sresv_conflict(rqstp, fp,
>> +				stp, open->op_share_deny, false);
>> +		if (cnt > 0) {
>> +			spin_lock(&fp->fi_lock);
>> +			goto chk_deny;
> I'm pondering whether a distributed set of clients can
> cause this loop to never terminate.

I'm not clear on how we can get into an infinite loop, can you elaborate?

To get into an infinite loop, there would have to be a stream of new
clients that open the file with a conflicting access mode *and* become
courtesy clients before we finish expiring all the existing courtesy
clients whose access conflicts with this OPEN.

>
>
>> +		}
>> +		if (cnt == -1)
>> +			status = nfserr_jukebox;
>> 		goto out;
>> 	}
>>
>> 	/* set access to the file */
>> +get_access:
>> 	status = nfs4_file_get_access(fp, open->op_share_access);
>> 	if (status != nfs_ok) {
>> 		spin_unlock(&fp->fi_lock);
>> +		if (status != nfserr_share_denied)
>> +			goto out;
>> +		cnt = nfs4_destroy_clnts_with_sresv_conflict(rqstp, fp,
>> +				stp, open->op_share_access, true);
>> +		if (cnt > 0) {
>> +			spin_lock(&fp->fi_lock);
>> +			goto get_access;
> Ditto.
>
>
>> +		}
>> +		if (cnt == -1)
>> +			status = nfserr_jukebox;
>> 		goto out;
>> 	}
>>
>> @@ -5289,6 +5481,22 @@ static void nfsd4_deleg_xgrade_none_ext(struct nfsd4_open *open,
>> 	 */
>> }
>>
>> +static bool
>> +nfs4_destroy_courtesy_client(struct nfs4_client *clp)
>> +{
>> +	struct nfsd_net *nn = net_generic(clp->net, nfsd_net_id);
>> +
>> +	spin_lock(&nn->client_lock);
>> +	if (!test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags) ||
>> +			mark_client_expired_locked(clp)) {
>> +		spin_unlock(&nn->client_lock);
>> +		return false;
>> +	}
>> +	spin_unlock(&nn->client_lock);
>> +	expire_client(clp);
>> +	return true;
>> +}
>> +
> Perhaps nfs4_destroy_courtesy_client() could be merged into
> nfsd4_fl_expire_lock(), it's only caller.

ok, will be in v7 patch.

>
>
>> __be32
>> nfsd4_process_open2(struct svc_rqst *rqstp, struct svc_fh *current_fh, struct nfsd4_open *open)
>> {
>> @@ -5572,6 +5780,47 @@ static void nfsd4_ssc_expire_umount(struct nfsd_net *nn)
>> }
>> #endif
>>
>> +static
>> +bool nfs4_anylock_conflict(struct nfs4_client *clp)
>> +{
>> +	int i;
>> +	struct nfs4_stateowner *so, *tmp;
>> +	struct nfs4_lockowner *lo;
>> +	struct nfs4_ol_stateid *stp;
>> +	struct nfs4_file *nf;
>> +	struct inode *ino;
>> +	struct file_lock_context *ctx;
>> +	struct file_lock *fl;
>> +
>> +	for (i = 0; i < OWNER_HASH_SIZE; i++) {
>> +		/* scan each lock owner */
>> +		list_for_each_entry_safe(so, tmp, &clp->cl_ownerstr_hashtbl[i],
>> +				so_strhash) {
>> +			if (so->so_is_open_owner)
>> +				continue;
> Isn't cl_lock needed to protect the cl_ownerstr_hashtbl lists?

Yes, thanks Chuck! Will be in v7.

>
>
>> +
>> +			/* scan lock states of this lock owner */
>> +			lo = lockowner(so);
>> +			list_for_each_entry(stp, &lo->lo_owner.so_stateids,
>> +					st_perstateowner) {
>> +				nf = stp->st_stid.sc_file;
>> +				ino = nf->fi_inode;
>> +				ctx = ino->i_flctx;
>> +				if (!ctx)
>> +					continue;
>> +				/* check each lock belongs to this lock state */
>> +				list_for_each_entry(fl, &ctx->flc_posix, fl_list) {
>> +					if (fl->fl_owner != lo)
>> +						continue;
>> +					if (!list_empty(&fl->fl_blocked_requests))
>> +						return true;
>> +				}
>> +			}
>> +		}
>> +	}
>> +	return false;
>> +}
>> +
>> static time64_t
>> nfs4_laundromat(struct nfsd_net *nn)
>> {
>> @@ -5587,7 +5836,9 @@ nfs4_laundromat(struct nfsd_net *nn)
>> 	};
>> 	struct nfs4_cpntf_state *cps;
>> 	copy_stateid_t *cps_t;
>> +	struct nfs4_stid *stid;
>> 	int i;
>> +	int id = 0;
>>
>> 	if (clients_still_reclaiming(nn)) {
>> 		lt.new_timeo = 0;
>> @@ -5608,8 +5859,33 @@ nfs4_laundromat(struct nfsd_net *nn)
>> 	spin_lock(&nn->client_lock);
>> 	list_for_each_safe(pos, next, &nn->client_lru) {
>> 		clp = list_entry(pos, struct nfs4_client, cl_lru);
>> +		if (test_bit(NFSD4_DESTROY_COURTESY_CLIENT, &clp->cl_flags)) {
>> +			clear_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags);
>> +			goto exp_client;
>> +		}
>> +		if (test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags)) {
>> +			if (ktime_get_boottime_seconds() >= clp->courtesy_client_expiry)
>> +				goto exp_client;
>> +			/*
>> +			 * after umount, v4.0 client is still
>> +			 * around waiting to be expired
>> +			 */
>> +			if (clp->cl_minorversion)
>> +				continue;
>> +		}
>> 		if (!state_expired(&lt, clp->cl_time))
>> 			break;
>> +		spin_lock(&clp->cl_lock);
>> +		stid = idr_get_next(&clp->cl_stateids, &id);
>> +		spin_unlock(&clp->cl_lock);
>> +		if (stid && !nfs4_anylock_conflict(clp)) {
>> +			/* client still has states */
>> +			clp->courtesy_client_expiry =
>> +				ktime_get_boottime_seconds() + courtesy_client_expiry;
>> +			set_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags);
>> +			continue;
>> +		}
>> +exp_client:
>> 		if (mark_client_expired_locked(clp))
>> 			continue;
>> 		list_add(&clp->cl_lru, &reaplist);
>> @@ -5689,9 +5965,6 @@ nfs4_laundromat(struct nfsd_net *nn)
>> 	return max_t(time64_t, lt.new_timeo, NFSD_LAUNDROMAT_MINTIMEOUT);
>> }
>>
>> -static struct workqueue_struct *laundry_wq;
>> -static void laundromat_main(struct work_struct *);
>> -
>> static void
>> laundromat_main(struct work_struct *laundry)
>> {
>> @@ -6496,6 +6769,19 @@ nfs4_transform_lock_offset(struct file_lock *lock)
>> 		lock->fl_end = OFFSET_MAX;
>> }
>>
>> +/* return true if lock was expired else return false */
>> +static bool
>> +nfsd4_fl_expire_lock(struct file_lock *fl, bool testonly)
>> +{
>> +	struct nfs4_lockowner *lo = (struct nfs4_lockowner *)fl->fl_owner;
>> +	struct nfs4_client *clp = lo->lo_owner.so_client;
>> +
>> +	if (testonly)
>> +		return test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags) ?
>> +			true : false;
> Hm. I know test_bit() returns an integer rather than a boolean, but
> the ternary here is a bit unwieldy. How about just:
>
> 		return !!test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags);

ok, will be in v7.

>
>
>> +	return nfs4_destroy_courtesy_client(clp);
>> +}
>> +
>> static fl_owner_t
>> nfsd4_fl_get_owner(fl_owner_t owner)
>> {
>> @@ -6543,6 +6829,7 @@ static const struct lock_manager_operations nfsd_posix_mng_ops  = {
>> 	.lm_notify = nfsd4_lm_notify,
>> 	.lm_get_owner = nfsd4_fl_get_owner,
>> 	.lm_put_owner = nfsd4_fl_put_owner,
>> +	.lm_expire_lock = nfsd4_fl_expire_lock,
>> };
>>
>> static inline void
>> diff --git a/fs/nfsd/state.h b/fs/nfsd/state.h
>> index e73bdbb1634a..93e30b101578 100644
>> --- a/fs/nfsd/state.h
>> +++ b/fs/nfsd/state.h
>> @@ -345,6 +345,8 @@ struct nfs4_client {
>> #define NFSD4_CLIENT_UPCALL_LOCK	(5)	/* upcall serialization */
>> #define NFSD4_CLIENT_CB_FLAG_MASK	(1 << NFSD4_CLIENT_CB_UPDATE | \
>> 					 1 << NFSD4_CLIENT_CB_KILL)
>> +#define NFSD4_COURTESY_CLIENT		(6)	/* be nice to expired client */
>> +#define NFSD4_DESTROY_COURTESY_CLIENT	(7)
>> 	unsigned long		cl_flags;
>> 	const struct cred	*cl_cb_cred;
>> 	struct rpc_clnt		*cl_cb_client;
>> @@ -385,6 +387,7 @@ struct nfs4_client {
>> 	struct list_head	async_copies;	/* list of async copies */
>> 	spinlock_t		async_lock;	/* lock for async copies */
>> 	atomic_t		cl_cb_inflight;	/* Outstanding callbacks */
>> +	int			courtesy_client_expiry;
>> };
>>
>> /* struct nfs4_client_reset
>> -- 
>> 2.9.5
>>
> --
> Chuck Lever
>
>
>

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH RFC v6 1/2] fs/lock: add new callback, lm_expire_lock, to lock_manager_operations
  2021-12-06 20:36         ` dai.ngo
@ 2021-12-06 22:05           ` Trond Myklebust
  2021-12-06 23:07             ` dai.ngo
  0 siblings, 1 reply; 24+ messages in thread
From: Trond Myklebust @ 2021-12-06 22:05 UTC (permalink / raw)
  To: bfields, dai.ngo; +Cc: jlayton, linux-nfs, linux-fsdevel, viro, chuck.lever

On Mon, 2021-12-06 at 12:36 -0800, dai.ngo@oracle.com wrote:
> 
> On 12/6/21 12:05 PM, bfields@fieldses.org wrote:
> > On Mon, Dec 06, 2021 at 07:52:29PM +0000, Trond Myklebust wrote:
> > > On Mon, 2021-12-06 at 18:39 +0000, Chuck Lever III wrote:
> > > > 
> > > > > On Dec 6, 2021, at 12:59 PM, Dai Ngo <dai.ngo@oracle.com>
> > > > > wrote:
> > > > > 
> > > > > Add new callback, lm_expire_lock, to lock_manager_operations
> > > > > to
> > > > > allow
> > > > > the lock manager to take appropriate action to resolve the
> > > > > lock
> > > > > conflict
> > > > > if possible. The callback takes 2 arguments, file_lock of the
> > > > > blocker
> > > > > and a testonly flag:
> > > > > 
> > > > > testonly = 1  check and return true if lock conflict can be
> > > > > resolved
> > > > >               else return false.
> > > > > testonly = 0  resolve the conflict if possible, return true
> > > > > if
> > > > > conflict
> > > > >               was resolved else return false.
> > > > > 
> > > > > Lock manager, such as NFSv4 courteous server, uses this
> > > > > callback to
> > > > > resolve conflict by destroying lock owner, or the NFSv4
> > > > > courtesy
> > > > > client
> > > > > (client that has expired but is allowed to maintain its states)
> > > > > that
> > > > > owns
> > > > > the lock.
> > > > > 
> > > > > Signed-off-by: Dai Ngo <dai.ngo@oracle.com>
> > > > Al, Jeff, as co-maintainers of record for fs/locks.c, can you
> > > > give
> > > > an Ack or Reviewed-by? I'd like to take this patch through the
> > > > nfsd
> > > > tree for v5.17. Thanks for your time!
> > > > 
> > > > 
> > > > > ---
> > > > > fs/locks.c         | 28 +++++++++++++++++++++++++---
> > > > > include/linux/fs.h |  1 +
> > > > > 2 files changed, 26 insertions(+), 3 deletions(-)
> > > > > 
> > > > > diff --git a/fs/locks.c b/fs/locks.c
> > > > > index 3d6fb4ae847b..0fef0a6322c7 100644
> > > > > --- a/fs/locks.c
> > > > > +++ b/fs/locks.c
> > > > > @@ -954,6 +954,7 @@ posix_test_lock(struct file *filp, struct
> > > > > file_lock *fl)
> > > > >          struct file_lock *cfl;
> > > > >          struct file_lock_context *ctx;
> > > > >          struct inode *inode = locks_inode(filp);
> > > > > +       bool ret;
> > > > > 
> > > > >          ctx = smp_load_acquire(&inode->i_flctx);
> > > > >          if (!ctx || list_empty_careful(&ctx->flc_posix)) {
> > > > > @@ -962,11 +963,20 @@ posix_test_lock(struct file *filp,
> > > > > struct
> > > > > file_lock *fl)
> > > > >          }
> > > > > 
> > > > >          spin_lock(&ctx->flc_lock);
> > > > > +retry:
> > > > >          list_for_each_entry(cfl, &ctx->flc_posix, fl_list) {
> > > > > -               if (posix_locks_conflict(fl, cfl)) {
> > > > > -                       locks_copy_conflock(fl, cfl);
> > > > > -                       goto out;
> > > > > +               if (!posix_locks_conflict(fl, cfl))
> > > > > +                       continue;
> > > > > +               if (cfl->fl_lmops && cfl->fl_lmops-
> > > > > >lm_expire_lock
> > > > > &&
> > > > > +                               cfl->fl_lmops-
> > > > > >lm_expire_lock(cfl,
> > > > > 1)) {
> > > > > +                       spin_unlock(&ctx->flc_lock);
> > > > > +                       ret = cfl->fl_lmops-
> > > > > >lm_expire_lock(cfl,
> > > > > 0);
> > > > > +                       spin_lock(&ctx->flc_lock);
> > > > > +                       if (ret)
> > > > > +                               goto retry;
> > > > >                  }
> > > > > +               locks_copy_conflock(fl, cfl);
> > > How do you know 'cfl' still points to a valid object after you've
> > > dropped the spin lock that was protecting the list?
> > Ugh, good point, I should have noticed that when I suggested this
> > approach....
> > 
> > Maybe the first call could instead return return some reference-
> > counted
> > object that a second call could wait on.
> > 
> > Better, maybe it could add itself to a list of such things and then
> > we
> > could do this in one pass.
> 
> I think we adjust this logic a little bit to cover race condition:
> 
> The 1st call to lm_expire_lock returns the client needs to be
> expired.
> 
> Before we make the 2nd call, we save the 'lm_expire_lock' into a
> local
> variable then drop the spinlock, and use the local variable to make
> the
> 2nd call so that we do not reference 'cfl'. The argument of the
> second
> is the opaque return value from the 1st call.
> 
> nfsd4_fl_expire_lock also needs some adjustment to support the above.
> 

It's not just the fact that you're using 'cfl' in the actual call to
lm_expire_lock(), but you're also using it after retaking the spinlock.


-- 
Trond Myklebust
Linux NFS client maintainer, Hammerspace
trond.myklebust@hammerspace.com



^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH RFC v6 2/2] nfsd: Initial implementation of NFSv4 Courteous Server
  2021-12-06 21:44     ` dai.ngo
@ 2021-12-06 22:30       ` Chuck Lever III
  2021-12-06 22:52         ` Bruce Fields
  0 siblings, 1 reply; 24+ messages in thread
From: Chuck Lever III @ 2021-12-06 22:30 UTC (permalink / raw)
  To: Dai Ngo; +Cc: Bruce Fields, Linux NFS Mailing List, linux-fsdevel



> On Dec 6, 2021, at 4:44 PM, Dai Ngo <dai.ngo@oracle.com> wrote:
> 
> 
> On 12/6/21 11:55 AM, Chuck Lever III wrote:
>> Hi Dai-
>> 
>> Some comments and questions below:
>> 
>> 
>>> On Dec 6, 2021, at 12:59 PM, Dai Ngo <dai.ngo@oracle.com> wrote:
>>> 
>>> Currently an NFSv4 client must maintain its lease by using at least
>>> one of the state tokens or, if nothing else, by issuing a RENEW (4.0), or
>>> a singleton SEQUENCE (4.1) at least once during each lease period. If the
>>> client fails to renew the lease, for any reason, the Linux server expunges
>>> the state tokens immediately upon detection of the "failure to renew the
>>> lease" condition and begins returning NFS4ERR_EXPIRED if the client should
>>> reconnect and attempt to use the (now) expired state.
>>> 
>>> The default lease period for the Linux server is 90 seconds.  The typical
>>> client cuts that in half and will issue a lease renewing operation every
>>> 45 seconds. The 90 second lease period is very short considering the
>>> potential for moderately long term network partitions.  A network partition
>>> refers to any loss of network connectivity between the NFS client and the
>>> NFS server, regardless of its root cause.  This includes NIC failures, NIC
>>> driver bugs, network misconfigurations & administrative errors, routers &
>>> switches crashing and/or having software updates applied, even down to
>>> cables being physically pulled.  In most cases, these network failures are
>>> transient, although the duration is unknown.
>>> 
>>> A server which does not immediately expunge the state on lease expiration
>>> is known as a Courteous Server.  A Courteous Server continues to recognize
>>> previously generated state tokens as valid until conflict arises between
>>> the expired state and the requests from another client, or the server
>>> reboots.
>>> 
>>> The initial implementation of the Courteous Server will do the following:
>>> 
>>> . when the laundromat thread detects an expired client and if that client
>>> still has established states on the Linux server and there are no waiters
>>> for the client's locks, then mark the client as a COURTESY_CLIENT and skip
>>> destroying the client and all its states, otherwise destroy the client as
>>> usual.
>>> 
>>> . detects conflict of OPEN request with COURTESY_CLIENT, destroys the
>>> expired client and all its states, skips the delegation recall then allows
>>> the conflicting request to succeed.
>>> 
>>> . detects conflict of LOCK/LOCKT, NLM LOCK and TEST, and local locks
>>> requests with COURTESY_CLIENT, destroys the expired client and all its
>>> states then allows the conflicting request to succeed.
>>> 
>>> Signed-off-by: Dai Ngo <dai.ngo@oracle.com>
>>> ---
>>> fs/nfsd/nfs4state.c | 293 +++++++++++++++++++++++++++++++++++++++++++++++++++-
>>> fs/nfsd/state.h     |   3 +
>>> 2 files changed, 293 insertions(+), 3 deletions(-)
>>> 
>>> diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
>>> index 3f4027a5de88..759f61dc6685 100644
>>> --- a/fs/nfsd/nfs4state.c
>>> +++ b/fs/nfsd/nfs4state.c
>>> @@ -125,6 +125,11 @@ static void free_session(struct nfsd4_session *);
>>> static const struct nfsd4_callback_ops nfsd4_cb_recall_ops;
>>> static const struct nfsd4_callback_ops nfsd4_cb_notify_lock_ops;
>>> 
>>> +static struct workqueue_struct *laundry_wq;
>>> +static void laundromat_main(struct work_struct *);
>>> +
>>> +static int courtesy_client_expiry = (24 * 60 * 60);	/* in secs */

Btw, why is this a variable? It could be "static const int"
or even better, just a macro.


>>> +
>>> static bool is_session_dead(struct nfsd4_session *ses)
>>> {
>>> 	return ses->se_flags & NFS4_SESSION_DEAD;
>>> @@ -172,6 +177,7 @@ renew_client_locked(struct nfs4_client *clp)
>>> 
>>> 	list_move_tail(&clp->cl_lru, &nn->client_lru);
>>> 	clp->cl_time = ktime_get_boottime_seconds();
>>> +	clear_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags);
>>> }
>>> 
>>> static void put_client_renew_locked(struct nfs4_client *clp)
>>> @@ -2389,6 +2395,10 @@ static int client_info_show(struct seq_file *m, void *v)
>>> 		seq_puts(m, "status: confirmed\n");
>>> 	else
>>> 		seq_puts(m, "status: unconfirmed\n");
>>> +	seq_printf(m, "courtesy client: %s\n",
>>> +		test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags) ? "yes" : "no");
>>> +	seq_printf(m, "seconds from last renew: %lld\n",
>>> +		ktime_get_boottime_seconds() - clp->cl_time);
>>> 	seq_printf(m, "name: ");
>>> 	seq_quote_mem(m, clp->cl_name.data, clp->cl_name.len);
>>> 	seq_printf(m, "\nminor version: %d\n", clp->cl_minorversion);
>>> @@ -4662,6 +4672,33 @@ static void nfsd_break_one_deleg(struct nfs4_delegation *dp)
>>> 	nfsd4_run_cb(&dp->dl_recall);
>>> }
>>> 
>>> +/*
>>> + * This function is called when a file is opened and there is a
>>> + * delegation conflict with another client. If the other client
>>> + * is a courtesy client then kick start the laundromat to destroy
>>> + * it.
>>> + */
>>> +static bool
>>> +nfsd_check_courtesy_client(struct nfs4_delegation *dp)
>>> +{
>>> +	struct svc_rqst *rqst;
>>> +	struct nfs4_client *clp = dp->dl_recall.cb_clp;
>>> +	struct nfsd_net *nn = net_generic(clp->net, nfsd_net_id);
>>> +
>>> +	if (!i_am_nfsd())
>>> +		goto out;
>>> +	rqst = kthread_data(current);
>>> +	if (rqst->rq_prog != NFS_PROGRAM || rqst->rq_vers < 4)
>>> +		return false;
>>> +out:
>>> +	if (test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags)) {
>>> +		set_bit(NFSD4_DESTROY_COURTESY_CLIENT, &clp->cl_flags);
>>> +		mod_delayed_work(laundry_wq, &nn->laundromat_work, 0);
>>> +		return true;
>>> +	}
>>> +	return false;
>>> +}
>>> +
>>> /* Called from break_lease() with i_lock held. */
>>> static bool
>>> nfsd_break_deleg_cb(struct file_lock *fl)
>>> @@ -4670,6 +4707,8 @@ nfsd_break_deleg_cb(struct file_lock *fl)
>>> 	struct nfs4_delegation *dp = (struct nfs4_delegation *)fl->fl_owner;
>>> 	struct nfs4_file *fp = dp->dl_stid.sc_file;
>>> 
>>> +	if (nfsd_check_courtesy_client(dp))
>>> +		return false;
>>> 	trace_nfsd_cb_recall(&dp->dl_stid);
>>> 
>>> 	/*
>>> @@ -4912,6 +4951,136 @@ nfsd4_truncate(struct svc_rqst *rqstp, struct svc_fh *fh,
>>> 	return nfsd_setattr(rqstp, fh, &iattr, 0, (time64_t)0);
>>> }
>>> 
>>> +static bool
>>> +__nfs4_check_deny_bmap(struct nfs4_ol_stateid *stp, u32 access,
>>> +			bool share_access)
>>> +{
>>> +	if (share_access) {
>>> +		if (!stp->st_deny_bmap)
>>> +			return false;
>>> +
>>> +		if ((stp->st_deny_bmap & (1 << NFS4_SHARE_DENY_BOTH)) ||
>> Aren't the NFS4_SHARE_DENY macros already bit masks?
>> 
>> NFS4_SHARE_DENY_BOTH is (NFS4_SHARE_DENY_READ | NFS4_SHARE_DENY_WRITE).
> 
> I think the protocol defines these as constants and nfsd uses them
> to set the bitmap in st_deny_bmap. See set_deny().

OK, this is really confusing.

5142         set_deny(open->op_share_deny, stp);
5143         fp->fi_share_deny |= (open->op_share_deny & NFS4_SHARE_DENY_BOTH);

Here set_deny() is treating the contents of open->op_share_deny
as bit positions, but then upon return NFS4_SHARE_DENY_BOTH
is used directly as a bit mask. Am I reading this correctly?

But that's not your problem, so I'll let that be.

You need to refactor this passage to manage the code duplication
and the long lines, and use BIT() instead of open-coding the
shift.
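For illustration, a refactor along those lines might look like the following user-space sketch. The NFS4_SHARE_* values mirror the protocol constants, and BIT() matches the include/linux/bits.h macro, but `bmap_has_mode()` and `deny_conflicts_access()` are hypothetical helper names, not existing nfsd functions:

```c
#include <assert.h>

/* Protocol constants, used both as masks (the access argument) and as
 * bit positions (the st_deny_bmap encoding). Values mirror the spec. */
#define NFS4_SHARE_ACCESS_READ  0x1
#define NFS4_SHARE_ACCESS_WRITE 0x2
#define NFS4_SHARE_DENY_READ    0x1
#define NFS4_SHARE_DENY_WRITE   0x2
#define NFS4_SHARE_DENY_BOTH    0x3

#define BIT(n) (1UL << (n))	/* as in include/linux/bits.h */

/* Does the bit-position-encoded bmap record deny mode 'mode'? */
static int bmap_has_mode(unsigned long bmap, unsigned int mode)
{
	return (bmap & BIT(mode)) != 0;
}

/* One helper instead of the duplicated long-line conditionals. */
static int deny_conflicts_access(unsigned long deny_bmap, unsigned int access)
{
	if (bmap_has_mode(deny_bmap, NFS4_SHARE_DENY_BOTH))
		return 1;
	if ((access & NFS4_SHARE_ACCESS_READ) &&
	    bmap_has_mode(deny_bmap, NFS4_SHARE_DENY_READ))
		return 1;
	if ((access & NFS4_SHARE_ACCESS_WRITE) &&
	    bmap_has_mode(deny_bmap, NFS4_SHARE_DENY_WRITE))
		return 1;
	return 0;
}
```

The same two helpers could then serve both halves of __nfs4_check_deny_bmap(), removing the duplication.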


>>> +			(access & NFS4_SHARE_ACCESS_READ &&
>>> +				stp->st_deny_bmap & (1 << NFS4_SHARE_DENY_READ)) ||
>>> +			(access & NFS4_SHARE_ACCESS_WRITE &&
>>> +				stp->st_deny_bmap & (1 << NFS4_SHARE_DENY_WRITE))) {
>>> +			return true;
>>> +		}
>>> +		return false;
>>> +	}
>>> +	if ((access & NFS4_SHARE_DENY_BOTH) ||
>>> +		(access & NFS4_SHARE_DENY_READ &&
>>> +			stp->st_access_bmap & (1 << NFS4_SHARE_ACCESS_READ)) ||
>>> +		(access & NFS4_SHARE_DENY_WRITE &&
>>> +			stp->st_access_bmap & (1 << NFS4_SHARE_ACCESS_WRITE))) {
>>> +		return true;
>>> +	}
>>> +	return false;
>>> +}
>>> +
>>> +/*
>>> + * access: if share_access is true then check access mode else check deny mode
>>> + */
>>> +static bool
>>> +nfs4_check_deny_bmap(struct nfs4_client *clp, struct nfs4_file *fp,
>>> +		struct nfs4_ol_stateid *st, u32 access, bool share_access)
>>> +{
>>> +	int i;
>>> +	struct nfs4_openowner *oo;
>>> +	struct nfs4_stateowner *so, *tmp;
>>> +	struct nfs4_ol_stateid *stp, *stmp;
>>> +
>>> +	spin_lock(&clp->cl_lock);
>>> +	for (i = 0; i < OWNER_HASH_SIZE; i++) {
>>> +		list_for_each_entry_safe(so, tmp, &clp->cl_ownerstr_hashtbl[i],
>>> +					so_strhash) {
>>> +			if (!so->so_is_open_owner)
>>> +				continue;
>>> +			oo = openowner(so);
>>> +			list_for_each_entry_safe(stp, stmp,
>>> +				&oo->oo_owner.so_stateids, st_perstateowner) {
>>> +				if (stp == st || stp->st_stid.sc_file != fp)
>>> +					continue;
>>> +				if (__nfs4_check_deny_bmap(stp, access,
>>> +							share_access)) {
>>> +					spin_unlock(&clp->cl_lock);
>>> +					return true;
>>> +				}
>>> +			}
>>> +		}
>>> +	}
>>> +	spin_unlock(&clp->cl_lock);
>>> +	return false;
>>> +}
>>> +
>>> +/*
>>> + * Function to check whether the nfserr_share_denied error for 'fp'
>>> + * resulted from conflict with courtesy clients; if so, release their
>>> + * state to resolve the conflict.
>>> + *
>>> + * Function returns:
>>> + *	 0 -  no conflict with courtesy clients
>>> + *	>0 -  conflict with courtesy clients resolved, try access/deny check again
>>> + *	-1 -  conflict with courtesy clients being resolved in background
>>> + *            return nfserr_jukebox to NFS client
>>> + */
>>> +static int
>>> +nfs4_destroy_clnts_with_sresv_conflict(struct svc_rqst *rqstp,
>>> +			struct nfs4_file *fp, struct nfs4_ol_stateid *stp,
>>> +			u32 access, bool share_access)
>>> +{
>>> +	int cnt = 0;
>>> +	int async_cnt = 0;
>>> +	bool no_retry = false;
>>> +	struct nfs4_client *cl;
>>> +	struct list_head *pos, *next, reaplist;
>>> +	struct nfsd_net *nn = net_generic(SVC_NET(rqstp), nfsd_net_id);
>>> +
>>> +	INIT_LIST_HEAD(&reaplist);
>>> +	spin_lock(&nn->client_lock);
>>> +	list_for_each_safe(pos, next, &nn->client_lru) {
>>> +		cl = list_entry(pos, struct nfs4_client, cl_lru);
>>> +		/*
>>> +		 * check all nfs4_ol_stateid of this client
>>> +		 * for conflicts with 'access' mode.
>>> +		 */
>>> +		if (nfs4_check_deny_bmap(cl, fp, stp, access, share_access)) {
>>> +			if (!test_bit(NFSD4_COURTESY_CLIENT, &cl->cl_flags)) {
>>> +				/* conflict with non-courtesy client */
>>> +				no_retry = true;
>>> +				cnt = 0;
>>> +				goto out;
>>> +			}
>>> +			/*
>>> +			 * if too many to resolve synchronously
>>> +			 * then do the rest in background
>>> +			 */
>>> +			if (cnt > 100) {
>>> +				set_bit(NFSD4_DESTROY_COURTESY_CLIENT, &cl->cl_flags);
>>> +				async_cnt++;
>>> +				continue;
>>> +			}
>>> +			if (mark_client_expired_locked(cl))
>>> +				continue;
>>> +			cnt++;
>>> +			list_add(&cl->cl_lru, &reaplist);
>>> +		}
>>> +	}
>> Bruce suggested simply returning NFS4ERR_DELAY for all cases.
>> That would simplify this quite a bit for what is a rare edge
>> case.
> 
> If we always do NFS4ERR_DELAY then we have to modify pynfs' OPEN18
> to handle NFS4ERR_DELAY.

Changing the test case seems reasonable to me. A complete test
should handle NFS4ERR_DELAY, right?


> I don't think this code is overly complicated, and most of the time
> in real usage, if there is a share reservation conflict, nfsd should
> be able to resolve it synchronously, avoiding the NFS4ERR_DELAY. It
> would also be nice if we didn't have to modify the pynfs test.

You're adding a magic number (100) here. That is immediately a
sign that we are doing some guessing and handwaving.

In the long run it's better if we don't have one case that would
be used most of the time (the synchronous case) and one that
would hardly ever be used or tested (the async case). Guess which
one will grow fallow?

Having just one case means that one is always tested and working.
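As a toy model of that single-path behavior (this is not nfsd code; `resolve_share_conflict()` and the status names are made up to sketch the decision logic): any share reservation conflict that is only with courtesy clients schedules background expiry and returns delay, so there is exactly one code path to test.

```c
#include <assert.h>

/* Status values modeled loosely on the nfserr_* outcomes. */
enum open_status { CHK_OK, CHK_SHARE_DENIED, CHK_DELAY };

struct conflict_scan {
	int active_conflicts;	/* conflicts with live (non-courtesy) clients */
	int courtesy_conflicts;	/* conflicts with courtesy clients */
	int laundromat_kicked;	/* side effect: background expiry scheduled */
};

static enum open_status resolve_share_conflict(struct conflict_scan *sc)
{
	if (sc->active_conflicts)
		return CHK_SHARE_DENIED;	/* genuine conflict, fail the OPEN */
	if (sc->courtesy_conflicts) {
		sc->laundromat_kicked = 1;	/* mark clients, kick laundromat */
		return CHK_DELAY;		/* client retries the OPEN */
	}
	return CHK_OK;
}
```

The retrying client re-drives the OPEN after the delay, by which point the laundromat has expired the conflicting courtesy clients.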


>>> +out:
>>> +	spin_unlock(&nn->client_lock);
>>> +	list_for_each_safe(pos, next, &reaplist) {
>>> +		cl = list_entry(pos, struct nfs4_client, cl_lru);
>>> +		list_del_init(&cl->cl_lru);
>>> +		expire_client(cl);
>>> +	}
>> A slightly nicer construct here would be something like this:
>> 
>> 	while ((pos = list_del_first(&reaplist)))
>> 		expire_client(list_entry(pos, struct nfs4_client, cl_lru));
> 
> You meant llist_del_first?

No, that's for wait-free lists. How about list_first_entry_or_null() ...


> The above code follows the style in nfs4_laundromat. Can I keep the
> same to avoid doing retest?

The use of "list_for_each_safe()" is overkill because the loop
deletes the front of the list on each iteration. Technically
correct, but more complicated than you need and there's no
justification for it.
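The pop-front idiom can be demonstrated with a minimal user-space re-implementation of the kernel list primitives (a sketch only: `drain_reaplist()` is a made-up stand-in for the reaplist loop, and expire_client() is elided):

```c
#include <assert.h>
#include <stddef.h>

/* Minimal re-implementation of the kernel's circular doubly linked
 * list, just enough to show the pop-front loop. */
struct list_head { struct list_head *next, *prev; };

static void INIT_LIST_HEAD(struct list_head *h) { h->next = h->prev = h; }

static void list_add(struct list_head *n, struct list_head *h)
{
	n->next = h->next;
	n->prev = h;
	h->next->prev = n;
	h->next = n;
}

static void list_del_init(struct list_head *n)
{
	n->prev->next = n->next;
	n->next->prev = n->prev;
	INIT_LIST_HEAD(n);
}

static int list_empty(const struct list_head *h) { return h->next == h; }

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))
#define list_first_entry_or_null(h, type, member) \
	(list_empty(h) ? NULL : container_of((h)->next, type, member))

struct nfs4_client { int id; struct list_head cl_lru; };

/* Pop the front of the reaplist until it is empty -- no _safe
 * iteration needed, because each pass removes the element it is
 * looking at before touching the list again. */
static int drain_reaplist(struct list_head *reaplist)
{
	struct nfs4_client *cl;
	int expired = 0;

	while ((cl = list_first_entry_or_null(reaplist, struct nfs4_client,
					      cl_lru)) != NULL) {
		list_del_init(&cl->cl_lru);
		/* expire_client(cl) would go here */
		expired++;
	}
	return expired;
}
```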


>>> +	if (async_cnt) {
>>> +		mod_delayed_work(laundry_wq, &nn->laundromat_work, 0);
>>> +		if (!no_retry)
>>> +			cnt = -1;
>>> +	}
>>> +	return cnt;
>>> +}
>>> +
>>> static __be32 nfs4_get_vfs_file(struct svc_rqst *rqstp, struct nfs4_file *fp,
>>> 		struct svc_fh *cur_fh, struct nfs4_ol_stateid *stp,
>>> 		struct nfsd4_open *open)
>>> @@ -4921,6 +5090,7 @@ static __be32 nfs4_get_vfs_file(struct svc_rqst *rqstp, struct nfs4_file *fp,
>>> 	int oflag = nfs4_access_to_omode(open->op_share_access);
>>> 	int access = nfs4_access_to_access(open->op_share_access);
>>> 	unsigned char old_access_bmap, old_deny_bmap;
>>> +	int cnt = 0;
>>> 
>>> 	spin_lock(&fp->fi_lock);
>>> 
>>> @@ -4928,16 +5098,38 @@ static __be32 nfs4_get_vfs_file(struct svc_rqst *rqstp, struct nfs4_file *fp,
>>> 	 * Are we trying to set a deny mode that would conflict with
>>> 	 * current access?
>>> 	 */
>>> +chk_deny:
>>> 	status = nfs4_file_check_deny(fp, open->op_share_deny);
>>> 	if (status != nfs_ok) {
>>> 		spin_unlock(&fp->fi_lock);
>>> +		if (status != nfserr_share_denied)
>>> +			goto out;
>>> +		cnt = nfs4_destroy_clnts_with_sresv_conflict(rqstp, fp,
>>> +				stp, open->op_share_deny, false);
>>> +		if (cnt > 0) {
>>> +			spin_lock(&fp->fi_lock);
>>> +			goto chk_deny;
>> I'm pondering whether a distributed set of clients can
>> cause this loop to never terminate.
> 
> I'm not clear how we can get into an infinite loop, can you elaborate?

I don't see anything that _guarantees_ that eventually
nfs4_destroy_clnts_with_sresv_conflict() will return a
value of 0 or less.


> To get into an infinite loop, there must be a stream of new clients that
> open the file with the conflicting access mode *and* become courtesy
> clients before we finish expiring all existing courtesy clients whose
> access conflicts with this OPEN.
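One way to guarantee termination regardless of client arrival patterns would be to cap the retries. This is a hypothetical sketch, not part of the posted patch: `try_open_with_cap()` is a made-up name, and the resolver stands in for nfs4_destroy_clnts_with_sresv_conflict() (>0 means "resolved some, retry", 0 means "no conflict", -1 means "async, return delay"):

```c
#include <assert.h>

/* Hypothetical guard: bound the number of chk_deny/get_access
 * retries so a stream of newly-arrived courtesy-client conflicts
 * cannot keep the loop spinning forever. */
static int try_open_with_cap(int (*resolve)(void *), void *ctx,
			     int max_retries)
{
	int cnt = 0;

	while (max_retries--) {
		cnt = resolve(ctx);
		if (cnt <= 0)
			return cnt;	/* resolved (0) or async (-1) */
	}
	return -1;	/* cap hit: fall back to nfserr_jukebox */
}

/* Toy resolver: reports a conflict on the first *ctx passes. */
static int fake_resolver(void *ctx)
{
	int *remaining = ctx;

	return (*remaining)-- > 0 ? 1 : 0;
}
```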

>>> +		}
>>> +		if (cnt == -1)
>>> +			status = nfserr_jukebox;
>>> 		goto out;
>>> 	}
>>> 
>>> 	/* set access to the file */
>>> +get_access:
>>> 	status = nfs4_file_get_access(fp, open->op_share_access);
>>> 	if (status != nfs_ok) {
>>> 		spin_unlock(&fp->fi_lock);
>>> +		if (status != nfserr_share_denied)
>>> +			goto out;
>>> +		cnt = nfs4_destroy_clnts_with_sresv_conflict(rqstp, fp,
>>> +				stp, open->op_share_access, true);
>>> +		if (cnt > 0) {
>>> +			spin_lock(&fp->fi_lock);
>>> +			goto get_access;
>> Ditto.
>> 
>> 
>>> +		}
>>> +		if (cnt == -1)
>>> +			status = nfserr_jukebox;
>>> 		goto out;
>>> 	}
>>> 
>>> @@ -5289,6 +5481,22 @@ static void nfsd4_deleg_xgrade_none_ext(struct nfsd4_open *open,
>>> 	 */
>>> }
>>> 
>>> +static bool
>>> +nfs4_destroy_courtesy_client(struct nfs4_client *clp)
>>> +{
>>> +	struct nfsd_net *nn = net_generic(clp->net, nfsd_net_id);
>>> +
>>> +	spin_lock(&nn->client_lock);
>>> +	if (!test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags) ||
>>> +			mark_client_expired_locked(clp)) {
>>> +		spin_unlock(&nn->client_lock);
>>> +		return false;
>>> +	}
>>> +	spin_unlock(&nn->client_lock);
>>> +	expire_client(clp);
>>> +	return true;
>>> +}
>>> +
>> Perhaps nfs4_destroy_courtesy_client() could be merged into
>> nfsd4_fl_expire_lock(), it's only caller.
> 
> ok, will be in v7 patch.
> 
>> 
>> 
>>> __be32
>>> nfsd4_process_open2(struct svc_rqst *rqstp, struct svc_fh *current_fh, struct nfsd4_open *open)
>>> {
>>> @@ -5572,6 +5780,47 @@ static void nfsd4_ssc_expire_umount(struct nfsd_net *nn)
>>> }
>>> #endif
>>> 
>>> +static
>>> +bool nfs4_anylock_conflict(struct nfs4_client *clp)
>>> +{
>>> +	int i;
>>> +	struct nfs4_stateowner *so, *tmp;
>>> +	struct nfs4_lockowner *lo;
>>> +	struct nfs4_ol_stateid *stp;
>>> +	struct nfs4_file *nf;
>>> +	struct inode *ino;
>>> +	struct file_lock_context *ctx;
>>> +	struct file_lock *fl;
>>> +
>>> +	for (i = 0; i < OWNER_HASH_SIZE; i++) {
>>> +		/* scan each lock owner */
>>> +		list_for_each_entry_safe(so, tmp, &clp->cl_ownerstr_hashtbl[i],
>>> +				so_strhash) {
>>> +			if (so->so_is_open_owner)
>>> +				continue;
>> Isn't cl_lock needed to protect the cl_ownerstr_hashtbl lists?
> 
> Yes, thanks Chuck! will be in v7
> 
>> 
>> 
>>> +
>>> +			/* scan lock states of this lock owner */
>>> +			lo = lockowner(so);
>>> +			list_for_each_entry(stp, &lo->lo_owner.so_stateids,
>>> +					st_perstateowner) {
>>> +				nf = stp->st_stid.sc_file;
>>> +				ino = nf->fi_inode;
>>> +				ctx = ino->i_flctx;
>>> +				if (!ctx)
>>> +					continue;
>>> +				/* check each lock belongs to this lock state */
>>> +				list_for_each_entry(fl, &ctx->flc_posix, fl_list) {
>>> +					if (fl->fl_owner != lo)
>>> +						continue;
>>> +					if (!list_empty(&fl->fl_blocked_requests))
>>> +						return true;
>>> +				}
>>> +			}
>>> +		}
>>> +	}
>>> +	return false;
>>> +}
>>> +
>>> static time64_t
>>> nfs4_laundromat(struct nfsd_net *nn)
>>> {
>>> @@ -5587,7 +5836,9 @@ nfs4_laundromat(struct nfsd_net *nn)
>>> 	};
>>> 	struct nfs4_cpntf_state *cps;
>>> 	copy_stateid_t *cps_t;
>>> +	struct nfs4_stid *stid;
>>> 	int i;
>>> +	int id = 0;
>>> 
>>> 	if (clients_still_reclaiming(nn)) {
>>> 		lt.new_timeo = 0;
>>> @@ -5608,8 +5859,33 @@ nfs4_laundromat(struct nfsd_net *nn)
>>> 	spin_lock(&nn->client_lock);
>>> 	list_for_each_safe(pos, next, &nn->client_lru) {
>>> 		clp = list_entry(pos, struct nfs4_client, cl_lru);
>>> +		if (test_bit(NFSD4_DESTROY_COURTESY_CLIENT, &clp->cl_flags)) {
>>> +			clear_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags);
>>> +			goto exp_client;
>>> +		}
>>> +		if (test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags)) {
>>> +			if (ktime_get_boottime_seconds() >= clp->courtesy_client_expiry)
>>> +				goto exp_client;
>>> +			/*
>>> +			 * after umount, v4.0 client is still
>>> +			 * around waiting to be expired
>>> +			 */
>>> +			if (clp->cl_minorversion)
>>> +				continue;
>>> +		}
>>> 		if (!state_expired(&lt, clp->cl_time))
>>> 			break;
>>> +		spin_lock(&clp->cl_lock);
>>> +		stid = idr_get_next(&clp->cl_stateids, &id);
>>> +		spin_unlock(&clp->cl_lock);
>>> +		if (stid && !nfs4_anylock_conflict(clp)) {
>>> +			/* client still has states */
>>> +			clp->courtesy_client_expiry =
>>> +				ktime_get_boottime_seconds() + courtesy_client_expiry;
>>> +			set_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags);
>>> +			continue;
>>> +		}
>>> +exp_client:
>>> 		if (mark_client_expired_locked(clp))
>>> 			continue;
>>> 		list_add(&clp->cl_lru, &reaplist);
>>> @@ -5689,9 +5965,6 @@ nfs4_laundromat(struct nfsd_net *nn)
>>> 	return max_t(time64_t, lt.new_timeo, NFSD_LAUNDROMAT_MINTIMEOUT);
>>> }
>>> 
>>> -static struct workqueue_struct *laundry_wq;
>>> -static void laundromat_main(struct work_struct *);
>>> -
>>> static void
>>> laundromat_main(struct work_struct *laundry)
>>> {
>>> @@ -6496,6 +6769,19 @@ nfs4_transform_lock_offset(struct file_lock *lock)
>>> 		lock->fl_end = OFFSET_MAX;
>>> }
>>> 
>>> +/* return true if lock was expired else return false */
>>> +static bool
>>> +nfsd4_fl_expire_lock(struct file_lock *fl, bool testonly)
>>> +{
>>> +	struct nfs4_lockowner *lo = (struct nfs4_lockowner *)fl->fl_owner;
>>> +	struct nfs4_client *clp = lo->lo_owner.so_client;
>>> +
>>> +	if (testonly)
>>> +		return test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags) ?
>>> +			true : false;
>> Hm. I know test_bit() returns an integer rather than a boolean, but
>> the ternary here is a bit unwieldy. How about just:
>> 
>> 		return !!test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags);
> 
> ok, will be in v7.
> 
>> 
>> 
>>> +	return nfs4_destroy_courtesy_client(clp);
>>> +}
>>> +
>>> static fl_owner_t
>>> nfsd4_fl_get_owner(fl_owner_t owner)
>>> {
>>> @@ -6543,6 +6829,7 @@ static const struct lock_manager_operations nfsd_posix_mng_ops  = {
>>> 	.lm_notify = nfsd4_lm_notify,
>>> 	.lm_get_owner = nfsd4_fl_get_owner,
>>> 	.lm_put_owner = nfsd4_fl_put_owner,
>>> +	.lm_expire_lock = nfsd4_fl_expire_lock,
>>> };
>>> 
>>> static inline void
>>> diff --git a/fs/nfsd/state.h b/fs/nfsd/state.h
>>> index e73bdbb1634a..93e30b101578 100644
>>> --- a/fs/nfsd/state.h
>>> +++ b/fs/nfsd/state.h
>>> @@ -345,6 +345,8 @@ struct nfs4_client {
>>> #define NFSD4_CLIENT_UPCALL_LOCK	(5)	/* upcall serialization */
>>> #define NFSD4_CLIENT_CB_FLAG_MASK	(1 << NFSD4_CLIENT_CB_UPDATE | \
>>> 					 1 << NFSD4_CLIENT_CB_KILL)
>>> +#define NFSD4_COURTESY_CLIENT		(6)	/* be nice to expired client */
>>> +#define NFSD4_DESTROY_COURTESY_CLIENT	(7)
>>> 	unsigned long		cl_flags;
>>> 	const struct cred	*cl_cb_cred;
>>> 	struct rpc_clnt		*cl_cb_client;
>>> @@ -385,6 +387,7 @@ struct nfs4_client {
>>> 	struct list_head	async_copies;	/* list of async copies */
>>> 	spinlock_t		async_lock;	/* lock for async copies */
>>> 	atomic_t		cl_cb_inflight;	/* Outstanding callbacks */
>>> +	int			courtesy_client_expiry;
>>> };
>>> 
>>> /* struct nfs4_client_reset
>>> -- 
>>> 2.9.5
>>> 
>> --
>> Chuck Lever

--
Chuck Lever




^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH RFC v6 2/2] nfsd: Initial implementation of NFSv4 Courteous Server
  2021-12-06 22:30       ` Chuck Lever III
@ 2021-12-06 22:52         ` Bruce Fields
  2021-12-07 22:00           ` Chuck Lever III
  0 siblings, 1 reply; 24+ messages in thread
From: Bruce Fields @ 2021-12-06 22:52 UTC (permalink / raw)
  To: Chuck Lever III; +Cc: Dai Ngo, Linux NFS Mailing List, linux-fsdevel

On Mon, Dec 06, 2021 at 10:30:45PM +0000, Chuck Lever III wrote:
> OK, this is really confusing.
> 
> 5142         set_deny(open->op_share_deny, stp);
> 5143         fp->fi_share_deny |= (open->op_share_deny & NFS4_SHARE_DENY_BOTH);
> 
> Here set_deny() is treating the contents of open->op_share_deny
> as bit positions, but then upon return NFS4_SHARE_DENY_BOTH
> is used directly as a bit mask. Am I reading this correctly?
> 
> But that's not your problem, so I'll let that be.

This is weird but intentional.

For most practical purposes, fi_share_deny is all that matters.

BUT, there is also this language in the spec for OPEN_DOWNGRADE:

	https://datatracker.ietf.org/doc/html/rfc5661#section-18.18.3

	The bits in share_deny SHOULD equal the union of the share_deny
	bits specified for some subset of the OPENs in effect for the
	current open-owner on the current file.

	If the above constraints are not respected, the server SHOULD
	return the error NFS4ERR_INVAL.

If you open a file twice, once with DENY_READ, once with DENY_WRITE,
then that is not *quite* the same as opening it once with DENY_BOTH.  In
the former case, you're allowed to, for example, downgrade to DENY_READ.
In the latter, you're not.

So if we want the server to follow that SHOULD, we need to remember
not only the union of all the DENYs so far, but also the different
DENY modes that different OPENs were done with.

So, we also keep the st_deny_bmap with that information.

The same goes for allow bits (hence there's also an st_access_bmap).

It's arguably a lot of extra busy work just for one SHOULD that has no
justification other than just to be persnickety about client
behavior....
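For illustration, the two encodings can be sketched in a few lines of user-space C. The helper names `record_deny()` and `downgrade_ok()` are made up, and the check is a simplification of the RFC's union-of-subsets rule, but it shows why "DENY_READ plus DENY_WRITE" is distinguishable from "DENY_BOTH":

```c
#include <assert.h>

#define NFS4_SHARE_DENY_READ  0x1
#define NFS4_SHARE_DENY_WRITE 0x2
#define NFS4_SHARE_DENY_BOTH  0x3

/* Record one OPEN's deny mode as a bit *position*, the way nfsd's
 * set_deny() fills st_deny_bmap. */
static void record_deny(unsigned char *bmap, unsigned int deny)
{
	*bmap |= 1 << deny;
}

/* Simplified downgrade check: the target must match the deny mode
 * of some previous OPEN recorded in the bmap. */
static int downgrade_ok(unsigned char bmap, unsigned int target)
{
	return (bmap & (1 << target)) != 0;
}
```

Two OPENs with DENY_READ and DENY_WRITE set two distinct bits, so a downgrade to DENY_READ is explicable; one OPEN with DENY_BOTH sets only the DENY_BOTH bit, so the same downgrade is not.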

--b.


* Re: [PATCH RFC v6 1/2] fs/lock: add new callback, lm_expire_lock, to lock_manager_operations
  2021-12-06 22:05           ` Trond Myklebust
@ 2021-12-06 23:07             ` dai.ngo
  0 siblings, 0 replies; 24+ messages in thread
From: dai.ngo @ 2021-12-06 23:07 UTC (permalink / raw)
  To: Trond Myklebust, bfields
  Cc: jlayton, linux-nfs, linux-fsdevel, viro, chuck.lever

On 12/6/21 2:05 PM, Trond Myklebust wrote:
> On Mon, 2021-12-06 at 12:36 -0800, dai.ngo@oracle.com wrote:
>> On 12/6/21 12:05 PM, bfields@fieldses.org wrote:
>>> On Mon, Dec 06, 2021 at 07:52:29PM +0000, Trond Myklebust wrote:
>>>> On Mon, 2021-12-06 at 18:39 +0000, Chuck Lever III wrote:
>>>>>> On Dec 6, 2021, at 12:59 PM, Dai Ngo <dai.ngo@oracle.com>
>>>>>> wrote:
>>>>>>
>>>>>> Add new callback, lm_expire_lock, to lock_manager_operations to
>>>>>> allow the lock manager to take appropriate action to resolve the
>>>>>> lock conflict if possible. The callback takes 2 arguments, the
>>>>>> file_lock of the blocker and a testonly flag:
>>>>>>
>>>>>> testonly = 1  check and return true if lock conflict can be
>>>>>>                resolved, else return false.
>>>>>> testonly = 0  resolve the conflict if possible, return true if
>>>>>>                conflict was resolved, else return false.
>>>>>>
>>>>>> Lock manager, such as NFSv4 courteous server, uses this callback
>>>>>> to resolve conflict by destroying the lock owner, or the NFSv4
>>>>>> courtesy client (client that has expired but is allowed to
>>>>>> maintain its states) that owns the lock.
>>>>>>
>>>>>> Signed-off-by: Dai Ngo <dai.ngo@oracle.com>
>>>>> Al, Jeff, as co-maintainers of record for fs/locks.c, can you
>>>>> give
>>>>> an Ack or Reviewed-by? I'd like to take this patch through the
>>>>> nfsd
>>>>> tree for v5.17. Thanks for your time!
>>>>>
>>>>>
>>>>>> ---
>>>>>> fs/locks.c         | 28 +++++++++++++++++++++++++---
>>>>>> include/linux/fs.h |  1 +
>>>>>> 2 files changed, 26 insertions(+), 3 deletions(-)
>>>>>>
>>>>>> diff --git a/fs/locks.c b/fs/locks.c
>>>>>> index 3d6fb4ae847b..0fef0a6322c7 100644
>>>>>> --- a/fs/locks.c
>>>>>> +++ b/fs/locks.c
>>>>>> @@ -954,6 +954,7 @@ posix_test_lock(struct file *filp, struct file_lock *fl)
>>>>>>           struct file_lock *cfl;
>>>>>>           struct file_lock_context *ctx;
>>>>>>           struct inode *inode = locks_inode(filp);
>>>>>> +       bool ret;
>>>>>>
>>>>>>           ctx = smp_load_acquire(&inode->i_flctx);
>>>>>>           if (!ctx || list_empty_careful(&ctx->flc_posix)) {
>>>>>> @@ -962,11 +963,20 @@ posix_test_lock(struct file *filp, struct file_lock *fl)
>>>>>>           }
>>>>>>
>>>>>>           spin_lock(&ctx->flc_lock);
>>>>>> +retry:
>>>>>>           list_for_each_entry(cfl, &ctx->flc_posix, fl_list) {
>>>>>> -               if (posix_locks_conflict(fl, cfl)) {
>>>>>> -                       locks_copy_conflock(fl, cfl);
>>>>>> -                       goto out;
>>>>>> +               if (!posix_locks_conflict(fl, cfl))
>>>>>> +                       continue;
>>>>>> +               if (cfl->fl_lmops && cfl->fl_lmops->lm_expire_lock &&
>>>>>> +                               cfl->fl_lmops->lm_expire_lock(cfl, 1)) {
>>>>>> +                       spin_unlock(&ctx->flc_lock);
>>>>>> +                       ret = cfl->fl_lmops->lm_expire_lock(cfl, 0);
>>>>>> +                       spin_lock(&ctx->flc_lock);
>>>>>> +                       if (ret)
>>>>>> +                               goto retry;
>>>>>>                   }
>>>>>> +               locks_copy_conflock(fl, cfl);
>>>> How do you know 'cfl' still points to a valid object after you've
>>>> dropped the spin lock that was protecting the list?
>>> Ugh, good point, I should have noticed that when I suggested this
>>> approach....
>>>
>>> Maybe the first call could instead return return some reference-
>>> counted
>>> object that a second call could wait on.
>>>
>>> Better, maybe it could add itself to a list of such things and then
>>> we
>>> could do this in one pass.
>> I think we can adjust this logic a little bit to cover the race condition:
>>
>> The 1st call to lm_expire_lock returns the client needs to be
>> expired.
>>
>> Before we make the 2nd call, we save the 'lm_expire_lock' into a
>> local variable, then drop the spinlock, and use the local variable
>> to make the 2nd call so that we do not reference 'cfl'. The argument
>> of the 2nd call is the opaque return value from the 1st call.
>>
>> nfsd4_fl_expire_lock also needs some adjustment to support the above.
>>
> It's not just the fact that you're using 'cfl' in the actual call to
> lm_expire_lock(), but you're also using it after retaking the spinlock.

I plan to do this:

        checked_cfl = NULL;
        spin_lock(&ctx->flc_lock);
retry:
        list_for_each_entry(cfl, &ctx->flc_posix, fl_list) {
                if (!posix_locks_conflict(fl, cfl))
                        continue;
                if (checked_cfl != cfl && cfl->fl_lmops &&
                                cfl->fl_lmops->lm_expire_lock) {
                        void *res_data;

                        res_data = cfl->fl_lmops->lm_expire_lock(cfl, 1);
                        if (res_data) {
                                func = cfl->fl_lmops->lm_expire_lock;
                                spin_unlock(&ctx->flc_lock);
                                func(res_data, 0);
                                spin_lock(&ctx->flc_lock);
                                checked_cfl = cfl;
                                goto retry;
                        }
                }
                locks_copy_conflock(fl, cfl);
                goto out;
        }
        fl->fl_type = F_UNLCK;

Do you still see problems with this?

-Dai




* Re: [PATCH RFC v6 2/2] nfsd: Initial implementation of NFSv4 Courteous Server
  2021-12-06 22:52         ` Bruce Fields
@ 2021-12-07 22:00           ` Chuck Lever III
  2021-12-07 22:35             ` Bruce Fields
  0 siblings, 1 reply; 24+ messages in thread
From: Chuck Lever III @ 2021-12-07 22:00 UTC (permalink / raw)
  To: Bruce Fields; +Cc: Dai Ngo, Linux NFS Mailing List, linux-fsdevel



> On Dec 6, 2021, at 5:52 PM, Bruce Fields <bfields@fieldses.org> wrote:
> 
> On Mon, Dec 06, 2021 at 10:30:45PM +0000, Chuck Lever III wrote:
>> OK, this is really confusing.
>> 
>> 5142         set_deny(open->op_share_deny, stp);
>> 5143         fp->fi_share_deny |= (open->op_share_deny & NFS4_SHARE_DENY_BOTH);
>> 
>> Here set_deny() is treating the contents of open->op_share_deny
>> as bit positions, but then upon return NFS4_SHARE_DENY_BOTH
>> is used directly as a bit mask. Am I reading this correctly?
>> 
>> But that's not your problem, so I'll let that be.
> 
> This is weird but intentional.
> 
> For most practical purposes, fi_share_deny is all that matters.
> 
> BUT, there is also this language in the spec for OPEN_DOWNGRADE:
> 
> 	https://datatracker.ietf.org/doc/html/rfc5661#section-18.18.3
> 
> 	The bits in share_deny SHOULD equal the union of the share_deny
> 	bits specified for some subset of the OPENs in effect for the
> 	current open-owner on the current file.
> 
> 	If the above constraints are not respected, the server SHOULD
> 	return the error NFS4ERR_INVAL.
> 
> If you open a file twice, once with DENY_READ, once with DENY_WRITE,
> then that is not *quite* the same as opening it once with DENY_BOTH.  In
> the former case, you're allowed to, for example, downgrade to DENY_READ.
> In the latter, you're not.
> 
> So if we want the server to follow that SHOULD, we need to remember
> not only the union of all the DENYs so far, but also the different
> DENY modes that different OPENs were done with.
> 
> So, we also keep the st_deny_bmap with that information.
> 
> The same goes for allow bits (hence there's also an st_access_bmap).
> 
> It's arguably a lot of extra busy work just for one SHOULD that has no
> justification other than just to be persnickety about client
> behavior....

Thanks for clarifying! If you are feeling industrious, it would be nice
for this to be documented somewhere in the source code....


--
Chuck Lever





* Re: [PATCH RFC v6 2/2] nfsd: Initial implementation of NFSv4 Courteous Server
  2021-12-07 22:00           ` Chuck Lever III
@ 2021-12-07 22:35             ` Bruce Fields
  2021-12-08 15:17               ` Chuck Lever III
  0 siblings, 1 reply; 24+ messages in thread
From: Bruce Fields @ 2021-12-07 22:35 UTC (permalink / raw)
  To: Chuck Lever III; +Cc: Dai Ngo, Linux NFS Mailing List, linux-fsdevel

On Tue, Dec 07, 2021 at 10:00:22PM +0000, Chuck Lever III wrote:
> Thanks for clarifying! If you are feeling industrious, it would be nice
> for this to be documented somewhere in the source code....

I did that, then noticed I was duplicating a comment I'd already written
elsewhere, so, how about the following?

--b.

From 2e3f00c5f29f033fd5db05ef713d0d9fa27d6db1 Mon Sep 17 00:00:00 2001
From: "J. Bruce Fields" <bfields@redhat.com>
Date: Tue, 7 Dec 2021 17:32:21 -0500
Subject: [PATCH] nfsd: improve stateid access bitmask documentation

The use of the bitmaps is confusing.  Add a cross-reference to make it
easier to find the existing comment.  Add an updated reference with URL
to make it quicker to look up.  And a bit more editorializing about the
value of this.

Signed-off-by: J. Bruce Fields <bfields@redhat.com>
---
 fs/nfsd/nfs4state.c | 14 ++++++++++----
 fs/nfsd/state.h     |  4 ++++
 2 files changed, 14 insertions(+), 4 deletions(-)

diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index 0031e006f4dc..f07fe7562d4d 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -360,11 +360,13 @@ static const struct nfsd4_callback_ops nfsd4_cb_notify_lock_ops = {
  * st_{access,deny}_bmap field of the stateid, in order to track not
  * only what share bits are currently in force, but also what
  * combinations of share bits previous opens have used.  This allows us
- * to enforce the recommendation of rfc 3530 14.2.19 that the server
- * return an error if the client attempt to downgrade to a combination
- * of share bits not explicable by closing some of its previous opens.
+ * to enforce the recommendation in
+ * https://datatracker.ietf.org/doc/html/rfc7530#section-16.19.4 that
+ * the server return an error if the client attempt to downgrade to a
+ * combination of share bits not explicable by closing some of its
+ * previous opens.
  *
- * XXX: This enforcement is actually incomplete, since we don't keep
+ * This enforcement is arguably incomplete, since we don't keep
  * track of access/deny bit combinations; so, e.g., we allow:
  *
  *	OPEN allow read, deny write
@@ -372,6 +374,10 @@ static const struct nfsd4_callback_ops nfsd4_cb_notify_lock_ops = {
  *	DOWNGRADE allow read, deny none
  *
  * which we should reject.
+ *
+ * But you could also argue that what our current code is already
+ * overkill, since it only exists to return NFS4ERR_INVAL on incorrect
+ * client behavior.
  */
 static unsigned int
 bmap_to_share_mode(unsigned long bmap)
diff --git a/fs/nfsd/state.h b/fs/nfsd/state.h
index e73bdbb1634a..6eb3c7157214 100644
--- a/fs/nfsd/state.h
+++ b/fs/nfsd/state.h
@@ -568,6 +568,10 @@ struct nfs4_ol_stateid {
 	struct list_head		st_locks;
 	struct nfs4_stateowner		*st_stateowner;
 	struct nfs4_clnt_odstate	*st_clnt_odstate;
+/*
+ * These bitmasks use 3 separate bits for READ, WRITE, and BOTH; see the
+ * comment above bmap_to_share_mode() for explanation:
+ */
 	unsigned char			st_access_bmap;
 	unsigned char			st_deny_bmap;
 	struct nfs4_ol_stateid		*st_openstp;
-- 
2.33.1



* Re: [PATCH RFC v6 2/2] nfsd: Initial implementation of NFSv4 Courteous Server
  2021-12-07 22:35             ` Bruce Fields
@ 2021-12-08 15:17               ` Chuck Lever III
  0 siblings, 0 replies; 24+ messages in thread
From: Chuck Lever III @ 2021-12-08 15:17 UTC (permalink / raw)
  To: Bruce Fields; +Cc: Dai Ngo, Linux NFS Mailing List, linux-fsdevel



> On Dec 7, 2021, at 5:35 PM, Bruce Fields <bfields@fieldses.org> wrote:
> 
> On Tue, Dec 07, 2021 at 10:00:22PM +0000, Chuck Lever III wrote:
>> Thanks for clarifying! If you are feeling industrious, it would be nice
>> for this to be documented somewhere in the source code....
> 
> I did that, then noticed I was duplicating a comment I'd already written
> elsewhere, so, how about the following?
> 
> --b.
> 
> From 2e3f00c5f29f033fd5db05ef713d0d9fa27d6db1 Mon Sep 17 00:00:00 2001
> From: "J. Bruce Fields" <bfields@redhat.com>
> Date: Tue, 7 Dec 2021 17:32:21 -0500
> Subject: [PATCH] nfsd: improve stateid access bitmask documentation
> 
> The use of the bitmaps is confusing.  Add a cross-reference to make it
> easier to find the existing comment.  Add an updated reference with URL
> to make it quicker to look up.  And a bit more editorializing about the
> value of this.
> 
> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
> ---
> fs/nfsd/nfs4state.c | 14 ++++++++++----
> fs/nfsd/state.h     |  4 ++++
> 2 files changed, 14 insertions(+), 4 deletions(-)
> 
> diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
> index 0031e006f4dc..f07fe7562d4d 100644
> --- a/fs/nfsd/nfs4state.c
> +++ b/fs/nfsd/nfs4state.c
> @@ -360,11 +360,13 @@ static const struct nfsd4_callback_ops nfsd4_cb_notify_lock_ops = {
>  * st_{access,deny}_bmap field of the stateid, in order to track not
>  * only what share bits are currently in force, but also what
>  * combinations of share bits previous opens have used.  This allows us
> - * to enforce the recommendation of rfc 3530 14.2.19 that the server
> - * return an error if the client attempt to downgrade to a combination
> - * of share bits not explicable by closing some of its previous opens.
> + * to enforce the recommendation in
> + * https://datatracker.ietf.org/doc/html/rfc7530#section-16.19.4 that
> + * the server return an error if the client attempt to downgrade to a
> + * combination of share bits not explicable by closing some of its
> + * previous opens.
>  *
> - * XXX: This enforcement is actually incomplete, since we don't keep
> + * This enforcement is arguably incomplete, since we don't keep
>  * track of access/deny bit combinations; so, e.g., we allow:
>  *
>  *	OPEN allow read, deny write
> @@ -372,6 +374,10 @@ static const struct nfsd4_callback_ops nfsd4_cb_notify_lock_ops = {
>  *	DOWNGRADE allow read, deny none
>  *
>  * which we should reject.
> + *
> + * But you could also argue that what our current code is already
> + * overkill, since it only exists to return NFS4ERR_INVAL on incorrect
> + * client behavior.

Thanks for the patch! This sentence seems to have too many words.


>  */
> static unsigned int
> bmap_to_share_mode(unsigned long bmap)
> diff --git a/fs/nfsd/state.h b/fs/nfsd/state.h
> index e73bdbb1634a..6eb3c7157214 100644
> --- a/fs/nfsd/state.h
> +++ b/fs/nfsd/state.h
> @@ -568,6 +568,10 @@ struct nfs4_ol_stateid {
> 	struct list_head		st_locks;
> 	struct nfs4_stateowner		*st_stateowner;
> 	struct nfs4_clnt_odstate	*st_clnt_odstate;
> +/*
> + * These bitmasks use 3 separate bits for READ, WRITE, and BOTH; see the
> + * comment above bmap_to_share_mode() for explanation:
> + */
> 	unsigned char			st_access_bmap;
> 	unsigned char			st_deny_bmap;
> 	struct nfs4_ol_stateid		*st_openstp;
> -- 
> 2.33.1
> 
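To make the bitmap scheme described in the quoted comment concrete, here is a toy Python model (helper names are mine, not the kernel's; the access values follow RFC 7530, where BOTH == READ | WRITE):

```python
# Share-access values as they appear on the wire (RFC 7530);
# note that BOTH == READ | WRITE.
READ, WRITE, BOTH = 1, 2, 3

def set_access(bmap, access):
    """Record that some open used exactly this access value (1, 2 or 3)."""
    return bmap | (1 << access)

def bmap_to_share_mode(bmap):
    """Union of every access value previously used -- the bits in force."""
    mode = 0
    for access in (READ, WRITE, BOTH):
        if bmap & (1 << access):
            mode |= access
    return mode

def downgrade_ok(bmap, new_access):
    """A downgrade is only 'explicable' if some previous open used
    exactly the requested access value."""
    return bool(bmap & (1 << new_access))
```

With opens recorded for READ and then WRITE, the share mode in force is BOTH, yet a downgrade to BOTH is rejected, because no single previous open used that exact combination.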

--
Chuck Lever




^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH RFC v6 2/2] nfsd: Initial implementation of NFSv4 Courteous Server
  2021-12-06 19:55   ` Chuck Lever III
  2021-12-06 21:44     ` dai.ngo
@ 2021-12-08 15:54     ` dai.ngo
  2021-12-08 15:58       ` Chuck Lever III
  2021-12-08 16:16       ` Trond Myklebust
  1 sibling, 2 replies; 24+ messages in thread
From: dai.ngo @ 2021-12-08 15:54 UTC (permalink / raw)
  To: Chuck Lever III; +Cc: Bruce Fields, Linux NFS Mailing List, linux-fsdevel

On 12/6/21 11:55 AM, Chuck Lever III wrote:

>
>> +
>> +/*
>> + * Function to check if the nfserr_share_denied error for 'fp' resulted
>> + * from conflict with courtesy clients then release their state to resolve
>> + * the conflict.
>> + *
>> + * Function returns:
>> + *	 0 -  no conflict with courtesy clients
>> + *	>0 -  conflict with courtesy clients resolved, try access/deny check again
>> + *	-1 -  conflict with courtesy clients being resolved in background
>> + *            return nfserr_jukebox to NFS client
>> + */
>> +static int
>> +nfs4_destroy_clnts_with_sresv_conflict(struct svc_rqst *rqstp,
>> +			struct nfs4_file *fp, struct nfs4_ol_stateid *stp,
>> +			u32 access, bool share_access)
>> +{
>> +	int cnt = 0;
>> +	int async_cnt = 0;
>> +	bool no_retry = false;
>> +	struct nfs4_client *cl;
>> +	struct list_head *pos, *next, reaplist;
>> +	struct nfsd_net *nn = net_generic(SVC_NET(rqstp), nfsd_net_id);
>> +
>> +	INIT_LIST_HEAD(&reaplist);
>> +	spin_lock(&nn->client_lock);
>> +	list_for_each_safe(pos, next, &nn->client_lru) {
>> +		cl = list_entry(pos, struct nfs4_client, cl_lru);
>> +		/*
>> +		 * check all nfs4_ol_stateid of this client
>> +		 * for conflicts with 'access'mode.
>> +		 */
>> +		if (nfs4_check_deny_bmap(cl, fp, stp, access, share_access)) {
>> +			if (!test_bit(NFSD4_COURTESY_CLIENT, &cl->cl_flags)) {
>> +				/* conflict with non-courtesy client */
>> +				no_retry = true;
>> +				cnt = 0;
>> +				goto out;
>> +			}
>> +			/*
>> +			 * if too many to resolve synchronously
>> +			 * then do the rest in background
>> +			 */
>> +			if (cnt > 100) {
>> +				set_bit(NFSD4_DESTROY_COURTESY_CLIENT, &cl->cl_flags);
>> +				async_cnt++;
>> +				continue;
>> +			}
>> +			if (mark_client_expired_locked(cl))
>> +				continue;
>> +			cnt++;
>> +			list_add(&cl->cl_lru, &reaplist);
>> +		}
>> +	}
> Bruce suggested simply returning NFS4ERR_DELAY for all cases.
> That would simplify this quite a bit for what is a rare edge
> case.

If we always do this asynchronously by returning NFS4ERR_DELAY
for all cases then the following pynfs tests need to be modified
to handle the error:

RENEW3   st_renew.testExpired                                     : FAILURE
LKU10    st_locku.testTimedoutUnlock                              : FAILURE
CLOSE9   st_close.testTimedoutClose2                              : FAILURE

and any new tests that open files have to be prepared to handle
NFS4ERR_DELAY due to the lack of destroy_clientid in 4.0.

Do we still want to take this approach?
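For illustration, a test that may hit this could wrap its OPEN in a small retry loop. The sketch below is generic Python with a stand-in exception class and callable, not the real pynfs API:

```python
import time

NFS4ERR_DELAY = 10008  # status value from RFC 7530

class NFS4Error(Exception):
    """Stand-in for whatever the test framework raises on an NFS error."""
    def __init__(self, status):
        self.status = status

def open_with_retry(do_open, retries=5, delay=0.1):
    """Retry an OPEN-like callable while the server answers
    NFS4ERR_DELAY, as a courteous server may do while it expires
    conflicting courtesy state in the background."""
    for _ in range(retries):
        try:
            return do_open()
        except NFS4Error as e:
            if e.status != NFS4ERR_DELAY:
                raise
            time.sleep(delay)
    return do_open()  # final attempt; let any error propagate
```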

-Dai



^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH RFC v6 2/2] nfsd: Initial implementation of NFSv4 Courteous Server
  2021-12-08 15:54     ` dai.ngo
@ 2021-12-08 15:58       ` Chuck Lever III
  2021-12-08 16:16       ` Trond Myklebust
  1 sibling, 0 replies; 24+ messages in thread
From: Chuck Lever III @ 2021-12-08 15:58 UTC (permalink / raw)
  To: Dai Ngo, Calum Mackay; +Cc: Bruce Fields, Linux NFS Mailing List, linux-fsdevel



> On Dec 8, 2021, at 10:54 AM, Dai Ngo <dai.ngo@oracle.com> wrote:
> 
> On 12/6/21 11:55 AM, Chuck Lever III wrote:
> 
>> 
>>> +
>>> +/*
>>> + * Function to check if the nfserr_share_denied error for 'fp' resulted
>>> + * from conflict with courtesy clients then release their state to resolve
>>> + * the conflict.
>>> + *
>>> + * Function returns:
>>> + *	 0 -  no conflict with courtesy clients
>>> + *	>0 -  conflict with courtesy clients resolved, try access/deny check again
>>> + *	-1 -  conflict with courtesy clients being resolved in background
>>> + *            return nfserr_jukebox to NFS client
>>> + */
>>> +static int
>>> +nfs4_destroy_clnts_with_sresv_conflict(struct svc_rqst *rqstp,
>>> +			struct nfs4_file *fp, struct nfs4_ol_stateid *stp,
>>> +			u32 access, bool share_access)
>>> +{
>>> +	int cnt = 0;
>>> +	int async_cnt = 0;
>>> +	bool no_retry = false;
>>> +	struct nfs4_client *cl;
>>> +	struct list_head *pos, *next, reaplist;
>>> +	struct nfsd_net *nn = net_generic(SVC_NET(rqstp), nfsd_net_id);
>>> +
>>> +	INIT_LIST_HEAD(&reaplist);
>>> +	spin_lock(&nn->client_lock);
>>> +	list_for_each_safe(pos, next, &nn->client_lru) {
>>> +		cl = list_entry(pos, struct nfs4_client, cl_lru);
>>> +		/*
>>> +		 * check all nfs4_ol_stateid of this client
>>> +		 * for conflicts with 'access'mode.
>>> +		 */
>>> +		if (nfs4_check_deny_bmap(cl, fp, stp, access, share_access)) {
>>> +			if (!test_bit(NFSD4_COURTESY_CLIENT, &cl->cl_flags)) {
>>> +				/* conflict with non-courtesy client */
>>> +				no_retry = true;
>>> +				cnt = 0;
>>> +				goto out;
>>> +			}
>>> +			/*
>>> +			 * if too many to resolve synchronously
>>> +			 * then do the rest in background
>>> +			 */
>>> +			if (cnt > 100) {
>>> +				set_bit(NFSD4_DESTROY_COURTESY_CLIENT, &cl->cl_flags);
>>> +				async_cnt++;
>>> +				continue;
>>> +			}
>>> +			if (mark_client_expired_locked(cl))
>>> +				continue;
>>> +			cnt++;
>>> +			list_add(&cl->cl_lru, &reaplist);
>>> +		}
>>> +	}
>> Bruce suggested simply returning NFS4ERR_DELAY for all cases.
>> That would simplify this quite a bit for what is a rare edge
>> case.
> 
> If we always do this asynchronously by returning NFS4ERR_DELAY
> for all cases then the following pynfs tests need to be modified
> to handle the error:
> 
> RENEW3   st_renew.testExpired                                     : FAILURE
> LKU10    st_locku.testTimedoutUnlock                              : FAILURE
> CLOSE9   st_close.testTimedoutClose2                              : FAILURE
> 
> and any new tests that opens file have to be prepared to handle
> NFS4ERR_DELAY due to the lack of destroy_clientid in 4.0.
> 
> Do we still want to take this approach?

I'm still interested, but Bruce should chime in.

Maybe Calum could have a look under the covers of pynfs and see how difficult the change might be.


--
Chuck Lever




^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH RFC v6 2/2] nfsd: Initial implementation of NFSv4 Courteous Server
  2021-12-08 15:54     ` dai.ngo
  2021-12-08 15:58       ` Chuck Lever III
@ 2021-12-08 16:16       ` Trond Myklebust
  2021-12-08 16:25         ` dai.ngo
  1 sibling, 1 reply; 24+ messages in thread
From: Trond Myklebust @ 2021-12-08 16:16 UTC (permalink / raw)
  To: dai.ngo, chuck.lever; +Cc: bfields, linux-nfs, linux-fsdevel

On Wed, 2021-12-08 at 07:54 -0800, dai.ngo@oracle.com wrote:
> On 12/6/21 11:55 AM, Chuck Lever III wrote:
> 
> > 
> > > +
> > > +/*
> > > + * Function to check if the nfserr_share_denied error for 'fp'
> > > resulted
> > > + * from conflict with courtesy clients then release their state to
> > > resolve
> > > + * the conflict.
> > > + *
> > > + * Function returns:
> > > + *      0 -  no conflict with courtesy clients
> > > + *     >0 -  conflict with courtesy clients resolved, try
> > > access/deny check again
> > > + *     -1 -  conflict with courtesy clients being resolved in
> > > background
> > > + *            return nfserr_jukebox to NFS client
> > > + */
> > > +static int
> > > +nfs4_destroy_clnts_with_sresv_conflict(struct svc_rqst *rqstp,
> > > +                       struct nfs4_file *fp, struct
> > > nfs4_ol_stateid *stp,
> > > +                       u32 access, bool share_access)
> > > +{
> > > +       int cnt = 0;
> > > +       int async_cnt = 0;
> > > +       bool no_retry = false;
> > > +       struct nfs4_client *cl;
> > > +       struct list_head *pos, *next, reaplist;
> > > +       struct nfsd_net *nn = net_generic(SVC_NET(rqstp),
> > > nfsd_net_id);
> > > +
> > > +       INIT_LIST_HEAD(&reaplist);
> > > +       spin_lock(&nn->client_lock);
> > > +       list_for_each_safe(pos, next, &nn->client_lru) {
> > > +               cl = list_entry(pos, struct nfs4_client, cl_lru);
> > > +               /*
> > > +                * check all nfs4_ol_stateid of this client
> > > +                * for conflicts with 'access'mode.
> > > +                */
> > > +               if (nfs4_check_deny_bmap(cl, fp, stp, access,
> > > share_access)) {
> > > +                       if (!test_bit(NFSD4_COURTESY_CLIENT, &cl-
> > > >cl_flags)) {
> > > +                               /* conflict with non-courtesy
> > > client */
> > > +                               no_retry = true;
> > > +                               cnt = 0;
> > > +                               goto out;
> > > +                       }
> > > +                       /*
> > > +                        * if too many to resolve synchronously
> > > +                        * then do the rest in background
> > > +                        */
> > > +                       if (cnt > 100) {
> > > +                               set_bit(NFSD4_DESTROY_COURTESY_CLIE
> > > NT, &cl->cl_flags);
> > > +                               async_cnt++;
> > > +                               continue;
> > > +                       }
> > > +                       if (mark_client_expired_locked(cl))
> > > +                               continue;
> > > +                       cnt++;
> > > +                       list_add(&cl->cl_lru, &reaplist);
> > > +               }
> > > +       }
> > Bruce suggested simply returning NFS4ERR_DELAY for all cases.
> > That would simplify this quite a bit for what is a rare edge
> > case.
> 
> If we always do this asynchronously by returning NFS4ERR_DELAY
> for all cases then the following pynfs tests need to be modified
> to handle the error:
> 
> RENEW3   st_renew.testExpired                                     :
> FAILURE
> LKU10    st_locku.testTimedoutUnlock                              :
> FAILURE
> CLOSE9   st_close.testTimedoutClose2                              :
> FAILURE
> 
> and any new tests that opens file have to be prepared to handle
> NFS4ERR_DELAY due to the lack of destroy_clientid in 4.0.
> 
> Do we still want to take this approach?

NFS4ERR_DELAY is a valid error for both CLOSE and LOCKU (see RFC7530
section 13.2 https://datatracker.ietf.org/doc/html/rfc7530#section-13.2
) so if pynfs complains, then it needs fixing regardless.

RENEW, on the other hand, cannot return NFS4ERR_DELAY, but why would it
need to? Either the lease is still valid, or else someone is already
trying to tear it down due to an expiration event. I don't see why
courtesy locks need to add any further complexity to that test.

-- 
Trond Myklebust
Linux NFS client maintainer, Hammerspace
trond.myklebust@hammerspace.com



^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH RFC v6 2/2] nfsd: Initial implementation of NFSv4 Courteous Server
  2021-12-08 16:16       ` Trond Myklebust
@ 2021-12-08 16:25         ` dai.ngo
  2021-12-08 16:39           ` bfields
  0 siblings, 1 reply; 24+ messages in thread
From: dai.ngo @ 2021-12-08 16:25 UTC (permalink / raw)
  To: Trond Myklebust, chuck.lever; +Cc: bfields, linux-nfs, linux-fsdevel


On 12/8/21 8:16 AM, Trond Myklebust wrote:
> On Wed, 2021-12-08 at 07:54 -0800, dai.ngo@oracle.com wrote:
>> On 12/6/21 11:55 AM, Chuck Lever III wrote:
>>
>>>> +
>>>> +/*
>>>> + * Function to check if the nfserr_share_denied error for 'fp'
>>>> resulted
>>>> + * from conflict with courtesy clients then release their state to
>>>> resolve
>>>> + * the conflict.
>>>> + *
>>>> + * Function returns:
>>>> + *      0 -  no conflict with courtesy clients
>>>> + *     >0 -  conflict with courtesy clients resolved, try
>>>> access/deny check again
>>>> + *     -1 -  conflict with courtesy clients being resolved in
>>>> background
>>>> + *            return nfserr_jukebox to NFS client
>>>> + */
>>>> +static int
>>>> +nfs4_destroy_clnts_with_sresv_conflict(struct svc_rqst *rqstp,
>>>> +                       struct nfs4_file *fp, struct
>>>> nfs4_ol_stateid *stp,
>>>> +                       u32 access, bool share_access)
>>>> +{
>>>> +       int cnt = 0;
>>>> +       int async_cnt = 0;
>>>> +       bool no_retry = false;
>>>> +       struct nfs4_client *cl;
>>>> +       struct list_head *pos, *next, reaplist;
>>>> +       struct nfsd_net *nn = net_generic(SVC_NET(rqstp),
>>>> nfsd_net_id);
>>>> +
>>>> +       INIT_LIST_HEAD(&reaplist);
>>>> +       spin_lock(&nn->client_lock);
>>>> +       list_for_each_safe(pos, next, &nn->client_lru) {
>>>> +               cl = list_entry(pos, struct nfs4_client, cl_lru);
>>>> +               /*
>>>> +                * check all nfs4_ol_stateid of this client
>>>> +                * for conflicts with 'access'mode.
>>>> +                */
>>>> +               if (nfs4_check_deny_bmap(cl, fp, stp, access,
>>>> share_access)) {
>>>> +                       if (!test_bit(NFSD4_COURTESY_CLIENT, &cl-
>>>>> cl_flags)) {
>>>> +                               /* conflict with non-courtesy
>>>> client */
>>>> +                               no_retry = true;
>>>> +                               cnt = 0;
>>>> +                               goto out;
>>>> +                       }
>>>> +                       /*
>>>> +                        * if too many to resolve synchronously
>>>> +                        * then do the rest in background
>>>> +                        */
>>>> +                       if (cnt > 100) {
>>>> +                               set_bit(NFSD4_DESTROY_COURTESY_CLIE
>>>> NT, &cl->cl_flags);
>>>> +                               async_cnt++;
>>>> +                               continue;
>>>> +                       }
>>>> +                       if (mark_client_expired_locked(cl))
>>>> +                               continue;
>>>> +                       cnt++;
>>>> +                       list_add(&cl->cl_lru, &reaplist);
>>>> +               }
>>>> +       }
>>> Bruce suggested simply returning NFS4ERR_DELAY for all cases.
>>> That would simplify this quite a bit for what is a rare edge
>>> case.
>> If we always do this asynchronously by returning NFS4ERR_DELAY
>> for all cases then the following pynfs tests need to be modified
>> to handle the error:
>>
>> RENEW3   st_renew.testExpired                                     :
>> FAILURE
>> LKU10    st_locku.testTimedoutUnlock                              :
>> FAILURE
>> CLOSE9   st_close.testTimedoutClose2                              :
>> FAILURE
>>
>> and any new tests that opens file have to be prepared to handle
>> NFS4ERR_DELAY due to the lack of destroy_clientid in 4.0.
>>
>> Do we still want to take this approach?
> NFS4ERR_DELAY is a valid error for both CLOSE and LOCKU (see RFC7530
> section 13.2 https://datatracker.ietf.org/doc/html/rfc7530#section-13.2
> ) so if pynfs complains, then it needs fixing regardless.
>
> RENEW, on the other hand, cannot return NFS4ERR_DELAY, but why would it
> need to? Either the lease is still valid, or else someone is already
> trying to tear it down due to an expiration event. I don't see why
> courtesy locks need to add any further complexity to that test.

RENEW fails in the 2nd open:

     c.create_confirm(t.word(), access=OPEN4_SHARE_ACCESS_BOTH,
                      deny=OPEN4_SHARE_DENY_BOTH)     <<======   DENY_BOTH
     sleeptime = c.getLeaseTime() * 2
     env.sleep(sleeptime)
     c2 = env.c2
     c2.init_connection()
     c2.open_confirm(t.word(), access=OPEN4_SHARE_ACCESS_READ,    <<=== needs to handle NFS4ERR_DELAY
                     deny=OPEN4_SHARE_DENY_NONE)

CLOSE and LOCKU also fail in the OPEN, similar to the RENEW test.
Any new pynfs 4.0 test that does open might get NFS4ERR_DELAY.
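The failure mode can be sketched as a toy model of the server's share/deny check (constants and return strings are illustrative; the real check walks the deny bitmaps of every stateid):

```python
# Client 1 opens with DENY_BOTH and goes quiet; its state survives as
# courtesy state.  Client 2's OPEN for READ then conflicts with the
# deny mode, and a courteous server answers NFS4ERR_DELAY while it
# purges the courtesy client, rather than SHARE_DENIED or success.

ACCESS_READ, ACCESS_WRITE, ACCESS_BOTH = 1, 2, 3
DENY_NONE, DENY_BOTH = 0, 3
OK, ERR_SHARE_DENIED, ERR_DELAY = "NFS4_OK", "SHARE_DENIED", "DELAY"

def check_open(existing_opens, access, courtesy_clients):
    """existing_opens: (client, access, deny) tuples for opens held."""
    for client, _held_access, deny in existing_opens:
        if deny & access:              # requested access is denied
            if client in courtesy_clients:
                # courteous server: expire the courtesy client in the
                # background and ask the new client to retry
                return ERR_DELAY
            return ERR_SHARE_DENIED
    return OK
```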

-Dai


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH RFC v6 2/2] nfsd: Initial implementation of NFSv4 Courteous Server
  2021-12-08 16:25         ` dai.ngo
@ 2021-12-08 16:39           ` bfields
  2021-12-08 17:29             ` dai.ngo
  0 siblings, 1 reply; 24+ messages in thread
From: bfields @ 2021-12-08 16:39 UTC (permalink / raw)
  To: dai.ngo; +Cc: Trond Myklebust, chuck.lever, linux-nfs, linux-fsdevel

On Wed, Dec 08, 2021 at 08:25:28AM -0800, dai.ngo@oracle.com wrote:
> 
> On 12/8/21 8:16 AM, Trond Myklebust wrote:
> >On Wed, 2021-12-08 at 07:54 -0800, dai.ngo@oracle.com wrote:
> >>On 12/6/21 11:55 AM, Chuck Lever III wrote:
> >>
> >>>>+
> >>>>+/*
> >>>>+ * Function to check if the nfserr_share_denied error for 'fp'
> >>>>resulted
> >>>>+ * from conflict with courtesy clients then release their state to
> >>>>resolve
> >>>>+ * the conflict.
> >>>>+ *
> >>>>+ * Function returns:
> >>>>+ *      0 -  no conflict with courtesy clients
> >>>>+ *     >0 -  conflict with courtesy clients resolved, try
> >>>>access/deny check again
> >>>>+ *     -1 -  conflict with courtesy clients being resolved in
> >>>>background
> >>>>+ *            return nfserr_jukebox to NFS client
> >>>>+ */
> >>>>+static int
> >>>>+nfs4_destroy_clnts_with_sresv_conflict(struct svc_rqst *rqstp,
> >>>>+                       struct nfs4_file *fp, struct
> >>>>nfs4_ol_stateid *stp,
> >>>>+                       u32 access, bool share_access)
> >>>>+{
> >>>>+       int cnt = 0;
> >>>>+       int async_cnt = 0;
> >>>>+       bool no_retry = false;
> >>>>+       struct nfs4_client *cl;
> >>>>+       struct list_head *pos, *next, reaplist;
> >>>>+       struct nfsd_net *nn = net_generic(SVC_NET(rqstp),
> >>>>nfsd_net_id);
> >>>>+
> >>>>+       INIT_LIST_HEAD(&reaplist);
> >>>>+       spin_lock(&nn->client_lock);
> >>>>+       list_for_each_safe(pos, next, &nn->client_lru) {
> >>>>+               cl = list_entry(pos, struct nfs4_client, cl_lru);
> >>>>+               /*
> >>>>+                * check all nfs4_ol_stateid of this client
> >>>>+                * for conflicts with 'access'mode.
> >>>>+                */
> >>>>+               if (nfs4_check_deny_bmap(cl, fp, stp, access,
> >>>>share_access)) {
> >>>>+                       if (!test_bit(NFSD4_COURTESY_CLIENT, &cl-
> >>>>>cl_flags)) {
> >>>>+                               /* conflict with non-courtesy
> >>>>client */
> >>>>+                               no_retry = true;
> >>>>+                               cnt = 0;
> >>>>+                               goto out;
> >>>>+                       }
> >>>>+                       /*
> >>>>+                        * if too many to resolve synchronously
> >>>>+                        * then do the rest in background
> >>>>+                        */
> >>>>+                       if (cnt > 100) {
> >>>>+                               set_bit(NFSD4_DESTROY_COURTESY_CLIE
> >>>>NT, &cl->cl_flags);
> >>>>+                               async_cnt++;
> >>>>+                               continue;
> >>>>+                       }
> >>>>+                       if (mark_client_expired_locked(cl))
> >>>>+                               continue;
> >>>>+                       cnt++;
> >>>>+                       list_add(&cl->cl_lru, &reaplist);
> >>>>+               }
> >>>>+       }
> >>>Bruce suggested simply returning NFS4ERR_DELAY for all cases.
> >>>That would simplify this quite a bit for what is a rare edge
> >>>case.
> >>If we always do this asynchronously by returning NFS4ERR_DELAY
> >>for all cases then the following pynfs tests need to be modified
> >>to handle the error:
> >>
> >>RENEW3   st_renew.testExpired                                     :
> >>FAILURE
> >>LKU10    st_locku.testTimedoutUnlock                              :
> >>FAILURE
> >>CLOSE9   st_close.testTimedoutClose2                              :
> >>FAILURE
> >>
> >>and any new tests that opens file have to be prepared to handle
> >>NFS4ERR_DELAY due to the lack of destroy_clientid in 4.0.
> >>
> >>Do we still want to take this approach?
> >NFS4ERR_DELAY is a valid error for both CLOSE and LOCKU (see RFC7530
> >section 13.2 https://datatracker.ietf.org/doc/html/rfc7530#section-13.2
> >) so if pynfs complains, then it needs fixing regardless.
> >
> >RENEW, on the other hand, cannot return NFS4ERR_DELAY, but why would it
> >need to? Either the lease is still valid, or else someone is already
> >trying to tear it down due to an expiration event. I don't see why
> >courtesy locks need to add any further complexity to that test.
> 
> RENEW fails in the 2nd open:
> 
>     c.create_confirm(t.word(), access=OPEN4_SHARE_ACCESS_BOTH,
>                      deny=OPEN4_SHARE_DENY_BOTH)     <<======   DENY_BOTH
>     sleeptime = c.getLeaseTime() * 2
>     env.sleep(sleeptime)
>     c2 = env.c2
>     c2.init_connection()
>     c2.open_confirm(t.word(), access=OPEN4_SHARE_ACCESS_READ,    <<=== needs to handle NFS4ERR_DELAY
>                     deny=OPEN4_SHARE_DENY_NONE)
> 
> CLOSE and LOCKU also fail in the OPEN, similar to the RENEW test.
> Any new pynfs 4.0 test that does open might get NFS4ERR_DELAY.

So it's a RENEW test, not the RENEW operation.

A general-purpose client always has to be prepared for DELAY on OPEN.
But pynfs isn't a general-purpose client, and it assumes that it's the
only one touching the files and directories it creates.

Within pynfs we've got a problem that the tests don't necessarily clean
up after themselves completely, so in theory a test could interfere with
later results.

But each test uses its own files--e.g. in the fragment above note that
the file it's testing gets the name t.word(), which is by design unique
to that test.  So it shouldn't be hitting any conflicts with state held
by previous tests.  Am I missing something?

--b.

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH RFC v6 2/2] nfsd: Initial implementation of NFSv4 Courteous Server
  2021-12-08 16:39           ` bfields
@ 2021-12-08 17:29             ` dai.ngo
  2021-12-08 17:45               ` bfields
  2021-12-10 17:51               ` dai.ngo
  0 siblings, 2 replies; 24+ messages in thread
From: dai.ngo @ 2021-12-08 17:29 UTC (permalink / raw)
  To: bfields; +Cc: Trond Myklebust, chuck.lever, linux-nfs, linux-fsdevel


On 12/8/21 8:39 AM, bfields@fieldses.org wrote:
> On Wed, Dec 08, 2021 at 08:25:28AM -0800, dai.ngo@oracle.com wrote:
>> On 12/8/21 8:16 AM, Trond Myklebust wrote:
>>> On Wed, 2021-12-08 at 07:54 -0800, dai.ngo@oracle.com wrote:
>>>> On 12/6/21 11:55 AM, Chuck Lever III wrote:
>>>>
>>>>>> +
>>>>>> +/*
>>>>>> + * Function to check if the nfserr_share_denied error for 'fp'
>>>>>> resulted
>>>>>> + * from conflict with courtesy clients then release their state to
>>>>>> resolve
>>>>>> + * the conflict.
>>>>>> + *
>>>>>> + * Function returns:
>>>>>> + *      0 -  no conflict with courtesy clients
>>>>>> + *     >0 -  conflict with courtesy clients resolved, try
>>>>>> access/deny check again
>>>>>> + *     -1 -  conflict with courtesy clients being resolved in
>>>>>> background
>>>>>> + *            return nfserr_jukebox to NFS client
>>>>>> + */
>>>>>> +static int
>>>>>> +nfs4_destroy_clnts_with_sresv_conflict(struct svc_rqst *rqstp,
>>>>>> +                       struct nfs4_file *fp, struct
>>>>>> nfs4_ol_stateid *stp,
>>>>>> +                       u32 access, bool share_access)
>>>>>> +{
>>>>>> +       int cnt = 0;
>>>>>> +       int async_cnt = 0;
>>>>>> +       bool no_retry = false;
>>>>>> +       struct nfs4_client *cl;
>>>>>> +       struct list_head *pos, *next, reaplist;
>>>>>> +       struct nfsd_net *nn = net_generic(SVC_NET(rqstp),
>>>>>> nfsd_net_id);
>>>>>> +
>>>>>> +       INIT_LIST_HEAD(&reaplist);
>>>>>> +       spin_lock(&nn->client_lock);
>>>>>> +       list_for_each_safe(pos, next, &nn->client_lru) {
>>>>>> +               cl = list_entry(pos, struct nfs4_client, cl_lru);
>>>>>> +               /*
>>>>>> +                * check all nfs4_ol_stateid of this client
>>>>>> +                * for conflicts with 'access'mode.
>>>>>> +                */
>>>>>> +               if (nfs4_check_deny_bmap(cl, fp, stp, access,
>>>>>> share_access)) {
>>>>>> +                       if (!test_bit(NFSD4_COURTESY_CLIENT, &cl-
>>>>>>> cl_flags)) {
>>>>>> +                               /* conflict with non-courtesy
>>>>>> client */
>>>>>> +                               no_retry = true;
>>>>>> +                               cnt = 0;
>>>>>> +                               goto out;
>>>>>> +                       }
>>>>>> +                       /*
>>>>>> +                        * if too many to resolve synchronously
>>>>>> +                        * then do the rest in background
>>>>>> +                        */
>>>>>> +                       if (cnt > 100) {
>>>>>> +                               set_bit(NFSD4_DESTROY_COURTESY_CLIE
>>>>>> NT, &cl->cl_flags);
>>>>>> +                               async_cnt++;
>>>>>> +                               continue;
>>>>>> +                       }
>>>>>> +                       if (mark_client_expired_locked(cl))
>>>>>> +                               continue;
>>>>>> +                       cnt++;
>>>>>> +                       list_add(&cl->cl_lru, &reaplist);
>>>>>> +               }
>>>>>> +       }
>>>>> Bruce suggested simply returning NFS4ERR_DELAY for all cases.
>>>>> That would simplify this quite a bit for what is a rare edge
>>>>> case.
>>>> If we always do this asynchronously by returning NFS4ERR_DELAY
>>>> for all cases then the following pynfs tests need to be modified
>>>> to handle the error:
>>>>
>>>> RENEW3   st_renew.testExpired                                     :
>>>> FAILURE
>>>> LKU10    st_locku.testTimedoutUnlock                              :
>>>> FAILURE
>>>> CLOSE9   st_close.testTimedoutClose2                              :
>>>> FAILURE
>>>>
>>>> and any new tests that opens file have to be prepared to handle
>>>> NFS4ERR_DELAY due to the lack of destroy_clientid in 4.0.
>>>>
>>>> Do we still want to take this approach?
>>> NFS4ERR_DELAY is a valid error for both CLOSE and LOCKU (see RFC7530
>>> section 13.2 https://datatracker.ietf.org/doc/html/rfc7530#section-13.2
>>> ) so if pynfs complains, then it needs fixing regardless.
>>>
>>> RENEW, on the other hand, cannot return NFS4ERR_DELAY, but why would it
>>> need to? Either the lease is still valid, or else someone is already
>>> trying to tear it down due to an expiration event. I don't see why
>>> courtesy locks need to add any further complexity to that test.
>> RENEW fails in the 2nd open:
>>
>>      c.create_confirm(t.word(), access=OPEN4_SHARE_ACCESS_BOTH,
>>                       deny=OPEN4_SHARE_DENY_BOTH)     <<======   DENY_BOTH
>>      sleeptime = c.getLeaseTime() * 2
>>      env.sleep(sleeptime)
>>      c2 = env.c2
>>      c2.init_connection()
>>      c2.open_confirm(t.word(), access=OPEN4_SHARE_ACCESS_READ,    <<=== needs to handle NFS4ERR_DELAY
>>                      deny=OPEN4_SHARE_DENY_NONE)
>>
>> CLOSE and LOCKU also fail in the OPEN, similar to the RENEW test.
>> Any new pynfs 4.0 test that does open might get NFS4ERR_DELAY.
> So it's a RENEW test, not the RENEW operation.
>
> A general-purpose client always has to be prepared for DELAY on OPEN.
> But pynfs isn't a general-purpose client, and it assumes that it's the
> only one touching the files and directories it creates.
>
> Within pynfs we've got a problem that the tests don't necessarily clean
> up after themselves completely, so in theory a test could interfere with
> later results.
>
> But each test uses its own files--e.g. in the fragment above note that
> the file it's testing gets the name t.word(), which is by design unique
> to that test.  So it shouldn't be hitting any conflicts with state held
> by previous tests.  Am I missing something?

Both calls, c.create_confirm and c2.open_confirm, use the same file name
t.word().

However, it's strange that if I run RENEW3 by itself then it passes.
From the network trace, env.c2/c2.init_connection generates a SETCLIENTID
which might get rid of all the state of the previous client, since
the client identities are the same.
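That guess (a SETCLIENTID bearing the same client identity displaces the old record along with its state) can be sketched as follows; it is a simplification, since a real server defers discarding the old state until SETCLIENTID_CONFIRM:

```python
def setclientid(clients, identity, verifier):
    """clients: dict mapping client identity string -> per-client state."""
    old = clients.pop(identity, None)   # same identity: the previous
                                        # record (and any courtesy state
                                        # hanging off it) goes away
    clients[identity] = {"verifier": verifier, "opens": []}
    return old
```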

I will investigate why it fails when running 'all' with NFS4ERR_DELAY.

Do you know if there is an option to specify a list of tests to run,
instead of 'all'?

-Dai


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH RFC v6 2/2] nfsd: Initial implementation of NFSv4 Courteous Server
  2021-12-08 17:29             ` dai.ngo
@ 2021-12-08 17:45               ` bfields
  2021-12-10 17:51               ` dai.ngo
  1 sibling, 0 replies; 24+ messages in thread
From: bfields @ 2021-12-08 17:45 UTC (permalink / raw)
  To: dai.ngo; +Cc: Trond Myklebust, chuck.lever, linux-nfs, linux-fsdevel

On Wed, Dec 08, 2021 at 09:29:31AM -0800, dai.ngo@oracle.com wrote:
> Do you know if there is an option to specify a list of tests to run,
> instead of 'all'?

Yes, you can list the tests.  See the --help, --showflags, and
--showcodes options to testserver.py.

--b.

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH RFC v6 2/2] nfsd: Initial implementation of NFSv4 Courteous Server
  2021-12-08 17:29             ` dai.ngo
  2021-12-08 17:45               ` bfields
@ 2021-12-10 17:51               ` dai.ngo
  1 sibling, 0 replies; 24+ messages in thread
From: dai.ngo @ 2021-12-10 17:51 UTC (permalink / raw)
  To: bfields; +Cc: Trond Myklebust, chuck.lever, linux-nfs, linux-fsdevel


On 12/8/21 9:29 AM, dai.ngo@oracle.com wrote:
>
> On 12/8/21 8:39 AM, bfields@fieldses.org wrote:
>> On Wed, Dec 08, 2021 at 08:25:28AM -0800, dai.ngo@oracle.com wrote:
>>> On 12/8/21 8:16 AM, Trond Myklebust wrote:
>>>> On Wed, 2021-12-08 at 07:54 -0800, dai.ngo@oracle.com wrote:
>>>>> On 12/6/21 11:55 AM, Chuck Lever III wrote:
>>>>>
>>>>>>> +
>>>>>>> +/*
>>>>>>> + * Function to check if the nfserr_share_denied error for 'fp' resulted
>>>>>>> + * from conflict with courtesy clients then release their state to resolve
>>>>>>> + * the conflict.
>>>>>>> + *
>>>>>>> + * Function returns:
>>>>>>> + *      0 -  no conflict with courtesy clients
>>>>>>> + *     >0 -  conflict with courtesy clients resolved, try access/deny check again
>>>>>>> + *     -1 -  conflict with courtesy clients being resolved in background,
>>>>>>> + *           return nfserr_jukebox to NFS client
>>>>>>> + */
>>>>>>> +static int
>>>>>>> +nfs4_destroy_clnts_with_sresv_conflict(struct svc_rqst *rqstp,
>>>>>>> +                       struct nfs4_file *fp, struct nfs4_ol_stateid *stp,
>>>>>>> +                       u32 access, bool share_access)
>>>>>>> +{
>>>>>>> +       int cnt = 0;
>>>>>>> +       int async_cnt = 0;
>>>>>>> +       bool no_retry = false;
>>>>>>> +       struct nfs4_client *cl;
>>>>>>> +       struct list_head *pos, *next, reaplist;
>>>>>>> +       struct nfsd_net *nn = net_generic(SVC_NET(rqstp), nfsd_net_id);
>>>>>>> +
>>>>>>> +       INIT_LIST_HEAD(&reaplist);
>>>>>>> +       spin_lock(&nn->client_lock);
>>>>>>> +       list_for_each_safe(pos, next, &nn->client_lru) {
>>>>>>> +               cl = list_entry(pos, struct nfs4_client, cl_lru);
>>>>>>> +               /*
>>>>>>> +                * check all nfs4_ol_stateid of this client
>>>>>>> +                * for conflicts with 'access' mode.
>>>>>>> +                */
>>>>>>> +               if (nfs4_check_deny_bmap(cl, fp, stp, access, share_access)) {
>>>>>>> +                       if (!test_bit(NFSD4_COURTESY_CLIENT, &cl->cl_flags)) {
>>>>>>> +                               /* conflict with non-courtesy client */
>>>>>>> +                               no_retry = true;
>>>>>>> +                               cnt = 0;
>>>>>>> +                               goto out;
>>>>>>> +                       }
>>>>>>> +                       /*
>>>>>>> +                        * if too many to resolve synchronously
>>>>>>> +                        * then do the rest in background
>>>>>>> +                        */
>>>>>>> +                       if (cnt > 100) {
>>>>>>> +                               set_bit(NFSD4_DESTROY_COURTESY_CLIENT, &cl->cl_flags);
>>>>>>> +                               async_cnt++;
>>>>>>> +                               continue;
>>>>>>> +                       }
>>>>>>> +                       if (mark_client_expired_locked(cl))
>>>>>>> +                               continue;
>>>>>>> +                       cnt++;
>>>>>>> +                       list_add(&cl->cl_lru, &reaplist);
>>>>>>> +               }
>>>>>>> +       }
>>>>>> Bruce suggested simply returning NFS4ERR_DELAY for all cases.
>>>>>> That would simplify this quite a bit for what is a rare edge
>>>>>> case.
>>>>> If we always do this asynchronously by returning NFS4ERR_DELAY
>>>>> for all cases then the following pynfs tests need to be modified
>>>>> to handle the error:
>>>>>
>>>>> RENEW3  st_renew.testExpired        : FAILURE
>>>>> LKU10   st_locku.testTimedoutUnlock : FAILURE
>>>>> CLOSE9  st_close.testTimedoutClose2 : FAILURE
>>>>>
>>>>> and any new tests that opens file have to be prepared to handle
>>>>> NFS4ERR_DELAY due to the lack of destroy_clientid in 4.0.
>>>>>
>>>>> Do we still want to take this approach?
>>>> NFS4ERR_DELAY is a valid error for both CLOSE and LOCKU (see RFC 7530
>>>> section 13.2, https://datatracker.ietf.org/doc/html/rfc7530#section-13.2)
>>>> so if pynfs complains, then it needs fixing regardless.
>>>>
>>>> RENEW, on the other hand, cannot return NFS4ERR_DELAY, but why 
>>>> would it
>>>> need to? Either the lease is still valid, or else someone is already
>>>> trying to tear it down due to an expiration event. I don't see why
>>>> courtesy locks need to add any further complexity to that test.
>>> RENEW fails in the 2nd open:
>>>
>>>      c.create_confirm(t.word(), access=OPEN4_SHARE_ACCESS_BOTH,
>>>                       deny=OPEN4_SHARE_DENY_BOTH)  <<====== DENY_BOTH
>>>      sleeptime = c.getLeaseTime() * 2
>>>      env.sleep(sleeptime)
>>>      c2 = env.c2
>>>      c2.init_connection()
>>>      c2.open_confirm(t.word(), access=OPEN4_SHARE_ACCESS_READ,  <<=== needs to handle NFS4ERR_DELAY
>>>                      deny=OPEN4_SHARE_DENY_NONE)
>>>
>>> CLOSE and LOCKU also fail in the OPEN, similar to the RENEW test.
>>> Any new pynfs 4.0 test that does open might get NFS4ERR_DELAY.
>> So it's a RENEW test, not the RENEW operation.
>>
>> A general-purpose client always has to be prepared for DELAY on OPEN.
>> But pynfs isn't a general-purpose client, and it assumes that it's the
>> only one touching the files and directories it creates.
>>
>> Within pynfs we've got a problem that the tests don't necessarily clean
>> up after themselves completely, so in theory a test could interfere with
>> later results.
>>
>> But each test uses its own files--e.g. in the fragment above note that
>> the file it's testing gets the name t.word(), which is by design unique
>> to that test.  So it shouldn't be hitting any conflicts with state held
>> by previous tests.  Am I missing something?
>
> Both calls, c.create_confirm and c2.open_confirm, use the same file name
> t.word().
>
> However, it's strange that if I run RENEW3 by itself then it passes.

I found the bug in the courteous server: in nfs4_laundromat, idr_get_next
was called, to check whether a client has states, without 'id' first being
reset to 0. This causes 4.0 courtesy clients to be expired unexpectedly,
so there is no share reservation conflict and no expected NFS4ERR_DELAY
returned to the client.
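The failure mode above can be illustrated with a small user-space toy of a
cursor-based "get next" scan like idr_get_next(). The names here
(toy_get_next, the state tables) are illustrative only, not the real
kernel idr API; the point is just that reusing a stale cursor across
clients makes a client with state at a low index look stateless:

```python
def toy_get_next(table, cursor):
    """Return (value, index) of the first occupied slot at or after
    index 'cursor', or (None, cursor) if none remain.  Toy analogue of
    a cursor-based idr_get_next()-style scan."""
    for i in range(cursor, len(table)):
        if table[i] is not None:
            return table[i], i
    return None, cursor

# Each client's state table has one entry at index 0.
client1 = ["state-A"]
client2 = ["state-B"]

# First client: start the scan at 0 and find its state.
val, cursor = toy_get_next(client1, 0)
assert val == "state-A"

# Bug: carrying the advanced cursor into the next client's scan makes
# that client appear stateless, so the laundromat expires it prematurely.
val, _ = toy_get_next(client2, cursor + 1)
assert val is None          # state at index 0 was skipped

# Fix: reset the cursor to 0 before scanning each client.
val, _ = toy_get_next(client2, 0)
assert val == "state-B"
```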

I will submit the v7 patch, and will separately submit a patch to enhance
pynfs to handle NFS4ERR_DELAY.
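For the pynfs side, a retry wrapper is one plausible shape for that
enhancement. This is only a hedged sketch, not the actual pynfs patch:
open_with_delay_retry and do_open are hypothetical names, and only the
NFS4ERR_DELAY value (10008, per the NFSv4 RFCs) is taken from the spec.

```python
import time

NFS4ERR_DELAY = 10008  # NFSv4 "retry later" status code

def open_with_delay_retry(do_open, retries=5, pause=1.0):
    """Call do_open() and retry while it returns NFS4ERR_DELAY, which a
    courteous server may answer while it expires a conflicting courtesy
    client in the background.  Returns the last status seen."""
    status = NFS4ERR_DELAY
    for _ in range(retries):
        status = do_open()
        if status != NFS4ERR_DELAY:
            return status
        time.sleep(pause)
    return status
```

A test would then wrap its second OPEN in this helper instead of asserting
an immediate NFS4_OK.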

-Dai



^ permalink raw reply	[flat|nested] 24+ messages in thread

end of thread, other threads:[~2021-12-10 17:52 UTC | newest]

Thread overview: 24+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-12-06 17:59 [PATCH RFC v6 0/2] nfsd: Initial implementation of NFSv4 Courteous Server Dai Ngo
2021-12-06 17:59 ` [PATCH RFC v6 1/2] fs/lock: add new callback, lm_expire_lock, to lock_manager_operations Dai Ngo
2021-12-06 18:39   ` Chuck Lever III
2021-12-06 19:52     ` Trond Myklebust
2021-12-06 20:05       ` bfields
2021-12-06 20:36         ` dai.ngo
2021-12-06 22:05           ` Trond Myklebust
2021-12-06 23:07             ` dai.ngo
2021-12-06 17:59 ` [PATCH RFC v6 2/2] nfsd: Initial implementation of NFSv4 Courteous Server Dai Ngo
2021-12-06 19:55   ` Chuck Lever III
2021-12-06 21:44     ` dai.ngo
2021-12-06 22:30       ` Chuck Lever III
2021-12-06 22:52         ` Bruce Fields
2021-12-07 22:00           ` Chuck Lever III
2021-12-07 22:35             ` Bruce Fields
2021-12-08 15:17               ` Chuck Lever III
2021-12-08 15:54     ` dai.ngo
2021-12-08 15:58       ` Chuck Lever III
2021-12-08 16:16       ` Trond Myklebust
2021-12-08 16:25         ` dai.ngo
2021-12-08 16:39           ` bfields
2021-12-08 17:29             ` dai.ngo
2021-12-08 17:45               ` bfields
2021-12-10 17:51               ` dai.ngo
