linux-nfs.vger.kernel.org archive mirror
* [PATCH RFC v3 0/2] nfsd: Initial implementation of NFSv4 Courteous Server
@ 2021-09-16 18:22 Dai Ngo
  2021-09-16 18:22 ` [PATCH v3 1/3] fs/lock: add new callback, lm_expire_lock, to lock_manager_operations Dai Ngo
                   ` (3 more replies)
  0 siblings, 4 replies; 19+ messages in thread
From: Dai Ngo @ 2021-09-16 18:22 UTC (permalink / raw)
  To: bfields; +Cc: chuck.lever, linux-nfs, linux-fsdevel

Hi Bruce,

This series of patches implements the NFSv4 Courteous Server.

A server which does not immediately expunge the state on lease expiration
is known as a Courteous Server.  A Courteous Server continues to recognize
previously generated state tokens as valid until conflict arises between
the expired state and the requests from another client, or the server
reboots.

The v2 patch includes the following:

. add a new callback, lm_expire_lock, to lock_manager_operations to
  allow the lock manager to take appropriate action on a conflicting lock.

. handle conflicts of NFSv4 locks with NFSv3/NLM and local locks.

. expire the courtesy client after 24 hours if the client has not reconnected.

. do not allow an expired client to become a courtesy client if there are
  waiters for the client's locks.

. modify client_info_show to show courtesy client status and seconds since
  the last renew.

. fix a problem with the NFSv4.1 server where it keeps returning
  SEQ4_STATUS_CB_PATH_DOWN in successful SEQUENCE replies after the
  courtesy client reconnects, causing the client to keep sending
  BCTS requests to the server.

The v3 patch includes the following:

. modify posix_test_lock to check and resolve conflicting locks
  when handling NLM TEST and NFSv4 LOCKT requests.

. separate out fix for back channel stuck in SEQ4_STATUS_CB_PATH_DOWN.

. merge with 5.15-rc1



^ permalink raw reply	[flat|nested] 19+ messages in thread

* [PATCH v3 1/3] fs/lock: add new callback, lm_expire_lock, to lock_manager_operations
  2021-09-16 18:22 [PATCH RFC v3 0/2] nfsd: Initial implementation of NFSv4 Courteous Server Dai Ngo
@ 2021-09-16 18:22 ` Dai Ngo
  2021-09-16 18:22 ` [PATCH v3 2/3] nfsd: Initial implementation of NFSv4 Courteous Server Dai Ngo
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 19+ messages in thread
From: Dai Ngo @ 2021-09-16 18:22 UTC (permalink / raw)
  To: bfields; +Cc: chuck.lever, linux-nfs, linux-fsdevel

Add a new callback, lm_expire_lock, to lock_manager_operations to allow
the lock manager to take appropriate action to resolve the lock conflict
if possible. The callback takes two arguments, the file_lock of the
blocker and a testonly flag:

testonly = 1  check and return true if the lock conflict can be resolved,
              else return false.
testonly = 0  resolve the conflict if possible; return true if the
              conflict was resolved, else return false.

A lock manager, such as the NFSv4 courteous server, uses this callback to
resolve the conflict by destroying the lock owner, or the NFSv4 courtesy
client (a client that has expired but is allowed to maintain its state)
that owns the lock.
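
For illustration only, here is a minimal sketch of what a lock manager's
implementation of this callback might look like (hypothetical names, not
part of this patch; nfsd's actual implementation is added in patch 2/3):

	/*
	 * Hypothetical lm_expire_lock implementation.
	 * testonly == true:  may be called with flc_lock held, so only
	 *                    check whether the lock owner can be expired.
	 * testonly == false: called after flc_lock has been dropped, so
	 *                    blocking work to expire the owner is allowed.
	 */
	static bool example_expire_lock(struct file_lock *fl, bool testonly)
	{
		struct example_owner *owner = fl->fl_owner;

		if (testonly)
			return example_owner_is_expirable(owner);
		return example_expire_owner(owner);	/* may sleep */
	}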

Signed-off-by: Dai Ngo <dai.ngo@oracle.com>
---
 fs/locks.c         | 26 +++++++++++++++++++++++---
 include/linux/fs.h |  1 +
 2 files changed, 24 insertions(+), 3 deletions(-)

diff --git a/fs/locks.c b/fs/locks.c
index 3d6fb4ae847b..893b348a2d57 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -954,6 +954,7 @@ posix_test_lock(struct file *filp, struct file_lock *fl)
 	struct file_lock *cfl;
 	struct file_lock_context *ctx;
 	struct inode *inode = locks_inode(filp);
+	bool ret;
 
 	ctx = smp_load_acquire(&inode->i_flctx);
 	if (!ctx || list_empty_careful(&ctx->flc_posix)) {
@@ -962,11 +963,20 @@ posix_test_lock(struct file *filp, struct file_lock *fl)
 	}
 
 	spin_lock(&ctx->flc_lock);
+retry:
 	list_for_each_entry(cfl, &ctx->flc_posix, fl_list) {
-		if (posix_locks_conflict(fl, cfl)) {
-			locks_copy_conflock(fl, cfl);
-			goto out;
+		if (!posix_locks_conflict(fl, cfl))
+			continue;
+		if (cfl->fl_lmops && cfl->fl_lmops->lm_expire_lock &&
+				cfl->fl_lmops->lm_expire_lock(cfl, 1)) {
+			spin_unlock(&ctx->flc_lock);
+			ret = cfl->fl_lmops->lm_expire_lock(cfl, 0);
+			spin_lock(&ctx->flc_lock);
+			if (ret)
+				goto retry;
 		}
+		locks_copy_conflock(fl, cfl);
+		goto out;
 	}
 	fl->fl_type = F_UNLCK;
 out:
@@ -1140,6 +1150,7 @@ static int posix_lock_inode(struct inode *inode, struct file_lock *request,
 	int error;
 	bool added = false;
 	LIST_HEAD(dispose);
+	bool ret;
 
 	ctx = locks_get_lock_context(inode, request->fl_type);
 	if (!ctx)
@@ -1166,9 +1177,18 @@ static int posix_lock_inode(struct inode *inode, struct file_lock *request,
 	 * blocker's list of waiters and the global blocked_hash.
 	 */
 	if (request->fl_type != F_UNLCK) {
+retry:
 		list_for_each_entry(fl, &ctx->flc_posix, fl_list) {
 			if (!posix_locks_conflict(request, fl))
 				continue;
+			if (fl->fl_lmops && fl->fl_lmops->lm_expire_lock &&
+					fl->fl_lmops->lm_expire_lock(fl, 1)) {
+				spin_unlock(&ctx->flc_lock);
+				ret = fl->fl_lmops->lm_expire_lock(fl, 0);
+				spin_lock(&ctx->flc_lock);
+				if (ret)
+					goto retry;
+			}
 			if (conflock)
 				locks_copy_conflock(conflock, fl);
 			error = -EAGAIN;
diff --git a/include/linux/fs.h b/include/linux/fs.h
index e7a633353fd2..1a76b6451398 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -1071,6 +1071,7 @@ struct lock_manager_operations {
 	int (*lm_change)(struct file_lock *, int, struct list_head *);
 	void (*lm_setup)(struct file_lock *, void **);
 	bool (*lm_breaker_owns_lease)(struct file_lock *);
+	bool (*lm_expire_lock)(struct file_lock *fl, bool testonly);
 };
 
 struct lock_manager {
-- 
2.9.5


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH v3 2/3] nfsd: Initial implementation of NFSv4 Courteous Server
  2021-09-16 18:22 [PATCH RFC v3 0/2] nfsd: Initial implementation of NFSv4 Courteous Server Dai Ngo
  2021-09-16 18:22 ` [PATCH v3 1/3] fs/lock: add new callback, lm_expire_lock, to lock_manager_operations Dai Ngo
@ 2021-09-16 18:22 ` Dai Ngo
  2021-09-22 21:14   ` J. Bruce Fields
  2021-09-23  1:34   ` J. Bruce Fields
  2021-09-16 18:22 ` [PATCH v3 3/3] nfsd: back channel stuck in SEQ4_STATUS_CB_PATH_DOWN Dai Ngo
  2021-09-23  1:47 ` [PATCH RFC v3 0/2] nfsd: Initial implementation of NFSv4 Courteous Server J. Bruce Fields
  3 siblings, 2 replies; 19+ messages in thread
From: Dai Ngo @ 2021-09-16 18:22 UTC (permalink / raw)
  To: bfields; +Cc: chuck.lever, linux-nfs, linux-fsdevel

Currently an NFSv4 client must maintain its lease by using at least one
of its state tokens or, if nothing else, by issuing a RENEW (4.0) or a
singleton SEQUENCE (4.1) at least once during each lease period. If the
client fails to renew the lease, for any reason, the Linux server expunges
the state tokens immediately upon detecting the "failure to renew the
lease" condition and begins returning NFS4ERR_EXPIRED if the client should
reconnect and attempt to use the (now) expired state.

The default lease period for the Linux server is 90 seconds.  The typical
client cuts that in half and will issue a lease-renewing operation every
45 seconds. The 90-second lease period is very short considering the
potential for moderately long-term network partitions.  A network partition
refers to any loss of network connectivity between the NFS client and the
NFS server, regardless of its root cause.  This includes NIC failures, NIC
driver bugs, network misconfigurations and administrative errors, routers
and switches crashing and/or having software updates applied, even down to
cables being physically pulled.  In most cases, these network failures are
transient, although the duration is unknown.

A server which does not immediately expunge the state on lease expiration
is known as a Courteous Server.  A Courteous Server continues to recognize
previously generated state tokens as valid until conflict arises between
the expired state and the requests from another client, or the server
reboots.

The initial implementation of the Courteous Server will do the following:

. when the laundromat thread detects an expired client, and that client
still has established state on the Linux server and there are no waiters
for the client's locks, mark the client as a COURTESY_CLIENT and skip
destroying the client and all its state; otherwise destroy the client as
usual.

. detect a conflict between an OPEN request and a COURTESY_CLIENT: destroy
the expired client and all its state, skip the delegation recall, then
allow the conflicting request to succeed.

. detect a conflict between LOCK/LOCKT, NLM LOCK and TEST, or local lock
requests and a COURTESY_CLIENT: destroy the expired client and all its
state, then allow the conflicting request to succeed.

Signed-off-by: Dai Ngo <dai.ngo@oracle.com>
---
 fs/nfsd/nfs4state.c        | 155 ++++++++++++++++++++++++++++++++++++++++++++-
 fs/nfsd/state.h            |   3 +
 include/linux/sunrpc/svc.h |   1 +
 3 files changed, 156 insertions(+), 3 deletions(-)

diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index 42356416f0a0..54e5317f00f1 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -125,6 +125,11 @@ static void free_session(struct nfsd4_session *);
 static const struct nfsd4_callback_ops nfsd4_cb_recall_ops;
 static const struct nfsd4_callback_ops nfsd4_cb_notify_lock_ops;
 
+static struct workqueue_struct *laundry_wq;
+static void laundromat_main(struct work_struct *);
+
+static int courtesy_client_expiry = (24 * 60 * 60);	/* in secs */
+
 static bool is_session_dead(struct nfsd4_session *ses)
 {
 	return ses->se_flags & NFS4_SESSION_DEAD;
@@ -172,6 +177,7 @@ renew_client_locked(struct nfs4_client *clp)
 
 	list_move_tail(&clp->cl_lru, &nn->client_lru);
 	clp->cl_time = ktime_get_boottime_seconds();
+	clear_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags);
 }
 
 static void put_client_renew_locked(struct nfs4_client *clp)
@@ -2389,6 +2395,10 @@ static int client_info_show(struct seq_file *m, void *v)
 		seq_puts(m, "status: confirmed\n");
 	else
 		seq_puts(m, "status: unconfirmed\n");
+	seq_printf(m, "courtesy client: %s\n",
+		test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags) ? "yes" : "no");
+	seq_printf(m, "last renew: %lld secs\n",
+		ktime_get_boottime_seconds() - clp->cl_time);
 	seq_printf(m, "name: ");
 	seq_quote_mem(m, clp->cl_name.data, clp->cl_name.len);
 	seq_printf(m, "\nminor version: %d\n", clp->cl_minorversion);
@@ -4652,6 +4662,42 @@ static void nfsd_break_one_deleg(struct nfs4_delegation *dp)
 	nfsd4_run_cb(&dp->dl_recall);
 }
 
+/*
+ * If the conflict happens due to a NFSv4 request then check for
+ * courtesy client and set rq_conflict_client so that upper layer
+ * can destroy the conflict client and retry the call.
+ */
+static bool
+nfsd_check_courtesy_client(struct nfs4_delegation *dp)
+{
+	struct svc_rqst *rqst;
+	struct nfs4_client *clp = dp->dl_recall.cb_clp;
+	struct nfsd_net *nn = net_generic(clp->net, nfsd_net_id);
+	bool ret = false;
+
+	if (!i_am_nfsd()) {
+		if (test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags)) {
+			set_bit(NFSD4_DESTROY_COURTESY_CLIENT, &clp->cl_flags);
+			mod_delayed_work(laundry_wq, &nn->laundromat_work, 0);
+			return true;
+		}
+		return false;
+	}
+	rqst = kthread_data(current);
+	if (rqst->rq_prog != NFS_PROGRAM || rqst->rq_vers < 4)
+		return false;
+	rqst->rq_conflict_client = NULL;
+
+	spin_lock(&nn->client_lock);
+	if (test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags) &&
+				!mark_client_expired_locked(clp)) {
+		rqst->rq_conflict_client = clp;
+		ret = true;
+	}
+	spin_unlock(&nn->client_lock);
+	return ret;
+}
+
 /* Called from break_lease() with i_lock held. */
 static bool
 nfsd_break_deleg_cb(struct file_lock *fl)
@@ -4660,6 +4706,8 @@ nfsd_break_deleg_cb(struct file_lock *fl)
 	struct nfs4_delegation *dp = (struct nfs4_delegation *)fl->fl_owner;
 	struct nfs4_file *fp = dp->dl_stid.sc_file;
 
+	if (nfsd_check_courtesy_client(dp))
+		return false;
 	trace_nfsd_cb_recall(&dp->dl_stid);
 
 	/*
@@ -5279,6 +5327,22 @@ static void nfsd4_deleg_xgrade_none_ext(struct nfsd4_open *open,
 	 */
 }
 
+static bool
+nfs4_destroy_courtesy_client(struct nfs4_client *clp)
+{
+	struct nfsd_net *nn = net_generic(clp->net, nfsd_net_id);
+
+	spin_lock(&nn->client_lock);
+	if (!test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags) ||
+			mark_client_expired_locked(clp)) {
+		spin_unlock(&nn->client_lock);
+		return false;
+	}
+	spin_unlock(&nn->client_lock);
+	expire_client(clp);
+	return true;
+}
+
 __be32
 nfsd4_process_open2(struct svc_rqst *rqstp, struct svc_fh *current_fh, struct nfsd4_open *open)
 {
@@ -5328,7 +5392,13 @@ nfsd4_process_open2(struct svc_rqst *rqstp, struct svc_fh *current_fh, struct nf
 			goto out;
 		}
 	} else {
+		rqstp->rq_conflict_client = NULL;
 		status = nfs4_get_vfs_file(rqstp, fp, current_fh, stp, open);
+		if (status == nfserr_jukebox && rqstp->rq_conflict_client) {
+			if (nfs4_destroy_courtesy_client(rqstp->rq_conflict_client))
+				status = nfs4_get_vfs_file(rqstp, fp, current_fh, stp, open);
+		}
+
 		if (status) {
 			stp->st_stid.sc_type = NFS4_CLOSED_STID;
 			release_open_stateid(stp);
@@ -5562,6 +5632,47 @@ static void nfsd4_ssc_expire_umount(struct nfsd_net *nn)
 }
 #endif
 
+static
+bool nfs4_anylock_conflict(struct nfs4_client *clp)
+{
+	int i;
+	struct nfs4_stateowner *so, *tmp;
+	struct nfs4_lockowner *lo;
+	struct nfs4_ol_stateid *stp;
+	struct nfs4_file *nf;
+	struct inode *ino;
+	struct file_lock_context *ctx;
+	struct file_lock *fl;
+
+	for (i = 0; i < OWNER_HASH_SIZE; i++) {
+		/* scan each lock owner */
+		list_for_each_entry_safe(so, tmp, &clp->cl_ownerstr_hashtbl[i],
+				so_strhash) {
+			if (so->so_is_open_owner)
+				continue;
+
+			/* scan lock states of this lock owner */
+			lo = lockowner(so);
+			list_for_each_entry(stp, &lo->lo_owner.so_stateids,
+					st_perstateowner) {
+				nf = stp->st_stid.sc_file;
+				ino = nf->fi_inode;
+				ctx = ino->i_flctx;
+				if (!ctx)
+					continue;
+				/* check each lock belongs to this lock state */
+				list_for_each_entry(fl, &ctx->flc_posix, fl_list) {
+					if (fl->fl_owner != lo)
+						continue;
+					if (!list_empty(&fl->fl_blocked_requests))
+						return true;
+				}
+			}
+		}
+	}
+	return false;
+}
+
 static time64_t
 nfs4_laundromat(struct nfsd_net *nn)
 {
@@ -5577,7 +5688,9 @@ nfs4_laundromat(struct nfsd_net *nn)
 	};
 	struct nfs4_cpntf_state *cps;
 	copy_stateid_t *cps_t;
+	struct nfs4_stid *stid;
 	int i;
+	int id = 0;
 
 	if (clients_still_reclaiming(nn)) {
 		lt.new_timeo = 0;
@@ -5598,8 +5711,33 @@ nfs4_laundromat(struct nfsd_net *nn)
 	spin_lock(&nn->client_lock);
 	list_for_each_safe(pos, next, &nn->client_lru) {
 		clp = list_entry(pos, struct nfs4_client, cl_lru);
+		if (test_bit(NFSD4_DESTROY_COURTESY_CLIENT, &clp->cl_flags)) {
+			clear_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags);
+			goto exp_client;
+		}
+		if (test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags)) {
+			if (ktime_get_boottime_seconds() >= clp->courtesy_client_expiry)
+				goto exp_client;
+			/*
+			 * after umount, v4.0 client is still
+			 * around waiting to be expired
+			 */
+			if (clp->cl_minorversion)
+				continue;
+		}
 		if (!state_expired(&lt, clp->cl_time))
 			break;
+		spin_lock(&clp->cl_lock);
+		stid = idr_get_next(&clp->cl_stateids, &id);
+		spin_unlock(&clp->cl_lock);
+		if (stid && !nfs4_anylock_conflict(clp)) {
+			/* client still has states */
+			clp->courtesy_client_expiry =
+				ktime_get_boottime_seconds() + courtesy_client_expiry;
+			set_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags);
+			continue;
+		}
+exp_client:
 		if (mark_client_expired_locked(clp))
 			continue;
 		list_add(&clp->cl_lru, &reaplist);
@@ -5679,9 +5817,6 @@ nfs4_laundromat(struct nfsd_net *nn)
 	return max_t(time64_t, lt.new_timeo, NFSD_LAUNDROMAT_MINTIMEOUT);
 }
 
-static struct workqueue_struct *laundry_wq;
-static void laundromat_main(struct work_struct *);
-
 static void
 laundromat_main(struct work_struct *laundry)
 {
@@ -6486,6 +6621,19 @@ nfs4_transform_lock_offset(struct file_lock *lock)
 		lock->fl_end = OFFSET_MAX;
 }
 
+/* return true if lock was expired else return false */
+static bool
+nfsd4_fl_expire_lock(struct file_lock *fl, bool testonly)
+{
+	struct nfs4_lockowner *lo = (struct nfs4_lockowner *)fl->fl_owner;
+	struct nfs4_client *clp = lo->lo_owner.so_client;
+
+	if (testonly)
+		return test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags) ?
+			true : false;
+	return nfs4_destroy_courtesy_client(clp);
+}
+
 static fl_owner_t
 nfsd4_fl_get_owner(fl_owner_t owner)
 {
@@ -6533,6 +6681,7 @@ static const struct lock_manager_operations nfsd_posix_mng_ops  = {
 	.lm_notify = nfsd4_lm_notify,
 	.lm_get_owner = nfsd4_fl_get_owner,
 	.lm_put_owner = nfsd4_fl_put_owner,
+	.lm_expire_lock = nfsd4_fl_expire_lock,
 };
 
 static inline void
diff --git a/fs/nfsd/state.h b/fs/nfsd/state.h
index e73bdbb1634a..93e30b101578 100644
--- a/fs/nfsd/state.h
+++ b/fs/nfsd/state.h
@@ -345,6 +345,8 @@ struct nfs4_client {
 #define NFSD4_CLIENT_UPCALL_LOCK	(5)	/* upcall serialization */
 #define NFSD4_CLIENT_CB_FLAG_MASK	(1 << NFSD4_CLIENT_CB_UPDATE | \
 					 1 << NFSD4_CLIENT_CB_KILL)
+#define NFSD4_COURTESY_CLIENT		(6)	/* be nice to expired client */
+#define NFSD4_DESTROY_COURTESY_CLIENT	(7)
 	unsigned long		cl_flags;
 	const struct cred	*cl_cb_cred;
 	struct rpc_clnt		*cl_cb_client;
@@ -385,6 +387,7 @@ struct nfs4_client {
 	struct list_head	async_copies;	/* list of async copies */
 	spinlock_t		async_lock;	/* lock for async copies */
 	atomic_t		cl_cb_inflight;	/* Outstanding callbacks */
+	int			courtesy_client_expiry;
 };
 
 /* struct nfs4_client_reset
diff --git a/include/linux/sunrpc/svc.h b/include/linux/sunrpc/svc.h
index 064c96157d1f..349bf7bf20d2 100644
--- a/include/linux/sunrpc/svc.h
+++ b/include/linux/sunrpc/svc.h
@@ -306,6 +306,7 @@ struct svc_rqst {
 						 * net namespace
 						 */
 	void **			rq_lease_breaker; /* The v4 client breaking a lease */
+	void			*rq_conflict_client;
 };
 
 #define SVC_NET(rqst) (rqst->rq_xprt ? rqst->rq_xprt->xpt_net : rqst->rq_bc_net)
-- 
2.9.5


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH v3 3/3] nfsd: back channel stuck in SEQ4_STATUS_CB_PATH_DOWN
  2021-09-16 18:22 [PATCH RFC v3 0/2] nfsd: Initial implementation of NFSv4 Courteous Server Dai Ngo
  2021-09-16 18:22 ` [PATCH v3 1/3] fs/lock: add new callback, lm_expire_lock, to lock_manager_operations Dai Ngo
  2021-09-16 18:22 ` [PATCH v3 2/3] nfsd: Initial implementation of NFSv4 Courteous Server Dai Ngo
@ 2021-09-16 18:22 ` Dai Ngo
  2021-09-16 19:00   ` Chuck Lever III
  2021-09-23  1:47 ` [PATCH RFC v3 0/2] nfsd: Initial implementation of NFSv4 Courteous Server J. Bruce Fields
  3 siblings, 1 reply; 19+ messages in thread
From: Dai Ngo @ 2021-09-16 18:22 UTC (permalink / raw)
  To: bfields; +Cc: chuck.lever, linux-nfs, linux-fsdevel

When the back channel enters SEQ4_STATUS_CB_PATH_DOWN state, the client
recovers by sending BIND_CONN_TO_SESSION but the server fails to recover
the back channel and leaves it as NFSD4_CB_DOWN.

Fix by enhancing nfsd4_bind_conn_to_session to probe the back channel
by calling nfsd4_probe_callback.

Signed-off-by: Dai Ngo <dai.ngo@oracle.com>
---
 fs/nfsd/nfs4state.c | 16 +++++++++++++---
 1 file changed, 13 insertions(+), 3 deletions(-)

diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index 54e5317f00f1..63b4d0e6fc29 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -3580,7 +3580,7 @@ static struct nfsd4_conn *__nfsd4_find_conn(struct svc_xprt *xpt, struct nfsd4_s
 }
 
 static __be32 nfsd4_match_existing_connection(struct svc_rqst *rqst,
-				struct nfsd4_session *session, u32 req)
+		struct nfsd4_session *session, u32 req, struct nfsd4_conn **conn)
 {
 	struct nfs4_client *clp = session->se_client;
 	struct svc_xprt *xpt = rqst->rq_xprt;
@@ -3603,6 +3603,8 @@ static __be32 nfsd4_match_existing_connection(struct svc_rqst *rqst,
 	else
 		status = nfserr_inval;
 	spin_unlock(&clp->cl_lock);
+	if (status == nfs_ok && conn)
+		*conn = c;
 	return status;
 }
 
@@ -3627,8 +3629,16 @@ __be32 nfsd4_bind_conn_to_session(struct svc_rqst *rqstp,
 	status = nfserr_wrong_cred;
 	if (!nfsd4_mach_creds_match(session->se_client, rqstp))
 		goto out;
-	status = nfsd4_match_existing_connection(rqstp, session, bcts->dir);
-	if (status == nfs_ok || status == nfserr_inval)
+	status = nfsd4_match_existing_connection(rqstp, session,
+			bcts->dir, &conn);
+	if (status == nfs_ok) {
+		if (bcts->dir == NFS4_CDFC4_FORE_OR_BOTH ||
+				bcts->dir == NFS4_CDFC4_BACK)
+			conn->cn_flags |= NFS4_CDFC4_BACK;
+		nfsd4_probe_callback(session->se_client);
+		goto out;
+	}
+	if (status == nfserr_inval)
 		goto out;
 	status = nfsd4_map_bcts_dir(&bcts->dir);
 	if (status)
-- 
2.9.5


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* Re: [PATCH v3 3/3] nfsd: back channel stuck in SEQ4_STATUS_CB_PATH_DOWN
  2021-09-16 18:22 ` [PATCH v3 3/3] nfsd: back channel stuck in SEQ4_STATUS_CB_PATH_DOWN Dai Ngo
@ 2021-09-16 19:00   ` Chuck Lever III
  2021-09-16 19:55     ` Bruce Fields
  0 siblings, 1 reply; 19+ messages in thread
From: Chuck Lever III @ 2021-09-16 19:00 UTC (permalink / raw)
  To: Dai Ngo, Bruce Fields; +Cc: Linux NFS Mailing List, linux-fsdevel

Bruce, Dai -

> On Sep 16, 2021, at 2:22 PM, Dai Ngo <dai.ngo@oracle.com> wrote:
> 
> When the back channel enters SEQ4_STATUS_CB_PATH_DOWN state, the client
> recovers by sending BIND_CONN_TO_SESSION but the server fails to recover
> the back channel and leaves it as NFSD4_CB_DOWN.
> 
> Fix by enhancing nfsd4_bind_conn_to_session to probe the back channel
> by calling nfsd4_probe_callback.
> 
> Signed-off-by: Dai Ngo <dai.ngo@oracle.com>

I'm wondering if this one is appropriate to pull into v5.15-rc.


> ---
> fs/nfsd/nfs4state.c | 16 +++++++++++++---
> 1 file changed, 13 insertions(+), 3 deletions(-)
> 
> diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
> index 54e5317f00f1..63b4d0e6fc29 100644
> --- a/fs/nfsd/nfs4state.c
> +++ b/fs/nfsd/nfs4state.c
> @@ -3580,7 +3580,7 @@ static struct nfsd4_conn *__nfsd4_find_conn(struct svc_xprt *xpt, struct nfsd4_s
> }
> 
> static __be32 nfsd4_match_existing_connection(struct svc_rqst *rqst,
> -				struct nfsd4_session *session, u32 req)
> +		struct nfsd4_session *session, u32 req, struct nfsd4_conn **conn)
> {
> 	struct nfs4_client *clp = session->se_client;
> 	struct svc_xprt *xpt = rqst->rq_xprt;
> @@ -3603,6 +3603,8 @@ static __be32 nfsd4_match_existing_connection(struct svc_rqst *rqst,
> 	else
> 		status = nfserr_inval;
> 	spin_unlock(&clp->cl_lock);
> +	if (status == nfs_ok && conn)
> +		*conn = c;
> 	return status;
> }
> 
> @@ -3627,8 +3629,16 @@ __be32 nfsd4_bind_conn_to_session(struct svc_rqst *rqstp,
> 	status = nfserr_wrong_cred;
> 	if (!nfsd4_mach_creds_match(session->se_client, rqstp))
> 		goto out;
> -	status = nfsd4_match_existing_connection(rqstp, session, bcts->dir);
> -	if (status == nfs_ok || status == nfserr_inval)
> +	status = nfsd4_match_existing_connection(rqstp, session,
> +			bcts->dir, &conn);
> +	if (status == nfs_ok) {
> +		if (bcts->dir == NFS4_CDFC4_FORE_OR_BOTH ||
> +				bcts->dir == NFS4_CDFC4_BACK)
> +			conn->cn_flags |= NFS4_CDFC4_BACK;
> +		nfsd4_probe_callback(session->se_client);
> +		goto out;
> +	}
> +	if (status == nfserr_inval)
> 		goto out;
> 	status = nfsd4_map_bcts_dir(&bcts->dir);
> 	if (status)
> -- 
> 2.9.5
> 

--
Chuck Lever




^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v3 3/3] nfsd: back channel stuck in SEQ4_STATUS_CB_PATH_DOWN
  2021-09-16 19:00   ` Chuck Lever III
@ 2021-09-16 19:55     ` Bruce Fields
  2021-09-16 20:15       ` dai.ngo
  0 siblings, 1 reply; 19+ messages in thread
From: Bruce Fields @ 2021-09-16 19:55 UTC (permalink / raw)
  To: Chuck Lever III; +Cc: Dai Ngo, Linux NFS Mailing List, linux-fsdevel

On Thu, Sep 16, 2021 at 07:00:20PM +0000, Chuck Lever III wrote:
> Bruce, Dai -
> 
> > On Sep 16, 2021, at 2:22 PM, Dai Ngo <dai.ngo@oracle.com> wrote:
> > 
> > When the back channel enters SEQ4_STATUS_CB_PATH_DOWN state, the client
> > recovers by sending BIND_CONN_TO_SESSION but the server fails to recover
> > the back channel and leaves it as NFSD4_CB_DOWN.
> > 
> > Fix by enhancing nfsd4_bind_conn_to_session to probe the back channel
> > by calling nfsd4_probe_callback.
> > 
> > Signed-off-by: Dai Ngo <dai.ngo@oracle.com>
> 
> I'm wondering if this one is appropriate to pull into v5.15-rc.

I think so.

Dai, do you have a pynfs test for this case?

--b.

> > ---
> > fs/nfsd/nfs4state.c | 16 +++++++++++++---
> > 1 file changed, 13 insertions(+), 3 deletions(-)
> > 
> > diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
> > index 54e5317f00f1..63b4d0e6fc29 100644
> > --- a/fs/nfsd/nfs4state.c
> > +++ b/fs/nfsd/nfs4state.c
> > @@ -3580,7 +3580,7 @@ static struct nfsd4_conn *__nfsd4_find_conn(struct svc_xprt *xpt, struct nfsd4_s
> > }
> > 
> > static __be32 nfsd4_match_existing_connection(struct svc_rqst *rqst,
> > -				struct nfsd4_session *session, u32 req)
> > +		struct nfsd4_session *session, u32 req, struct nfsd4_conn **conn)
> > {
> > 	struct nfs4_client *clp = session->se_client;
> > 	struct svc_xprt *xpt = rqst->rq_xprt;
> > @@ -3603,6 +3603,8 @@ static __be32 nfsd4_match_existing_connection(struct svc_rqst *rqst,
> > 	else
> > 		status = nfserr_inval;
> > 	spin_unlock(&clp->cl_lock);
> > +	if (status == nfs_ok && conn)
> > +		*conn = c;
> > 	return status;
> > }
> > 
> > @@ -3627,8 +3629,16 @@ __be32 nfsd4_bind_conn_to_session(struct svc_rqst *rqstp,
> > 	status = nfserr_wrong_cred;
> > 	if (!nfsd4_mach_creds_match(session->se_client, rqstp))
> > 		goto out;
> > -	status = nfsd4_match_existing_connection(rqstp, session, bcts->dir);
> > -	if (status == nfs_ok || status == nfserr_inval)
> > +	status = nfsd4_match_existing_connection(rqstp, session,
> > +			bcts->dir, &conn);
> > +	if (status == nfs_ok) {
> > +		if (bcts->dir == NFS4_CDFC4_FORE_OR_BOTH ||
> > +				bcts->dir == NFS4_CDFC4_BACK)
> > +			conn->cn_flags |= NFS4_CDFC4_BACK;
> > +		nfsd4_probe_callback(session->se_client);
> > +		goto out;
> > +	}
> > +	if (status == nfserr_inval)
> > 		goto out;
> > 	status = nfsd4_map_bcts_dir(&bcts->dir);
> > 	if (status)
> > -- 
> > 2.9.5
> > 
> 
> --
> Chuck Lever
> 
> 

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v3 3/3] nfsd: back channel stuck in SEQ4_STATUS_CB_PATH_DOWN
  2021-09-16 19:55     ` Bruce Fields
@ 2021-09-16 20:15       ` dai.ngo
  2021-09-17 18:23         ` dai.ngo
  0 siblings, 1 reply; 19+ messages in thread
From: dai.ngo @ 2021-09-16 20:15 UTC (permalink / raw)
  To: Bruce Fields, Chuck Lever III; +Cc: Linux NFS Mailing List, linux-fsdevel


On 9/16/21 12:55 PM, Bruce Fields wrote:
> On Thu, Sep 16, 2021 at 07:00:20PM +0000, Chuck Lever III wrote:
>> Bruce, Dai -
>>
>>> On Sep 16, 2021, at 2:22 PM, Dai Ngo <dai.ngo@oracle.com> wrote:
>>>
>>> When the back channel enters SEQ4_STATUS_CB_PATH_DOWN state, the client
>>> recovers by sending BIND_CONN_TO_SESSION but the server fails to recover
>>> the back channel and leaves it as NFSD4_CB_DOWN.
>>>
>>> Fix by enhancing nfsd4_bind_conn_to_session to probe the back channel
>>> by calling nfsd4_probe_callback.
>>>
>>> Signed-off-by: Dai Ngo <dai.ngo@oracle.com>
>> I'm wondering if this one is appropriate to pull into v5.15-rc.
> I think so.
>
> Dai, do you have a pynfs test for this case?

I don't, but I can create a pynfs test to reproduce the problem.

-Dai

>
> --b.
>
>>> ---
>>> fs/nfsd/nfs4state.c | 16 +++++++++++++---
>>> 1 file changed, 13 insertions(+), 3 deletions(-)
>>>
>>> diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
>>> index 54e5317f00f1..63b4d0e6fc29 100644
>>> --- a/fs/nfsd/nfs4state.c
>>> +++ b/fs/nfsd/nfs4state.c
>>> @@ -3580,7 +3580,7 @@ static struct nfsd4_conn *__nfsd4_find_conn(struct svc_xprt *xpt, struct nfsd4_s
>>> }
>>>
>>> static __be32 nfsd4_match_existing_connection(struct svc_rqst *rqst,
>>> -				struct nfsd4_session *session, u32 req)
>>> +		struct nfsd4_session *session, u32 req, struct nfsd4_conn **conn)
>>> {
>>> 	struct nfs4_client *clp = session->se_client;
>>> 	struct svc_xprt *xpt = rqst->rq_xprt;
>>> @@ -3603,6 +3603,8 @@ static __be32 nfsd4_match_existing_connection(struct svc_rqst *rqst,
>>> 	else
>>> 		status = nfserr_inval;
>>> 	spin_unlock(&clp->cl_lock);
>>> +	if (status == nfs_ok && conn)
>>> +		*conn = c;
>>> 	return status;
>>> }
>>>
>>> @@ -3627,8 +3629,16 @@ __be32 nfsd4_bind_conn_to_session(struct svc_rqst *rqstp,
>>> 	status = nfserr_wrong_cred;
>>> 	if (!nfsd4_mach_creds_match(session->se_client, rqstp))
>>> 		goto out;
>>> -	status = nfsd4_match_existing_connection(rqstp, session, bcts->dir);
>>> -	if (status == nfs_ok || status == nfserr_inval)
>>> +	status = nfsd4_match_existing_connection(rqstp, session,
>>> +			bcts->dir, &conn);
>>> +	if (status == nfs_ok) {
>>> +		if (bcts->dir == NFS4_CDFC4_FORE_OR_BOTH ||
>>> +				bcts->dir == NFS4_CDFC4_BACK)
>>> +			conn->cn_flags |= NFS4_CDFC4_BACK;
>>> +		nfsd4_probe_callback(session->se_client);
>>> +		goto out;
>>> +	}
>>> +	if (status == nfserr_inval)
>>> 		goto out;
>>> 	status = nfsd4_map_bcts_dir(&bcts->dir);
>>> 	if (status)
>>> -- 
>>> 2.9.5
>>>
>> --
>> Chuck Lever
>>
>>

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v3 3/3] nfsd: back channel stuck in SEQ4_STATUS_CB_PATH_DOWN
  2021-09-16 20:15       ` dai.ngo
@ 2021-09-17 18:23         ` dai.ngo
  0 siblings, 0 replies; 19+ messages in thread
From: dai.ngo @ 2021-09-17 18:23 UTC (permalink / raw)
  To: Bruce Fields, Chuck Lever III; +Cc: Linux NFS Mailing List, linux-fsdevel


On 9/16/21 1:15 PM, dai.ngo@oracle.com wrote:
>
> On 9/16/21 12:55 PM, Bruce Fields wrote:
>> On Thu, Sep 16, 2021 at 07:00:20PM +0000, Chuck Lever III wrote:
>>> Bruce, Dai -
>>>
>>>> On Sep 16, 2021, at 2:22 PM, Dai Ngo <dai.ngo@oracle.com> wrote:
>>>>
>>>> When the back channel enters SEQ4_STATUS_CB_PATH_DOWN state, the 
>>>> client
>>>> recovers by sending BIND_CONN_TO_SESSION but the server fails to 
>>>> recover
>>>> the back channel and leaves it as NFSD4_CB_DOWN.
>>>>
>>>> Fix by enhancing nfsd4_bind_conn_to_session to probe the back channel
>>>> by calling nfsd4_probe_callback.
>>>>
>>>> Signed-off-by: Dai Ngo <dai.ngo@oracle.com>
>>> I'm wondering if this one is appropriate to pull into v5.15-rc.
>> I think so.
>>
>> Dai, do you have a pynfs test for this case?
>
> I don't, but I can create a pynfs test for reproduce the problem.

Here are the steps to reproduce the stuck SEQ4_STATUS_CB_PATH_DOWN
problem using 'tcpkill':

Client: 5.13.0-rc2
Server: 5.15.0-rc1

1. [root@nfsvmd07 ~]# mount -o vers=4.1 nfsvme14:/root/xfs /tmp/mnt
2. [root@nfsvmd07 ~]# tcpkill host nfsvme14 and port 2049
3. [root@nfsvmd07 ~]# ls /tmp/mnt
4. CTRL-C to stop tcpkill
5. [root@nfsvmd07 ~]# ls /tmp/mnt

The problem can be observed in the wire trace, where the back channel
is stuck in SEQ4_STATUS_CB_PATH_DOWN, causing the client to keep sending
BCTS.

Note: this problem can only be reproduced with a client running 5.13 or
older.  A client running 5.14 or newer does not have this problem.  The
reason is that in 5.13, when the client re-establishes the TCP connection
it reuses the previous port number, whose connection was destroyed by
tcpkill (the client sends RST to the server). This causes the server to
set the state of the back channel to SEQ4_STATUS_CB_PATH_DOWN.  In 5.14,
the client uses a new port number when re-establishing the connection;
this results in the server returning NFS4ERR_CONN_NOT_BOUND_TO_SESSION in
the reply to the stand-alone SEQUENCE, which causes the client to send
BCTS and re-establish the back channel successfully.

I can provide the pcap files of a good and bad run of the test if
interested.

I don't have a pynfs test for this case.

-Dai

>
> -Dai
>
>>
>> --b.
>>
>>>> ---
>>>> fs/nfsd/nfs4state.c | 16 +++++++++++++---
>>>> 1 file changed, 13 insertions(+), 3 deletions(-)
>>>>
>>>> diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
>>>> index 54e5317f00f1..63b4d0e6fc29 100644
>>>> --- a/fs/nfsd/nfs4state.c
>>>> +++ b/fs/nfsd/nfs4state.c
>>>> @@ -3580,7 +3580,7 @@ static struct nfsd4_conn 
>>>> *__nfsd4_find_conn(struct svc_xprt *xpt, struct nfsd4_s
>>>> }
>>>>
>>>> static __be32 nfsd4_match_existing_connection(struct svc_rqst *rqst,
>>>> -                struct nfsd4_session *session, u32 req)
>>>> +        struct nfsd4_session *session, u32 req, struct nfsd4_conn 
>>>> **conn)
>>>> {
>>>>     struct nfs4_client *clp = session->se_client;
>>>>     struct svc_xprt *xpt = rqst->rq_xprt;
>>>> @@ -3603,6 +3603,8 @@ static __be32 
>>>> nfsd4_match_existing_connection(struct svc_rqst *rqst,
>>>>     else
>>>>         status = nfserr_inval;
>>>>     spin_unlock(&clp->cl_lock);
>>>> +    if (status == nfs_ok && conn)
>>>> +        *conn = c;
>>>>     return status;
>>>> }
>>>>
>>>> @@ -3627,8 +3629,16 @@ __be32 nfsd4_bind_conn_to_session(struct 
>>>> svc_rqst *rqstp,
>>>>     status = nfserr_wrong_cred;
>>>>     if (!nfsd4_mach_creds_match(session->se_client, rqstp))
>>>>         goto out;
>>>> -    status = nfsd4_match_existing_connection(rqstp, session, 
>>>> bcts->dir);
>>>> -    if (status == nfs_ok || status == nfserr_inval)
>>>> +    status = nfsd4_match_existing_connection(rqstp, session,
>>>> +            bcts->dir, &conn);
>>>> +    if (status == nfs_ok) {
>>>> +        if (bcts->dir == NFS4_CDFC4_FORE_OR_BOTH ||
>>>> +                bcts->dir == NFS4_CDFC4_BACK)
>>>> +            conn->cn_flags |= NFS4_CDFC4_BACK;
>>>> +        nfsd4_probe_callback(session->se_client);
>>>> +        goto out;
>>>> +    }
>>>> +    if (status == nfserr_inval)
>>>>         goto out;
>>>>     status = nfsd4_map_bcts_dir(&bcts->dir);
>>>>     if (status)
>>>> -- 
>>>> 2.9.5
>>>>
>>> -- 
>>> Chuck Lever
>>>
>>>

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v3 2/3] nfsd: Initial implementation of NFSv4 Courteous Server
  2021-09-16 18:22 ` [PATCH v3 2/3] nfsd: Initial implementation of NFSv4 Courteous Server Dai Ngo
@ 2021-09-22 21:14   ` J. Bruce Fields
  2021-09-22 22:16     ` dai.ngo
  2021-09-23  1:34   ` J. Bruce Fields
  1 sibling, 1 reply; 19+ messages in thread
From: J. Bruce Fields @ 2021-09-22 21:14 UTC (permalink / raw)
  To: Dai Ngo; +Cc: chuck.lever, linux-nfs, linux-fsdevel

On Thu, Sep 16, 2021 at 02:22:11PM -0400, Dai Ngo wrote:
> @@ -2389,6 +2395,10 @@ static int client_info_show(struct seq_file *m, void *v)
>  		seq_puts(m, "status: confirmed\n");
>  	else
>  		seq_puts(m, "status: unconfirmed\n");
> +	seq_printf(m, "courtesy client: %s\n",
> +		test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags) ? "yes" : "no");
> +	seq_printf(m, "last renew: %lld secs\n",

I'd rather keep any units to the left of the colon.  Also, "last renew"
suggests to me that it's the absolute time of the last renew.  Maybe
"seconds since last renew:" ?

> +		ktime_get_boottime_seconds() - clp->cl_time);
>  	seq_printf(m, "name: ");
>  	seq_quote_mem(m, clp->cl_name.data, clp->cl_name.len);
>  	seq_printf(m, "\nminor version: %d\n", clp->cl_minorversion);
> @@ -4652,6 +4662,42 @@ static void nfsd_break_one_deleg(struct nfs4_delegation *dp)
>  	nfsd4_run_cb(&dp->dl_recall);
>  }
>  
> +/*
> + * If the conflict happens due to a NFSv4 request then check for
> + * courtesy client and set rq_conflict_client so that upper layer
> + * can destroy the conflict client and retry the call.
> + */
> +static bool
> +nfsd_check_courtesy_client(struct nfs4_delegation *dp)
> +{
> +	struct svc_rqst *rqst;
> +	struct nfs4_client *clp = dp->dl_recall.cb_clp;
> +	struct nfsd_net *nn = net_generic(clp->net, nfsd_net_id);
> +	bool ret = false;
> +
> +	if (!i_am_nfsd()) {
> +		if (test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags)) {
> +			set_bit(NFSD4_DESTROY_COURTESY_CLIENT, &clp->cl_flags);
> +			mod_delayed_work(laundry_wq, &nn->laundromat_work, 0);
> +			return true;
> +		}
> +		return false;
> +	}
> +	rqst = kthread_data(current);
> +	if (rqst->rq_prog != NFS_PROGRAM || rqst->rq_vers < 4)
> +		return false;
> +	rqst->rq_conflict_client = NULL;
> +
> +	spin_lock(&nn->client_lock);
> +	if (test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags) &&
> +				!mark_client_expired_locked(clp)) {
> +		rqst->rq_conflict_client = clp;
> +		ret = true;
> +	}
> +	spin_unlock(&nn->client_lock);

Check whether this is safe; I think the flc_lock may be taken inside of
this lock elsewhere, resulting in a potential deadlock?

rqst doesn't need any locking as it's only being used by this thread, so
it's the client expiration stuff that's the problem, I guess.

--b.

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v3 2/3] nfsd: Initial implementation of NFSv4 Courteous Server
  2021-09-22 21:14   ` J. Bruce Fields
@ 2021-09-22 22:16     ` dai.ngo
  2021-09-23  1:18       ` J. Bruce Fields
  0 siblings, 1 reply; 19+ messages in thread
From: dai.ngo @ 2021-09-22 22:16 UTC (permalink / raw)
  To: J. Bruce Fields; +Cc: chuck.lever, linux-nfs, linux-fsdevel


On 9/22/21 2:14 PM, J. Bruce Fields wrote:
> On Thu, Sep 16, 2021 at 02:22:11PM -0400, Dai Ngo wrote:
>> @@ -2389,6 +2395,10 @@ static int client_info_show(struct seq_file *m, void *v)
>>   		seq_puts(m, "status: confirmed\n");
>>   	else
>>   		seq_puts(m, "status: unconfirmed\n");
>> +	seq_printf(m, "courtesy client: %s\n",
>> +		test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags) ? "yes" : "no");
>> +	seq_printf(m, "last renew: %lld secs\n",
> I'd rather keep any units to the left of the colon.  Also, "last renew"
> suggests to me that it's the absolute time of the last renew.  Maybe
> "seconds since last renew:" ?

will fix in v4.

>
>> +		ktime_get_boottime_seconds() - clp->cl_time);
>>   	seq_printf(m, "name: ");
>>   	seq_quote_mem(m, clp->cl_name.data, clp->cl_name.len);
>>   	seq_printf(m, "\nminor version: %d\n", clp->cl_minorversion);
>> @@ -4652,6 +4662,42 @@ static void nfsd_break_one_deleg(struct nfs4_delegation *dp)
>>   	nfsd4_run_cb(&dp->dl_recall);
>>   }
>>   
>> +/*
>> + * If the conflict happens due to a NFSv4 request then check for
>> + * courtesy client and set rq_conflict_client so that upper layer
>> + * can destroy the conflict client and retry the call.
>> + */
>> +static bool
>> +nfsd_check_courtesy_client(struct nfs4_delegation *dp)
>> +{
>> +	struct svc_rqst *rqst;
>> +	struct nfs4_client *clp = dp->dl_recall.cb_clp;
>> +	struct nfsd_net *nn = net_generic(clp->net, nfsd_net_id);
>> +	bool ret = false;
>> +
>> +	if (!i_am_nfsd()) {
>> +		if (test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags)) {
>> +			set_bit(NFSD4_DESTROY_COURTESY_CLIENT, &clp->cl_flags);
>> +			mod_delayed_work(laundry_wq, &nn->laundromat_work, 0);
>> +			return true;
>> +		}
>> +		return false;
>> +	}
>> +	rqst = kthread_data(current);
>> +	if (rqst->rq_prog != NFS_PROGRAM || rqst->rq_vers < 4)
>> +		return false;
>> +	rqst->rq_conflict_client = NULL;
>> +
>> +	spin_lock(&nn->client_lock);
>> +	if (test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags) &&
>> +				!mark_client_expired_locked(clp)) {
>> +		rqst->rq_conflict_client = clp;
>> +		ret = true;
>> +	}
>> +	spin_unlock(&nn->client_lock);
> Check whether this is safe; I think the flc_lock may be taken inside of
> this lock elsewhere, resulting in a potential deadlock?
>
> rqst doesn't need any locking as it's only being used by this thread, so
> it's the client expiration stuff that's the problem, I guess.

mark_client_expired_locked needs to acquire cl_lock. I think the lock
ordering is ok, client_lock -> cl_lock. nfsd4_exchange_id uses this
lock ordering.

I will submit a v4 patch with the fix in client_info_show and also new
code for handling NFSv4 share reservation conflicts with courtesy clients.

Thanks Bruce,

-Dai

>
> --b.

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v3 2/3] nfsd: Initial implementation of NFSv4 Courteous Server
  2021-09-22 22:16     ` dai.ngo
@ 2021-09-23  1:18       ` J. Bruce Fields
  2021-09-23 17:09         ` dai.ngo
  0 siblings, 1 reply; 19+ messages in thread
From: J. Bruce Fields @ 2021-09-23  1:18 UTC (permalink / raw)
  To: dai.ngo; +Cc: chuck.lever, linux-nfs, linux-fsdevel

On Wed, Sep 22, 2021 at 03:16:34PM -0700, dai.ngo@oracle.com wrote:
> 
> On 9/22/21 2:14 PM, J. Bruce Fields wrote:
> >On Thu, Sep 16, 2021 at 02:22:11PM -0400, Dai Ngo wrote:
> >>+/*
> >>+ * If the conflict happens due to a NFSv4 request then check for
> >>+ * courtesy client and set rq_conflict_client so that upper layer
> >>+ * can destroy the conflict client and retry the call.
> >>+ */
> >>+static bool
> >>+nfsd_check_courtesy_client(struct nfs4_delegation *dp)
> >>+{
> >>+	struct svc_rqst *rqst;
> >>+	struct nfs4_client *clp = dp->dl_recall.cb_clp;
> >>+	struct nfsd_net *nn = net_generic(clp->net, nfsd_net_id);
> >>+	bool ret = false;
> >>+
> >>+	if (!i_am_nfsd()) {
> >>+		if (test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags)) {
> >>+			set_bit(NFSD4_DESTROY_COURTESY_CLIENT, &clp->cl_flags);
> >>+			mod_delayed_work(laundry_wq, &nn->laundromat_work, 0);
> >>+			return true;
> >>+		}
> >>+		return false;
> >>+	}
> >>+	rqst = kthread_data(current);
> >>+	if (rqst->rq_prog != NFS_PROGRAM || rqst->rq_vers < 4)
> >>+		return false;
> >>+	rqst->rq_conflict_client = NULL;
> >>+
> >>+	spin_lock(&nn->client_lock);
> >>+	if (test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags) &&
> >>+				!mark_client_expired_locked(clp)) {
> >>+		rqst->rq_conflict_client = clp;
> >>+		ret = true;
> >>+	}
> >>+	spin_unlock(&nn->client_lock);
> >Check whether this is safe; I think the flc_lock may be taken inside of
> >this lock elsewhere, resulting in a potential deadlock?
> >
> >rqst doesn't need any locking as it's only being used by this thread, so
> >it's the client expiration stuff that's the problem, I guess.
> 
> mark_client_expired_locked needs to acquire cl_lock. I think the lock
> ordering is ok, client_lock -> cl_lock. nfsd4_exchange_id uses this
> lock ordering.

It's flc_lock (see locks.c) that I'm worried about.  I've got a lockdep
warning here, taking a closer look....

nfsd4_release_lockowner takes clp->cl_lock and then flc_lock.

Here we're taking flc_lock and then client_lock.

As you say, elsewhere client_lock is taken and then cl_lock.

So that's the loop, I think.

--b.

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v3 2/3] nfsd: Initial implementation of NFSv4 Courteous Server
  2021-09-16 18:22 ` [PATCH v3 2/3] nfsd: Initial implementation of NFSv4 Courteous Server Dai Ngo
  2021-09-22 21:14   ` J. Bruce Fields
@ 2021-09-23  1:34   ` J. Bruce Fields
  2021-09-23 17:09     ` dai.ngo
  1 sibling, 1 reply; 19+ messages in thread
From: J. Bruce Fields @ 2021-09-23  1:34 UTC (permalink / raw)
  To: Dai Ngo; +Cc: chuck.lever, linux-nfs, linux-fsdevel

On Thu, Sep 16, 2021 at 02:22:11PM -0400, Dai Ngo wrote:
> +/*
> + * If the conflict happens due to a NFSv4 request then check for
> + * courtesy client and set rq_conflict_client so that upper layer
> + * can destroy the conflict client and retry the call.
> + */

I think we need a different approach.  Wouldn't we need to take a
reference on clp when we assign to rq_conflict_client?

What happens if there are multiple leases found in the loop in
__break_lease?

It doesn't seem right that we'd need to treat the case where nfsd is the
breaker differently from the case where it's another process.

I'm not sure what to suggest instead, though....  Maybe as with locks we
need two separate callbacks, one that tests whether there's a courtesy
client that needs removing, one that does it after we've dropped the
lock.
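
Something along these lines, roughly (made-up names, just to sketch the
idea, not something I've tried):

	/* in lock_manager_operations: */

	/* called under flc_lock: only report whether the conflicting
	 * lock is held by an expirable (courtesy) client */
	bool (*lm_lock_expirable)(struct file_lock *fl);

	/* called after flc_lock has been dropped: actually expire the
	 * client that owns the conflicting lock; may block */
	void (*lm_expire_lock)(struct file_lock *fl);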

--b.

> +static bool
> +nfsd_check_courtesy_client(struct nfs4_delegation *dp)
> +{
> +	struct svc_rqst *rqst;
> +	struct nfs4_client *clp = dp->dl_recall.cb_clp;
> +	struct nfsd_net *nn = net_generic(clp->net, nfsd_net_id);
> +	bool ret = false;
> +
> +	if (!i_am_nfsd()) {
> +		if (test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags)) {
> +			set_bit(NFSD4_DESTROY_COURTESY_CLIENT, &clp->cl_flags);
> +			mod_delayed_work(laundry_wq, &nn->laundromat_work, 0);
> +			return true;
> +		}
> +		return false;
> +	}
> +	rqst = kthread_data(current);
> +	if (rqst->rq_prog != NFS_PROGRAM || rqst->rq_vers < 4)
> +		return false;
> +	rqst->rq_conflict_client = NULL;
> +
> +	spin_lock(&nn->client_lock);
> +	if (test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags) &&
> +				!mark_client_expired_locked(clp)) {
> +		rqst->rq_conflict_client = clp;
> +		ret = true;
> +	}
> +	spin_unlock(&nn->client_lock);
> +	return ret;
> +}
> +
>  /* Called from break_lease() with i_lock held. */
>  static bool
>  nfsd_break_deleg_cb(struct file_lock *fl)
> @@ -4660,6 +4706,8 @@ nfsd_break_deleg_cb(struct file_lock *fl)
>  	struct nfs4_delegation *dp = (struct nfs4_delegation *)fl->fl_owner;
>  	struct nfs4_file *fp = dp->dl_stid.sc_file;
>  
> +	if (nfsd_check_courtesy_client(dp))
> +		return false;
>  	trace_nfsd_cb_recall(&dp->dl_stid);
>  
>  	/*
> @@ -5279,6 +5327,22 @@ static void nfsd4_deleg_xgrade_none_ext(struct nfsd4_open *open,
>  	 */
>  }
>  
> +static bool
> +nfs4_destroy_courtesy_client(struct nfs4_client *clp)
> +{
> +	struct nfsd_net *nn = net_generic(clp->net, nfsd_net_id);
> +
> +	spin_lock(&nn->client_lock);
> +	if (!test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags) ||
> +			mark_client_expired_locked(clp)) {
> +		spin_unlock(&nn->client_lock);
> +		return false;
> +	}
> +	spin_unlock(&nn->client_lock);
> +	expire_client(clp);
> +	return true;
> +}
> +
>  __be32
>  nfsd4_process_open2(struct svc_rqst *rqstp, struct svc_fh *current_fh, struct nfsd4_open *open)
>  {
> @@ -5328,7 +5392,13 @@ nfsd4_process_open2(struct svc_rqst *rqstp, struct svc_fh *current_fh, struct nf
>  			goto out;
>  		}
>  	} else {
> +		rqstp->rq_conflict_client = NULL;
>  		status = nfs4_get_vfs_file(rqstp, fp, current_fh, stp, open);
> +		if (status == nfserr_jukebox && rqstp->rq_conflict_client) {
> +			if (nfs4_destroy_courtesy_client(rqstp->rq_conflict_client))
> +				status = nfs4_get_vfs_file(rqstp, fp, current_fh, stp, open);
> +		}
> +
>  		if (status) {
>  			stp->st_stid.sc_type = NFS4_CLOSED_STID;
>  			release_open_stateid(stp);
> @@ -5562,6 +5632,47 @@ static void nfsd4_ssc_expire_umount(struct nfsd_net *nn)
>  }
>  #endif
>  
> +static
> +bool nfs4_anylock_conflict(struct nfs4_client *clp)
> +{
> +	int i;
> +	struct nfs4_stateowner *so, *tmp;
> +	struct nfs4_lockowner *lo;
> +	struct nfs4_ol_stateid *stp;
> +	struct nfs4_file *nf;
> +	struct inode *ino;
> +	struct file_lock_context *ctx;
> +	struct file_lock *fl;
> +
> +	for (i = 0; i < OWNER_HASH_SIZE; i++) {
> +		/* scan each lock owner */
> +		list_for_each_entry_safe(so, tmp, &clp->cl_ownerstr_hashtbl[i],
> +				so_strhash) {
> +			if (so->so_is_open_owner)
> +				continue;
> +
> +			/* scan lock states of this lock owner */
> +			lo = lockowner(so);
> +			list_for_each_entry(stp, &lo->lo_owner.so_stateids,
> +					st_perstateowner) {
> +				nf = stp->st_stid.sc_file;
> +				ino = nf->fi_inode;
> +				ctx = ino->i_flctx;
> +				if (!ctx)
> +					continue;
> +				/* check each lock belongs to this lock state */
> +				list_for_each_entry(fl, &ctx->flc_posix, fl_list) {
> +					if (fl->fl_owner != lo)
> +						continue;
> +					if (!list_empty(&fl->fl_blocked_requests))
> +						return true;
> +				}
> +			}
> +		}
> +	}
> +	return false;
> +}
> +
>  static time64_t
>  nfs4_laundromat(struct nfsd_net *nn)
>  {
> @@ -5577,7 +5688,9 @@ nfs4_laundromat(struct nfsd_net *nn)
>  	};
>  	struct nfs4_cpntf_state *cps;
>  	copy_stateid_t *cps_t;
> +	struct nfs4_stid *stid;
>  	int i;
> +	int id = 0;
>  
>  	if (clients_still_reclaiming(nn)) {
>  		lt.new_timeo = 0;
> @@ -5598,8 +5711,33 @@ nfs4_laundromat(struct nfsd_net *nn)
>  	spin_lock(&nn->client_lock);
>  	list_for_each_safe(pos, next, &nn->client_lru) {
>  		clp = list_entry(pos, struct nfs4_client, cl_lru);
> +		if (test_bit(NFSD4_DESTROY_COURTESY_CLIENT, &clp->cl_flags)) {
> +			clear_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags);
> +			goto exp_client;
> +		}
> +		if (test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags)) {
> +			if (ktime_get_boottime_seconds() >= clp->courtesy_client_expiry)
> +				goto exp_client;
> +			/*
> +			 * after umount, v4.0 client is still
> +			 * around waiting to be expired
> +			 */
> +			if (clp->cl_minorversion)
> +				continue;
> +		}
>  		if (!state_expired(&lt, clp->cl_time))
>  			break;
> +		spin_lock(&clp->cl_lock);
> +		stid = idr_get_next(&clp->cl_stateids, &id);
> +		spin_unlock(&clp->cl_lock);
> +		if (stid && !nfs4_anylock_conflict(clp)) {
> +			/* client still has states */
> +			clp->courtesy_client_expiry =
> +				ktime_get_boottime_seconds() + courtesy_client_expiry;
> +			set_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags);
> +			continue;
> +		}
> +exp_client:
>  		if (mark_client_expired_locked(clp))
>  			continue;
>  		list_add(&clp->cl_lru, &reaplist);
> @@ -5679,9 +5817,6 @@ nfs4_laundromat(struct nfsd_net *nn)
>  	return max_t(time64_t, lt.new_timeo, NFSD_LAUNDROMAT_MINTIMEOUT);
>  }
>  
> -static struct workqueue_struct *laundry_wq;
> -static void laundromat_main(struct work_struct *);
> -
>  static void
>  laundromat_main(struct work_struct *laundry)
>  {
> @@ -6486,6 +6621,19 @@ nfs4_transform_lock_offset(struct file_lock *lock)
>  		lock->fl_end = OFFSET_MAX;
>  }
>  
> +/* return true if lock was expired else return false */
> +static bool
> +nfsd4_fl_expire_lock(struct file_lock *fl, bool testonly)
> +{
> +	struct nfs4_lockowner *lo = (struct nfs4_lockowner *)fl->fl_owner;
> +	struct nfs4_client *clp = lo->lo_owner.so_client;
> +
> +	if (testonly)
> +		return test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags) ?
> +			true : false;
> +	return nfs4_destroy_courtesy_client(clp);
> +}
> +
>  static fl_owner_t
>  nfsd4_fl_get_owner(fl_owner_t owner)
>  {
> @@ -6533,6 +6681,7 @@ static const struct lock_manager_operations nfsd_posix_mng_ops  = {
>  	.lm_notify = nfsd4_lm_notify,
>  	.lm_get_owner = nfsd4_fl_get_owner,
>  	.lm_put_owner = nfsd4_fl_put_owner,
> +	.lm_expire_lock = nfsd4_fl_expire_lock,
>  };
>  
>  static inline void
> diff --git a/fs/nfsd/state.h b/fs/nfsd/state.h
> index e73bdbb1634a..93e30b101578 100644
> --- a/fs/nfsd/state.h
> +++ b/fs/nfsd/state.h
> @@ -345,6 +345,8 @@ struct nfs4_client {
>  #define NFSD4_CLIENT_UPCALL_LOCK	(5)	/* upcall serialization */
>  #define NFSD4_CLIENT_CB_FLAG_MASK	(1 << NFSD4_CLIENT_CB_UPDATE | \
>  					 1 << NFSD4_CLIENT_CB_KILL)
> +#define NFSD4_COURTESY_CLIENT		(6)	/* be nice to expired client */
> +#define NFSD4_DESTROY_COURTESY_CLIENT	(7)
>  	unsigned long		cl_flags;
>  	const struct cred	*cl_cb_cred;
>  	struct rpc_clnt		*cl_cb_client;
> @@ -385,6 +387,7 @@ struct nfs4_client {
>  	struct list_head	async_copies;	/* list of async copies */
>  	spinlock_t		async_lock;	/* lock for async copies */
>  	atomic_t		cl_cb_inflight;	/* Outstanding callbacks */
> +	int			courtesy_client_expiry;
>  };
>  
>  /* struct nfs4_client_reset
> diff --git a/include/linux/sunrpc/svc.h b/include/linux/sunrpc/svc.h
> index 064c96157d1f..349bf7bf20d2 100644
> --- a/include/linux/sunrpc/svc.h
> +++ b/include/linux/sunrpc/svc.h
> @@ -306,6 +306,7 @@ struct svc_rqst {
>  						 * net namespace
>  						 */
>  	void **			rq_lease_breaker; /* The v4 client breaking a lease */
> +	void			*rq_conflict_client;
>  };
>  
>  #define SVC_NET(rqst) (rqst->rq_xprt ? rqst->rq_xprt->xpt_net : rqst->rq_bc_net)
> -- 
> 2.9.5

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH RFC v3 0/2] nfsd: Initial implementation of NFSv4 Courteous Server
  2021-09-16 18:22 [PATCH RFC v3 0/2] nfsd: Initial implementation of NFSv4 Courteous Server Dai Ngo
                   ` (2 preceding siblings ...)
  2021-09-16 18:22 ` [PATCH v3 3/3] nfsd: back channel stuck in SEQ4_STATUS_CB_PATH_DOWN Dai Ngo
@ 2021-09-23  1:47 ` J. Bruce Fields
  2021-09-23 17:15   ` dai.ngo
  3 siblings, 1 reply; 19+ messages in thread
From: J. Bruce Fields @ 2021-09-23  1:47 UTC (permalink / raw)
  To: Dai Ngo; +Cc: chuck.lever, linux-nfs, linux-fsdevel

I haven't tried to figure out why, but I notice after these patches that
pynfs tests RENEW3, LKU10, CLOSE9, and CLOSE8 are failing with
unexpected share denied errors.

--b.

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v3 2/3] nfsd: Initial implementation of NFSv4 Courteous Server
  2021-09-23  1:18       ` J. Bruce Fields
@ 2021-09-23 17:09         ` dai.ngo
  0 siblings, 0 replies; 19+ messages in thread
From: dai.ngo @ 2021-09-23 17:09 UTC (permalink / raw)
  To: J. Bruce Fields; +Cc: chuck.lever, linux-nfs, linux-fsdevel


On 9/22/21 6:18 PM, J. Bruce Fields wrote:
> On Wed, Sep 22, 2021 at 03:16:34PM -0700, dai.ngo@oracle.com wrote:
>> On 9/22/21 2:14 PM, J. Bruce Fields wrote:
>>> On Thu, Sep 16, 2021 at 02:22:11PM -0400, Dai Ngo wrote:
>>>> +/*
>>>> + * If the conflict happens due to a NFSv4 request then check for
>>>> + * courtesy client and set rq_conflict_client so that upper layer
>>>> + * can destroy the conflict client and retry the call.
>>>> + */
>>>> +static bool
>>>> +nfsd_check_courtesy_client(struct nfs4_delegation *dp)
>>>> +{
>>>> +	struct svc_rqst *rqst;
>>>> +	struct nfs4_client *clp = dp->dl_recall.cb_clp;
>>>> +	struct nfsd_net *nn = net_generic(clp->net, nfsd_net_id);
>>>> +	bool ret = false;
>>>> +
>>>> +	if (!i_am_nfsd()) {
>>>> +		if (test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags)) {
>>>> +			set_bit(NFSD4_DESTROY_COURTESY_CLIENT, &clp->cl_flags);
>>>> +			mod_delayed_work(laundry_wq, &nn->laundromat_work, 0);
>>>> +			return true;
>>>> +		}
>>>> +		return false;
>>>> +	}
>>>> +	rqst = kthread_data(current);
>>>> +	if (rqst->rq_prog != NFS_PROGRAM || rqst->rq_vers < 4)
>>>> +		return false;
>>>> +	rqst->rq_conflict_client = NULL;
>>>> +
>>>> +	spin_lock(&nn->client_lock);
>>>> +	if (test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags) &&
>>>> +				!mark_client_expired_locked(clp)) {
>>>> +		rqst->rq_conflict_client = clp;
>>>> +		ret = true;
>>>> +	}
>>>> +	spin_unlock(&nn->client_lock);
>>> Check whether this is safe; I think the flc_lock may be taken inside of
>>> this lock elsewhere, resulting in a potential deadlock?
>>>
>>> rqst doesn't need any locking as it's only being used by this thread, so
>>> it's the client expiration stuff that's the problem, I guess.
>> mark_client_expired_locked needs to acquire cl_lock. I think the lock
>> ordering is ok, client_lock -> cl_lock. nfsd4_exchange_id uses this
>> lock ordering.
> It's flc_lock (see locks.c) that I'm worried about.  I've got a lockdep
> warning here, taking a closer look....
>
> nfsd4_release_lockowner takes clp->cl_lock and then fcl_lock.
>
> Here we're taking fcl_lock and then client_lock.
>
> As you say, elsewhere client_lock is taken and then cl_lock.
>
> So that's the loop, I think.

Thanks Bruce, I see the deadlock. We will need a new approach for this.

-Dai

>
> --b.

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v3 2/3] nfsd: Initial implementation of NFSv4 Courteous Server
  2021-09-23  1:34   ` J. Bruce Fields
@ 2021-09-23 17:09     ` dai.ngo
  2021-09-23 19:32       ` J. Bruce Fields
  0 siblings, 1 reply; 19+ messages in thread
From: dai.ngo @ 2021-09-23 17:09 UTC (permalink / raw)
  To: J. Bruce Fields; +Cc: chuck.lever, linux-nfs, linux-fsdevel

On 9/22/21 6:34 PM, J. Bruce Fields wrote:
> On Thu, Sep 16, 2021 at 02:22:11PM -0400, Dai Ngo wrote:
>> +/*
>> + * If the conflict happens due to a NFSv4 request then check for
>> + * courtesy client and set rq_conflict_client so that upper layer
>> + * can destroy the conflict client and retry the call.
>> + */
> I think we need a different approach.

I think nfsd_check_courtesy_client is used to handle conflicts with
delegations. So instead of using rq_conflict_client to let the caller
know and destroy the courtesy client as the current patch does, we
can ask the laundromat thread to do the destroy. In that case,
nfs4_get_vfs_file in nfsd4_process_open2 will either return no error,
since the laundromat destroyed the courtesy client, or it gets
nfserr_jukebox which causes the NFS client to retry. By the time
the retry comes the courtesy client should already be destroyed.
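
Roughly, the idea is to always defer to the laundromat from the
delegation-break callback; an untested sketch (not the final code):

	/* in nfsd_break_deleg_cb(), before starting the recall */
	struct nfs4_client *clp = dp->dl_recall.cb_clp;
	struct nfsd_net *nn = net_generic(clp->net, nfsd_net_id);

	if (test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags)) {
		set_bit(NFSD4_DESTROY_COURTESY_CLIENT, &clp->cl_flags);
		/* let the laundromat expire the courtesy client */
		mod_delayed_work(laundry_wq, &nn->laundromat_work, 0);
		/* skip the recall; the conflicting OPEN gets
		 * nfserr_jukebox and the client retries */
		return false;
	}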

>   Wouldn't we need to take a
> reference on clp when we assign to rq_conflict_client?

we won't need rq_conflict_client with the new approach.

>
> What happens if there are multiple leases found in the loop in
> __break_lease?

this should no longer be a problem either.

>
> It doesn't seem right that we'd need to treat the case where nfsd is the
> breaker differently the case where it's another process.
>
> I'm not sure what to suggest instead, though....  Maybe as with locks we
> need two separate callbacks, one that tests whether there's a courtesy
> client that needs removing, one that does it after we've dropped the

I will try the new approach if you don't see any obvious problems
with it.

-Dai

> lock.
>
> --b.
>
>> +static bool
>> +nfsd_check_courtesy_client(struct nfs4_delegation *dp)
>> +{
>> +	struct svc_rqst *rqst;
>> +	struct nfs4_client *clp = dp->dl_recall.cb_clp;
>> +	struct nfsd_net *nn = net_generic(clp->net, nfsd_net_id);
>> +	bool ret = false;
>> +
>> +	if (!i_am_nfsd()) {
>> +		if (test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags)) {
>> +			set_bit(NFSD4_DESTROY_COURTESY_CLIENT, &clp->cl_flags);
>> +			mod_delayed_work(laundry_wq, &nn->laundromat_work, 0);
>> +			return true;
>> +		}
>> +		return false;
>> +	}
>> +	rqst = kthread_data(current);
>> +	if (rqst->rq_prog != NFS_PROGRAM || rqst->rq_vers < 4)
>> +		return false;
>> +	rqst->rq_conflict_client = NULL;
>> +
>> +	spin_lock(&nn->client_lock);
>> +	if (test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags) &&
>> +				!mark_client_expired_locked(clp)) {
>> +		rqst->rq_conflict_client = clp;
>> +		ret = true;
>> +	}
>> +	spin_unlock(&nn->client_lock);
>> +	return ret;
>> +}
>> +
>>   /* Called from break_lease() with i_lock held. */
>>   static bool
>>   nfsd_break_deleg_cb(struct file_lock *fl)
>> @@ -4660,6 +4706,8 @@ nfsd_break_deleg_cb(struct file_lock *fl)
>>   	struct nfs4_delegation *dp = (struct nfs4_delegation *)fl->fl_owner;
>>   	struct nfs4_file *fp = dp->dl_stid.sc_file;
>>   
>> +	if (nfsd_check_courtesy_client(dp))
>> +		return false;
>>   	trace_nfsd_cb_recall(&dp->dl_stid);
>>   
>>   	/*
>> @@ -5279,6 +5327,22 @@ static void nfsd4_deleg_xgrade_none_ext(struct nfsd4_open *open,
>>   	 */
>>   }
>>   
>> +static bool
>> +nfs4_destroy_courtesy_client(struct nfs4_client *clp)
>> +{
>> +	struct nfsd_net *nn = net_generic(clp->net, nfsd_net_id);
>> +
>> +	spin_lock(&nn->client_lock);
>> +	if (!test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags) ||
>> +			mark_client_expired_locked(clp)) {
>> +		spin_unlock(&nn->client_lock);
>> +		return false;
>> +	}
>> +	spin_unlock(&nn->client_lock);
>> +	expire_client(clp);
>> +	return true;
>> +}
>> +
>>   __be32
>>   nfsd4_process_open2(struct svc_rqst *rqstp, struct svc_fh *current_fh, struct nfsd4_open *open)
>>   {
>> @@ -5328,7 +5392,13 @@ nfsd4_process_open2(struct svc_rqst *rqstp, struct svc_fh *current_fh, struct nf
>>   			goto out;
>>   		}
>>   	} else {
>> +		rqstp->rq_conflict_client = NULL;
>>   		status = nfs4_get_vfs_file(rqstp, fp, current_fh, stp, open);
>> +		if (status == nfserr_jukebox && rqstp->rq_conflict_client) {
>> +			if (nfs4_destroy_courtesy_client(rqstp->rq_conflict_client))
>> +				status = nfs4_get_vfs_file(rqstp, fp, current_fh, stp, open);
>> +		}
>> +
>>   		if (status) {
>>   			stp->st_stid.sc_type = NFS4_CLOSED_STID;
>>   			release_open_stateid(stp);
>> @@ -5562,6 +5632,47 @@ static void nfsd4_ssc_expire_umount(struct nfsd_net *nn)
>>   }
>>   #endif
>>   
>> +static
>> +bool nfs4_anylock_conflict(struct nfs4_client *clp)
>> +{
>> +	int i;
>> +	struct nfs4_stateowner *so, *tmp;
>> +	struct nfs4_lockowner *lo;
>> +	struct nfs4_ol_stateid *stp;
>> +	struct nfs4_file *nf;
>> +	struct inode *ino;
>> +	struct file_lock_context *ctx;
>> +	struct file_lock *fl;
>> +
>> +	for (i = 0; i < OWNER_HASH_SIZE; i++) {
>> +		/* scan each lock owner */
>> +		list_for_each_entry_safe(so, tmp, &clp->cl_ownerstr_hashtbl[i],
>> +				so_strhash) {
>> +			if (so->so_is_open_owner)
>> +				continue;
>> +
>> +			/* scan lock states of this lock owner */
>> +			lo = lockowner(so);
>> +			list_for_each_entry(stp, &lo->lo_owner.so_stateids,
>> +					st_perstateowner) {
>> +				nf = stp->st_stid.sc_file;
>> +				ino = nf->fi_inode;
>> +				ctx = ino->i_flctx;
>> +				if (!ctx)
>> +					continue;
>> +				/* check each lock belongs to this lock state */
>> +				list_for_each_entry(fl, &ctx->flc_posix, fl_list) {
>> +					if (fl->fl_owner != lo)
>> +						continue;
>> +					if (!list_empty(&fl->fl_blocked_requests))
>> +						return true;
>> +				}
>> +			}
>> +		}
>> +	}
>> +	return false;
>> +}
>> +
>>   static time64_t
>>   nfs4_laundromat(struct nfsd_net *nn)
>>   {
>> @@ -5577,7 +5688,9 @@ nfs4_laundromat(struct nfsd_net *nn)
>>   	};
>>   	struct nfs4_cpntf_state *cps;
>>   	copy_stateid_t *cps_t;
>> +	struct nfs4_stid *stid;
>>   	int i;
>> +	int id = 0;
>>   
>>   	if (clients_still_reclaiming(nn)) {
>>   		lt.new_timeo = 0;
>> @@ -5598,8 +5711,33 @@ nfs4_laundromat(struct nfsd_net *nn)
>>   	spin_lock(&nn->client_lock);
>>   	list_for_each_safe(pos, next, &nn->client_lru) {
>>   		clp = list_entry(pos, struct nfs4_client, cl_lru);
>> +		if (test_bit(NFSD4_DESTROY_COURTESY_CLIENT, &clp->cl_flags)) {
>> +			clear_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags);
>> +			goto exp_client;
>> +		}
>> +		if (test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags)) {
>> +			if (ktime_get_boottime_seconds() >= clp->courtesy_client_expiry)
>> +				goto exp_client;
>> +			/*
>> +			 * after umount, v4.0 client is still
>> +			 * around waiting to be expired
>> +			 */
>> +			if (clp->cl_minorversion)
>> +				continue;
>> +		}
>>   		if (!state_expired(&lt, clp->cl_time))
>>   			break;
>> +		spin_lock(&clp->cl_lock);
>> +		stid = idr_get_next(&clp->cl_stateids, &id);
>> +		spin_unlock(&clp->cl_lock);
>> +		if (stid && !nfs4_anylock_conflict(clp)) {
>> +			/* client still has states */
>> +			clp->courtesy_client_expiry =
>> +				ktime_get_boottime_seconds() + courtesy_client_expiry;
>> +			set_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags);
>> +			continue;
>> +		}
>> +exp_client:
>>   		if (mark_client_expired_locked(clp))
>>   			continue;
>>   		list_add(&clp->cl_lru, &reaplist);
>> @@ -5679,9 +5817,6 @@ nfs4_laundromat(struct nfsd_net *nn)
>>   	return max_t(time64_t, lt.new_timeo, NFSD_LAUNDROMAT_MINTIMEOUT);
>>   }
>>   
>> -static struct workqueue_struct *laundry_wq;
>> -static void laundromat_main(struct work_struct *);
>> -
>>   static void
>>   laundromat_main(struct work_struct *laundry)
>>   {
>> @@ -6486,6 +6621,19 @@ nfs4_transform_lock_offset(struct file_lock *lock)
>>   		lock->fl_end = OFFSET_MAX;
>>   }
>>   
>> +/* return true if lock was expired else return false */
>> +static bool
>> +nfsd4_fl_expire_lock(struct file_lock *fl, bool testonly)
>> +{
>> +	struct nfs4_lockowner *lo = (struct nfs4_lockowner *)fl->fl_owner;
>> +	struct nfs4_client *clp = lo->lo_owner.so_client;
>> +
>> +	if (testonly)
>> +		return test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags) ?
>> +			true : false;
>> +	return nfs4_destroy_courtesy_client(clp);
>> +}
>> +
>>   static fl_owner_t
>>   nfsd4_fl_get_owner(fl_owner_t owner)
>>   {
>> @@ -6533,6 +6681,7 @@ static const struct lock_manager_operations nfsd_posix_mng_ops  = {
>>   	.lm_notify = nfsd4_lm_notify,
>>   	.lm_get_owner = nfsd4_fl_get_owner,
>>   	.lm_put_owner = nfsd4_fl_put_owner,
>> +	.lm_expire_lock = nfsd4_fl_expire_lock,
>>   };
>>   
>>   static inline void
>> diff --git a/fs/nfsd/state.h b/fs/nfsd/state.h
>> index e73bdbb1634a..93e30b101578 100644
>> --- a/fs/nfsd/state.h
>> +++ b/fs/nfsd/state.h
>> @@ -345,6 +345,8 @@ struct nfs4_client {
>>   #define NFSD4_CLIENT_UPCALL_LOCK	(5)	/* upcall serialization */
>>   #define NFSD4_CLIENT_CB_FLAG_MASK	(1 << NFSD4_CLIENT_CB_UPDATE | \
>>   					 1 << NFSD4_CLIENT_CB_KILL)
>> +#define NFSD4_COURTESY_CLIENT		(6)	/* be nice to expired client */
>> +#define NFSD4_DESTROY_COURTESY_CLIENT	(7)
>>   	unsigned long		cl_flags;
>>   	const struct cred	*cl_cb_cred;
>>   	struct rpc_clnt		*cl_cb_client;
>> @@ -385,6 +387,7 @@ struct nfs4_client {
>>   	struct list_head	async_copies;	/* list of async copies */
>>   	spinlock_t		async_lock;	/* lock for async copies */
>>   	atomic_t		cl_cb_inflight;	/* Outstanding callbacks */
>> +	int			courtesy_client_expiry;
>>   };
>>   
>>   /* struct nfs4_client_reset
>> diff --git a/include/linux/sunrpc/svc.h b/include/linux/sunrpc/svc.h
>> index 064c96157d1f..349bf7bf20d2 100644
>> --- a/include/linux/sunrpc/svc.h
>> +++ b/include/linux/sunrpc/svc.h
>> @@ -306,6 +306,7 @@ struct svc_rqst {
>>   						 * net namespace
>>   						 */
>>   	void **			rq_lease_breaker; /* The v4 client breaking a lease */
>> +	void			*rq_conflict_client;
>>   };
>>   
>>   #define SVC_NET(rqst) (rqst->rq_xprt ? rqst->rq_xprt->xpt_net : rqst->rq_bc_net)
>> -- 
>> 2.9.5

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH RFC v3 0/2] nfsd: Initial implementation of NFSv4 Courteous Server
  2021-09-23  1:47 ` [PATCH RFC v3 0/2] nfsd: Initial implementation of NFSv4 Courteous Server J. Bruce Fields
@ 2021-09-23 17:15   ` dai.ngo
  2021-09-23 19:37     ` dai.ngo
  0 siblings, 1 reply; 19+ messages in thread
From: dai.ngo @ 2021-09-23 17:15 UTC (permalink / raw)
  To: J. Bruce Fields; +Cc: chuck.lever, linux-nfs, linux-fsdevel


On 9/22/21 6:47 PM, J. Bruce Fields wrote:
> I haven't tried to figure out why, but I notice after these patches that
> pynfs tests RENEW3

The failure is related to share reservation and will be fixed when we
have code that handles share reservation conflicts with courtesy clients.
However, with courtesy client support, the test will need to be modified,
since the expected result will be NFS4_OK instead of NFS4ERR_EXPIRE.

> , LKU10, CLOSE9, and CLOSE8 are failing with
> unexpected share denied errors.

I suspect these tests are also related to share reservation. However,
I had problems running these tests; they are skipped. For example:

[root@nfsvmf25 nfs4.0]# ./testserver.py $server  -v CLOSE9
**************************************************
**************************************************
Command line asked for 1 of 673 tests
Of those: 1 Skipped, 0 Failed, 0 Warned, 0 Passed
[root@nfsvmf25 nfs4.0]#

Do I need to do any special setup to run these tests?

Thanks,
-Dai

>
> --b.

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v3 2/3] nfsd: Initial implementation of NFSv4 Courteous Server
  2021-09-23 17:09     ` dai.ngo
@ 2021-09-23 19:32       ` J. Bruce Fields
  2021-09-24 20:53         ` dai.ngo
  0 siblings, 1 reply; 19+ messages in thread
From: J. Bruce Fields @ 2021-09-23 19:32 UTC (permalink / raw)
  To: dai.ngo; +Cc: chuck.lever, linux-nfs, linux-fsdevel

On Thu, Sep 23, 2021 at 10:09:35AM -0700, dai.ngo@oracle.com wrote:
> On 9/22/21 6:34 PM, J. Bruce Fields wrote:
> >On Thu, Sep 16, 2021 at 02:22:11PM -0400, Dai Ngo wrote:
> >>+/*
> >>+ * If the conflict happens due to a NFSv4 request then check for
> >>+ * courtesy client and set rq_conflict_client so that upper layer
> >>+ * can destroy the conflict client and retry the call.
> >>+ */
> >I think we need a different approach.
> 
> I think nfsd_check_courtesy_client is used to handle conflicts with
> delegations. So instead of using rq_conflict_client to let the caller
> know about, and destroy, the courtesy client as the current patch does,
> we can ask the laundromat thread to do the destroy.

I can't see right now why that wouldn't work.

> In that case,
> nfs4_get_vfs_file in nfsd4_process_open2 will either return no error,
> because the laundromat has already destroyed the courtesy client, or it
> gets nfserr_jukebox, which causes the NFS client to retry. By the time
> the retry comes, the courtesy client should already be destroyed.

Make sure this works for local (non-NFS) lease breakers as well.  I
think that mainly means making sure the !O_NONBLOCK case of
__break_lease works.

--b.

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH RFC v3 0/2] nfsd: Initial implementation of NFSv4 Courteous Server
  2021-09-23 17:15   ` dai.ngo
@ 2021-09-23 19:37     ` dai.ngo
  0 siblings, 0 replies; 19+ messages in thread
From: dai.ngo @ 2021-09-23 19:37 UTC (permalink / raw)
  To: J. Bruce Fields; +Cc: chuck.lever, linux-nfs, linux-fsdevel


On 9/23/21 10:15 AM, dai.ngo@oracle.com wrote:
>
> On 9/22/21 6:47 PM, J. Bruce Fields wrote:
>> I haven't tried to figure out why, but I notice after these patches that
>> pynfs tests RENEW3
>
> The failure is related to share reservation, will be fixed when we
> have code that handles share reservation with courtesy client. However,
> with courtesy client support, the test will need to be modified since
> the expected result will be NFS4_OK instead of NFS4ERR_EXPIRE.

Correction: with the patch for handling share reservation conflicts,
this test now passes, returning NFS4ERR_EXPIRE as expected since the
courtesy client was destroyed.

-Dai

>
>> , LKU10, CLOSE9, and CLOSE8 are failing with
>> unexpected share denied errors.
>
> I suspected these tests are also related to share reservation. However,
> I had problems running these tests, they are skipped. For example:
>
> [root@nfsvmf25 nfs4.0]# ./testserver.py $server  -v CLOSE9
> **************************************************
> **************************************************
> Command line asked for 1 of 673 tests
> Of those: 1 Skipped, 0 Failed, 0 Warned, 0 Passed
> [root@nfsvmf25 nfs4.0]#
>
> Do I need to do any special setup to run these tests?

still trying to figure out why these tests are skipped on my setup.

-Dai

>
> Thanks,
> -Dai
>
>>
>> --b.

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v3 2/3] nfsd: Initial implementation of NFSv4 Courteous Server
  2021-09-23 19:32       ` J. Bruce Fields
@ 2021-09-24 20:53         ` dai.ngo
  0 siblings, 0 replies; 19+ messages in thread
From: dai.ngo @ 2021-09-24 20:53 UTC (permalink / raw)
  To: J. Bruce Fields; +Cc: chuck.lever, linux-nfs, linux-fsdevel


On 9/23/21 12:32 PM, J. Bruce Fields wrote:
> On Thu, Sep 23, 2021 at 10:09:35AM -0700, dai.ngo@oracle.com wrote:
>> On 9/22/21 6:34 PM, J. Bruce Fields wrote:
>>> On Thu, Sep 16, 2021 at 02:22:11PM -0400, Dai Ngo wrote:
>>>> +/*
>>>> + * If the conflict happens due to a NFSv4 request then check for
>>>> + * courtesy client and set rq_conflict_client so that upper layer
>>>> + * can destroy the conflict client and retry the call.
>>>> + */
>>> I think we need a different approach.
>> I think nfsd_check_courtesy_client is used to handle conflicts with
>> delegations. So instead of using rq_conflict_client to let the caller
>> know about, and destroy, the courtesy client as the current patch does,
>> we can ask the laundromat thread to do the destroy.
> I can't see right now why that wouldn't work.
>
>> In that case,
>> nfs4_get_vfs_file in nfsd4_process_open2 will either return no error,
>> because the laundromat has already destroyed the courtesy client, or it
>> gets nfserr_jukebox, which causes the NFS client to retry. By the time
>> the retry comes, the courtesy client should already be destroyed.
> Make sure this works for local (non-NFS) lease breakers as well.  I
> think that mainly means making sure the !O_NONBLOCK case of
> __break_lease works.

Yes, local lease breakers use !O_NONBLOCK. In this case __break_lease
will call lm_break, then wait for all lease conflicts to be resolved
before returning to the caller (rough paraphrase of that path below).
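From memory, the relevant part of __break_lease looks roughly like this
(a loose paraphrase to show why the blocking case works, not the exact
code):

	spin_lock(&ctx->flc_lock);
	/*
	 * lm_break is called for each conflicting lease; for a lease held
	 * by a courtesy client our callback flags the client and kicks
	 * the laundromat.
	 */
	...
	if (mode & O_NONBLOCK) {
		error = -EWOULDBLOCK;
		goto out;
	}
restart:
	... /* queue new_fl as a waiter behind the conflicting lease */
	spin_unlock(&ctx->flc_lock);
	/*
	 * Blocking callers sleep here until the conflicting leases go
	 * away, i.e. until the laundromat has expired the courtesy
	 * client and its delegations are gone.
	 */
	error = wait_event_interruptible_timeout(new_fl->fl_wait,
						 !new_fl->fl_blocker,
						 break_time);
	spin_lock(&ctx->flc_lock);
	...
	if (any_leases_conflict(inode, new_fl))
		goto restart;

So as long as the laundromat runs and actually expires the client, the
local breaker just waits it out.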

-Dai

>
> --b.

^ permalink raw reply	[flat|nested] 19+ messages in thread

end of thread, other threads:[~2021-09-24 20:53 UTC | newest]

Thread overview: 19+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-09-16 18:22 [PATCH RFC v3 0/2] nfsd: Initial implementation of NFSv4 Courteous Server Dai Ngo
2021-09-16 18:22 ` [PATCH v3 1/3] fs/lock: add new callback, lm_expire_lock, to lock_manager_operations Dai Ngo
2021-09-16 18:22 ` [PATCH v3 2/3] nfsd: Initial implementation of NFSv4 Courteous Server Dai Ngo
2021-09-22 21:14   ` J. Bruce Fields
2021-09-22 22:16     ` dai.ngo
2021-09-23  1:18       ` J. Bruce Fields
2021-09-23 17:09         ` dai.ngo
2021-09-23  1:34   ` J. Bruce Fields
2021-09-23 17:09     ` dai.ngo
2021-09-23 19:32       ` J. Bruce Fields
2021-09-24 20:53         ` dai.ngo
2021-09-16 18:22 ` [PATCH v3 3/3] nfsd: back channel stuck in SEQ4_STATUS_CB_PATH_DOWN Dai Ngo
2021-09-16 19:00   ` Chuck Lever III
2021-09-16 19:55     ` Bruce Fields
2021-09-16 20:15       ` dai.ngo
2021-09-17 18:23         ` dai.ngo
2021-09-23  1:47 ` [PATCH RFC v3 0/2] nfsd: Initial implementation of NFSv4 Courteous Server J. Bruce Fields
2021-09-23 17:15   ` dai.ngo
2021-09-23 19:37     ` dai.ngo
