* [PATCH RFC v19 0/11] NFSD: Initial implementation of NFSv4 Courteous Server
From: Dai Ngo @ 2022-03-31 16:01 UTC (permalink / raw)
  To: chuck.lever, bfields; +Cc: jlayton, viro, linux-nfs, linux-fsdevel

Hi Chuck, Bruce

This series of patches implements the NFSv4 Courteous Server.

A server which does not immediately expunge the state on lease expiration
is known as a Courteous Server.  A Courteous Server continues to recognize
previously generated state tokens as valid until a conflict arises between
the expired state and a request from another client, or until the server
reboots.
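
At a high level, the per-client decision in the laundromat becomes
something like the sketch below (simplified pseudo-code only; the
actual logic lives in nfs4_get_client_reaplist, patches 09-10/11):

        /*
         * Simplified sketch: an expired client with no state, or with
         * a blocker waiting on one of its locks, is reaped as before;
         * otherwise it is retained as a courtesy client.
         */
        if (!client_has_state(clp) || nfs4_anylock_blockers(clp))
                expire_client(clp);
        else
                clp->cl_cs_client_state = NFSD4_CLIENT_COURTESY;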

v2:

. add a new callback, lm_expire_lock, to lock_manager_operations to
  allow the lock manager to take appropriate action on a conflicting
  lock.

. handle conflicts of NFSv4 locks with NFSv3/NLM and local locks.

. expire the courtesy client after 24 hours if it has not reconnected.

. do not allow an expired client to become a courtesy client if there
  are waiters for the client's locks.

. modify client_info_show to show the courtesy client and seconds since
  the last renew.

. fix a problem where the NFSv4.1 server keeps returning
  SEQ4_STATUS_CB_PATH_DOWN in successful SEQUENCE replies after the
  courtesy client reconnects, causing the client to keep sending
  BIND_CONN_TO_SESSION (BCTS) requests to the server.

v3:

. modified posix_test_lock to check for and resolve conflicting locks,
  to handle NLM TEST and NFSv4 LOCKT requests.

. separate out the fix for the back channel being stuck in
  SEQ4_STATUS_CB_PATH_DOWN.

v4:

. rework nfsd_check_courtesy to avoid a deadlock between fl_lock and
  client_lock by asking the laundromat thread to destroy the courtesy
  client.

. handle NFSv4 share reservation conflicts with courtesy clients. This
  includes conflicts between access mode and deny mode and vice versa.

. drop the patch for the back channel being stuck in
  SEQ4_STATUS_CB_PATH_DOWN.

v5:

. fix recursive locking of file_rwsem from posix_lock_file. 

. retest with LOCKDEP enabled.

v6:

. merge with 5.15-rc7

. fix a bug in nfs4_check_deny_bmap that did not check for a matched
  nfs4_file before checking for access/deny conflicts. This bug caused
  pynfs OPEN18 to fail since the server took too long to release the
  state of many non-conflicting clients.

. enhance the share reservation conflict handler to handle the case
  where a large number of conflicting courtesy clients need to be
  expired. The first 100 clients are expired synchronously and the
  rest are expired in the background by the laundromat, with
  NFS4ERR_DELAY returned to the NFS client. This is needed to prevent
  the NFS client from timing out waiting for the reply.

v7:

. Fix a race condition in posix_test_lock and posix_lock_inode after
  dropping the spinlock.

. Enhance nfsd4_fl_expire_lock to work with the new lm_expire_lock
  callback.

. Always resolve share reservation conflicts asynchronously.

. Fix a bug in nfs4_laundromat where the spinlock was not held when
  scanning cl_ownerstr_hashtbl.

. Fix a bug in nfs4_laundromat where idr_get_next was called with an
  incorrect 'id'.

. Merge nfs4_destroy_courtesy_client into nfsd4_fl_expire_lock.

v8:

. Fix warning in nfsd4_fl_expire_lock reported by test robot.

v9:

. Simplify the lm_expire_lock API by (1) removing the 'testonly' flag
  and (2) specifying the return value as true/false to indicate
  whether the conflict was successfully resolved.

. Rework nfsd4_fl_expire_lock to mark the client with
  NFSD4_DESTROY_COURTESY_CLIENT, then tell the laundromat to expire
  the client in the background.

. Add a spinlock in nfs4_client to synchronize access to the
  NFSD4_COURTESY_CLIENT and NFSD4_DESTROY_COURTESY_CLIENT flags, to
  handle race conditions when resolving lock and share reservation
  conflicts.

. A courtesy client that was marked as NFSD4_DESTROY_COURTESY_CLIENT
  is now considered 'dead', waiting for the laundromat to expire it.
  This client is no longer allowed to use its state if it reconnects
  before the laundromat finishes expiring it.

  For a v4.1 client, the detection is done in the processing of the
  SEQUENCE op, which returns NFS4ERR_BADSESSION to force the client
  to re-establish a new client ID and session.
  For a v4.0 client, the detection is done in the processing of the
  RENEW and state-related ops, which return NFS4ERR_EXPIRED to force
  the client to re-establish a new client ID.

v10:

  Resolve the deadlock in v9 by avoiding acquiring the client lock and
  cl_cs_lock together. The laundromat needs to determine whether the
  expired client has any state and also has no blockers on its locks.
  Both of these conditions are allowed to change after the laundromat
  transitions an expired client to a courtesy client. When this
  happens, the laundromat will detect it on the next run and expire
  the courtesy client.

  Remove the client's persistent record before marking it as
  COURTESY_CLIENT, and add the persistent record back before clearing
  the COURTESY_CLIENT flag, to allow the courtesy client to transition
  back to a normal client and continue to use its state.

  A lock/delegation/share reservation conflict with a courtesy client
  is resolved by marking the courtesy client as
  DESTROY_COURTESY_CLIENT, effectively disabling it, then allowing the
  current request to proceed immediately.

  A courtesy client marked as DESTROY_COURTESY_CLIENT is not allowed
  to reconnect to reuse its state. It is expired by the laundromat
  asynchronously in the background.

  Move processing of expired clients from nfs4_laundromat to a
  separate function, nfs4_get_client_reaplist, that creates the
  reaplist and also processes courtesy clients.

  Update Documentation/filesystems/locking.rst to include the new
  lm_lock_conflict call.

  Modify leases_conflict to call lm_breaker_owns_lease only if there
  is a real conflict. This is to allow the lock manager to resolve
  the delegation conflict if possible.

v11:

  Add comment for lm_lock_conflict callback.

  Replace static const courtesy_client_expiry with macro.

  Remove the courtesy_clnt argument from find_in_sessionid_hashtbl.
  Callers use the nfs4_client->cl_cs_client boolean to determine
  whether it's a courtesy client and take appropriate action.

  Rename NFSD4_COURTESY_CLIENT and NFSD4_DESTROY_COURTESY_CLIENT
  with NFSD4_CLIENT_COURTESY and NFSD4_CLIENT_DESTROY_COURTESY.

v12:

  Remove unnecessary comment in nfs4_get_client_reaplist.

  Replace nfs4_client->cl_cs_client boolean with
  NFSD4_CLIENT_COURTESY_CLNT flag.

  Remove the courtesy_clnt argument from find_client_in_id_table and
  find_clp_in_name_tree. Callers use NFSD4_CLIENT_COURTESY_CLNT to
  determine whether it's a courtesy client and take appropriate action.

v13:

  Merge with 5.17-rc3.

  Clean up Documentation/filesystems/locking.rst: replace i_lock
  with flc_lock, and update APIs that use flc_lock.

  Rename lm_lock_conflict() to lm_lock_expired().

  Remove the comment for the lm_lock_expired API in
  lock_manager_operations; the same information is in the patch
  description.

  Update the commit messages of 4/4.

  Add a comment for NFSD4_CLIENT_COURTESY_CLNT.

  Add nfsd4_discard_courtesy_clnt() to eliminate duplicate code for
  discarding a courtesy client (setting NFSD4_DESTROY_COURTESY_CLIENT).

v14:

. merge with Chuck's public for-next branch.

. remove courtesy_client_expiry, use client's last renew time.

. simplify comment of nfs4_check_access_deny_bmap.

. add comment about race condition in nfs4_get_client_reaplist.

. add list_del when walking cslist in nfs4_get_client_reaplist.

. remove duplicate INIT_LIST_HEAD(&reaplist) from nfs4_laundromat

. Modify find_confirmed_client and find_confirmed_client_by_name
  to detect courtesy client and destroy it.

. refactor lookup_clientid to use find_client_in_id_table
  directly instead of find_confirmed_client.

. refactor nfsd4_setclientid to call find_clp_in_name_tree
  directly instead of find_confirmed_client_by_name.

. remove comment of NFSD4_CLIENT_COURTESY.

. replace NFSD4_CLIENT_DESTROY_COURTESY with NFSD4_CLIENT_EXPIRED.

. replace NFSD4_CLIENT_COURTESY_CLNT with NFSD4_CLIENT_RECONNECTED.

v15:

. add helper locks_has_blockers_locked in fs.h to check for
  lock blockers

. rename nfs4_conflict_clients to nfs4_resolve_deny_conflicts_locked

. update nfs4_upgrade_open() to handle courtesy clients.

. add helper nfs4_check_and_expire_courtesy_client and
  nfs4_is_courtesy_client_expired to deduplicate some code.

. update nfs4_anylock_blocker:
   . replace list_for_each_entry_safe with list_for_each_entry
   . break nfs4_anylock_blocker into 2 smaller functions.

. update nfs4_get_client_reaplist:
   . remove unnecessary comments
   . acquire cl_cs_lock before setting NFSD4_CLIENT_COURTESY flag

. update client_info_show to show 'time since last renew: 00:00:38'
  instead of 'seconds from last renew: 38'.

v16:

. update client_info_show to display 'status' as
  'confirmed/unconfirmed/courtesy'

. replace the helper locks_has_blockers_locked in fs.h from v15 with a
  new locks_owner_has_blockers call in fs/locks.c

. update nfs4_lockowner_has_blockers to use locks_owner_has_blockers

. move nfs4_check_and_expire_courtesy_client from 5/11 to 4/11

. remove unnecessary check for NULL clp in find_in_sessionid_hashtbl

. fix typos in commit messages

v17:

. replace flags used for courtesy client with enum courtesy_client_state

. add state table in nfsd/state.h

. make nfsd4_expire_courtesy_clnt, nfsd4_discard_courtesy_clnt and
  nfsd4_courtesy_clnt_expired static inline.

. update nfsd_breaker_owns_lease to use dl->dl_stid.sc_client directly

. fix kernel test robot warning when CONFIG_FILE_LOCKING is not defined.

v18:

. modify 0005-NFSD-Update-nfs4_get_vfs_file-to-handle-courtesy-cli.patch to:

    . remove nfs4_check_access_deny_bmap, fold this functionality
      into nfs4_resolve_deny_conflicts_locked by making use of
      bmap_to_share_mode.

    . move nfs4_resolve_deny_conflicts_locked into nfs4_file_get_access
      and nfs4_file_check_deny. 

v19:

. modify 0002-NFSD-Add-courtesy-client-state-macro-and-spinlock-to.patch to:

    . add NFSD4_CLIENT_ACTIVE

    . redo the courtesy client state table

. modify 0007-NFSD-Update-find_in_sessionid_hashtbl-to-handle-cour.patch and
  0008-NFSD-Update-find_client_in_id_table-to-handle-courte.patch to:

    . set cl_cs_client_state to NFSD4_CLIENT_ACTIVE when reactivating a
      courtesy client




* [PATCH RFC v19 01/11] fs/lock: add helper locks_owner_has_blockers to check for blockers
From: Dai Ngo @ 2022-03-31 16:01 UTC (permalink / raw)
  To: chuck.lever, bfields; +Cc: jlayton, viro, linux-nfs, linux-fsdevel

Add a helper, locks_owner_has_blockers, to check if there are any
blockers for a given lockowner.
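
For example, a lock manager can use this helper to avoid expiring a
lockowner that still has waiters. This is an illustrative sketch only;
the actual caller, nfs4_lockowner_has_blockers, is added later in this
series:

        /*
         * Illustrative sketch: do not expire an owner whose locks
         * still have blocked requests queued against them.
         */
        struct file_lock_context *ctx = inode->i_flctx;

        if (ctx && locks_owner_has_blockers(ctx, owner))
                return false;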

Signed-off-by: Dai Ngo <dai.ngo@oracle.com>
---
 fs/locks.c         | 28 ++++++++++++++++++++++++++++
 include/linux/fs.h |  7 +++++++
 2 files changed, 35 insertions(+)

diff --git a/fs/locks.c b/fs/locks.c
index 050acf8b5110..53864eb99dc5 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -300,6 +300,34 @@ void locks_release_private(struct file_lock *fl)
 }
 EXPORT_SYMBOL_GPL(locks_release_private);
 
+/**
+ * locks_owner_has_blockers - Check for blocking lock requests
+ * @flctx: file lock context
+ * @owner: lock owner
+ *
+ * Return values:
+ *   %true: @owner has at least one blocker
+ *   %false: @owner has no blockers
+ */
+bool locks_owner_has_blockers(struct file_lock_context *flctx,
+		fl_owner_t owner)
+{
+	struct file_lock *fl;
+
+	spin_lock(&flctx->flc_lock);
+	list_for_each_entry(fl, &flctx->flc_posix, fl_list) {
+		if (fl->fl_owner != owner)
+			continue;
+		if (!list_empty(&fl->fl_blocked_requests)) {
+			spin_unlock(&flctx->flc_lock);
+			return true;
+		}
+	}
+	spin_unlock(&flctx->flc_lock);
+	return false;
+}
+EXPORT_SYMBOL_GPL(locks_owner_has_blockers);
+
 /* Free a lock which is not in use. */
 void locks_free_lock(struct file_lock *fl)
 {
diff --git a/include/linux/fs.h b/include/linux/fs.h
index 831b20430d6e..2057a9df790f 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -1200,6 +1200,8 @@ extern void lease_unregister_notifier(struct notifier_block *);
 struct files_struct;
 extern void show_fd_locks(struct seq_file *f,
 			 struct file *filp, struct files_struct *files);
+extern bool locks_owner_has_blockers(struct file_lock_context *flctx,
+			fl_owner_t owner);
 #else /* !CONFIG_FILE_LOCKING */
 static inline int fcntl_getlk(struct file *file, unsigned int cmd,
 			      struct flock __user *user)
@@ -1335,6 +1337,11 @@ static inline int lease_modify(struct file_lock *fl, int arg,
 struct files_struct;
 static inline void show_fd_locks(struct seq_file *f,
 			struct file *filp, struct files_struct *files) {}
+static inline bool locks_owner_has_blockers(struct file_lock_context *flctx,
+			fl_owner_t owner)
+{
+	return false;
+}
 #endif /* !CONFIG_FILE_LOCKING */
 
 static inline struct inode *file_inode(const struct file *f)
-- 
2.9.5



* [PATCH RFC v19 02/11] NFSD: Add courtesy client state, macro and spinlock to support courteous server
From: Dai Ngo @ 2022-03-31 16:02 UTC (permalink / raw)
  To: chuck.lever, bfields; +Cc: jlayton, viro, linux-nfs, linux-fsdevel

Update nfs4_client to add:
 . cl_cs_client_state: courtesy client state
 . cl_cs_lock: spinlock to synchronize access to cl_cs_client_state
 . cl_cs_list: list used by laundromat to process courtesy clients

Modify alloc_client to initialize these fields.

Signed-off-by: Dai Ngo <dai.ngo@oracle.com>
---
 fs/nfsd/nfs4state.c |  2 ++
 fs/nfsd/nfsd.h      |  1 +
 fs/nfsd/state.h     | 33 +++++++++++++++++++++++++++++++++
 3 files changed, 36 insertions(+)

diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index 234e852fcdfa..a65d59510681 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -2009,12 +2009,14 @@ static struct nfs4_client *alloc_client(struct xdr_netobj name)
 	INIT_LIST_HEAD(&clp->cl_delegations);
 	INIT_LIST_HEAD(&clp->cl_lru);
 	INIT_LIST_HEAD(&clp->cl_revoked);
+	INIT_LIST_HEAD(&clp->cl_cs_list);
 #ifdef CONFIG_NFSD_PNFS
 	INIT_LIST_HEAD(&clp->cl_lo_states);
 #endif
 	INIT_LIST_HEAD(&clp->async_copies);
 	spin_lock_init(&clp->async_lock);
 	spin_lock_init(&clp->cl_lock);
+	spin_lock_init(&clp->cl_cs_lock);
 	rpc_init_wait_queue(&clp->cl_cb_waitq, "Backchannel slot table");
 	return clp;
 err_no_hashtbl:
diff --git a/fs/nfsd/nfsd.h b/fs/nfsd/nfsd.h
index 4fc1fd639527..23996c6ca75e 100644
--- a/fs/nfsd/nfsd.h
+++ b/fs/nfsd/nfsd.h
@@ -336,6 +336,7 @@ void		nfsd_lockd_shutdown(void);
 #define COMPOUND_ERR_SLACK_SPACE	16     /* OP_SETATTR */
 
 #define NFSD_LAUNDROMAT_MINTIMEOUT      1   /* seconds */
+#define	NFSD_COURTESY_CLIENT_TIMEOUT	(24 * 60 * 60)	/* seconds */
 
 /*
  * The following attributes are currently not supported by the NFSv4 server:
diff --git a/fs/nfsd/state.h b/fs/nfsd/state.h
index 95457cfd37fc..7f78da5d1408 100644
--- a/fs/nfsd/state.h
+++ b/fs/nfsd/state.h
@@ -283,6 +283,35 @@ struct nfsd4_sessionid {
 #define HEXDIR_LEN     33 /* hex version of 16 byte md5 of cl_name plus '\0' */
 
 /*
+ *       State                Meaning                  Where set
+ * --------------------------------------------------------------------------
+ * | CLIENT_ACTIVE     | Confirmed, active    | Default                     |
+ * |------------------------------------------------------------------------|
+ * | CLIENT_COURTESY   | Courtesy state.      | nfs4_get_client_reaplist    |
+ * |                   | Lease/lock/share     |                             |
+ * |                   | reservation conflict |                             |
+ * |                   | can cause Courtesy   |                             |
+ * |                   | client to be expired |                             |
+ * |------------------------------------------------------------------------|
+ * | CLIENT_EXPIRED    | Courtesy client to be| nfs4_laundromat             |
+ * |                   | expired by Laundromat| nfsd4_lm_lock_expired       |
+ * |                   | due to conflict      | nfsd4_discard_courtesy_clnt |
+ * |                   |                      | nfsd4_expire_courtesy_clnt  |
+ * |------------------------------------------------------------------------|
+ * | CLIENT_RECONNECTED| Courtesy client      | nfsd4_courtesy_clnt_expired |
+ * |                   | reconnected,         |                             |
+ * |                   | becoming active      |                             |
+ * --------------------------------------------------------------------------
+ */
+
+enum courtesy_client_state {
+	NFSD4_CLIENT_ACTIVE = 0,
+	NFSD4_CLIENT_COURTESY,
+	NFSD4_CLIENT_EXPIRED,
+	NFSD4_CLIENT_RECONNECTED,
+};
+
+/*
  * struct nfs4_client - one per client.  Clientids live here.
  *
  * The initial object created by an NFS client using SETCLIENTID (for NFSv4.0)
@@ -385,6 +414,10 @@ struct nfs4_client {
 	struct list_head	async_copies;	/* list of async copies */
 	spinlock_t		async_lock;	/* lock for async copies */
 	atomic_t		cl_cb_inflight;	/* Outstanding callbacks */
+
+	enum courtesy_client_state	cl_cs_client_state;
+	spinlock_t		cl_cs_lock;
+	struct list_head	cl_cs_list;
 };
 
 /* struct nfs4_client_reset
-- 
2.9.5



* [PATCH RFC v19 03/11] NFSD: Add lm_lock_expired call out
From: Dai Ngo @ 2022-03-31 16:02 UTC (permalink / raw)
  To: chuck.lever, bfields; +Cc: jlayton, viro, linux-nfs, linux-fsdevel

Add the callout function nfsd4_lm_lock_expired for lm_lock_expired.
If a lock request conflicts with a courtesy client, expire the
courtesy client and report no conflict to the caller.
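
The fs/locks.c side of the callback is not part of this patch; a
minimal sketch of how the generic code might consult it on finding a
conflicting lock (a hypothetical call site in a
posix_locks_conflict-style loop) is:

        /*
         * Hypothetical call site: if the lock manager reports that
         * the conflicting lock's owner has expired, treat the request
         * as having no conflict.
         */
        if (conflict->fl_lmops && conflict->fl_lmops->lm_lock_expired &&
            conflict->fl_lmops->lm_lock_expired(conflict))
                continue;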

Signed-off-by: Dai Ngo <dai.ngo@oracle.com>
---
 fs/nfsd/nfs4state.c | 22 ++++++++++++++++++++++
 fs/nfsd/state.h     | 14 ++++++++++++++
 2 files changed, 36 insertions(+)

diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index a65d59510681..80772662236b 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -6578,10 +6578,32 @@ nfsd4_lm_notify(struct file_lock *fl)
 	}
 }
 
+/**
+ * nfsd4_lm_lock_expired - check if lock conflict can be resolved.
+ *
+ * @fl: pointer to file_lock with a potential conflict
+ * Return values:
+ *   %false: real conflict, lock conflict can not be resolved.
+ *   %true: no conflict, lock conflict was resolved.
+ *
+ * Note that this function is called while the flc_lock is held.
+ */
+static bool
+nfsd4_lm_lock_expired(struct file_lock *fl)
+{
+	struct nfs4_lockowner *lo;
+
+	if (!fl)
+		return false;
+	lo = (struct nfs4_lockowner *)fl->fl_owner;
+	return nfsd4_expire_courtesy_clnt(lo->lo_owner.so_client);
+}
+
 static const struct lock_manager_operations nfsd_posix_mng_ops  = {
 	.lm_notify = nfsd4_lm_notify,
 	.lm_get_owner = nfsd4_lm_get_owner,
 	.lm_put_owner = nfsd4_lm_put_owner,
+	.lm_lock_expired = nfsd4_lm_lock_expired,
 };
 
 static inline void
diff --git a/fs/nfsd/state.h b/fs/nfsd/state.h
index 7f78da5d1408..8b81493ee48a 100644
--- a/fs/nfsd/state.h
+++ b/fs/nfsd/state.h
@@ -735,4 +735,18 @@ extern void nfsd4_client_record_remove(struct nfs4_client *clp);
 extern int nfsd4_client_record_check(struct nfs4_client *clp);
 extern void nfsd4_record_grace_done(struct nfsd_net *nn);
 
+static inline bool
+nfsd4_expire_courtesy_clnt(struct nfs4_client *clp)
+{
+	bool rc = false;
+
+	spin_lock(&clp->cl_cs_lock);
+	if (clp->cl_cs_client_state == NFSD4_CLIENT_COURTESY)
+		clp->cl_cs_client_state = NFSD4_CLIENT_EXPIRED;
+	if (clp->cl_cs_client_state == NFSD4_CLIENT_EXPIRED)
+		rc = true;
+	spin_unlock(&clp->cl_cs_lock);
+	return rc;
+}
+
 #endif   /* NFSD4_STATE_H */
-- 
2.9.5



* [PATCH RFC v19 04/11] NFSD: Update nfsd_breaker_owns_lease() to handle courtesy clients
From: Dai Ngo @ 2022-03-31 16:02 UTC (permalink / raw)
  To: chuck.lever, bfields; +Cc: jlayton, viro, linux-nfs, linux-fsdevel

Update nfsd_breaker_owns_lease() to handle delegation conflicts with
courtesy clients by calling nfsd4_expire_courtesy_clnt.

Signed-off-by: Dai Ngo <dai.ngo@oracle.com>
---
 fs/nfsd/nfs4state.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index 80772662236b..f20c75890594 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -4727,6 +4727,9 @@ static bool nfsd_breaker_owns_lease(struct file_lock *fl)
 	struct svc_rqst *rqst;
 	struct nfs4_client *clp;
 
+	if (nfsd4_expire_courtesy_clnt(dl->dl_stid.sc_client))
+		return true;
+
 	if (!i_am_nfsd())
 		return false;
 	rqst = kthread_data(current);
-- 
2.9.5



* [PATCH RFC v19 05/11] NFSD: Update nfs4_get_vfs_file() to handle courtesy client
From: Dai Ngo @ 2022-03-31 16:02 UTC (permalink / raw)
  To: chuck.lever, bfields; +Cc: jlayton, viro, linux-nfs, linux-fsdevel

Update nfs4_get_vfs_file and nfs4_upgrade_open to handle share
reservation conflicts with courtesy clients.

When we have a deny/access conflict we walk the fi_stateids of the
file in question, looking for open stateids, and check the deny/access
of each stateid against the one from the open request. If there is a
conflict then we check whether the client that owns that stateid is a
courtesy client. If it is, we set the client state to CLIENT_EXPIRED
and allow the open request to continue. We have to scan all the
stateids of the file since the conflict can be caused by multiple
open stateids.

Clients in CLIENT_EXPIRED state are then expired by the laundromat.
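
As a concrete example of the bitmap check, suppose an existing open
stateid denies WRITE and a new OPEN requests WRITE access (share mode
values per the existing NFSv4 definitions):

        access = NFS4_SHARE_ACCESS_WRITE;       /* 0x2 */
        /* bmap_to_share_mode(st->st_deny_bmap) includes
         * NFS4_SHARE_DENY_WRITE, also 0x2 */

        /*
         * (access & bmap_to_share_mode(bmap)) is non-zero, so this is
         * a conflict; the stateid's owner is then checked with
         * nfsd4_expire_courtesy_clnt() before the OPEN is denied.
         */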

Signed-off-by: Dai Ngo <dai.ngo@oracle.com>
---
 fs/nfsd/nfs4state.c | 85 +++++++++++++++++++++++++++++++++++++++++++++--------
 1 file changed, 72 insertions(+), 13 deletions(-)

diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index f20c75890594..fe8969ba94b3 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -701,9 +701,56 @@ __nfs4_file_get_access(struct nfs4_file *fp, u32 access)
 		atomic_inc(&fp->fi_access[O_RDONLY]);
 }
 
+/*
+ * Check if courtesy clients have conflicting access and resolve it if possible
+ *
+ * access:  is op_share_access if share_access is true.
+ *	    Check if access mode, op_share_access, would conflict with
+ *	    the current deny mode of the file 'fp'.
+ * access:  is op_share_deny if share_access is false.
+ *	    Check if the deny mode, op_share_deny, would conflict with
+ *	    current access of the file 'fp'.
+ * stp:     skip checking this entry.
+ * new_stp: normal open, not open upgrade.
+ *
+ * Function returns:
+ *	false - access/deny mode conflict with normal client.
+ *	true  - no conflict or conflict with courtesy client(s) is resolved.
+ */
+static bool
+nfs4_resolve_deny_conflicts_locked(struct nfs4_file *fp, bool new_stp,
+		struct nfs4_ol_stateid *stp, u32 access, bool share_access)
+{
+	struct nfs4_ol_stateid *st;
+	struct nfs4_client *clp;
+	bool conflict = true;
+	unsigned char bmap;
+
+	lockdep_assert_held(&fp->fi_lock);
+	list_for_each_entry(st, &fp->fi_stateids, st_perfile) {
+		/* ignore lock stateid */
+		if (st->st_openstp)
+			continue;
+		if (st == stp && new_stp)
+			continue;
+		/* check file access against deny mode or vice versa */
+		bmap = share_access ? st->st_deny_bmap : st->st_access_bmap;
+		if (!(access & bmap_to_share_mode(bmap)))
+			continue;
+		clp = st->st_stid.sc_client;
+		if (nfsd4_expire_courtesy_clnt(clp))
+			continue;
+		conflict = false;
+		break;
+	}
+	return conflict;
+}
+
 static __be32
-nfs4_file_get_access(struct nfs4_file *fp, u32 access)
+nfs4_file_get_access(struct nfs4_file *fp, u32 access,
+		struct nfs4_ol_stateid *stp, bool new_stp)
 {
+
 	lockdep_assert_held(&fp->fi_lock);
 
 	/* Does this access mode make sense? */
@@ -711,15 +758,21 @@ nfs4_file_get_access(struct nfs4_file *fp, u32 access)
 		return nfserr_inval;
 
 	/* Does it conflict with a deny mode already set? */
-	if ((access & fp->fi_share_deny) != 0)
-		return nfserr_share_denied;
+	if ((access & fp->fi_share_deny) != 0) {
+		if (!nfs4_resolve_deny_conflicts_locked(fp, new_stp,
+				stp, access, true))
+			return nfserr_share_denied;
+	}
 
 	__nfs4_file_get_access(fp, access);
 	return nfs_ok;
 }
 
-static __be32 nfs4_file_check_deny(struct nfs4_file *fp, u32 deny)
+static __be32 nfs4_file_check_deny(struct nfs4_file *fp, u32 deny,
+		struct nfs4_ol_stateid *stp, bool new_stp)
 {
+	__be32 rc = nfs_ok;
+
 	/* Common case is that there is no deny mode. */
 	if (deny) {
 		/* Does this deny mode make sense? */
@@ -728,13 +781,19 @@ static __be32 nfs4_file_check_deny(struct nfs4_file *fp, u32 deny)
 
 		if ((deny & NFS4_SHARE_DENY_READ) &&
 		    atomic_read(&fp->fi_access[O_RDONLY]))
-			return nfserr_share_denied;
+			rc = nfserr_share_denied;
 
 		if ((deny & NFS4_SHARE_DENY_WRITE) &&
 		    atomic_read(&fp->fi_access[O_WRONLY]))
-			return nfserr_share_denied;
+			rc = nfserr_share_denied;
+
+		if (rc == nfserr_share_denied) {
+			if (nfs4_resolve_deny_conflicts_locked(fp, new_stp,
+					stp, deny, false))
+				rc = nfs_ok;
+		}
 	}
-	return nfs_ok;
+	return rc;
 }
 
 static void __nfs4_file_put_access(struct nfs4_file *fp, int oflag)
@@ -4952,7 +5011,7 @@ nfsd4_truncate(struct svc_rqst *rqstp, struct svc_fh *fh,
 
 static __be32 nfs4_get_vfs_file(struct svc_rqst *rqstp, struct nfs4_file *fp,
 		struct svc_fh *cur_fh, struct nfs4_ol_stateid *stp,
-		struct nfsd4_open *open)
+		struct nfsd4_open *open, bool new_stp)
 {
 	struct nfsd_file *nf = NULL;
 	__be32 status;
@@ -4966,14 +5025,14 @@ static __be32 nfs4_get_vfs_file(struct svc_rqst *rqstp, struct nfs4_file *fp,
 	 * Are we trying to set a deny mode that would conflict with
 	 * current access?
 	 */
-	status = nfs4_file_check_deny(fp, open->op_share_deny);
+	status = nfs4_file_check_deny(fp, open->op_share_deny, stp, new_stp);
 	if (status != nfs_ok) {
 		spin_unlock(&fp->fi_lock);
 		goto out;
 	}
 
 	/* set access to the file */
-	status = nfs4_file_get_access(fp, open->op_share_access);
+	status = nfs4_file_get_access(fp, open->op_share_access, stp, new_stp);
 	if (status != nfs_ok) {
 		spin_unlock(&fp->fi_lock);
 		goto out;
@@ -5027,11 +5086,11 @@ nfs4_upgrade_open(struct svc_rqst *rqstp, struct nfs4_file *fp, struct svc_fh *c
 	unsigned char old_deny_bmap = stp->st_deny_bmap;
 
 	if (!test_access(open->op_share_access, stp))
-		return nfs4_get_vfs_file(rqstp, fp, cur_fh, stp, open);
+		return nfs4_get_vfs_file(rqstp, fp, cur_fh, stp, open, false);
 
 	/* test and set deny mode */
 	spin_lock(&fp->fi_lock);
-	status = nfs4_file_check_deny(fp, open->op_share_deny);
+	status = nfs4_file_check_deny(fp, open->op_share_deny, stp, false);
 	if (status == nfs_ok) {
 		set_deny(open->op_share_deny, stp);
 		fp->fi_share_deny |=
@@ -5376,7 +5435,7 @@ nfsd4_process_open2(struct svc_rqst *rqstp, struct svc_fh *current_fh, struct nf
 			goto out;
 		}
 	} else {
-		status = nfs4_get_vfs_file(rqstp, fp, current_fh, stp, open);
+		status = nfs4_get_vfs_file(rqstp, fp, current_fh, stp, open, true);
 		if (status) {
 			stp->st_stid.sc_type = NFS4_CLOSED_STID;
 			release_open_stateid(stp);
-- 
2.9.5



* [PATCH RFC v19 06/11] NFSD: Update find_clp_in_name_tree() to handle courtesy client
From: Dai Ngo @ 2022-03-31 16:02 UTC (permalink / raw)
  To: chuck.lever, bfields; +Cc: jlayton, viro, linux-nfs, linux-fsdevel

Update find_clp_in_name_tree to check for and expire a courtesy
client.

Update find_confirmed_client_by_name to discard a courtesy client by
setting its state to CLIENT_EXPIRED.

Update nfsd4_setclientid to expire a client in CLIENT_EXPIRED state,
to prevent multiple confirmed clients with the same name on the
conf_name_tree.

Signed-off-by: Dai Ngo <dai.ngo@oracle.com>
---
 fs/nfsd/nfs4state.c | 27 ++++++++++++++++++++++++---
 fs/nfsd/state.h     | 22 ++++++++++++++++++++++
 2 files changed, 46 insertions(+), 3 deletions(-)

diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index fe8969ba94b3..eadce5d19473 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -2893,8 +2893,11 @@ find_clp_in_name_tree(struct xdr_netobj *name, struct rb_root *root)
 			node = node->rb_left;
 		else if (cmp < 0)
 			node = node->rb_right;
-		else
+		else {
+			if (nfsd4_courtesy_clnt_expired(clp))
+				return NULL;
 			return clp;
+		}
 	}
 	return NULL;
 }
@@ -2973,8 +2976,15 @@ static bool clp_used_exchangeid(struct nfs4_client *clp)
 static struct nfs4_client *
 find_confirmed_client_by_name(struct xdr_netobj *name, struct nfsd_net *nn)
 {
+	struct nfs4_client *clp;
+
 	lockdep_assert_held(&nn->client_lock);
-	return find_clp_in_name_tree(name, &nn->conf_name_tree);
+	clp = find_clp_in_name_tree(name, &nn->conf_name_tree);
+	if (clp && clp->cl_cs_client_state == NFSD4_CLIENT_RECONNECTED) {
+		nfsd4_discard_courtesy_clnt(clp);
+		clp = NULL;
+	}
+	return clp;
 }
 
 static struct nfs4_client *
@@ -4091,12 +4101,19 @@ nfsd4_setclientid(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
 	struct nfs4_client	*unconf = NULL;
 	__be32 			status;
 	struct nfsd_net		*nn = net_generic(SVC_NET(rqstp), nfsd_net_id);
+	struct nfs4_client	*cclient = NULL;
 
 	new = create_client(clname, rqstp, &clverifier);
 	if (new == NULL)
 		return nfserr_jukebox;
 	spin_lock(&nn->client_lock);
-	conf = find_confirmed_client_by_name(&clname, nn);
+	/* find confirmed client by name */
+	conf = find_clp_in_name_tree(&clname, &nn->conf_name_tree);
+	if (conf && conf->cl_cs_client_state == NFSD4_CLIENT_RECONNECTED) {
+		cclient = conf;
+		conf = NULL;
+	}
+
 	if (conf && client_has_state(conf)) {
 		status = nfserr_clid_inuse;
 		if (clp_used_exchangeid(conf))
@@ -4127,7 +4144,11 @@ nfsd4_setclientid(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
 	new = NULL;
 	status = nfs_ok;
 out:
+	if (cclient)
+		unhash_client_locked(cclient);
 	spin_unlock(&nn->client_lock);
+	if (cclient)
+		expire_client(cclient);
 	if (new)
 		free_client(new);
 	if (unconf) {
diff --git a/fs/nfsd/state.h b/fs/nfsd/state.h
index 8b81493ee48a..a2e2ac1a945a 100644
--- a/fs/nfsd/state.h
+++ b/fs/nfsd/state.h
@@ -735,6 +735,7 @@ extern void nfsd4_client_record_remove(struct nfs4_client *clp);
 extern int nfsd4_client_record_check(struct nfs4_client *clp);
 extern void nfsd4_record_grace_done(struct nfsd_net *nn);
 
+/* courteous server */
 static inline bool
 nfsd4_expire_courtesy_clnt(struct nfs4_client *clp)
 {
@@ -749,4 +750,25 @@ nfsd4_expire_courtesy_clnt(struct nfs4_client *clp)
 	return rc;
 }
 
+static inline void
+nfsd4_discard_courtesy_clnt(struct nfs4_client *clp)
+{
+	spin_lock(&clp->cl_cs_lock);
+	clp->cl_cs_client_state = NFSD4_CLIENT_EXPIRED;
+	spin_unlock(&clp->cl_cs_lock);
+}
+
+static inline bool
+nfsd4_courtesy_clnt_expired(struct nfs4_client *clp)
+{
+	bool rc = false;
+
+	spin_lock(&clp->cl_cs_lock);
+	if (clp->cl_cs_client_state == NFSD4_CLIENT_EXPIRED)
+		rc = true;
+	if (clp->cl_cs_client_state == NFSD4_CLIENT_COURTESY)
+		clp->cl_cs_client_state = NFSD4_CLIENT_RECONNECTED;
+	spin_unlock(&clp->cl_cs_lock);
+	return rc;
+}
 #endif   /* NFSD4_STATE_H */
-- 
2.9.5



* [PATCH RFC v19 07/11] NFSD: Update find_in_sessionid_hashtbl() to handle courtesy client
From: Dai Ngo @ 2022-03-31 16:02 UTC (permalink / raw)
  To: chuck.lever, bfields; +Cc: jlayton, viro, linux-nfs, linux-fsdevel

Update find_in_sessionid_hashtbl to:
 . skip clients in CLIENT_EXPIRED state (discarded courtesy clients).
 . if a courtesy client was found then set CLIENT_RECONNECTED so the
   caller can take appropriate action.

Update nfsd4_sequence and nfsd4_bind_conn_to_session to create the
client record for a courtesy client in CLIENT_RECONNECTED state.

Update nfsd4_destroy_session to discard a courtesy client in
CLIENT_RECONNECTED state.
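
For a v4.1 courtesy client, the reconnect flow then looks roughly like
this (a sketch of the code paths above):

        SEQUENCE from a courtesy client
          -> find_in_sessionid_hashtbl()
               nfsd4_courtesy_clnt_expired(): COURTESY -> RECONNECTED
          -> nfsd4_sequence() processes the request
          -> nfsd4_reactivate_courtesy_client():
               RECONNECTED -> ACTIVE on success (client record created)
               RECONNECTED -> EXPIRED on failure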

Signed-off-by: Dai Ngo <dai.ngo@oracle.com>
---
 fs/nfsd/nfs4state.c | 41 +++++++++++++++++++++++++++++++++++++++--
 1 file changed, 39 insertions(+), 2 deletions(-)

diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index eadce5d19473..a3004654d096 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -701,6 +701,22 @@ __nfs4_file_get_access(struct nfs4_file *fp, u32 access)
 		atomic_inc(&fp->fi_access[O_RDONLY]);
 }
 
+static void
+nfsd4_reactivate_courtesy_client(struct nfs4_client *clp, __be32 status)
+{
+	spin_lock(&clp->cl_cs_lock);
+	if (clp->cl_cs_client_state == NFSD4_CLIENT_RECONNECTED) {
+		if (status == nfs_ok) {
+			clp->cl_cs_client_state = NFSD4_CLIENT_ACTIVE;
+			spin_unlock(&clp->cl_cs_lock);
+			nfsd4_client_record_create(clp);
+			return;
+		}
+		clp->cl_cs_client_state = NFSD4_CLIENT_EXPIRED;
+	}
+	spin_unlock(&clp->cl_cs_lock);
+}
+
 /*
  * Check if courtesy clients have conflicting access and resolve it if possible
  *
@@ -1994,13 +2010,22 @@ find_in_sessionid_hashtbl(struct nfs4_sessionid *sessionid, struct net *net,
 {
 	struct nfsd4_session *session;
 	__be32 status = nfserr_badsession;
+	struct nfs4_client *clp;
 
 	session = __find_in_sessionid_hashtbl(sessionid, net);
 	if (!session)
 		goto out;
+	clp = session->se_client;
+	if (nfsd4_courtesy_clnt_expired(clp)) {
+		session = NULL;
+		goto out;
+	}
 	status = nfsd4_get_session_locked(session);
-	if (status)
+	if (status) {
 		session = NULL;
+		if (clp->cl_cs_client_state == NFSD4_CLIENT_RECONNECTED)
+			nfsd4_discard_courtesy_clnt(clp);
+	}
 out:
 	*ret = status;
 	return session;
@@ -3702,6 +3727,7 @@ __be32 nfsd4_bind_conn_to_session(struct svc_rqst *rqstp,
 	struct nfsd4_session *session;
 	struct net *net = SVC_NET(rqstp);
 	struct nfsd_net *nn = net_generic(net, nfsd_net_id);
+	struct nfs4_client *clp;
 
 	if (!nfsd4_last_compound_op(rqstp))
 		return nfserr_not_only_op;
@@ -3734,6 +3760,8 @@ __be32 nfsd4_bind_conn_to_session(struct svc_rqst *rqstp,
 	nfsd4_init_conn(rqstp, conn, session);
 	status = nfs_ok;
 out:
+	clp = session->se_client;
+	nfsd4_reactivate_courtesy_client(clp, status);
 	nfsd4_put_session(session);
 out_no_session:
 	return status;
@@ -3756,6 +3784,7 @@ nfsd4_destroy_session(struct svc_rqst *r, struct nfsd4_compound_state *cstate,
 	int ref_held_by_me = 0;
 	struct net *net = SVC_NET(r);
 	struct nfsd_net *nn = net_generic(net, nfsd_net_id);
+	struct nfs4_client *clp;
 
 	status = nfserr_not_only_op;
 	if (nfsd4_compound_in_session(cstate, sessionid)) {
@@ -3768,6 +3797,12 @@ nfsd4_destroy_session(struct svc_rqst *r, struct nfsd4_compound_state *cstate,
 	ses = find_in_sessionid_hashtbl(sessionid, net, &status);
 	if (!ses)
 		goto out_client_lock;
+	clp = ses->se_client;
+	if (clp->cl_cs_client_state == NFSD4_CLIENT_RECONNECTED) {
+		status = nfserr_badsession;
+		nfsd4_discard_courtesy_clnt(clp);
+		goto out_put_session;
+	}
 	status = nfserr_wrong_cred;
 	if (!nfsd4_mach_creds_match(ses->se_client, r))
 		goto out_put_session;
@@ -3872,7 +3907,7 @@ nfsd4_sequence(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
 	struct nfsd4_compoundres *resp = rqstp->rq_resp;
 	struct xdr_stream *xdr = resp->xdr;
 	struct nfsd4_session *session;
-	struct nfs4_client *clp;
+	struct nfs4_client *clp = NULL;
 	struct nfsd4_slot *slot;
 	struct nfsd4_conn *conn;
 	__be32 status;
@@ -3982,6 +4017,8 @@ nfsd4_sequence(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
 	if (conn)
 		free_conn(conn);
 	spin_unlock(&nn->client_lock);
+	if (clp)
+		nfsd4_reactivate_courtesy_client(clp, status);
 	return status;
 out_put_session:
 	nfsd4_put_session_locked(session);
-- 
2.9.5



* [PATCH RFC v19 08/11] NFSD: Update find_client_in_id_table() to handle courtesy client
From: Dai Ngo @ 2022-03-31 16:02 UTC (permalink / raw)
  To: chuck.lever, bfields; +Cc: jlayton, viro, linux-nfs, linux-fsdevel

Update find_client_in_id_table to:
 . skip clients in CLIENT_EXPIRED state (discarded courtesy clients).
 . if a courtesy client was found then set CLIENT_RECONNECTED state
   so the caller can take appropriate action.

Update find_confirmed_client to discard the courtesy client.
Update lookup_clientid to call find_client_in_id_table directly.
Update set_client to create the client record for a courtesy client.
Update find_cpntf_state to discard the courtesy client.

Signed-off-by: Dai Ngo <dai.ngo@oracle.com>
---
 fs/nfsd/nfs4state.c | 21 +++++++++++++++++++--
 1 file changed, 19 insertions(+), 2 deletions(-)

diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index a3004654d096..b33ad5bce721 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -2968,6 +2968,8 @@ find_client_in_id_table(struct list_head *tbl, clientid_t *clid, bool sessions)
 		if (same_clid(&clp->cl_clientid, clid)) {
 			if ((bool)clp->cl_minorversion != sessions)
 				return NULL;
+			if (nfsd4_courtesy_clnt_expired(clp))
+				continue;
 			renew_client_locked(clp);
 			return clp;
 		}
@@ -2979,9 +2981,15 @@ static struct nfs4_client *
 find_confirmed_client(clientid_t *clid, bool sessions, struct nfsd_net *nn)
 {
 	struct list_head *tbl = nn->conf_id_hashtbl;
+	struct nfs4_client *clp;
 
 	lockdep_assert_held(&nn->client_lock);
-	return find_client_in_id_table(tbl, clid, sessions);
+	clp = find_client_in_id_table(tbl, clid, sessions);
+	if (clp && clp->cl_cs_client_state == NFSD4_CLIENT_RECONNECTED) {
+		nfsd4_discard_courtesy_clnt(clp);
+		clp = NULL;
+	}
+	return clp;
 }
 
 static struct nfs4_client *
@@ -4888,9 +4896,10 @@ static struct nfs4_client *lookup_clientid(clientid_t *clid, bool sessions,
 						struct nfsd_net *nn)
 {
 	struct nfs4_client *found;
+	struct list_head *tbl = nn->conf_id_hashtbl;
 
 	spin_lock(&nn->client_lock);
-	found = find_confirmed_client(clid, sessions, nn);
+	found = find_client_in_id_table(tbl, clid, sessions);
 	if (found)
 		atomic_inc(&found->cl_rpc_users);
 	spin_unlock(&nn->client_lock);
@@ -4915,6 +4924,7 @@ static __be32 set_client(clientid_t *clid,
 	cstate->clp = lookup_clientid(clid, false, nn);
 	if (!cstate->clp)
 		return nfserr_expired;
+	nfsd4_reactivate_courtesy_client(cstate->clp, nfs_ok);
 	return nfs_ok;
 }
 
@@ -6151,6 +6161,13 @@ static __be32 find_cpntf_state(struct nfsd_net *nn, stateid_t *st,
 	found = lookup_clientid(&cps->cp_p_clid, true, nn);
 	if (!found)
 		goto out;
+	if (found->cl_cs_client_state == NFSD4_CLIENT_RECONNECTED) {
+		nfsd4_discard_courtesy_clnt(found);
+		if (atomic_dec_and_lock(&found->cl_rpc_users,
+				&nn->client_lock))
+			spin_unlock(&nn->client_lock);
+		goto out;
+	}
 
 	*stid = find_stateid_by_type(found, &cps->cp_p_stateid,
 			NFS4_DELEG_STID|NFS4_OPEN_STID|NFS4_LOCK_STID);
-- 
2.9.5



* [PATCH RFC v19 09/11] NFSD: Refactor nfsd4_laundromat()
From: Dai Ngo @ 2022-03-31 16:02 UTC (permalink / raw)
  To: chuck.lever, bfields; +Cc: jlayton, viro, linux-nfs, linux-fsdevel

Extract a bit of logic that is about to be expanded to handle
courtesy clients.

Signed-off-by: Dai Ngo <dai.ngo@oracle.com>
---
 fs/nfsd/nfs4state.c | 33 +++++++++++++++++++++------------
 1 file changed, 21 insertions(+), 12 deletions(-)

diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index b33ad5bce721..ed98bba82669 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -5737,6 +5737,26 @@ static void nfsd4_ssc_expire_umount(struct nfsd_net *nn)
 }
 #endif
 
+static void
+nfs4_get_client_reaplist(struct nfsd_net *nn, struct list_head *reaplist,
+				struct laundry_time *lt)
+{
+	struct list_head *pos, *next;
+	struct nfs4_client *clp;
+
+	INIT_LIST_HEAD(reaplist);
+	spin_lock(&nn->client_lock);
+	list_for_each_safe(pos, next, &nn->client_lru) {
+		clp = list_entry(pos, struct nfs4_client, cl_lru);
+		if (!state_expired(lt, clp->cl_time))
+			break;
+		if (mark_client_expired_locked(clp))
+			continue;
+		list_add(&clp->cl_lru, reaplist);
+	}
+	spin_unlock(&nn->client_lock);
+}
+
 static time64_t
 nfs4_laundromat(struct nfsd_net *nn)
 {
@@ -5759,7 +5779,6 @@ nfs4_laundromat(struct nfsd_net *nn)
 		goto out;
 	}
 	nfsd4_end_grace(nn);
-	INIT_LIST_HEAD(&reaplist);
 
 	spin_lock(&nn->s2s_cp_lock);
 	idr_for_each_entry(&nn->s2s_cp_stateids, cps_t, i) {
@@ -5769,17 +5788,7 @@ nfs4_laundromat(struct nfsd_net *nn)
 			_free_cpntf_state_locked(nn, cps);
 	}
 	spin_unlock(&nn->s2s_cp_lock);
-
-	spin_lock(&nn->client_lock);
-	list_for_each_safe(pos, next, &nn->client_lru) {
-		clp = list_entry(pos, struct nfs4_client, cl_lru);
-		if (!state_expired(&lt, clp->cl_time))
-			break;
-		if (mark_client_expired_locked(clp))
-			continue;
-		list_add(&clp->cl_lru, &reaplist);
-	}
-	spin_unlock(&nn->client_lock);
+	nfs4_get_client_reaplist(nn, &reaplist, &lt);
 	list_for_each_safe(pos, next, &reaplist) {
 		clp = list_entry(pos, struct nfs4_client, cl_lru);
 		trace_nfsd_clid_purged(&clp->cl_clientid);
-- 
2.9.5



* [PATCH RFC v19 10/11] NFSD: Update laundromat to handle courtesy clients
From: Dai Ngo @ 2022-03-31 16:02 UTC (permalink / raw)
  To: chuck.lever, bfields; +Cc: jlayton, viro, linux-nfs, linux-fsdevel

Add nfs4_anylock_blockers and nfs4_lockowner_has_blockers to check
whether an expired client has any lock blockers.

Update nfs4_get_client_reaplist to:
 . add courtesy clients in CLIENT_EXPIRED state to the reaplist.
 . detect whether an expired client still has state and no blockers,
   and if so transition it to a courtesy client by setting
   CLIENT_COURTESY state and removing the client record.

Signed-off-by: Dai Ngo <dai.ngo@oracle.com>
---
 fs/nfsd/nfs4state.c | 94 +++++++++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 92 insertions(+), 2 deletions(-)

diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index ed98bba82669..b56c23fb6ba1 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -5737,24 +5737,106 @@ static void nfsd4_ssc_expire_umount(struct nfsd_net *nn)
 }
 #endif
 
+/* Check if any lock belonging to this lockowner has any blockers */
+static bool
+nfs4_lockowner_has_blockers(struct nfs4_lockowner *lo)
+{
+	struct file_lock_context *ctx;
+	struct nfs4_ol_stateid *stp;
+	struct nfs4_file *nf;
+
+	list_for_each_entry(stp, &lo->lo_owner.so_stateids, st_perstateowner) {
+		nf = stp->st_stid.sc_file;
+		ctx = nf->fi_inode->i_flctx;
+		if (!ctx)
+			continue;
+		if (locks_owner_has_blockers(ctx, lo))
+			return true;
+	}
+	return false;
+}
+
+static bool
+nfs4_anylock_blockers(struct nfs4_client *clp)
+{
+	int i;
+	struct nfs4_stateowner *so;
+	struct nfs4_lockowner *lo;
+
+	spin_lock(&clp->cl_lock);
+	for (i = 0; i < OWNER_HASH_SIZE; i++) {
+		list_for_each_entry(so, &clp->cl_ownerstr_hashtbl[i],
+				so_strhash) {
+			if (so->so_is_open_owner)
+				continue;
+			lo = lockowner(so);
+			if (nfs4_lockowner_has_blockers(lo)) {
+				spin_unlock(&clp->cl_lock);
+				return true;
+			}
+		}
+	}
+	spin_unlock(&clp->cl_lock);
+	return false;
+}
+
 static void
 nfs4_get_client_reaplist(struct nfsd_net *nn, struct list_head *reaplist,
 				struct laundry_time *lt)
 {
 	struct list_head *pos, *next;
 	struct nfs4_client *clp;
+	bool cour;
+	struct list_head cslist;
 
 	INIT_LIST_HEAD(reaplist);
+	INIT_LIST_HEAD(&cslist);
 	spin_lock(&nn->client_lock);
 	list_for_each_safe(pos, next, &nn->client_lru) {
 		clp = list_entry(pos, struct nfs4_client, cl_lru);
 		if (!state_expired(lt, clp->cl_time))
 			break;
-		if (mark_client_expired_locked(clp))
+
+		if (!client_has_state(clp))
+			goto exp_client;
+
+		if (clp->cl_cs_client_state == NFSD4_CLIENT_EXPIRED)
+			goto exp_client;
+		cour = (clp->cl_cs_client_state == NFSD4_CLIENT_COURTESY);
+		if (cour && ktime_get_boottime_seconds() >=
+				(clp->cl_time + NFSD_COURTESY_CLIENT_TIMEOUT))
+			goto exp_client;
+		if (nfs4_anylock_blockers(clp)) {
+exp_client:
+			if (mark_client_expired_locked(clp))
+				continue;
+			list_add(&clp->cl_lru, reaplist);
 			continue;
-		list_add(&clp->cl_lru, reaplist);
+		}
+		if (!cour) {
+			spin_lock(&clp->cl_cs_lock);
+			clp->cl_cs_client_state = NFSD4_CLIENT_COURTESY;
+			spin_unlock(&clp->cl_cs_lock);
+			list_add(&clp->cl_cs_list, &cslist);
+		}
 	}
 	spin_unlock(&nn->client_lock);
+
+	while (!list_empty(&cslist)) {
+		clp = list_first_entry(&cslist, struct nfs4_client, cl_cs_list);
+		list_del_init(&clp->cl_cs_list);
+		spin_lock(&clp->cl_cs_lock);
+		/*
+		 * Client might have re-connected. Make sure it's
+		 * still in courtesy state before removing its record.
+		 */
+		if (clp->cl_cs_client_state != NFSD4_CLIENT_COURTESY) {
+			spin_unlock(&clp->cl_cs_lock);
+			continue;
+		}
+		spin_unlock(&clp->cl_cs_lock);
+		nfsd4_client_record_remove(clp);
+	}
 }
 
 static time64_t
@@ -5800,6 +5882,14 @@ nfs4_laundromat(struct nfsd_net *nn)
 		dp = list_entry (pos, struct nfs4_delegation, dl_recall_lru);
 		if (!state_expired(&lt, dp->dl_time))
 			break;
+		clp = dp->dl_stid.sc_client;
+		spin_lock(&clp->cl_cs_lock);
+		if (clp->cl_cs_client_state == NFSD4_CLIENT_COURTESY) {
+			clp->cl_cs_client_state = NFSD4_CLIENT_EXPIRED;
+			spin_unlock(&clp->cl_cs_lock);
+			continue;
+		}
+		spin_unlock(&clp->cl_cs_lock);
 		WARN_ON(!unhash_delegation_locked(dp));
 		list_add(&dp->dl_recall_lru, &reaplist);
 	}
-- 
2.9.5



* [PATCH RFC v19 11/11] NFSD: Show state of courtesy clients in client info
From: Dai Ngo @ 2022-03-31 16:02 UTC (permalink / raw)
  To: chuck.lever, bfields; +Cc: jlayton, viro, linux-nfs, linux-fsdevel

Update client_info_show to show the state of courtesy clients
and the time since the last renew.
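
With this change the client info file might read as follows (all
values here are illustrative):

        clientid: 0x33c9c7ec62459e48
        address: "192.168.1.10:790"
        status: courtesy
        time since last renew: 00:00:38
        name: ...
        minor version: 1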

Signed-off-by: Dai Ngo <dai.ngo@oracle.com>
---
 fs/nfsd/nfs4state.c | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index b56c23fb6ba1..7b4c7b15a99b 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -2486,7 +2486,8 @@ static int client_info_show(struct seq_file *m, void *v)
 {
 	struct inode *inode = m->private;
 	struct nfs4_client *clp;
-	u64 clid;
+	u64 clid, hrs;
+	u32 mins, secs;
 
 	clp = get_nfsdfs_clp(inode);
 	if (!clp)
@@ -2494,10 +2495,17 @@ static int client_info_show(struct seq_file *m, void *v)
 	memcpy(&clid, &clp->cl_clientid, sizeof(clid));
 	seq_printf(m, "clientid: 0x%llx\n", clid);
 	seq_printf(m, "address: \"%pISpc\"\n", (struct sockaddr *)&clp->cl_addr);
-	if (test_bit(NFSD4_CLIENT_CONFIRMED, &clp->cl_flags))
+
+	if (clp->cl_cs_client_state == NFSD4_CLIENT_COURTESY)
+		seq_puts(m, "status: courtesy\n");
+	else if (test_bit(NFSD4_CLIENT_CONFIRMED, &clp->cl_flags))
 		seq_puts(m, "status: confirmed\n");
 	else
 		seq_puts(m, "status: unconfirmed\n");
+	hrs = div_u64_rem(ktime_get_boottime_seconds() - clp->cl_time,
+				3600, &secs);
+	mins = div_u64_rem((u64)secs, 60, &secs);
+	seq_printf(m, "time since last renew: %llu:%02u:%02u\n", hrs, mins, secs);
 	seq_printf(m, "name: ");
 	seq_quote_mem(m, clp->cl_name.data, clp->cl_name.len);
 	seq_printf(m, "\nminor version: %d\n", clp->cl_minorversion);
-- 
2.9.5



* Re: [PATCH RFC v19 01/11] fs/lock: add helper locks_owner_has_blockers to check for blockers
From: Chuck Lever III @ 2022-03-31 16:17 UTC (permalink / raw)
  To: Dai Ngo
  Cc: Bruce Fields, Jeff Layton, Al Viro, Linux NFS Mailing List,
	linux-fsdevel



> On Mar 31, 2022, at 12:01 PM, Dai Ngo <dai.ngo@oracle.com> wrote:
> 
> Add a helper, locks_owner_has_blockers, to check if there are any
> blockers for a given lockowner.
> 
> Signed-off-by: Dai Ngo <dai.ngo@oracle.com>

Since 01/11 is no longer changing and I want to keep the
Reviewed-by, I've already applied this one to for-next.

You don't need to send it again. If there is a v20,
please be sure it is rebased on my current for-next
topic branch.


--
Chuck Lever





* Re: [PATCH RFC v19 01/11] fs/lock: add helper locks_owner_has_blockers to check for blockers
From: dai.ngo @ 2022-03-31 16:29 UTC (permalink / raw)
  To: Chuck Lever III
  Cc: Bruce Fields, Jeff Layton, Al Viro, Linux NFS Mailing List,
	linux-fsdevel


On 3/31/22 9:17 AM, Chuck Lever III wrote:
>
>> On Mar 31, 2022, at 12:01 PM, Dai Ngo <dai.ngo@oracle.com> wrote:
>>
>> Add a helper, locks_owner_has_blockers, to check if there are any
>> blockers for a given lockowner.
>>
>> Signed-off-by: Dai Ngo <dai.ngo@oracle.com>
> Since 01/11 is no longer changing and I want to keep the
> Reviewed-by, I've already applied this one to for-next.
>
> You don't need to send it again. If there is a v20,
> please be sure it is rebased on my current for-next
> topic branch.

Got it.

Thanks,
-Dai


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH RFC v19 06/11] NFSD: Update find_clp_in_name_tree() to handle courtesy client
  2022-03-31 16:02 ` [PATCH RFC v19 06/11] NFSD: Update find_clp_in_name_tree() " Dai Ngo
@ 2022-04-01 15:21   ` J. Bruce Fields
  2022-04-01 15:57     ` Chuck Lever III
  0 siblings, 1 reply; 29+ messages in thread
From: J. Bruce Fields @ 2022-04-01 15:21 UTC (permalink / raw)
  To: Dai Ngo; +Cc: chuck.lever, jlayton, viro, linux-nfs, linux-fsdevel

On Thu, Mar 31, 2022 at 09:02:04AM -0700, Dai Ngo wrote:
> Update find_clp_in_name_tree to check and expire courtesy client.
> 
> Update find_confirmed_client_by_name to discard the courtesy
> client by setting CLIENT_EXPIRED.
> 
> Update nfsd4_setclientid to expire client with CLIENT_EXPIRED
> state to prevent multiple confirmed clients with the same name
> on the conf_name_tree.
> 
> Signed-off-by: Dai Ngo <dai.ngo@oracle.com>
> ---
>  fs/nfsd/nfs4state.c | 27 ++++++++++++++++++++++++---
>  fs/nfsd/state.h     | 22 ++++++++++++++++++++++
>  2 files changed, 46 insertions(+), 3 deletions(-)
> 
> diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
> index fe8969ba94b3..eadce5d19473 100644
> --- a/fs/nfsd/nfs4state.c
> +++ b/fs/nfsd/nfs4state.c
> @@ -2893,8 +2893,11 @@ find_clp_in_name_tree(struct xdr_netobj *name, struct rb_root *root)
>  			node = node->rb_left;
>  		else if (cmp < 0)
>  			node = node->rb_right;
> -		else
> +		else {
> +			if (nfsd4_courtesy_clnt_expired(clp))
> +				return NULL;
>  			return clp;
> +		}
>  	}
>  	return NULL;
>  }
> @@ -2973,8 +2976,15 @@ static bool clp_used_exchangeid(struct nfs4_client *clp)
>  static struct nfs4_client *
>  find_confirmed_client_by_name(struct xdr_netobj *name, struct nfsd_net *nn)
>  {
> +	struct nfs4_client *clp;
> +
>  	lockdep_assert_held(&nn->client_lock);
> -	return find_clp_in_name_tree(name, &nn->conf_name_tree);
> +	clp = find_clp_in_name_tree(name, &nn->conf_name_tree);
> +	if (clp && clp->cl_cs_client_state == NFSD4_CLIENT_RECONNECTED) {
> +		nfsd4_discard_courtesy_clnt(clp);
> +		clp = NULL;
> +	}
> +	return clp;
>  }
>  
....
> +static inline void
> +nfsd4_discard_courtesy_clnt(struct nfs4_client *clp)
> +{
> +	spin_lock(&clp->cl_cs_lock);
> +	clp->cl_cs_client_state = NFSD4_CLIENT_EXPIRED;
> +	spin_unlock(&clp->cl_cs_lock);
> +}

This is a red flag to me.... What guarantees that the condition we just
checked (cl_cs_client_state == NFSD4_CLIENT_RECONNECTED) is still true
here?  Why couldn't another thread have raced in and called
reactivate_courtesy_client?

Should we be holding cl_cs_lock across both the cl_cs_client_state check and
the assignment?  Or should reactivate_courtesy_client be taking the
client_lock?

I'm still not clear on the need for the CLIENT_RECONNECTED state.

I think analysis would be a bit simpler if the only states were ACTIVE,
COURTESY, and EXPIRED, the only transitions possible were

	ACTIVE->COURTESY->{EXPIRED or ACTIVE}

and the same lock ensured that those were the only possible transitions.

(And to be honest I'd still prefer the original approach where we expire
clients from the posix locking code and then retry.  It handles an
additional case (the one where reboot happens after a long network
partition), and I don't think it requires adding these new client
states....)

--b.

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH RFC v19 06/11] NFSD: Update find_clp_in_name_tree() to handle courtesy client
  2022-04-01 15:21   ` J. Bruce Fields
@ 2022-04-01 15:57     ` Chuck Lever III
  2022-04-01 19:11       ` dai.ngo
  0 siblings, 1 reply; 29+ messages in thread
From: Chuck Lever III @ 2022-04-01 15:57 UTC (permalink / raw)
  To: Bruce Fields
  Cc: Dai Ngo, Jeff Layton, Al Viro, Linux NFS Mailing List, linux-fsdevel



> On Apr 1, 2022, at 11:21 AM, J. Bruce Fields <bfields@fieldses.org> wrote:
> 
> On Thu, Mar 31, 2022 at 09:02:04AM -0700, Dai Ngo wrote:
>> Update find_clp_in_name_tree to check and expire courtesy client.
>> 
>> Update find_confirmed_client_by_name to discard the courtesy
>> client by setting CLIENT_EXPIRED.
>> 
>> Update nfsd4_setclientid to expire client with CLIENT_EXPIRED
>> state to prevent multiple confirmed clients with the same name
>> on the conf_name_tree.
>> 
>> Signed-off-by: Dai Ngo <dai.ngo@oracle.com>
>> ---
>> fs/nfsd/nfs4state.c | 27 ++++++++++++++++++++++++---
>> fs/nfsd/state.h     | 22 ++++++++++++++++++++++
>> 2 files changed, 46 insertions(+), 3 deletions(-)
>> 
>> diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
>> index fe8969ba94b3..eadce5d19473 100644
>> --- a/fs/nfsd/nfs4state.c
>> +++ b/fs/nfsd/nfs4state.c
>> @@ -2893,8 +2893,11 @@ find_clp_in_name_tree(struct xdr_netobj *name, struct rb_root *root)
>> 			node = node->rb_left;
>> 		else if (cmp < 0)
>> 			node = node->rb_right;
>> -		else
>> +		else {
>> +			if (nfsd4_courtesy_clnt_expired(clp))
>> +				return NULL;
>> 			return clp;
>> +		}
>> 	}
>> 	return NULL;
>> }
>> @@ -2973,8 +2976,15 @@ static bool clp_used_exchangeid(struct nfs4_client *clp)
>> static struct nfs4_client *
>> find_confirmed_client_by_name(struct xdr_netobj *name, struct nfsd_net *nn)
>> {
>> +	struct nfs4_client *clp;
>> +
>> 	lockdep_assert_held(&nn->client_lock);
>> -	return find_clp_in_name_tree(name, &nn->conf_name_tree);
>> +	clp = find_clp_in_name_tree(name, &nn->conf_name_tree);
>> +	if (clp && clp->cl_cs_client_state == NFSD4_CLIENT_RECONNECTED) {
>> +		nfsd4_discard_courtesy_clnt(clp);
>> +		clp = NULL;
>> +	}
>> +	return clp;
>> }
>> 
> ....
>> +static inline void
>> +nfsd4_discard_courtesy_clnt(struct nfs4_client *clp)
>> +{
>> +	spin_lock(&clp->cl_cs_lock);
>> +	clp->cl_cs_client_state = NFSD4_CLIENT_EXPIRED;
>> +	spin_unlock(&clp->cl_cs_lock);
>> +}
> 
> This is a red flag to me.... What guarantees that the condition we just
> checked (cl_cs_client_state == NFSD4_CLIENT_RECONNECTED) is still true
> here?  Why couldn't another thread have raced in and called
> reactivate_courtesy_client?
> 
> Should we be holding cl_cs_lock across both the cl_cs_client_state check and
> the assignment?

Holding cl_cs_lock while checking cl_cs_client_state and then
updating it sounds OK to me.


> Or should reactivate_courtesy_client be taking the
> client_lock?
> 
> I'm still not clear on the need for the CLIENT_RECONNECTED state.
> 
> I think analysis would be a bit simpler if the only states were ACTIVE,
> COURTESY, and EXPIRED, the only transitions possible were
> 
> 	ACTIVE->COURTESY->{EXPIRED or ACTIVE}
> 
> and the same lock ensured that those were the only possible transitions.

Yes, that would be easier, but I don't think it's realistic. There
are lock ordering issues between nn->client_lock and the locks on
the individual files and state that make it onerous.


> (And to be honest I'd still prefer the original approach where we expire
> clients from the posix locking code and then retry.  It handles an
> additional case (the one where reboot happens after a long network
> partition), and I don't think it requires adding these new client
> states....)

The locking of the earlier approach was unworkable.

But, I'm happy to consider that again if you can come up with a way
of handling it properly and simply.


--
Chuck Lever




^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH RFC v19 06/11] NFSD: Update find_clp_in_name_tree() to handle courtesy client
  2022-04-01 15:57     ` Chuck Lever III
@ 2022-04-01 19:11       ` dai.ngo
  2022-04-13 12:55         ` Bruce Fields
  0 siblings, 1 reply; 29+ messages in thread
From: dai.ngo @ 2022-04-01 19:11 UTC (permalink / raw)
  To: Chuck Lever III, Bruce Fields
  Cc: Jeff Layton, Al Viro, Linux NFS Mailing List, linux-fsdevel

On 4/1/22 8:57 AM, Chuck Lever III wrote:
>
>> On Apr 1, 2022, at 11:21 AM, J. Bruce Fields <bfields@fieldses.org> wrote:
>>
>> On Thu, Mar 31, 2022 at 09:02:04AM -0700, Dai Ngo wrote:
>>> [... patch description and diff trimmed; quoted in full above ...]
>> ....
>>> +static inline void
>>> +nfsd4_discard_courtesy_clnt(struct nfs4_client *clp)
>>> +{
>>> +	spin_lock(&clp->cl_cs_lock);
>>> +	clp->cl_cs_client_state = NFSD4_CLIENT_EXPIRED;
>>> +	spin_unlock(&clp->cl_cs_lock);
>>> +}
>> This is a red flag to me.... What guarantees that the condition we just
>> checked (cl_cs_client_state == NFSD4_CLIENT_RECONNECTED) is still true
>> here?  Why couldn't another thread have raced in and called
>> reactivate_courtesy_client?
>>
>> Should we be holding cl_cs_lock across both the cl_cs_client_state check and
>> the assignment?
> Holding cl_cs_lock while checking cl_cs_client_state and then
> updating it sounds OK to me.

Thanks Bruce and Chuck!

I replaced nfsd4_discard_courtesy_clnt with nfsd4_discard_reconnect_clnt,
which checks and sets cl_cs_client_state under cl_cs_lock:

static inline bool
nfsd4_discard_reconnect_clnt(struct nfs4_client *clp)
{
         bool ret = false;

         spin_lock(&clp->cl_cs_lock);
         if (clp->cl_cs_client_state == NFSD4_CLIENT_RECONNECTED) {
                 clp->cl_cs_client_state = NFSD4_CLIENT_EXPIRED;
                 ret = true;
         }
         spin_unlock(&clp->cl_cs_lock);
         return ret;
}
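
For reference, find_confirmed_client_by_name would then use the helper
roughly like this (my reading of the hunk quoted above, not the posted
v20):

static struct nfs4_client *
find_confirmed_client_by_name(struct xdr_netobj *name, struct nfsd_net *nn)
{
	struct nfs4_client *clp;

	lockdep_assert_held(&nn->client_lock);
	clp = find_clp_in_name_tree(name, &nn->conf_name_tree);
	/* discard only if still RECONNECTED; otherwise keep clp */
	if (clp && nfsd4_discard_reconnect_clnt(clp))
		clp = NULL;
	return clp;
}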

>
>
>> Or should reactivate_courtesy_client be taking the
>> client_lock?
>>
>> I'm still not clear on the need for the CLIENT_RECONNECTED state.
>>
>> I think analysis would be a bit simpler if the only states were ACTIVE,
>> COURTESY, and EXPIRED, the only transitions possible were
>>
>> 	ACTIVE->COURTESY->{EXPIRED or ACTIVE}
>>
>> and the same lock ensured that those were the only possible transitions.
> Yes, that would be easier, but I don't think it's realistic. There
> are lock ordering issues between nn->client_lock and the locks on
> the individual files and state that make it onerous.
>
>
>> (And to be honest I'd still prefer the original approach where we expire
>> clients from the posix locking code and then retry.  It handles an
>> additional case (the one where reboot happens after a long network
>> partition), and I don't think it requires adding these new client
>> states....)
> The locking of the earlier approach was unworkable.
>
> But, I'm happy to consider that again if you can come up with a way
> of handling it properly and simply.

I will wait for feedback from Bruce before sending v20 with the
above change.

-Dai


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH RFC v19 0/11] NFSD: Initial implementation of NFSv4 Courteous Server
  2022-03-31 16:01 [PATCH RFC v19 0/11] NFSD: Initial implementation of NFSv4 Courteous Server Dai Ngo
                   ` (10 preceding siblings ...)
  2022-03-31 16:02 ` [PATCH RFC v19 11/11] NFSD: Show state of courtesy clients in client info Dai Ngo
@ 2022-04-02 10:35 ` Jeff Layton
  2022-04-02 15:10   ` Chuck Lever III
  11 siblings, 1 reply; 29+ messages in thread
From: Jeff Layton @ 2022-04-02 10:35 UTC (permalink / raw)
  To: Dai Ngo, chuck.lever, bfields; +Cc: viro, linux-nfs, linux-fsdevel

On Thu, 2022-03-31 at 09:01 -0700, Dai Ngo wrote:
> Hi Chuck, Bruce
> 
> This series of patches implement the NFSv4 Courteous Server.
> 
> A server which does not immediately expunge the state on lease expiration
> is known as a Courteous Server.  A Courteous Server continues to recognize
> previously generated state tokens as valid until conflict arises between
> the expired state and the requests from another client, or the server
> reboots.
> 
> [... v2 through v19 change log trimmed ...]

Dai,

Do you have a public tree with these patches?

-- 
Jeff Layton <jlayton@redhat.com>


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH RFC v19 0/11] NFSD: Initial implementation of NFSv4 Courteous Server
  2022-04-02 10:35 ` [PATCH RFC v19 0/11] NFSD: Initial implementation of NFSv4 Courteous Server Jeff Layton
@ 2022-04-02 15:10   ` Chuck Lever III
  0 siblings, 0 replies; 29+ messages in thread
From: Chuck Lever III @ 2022-04-02 15:10 UTC (permalink / raw)
  To: Jeff Layton
  Cc: Dai Ngo, Bruce Fields, Al Viro, Linux NFS Mailing List, linux-fsdevel



> On Apr 2, 2022, at 6:35 AM, Jeff Layton <jlayton@redhat.com> wrote:
> 
> On Thu, 2022-03-31 at 09:01 -0700, Dai Ngo wrote:
>> [... cover letter and change log trimmed; quoted in full above ...]
> 
> Dai,
> 
> Do you have a public tree with these patches?

I've been hosting them here:

  https://git.kernel.org/pub/scm/linux/kernel/git/cel/linux.git/

in the nfsd-courteous-server branch.


--
Chuck Lever




^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH RFC v19 06/11] NFSD: Update find_clp_in_name_tree() to handle courtesy client
  2022-04-01 19:11       ` dai.ngo
@ 2022-04-13 12:55         ` Bruce Fields
  2022-04-13 18:28           ` dai.ngo
                             ` (2 more replies)
  0 siblings, 3 replies; 29+ messages in thread
From: Bruce Fields @ 2022-04-13 12:55 UTC (permalink / raw)
  To: dai.ngo
  Cc: Chuck Lever III, Jeff Layton, Al Viro, Linux NFS Mailing List,
	linux-fsdevel

On Fri, Apr 01, 2022 at 12:11:34PM -0700, dai.ngo@oracle.com wrote:
> On 4/1/22 8:57 AM, Chuck Lever III wrote:
> >>(And to be honest I'd still prefer the original approach where we expire
> >>clients from the posix locking code and then retry.  It handles an
> >>additional case (the one where reboot happens after a long network
> >>partition), and I don't think it requires adding these new client
> >>states....)
> >The locking of the earlier approach was unworkable.
> >
> >But, I'm happy to consider that again if you can come up with a way
> >of handling it properly and simply.
> 
> I will wait for feedback from Bruce before sending v20 with the
> above change.

OK, I'd like to tweak the design in that direction.

I'd like to handle the case where the network goes down for a while, and
the server gets power-cycled before the network comes back up.  I think
that could easily happen.  There's no reason clients couldn't reclaim
all their state in that case.  We should let them.

To handle that case, we have to delay removing the client's stable
storage record until there's a lock conflict.  That means code that
checks for conflicts must be able to sleep.

In each case (opens, locks, delegations), conflicts are first detected
while holding a spinlock.  So we need to unlock before waiting, and then
retry if necessary.

We decided instead to remove the stable-storage record when first
converting a client to a courtesy client--then we can handle a conflict
by just setting a flag on the client that indicates it should no longer
be used, no need to drop any locks.

That leaves the client in a state where it's still on a bunch of global
data structures, but has to be treated as if it no longer exists.  That
turns out to require more special handling than expected.  You've shown
admirable persistence in handling those cases, but I'm still not
completely convinced this is correct.

We could avoid that complication, and also solve the
server-reboot-during-network-partition problem, if we went back to the
first plan and allowed ourselves to sleep at the time we detect a
conflict.  I don't think it's that complicated.

We end up using a lot of the same logic regardless, so don't throw away
the existing patches.

My basic plan is:

Keep the client state, but with only three values: ACTIVE, COURTESY, and
EXPIRABLE.

ACTIVE is the initial state, which we return to whenever we renew.  The
laundromat sets COURTESY whenever a client isn't renewed for a lease
period.  When we run into a conflict with a lock held by a client, we
call

  static bool try_to_expire_client(struct nfs4_client *clp)
  {
	return COURTESY == cmpxchg(&clp->cl_state, COURTESY, EXPIRABLE);
  }

If it returns true, that tells us the client was a courtesy client.  We
then call queue_work(laundry_wq, &nn->laundromat_work) to tell the
laundromat to actually expire the client.  Then if needed we can drop
locks, wait for the laundromat to do the work with
flush_workqueue(laundry_wq), and retry.

All the EXPIRABLE state does is tell the laundromat to expire this
client.  It does *not* prevent the client from being renewed and
acquiring new locks--if that happens before the laundromat gets to the
client, that's fine, we let it return to ACTIVE state and if someone
retries the conflicting lock they'll just get a denial.
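
To make the three states concrete, here's a minimal sketch of the
laundromat side (the enum names, the lease_expired test, and the use of
cl_lru/reaplist are just assumptions for illustration):

enum {
	NFSD4_ACTIVE = 0,
	NFSD4_COURTESY,
	NFSD4_EXPIRABLE,
};

static void laundromat_check_client(struct nfs4_client *clp,
				    bool lease_expired,
				    struct list_head *reaplist)
{
	if (!lease_expired)
		return;		/* a renew already reset cl_state to ACTIVE */

	switch (clp->cl_state) {
	case NFSD4_ACTIVE:
		/* first missed lease period: downgrade, keep all state */
		clp->cl_state = NFSD4_COURTESY;
		break;
	case NFSD4_EXPIRABLE:
		/* a conflict marked this client; really expire it now */
		list_add(&clp->cl_lru, reaplist);
		break;
	case NFSD4_COURTESY:
		/* left alone until a conflict flips it to EXPIRABLE */
		break;
	}
}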

Here's a suggested rough patch ordering.  If you want to go above and
beyond, I also suggest some tests that should pass after each step:


PATCH 1
-------

Implement courtesy behavior *only* for clients that have
delegations, but no actual opens or locks:

Define new cl_state field with values ACTIVE, COURTESY, and EXPIRABLE.
Set to ACTIVE on renewal.  Modify the laundromat so that instead of
expiring any client that's too old, it first checks if a client has
state consisting only of unconflicted delegations, and, if so, it sets
COURTESY.

Define try_to_expire_client as above.  In nfsd_break_deleg_cb, call
try_to_expire_client and queue_work.  (But also continue scheduling the
recall as we do in the current code; there's no harm in that.)

Modify the laundromat to try to expire old clients with EXPIRABLE set.
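
As a sketch, the hook would look something like this (only the courtesy
addition is shown; the rest of nfsd_break_deleg_cb is elided and
unchanged):

static bool nfsd_break_deleg_cb(struct file_lock *fl)
{
	struct nfs4_delegation *dp = (struct nfs4_delegation *)fl->fl_owner;
	struct nfs4_client *clp = dp->dl_stid.sc_client;
	struct nfsd_net *nn = net_generic(clp->net, nfsd_net_id);

	/* courtesy holder: mark it EXPIRABLE and kick the laundromat */
	if (try_to_expire_client(clp))
		queue_work(laundry_wq, &nn->laundromat_work);

	/* ... existing recall scheduling continues as before ... */
	return false;
}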

TESTS:
	- Establish a client, open a file, get a delegation, close the
	  file, wait 2 lease periods, verify that you can still use the
	  delegation.
	- Establish a client, open a file, get a delegation, close the
	  file, wait 2 lease periods, establish a second client, request
	  a conflicting open, verify that the open succeeds and that the
	  first client is no longer able to use its delegation.


PATCH 2
-------

Extend courtesy client behavior to clients that have opens or
delegations, but no locks:

Modify the laundromat to set COURTESY on old clients with state
consisting only of opens or unconflicted delegations.

Add in nfs4_resolve_deny_conflicts_locked and friends as in your patch
"Update nfs4_get_vfs_file()...", but in the case of a conflict, call
try_to_expire_client and queue_work(), then modify e.g.
nfs4_get_vfs_file to flush_workqueue() and then retry after unlocking
fi_lock.
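
The retry pattern I have in mind, roughly (nfs4_find_deny_conflict() is
a made-up stand-in for the share/deny scan in that patch):

static __be32 nfs4_file_get_access_retry(struct nfs4_file *fp,
					 u32 access, struct nfsd_net *nn)
{
	struct nfs4_client *conflict;

retry:
	spin_lock(&fp->fi_lock);
	conflict = nfs4_find_deny_conflict(fp, access);
	if (conflict) {
		if (!try_to_expire_client(conflict)) {
			/* holder is ACTIVE: a genuine share conflict */
			spin_unlock(&fp->fi_lock);
			return nfserr_share_denied;
		}
		spin_unlock(&fp->fi_lock);
		queue_work(laundry_wq, &nn->laundromat_work);
		flush_workqueue(laundry_wq);	/* wait for the expiry */
		goto retry;
	}
	/* ... grant access under fi_lock as the current code does ... */
	spin_unlock(&fp->fi_lock);
	return nfs_ok;
}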

TESTS:
	- establish a client, open a file, wait 2 lease periods, verify
	  that you can still use the open stateid.
	- establish a client, open a file, wait 2 lease periods,
	  establish a second client, request an open with a share mode
	  conflicting with the first open, verify that the open succeeds
	  and that first client is no longer able to use its open.

PATCH 3
-------

Minor tweak to prevent the laundromat workqueue from being freed out from
under a thread processing a conflicting lock:

Create and destroy the laundromat workqueue in init_nfsd/exit_nfsd
instead of where it's done currently.

(That makes the laundromat's lifetime longer than strictly necessary.
We could do better with a little more work; I think this is OK for now.)

TESTS:
	- just rerun any regression tests; this patch shouldn't change
	  behavior.

PATCH 4
-------

Extend courtesy client behavior to any client with state, including
locks:

Modify the laundromat to set COURTESY on any old client with state.

Add two new lock manager callbacks:

	void * (*lm_lock_expirable)(struct file_lock *);
	bool (*lm_expire_lock)(void *);

If lm_lock_expirable() is called and returns non-NULL, posix_lock_inode
should drop flc_lock, call lm_expire_lock() with the value returned from
lm_lock_expirable, and then restart the loop over flc_posix from the
beginning.

For now, nfsd's lm_lock_expirable will basically just be

	if (try_to_expire_client()) {
		queue_work()
		return get_net();
	}
	return NULL;

and lm_expire_lock will:

	flush_workqueue()
	put_net()

One more subtlety: the moment we drop the flc_lock, it's possible
another task could race in and free it.  Worse, the nfsd module could be
removed entirely--so nfsd's lm_expire_lock code could disappear out from
under us.  To prevent this, I think we need to add a struct module
*owner field to struct lock_manager_operations, and use it like:

	owner = fl->fl_lmops->owner;
	__module_get(owner);
	expire_lock = fl->fl_lmops->lm_expire_lock;
	spin_unlock(&ctx->flc_lock);
	expire_lock(...);
	module_put(owner);

Maybe there's some simpler way, but I don't see it.
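
Putting those pieces together, the conflict scan in posix_lock_inode
would look roughly like this (the real function's bookkeeping is
omitted, and the function name is just for the sketch):

static int posix_lock_inode_sketch(struct file_lock_context *ctx,
				   struct file_lock *request)
{
	struct file_lock *fl;

retry:
	spin_lock(&ctx->flc_lock);
	list_for_each_entry(fl, &ctx->flc_posix, fl_list) {
		struct module *owner;
		bool (*expire_lock)(void *);
		void *cookie;

		if (!posix_locks_conflict(request, fl))
			continue;
		if (!fl->fl_lmops || !fl->fl_lmops->lm_lock_expirable)
			break;			/* ordinary conflict */
		cookie = fl->fl_lmops->lm_lock_expirable(fl);
		if (!cookie)
			break;			/* holder isn't expirable */
		owner = fl->fl_lmops->owner;
		__module_get(owner);		/* pin nfsd across the unlock */
		expire_lock = fl->fl_lmops->lm_expire_lock;
		spin_unlock(&ctx->flc_lock);
		expire_lock(cookie);		/* flush_workqueue() + put_net() */
		module_put(owner);
		goto retry;			/* rescan flc_posix from the top */
	}
	/* ... normal conflict handling under flc_lock continues ... */
	spin_unlock(&ctx->flc_lock);
	return 0;
}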

TESTS:
	- retest courtesy client behavior using file locks this time.

--

That's the basic idea.  I think it should work--though I may have
overlooked something.

This has us flush the laundromat workqueue while holding mutexes in a
couple cases.  We could avoid that with a little more work, I think.
But those mutexes should only be associated with the client requesting a
new open/lock, and such a client shouldn't be touched by the laundromat,
so I think we're OK.

It'd also be helpful to update the info file with courtesy client
information, as you do in your current patches.

Does this make sense?

--b.

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH RFC v19 06/11] NFSD: Update find_clp_in_name_tree() to handle courtesy client
  2022-04-13 12:55         ` Bruce Fields
@ 2022-04-13 18:28           ` dai.ngo
  2022-04-13 18:42             ` Bruce Fields
  2022-04-15 14:47           ` dai.ngo
  2022-04-17 19:07           ` Bruce Fields
  2 siblings, 1 reply; 29+ messages in thread
From: dai.ngo @ 2022-04-13 18:28 UTC (permalink / raw)
  To: Bruce Fields
  Cc: Chuck Lever III, Jeff Layton, Al Viro, Linux NFS Mailing List,
	linux-fsdevel


On 4/13/22 5:55 AM, Bruce Fields wrote:
> On Fri, Apr 01, 2022 at 12:11:34PM -0700, dai.ngo@oracle.com wrote:
>> On 4/1/22 8:57 AM, Chuck Lever III wrote:
>>>> (And to be honest I'd still prefer the original approach where we expire
>>>> clients from the posix locking code and then retry.  It handles an
>>>> additional case (the one where reboot happens after a long network
>>>> partition), and I don't think it requires adding these new client
>>>> states....)
>>> The locking of the earlier approach was unworkable.
>>>
>>> But, I'm happy to consider that again if you can come up with a way
>>> of handling it properly and simply.
>> I will wait for feedback from Bruce before sending v20 with the
>> above change.
> OK, I'd like to tweak the design in that direction.
>
> [... rest of the plan trimmed; quoted in full above ...]
>
> Does this make sense?

I think most of the complications in the current patches are due to the
handling of race conditions when a courtesy client reconnects, as well
as creating and removing the client record (which I already addressed
in v21). The new approach here does not cover these race conditions; I
guess these are the details that will show up in the implementation.

I feel like we're going around in circles, but I will implement this
new approach and then we can compare to see if it's simpler than the
current one.

-Dai


>
> --b.

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH RFC v19 06/11] NFSD: Update find_clp_in_name_tree() to handle courtesy client
  2022-04-13 18:28           ` dai.ngo
@ 2022-04-13 18:42             ` Bruce Fields
  0 siblings, 0 replies; 29+ messages in thread
From: Bruce Fields @ 2022-04-13 18:42 UTC (permalink / raw)
  To: dai.ngo
  Cc: Chuck Lever III, Jeff Layton, Al Viro, Linux NFS Mailing List,
	linux-fsdevel

On Wed, Apr 13, 2022 at 11:28:05AM -0700, dai.ngo@oracle.com wrote:
> 
> On 4/13/22 5:55 AM, Bruce Fields wrote:
> >On Fri, Apr 01, 2022 at 12:11:34PM -0700, dai.ngo@oracle.com wrote:
> >>On 4/1/22 8:57 AM, Chuck Lever III wrote:
> >>>>(And to be honest I'd still prefer the original approach where we expire
> >>>>clients from the posix locking code and then retry.  It handles an
> >>>>additional case (the one where reboot happens after a long network
> >>>>partition), and I don't think it requires adding these new client
> >>>>states....)
> >>>The locking of the earlier approach was unworkable.
> >>>
> >>>But, I'm happy to consider that again if you can come up with a way
> >>>of handling it properly and simply.
> >>I will wait for feedback from Bruce before sending v20 with the
> >>above change.
> >OK, I'd like to tweak the design in that direction.
> >
> >I'd like to handle the case where the network goes down for a while, and
> >the server gets power-cycled before the network comes back up.  I think
> >that could easily happen.  There's no reason clients couldn't reclaim
> >all their state in that case.  We should let them.
> >
> >To handle that case, we have to delay removing the client's stable
> >storage record until there's a lock conflict.  That means code that
> >checks for conflicts must be able to sleep.
> >
> >In each case (opens, locks, delegations), conflicts are first detected
> >while holding a spinlock.  So we need to unlock before waiting, and then
> >retry if necessary.
> >
> >We decided instead to remove the stable-storage record when first
> >converting a client to a courtesy client--then we can handle a conflict
> >by just setting a flag on the client that indicates it should no longer
> >be used, no need to drop any locks.
> >
> >That leaves the client in a state where it's still on a bunch of global
> >data structures, but has to be treated as if it no longer exists.  That
> >turns out to require more special handling than expected.  You've shown
> >admirable persistence in handling those cases, but I'm still not
> >completely convinced this is correct.
> >
> >We could avoid that complication, and also solve the
> >server-reboot-during-network-partition problem, if we went back to the
> >first plan and allowed ourselves to sleep at the time we detect a
> >conflict.  I don't think it's that complicated.
> >
> >We end up using a lot of the same logic regardless, so don't throw away
> >the existing patches.
> >
> >My basic plan is:
> >
> >Keep the client state, but with only three values: ACTIVE, COURTESY, and
> >EXPIRABLE.
> >
> >ACTIVE is the initial state, which we return to whenever we renew.  The
> >laundromat sets COURTESY whenever a client isn't renewed for a lease
> >period.  When we run into a conflict with a lock held by a client, we
> >call
> >
> >   static bool try_to_expire_client(struct nfs4_client *clp)
> >   {
> >	return COURTESY == cmpxchg(&clp->cl_state, COURTESY, EXPIRABLE);
> >   }
> >
> >If it returns true, that tells us the client was a courtesy client.  We
> >then call queue_work(laundry_wq, &nn->laundromat_work) to tell the
> >laundromat to actually expire the client.  Then if needed we can drop
> >locks, wait for the laundromat to do the work with
> >flush_workqueue(laundry_wq), and retry.
> >
> >All the EXPIRABLE state does is tell the laundromat to expire this
> >client.  It does *not* prevent the client from being renewed and
> >acquiring new locks--if that happens before the laundromat gets to the
> >client, that's fine, we let it return to ACTIVE state and if someone
> >retries the conflicting lock they'll just get a denial.
> >
> >Here's a suggested rough patch ordering.  If you want to go above and
> >beyond, I also suggest some tests that should pass after each step:
> >
> >
> >PATCH 1
> >-------
> >
> >Implement courtesy behavior *only* for clients that have
> >delegations, but no actual opens or locks:
> >
> >Define new cl_state field with values ACTIVE, COURTESY, and EXPIRABLE.
> >Set to ACTIVE on renewal.  Modify the laundromat so that instead of
> >expiring any client that's too old, it first checks if a client has
> >state consisting only of unconflicted delegations, and, if so, it sets
> >COURTESY.
> >
> >Define try_to_expire_client as above.  In nfsd_break_deleg_cb, call
> >try_to_expire_client and queue_work.  (But also continue scheduling the
> >recall as we do in the current code, there's no harm to that.)
> >
> >Modify the laundromat to try to expire old clients with EXPIRABLE set.
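
[For concreteness, a rough sketch of those two changes; this is
illustrative only, following the draft above, and assumes the
state_expired() helper and client-LRU walk from elsewhere in this
series, with EXPIRABLE standing for the new cl_state value:

	/* PATCH 1, in nfsd_break_deleg_cb() (sketch): */
	struct nfs4_client *clp = dp->dl_stid.sc_client;

	if (try_to_expire_client(clp))
		queue_work(laundry_wq, &nn->laundromat_work);

	/* ...and in nfs4_laundromat(), walking the client LRU (sketch): */
	list_for_each_entry_safe(clp, tmp, &nn->client_lru, cl_lru) {
		if (clp->cl_state == EXPIRABLE)
			goto exp_client;	/* a conflict marked it; expire now */
		if (!state_expired(&lt, clp->cl_time))
			break;			/* still within its lease */
		/* old client: set COURTESY instead of expiring ... */
	}
]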
> >
> >TESTS:
> >	- Establish a client, open a file, get a delegation, close the
> >	  file, wait 2 lease periods, verify that you can still use the
> >	  delegation.
> >	- Establish a client, open a file, get a delegation, close the
> >	  file, wait 2 lease periods, establish a second client, request
> >	  a conflicting open, verify that the open succeeds and that the
> >	  first client is no longer able to use its delegation.
> >
> >
> >PATCH 2
> >-------
> >
> >Extend courtesy client behavior to clients that have opens or
> >delegations, but no locks:
> >
> >Modify the laundromat to set COURTESY on old clients with state
> >consisting only of opens or unconflicted delegations.
> >
> >Add in nfs4_resolve_deny_conflicts_locked and friends as in your patch
> >"Update nfs4_get_vfs_file()...", but in the case of a conflict, call
> >try_to_expire_client and queue_work(), then modify e.g.
> >nfs4_get_vfs_file to flush_workqueue() and then retry after unlocking
> >fi_lock.
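
[In caller terms, the retry pattern might look roughly like this; a
sketch only, and the nfserr_jukebox return convention for "conflict
pending expiry" is an assumption, not something settled here:

	retry:
		spin_lock(&fp->fi_lock);
		status = nfs4_resolve_deny_conflicts_locked(fp, open);
		if (status == nfserr_jukebox) {
			/* try_to_expire_client() marked a courtesy client
			 * EXPIRABLE and queued the laundromat */
			spin_unlock(&fp->fi_lock);
			flush_workqueue(laundry_wq);	/* wait for the expiry */
			goto retry;
		}
		/* ... normal share/deny-mode processing under fi_lock ... */
		spin_unlock(&fp->fi_lock);
]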
> >
> >TESTS:
> >	- establish a client, open a file, wait 2 lease periods, verify
> >	  that you can still use the open stateid.
> >	- establish a client, open a file, wait 2 lease periods,
> >	  establish a second client, request an open with a share mode
> >	  conflicting with the first open, verify that the open succeeds
> >	  and that first client is no longer able to use its open.
> >
> >PATCH 3
> >-------
> >
> >Minor tweak to prevent the laundromat from being freed out from
> >under a thread processing a conflicting lock:
> >
> >Create and destroy the laundromat workqueue in init_nfsd/exit_nfsd
> >instead of where it's done currently.
> >
> >(That makes the laundromat's lifetime longer than strictly necessary.
> >We could do better with a little more work; I think this is OK for now.)
> >
> >TESTS:
> >	- just rerun any regression tests; this patch shouldn't change
> >	  behavior.
> >
> >PATCH 4
> >-------
> >
> >Extend courtesy client behavior to any client with state, including
> >locks:
> >
> >Modify the laundromat to set COURTESY on any old client with state.
> >
> >Add two new lock manager callbacks:
> >
> >	void * (*lm_lock_expirable)(struct file_lock *);
> >	bool (*lm_expire_lock)(void *);
> >
> >If lm_lock_expirable() is called and returns non-NULL, posix_lock_inode
> >should drop flc_lock, call lm_expire_lock() with the value returned from
> >lm_lock_expirable, and then restart the loop over flc_posix from the
> >beginning.
> >
> >For now, nfsd's lm_lock_expirable will basically just be
> >
> >	if (try_to_expire_client()) {
> >		queue_work()
> >		return get_net();
> >	}
> >	return NULL;
> >
> >and lm_expire_lock will:
> >
> >	flush_workqueue()
> >	put_net()
> >
> >One more subtlety: the moment we drop the flc_lock, it's possible
> >another task could race in and free it.  Worse, the nfsd module could be
> >removed entirely--so nfsd's lm_expire_lock code could disappear out from
> >under us.  To prevent this, I think we need to add a struct module
> >*owner field to struct lock_manager_operations, and use it like:
> >
> >	owner = fl->fl_lmops->owner;
> >	__module_get(owner);
> >	expire_lock = fl->fl_lmops->lm_expire_lock;
> >	spin_unlock(&ctx->flc_lock);
> >	expire_lock(...);
> >	module_put(owner);
> >
> >Maybe there's some simpler way, but I don't see it.
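
[Putting the pieces together, the posix_lock_inode() side might look
roughly like this; a fragment sketch using the void * signature above
(a later message in this thread simplifies these signatures):

	retry:
		spin_lock(&ctx->flc_lock);
		list_for_each_entry(fl, &ctx->flc_posix, fl_list) {
			if (!posix_locks_conflict(request, fl))
				continue;
			if (fl->fl_lmops && fl->fl_lmops->lm_lock_expirable &&
			    (arg = fl->fl_lmops->lm_lock_expirable(fl))) {
				owner = fl->fl_lmops->owner;
				__module_get(owner);
				expire_lock = fl->fl_lmops->lm_expire_lock;
				spin_unlock(&ctx->flc_lock);
				expire_lock(arg);	/* flush_workqueue() + put_net() */
				module_put(owner);
				goto retry;	/* rescan flc_posix from the start */
			}
			/* ... existing conflict handling ... */
		}
]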
> >
> >TESTS:
> >	- retest courtesy client behavior using file locks this time.
> >
> >--
> >
> >That's the basic idea.  I think it should work--though I may have
> >overlooked something.
> >
> >This has us flush the laundromat workqueue while holding mutexes in a
> >couple cases.  We could avoid that with a little more work, I think.
> >But those mutexes should only be associated with the client requesting a
> >new open/lock, and such a client shouldn't be touched by the laundromat,
> >so I think we're OK.
> >
> >It'd also be helpful to update the info file with courtesy client
> >information, as you do in your current patches.
> >
> >Does this make sense?
> 
> I think most of the complications in the current patches are due to the
> handling of race conditions when a courtesy client reconnects, as well as
> creating and removing the client record (which I already addressed in v21).
> The new approach here does not cover these race conditions;

That's the thing, there *is* no reconnection with this approach.  A
"courtesy" client still has a stable storage record and is treated in
every way exactly like a normal active client that just hasn't been
renewed in a long time.  I'm suggesting this approach exactly to avoid
the complications you're talking about.

> I guess
> these are details that will show up in the implementation.
> 
> I feel like we're going around in circles, but I will implement this
> new approach; then we can compare to see if it's simpler than the current
> one.

Thanks for giving it a look.

--b.

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH RFC v19 06/11] NFSD: Update find_clp_in_name_tree() to handle courtesy client
  2022-04-13 12:55         ` Bruce Fields
  2022-04-13 18:28           ` dai.ngo
@ 2022-04-15 14:47           ` dai.ngo
  2022-04-15 14:56             ` dai.ngo
  2022-04-17 19:07           ` Bruce Fields
  2 siblings, 1 reply; 29+ messages in thread
From: dai.ngo @ 2022-04-15 14:47 UTC (permalink / raw)
  To: Bruce Fields; +Cc: Chuck Lever III, Linux NFS Mailing List


On 4/13/22 5:55 AM, Bruce Fields wrote:
> On Fri, Apr 01, 2022 at 12:11:34PM -0700, dai.ngo@oracle.com wrote:
>> On 4/1/22 8:57 AM, Chuck Lever III wrote:
>>>> (And to be honest I'd still prefer the original approach where we expire
>>>> clients from the posix locking code and then retry.  It handles an
>>>> additional case (the one where reboot happens after a long network
>>>> partition), and I don't think it requires adding these new client
>>>> states....)
>>> The locking of the earlier approach was unworkable.
>>>
>>> But, I'm happy to consider that again if you can come up with a way
>>> of handling it properly and simply.
>> I will wait for feedback from Bruce before sending v20 with the
>> above change.
> OK, I'd like to tweak the design in that direction.
>
> I'd like to handle the case where the network goes down for a while, and
> the server gets power-cycled before the network comes back up.  I think
> that could easily happen.  There's no reason clients couldn't reclaim
> all their state in that case.  We should let them.
>
> To handle that case, we have to delay removing the client's stable
> storage record until there's a lock conflict.  That means code that
> checks for conflicts must be able to sleep.
>
> In each case (opens, locks, delegations), conflicts are first detected
> while holding a spinlock.  So we need to unlock before waiting, and then
> retry if necessary.
>
> We decided instead to remove the stable-storage record when first
> converting a client to a courtesy client--then we can handle a conflict
> by just setting a flag on the client that indicates it should no longer
> be used, no need to drop any locks.
>
> That leaves the client in a state where it's still on a bunch of global
> data structures, but has to be treated as if it no longer exists.  That
> turns out to require more special handling than expected.  You've shown
> admirable persistence in handling those cases, but I'm still not
> completely convinced this is correct.
>
> We could avoid that complication, and also solve the
> server-reboot-during-network-partition problem, if we went back to the
> first plan and allowed ourselves to sleep at the time we detect a
> conflict.  I don't think it's that complicated.
>
> We end up using a lot of the same logic regardless, so don't throw away
> the existing patches.
>
> My basic plan is:
>
> Keep the client state, but with only three values: ACTIVE, COURTESY, and
> EXPIRABLE.
>
> ACTIVE is the initial state, which we return to whenever we renew.  The
> laundromat sets COURTESY whenever a client isn't renewed for a lease
> period.  When we run into a conflict with a lock held by a client, we
> call
>
>    static bool try_to_expire_client(struct nfs4_client *clp)
>    {
> 	return COURTESY == cmpxchg(&clp->cl_state, COURTESY, EXPIRABLE);
>    }
>
> If it returns true, that tells us the client was a courtesy client.  We
> then call queue_work(laundry_wq, &nn->laundromat_work) to tell the
> laundromat to actually expire the client.  Then if needed we can drop
> locks, wait for the laundromat to do the work with
> flush_workqueue(laundry_wq), and retry.
>
> All the EXPIRABLE state does is tell the laundromat to expire this
> client.  It does *not* prevent the client from being renewed and
> acquiring new locks--if that happens before the laundromat gets to the
> client, that's fine, we let it return to ACTIVE state and if someone
> retries the conflicting lock they'll just get a denial.
>
> Here's a suggested rough patch ordering.  If you want to go above and
> beyond, I also suggest some tests that should pass after each step:
>
>
> PATCH 1
> -------
>
> Implement courtesy behavior *only* for clients that have
> delegations, but no actual opens or locks:

we can do a close(2) so that the only remaining state is the
delegation state. However, this causes a problem for doing exactly
what you want in patch 1.

>
> Define new cl_state field with values ACTIVE, COURTESY, and EXPIRABLE.
> Set to ACTIVE on renewal.  Modify the laundromat so that instead of
> expiring any client that's too old, it first checks if a client has
> state consisting only of unconflicted delegations, and, if so, it sets
> COURTESY.
>
> Define try_to_expire_client as above.  In nfsd_break_deleg_cb, call
> try_to_expire_client and queue_work.  (But also continue scheduling the
> recall as we do in the current code, there's no harm to that.)
>
> Modify the laundromat to try to expire old clients with EXPIRABLE set.
>
> TESTS:
> 	- Establish a client, open a file, get a delegation, close the
> 	  file, wait 2 lease periods, verify that you can still use the
> 	  delegation.

From user space, how do we force the client to use the delegation
state to read the file *after* doing a close(2)?

In my testing, once the read delegation is granted, the Linux client
uses the delegation state to read the file. So the test can do open(2),
read some of the file (e.g. with more(1)), wait 2 lease periods, then
read again and verify it works as expected.

Can we leave the open state valid for this patch, and not support
share reservation conflicts until patch 2?
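
[A minimal sketch of such a test, assuming a Linux client, a 90-second
server lease, and that the client caches the read delegation across
close(2); verify with a network capture that the second open(2) sends
no OPEN to the server:

	#include <fcntl.h>
	#include <stdlib.h>
	#include <unistd.h>

	#define LEASE_SECONDS 90	/* assumption: server lease time */

	int main(int argc, char **argv)
	{
		char buf[4096];
		int fd;

		if (argc < 2)
			exit(1);
		fd = open(argv[1], O_RDONLY);	/* server may grant a read delegation */
		if (fd < 0 || read(fd, buf, sizeof(buf)) < 0)
			exit(1);
		close(fd);		/* open state goes away; delegation is kept */

		sleep(2 * LEASE_SECONDS);	/* let the server make us a courtesy client */

		fd = open(argv[1], O_RDONLY);	/* should be satisfied locally from
						 * the cached delegation */
		if (fd < 0 || read(fd, buf, sizeof(buf)) < 0)
			exit(1);
		close(fd);
		return 0;
	}
]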

-Dai

> 	- Establish a client, open a file, get a delegation, close the
> 	  file, wait 2 lease periods, establish a second client, request
> 	  a conflicting open, verify that the open succeeds and that the
> 	  first client is no longer able to use its delegation.
>
>
> PATCH 2
> -------
>
> Extend courtesy client behavior to clients that have opens or
> delegations, but no locks:
>
> Modify the laundromat to set COURTESY on old clients with state
> consisting only of opens or unconflicted delegations.
>
> Add in nfs4_resolve_deny_conflicts_locked and friends as in your patch
> "Update nfs4_get_vfs_file()...", but in the case of a conflict, call
> try_to_expire_client and queue_work(), then modify e.g.
> nfs4_get_vfs_file to flush_workqueue() and then retry after unlocking
> fi_lock.
>
> TESTS:
> 	- establish a client, open a file, wait 2 lease periods, verify
> 	  that you can still use the open stateid.
> 	- establish a client, open a file, wait 2 lease periods,
> 	  establish a second client, request an open with a share mode
> 	  conflicting with the first open, verify that the open succeeds
> 	  and that first client is no longer able to use its open.
>
> PATCH 3
> -------
>
> Minor tweak to prevent the laundromat from being freed out from
> under a thread processing a conflicting lock:
>
> Create and destroy the laundromat workqueue in init_nfsd/exit_nfsd
> instead of where it's done currently.
>
> (That makes the laundromat's lifetime longer than strictly necessary.
> We could do better with a little more work; I think this is OK for now.)
>
> TESTS:
> 	- just rerun any regression tests; this patch shouldn't change
> 	  behavior.
>
> PATCH 4
> -------
>
> Extend courtesy client behavior to any client with state, including
> locks:
>
> Modify the laundromat to set COURTESY on any old client with state.
>
> Add two new lock manager callbacks:
>
> 	void * (*lm_lock_expirable)(struct file_lock *);
> 	bool (*lm_expire_lock)(void *);
>
> If lm_lock_expirable() is called and returns non-NULL, posix_lock_inode
> should drop flc_lock, call lm_expire_lock() with the value returned from
> lm_lock_expirable, and then restart the loop over flc_posix from the
> beginning.
>
> For now, nfsd's lm_lock_expirable will basically just be
>
> 	if (try_to_expire_client()) {
> 		queue_work()
> 		return get_net();
> 	}
> 	return NULL;
>
> and lm_expire_lock will:
>
> 	flush_workqueue()
> 	put_net()
>
> One more subtlety: the moment we drop the flc_lock, it's possible
> another task could race in and free it.  Worse, the nfsd module could be
> removed entirely--so nfsd's lm_expire_lock code could disappear out from
> under us.  To prevent this, I think we need to add a struct module
> *owner field to struct lock_manager_operations, and use it like:
>
> 	owner = fl->fl_lmops->owner;
> 	__module_get(owner);
> 	expire_lock = fl->fl_lmops->lm_expire_lock;
> 	spin_unlock(&ctx->flc_lock);
> 	expire_lock(...);
> 	module_put(owner);
>
> Maybe there's some simpler way, but I don't see it.
>
> TESTS:
> 	- retest courtesy client behavior using file locks this time.
>
> --
>
> That's the basic idea.  I think it should work--though I may have
> overlooked something.
>
> This has us flush the laundromat workqueue while holding mutexes in a
> couple cases.  We could avoid that with a little more work, I think.
> But those mutexes should only be associated with the client requesting a
> new open/lock, and such a client shouldn't be touched by the laundromat,
> so I think we're OK.
>
> It'd also be helpful to update the info file with courtesy client
> information, as you do in your current patches.
>
> Does this make sense?
>
> --b.

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH RFC v19 06/11] NFSD: Update find_clp_in_name_tree() to handle courtesy client
  2022-04-15 14:47           ` dai.ngo
@ 2022-04-15 14:56             ` dai.ngo
  2022-04-15 15:19               ` Bruce Fields
  0 siblings, 1 reply; 29+ messages in thread
From: dai.ngo @ 2022-04-15 14:56 UTC (permalink / raw)
  To: Bruce Fields; +Cc: Chuck Lever III, Linux NFS Mailing List


On 4/15/22 7:47 AM, dai.ngo@oracle.com wrote:
>
> On 4/13/22 5:55 AM, Bruce Fields wrote:
>> On Fri, Apr 01, 2022 at 12:11:34PM -0700, dai.ngo@oracle.com wrote:
>>> On 4/1/22 8:57 AM, Chuck Lever III wrote:
>>>>> (And to be honest I'd still prefer the original approach where we 
>>>>> expire
>>>>> clients from the posix locking code and then retry.  It handles an
>>>>> additional case (the one where reboot happens after a long network
>>>>> partition), and I don't think it requires adding these new client
>>>>> states....)
>>>> The locking of the earlier approach was unworkable.
>>>>
>>>> But, I'm happy to consider that again if you can come up with a way
>>>> of handling it properly and simply.
>>> I will wait for feedback from Bruce before sending v20 with the
>>> above change.
>> OK, I'd like to tweak the design in that direction.
>>
>> I'd like to handle the case where the network goes down for a while, and
>> the server gets power-cycled before the network comes back up. I think
>> that could easily happen.  There's no reason clients couldn't reclaim
>> all their state in that case.  We should let them.
>>
>> To handle that case, we have to delay removing the client's stable
>> storage record until there's a lock conflict.  That means code that
>> checks for conflicts must be able to sleep.
>>
>> In each case (opens, locks, delegations), conflicts are first detected
>> while holding a spinlock.  So we need to unlock before waiting, and then
>> retry if necessary.
>>
>> We decided instead to remove the stable-storage record when first
>> converting a client to a courtesy client--then we can handle a conflict
>> by just setting a flag on the client that indicates it should no longer
>> be used, no need to drop any locks.
>>
>> That leaves the client in a state where it's still on a bunch of global
>> data structures, but has to be treated as if it no longer exists.  That
>> turns out to require more special handling than expected. You've shown
>> admirable persistence in handling those cases, but I'm still not
>> completely convinced this is correct.
>>
>> We could avoid that complication, and also solve the
>> server-reboot-during-network-partition problem, if we went back to the
>> first plan and allowed ourselves to sleep at the time we detect a
>> conflict.  I don't think it's that complicated.
>>
>> We end up using a lot of the same logic regardless, so don't throw away
>> the existing patches.
>>
>> My basic plan is:
>>
>> Keep the client state, but with only three values: ACTIVE, COURTESY, and
>> EXPIRABLE.
>>
>> ACTIVE is the initial state, which we return to whenever we renew.  The
>> laundromat sets COURTESY whenever a client isn't renewed for a lease
>> period.  When we run into a conflict with a lock held by a client, we
>> call
>>
>>    static bool try_to_expire_client(struct nfs4_client *clp)
>>    {
>>     return COURTESY == cmpxchg(&clp->cl_state, COURTESY, EXPIRABLE);
>>    }
>>
>> If it returns true, that tells us the client was a courtesy client.  We
>> then call queue_work(laundry_wq, &nn->laundromat_work) to tell the
>> laundromat to actually expire the client.  Then if needed we can drop
>> locks, wait for the laundromat to do the work with
>> flush_workqueue(laundry_wq), and retry.
>>
>> All the EXPIRABLE state does is tell the laundromat to expire this
>> client.  It does *not* prevent the client from being renewed and
>> acquiring new locks--if that happens before the laundromat gets to the
>> client, that's fine, we let it return to ACTIVE state and if someone
>> retries the conflicting lock they'll just get a denial.
>>
>> Here's a suggested rough patch ordering.  If you want to go above and
>> beyond, I also suggest some tests that should pass after each step:
>>
>>
>> PATCH 1
>> -------
>>
>> Implement courtesy behavior *only* for clients that have
>> delegations, but no actual opens or locks:
>
> we can do a close(2) so that the only remaining state is the
> delegation state. However, this causes a problem for doing exactly
> what you want in patch 1.
>
>>
>> Define new cl_state field with values ACTIVE, COURTESY, and EXPIRABLE.
>> Set to ACTIVE on renewal.  Modify the laundromat so that instead of
>> expiring any client that's too old, it first checks if a client has
>> state consisting only of unconflicted delegations, and, if so, it sets
>> COURTESY.
>>
>> Define try_to_expire_client as above.  In nfsd_break_deleg_cb, call
>> try_to_expire_client and queue_work.  (But also continue scheduling the
>> recall as we do in the current code, there's no harm to that.)
>>
>> Modify the laundromat to try to expire old clients with EXPIRABLE set.
>>
>> TESTS:
>>     - Establish a client, open a file, get a delegation, close the
>>       file, wait 2 lease periods, verify that you can still use the
>>       delegation.
>
> From user space, how do we force the client to use the delegation
> state to read the file *after* doing a close(2)?

I guess we can write a new pynfs test to do this, but I'd like to avoid
it if possible.

-Dai
  

>
> In my testing, once the read delegation is granted, the Linux client
> uses the delegation state to read the file. So the test can do open(2),
> read some of the file (e.g. with more(1)), wait 2 lease periods, then
> read again and verify it works as expected.
>
> Can we leave the open state valid for this patch, and not support
> share reservation conflicts until patch 2?
>
> -Dai
>
>>     - Establish a client, open a file, get a delegation, close the
>>       file, wait 2 lease periods, establish a second client, request
>>       a conflicting open, verify that the open succeeds and that the
>>       first client is no longer able to use its delegation.
>>
>>
>> PATCH 2
>> -------
>>
>> Extend courtesy client behavior to clients that have opens or
>> delegations, but no locks:
>>
>> Modify the laundromat to set COURTESY on old clients with state
>> consisting only of opens or unconflicted delegations.
>>
>> Add in nfs4_resolve_deny_conflicts_locked and friends as in your patch
>> "Update nfs4_get_vfs_file()...", but in the case of a conflict, call
>> try_to_expire_client and queue_work(), then modify e.g.
>> nfs4_get_vfs_file to flush_workqueue() and then retry after unlocking
>> fi_lock.
>>
>> TESTS:
>>     - establish a client, open a file, wait 2 lease periods, verify
>>       that you can still use the open stateid.
>>     - establish a client, open a file, wait 2 lease periods,
>>       establish a second client, request an open with a share mode
>>       conflicting with the first open, verify that the open succeeds
>>       and that first client is no longer able to use its open.
>>
>> PATCH 3
>> -------
>>
>> Minor tweak to prevent the laundromat from being freed out from
>> under a thread processing a conflicting lock:
>>
>> Create and destroy the laundromat workqueue in init_nfsd/exit_nfsd
>> instead of where it's done currently.
>>
>> (That makes the laundromat's lifetime longer than strictly necessary.
>> We could do better with a little more work; I think this is OK for now.)
>>
>> TESTS:
>>     - just rerun any regression tests; this patch shouldn't change
>>       behavior.
>>
>> PATCH 4
>> -------
>>
>> Extend courtesy client behavior to any client with state, including
>> locks:
>>
>> Modify the laundromat to set COURTESY on any old client with state.
>>
>> Add two new lock manager callbacks:
>>
>>     void * (*lm_lock_expirable)(struct file_lock *);
>>     bool (*lm_expire_lock)(void *);
>>
>> If lm_lock_expirable() is called and returns non-NULL, posix_lock_inode
>> should drop flc_lock, call lm_expire_lock() with the value returned from
>> lm_lock_expirable, and then restart the loop over flc_posix from the
>> beginning.
>>
>> For now, nfsd's lm_lock_expirable will basically just be
>>
>>     if (try_to_expire_client()) {
>>         queue_work()
>>         return get_net();
>>     }
>>     return NULL;
>>
>> and lm_expire_lock will:
>>
>>     flush_workqueue()
>>     put_net()
>>
>> One more subtlety: the moment we drop the flc_lock, it's possible
>> another task could race in and free it.  Worse, the nfsd module could be
>> removed entirely--so nfsd's lm_expire_lock code could disappear out from
>> under us.  To prevent this, I think we need to add a struct module
>> *owner field to struct lock_manager_operations, and use it like:
>>
>>     owner = fl->fl_lmops->owner;
>>     __module_get(owner);
>>     expire_lock = fl->fl_lmops->lm_expire_lock;
>>     spin_unlock(&ctx->flc_lock);
>>     expire_lock(...);
>>     module_put(owner);
>>
>> Maybe there's some simpler way, but I don't see it.
>>
>> TESTS:
>>     - retest courtesy client behavior using file locks this time.
>>
>> -- 
>>
>> That's the basic idea.  I think it should work--though I may have
>> overlooked something.
>>
>> This has us flush the laundromat workqueue while holding mutexes in a
>> couple cases.  We could avoid that with a little more work, I think.
>> But those mutexes should only be associated with the client requesting a
>> new open/lock, and such a client shouldn't be touched by the laundromat,
>> so I think we're OK.
>>
>> It'd also be helpful to update the info file with courtesy client
>> information, as you do in your current patches.
>>
>> Does this make sense?
>>
>> --b.

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH RFC v19 06/11] NFSD: Update find_clp_in_name_tree() to handle courtesy client
  2022-04-15 14:56             ` dai.ngo
@ 2022-04-15 15:19               ` Bruce Fields
  2022-04-15 15:36                 ` dai.ngo
  2022-04-15 19:53                 ` Bruce Fields
  0 siblings, 2 replies; 29+ messages in thread
From: Bruce Fields @ 2022-04-15 15:19 UTC (permalink / raw)
  To: dai.ngo; +Cc: Chuck Lever III, Linux NFS Mailing List

On Fri, Apr 15, 2022 at 07:56:06AM -0700, dai.ngo@oracle.com wrote:
> On 4/15/22 7:47 AM, dai.ngo@oracle.com wrote:
> > From user space, how do we force the client to use the delegation
> > state to read the file *after* doing a close(2)?
> 
> I guess we can write a new pynfs test to do this, but I'd like to avoid
> it if possible.

Right, this would require pynfs tests.  If you don't think they'd be
that bad.  But if you don't want to do tests for each step, I think
that's not the end of the world, up to you.

--b.

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH RFC v19 06/11] NFSD: Update find_clp_in_name_tree() to handle courtesy client
  2022-04-15 15:19               ` Bruce Fields
@ 2022-04-15 15:36                 ` dai.ngo
  2022-04-15 19:53                 ` Bruce Fields
  1 sibling, 0 replies; 29+ messages in thread
From: dai.ngo @ 2022-04-15 15:36 UTC (permalink / raw)
  To: Bruce Fields; +Cc: Chuck Lever III, Linux NFS Mailing List

On 4/15/22 8:19 AM, Bruce Fields wrote:
> On Fri, Apr 15, 2022 at 07:56:06AM -0700, dai.ngo@oracle.com wrote:
>> On 4/15/22 7:47 AM, dai.ngo@oracle.com wrote:
>>> From user space, how do we force the client to use the delegation
>>> state to read the file *after* doing a close(2)?
>> I guess we can write a new pynfs test to do this, but I'd like to avoid
>> it if possible.
> Right, this would require pynfs tests.  If you don't think they'd be
> that bad.  But if you don't want to do tests for each step, I think
> that's not the end of the world, up to you.

OK, thanks Bruce. I'll leave the open state alone and not do anything
with share reservations until patch 2.

-Dai


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH RFC v19 06/11] NFSD: Update find_clp_in_name_tree() to handle courtesy client
  2022-04-15 15:19               ` Bruce Fields
  2022-04-15 15:36                 ` dai.ngo
@ 2022-04-15 19:53                 ` Bruce Fields
  1 sibling, 0 replies; 29+ messages in thread
From: Bruce Fields @ 2022-04-15 19:53 UTC (permalink / raw)
  To: dai.ngo; +Cc: Chuck Lever III, Linux NFS Mailing List

On Fri, Apr 15, 2022 at 11:19:59AM -0400, Bruce Fields wrote:
> On Fri, Apr 15, 2022 at 07:56:06AM -0700, dai.ngo@oracle.com wrote:
> > On 4/15/22 7:47 AM, dai.ngo@oracle.com wrote:
> > > From user space, how do we force the client to use the delegation
> > > state to read the file *after* doing a close(2)?
> > 
> > I guess we can write a new pynfs test to do this, but I'd like to avoid
> > it if possible.
> 
> Right, this would require pynfs tests.  If you don't think they'd be
> that bad.

(Sorry, I meant to write "I don't think they'd be that bad"!  Anyway.)

> But if you don't want to do tests for each step, I think
> that's not the end of the world, up to you.
> 
> --b.

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH RFC v19 06/11] NFSD: Update find_clp_in_name_tree() to handle courtesy client
  2022-04-13 12:55         ` Bruce Fields
  2022-04-13 18:28           ` dai.ngo
  2022-04-15 14:47           ` dai.ngo
@ 2022-04-17 19:07           ` Bruce Fields
  2022-04-18  1:18             ` dai.ngo
  2 siblings, 1 reply; 29+ messages in thread
From: Bruce Fields @ 2022-04-17 19:07 UTC (permalink / raw)
  To: dai.ngo
  Cc: Chuck Lever III, Jeff Layton, Al Viro, Linux NFS Mailing List,
	linux-fsdevel

On Wed, Apr 13, 2022 at 08:55:50AM -0400, Bruce Fields wrote:
> On Fri, Apr 01, 2022 at 12:11:34PM -0700, dai.ngo@oracle.com wrote:
> > On 4/1/22 8:57 AM, Chuck Lever III wrote:
> > >>(And to be honest I'd still prefer the original approach where we expire
> > >>clients from the posix locking code and then retry.  It handles an
> > >>additional case (the one where reboot happens after a long network
> > >>partition), and I don't think it requires adding these new client
> > >>states....)
> > >The locking of the earlier approach was unworkable.
> > >
> > >But, I'm happy to consider that again if you can come up with a way
> > >of handling it properly and simply.
> > 
> > I will wait for feedback from Bruce before sending v20 with the
> > above change.
> 
> OK, I'd like to tweak the design in that direction.
> 
> I'd like to handle the case where the network goes down for a while, and
> the server gets power-cycled before the network comes back up.  I think
> that could easily happen.  There's no reason clients couldn't reclaim
> all their state in that case.  We should let them.
> 
> To handle that case, we have to delay removing the client's stable
> storage record until there's a lock conflict.  That means code that
> checks for conflicts must be able to sleep.
> 
> In each case (opens, locks, delegations), conflicts are first detected
> while holding a spinlock.  So we need to unlock before waiting, and then
> retry if necessary.
> 
> We decided instead to remove the stable-storage record when first
> converting a client to a courtesy client--then we can handle a conflict
> by just setting a flag on the client that indicates it should no longer
> be used, no need to drop any locks.
> 
> That leaves the client in a state where it's still on a bunch of global
> data structures, but has to be treated as if it no longer exists.  That
> turns out to require more special handling than expected.  You've shown
> admirable persistence in handling those cases, but I'm still not
> completely convinced this is correct.
> 
> We could avoid that complication, and also solve the
> server-reboot-during-network-partition problem, if we went back to the
> first plan and allowed ourselves to sleep at the time we detect a
> conflict.  I don't think it's that complicated.
> 
> We end up using a lot of the same logic regardless, so don't throw away
> the existing patches.
> 
> My basic plan is:
> 
> Keep the client state, but with only three values: ACTIVE, COURTESY, and
> EXPIRABLE.
> 
> ACTIVE is the initial state, which we return to whenever we renew.  The
> laundromat sets COURTESY whenever a client isn't renewed for a lease
> period.  When we run into a conflict with a lock held by a client, we
> call
> 
>   static bool try_to_expire_client(struct nfs4_client *clp)
>   {
> 	return COURTESY == cmpxchg(&clp->cl_state, COURTESY, EXPIRABLE);
>   }
> 
> If it returns true, that tells us the client was a courtesy client.  We
> then call queue_work(laundry_wq, &nn->laundromat_work) to tell the
> laundromat to actually expire the client.  Then if needed we can drop
> locks, wait for the laundromat to do the work with
> flush_workqueue(laundry_wq), and retry.
> 
> All the EXPIRABLE state does is tell the laundromat to expire this
> client.  It does *not* prevent the client from being renewed and
> acquiring new locks--if that happens before the laundromat gets to the
> client, that's fine, we let it return to ACTIVE state and if someone
> retries the conflicting lock they'll just get a denial.
> 
> Here's a suggested rough patch ordering.  If you want to go above and
> beyond, I also suggest some tests that should pass after each step:
> 
> 
> PATCH 1
> -------
> 
> Implement courtesy behavior *only* for clients that have
> delegations, but no actual opens or locks:
> 
> Define new cl_state field with values ACTIVE, COURTESY, and EXPIRABLE.
> Set to ACTIVE on renewal.  Modify the laundromat so that instead of
> expiring any client that's too old, it first checks if a client has
> state consisting only of unconflicted delegations, and, if so, it sets
> COURTESY.
> 
> Define try_to_expire_client as above.  In nfsd_break_deleg_cb, call
> try_to_expire_client and queue_work.  (But also continue scheduling the
> recall as we do in the current code, there's no harm to that.)
> 
> Modify the laundromat to try to expire old clients with EXPIRABLE set.
> 
> TESTS:
> 	- Establish a client, open a file, get a delegation, close the
> 	  file, wait 2 lease periods, verify that you can still use the
> 	  delegation.
> 	- Establish a client, open a file, get a delegation, close the
> 	  file, wait 2 lease periods, establish a second client, request
> 	  a conflicting open, verify that the open succeeds and that the
> 	  first client is no longer able to use its delegation.
> 
> 
> PATCH 2
> -------
> 
> Extend courtesy client behavior to clients that have opens or
> delegations, but no locks:
> 
> Modify the laundromat to set COURTESY on old clients with state
> consisting only of opens or unconflicted delegations.
> 
> Add in nfs4_resolve_deny_conflicts_locked and friends as in your patch
> "Update nfs4_get_vfs_file()...", but in the case of a conflict, call
> try_to_expire_client and queue_work(), then modify e.g.
> nfs4_get_vfs_file to flush_workqueue() and then retry after unlocking
> fi_lock.
> 
> TESTS:
> 	- establish a client, open a file, wait 2 lease periods, verify
> 	  that you can still use the open stateid.
> 	- establish a client, open a file, wait 2 lease periods,
> 	  establish a second client, request an open with a share mode
> 	  conflicting with the first open, verify that the open succeeds
> 	  and that first client is no longer able to use its open.
> 
> PATCH 3
> -------
> 
> Minor tweak to prevent the laundromat from being freed out from
> under a thread processing a conflicting lock:
> 
> Create and destroy the laundromat workqueue in init_nfsd/exit_nfsd
> instead of where it's done currently.
> 
> (That makes the laundromat's lifetime longer than strictly necessary.
> We could do better with a little more work; I think this is OK for now.)
> 
> TESTS:
> 	- just rerun any regression tests; this patch shouldn't change
> 	  behavior.
> 
> PATCH 4
> -------
> 
> Extend courtesy client behavior to any client with state, including
> locks:
> 
> Modify the laundromat to set COURTESY on any old client with state.
> 
> Add two new lock manager callbacks:
> 
> 	void * (*lm_lock_expirable)(struct file_lock *);
> 	bool (*lm_expire_lock)(void *);
> 
> If lm_lock_expirable() is called and returns non-NULL, posix_lock_inode
> should drop flc_lock, call lm_expire_lock() with the value returned from
> lm_lock_expirable, and then restart the loop over flc_posix from the
> beginning.
> 
> For now, nfsd's lm_lock_expirable will basically just be
> 
> 	if (try_to_expire_client()) {
> 		queue_work()
> 		return get_net();

Correction: I forgot that the laundromat is global, not per-net.  So, we
can skip the put_net/get_net.  Also, lm_lock_expirable can just return
bool instead of void *, and lm_expire_lock needs no arguments.
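
[With that correction, the hooks might end up looking like this; a
sketch only, which assumes fl_owner of an nfsd lock points to the
nfs4_lockowner, as in the existing lm_get_owner code:

	/* in struct lock_manager_operations: */
	bool (*lm_lock_expirable)(struct file_lock *cfl);
	void (*lm_expire_lock)(void);
	struct module *owner;

	/* possible nfsd implementation: */
	static bool nfsd4_lm_lock_expirable(struct file_lock *cfl)
	{
		struct nfs4_lockowner *lo = (struct nfs4_lockowner *)cfl->fl_owner;
		struct nfs4_client *clp = lo->lo_owner.so_client;
		struct nfsd_net *nn;

		if (try_to_expire_client(clp)) {
			nn = net_generic(clp->net, nfsd_net_id);
			queue_work(laundry_wq, &nn->laundromat_work);
			return true;
		}
		return false;
	}

	static void nfsd4_lm_expire_lock(void)
	{
		flush_workqueue(laundry_wq);
	}
]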

--b.

> 	}
> 	return NULL;
> 
> and lm_expire_lock will:
> 
> 	flush_workqueue()
> 	put_net()
> 
> One more subtlety: the moment we drop the flc_lock, it's possible
> another task could race in and free it.  Worse, the nfsd module could be
> removed entirely--so nfsd's lm_expire_lock code could disappear out from
> under us.  To prevent this, I think we need to add a struct module
> *owner field to struct lock_manager_operations, and use it like:
> 
> 	owner = fl->fl_lmops->owner;
> 	__module_get(owner);
> 	expire_lock = fl->fl_lmops->lm_expire_lock;
> 	spin_unlock(&ctx->flc_lock);
> 	expire_lock(...);
> 	module_put(owner);
> 
> Maybe there's some simpler way, but I don't see it.
> 
> TESTS:
> 	- retest courtesy client behavior using file locks this time.
> 
> --
> 
> That's the basic idea.  I think it should work--though I may have
> overlooked something.
> 
> This has us flush the laundromat workqueue while holding mutexes in a
> couple cases.  We could avoid that with a little more work, I think.
> But those mutexes should only be associated with the client requesting a
> new open/lock, and such a client shouldn't be touched by the laundromat,
> so I think we're OK.
> 
> It'd also be helpful to update the info file with courtesy client
> information, as you do in your current patches.
> 
> Does this make sense?
> 
> --b.

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH RFC v19 06/11] NFSD: Update find_clp_in_name_tree() to handle courtesy client
  2022-04-17 19:07           ` Bruce Fields
@ 2022-04-18  1:18             ` dai.ngo
  0 siblings, 0 replies; 29+ messages in thread
From: dai.ngo @ 2022-04-18  1:18 UTC (permalink / raw)
  To: Bruce Fields
  Cc: Chuck Lever III, Jeff Layton, Al Viro, Linux NFS Mailing List,
	linux-fsdevel


On 4/17/22 12:07 PM, Bruce Fields wrote:
> On Wed, Apr 13, 2022 at 08:55:50AM -0400, Bruce Fields wrote:
>> On Fri, Apr 01, 2022 at 12:11:34PM -0700, dai.ngo@oracle.com wrote:
>>> On 4/1/22 8:57 AM, Chuck Lever III wrote:
>>>>> (And to be honest I'd still prefer the original approach where we expire
>>>>> clients from the posix locking code and then retry.  It handles an
>>>>> additional case (the one where reboot happens after a long network
>>>>> partition), and I don't think it requires adding these new client
>>>>> states....)
>>>> The locking of the earlier approach was unworkable.
>>>>
>>>> But, I'm happy to consider that again if you can come up with a way
>>>> of handling it properly and simply.
>>> I will wait for feedback from Bruce before sending v20 with the
>>> above change.
>> OK, I'd like to tweak the design in that direction.
>>
>> I'd like to handle the case where the network goes down for a while, and
>> the server gets power-cycled before the network comes back up.  I think
>> that could easily happen.  There's no reason clients couldn't reclaim
>> all their state in that case.  We should let them.
>>
>> To handle that case, we have to delay removing the client's stable
>> storage record until there's a lock conflict.  That means code that
>> checks for conflicts must be able to sleep.
>>
>> In each case (opens, locks, delegations), conflicts are first detected
>> while holding a spinlock.  So we need to unlock before waiting, and then
>> retry if necessary.
>>
>> We decided instead to remove the stable-storage record when first
>> converting a client to a courtesy client--then we can handle a conflict
>> by just setting a flag on the client that indicates it should no longer
>> be used, no need to drop any locks.
>>
>> That leaves the client in a state where it's still on a bunch of global
>> data structures, but has to be treated as if it no longer exists.  That
>> turns out to require more special handling than expected.  You've shown
>> admirable persistance in handling those cases, but I'm still not
>> completely convinced this is correct.
>>
>> We could avoid that complication, and also solve the
>> server-reboot-during-network-partition problem, if we went back to the
>> first plan and allowed ourselves to sleep at the time we detect a
>> conflict.  I don't think it's that complicated.
>>
>> We end up using a lot of the same logic regardless, so don't throw away
>> the existing patches.
>>
>> My basic plan is:
>>
>> Keep the client state, but with only three values: ACTIVE, COURTESY, and
>> EXPIRABLE.
>>
>> ACTIVE is the initial state, which we return to whenever we renew.  The
>> laundromat sets COURTESY whenever a client isn't renewed for a lease
>> period.  When we run into a conflict with a lock held by a client, we
>> call
>>
>>    static bool try_to_expire_client(struct nfs4_client *clp)
>>    {
>> 	return COURTESY == cmpxchg(&clp->cl_state, COURTESY, EXPIRABLE);
>>    }
>>
>> If it returns true, that tells us the client was a courtesy client.  We
>> then call queue_work(laundry_wq, &nn->laundromat_work) to tell the
>> laundromat to actually expire the client.  Then if needed we can drop
>> locks, wait for the laundromat to do the work with
>> flush_workqueue(laundry_wq), and retry.
>>
>> All the EXPIRABLE state does is tell the laundromat to expire this
>> client.  It does *not* prevent the client from being renewed and
>> acquiring new locks--if that happens before the laundromat gets to the
>> client, that's fine, we let it return to ACTIVE state and if someone
>> retries the conflicting lock they'll just get a denial.
>>
>> Here's a suggested rough patch ordering.  If you want to go above and
>> beyond, I also suggest some tests that should pass after each step:
>>
>>
>> PATCH 1
>> -------
>>
>> Implement courtesy behavior *only* for clients that have
>> delegations, but no actual opens or locks:
>>
>> Define new cl_state field with values ACTIVE, COURTESY, and EXPIRABLE.
>> Set to ACTIVE on renewal.  Modify the laundromat so that instead of
>> expiring any client that's too old, it first checks if a client has
>> state consisting only of unconflicted delegations, and, if so, it sets
>> COURTESY.
>>
>> Define try_to_expire_client as above.  In nfsd_break_deleg_cb, call
>> try_to_expire_client and queue_work.  (But also continue scheduling the
>> recall as we do in the current code, there's no harm to that.)
>>
>> Modify the laundromat to try to expire old clients with EXPIRABLE set.
>>
>> TESTS:
>> 	- Establish a client, open a file, get a delegation, close the
>> 	  file, wait 2 lease periods, verify that you can still use the
>> 	  delegation.
>> 	- Establish a client, open a file, get a delegation, close the
>> 	  file, wait 2 lease periods, establish a second client, request
>> 	  a conflicting open, verify that the open succeeds and that the
>> 	  first client is no longer able to use its delegation.
>>
>>
>> PATCH 2
>> -------
>>
>> Extend courtesy client behavior to clients that have opens or
>> delegations, but no locks:
>>
>> Modify the laundromat to set COURTESY on old clients with state
>> consisting only of opens or unconflicted delegations.
>>
>> Add in nfs4_resolve_deny_conflicts_locked and friends as in your patch
>> "Update nfs4_get_vfs_file()...", but in the case of a conflict, call
>> try_to_expire_client and queue_work(), then modify e.g.
>> nfs4_get_vfs_file to flush_workqueue() and then retry after unlocking
>> fi_lock.
>>
>> TESTS:
>> 	- establish a client, open a file, wait 2 lease periods, verify
>> 	  that you can still use the open stateid.
>> 	- establish a client, open a file, wait 2 lease periods,
>> 	  establish a second client, request an open with a share mode
>> 	  conflicting with the first open, verify that the open succeeds
>> 	  and that first client is no longer able to use its open.
>>
>> PATCH 3
>> -------
>>
>> Minor tweak to prevent the laundromat from being freed out from
>> under a thread processing a conflicting lock:
>>
>> Create and destroy the laundromat workqueue in init_nfsd/exit_nfsd
>> instead of where it's done currently.
>>
>> (That makes the laundromat's lifetime longer than strictly necessary.
>> We could do better with a little more work; I think this is OK for now.)
>>
>> TESTS:
>> 	- just rerun any regression tests; this patch shouldn't change
>> 	  behavior.
>>
>> PATCH 4
>> -------
>>
>> Extend courtesy client behavior to any client with state, including
>> locks:
>>
>> Modify the laundromat to set COURTESY on any old client with state.
>>
>> Add two new lock manager callbacks:
>>
>> 	void * (*lm_lock_expirable)(struct file_lock *);
>> 	bool (*lm_expire_lock)(void *);
>>
>> If lm_lock_expirable() is called and returns non-NULL, posix_lock_inode
>> should drop flc_lock, call lm_expire_lock() with the value returned from
>> lm_lock_expirable, and then restart the loop over flc_posix from the
>> beginning.
>>
>> For now, nfsd's lm_lock_expirable will basically just be
>>
>> 	if (try_to_expire_client()) {
>> 		queue_work()
>> 		return get_net();
> Correction: I forgot that the laundromat is global, not per-net.  So, we
> can skip the put_net/get_net.  Also, lm_lock_expirable can just return
> bool instead of void *, and lm_expire_lock needs no arguments.

okay.

-Dai

>
> --b.
>
>> 	}
>> 	return NULL;
>>
>> and lm_expire_lock will:
>>
>> 	flush_workqueue()
>> 	put_net()
>>
>> One more subtlety: the moment we drop the flc_lock, it's possible
>> another task could race in and free it.  Worse, the nfsd module could be
>> removed entirely--so nfsd's lm_expire_lock code could disappear out from
>> under us.  To prevent this, I think we need to add a struct module
>> *owner field to struct lock_manager_operations, and use it like:
>>
>> 	owner = fl->fl_lmops->owner;
>> 	__module_get(owner);
>> 	expire_lock = fl->fl_lmops->lm_expire_lock;
>> 	spin_unlock(&ctx->flc_lock);
>> 	expire_lock(...);
>> 	module_put(owner);
>>
>> Maybe there's some simpler way, but I don't see it.
>>
>> TESTS:
>> 	- retest courtesy client behavior using file locks this time.
>>
>> --
>>
>> That's the basic idea.  I think it should work--though I may have
>> overlooked something.
>>
>> This has us flush the laundromat workqueue while holding mutexes in a
>> couple cases.  We could avoid that with a little more work, I think.
>> But those mutexes should only be associated with the client requesting a
>> new open/lock, and such a client shouldn't be touched by the laundromat,
>> so I think we're OK.
>>
>> It'd also be helpful to update the info file with courtesy client
>> information, as you do in your current patches.
>>
>> Does this make sense?
>>
>> --b.

^ permalink raw reply	[flat|nested] 29+ messages in thread

end of thread, other threads:[~2022-04-18  1:18 UTC | newest]

Thread overview: 29+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-03-31 16:01 [PATCH RFC v19 0/11] NFSD: Initial implementation of NFSv4 Courteous Server Dai Ngo
2022-03-31 16:01 ` [PATCH RFC v19 01/11] fs/lock: add helper locks_owner_has_blockers to check for blockers Dai Ngo
2022-03-31 16:17   ` Chuck Lever III
2022-03-31 16:29     ` dai.ngo
2022-03-31 16:02 ` [PATCH RFC v19 02/11] NFSD: Add courtesy client state, macro and spinlock to support courteous server Dai Ngo
2022-03-31 16:02 ` [PATCH RFC v19 03/11] NFSD: Add lm_lock_expired call out Dai Ngo
2022-03-31 16:02 ` [PATCH RFC v19 04/11] NFSD: Update nfsd_breaker_owns_lease() to handle courtesy clients Dai Ngo
2022-03-31 16:02 ` [PATCH RFC v19 05/11] NFSD: Update nfs4_get_vfs_file() to handle courtesy client Dai Ngo
2022-03-31 16:02 ` [PATCH RFC v19 06/11] NFSD: Update find_clp_in_name_tree() " Dai Ngo
2022-04-01 15:21   ` J. Bruce Fields
2022-04-01 15:57     ` Chuck Lever III
2022-04-01 19:11       ` dai.ngo
2022-04-13 12:55         ` Bruce Fields
2022-04-13 18:28           ` dai.ngo
2022-04-13 18:42             ` Bruce Fields
2022-04-15 14:47           ` dai.ngo
2022-04-15 14:56             ` dai.ngo
2022-04-15 15:19               ` Bruce Fields
2022-04-15 15:36                 ` dai.ngo
2022-04-15 19:53                 ` Bruce Fields
2022-04-17 19:07           ` Bruce Fields
2022-04-18  1:18             ` dai.ngo
2022-03-31 16:02 ` [PATCH RFC v19 07/11] NFSD: Update find_in_sessionid_hashtbl() " Dai Ngo
2022-03-31 16:02 ` [PATCH RFC v19 08/11] NFSD: Update find_client_in_id_table() " Dai Ngo
2022-03-31 16:02 ` [PATCH RFC v19 09/11] NFSD: Refactor nfsd4_laundromat() Dai Ngo
2022-03-31 16:02 ` [PATCH RFC v19 10/11] NFSD: Update laundromat to handle courtesy clients Dai Ngo
2022-03-31 16:02 ` [PATCH RFC v19 11/11] NFSD: Show state of courtesy clients in client info Dai Ngo
2022-04-02 10:35 ` [PATCH RFC v19 0/11] NFSD: Initial implementation of NFSv4 Courteous Server Jeff Layton
2022-04-02 15:10   ` Chuck Lever III
