* [PATCH RFC v24 0/7] NFSD: Initial implementation of NFSv4 Courteous Server
@ 2022-05-01 17:38 Dai Ngo
  2022-05-01 17:38 ` [PATCH RFC v24 1/7] NFSD: add courteous server support for thread with only delegation Dai Ngo
                   ` (7 more replies)
  0 siblings, 8 replies; 16+ messages in thread
From: Dai Ngo @ 2022-05-01 17:38 UTC (permalink / raw)
  To: chuck.lever, bfields; +Cc: jlayton, viro, linux-nfs, linux-fsdevel

Hi Chuck, Bruce

This series of patches implements the NFSv4 Courteous Server.

A server which does not immediately expunge the state on lease expiration
is known as a Courteous Server.  A Courteous Server continues to recognize
previously generated state tokens as valid until conflict arises between
the expired state and the requests from another client, or the server
reboots.
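
For context, here is a minimal illustrative sketch (not part of the patches)
of the three client states the series introduces and the basic expiry
decision. The state names and the 24-hour timeout come from the patches;
courtesy_should_expire() is a hypothetical helper used only for illustration:

	enum {
		NFSD4_ACTIVE = 0,	/* confirmed, lease still being renewed */
		NFSD4_COURTESY,		/* lease expired, state retained */
		NFSD4_EXPIRABLE,	/* conflict seen, expire on next laundromat run */
	};

	/* hypothetical: should the laundromat reap this courtesy client? */
	static bool courtesy_should_expire(unsigned int cl_state, bool has_conflict,
					   time64_t secs_since_last_renew)
	{
		if (cl_state == NFSD4_EXPIRABLE)
			return true;
		/* courtesy state lasts until a conflict or a 24-hour timeout */
		return has_conflict || secs_since_last_renew >= 24 * 60 * 60;
	}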

v2:

. add new callback, lm_expire_lock, to lock_manager_operations to
  allow the lock manager to take appropriate action on a conflicting lock.

. handle conflicts of NFSv4 locks with NFSv3/NLM and local locks.

. expire the courtesy client after 24 hours if it has not reconnected.

. do not allow expired client to become courtesy client if there are
  waiters for client's locks.

. modify client_info_show to show courtesy client and seconds from
  last renew.

. fix a problem with the NFSv4.1 server where it keeps returning
  SEQ4_STATUS_CB_PATH_DOWN in successful SEQUENCE replies after
  the courtesy client reconnects, causing the client to keep sending
  BCTS requests to the server.

v3:

. modified posix_test_lock to check and resolve conflicting locks
  to handle NLM TEST and NFSv4 LOCKT requests.

. separate out fix for back channel stuck in SEQ4_STATUS_CB_PATH_DOWN.

v4:

. rework nfsd_check_courtesy to avoid a deadlock between fl_lock and
  client_lock by asking the laundromat thread to destroy the courtesy client.

. handle NFSv4 share reservation conflicts with courtesy client. This
  includes conflicts between access mode and deny mode and vice versa.

. drop the patch for back channel stuck in SEQ4_STATUS_CB_PATH_DOWN.

v5:

. fix recursive locking of file_rwsem from posix_lock_file. 

. retest with LOCKDEP enabled.

v6:

. merge with 5.15-rc7

. fix a bug in nfs4_check_deny_bmap that did not check for a matching
  nfs4_file before checking for access/deny conflicts. This bug caused
  pynfs OPEN18 to fail because the server took too long to release the
  state of many non-conflicting clients.

. enhance the share reservation conflict handler to handle the case where
  a large number of conflicting courtesy clients need to be expired.
  The first 100 clients are expired synchronously and the rest are
  expired in the background by the laundromat, with NFS4ERR_DELAY
  returned to the NFS client. This is needed to prevent the
  NFS client from timing out waiting for the reply.

v7:

. Fix race condition in posix_test_lock and posix_lock_inode after
  dropping spinlock.

. Enhance nfsd4_fl_expire_lock to work with the new lm_expire_lock
  callback.

. Always resolve share reservation conflicts asynchronously.

. Fix bug in nfs4_laundromat where the spinlock was not held when
  scanning cl_ownerstr_hashtbl.

. Fix bug in nfs4_laundromat where idr_get_next was called
  with an incorrect 'id'.

. Merge nfs4_destroy_courtesy_client into nfsd4_fl_expire_lock.

v8:

. Fix warning in nfsd4_fl_expire_lock reported by test robot.

v9:

. Simplify the lm_expire_lock API by (1) removing the 'testonly' flag
  and (2) specifying the return value as true/false to indicate
  whether the conflict was successfully resolved.

. Rework nfsd4_fl_expire_lock to mark client with
  NFSD4_DESTROY_COURTESY_CLIENT then tell the laundromat to expire
  the client in the background.

. Add a spinlock in nfs4_client to synchronize access to the
  NFSD4_COURTESY_CLIENT and NFSD4_DESTROY_COURTESY_CLIENT flags to
  handle race conditions when resolving lock and share reservation
  conflicts.

. A courtesy client that is marked NFSD4_DESTROY_COURTESY_CLIENT
  is now considered 'dead', waiting for the laundromat to expire
  it. This client is no longer allowed to use its state if it
  reconnects before the laundromat finishes expiring it.

  For a v4.1 client, the detection is done in the processing of the
  SEQUENCE op, which returns NFS4ERR_BAD_SESSION to force the client
  to re-establish a new clientid and session.
  For a v4.0 client, the detection is done in the processing of the
  RENEW and state-related ops, which return NFS4ERR_EXPIRE to force
  the client to re-establish a new clientid.

v10:

  Resolve the deadlock in v9 by avoiding taking cl_client and
  cl_cs_lock together. The laundromat needs to determine whether
  the expired client has any state and also has no blockers on
  its locks. Both of these conditions are allowed to change after
  the laundromat transitions an expired client to a courtesy client.
  When this happens, the laundromat will detect it on the next
  run and expire the courtesy client.

  Remove the client's persistent record before marking it as COURTESY_CLIENT
  and add the persistent record back before clearing the COURTESY_CLIENT
  flag, to allow the courtesy client to transition back to a normal client
  and continue to use its state.

  Lock/delegation/share reservation conflicts with a courtesy client are
  resolved by marking the courtesy client as DESTROY_COURTESY_CLIENT,
  effectively disabling it, then allowing the current request to proceed
  immediately.
  
  A courtesy client marked as DESTROY_COURTESY_CLIENT is not allowed
  to reconnect to reuse its state. It is expired by the laundromat
  asynchronously in the background.

  Move the processing of expired clients from nfs4_laundromat to a
  separate function, nfs4_get_client_reaplist, which creates the
  reaplist and also processes courtesy clients.

  Update Documentation/filesystems/locking.rst to include new
  lm_lock_conflict call.

  Modify leases_conflict to call lm_breaker_owns_lease only if
  there is a real conflict.  This is to allow the lock manager to
  resolve the delegation conflict if possible.

v11:

  Add comment for lm_lock_conflict callback.

  Replace static const courtesy_client_expiry with macro.

  Remove courtesy_clnt argument from find_in_sessionid_hashtbl.
  Callers use the nfs4_client->cl_cs_client boolean to determine whether
  it's the courtesy client and take appropriate action.

  Rename NFSD4_COURTESY_CLIENT and NFSD4_DESTROY_COURTESY_CLIENT
  with NFSD4_CLIENT_COURTESY and NFSD4_CLIENT_DESTROY_COURTESY.

v12:

  Remove unnecessary comment in nfs4_get_client_reaplist.

  Replace nfs4_client->cl_cs_client boolean with
  NFSD4_CLIENT_COURTESY_CLNT flag.

  Remove courtesy_clnt argument from find_client_in_id_table and
  find_clp_in_name_tree. Callers use NFSD4_CLIENT_COURTESY_CLNT to
  determine whether it's the courtesy client and take appropriate action.

v13:

  Merge with 5.17-rc3.

  Clean up Documentation/filesystems/locking.rst: replace i_lock
  with flc_lock, update the APIs that use flc_lock.

  Rename lm_lock_conflict to lm_lock_expired().

  Remove the comment for the lm_lock_expired API in lock_manager_operations;
  the same information is in the patch description.

  Update commit messages of 4/4.

  Add some comment for NFSD4_CLIENT_COURTESY_CLNT.

  Add nfsd4_discard_courtesy_clnt() to eliminate duplicated code for
  discarding a courtesy client (setting NFSD4_DESTROY_COURTESY_CLIENT).

v14:

. merge with Chuck's public for-next branch.

. remove courtesy_client_expiry, use client's last renew time.

. simplify comment of nfs4_check_access_deny_bmap.

. add comment about race condition in nfs4_get_client_reaplist.

. add list_del when walking cslist in nfs4_get_client_reaplist.

. remove duplicate INIT_LIST_HEAD(&reaplist) from nfs4_laundromat

. Modify find_confirmed_client and find_confirmed_client_by_name
  to detect courtesy client and destroy it.

. refactor lookup_clientid to use find_client_in_id_table
  directly instead of find_confirmed_client.

. refactor nfsd4_setclientid to call find_clp_in_name_tree
  directly instead of find_confirmed_client_by_name.

. remove comment of NFSD4_CLIENT_COURTESY.

. replace NFSD4_CLIENT_DESTROY_COURTESY with NFSD4_CLIENT_EXPIRED.

. replace NFSD4_CLIENT_COURTESY_CLNT with NFSD4_CLIENT_RECONNECTED.

v15:

. add helper locks_has_blockers_locked in fs.h to check for
  lock blockers

. rename nfs4_conflict_clients to nfs4_resolve_deny_conflicts_locked

. update nfs4_upgrade_open() to handle courtesy clients.

. add helper nfs4_check_and_expire_courtesy_client and
  nfs4_is_courtesy_client_expired to deduplicate some code.

. update nfs4_anylock_blocker:
   . replace list_for_each_entry_safe with list_for_each_entry
   . break nfs4_anylock_blocker into 2 smaller functions.

. update nfs4_get_client_reaplist:
   . remove unnecessary comments
   . acquire cl_cs_lock before setting NFSD4_CLIENT_COURTESY flag

. update client_info_show to show 'time since last renew: 00:00:38'
  instead of 'seconds from last renew: 38'.

v16:

. update client_info_show to display 'status' as
  'confirmed/unconfirmed/courtesy'

. replace helper locks_has_blockers_locked in fs.h in v15 with new
  locks_owner_has_blockers call in fs/locks.c

. update nfs4_lockowner_has_blockers to use locks_owner_has_blockers

. move nfs4_check_and_expire_courtesy_client from 5/11 to 4/11

. remove unnecessary check for NULL clp in find_in_sessionid_hashtbl

. fix typos in commit messages

v17:

. replace flags used for courtesy client with enum courtesy_client_state

. add state table in nfsd/state.h

. make nfsd4_expire_courtesy_clnt, nfsd4_discard_courtesy_clnt and
  nfsd4_courtesy_clnt_expired static inline.

. update nfsd_breaker_owns_lease to use dl->dl_stid.sc_client directly

. fix kernel test robot warning when CONFIG_FILE_LOCKING not defined.

v18:

. modify 0005-NFSD-Update-nfs4_get_vfs_file-to-handle-courtesy-cli.patch to:

    . remove nfs4_check_access_deny_bmap, fold this functionality
      into nfs4_resolve_deny_conflicts_locked by making use of
      bmap_to_share_mode.

    . move nfs4_resolve_deny_conflicts_locked into nfs4_file_get_access
      and nfs4_file_check_deny. 

v19:

. modify 0002-NFSD-Add-courtesy-client-state-macro-and-spinlock-to.patch to

    . add NFSD4_CLIENT_ACTIVE

    . redo Courtesy client state table

. modify 0007-NFSD-Update-find_in_sessionid_hashtbl-to-handle-cour.patch and
  0008-NFSD-Update-find_client_in_id_table-to-handle-courte.patch to:

    . set cl_cs_client_state to NFSD4_CLIENT_ACTIVE when reactivating the
      courtesy client

v20:

. modify 0006-NFSD-Update-find_clp_in_name_tree-to-handle-courtesy.patch to:
	. add nfsd4_discard_reconnect_clnt
	. replace call to nfsd4_discard_courtesy_clnt with
	  nfsd4_discard_reconnect_clnt

. modify 0007-NFSD-Update-find_in_sessionid_hashtbl-to-handle-cour.patch to:
	. replace call to nfsd4_discard_courtesy_clnt with
	  nfsd4_discard_reconnect_clnt
          
. modify 0008-NFSD-Update-find_client_in_id_table-to-handle-courte.patch
	. replace call to nfsd4_discard_courtesy_clnt with
	  nfsd4_discard_reconnect_clnt

v21:

. merged with 5.18.0-rc3

. Redo based on Bruce's suggestion: break the patches up by functionality,
  and don't remove the client record of a courtesy client until the client
  is actually expired.

  0001: courteous server framework with support for clients with delegation only.
        This patch also handles COURTESY and EXPIRABLE reconnect.
        Conflict is resolved by setting the courtesy client to EXPIRABLE, letting
        the laundromat expire the client on its next run, and returning
        NFS4ERR_DELAY to the OPEN request.

  0002: add support for opens/share reservations to the courteous server.
        Conflict is resolved by setting the courtesy client to EXPIRABLE, letting
        the laundromat expire the client on its next run, and returning
        NFS4ERR_DELAY to the OPEN request.

  0003: move creation/destruction of the laundromat workqueue from nfs4_state_start
        and nfs4_state_shutdown_net to init_nfsd and exit_nfsd.

  0004: fs/lock: add locks_owner_has_blockers helper
  
  0005: add 2 callbacks to lock_manager_operations for resolving lock conflict
  
  0006: add support for locks to the courteous server, making use of 0004 and 0005.
        Conflict is resolved by setting the courtesy client to EXPIRABLE, running
        the laundromat immediately and waiting for it to complete before returning
        to the fs/lock code to recheck the lock list from the beginning.

        NOTE: I could not get queue_work/queue_delayed_work and flush_workqueue
        to work as expected; I had to use mod_delayed_work and flush_workqueue
        to get the laundromat to run immediately (see the short sketch after
        this list).

        When we check for blockers in nfs4_anylock_blockers, we do not check
        for clients with delegation conflicts. This is because we already hold
        the client_lock, and checking for a delegation conflict would require the
        state_lock and a scan of the del_recall_lru list each time. To avoid this
        overhead and a potential deadlock (the ordering of these locks is unclear),
        we instead check for a COURTESY client whose delegation is being recalled
        and set it to EXPIRABLE later, in nfs4_laundromat.

  0007: show state of courtesy client in client info.
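
  Illustrative sketch (referenced from the NOTE in 0006 above), not part of
  the patches: a minimal wrapper showing the mod_delayed_work()/flush_workqueue()
  pattern used to run the laundromat immediately and wait for it to finish.
  laundry_wq and nn->laundromat_work are the names used in the patches;
  kick_and_wait_for_laundromat() is a hypothetical name used only here.

	static void kick_and_wait_for_laundromat(struct nfsd_net *nn)
	{
		/*
		 * queue_delayed_work() is a no-op when the work is already
		 * pending with a long delay; mod_delayed_work() re-arms it
		 * to run now.
		 */
		mod_delayed_work(laundry_wq, &nn->laundromat_work, 0);

		/* wait for the laundromat run to finish before rechecking locks */
		flush_workqueue(laundry_wq);
	}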

v22:

. modify 0001:
	. allow EXPIRABLE client to reconnect.
        . modify try_to_expire_client to return false if cl_state is
          either COURTESY or EXPIRABLE.
        . remove try_to_activate_client and set cl_state to ACTIVE in
          get_client_locked and renew_client_locked.
        . remove unnecessary cl_cs_lock. Synchronization between expiring
          client and client reconnect is provided by mark_client_expired_locked
          and get_client_locked or renew_client_locked

. modify 0003:
        . fix 'ld' error with laundry_wq when CONFIG_NFSD is defined
          and CONFIG_NFSD_V4 is not defined.

v23:
	. rework try_to_expire_client to return true when cl_state is EXPIRABLE,
	  and update its callers to work accordingly.

	. add missing mod_delayed_work in nfsd4_lm_lock_expirable.

	. add check for cl_rpc_users before setting client state to COURTESY
	  in nfs4_get_client_reaplist.

        . set the client to COURTESY before calling nfs4_anylock_blockers to
          handle the race between the laundromat and a thread resolving a lock
          conflict.

	. clean up the two fs/lock callbacks: lm_lock_expirable now returns bool
	  and lm_expire_lock takes no argument.

v24:
	. add a new counter, cl_delegs_in_recall, to nfs4_client to track
	  delegation recalls; it is checked by nfs4_anylock_blockers.

	. remove resolve_lock_conflict_locked and move its logic into the
	  callers posix_lock_inode and posix_test_lock for clarity.

	. rename 'conflict' to 'resolvable' in nfs4_resolve_deny_conflicts_locked.

	. fix kernel test robot warning about a missing semicolon in nfsd.h




* [PATCH RFC v24 1/7] NFSD: add courteous server support for thread with only delegation
  2022-05-01 17:38 [PATCH RFC v24 0/7] NFSD: Initial implementation of NFSv4 Courteous Server Dai Ngo
@ 2022-05-01 17:38 ` Dai Ngo
  2022-05-02 15:23   ` J. Bruce Fields
  2022-05-01 17:38 ` [PATCH RFC v24 2/7] NFSD: add support for share reservation conflict to courteous server Dai Ngo
                   ` (6 subsequent siblings)
  7 siblings, 1 reply; 16+ messages in thread
From: Dai Ngo @ 2022-05-01 17:38 UTC (permalink / raw)
  To: chuck.lever, bfields; +Cc: jlayton, viro, linux-nfs, linux-fsdevel

This patch provides courteous server support for delegation only.
Only an expired client with delegation but no conflict and no open
or lock state is allowed to be in COURTESY state.

A delegation conflict with a COURTESY/EXPIRABLE client is resolved by
setting it to EXPIRABLE, queueing work for the laundromat and returning
a delay to the caller. The conflict is resolved when the laundromat runs
and expires the EXPIRABLE client while the NFS client retries the
OPEN request. A local thread request that hits the conflict does the
retry in _break_lease.

A client in COURTESY or EXPIRABLE state is allowed to reconnect and
continues to have access to its state. Access to the nfs4_client by
the reconnecting thread and the laundromat is serialized via the
client_lock.

Signed-off-by: Dai Ngo <dai.ngo@oracle.com>
---
 fs/nfsd/nfs4state.c | 83 +++++++++++++++++++++++++++++++++++++++++++----------
 fs/nfsd/nfsd.h      |  1 +
 fs/nfsd/state.h     | 31 ++++++++++++++++++++
 3 files changed, 100 insertions(+), 15 deletions(-)

diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index 234e852fcdfa..917eaab45999 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -125,6 +125,8 @@ static void free_session(struct nfsd4_session *);
 static const struct nfsd4_callback_ops nfsd4_cb_recall_ops;
 static const struct nfsd4_callback_ops nfsd4_cb_notify_lock_ops;
 
+static struct workqueue_struct *laundry_wq;
+
 static bool is_session_dead(struct nfsd4_session *ses)
 {
 	return ses->se_flags & NFS4_SESSION_DEAD;
@@ -152,6 +154,7 @@ static __be32 get_client_locked(struct nfs4_client *clp)
 	if (is_client_expired(clp))
 		return nfserr_expired;
 	atomic_inc(&clp->cl_rpc_users);
+	clp->cl_state = NFSD4_ACTIVE;
 	return nfs_ok;
 }
 
@@ -172,6 +175,7 @@ renew_client_locked(struct nfs4_client *clp)
 
 	list_move_tail(&clp->cl_lru, &nn->client_lru);
 	clp->cl_time = ktime_get_boottime_seconds();
+	clp->cl_state = NFSD4_ACTIVE;
 }
 
 static void put_client_renew_locked(struct nfs4_client *clp)
@@ -1090,6 +1094,7 @@ alloc_init_deleg(struct nfs4_client *clp, struct nfs4_file *fp,
 	get_clnt_odstate(odstate);
 	dp->dl_type = NFS4_OPEN_DELEGATE_READ;
 	dp->dl_retries = 1;
+	dp->dl_recalled = false;
 	nfsd4_init_cb(&dp->dl_recall, dp->dl_stid.sc_client,
 		      &nfsd4_cb_recall_ops, NFSPROC4_CLNT_CB_RECALL);
 	get_nfs4_file(fp);
@@ -2004,6 +2009,8 @@ static struct nfs4_client *alloc_client(struct xdr_netobj name)
 	idr_init(&clp->cl_stateids);
 	atomic_set(&clp->cl_rpc_users, 0);
 	clp->cl_cb_state = NFSD4_CB_UNKNOWN;
+	clp->cl_state = NFSD4_ACTIVE;
+	atomic_set(&clp->cl_delegs_in_recall, 0);
 	INIT_LIST_HEAD(&clp->cl_idhash);
 	INIT_LIST_HEAD(&clp->cl_openowners);
 	INIT_LIST_HEAD(&clp->cl_delegations);
@@ -4694,9 +4701,18 @@ nfsd_break_deleg_cb(struct file_lock *fl)
 	bool ret = false;
 	struct nfs4_delegation *dp = (struct nfs4_delegation *)fl->fl_owner;
 	struct nfs4_file *fp = dp->dl_stid.sc_file;
+	struct nfs4_client *clp = dp->dl_stid.sc_client;
+	struct nfsd_net *nn;
 
 	trace_nfsd_cb_recall(&dp->dl_stid);
 
+	dp->dl_recalled = true;
+	atomic_inc(&clp->cl_delegs_in_recall);
+	if (try_to_expire_client(clp)) {
+		nn = net_generic(clp->net, nfsd_net_id);
+		mod_delayed_work(laundry_wq, &nn->laundromat_work, 0);
+	}
+
 	/*
 	 * We don't want the locks code to timeout the lease for us;
 	 * we'll remove it ourself if a delegation isn't returned
@@ -4739,9 +4755,15 @@ static int
 nfsd_change_deleg_cb(struct file_lock *onlist, int arg,
 		     struct list_head *dispose)
 {
-	if (arg & F_UNLCK)
+	struct nfs4_delegation *dp = (struct nfs4_delegation *)onlist->fl_owner;
+	struct nfs4_client *clp = dp->dl_stid.sc_client;
+
+	if (arg & F_UNLCK) {
+		if (dp->dl_recalled &&
+			atomic_dec_return(&clp->cl_delegs_in_recall) == 0)
+			dp->dl_recalled = false;
 		return lease_modify(onlist, arg, dispose);
-	else
+	} else
 		return -EAGAIN;
 }
 
@@ -5605,6 +5627,49 @@ static void nfsd4_ssc_expire_umount(struct nfsd_net *nn)
 }
 #endif
 
+/*
+ * place holder for now, no check for lock blockers yet
+ */
+static bool
+nfs4_anylock_blockers(struct nfs4_client *clp)
+{
+	if (atomic_read(&clp->cl_delegs_in_recall) ||
+			client_has_openowners(clp)  ||
+			!list_empty(&clp->async_copies))
+		return true;
+	return false;
+}
+
+static void
+nfs4_get_client_reaplist(struct nfsd_net *nn, struct list_head *reaplist,
+				struct laundry_time *lt)
+{
+	struct list_head *pos, *next;
+	struct nfs4_client *clp;
+
+	INIT_LIST_HEAD(reaplist);
+	spin_lock(&nn->client_lock);
+	list_for_each_safe(pos, next, &nn->client_lru) {
+		clp = list_entry(pos, struct nfs4_client, cl_lru);
+		if (clp->cl_state == NFSD4_EXPIRABLE)
+			goto exp_client;
+		if (!state_expired(lt, clp->cl_time))
+			break;
+		if (!atomic_read(&clp->cl_rpc_users))
+			clp->cl_state = NFSD4_COURTESY;
+		if (!client_has_state(clp) ||
+				ktime_get_boottime_seconds() >=
+				(clp->cl_time + NFSD_COURTESY_CLIENT_TIMEOUT))
+			goto exp_client;
+		if (nfs4_anylock_blockers(clp)) {
+exp_client:
+			if (!mark_client_expired_locked(clp))
+				list_add(&clp->cl_lru, reaplist);
+		}
+	}
+	spin_unlock(&nn->client_lock);
+}
+
 static time64_t
 nfs4_laundromat(struct nfsd_net *nn)
 {
@@ -5627,7 +5692,6 @@ nfs4_laundromat(struct nfsd_net *nn)
 		goto out;
 	}
 	nfsd4_end_grace(nn);
-	INIT_LIST_HEAD(&reaplist);
 
 	spin_lock(&nn->s2s_cp_lock);
 	idr_for_each_entry(&nn->s2s_cp_stateids, cps_t, i) {
@@ -5637,17 +5701,7 @@ nfs4_laundromat(struct nfsd_net *nn)
 			_free_cpntf_state_locked(nn, cps);
 	}
 	spin_unlock(&nn->s2s_cp_lock);
-
-	spin_lock(&nn->client_lock);
-	list_for_each_safe(pos, next, &nn->client_lru) {
-		clp = list_entry(pos, struct nfs4_client, cl_lru);
-		if (!state_expired(&lt, clp->cl_time))
-			break;
-		if (mark_client_expired_locked(clp))
-			continue;
-		list_add(&clp->cl_lru, &reaplist);
-	}
-	spin_unlock(&nn->client_lock);
+	nfs4_get_client_reaplist(nn, &reaplist, &lt);
 	list_for_each_safe(pos, next, &reaplist) {
 		clp = list_entry(pos, struct nfs4_client, cl_lru);
 		trace_nfsd_clid_purged(&clp->cl_clientid);
@@ -5722,7 +5776,6 @@ nfs4_laundromat(struct nfsd_net *nn)
 	return max_t(time64_t, lt.new_timeo, NFSD_LAUNDROMAT_MINTIMEOUT);
 }
 
-static struct workqueue_struct *laundry_wq;
 static void laundromat_main(struct work_struct *);
 
 static void
diff --git a/fs/nfsd/nfsd.h b/fs/nfsd/nfsd.h
index 4fc1fd639527..23996c6ca75e 100644
--- a/fs/nfsd/nfsd.h
+++ b/fs/nfsd/nfsd.h
@@ -336,6 +336,7 @@ void		nfsd_lockd_shutdown(void);
 #define COMPOUND_ERR_SLACK_SPACE	16     /* OP_SETATTR */
 
 #define NFSD_LAUNDROMAT_MINTIMEOUT      1   /* seconds */
+#define	NFSD_COURTESY_CLIENT_TIMEOUT	(24 * 60 * 60)	/* seconds */
 
 /*
  * The following attributes are currently not supported by the NFSv4 server:
diff --git a/fs/nfsd/state.h b/fs/nfsd/state.h
index 95457cfd37fc..f3d6313914ed 100644
--- a/fs/nfsd/state.h
+++ b/fs/nfsd/state.h
@@ -149,6 +149,7 @@ struct nfs4_delegation {
 /* For recall: */
 	int			dl_retries;
 	struct nfsd4_callback	dl_recall;
+	bool			dl_recalled;
 };
 
 #define cb_to_delegation(cb) \
@@ -283,6 +284,28 @@ struct nfsd4_sessionid {
 #define HEXDIR_LEN     33 /* hex version of 16 byte md5 of cl_name plus '\0' */
 
 /*
+ *       State                Meaning                  Where set
+ * --------------------------------------------------------------------------
+ * | NFSD4_ACTIVE      | Confirmed, active    | Default                     |
+ * |------------------- ----------------------------------------------------|
+ * | NFSD4_COURTESY    | Courtesy state.      | nfs4_get_client_reaplist    |
+ * |                   | Lease/lock/share     |                             |
+ * |                   | reservation conflict |                             |
+ * |                   | can cause Courtesy   |                             |
+ * |                   | client to be expired |                             |
+ * |------------------------------------------------------------------------|
+ * | NFSD4_EXPIRABLE   | Courtesy client to be| nfs4_laundromat             |
+ * |                   | expired by Laundromat| try_to_expire_client        |
+ * |                   | due to conflict      |                             |
+ * |------------------------------------------------------------------------|
+ */
+enum {
+	NFSD4_ACTIVE = 0,
+	NFSD4_COURTESY,
+	NFSD4_EXPIRABLE,
+};
+
+/*
  * struct nfs4_client - one per client.  Clientids live here.
  *
  * The initial object created by an NFS client using SETCLIENTID (for NFSv4.0)
@@ -385,6 +408,9 @@ struct nfs4_client {
 	struct list_head	async_copies;	/* list of async copies */
 	spinlock_t		async_lock;	/* lock for async copies */
 	atomic_t		cl_cb_inflight;	/* Outstanding callbacks */
+
+	unsigned int		cl_state;
+	atomic_t		cl_delegs_in_recall;
 };
 
 /* struct nfs4_client_reset
@@ -702,4 +728,9 @@ extern void nfsd4_client_record_remove(struct nfs4_client *clp);
 extern int nfsd4_client_record_check(struct nfs4_client *clp);
 extern void nfsd4_record_grace_done(struct nfsd_net *nn);
 
+static inline bool try_to_expire_client(struct nfs4_client *clp)
+{
+	cmpxchg(&clp->cl_state, NFSD4_COURTESY, NFSD4_EXPIRABLE);
+	return clp->cl_state == NFSD4_EXPIRABLE;
+}
 #endif   /* NFSD4_STATE_H */
-- 
2.9.5



* [PATCH RFC v24 2/7] NFSD: add support for share reservation conflict to courteous server
  2022-05-01 17:38 [PATCH RFC v24 0/7] NFSD: Initial implementation of NFSv4 Courteous Server Dai Ngo
  2022-05-01 17:38 ` [PATCH RFC v24 1/7] NFSD: add courteous server support for thread with only delegation Dai Ngo
@ 2022-05-01 17:38 ` Dai Ngo
  2022-05-02 15:31   ` J. Bruce Fields
  2022-05-01 17:38 ` [PATCH RFC v24 3/7] NFSD: move create/destroy of laundry_wq to init_nfsd and exit_nfsd Dai Ngo
                   ` (5 subsequent siblings)
  7 siblings, 1 reply; 16+ messages in thread
From: Dai Ngo @ 2022-05-01 17:38 UTC (permalink / raw)
  To: chuck.lever, bfields; +Cc: jlayton, viro, linux-nfs, linux-fsdevel

This patch allows an expired client with open state to be in COURTESY
state. A share/access conflict with a COURTESY client is resolved by
setting the COURTESY client to EXPIRABLE state, scheduling the
laundromat to run and returning nfserr_jukebox to the requesting client.

Signed-off-by: Dai Ngo <dai.ngo@oracle.com>
---
 fs/nfsd/nfs4state.c | 109 ++++++++++++++++++++++++++++++++++++++++++++++++----
 1 file changed, 101 insertions(+), 8 deletions(-)

diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index 917eaab45999..0e98e9c16e3e 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -694,6 +694,57 @@ static unsigned int file_hashval(struct svc_fh *fh)
 
 static struct hlist_head file_hashtbl[FILE_HASH_SIZE];
 
+/*
+ * Check if courtesy clients have conflicting access and resolve it if possible
+ *
+ * access:  is op_share_access if share_access is true.
+ *	    Check if access mode, op_share_access, would conflict with
+ *	    the current deny mode of the file 'fp'.
+ * access:  is op_share_deny if share_access is false.
+ *	    Check if the deny mode, op_share_deny, would conflict with
+ *	    current access of the file 'fp'.
+ * stp:     skip checking this entry.
+ * new_stp: normal open, not open upgrade.
+ *
+ * Function returns:
+ *	false - access/deny mode conflict with normal client.
+ *	true  - no conflict or conflict with courtesy client(s) is resolved.
+ */
+static bool
+nfs4_resolve_deny_conflicts_locked(struct nfs4_file *fp, bool new_stp,
+		struct nfs4_ol_stateid *stp, u32 access, bool share_access)
+{
+	struct nfs4_ol_stateid *st;
+	bool resolvable = true;
+	unsigned char bmap;
+	struct nfsd_net *nn;
+	struct nfs4_client *clp;
+
+	lockdep_assert_held(&fp->fi_lock);
+	list_for_each_entry(st, &fp->fi_stateids, st_perfile) {
+		/* ignore lock stateid */
+		if (st->st_openstp)
+			continue;
+		if (st == stp && new_stp)
+			continue;
+		/* check file access against deny mode or vice versa */
+		bmap = share_access ? st->st_deny_bmap : st->st_access_bmap;
+		if (!(access & bmap_to_share_mode(bmap)))
+			continue;
+		clp = st->st_stid.sc_client;
+		if (try_to_expire_client(clp))
+			continue;
+		resolvable = false;
+		break;
+	}
+	if (resolvable) {
+		clp = stp->st_stid.sc_client;
+		nn = net_generic(clp->net, nfsd_net_id);
+		mod_delayed_work(laundry_wq, &nn->laundromat_work, 0);
+	}
+	return resolvable;
+}
+
 static void
 __nfs4_file_get_access(struct nfs4_file *fp, u32 access)
 {
@@ -4969,7 +5020,7 @@ nfsd4_truncate(struct svc_rqst *rqstp, struct svc_fh *fh,
 
 static __be32 nfs4_get_vfs_file(struct svc_rqst *rqstp, struct nfs4_file *fp,
 		struct svc_fh *cur_fh, struct nfs4_ol_stateid *stp,
-		struct nfsd4_open *open)
+		struct nfsd4_open *open, bool new_stp)
 {
 	struct nfsd_file *nf = NULL;
 	__be32 status;
@@ -4985,6 +5036,13 @@ static __be32 nfs4_get_vfs_file(struct svc_rqst *rqstp, struct nfs4_file *fp,
 	 */
 	status = nfs4_file_check_deny(fp, open->op_share_deny);
 	if (status != nfs_ok) {
+		if (status != nfserr_share_denied) {
+			spin_unlock(&fp->fi_lock);
+			goto out;
+		}
+		if (nfs4_resolve_deny_conflicts_locked(fp, new_stp,
+				stp, open->op_share_deny, false))
+			status = nfserr_jukebox;
 		spin_unlock(&fp->fi_lock);
 		goto out;
 	}
@@ -4992,6 +5050,13 @@ static __be32 nfs4_get_vfs_file(struct svc_rqst *rqstp, struct nfs4_file *fp,
 	/* set access to the file */
 	status = nfs4_file_get_access(fp, open->op_share_access);
 	if (status != nfs_ok) {
+		if (status != nfserr_share_denied) {
+			spin_unlock(&fp->fi_lock);
+			goto out;
+		}
+		if (nfs4_resolve_deny_conflicts_locked(fp, new_stp,
+				stp, open->op_share_access, true))
+			status = nfserr_jukebox;
 		spin_unlock(&fp->fi_lock);
 		goto out;
 	}
@@ -5038,21 +5103,29 @@ static __be32 nfs4_get_vfs_file(struct svc_rqst *rqstp, struct nfs4_file *fp,
 }
 
 static __be32
-nfs4_upgrade_open(struct svc_rqst *rqstp, struct nfs4_file *fp, struct svc_fh *cur_fh, struct nfs4_ol_stateid *stp, struct nfsd4_open *open)
+nfs4_upgrade_open(struct svc_rqst *rqstp, struct nfs4_file *fp,
+		struct svc_fh *cur_fh, struct nfs4_ol_stateid *stp,
+		struct nfsd4_open *open)
 {
 	__be32 status;
 	unsigned char old_deny_bmap = stp->st_deny_bmap;
 
 	if (!test_access(open->op_share_access, stp))
-		return nfs4_get_vfs_file(rqstp, fp, cur_fh, stp, open);
+		return nfs4_get_vfs_file(rqstp, fp, cur_fh, stp, open, false);
 
 	/* test and set deny mode */
 	spin_lock(&fp->fi_lock);
 	status = nfs4_file_check_deny(fp, open->op_share_deny);
 	if (status == nfs_ok) {
-		set_deny(open->op_share_deny, stp);
-		fp->fi_share_deny |=
+		if (status != nfserr_share_denied) {
+			set_deny(open->op_share_deny, stp);
+			fp->fi_share_deny |=
 				(open->op_share_deny & NFS4_SHARE_DENY_BOTH);
+		} else {
+			if (nfs4_resolve_deny_conflicts_locked(fp, false,
+					stp, open->op_share_deny, false))
+				status = nfserr_jukebox;
+		}
 	}
 	spin_unlock(&fp->fi_lock);
 
@@ -5393,7 +5466,7 @@ nfsd4_process_open2(struct svc_rqst *rqstp, struct svc_fh *current_fh, struct nf
 			goto out;
 		}
 	} else {
-		status = nfs4_get_vfs_file(rqstp, fp, current_fh, stp, open);
+		status = nfs4_get_vfs_file(rqstp, fp, current_fh, stp, open, true);
 		if (status) {
 			stp->st_stid.sc_type = NFS4_CLOSED_STID;
 			release_open_stateid(stp);
@@ -5627,6 +5700,26 @@ static void nfsd4_ssc_expire_umount(struct nfsd_net *nn)
 }
 #endif
 
+static bool
+nfs4_has_any_locks(struct nfs4_client *clp)
+{
+	int i;
+	struct nfs4_stateowner *so;
+
+	spin_lock(&clp->cl_lock);
+	for (i = 0; i < OWNER_HASH_SIZE; i++) {
+		list_for_each_entry(so, &clp->cl_ownerstr_hashtbl[i],
+				so_strhash) {
+			if (so->so_is_open_owner)
+				continue;
+			spin_unlock(&clp->cl_lock);
+			return true;
+		}
+	}
+	spin_unlock(&clp->cl_lock);
+	return false;
+}
+
 /*
  * place holder for now, no check for lock blockers yet
  */
@@ -5634,8 +5727,8 @@ static bool
 nfs4_anylock_blockers(struct nfs4_client *clp)
 {
 	if (atomic_read(&clp->cl_delegs_in_recall) ||
-			client_has_openowners(clp)  ||
-			!list_empty(&clp->async_copies))
+			!list_empty(&clp->async_copies) ||
+			nfs4_has_any_locks(clp))
 		return true;
 	return false;
 }
-- 
2.9.5



* [PATCH RFC v24 3/7] NFSD: move create/destroy of laundry_wq to init_nfsd and exit_nfsd
  2022-05-01 17:38 [PATCH RFC v24 0/7] NFSD: Initial implementation of NFSv4 Courteous Server Dai Ngo
  2022-05-01 17:38 ` [PATCH RFC v24 1/7] NFSD: add courteous server support for thread with only delegation Dai Ngo
  2022-05-01 17:38 ` [PATCH RFC v24 2/7] NFSD: add support for share reservation conflict to courteous server Dai Ngo
@ 2022-05-01 17:38 ` Dai Ngo
  2022-05-02 15:35   ` J. Bruce Fields
  2022-05-01 17:38 ` [PATCH RFC v24 4/7] fs/lock: add helper locks_owner_has_blockers to check for blockers Dai Ngo
                   ` (4 subsequent siblings)
  7 siblings, 1 reply; 16+ messages in thread
From: Dai Ngo @ 2022-05-01 17:38 UTC (permalink / raw)
  To: chuck.lever, bfields; +Cc: jlayton, viro, linux-nfs, linux-fsdevel

This patch moves creation/destruction of laundry_wq from nfs4_state_start
and nfs4_state_shutdown_net to init_nfsd and exit_nfsd, to prevent the
laundromat workqueue from being destroyed while a thread is processing a
conflicting lock.

Signed-off-by: Dai Ngo <dai.ngo@oracle.com>
---
 fs/nfsd/nfs4state.c | 28 ++++++++++++++++------------
 fs/nfsd/nfsctl.c    |  4 ++++
 fs/nfsd/nfsd.h      |  4 ++++
 3 files changed, 24 insertions(+), 12 deletions(-)

diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index 0e98e9c16e3e..f369142da79f 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -127,6 +127,21 @@ static const struct nfsd4_callback_ops nfsd4_cb_notify_lock_ops;
 
 static struct workqueue_struct *laundry_wq;
 
+int nfsd4_create_laundry_wq(void)
+{
+	int rc = 0;
+
+	laundry_wq = alloc_workqueue("%s", WQ_UNBOUND, 0, "nfsd4");
+	if (laundry_wq == NULL)
+		rc = -ENOMEM;
+	return rc;
+}
+
+void nfsd4_destroy_laundry_wq(void)
+{
+	destroy_workqueue(laundry_wq);
+}
+
 static bool is_session_dead(struct nfsd4_session *ses)
 {
 	return ses->se_flags & NFS4_SESSION_DEAD;
@@ -7748,22 +7763,12 @@ nfs4_state_start(void)
 {
 	int ret;
 
-	laundry_wq = alloc_workqueue("%s", WQ_UNBOUND, 0, "nfsd4");
-	if (laundry_wq == NULL) {
-		ret = -ENOMEM;
-		goto out;
-	}
 	ret = nfsd4_create_callback_queue();
 	if (ret)
-		goto out_free_laundry;
+		return ret;
 
 	set_max_delegations();
 	return 0;
-
-out_free_laundry:
-	destroy_workqueue(laundry_wq);
-out:
-	return ret;
 }
 
 void
@@ -7800,7 +7805,6 @@ nfs4_state_shutdown_net(struct net *net)
 void
 nfs4_state_shutdown(void)
 {
-	destroy_workqueue(laundry_wq);
 	nfsd4_destroy_callback_queue();
 }
 
diff --git a/fs/nfsd/nfsctl.c b/fs/nfsd/nfsctl.c
index 16920e4512bd..322a208878f2 100644
--- a/fs/nfsd/nfsctl.c
+++ b/fs/nfsd/nfsctl.c
@@ -1544,6 +1544,9 @@ static int __init init_nfsd(void)
 	retval = register_cld_notifier();
 	if (retval)
 		goto out_free_all;
+	retval = nfsd4_create_laundry_wq();
+	if (retval)
+		goto out_free_all;
 	return 0;
 out_free_all:
 	unregister_pernet_subsys(&nfsd_net_ops);
@@ -1566,6 +1569,7 @@ static int __init init_nfsd(void)
 
 static void __exit exit_nfsd(void)
 {
+	nfsd4_destroy_laundry_wq();
 	unregister_cld_notifier();
 	unregister_pernet_subsys(&nfsd_net_ops);
 	nfsd_drc_slab_free();
diff --git a/fs/nfsd/nfsd.h b/fs/nfsd/nfsd.h
index 23996c6ca75e..847b482155ae 100644
--- a/fs/nfsd/nfsd.h
+++ b/fs/nfsd/nfsd.h
@@ -162,6 +162,8 @@ void nfs4_state_shutdown_net(struct net *net);
 int nfs4_reset_recoverydir(char *recdir);
 char * nfs4_recoverydir(void);
 bool nfsd4_spo_must_allow(struct svc_rqst *rqstp);
+int nfsd4_create_laundry_wq(void);
+void nfsd4_destroy_laundry_wq(void);
 #else
 static inline int nfsd4_init_slabs(void) { return 0; }
 static inline void nfsd4_free_slabs(void) { }
@@ -175,6 +177,8 @@ static inline bool nfsd4_spo_must_allow(struct svc_rqst *rqstp)
 {
 	return false;
 }
+static inline int nfsd4_create_laundry_wq(void) { return 0; };
+static inline void nfsd4_destroy_laundry_wq(void) {};
 #endif
 
 /*
-- 
2.9.5



* [PATCH RFC v24 4/7] fs/lock: add helper locks_owner_has_blockers to check for blockers
  2022-05-01 17:38 [PATCH RFC v24 0/7] NFSD: Initial implementation of NFSv4 Courteous Server Dai Ngo
                   ` (2 preceding siblings ...)
  2022-05-01 17:38 ` [PATCH RFC v24 3/7] NFSD: move create/destroy of laundry_wq to init_nfsd and exit_nfsd Dai Ngo
@ 2022-05-01 17:38 ` Dai Ngo
  2022-05-02 15:43   ` J. Bruce Fields
  2022-05-01 17:38 ` [PATCH RFC v24 5/7] fs/lock: add 2 callbacks to lock_manager_operations to resolve conflict Dai Ngo
                   ` (3 subsequent siblings)
  7 siblings, 1 reply; 16+ messages in thread
From: Dai Ngo @ 2022-05-01 17:38 UTC (permalink / raw)
  To: chuck.lever, bfields; +Cc: jlayton, viro, linux-nfs, linux-fsdevel

Add helper locks_owner_has_blockers to check whether there are any
blockers for a given lockowner.

Signed-off-by: Dai Ngo <dai.ngo@oracle.com>
---
 fs/locks.c         | 28 ++++++++++++++++++++++++++++
 include/linux/fs.h |  7 +++++++
 2 files changed, 35 insertions(+)

diff --git a/fs/locks.c b/fs/locks.c
index 8c6df10cd9ed..c369841ef7d1 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -300,6 +300,34 @@ void locks_release_private(struct file_lock *fl)
 }
 EXPORT_SYMBOL_GPL(locks_release_private);
 
+/**
+ * locks_owner_has_blockers - Check for blocking lock requests
+ * @flctx: file lock context
+ * @owner: lock owner
+ *
+ * Return values:
+ *   %true: @owner has at least one blocker
+ *   %false: @owner has no blockers
+ */
+bool locks_owner_has_blockers(struct file_lock_context *flctx,
+		fl_owner_t owner)
+{
+	struct file_lock *fl;
+
+	spin_lock(&flctx->flc_lock);
+	list_for_each_entry(fl, &flctx->flc_posix, fl_list) {
+		if (fl->fl_owner != owner)
+			continue;
+		if (!list_empty(&fl->fl_blocked_requests)) {
+			spin_unlock(&flctx->flc_lock);
+			return true;
+		}
+	}
+	spin_unlock(&flctx->flc_lock);
+	return false;
+}
+EXPORT_SYMBOL_GPL(locks_owner_has_blockers);
+
 /* Free a lock which is not in use. */
 void locks_free_lock(struct file_lock *fl)
 {
diff --git a/include/linux/fs.h b/include/linux/fs.h
index bbde95387a23..b8ed7f974fb4 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -1174,6 +1174,8 @@ extern void lease_unregister_notifier(struct notifier_block *);
 struct files_struct;
 extern void show_fd_locks(struct seq_file *f,
 			 struct file *filp, struct files_struct *files);
+extern bool locks_owner_has_blockers(struct file_lock_context *flctx,
+			fl_owner_t owner);
 #else /* !CONFIG_FILE_LOCKING */
 static inline int fcntl_getlk(struct file *file, unsigned int cmd,
 			      struct flock __user *user)
@@ -1309,6 +1311,11 @@ static inline int lease_modify(struct file_lock *fl, int arg,
 struct files_struct;
 static inline void show_fd_locks(struct seq_file *f,
 			struct file *filp, struct files_struct *files) {}
+static inline bool locks_owner_has_blockers(struct file_lock_context *flctx,
+			fl_owner_t owner)
+{
+	return false;
+}
 #endif /* !CONFIG_FILE_LOCKING */
 
 static inline struct inode *file_inode(const struct file *f)
-- 
2.9.5



* [PATCH RFC v24 5/7] fs/lock: add 2 callbacks to lock_manager_operations to resolve conflict
  2022-05-01 17:38 [PATCH RFC v24 0/7] NFSD: Initial implementation of NFSv4 Courteous Server Dai Ngo
                   ` (3 preceding siblings ...)
  2022-05-01 17:38 ` [PATCH RFC v24 4/7] fs/lock: add helper locks_owner_has_blockers to check for blockers Dai Ngo
@ 2022-05-01 17:38 ` Dai Ngo
  2022-05-02 15:53   ` J. Bruce Fields
  2022-05-01 17:38 ` [PATCH RFC v24 6/7] NFSD: add support for lock conflict to courteous server Dai Ngo
                   ` (2 subsequent siblings)
  7 siblings, 1 reply; 16+ messages in thread
From: Dai Ngo @ 2022-05-01 17:38 UTC (permalink / raw)
  To: chuck.lever, bfields; +Cc: jlayton, viro, linux-nfs, linux-fsdevel

Add 2 new callbacks, lm_lock_expirable and lm_expire_lock, to
lock_manager_operations to allow the lock manager to take appropriate
action to resolve the lock conflict if possible.

A new field, lm_mod_owner, is also added to lock_manager_operations.
lm_mod_owner is used by the fs/lock code to make sure the lock
manager module, such as nfsd, is not freed while a lock conflict is
being resolved.

lm_lock_expirable returns true to indicate that the lock conflict can
be resolved, and false otherwise. This callback is called with the
flc_lock held, so it must not block.

lm_expire_lock is called to resolve the lock conflict if the value
returned from lm_lock_expirable is true. This callback is called without
the flc_lock held since it is allowed to block. Upon returning from
this callback, the lock conflict should be resolved and the caller is
expected to restart the conflict check from the beginning of the list.

A lock manager, such as the NFSv4 courteous server, uses this callback to
resolve the conflict by destroying the lock owner, or the NFSv4 courtesy
client (a client that has expired but is allowed to maintain its state)
that owns the lock.

Signed-off-by: Dai Ngo <dai.ngo@oracle.com>
---
 Documentation/filesystems/locking.rst |  4 ++++
 fs/locks.c                            | 45 ++++++++++++++++++++++++++++++++---
 include/linux/fs.h                    |  3 +++
 3 files changed, 49 insertions(+), 3 deletions(-)

diff --git a/Documentation/filesystems/locking.rst b/Documentation/filesystems/locking.rst
index c26d854275a0..0997a258361a 100644
--- a/Documentation/filesystems/locking.rst
+++ b/Documentation/filesystems/locking.rst
@@ -428,6 +428,8 @@ prototypes::
 	void (*lm_break)(struct file_lock *); /* break_lease callback */
 	int (*lm_change)(struct file_lock **, int);
 	bool (*lm_breaker_owns_lease)(struct file_lock *);
+        bool (*lm_lock_expirable)(struct file_lock *);
+        void (*lm_expire_lock)(void);
 
 locking rules:
 
@@ -439,6 +441,8 @@ lm_grant:		no		no			no
 lm_break:		yes		no			no
 lm_change		yes		no			no
 lm_breaker_owns_lease:	yes     	no			no
+lm_lock_expirable	yes		no			no
+lm_expire_lock		no		no			yes
 ======================	=============	=================	=========
 
 buffer_head
diff --git a/fs/locks.c b/fs/locks.c
index c369841ef7d1..17917da06463 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -902,6 +902,9 @@ posix_test_lock(struct file *filp, struct file_lock *fl)
 	struct file_lock *cfl;
 	struct file_lock_context *ctx;
 	struct inode *inode = locks_inode(filp);
+	void *owner;
+	bool ret;
+	void (*func)(void);
 
 	ctx = smp_load_acquire(&inode->i_flctx);
 	if (!ctx || list_empty_careful(&ctx->flc_posix)) {
@@ -909,12 +912,28 @@ posix_test_lock(struct file *filp, struct file_lock *fl)
 		return;
 	}
 
+retry:
 	spin_lock(&ctx->flc_lock);
 	list_for_each_entry(cfl, &ctx->flc_posix, fl_list) {
-		if (posix_locks_conflict(fl, cfl)) {
-			locks_copy_conflock(fl, cfl);
-			goto out;
+		if (!posix_locks_conflict(fl, cfl))
+			continue;
+		if (cfl->fl_lmops && cfl->fl_lmops->lm_mod_owner &&
+				cfl->fl_lmops->lm_lock_expirable &&
+				cfl->fl_lmops->lm_expire_lock) {
+			ret = (*cfl->fl_lmops->lm_lock_expirable)(cfl);
+			if (!ret)
+				goto conflict;
+			owner = cfl->fl_lmops->lm_mod_owner;
+			func = cfl->fl_lmops->lm_expire_lock;
+			__module_get(owner);
+			spin_unlock(&ctx->flc_lock);
+			(*func)();
+			module_put(owner);
+			goto retry;
 		}
+conflict:
+		locks_copy_conflock(fl, cfl);
+		goto out;
 	}
 	fl->fl_type = F_UNLCK;
 out:
@@ -1088,6 +1107,9 @@ static int posix_lock_inode(struct inode *inode, struct file_lock *request,
 	int error;
 	bool added = false;
 	LIST_HEAD(dispose);
+	void *owner;
+	bool ret;
+	void (*func)(void);
 
 	ctx = locks_get_lock_context(inode, request->fl_type);
 	if (!ctx)
@@ -1106,6 +1128,7 @@ static int posix_lock_inode(struct inode *inode, struct file_lock *request,
 		new_fl2 = locks_alloc_lock();
 	}
 
+retry:
 	percpu_down_read(&file_rwsem);
 	spin_lock(&ctx->flc_lock);
 	/*
@@ -1117,6 +1140,22 @@ static int posix_lock_inode(struct inode *inode, struct file_lock *request,
 		list_for_each_entry(fl, &ctx->flc_posix, fl_list) {
 			if (!posix_locks_conflict(request, fl))
 				continue;
+			if (fl->fl_lmops && fl->fl_lmops->lm_mod_owner &&
+					fl->fl_lmops->lm_lock_expirable &&
+					fl->fl_lmops->lm_expire_lock) {
+				ret = (*fl->fl_lmops->lm_lock_expirable)(fl);
+				if (!ret)
+					goto conflict;
+				owner = fl->fl_lmops->lm_mod_owner;
+				func = fl->fl_lmops->lm_expire_lock;
+				__module_get(owner);
+				spin_unlock(&ctx->flc_lock);
+				percpu_up_read(&file_rwsem);
+				(*func)();
+				module_put(owner);
+				goto retry;
+			}
+conflict:
 			if (conflock)
 				locks_copy_conflock(conflock, fl);
 			error = -EAGAIN;
diff --git a/include/linux/fs.h b/include/linux/fs.h
index b8ed7f974fb4..aa6c1bbdb8c4 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -1029,6 +1029,7 @@ struct file_lock_operations {
 };
 
 struct lock_manager_operations {
+	void *lm_mod_owner;
 	fl_owner_t (*lm_get_owner)(fl_owner_t);
 	void (*lm_put_owner)(fl_owner_t);
 	void (*lm_notify)(struct file_lock *);	/* unblock callback */
@@ -1037,6 +1038,8 @@ struct lock_manager_operations {
 	int (*lm_change)(struct file_lock *, int, struct list_head *);
 	void (*lm_setup)(struct file_lock *, void **);
 	bool (*lm_breaker_owns_lease)(struct file_lock *);
+	bool (*lm_lock_expirable)(struct file_lock *cfl);
+	void (*lm_expire_lock)(void);
 };
 
 struct lock_manager {
-- 
2.9.5



* [PATCH RFC v24 6/7] NFSD: add support for lock conflict to courteous server
  2022-05-01 17:38 [PATCH RFC v24 0/7] NFSD: Initial implementation of NFSv4 Courteous Server Dai Ngo
                   ` (4 preceding siblings ...)
  2022-05-01 17:38 ` [PATCH RFC v24 5/7] fs/lock: add 2 callbacks to lock_manager_operations to resolve conflict Dai Ngo
@ 2022-05-01 17:38 ` Dai Ngo
  2022-05-02 16:00   ` J. Bruce Fields
  2022-05-01 17:38 ` [PATCH RFC v24 7/7] NFSD: Show state of courtesy client in client info Dai Ngo
  2022-05-02 16:07 ` [PATCH RFC v24 0/7] NFSD: Initial implementation of NFSv4 Courteous Server J. Bruce Fields
  7 siblings, 1 reply; 16+ messages in thread
From: Dai Ngo @ 2022-05-01 17:38 UTC (permalink / raw)
  To: chuck.lever, bfields; +Cc: jlayton, viro, linux-nfs, linux-fsdevel

This patch allows an expired client with lock state to be in COURTESY
state. A lock conflict with a COURTESY client is resolved by the fs/lock
code using the lm_lock_expirable and lm_expire_lock callbacks in
struct lock_manager_operations.

If the conflicting client is in COURTESY state, it is set to EXPIRABLE
and the laundromat is scheduled to run immediately to expire the client.
The lm_expire_lock callback waits for the laundromat to flush its work
queue before returning to the caller.

Signed-off-by: Dai Ngo <dai.ngo@oracle.com>
---
 fs/nfsd/nfs4state.c | 70 +++++++++++++++++++++++++++++++++++++++++------------
 1 file changed, 54 insertions(+), 16 deletions(-)

diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index f369142da79f..4ab7dda44f38 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -5715,39 +5715,51 @@ static void nfsd4_ssc_expire_umount(struct nfsd_net *nn)
 }
 #endif
 
+/* Check if any lock belonging to this lockowner has any blockers */
 static bool
-nfs4_has_any_locks(struct nfs4_client *clp)
+nfs4_lockowner_has_blockers(struct nfs4_lockowner *lo)
+{
+	struct file_lock_context *ctx;
+	struct nfs4_ol_stateid *stp;
+	struct nfs4_file *nf;
+
+	list_for_each_entry(stp, &lo->lo_owner.so_stateids, st_perstateowner) {
+		nf = stp->st_stid.sc_file;
+		ctx = nf->fi_inode->i_flctx;
+		if (!ctx)
+			continue;
+		if (locks_owner_has_blockers(ctx, lo))
+			return true;
+	}
+	return false;
+}
+
+static bool
+nfs4_anylock_blockers(struct nfs4_client *clp)
 {
 	int i;
 	struct nfs4_stateowner *so;
+	struct nfs4_lockowner *lo;
 
+	if (atomic_read(&clp->cl_delegs_in_recall))
+		return true;
 	spin_lock(&clp->cl_lock);
 	for (i = 0; i < OWNER_HASH_SIZE; i++) {
 		list_for_each_entry(so, &clp->cl_ownerstr_hashtbl[i],
 				so_strhash) {
 			if (so->so_is_open_owner)
 				continue;
-			spin_unlock(&clp->cl_lock);
-			return true;
+			lo = lockowner(so);
+			if (nfs4_lockowner_has_blockers(lo)) {
+				spin_unlock(&clp->cl_lock);
+				return true;
+			}
 		}
 	}
 	spin_unlock(&clp->cl_lock);
 	return false;
 }
 
-/*
- * place holder for now, no check for lock blockers yet
- */
-static bool
-nfs4_anylock_blockers(struct nfs4_client *clp)
-{
-	if (atomic_read(&clp->cl_delegs_in_recall) ||
-			!list_empty(&clp->async_copies) ||
-			nfs4_has_any_locks(clp))
-		return true;
-	return false;
-}
-
 static void
 nfs4_get_client_reaplist(struct nfsd_net *nn, struct list_head *reaplist,
 				struct laundry_time *lt)
@@ -6712,6 +6724,29 @@ nfsd4_lm_put_owner(fl_owner_t owner)
 		nfs4_put_stateowner(&lo->lo_owner);
 }
 
+/* return true (and schedule the laundromat) if the lock's client is expirable */
+static bool
+nfsd4_lm_lock_expirable(struct file_lock *cfl)
+{
+	struct nfs4_lockowner *lo = (struct nfs4_lockowner *)cfl->fl_owner;
+	struct nfs4_client *clp = lo->lo_owner.so_client;
+	struct nfsd_net *nn;
+
+	if (try_to_expire_client(clp)) {
+		nn = net_generic(clp->net, nfsd_net_id);
+		mod_delayed_work(laundry_wq, &nn->laundromat_work, 0);
+		return true;
+	}
+	return false;
+}
+
+/* schedule laundromat to run immediately and wait for it to complete */
+static void
+nfsd4_lm_expire_lock(void)
+{
+	flush_workqueue(laundry_wq);
+}
+
 static void
 nfsd4_lm_notify(struct file_lock *fl)
 {
@@ -6738,9 +6773,12 @@ nfsd4_lm_notify(struct file_lock *fl)
 }
 
 static const struct lock_manager_operations nfsd_posix_mng_ops  = {
+	.lm_mod_owner = THIS_MODULE,
 	.lm_notify = nfsd4_lm_notify,
 	.lm_get_owner = nfsd4_lm_get_owner,
 	.lm_put_owner = nfsd4_lm_put_owner,
+	.lm_lock_expirable = nfsd4_lm_lock_expirable,
+	.lm_expire_lock = nfsd4_lm_expire_lock,
 };
 
 static inline void
-- 
2.9.5



* [PATCH RFC v24 7/7] NFSD: Show state of courtesy client in client info
  2022-05-01 17:38 [PATCH RFC v24 0/7] NFSD: Initial implementation of NFSv4 Courteous Server Dai Ngo
                   ` (5 preceding siblings ...)
  2022-05-01 17:38 ` [PATCH RFC v24 6/7] NFSD: add support for lock conflict to courteous server Dai Ngo
@ 2022-05-01 17:38 ` Dai Ngo
  2022-05-02 16:06   ` J. Bruce Fields
  2022-05-02 16:07 ` [PATCH RFC v24 0/7] NFSD: Initial implementation of NFSv4 Courteous Server J. Bruce Fields
  7 siblings, 1 reply; 16+ messages in thread
From: Dai Ngo @ 2022-05-01 17:38 UTC (permalink / raw)
  To: chuck.lever, bfields; +Cc: jlayton, viro, linux-nfs, linux-fsdevel

Update client_info_show to show the state of a courtesy client
and the time since the last renew.
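
As a rough illustration only (the values below are placeholders, not real
output), the per-client info file is then expected to contain lines along
the lines of:

	clientid: 0x...
	address: "..."
	status: courtesy
	time since last renew: 0:01:12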

Signed-off-by: Dai Ngo <dai.ngo@oracle.com>
---
 fs/nfsd/nfs4state.c | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index 4ab7dda44f38..9cff06fc3600 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -2473,7 +2473,8 @@ static int client_info_show(struct seq_file *m, void *v)
 {
 	struct inode *inode = m->private;
 	struct nfs4_client *clp;
-	u64 clid;
+	u64 clid, hrs;
+	u32 mins, secs;
 
 	clp = get_nfsdfs_clp(inode);
 	if (!clp)
@@ -2481,10 +2482,19 @@ static int client_info_show(struct seq_file *m, void *v)
 	memcpy(&clid, &clp->cl_clientid, sizeof(clid));
 	seq_printf(m, "clientid: 0x%llx\n", clid);
 	seq_printf(m, "address: \"%pISpc\"\n", (struct sockaddr *)&clp->cl_addr);
-	if (test_bit(NFSD4_CLIENT_CONFIRMED, &clp->cl_flags))
+
+	if (clp->cl_state == NFSD4_COURTESY)
+		seq_puts(m, "status: courtesy\n");
+	else if (clp->cl_state == NFSD4_EXPIRABLE)
+		seq_puts(m, "status: expirable\n");
+	else if (test_bit(NFSD4_CLIENT_CONFIRMED, &clp->cl_flags))
 		seq_puts(m, "status: confirmed\n");
 	else
 		seq_puts(m, "status: unconfirmed\n");
+	hrs = div_u64_rem(ktime_get_boottime_seconds() - clp->cl_time,
+				3600, &secs);
+	mins = div_u64_rem((u64)secs, 60, &secs);
+	seq_printf(m, "time since last renew: %llu:%02u:%02u\n", hrs, mins, secs);
 	seq_printf(m, "name: ");
 	seq_quote_mem(m, clp->cl_name.data, clp->cl_name.len);
 	seq_printf(m, "\nminor version: %d\n", clp->cl_minorversion);
-- 
2.9.5



* Re: [PATCH RFC v24 1/7] NFSD: add courteous server support for thread with only delegation
  2022-05-01 17:38 ` [PATCH RFC v24 1/7] NFSD: add courteous server support for thread with only delegation Dai Ngo
@ 2022-05-02 15:23   ` J. Bruce Fields
  0 siblings, 0 replies; 16+ messages in thread
From: J. Bruce Fields @ 2022-05-02 15:23 UTC (permalink / raw)
  To: Dai Ngo; +Cc: chuck.lever, jlayton, viro, linux-nfs, linux-fsdevel

On Sun, May 01, 2022 at 10:38:10AM -0700, Dai Ngo wrote:
> This patch provides courteous server support for delegation only.
> Only expired client with delegation but no conflict and no open
> or lock state is allowed to be in COURTESY state.
> 
> Delegation conflict with COURTESY/EXPIRABLE client is resolved by
> setting it to EXPIRABLE, queue work for the laundromat and return
> delay to the caller. Conflict is resolved when the laudromat runs
> and expires the EXIRABLE client while the NFS client retries the
> OPEN request. Local thread request that gets conflict is doing the
> retry in _break_lease.
> 
> Client in COURTESY or EXPIRABLE state is allowed to reconnect and
> continues to have access to its state. Access to the nfs4_client by
> the reconnecting thread and the laundromat is serialized via the
> client_lock.
> 
> Signed-off-by: Dai Ngo <dai.ngo@oracle.com>
> ---
>  fs/nfsd/nfs4state.c | 83 +++++++++++++++++++++++++++++++++++++++++++----------
>  fs/nfsd/nfsd.h      |  1 +
>  fs/nfsd/state.h     | 31 ++++++++++++++++++++
>  3 files changed, 100 insertions(+), 15 deletions(-)
> 
> diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
> index 234e852fcdfa..917eaab45999 100644
> --- a/fs/nfsd/nfs4state.c
> +++ b/fs/nfsd/nfs4state.c
> @@ -125,6 +125,8 @@ static void free_session(struct nfsd4_session *);
>  static const struct nfsd4_callback_ops nfsd4_cb_recall_ops;
>  static const struct nfsd4_callback_ops nfsd4_cb_notify_lock_ops;
>  
> +static struct workqueue_struct *laundry_wq;
> +
>  static bool is_session_dead(struct nfsd4_session *ses)
>  {
>  	return ses->se_flags & NFS4_SESSION_DEAD;
> @@ -152,6 +154,7 @@ static __be32 get_client_locked(struct nfs4_client *clp)
>  	if (is_client_expired(clp))
>  		return nfserr_expired;
>  	atomic_inc(&clp->cl_rpc_users);
> +	clp->cl_state = NFSD4_ACTIVE;
>  	return nfs_ok;
>  }
>  
> @@ -172,6 +175,7 @@ renew_client_locked(struct nfs4_client *clp)
>  
>  	list_move_tail(&clp->cl_lru, &nn->client_lru);
>  	clp->cl_time = ktime_get_boottime_seconds();
> +	clp->cl_state = NFSD4_ACTIVE;
>  }
>  
>  static void put_client_renew_locked(struct nfs4_client *clp)
> @@ -1090,6 +1094,7 @@ alloc_init_deleg(struct nfs4_client *clp, struct nfs4_file *fp,
>  	get_clnt_odstate(odstate);
>  	dp->dl_type = NFS4_OPEN_DELEGATE_READ;
>  	dp->dl_retries = 1;
> +	dp->dl_recalled = false;
>  	nfsd4_init_cb(&dp->dl_recall, dp->dl_stid.sc_client,
>  		      &nfsd4_cb_recall_ops, NFSPROC4_CLNT_CB_RECALL);
>  	get_nfs4_file(fp);
> @@ -2004,6 +2009,8 @@ static struct nfs4_client *alloc_client(struct xdr_netobj name)
>  	idr_init(&clp->cl_stateids);
>  	atomic_set(&clp->cl_rpc_users, 0);
>  	clp->cl_cb_state = NFSD4_CB_UNKNOWN;
> +	clp->cl_state = NFSD4_ACTIVE;
> +	atomic_set(&clp->cl_delegs_in_recall, 0);
>  	INIT_LIST_HEAD(&clp->cl_idhash);
>  	INIT_LIST_HEAD(&clp->cl_openowners);
>  	INIT_LIST_HEAD(&clp->cl_delegations);
> @@ -4694,9 +4701,18 @@ nfsd_break_deleg_cb(struct file_lock *fl)
>  	bool ret = false;
>  	struct nfs4_delegation *dp = (struct nfs4_delegation *)fl->fl_owner;
>  	struct nfs4_file *fp = dp->dl_stid.sc_file;
> +	struct nfs4_client *clp = dp->dl_stid.sc_client;
> +	struct nfsd_net *nn;
>  
>  	trace_nfsd_cb_recall(&dp->dl_stid);
>  
> +	dp->dl_recalled = true;
> +	atomic_inc(&clp->cl_delegs_in_recall);
> +	if (try_to_expire_client(clp)) {
> +		nn = net_generic(clp->net, nfsd_net_id);
> +		mod_delayed_work(laundry_wq, &nn->laundromat_work, 0);
> +	}
> +
>  	/*
>  	 * We don't want the locks code to timeout the lease for us;
>  	 * we'll remove it ourself if a delegation isn't returned
> @@ -4739,9 +4755,15 @@ static int
>  nfsd_change_deleg_cb(struct file_lock *onlist, int arg,
>  		     struct list_head *dispose)
>  {
> -	if (arg & F_UNLCK)
> +	struct nfs4_delegation *dp = (struct nfs4_delegation *)onlist->fl_owner;
> +	struct nfs4_client *clp = dp->dl_stid.sc_client;
> +
> +	if (arg & F_UNLCK) {
> +		if (dp->dl_recalled &&
> +			atomic_dec_return(&clp->cl_delegs_in_recall) == 0)
> +			dp->dl_recalled = false;

Why isn't this just

		if (dp->dl_recalled)
			atomic_dec(&clp->cl_delegs_in_recall)

?  I'm not seeing why the case where cl_delegs_in_recall goes to zero
should be special.

Also, from a quick check of fs/locks.c, I don't think
lm_change(.,F_UNLCK,.) will be called more than once, so I don't think
it's necessary to clear dl_recalled.

Other than that, the patch looks good to me.

--b.

>  		return lease_modify(onlist, arg, dispose);
> -	else
> +	} else
>  		return -EAGAIN;
>  }
>  
> @@ -5605,6 +5627,49 @@ static void nfsd4_ssc_expire_umount(struct nfsd_net *nn)
>  }
>  #endif
>  
> +/*
> + * place holder for now, no check for lock blockers yet
> + */
> +static bool
> +nfs4_anylock_blockers(struct nfs4_client *clp)
> +{
> +	if (atomic_read(&clp->cl_delegs_in_recall) ||
> +			client_has_openowners(clp)  ||
> +			!list_empty(&clp->async_copies))
> +		return true;
> +	return false;
> +}
> +
> +static void
> +nfs4_get_client_reaplist(struct nfsd_net *nn, struct list_head *reaplist,
> +				struct laundry_time *lt)
> +{
> +	struct list_head *pos, *next;
> +	struct nfs4_client *clp;
> +
> +	INIT_LIST_HEAD(reaplist);
> +	spin_lock(&nn->client_lock);
> +	list_for_each_safe(pos, next, &nn->client_lru) {
> +		clp = list_entry(pos, struct nfs4_client, cl_lru);
> +		if (clp->cl_state == NFSD4_EXPIRABLE)
> +			goto exp_client;
> +		if (!state_expired(lt, clp->cl_time))
> +			break;
> +		if (!atomic_read(&clp->cl_rpc_users))
> +			clp->cl_state = NFSD4_COURTESY;
> +		if (!client_has_state(clp) ||
> +				ktime_get_boottime_seconds() >=
> +				(clp->cl_time + NFSD_COURTESY_CLIENT_TIMEOUT))
> +			goto exp_client;
> +		if (nfs4_anylock_blockers(clp)) {
> +exp_client:
> +			if (!mark_client_expired_locked(clp))
> +				list_add(&clp->cl_lru, reaplist);
> +		}
> +	}
> +	spin_unlock(&nn->client_lock);
> +}
> +
>  static time64_t
>  nfs4_laundromat(struct nfsd_net *nn)
>  {
> @@ -5627,7 +5692,6 @@ nfs4_laundromat(struct nfsd_net *nn)
>  		goto out;
>  	}
>  	nfsd4_end_grace(nn);
> -	INIT_LIST_HEAD(&reaplist);
>  
>  	spin_lock(&nn->s2s_cp_lock);
>  	idr_for_each_entry(&nn->s2s_cp_stateids, cps_t, i) {
> @@ -5637,17 +5701,7 @@ nfs4_laundromat(struct nfsd_net *nn)
>  			_free_cpntf_state_locked(nn, cps);
>  	}
>  	spin_unlock(&nn->s2s_cp_lock);
> -
> -	spin_lock(&nn->client_lock);
> -	list_for_each_safe(pos, next, &nn->client_lru) {
> -		clp = list_entry(pos, struct nfs4_client, cl_lru);
> -		if (!state_expired(&lt, clp->cl_time))
> -			break;
> -		if (mark_client_expired_locked(clp))
> -			continue;
> -		list_add(&clp->cl_lru, &reaplist);
> -	}
> -	spin_unlock(&nn->client_lock);
> +	nfs4_get_client_reaplist(nn, &reaplist, &lt);
>  	list_for_each_safe(pos, next, &reaplist) {
>  		clp = list_entry(pos, struct nfs4_client, cl_lru);
>  		trace_nfsd_clid_purged(&clp->cl_clientid);
> @@ -5722,7 +5776,6 @@ nfs4_laundromat(struct nfsd_net *nn)
>  	return max_t(time64_t, lt.new_timeo, NFSD_LAUNDROMAT_MINTIMEOUT);
>  }
>  
> -static struct workqueue_struct *laundry_wq;
>  static void laundromat_main(struct work_struct *);
>  
>  static void
> diff --git a/fs/nfsd/nfsd.h b/fs/nfsd/nfsd.h
> index 4fc1fd639527..23996c6ca75e 100644
> --- a/fs/nfsd/nfsd.h
> +++ b/fs/nfsd/nfsd.h
> @@ -336,6 +336,7 @@ void		nfsd_lockd_shutdown(void);
>  #define COMPOUND_ERR_SLACK_SPACE	16     /* OP_SETATTR */
>  
>  #define NFSD_LAUNDROMAT_MINTIMEOUT      1   /* seconds */
> +#define	NFSD_COURTESY_CLIENT_TIMEOUT	(24 * 60 * 60)	/* seconds */
>  
>  /*
>   * The following attributes are currently not supported by the NFSv4 server:
> diff --git a/fs/nfsd/state.h b/fs/nfsd/state.h
> index 95457cfd37fc..f3d6313914ed 100644
> --- a/fs/nfsd/state.h
> +++ b/fs/nfsd/state.h
> @@ -149,6 +149,7 @@ struct nfs4_delegation {
>  /* For recall: */
>  	int			dl_retries;
>  	struct nfsd4_callback	dl_recall;
> +	bool			dl_recalled;
>  };
>  
>  #define cb_to_delegation(cb) \
> @@ -283,6 +284,28 @@ struct nfsd4_sessionid {
>  #define HEXDIR_LEN     33 /* hex version of 16 byte md5 of cl_name plus '\0' */
>  
>  /*
> + *       State                Meaning                  Where set
> + * --------------------------------------------------------------------------
> + * | NFSD4_ACTIVE      | Confirmed, active    | Default                     |
> + * |------------------- ----------------------------------------------------|
> + * | NFSD4_COURTESY    | Courtesy state.      | nfs4_get_client_reaplist    |
> + * |                   | Lease/lock/share     |                             |
> + * |                   | reservation conflict |                             |
> + * |                   | can cause Courtesy   |                             |
> + * |                   | client to be expired |                             |
> + * |------------------------------------------------------------------------|
> + * | NFSD4_EXPIRABLE   | Courtesy client to be| nfs4_laundromat             |
> + * |                   | expired by Laundromat| try_to_expire_client        |
> + * |                   | due to conflict      |                             |
> + * |------------------------------------------------------------------------|
> + */
> +enum {
> +	NFSD4_ACTIVE = 0,
> +	NFSD4_COURTESY,
> +	NFSD4_EXPIRABLE,
> +};
> +
> +/*
>   * struct nfs4_client - one per client.  Clientids live here.
>   *
>   * The initial object created by an NFS client using SETCLIENTID (for NFSv4.0)
> @@ -385,6 +408,9 @@ struct nfs4_client {
>  	struct list_head	async_copies;	/* list of async copies */
>  	spinlock_t		async_lock;	/* lock for async copies */
>  	atomic_t		cl_cb_inflight;	/* Outstanding callbacks */
> +
> +	unsigned int		cl_state;
> +	atomic_t		cl_delegs_in_recall;
>  };
>  
>  /* struct nfs4_client_reset
> @@ -702,4 +728,9 @@ extern void nfsd4_client_record_remove(struct nfs4_client *clp);
>  extern int nfsd4_client_record_check(struct nfs4_client *clp);
>  extern void nfsd4_record_grace_done(struct nfsd_net *nn);
>  
> +static inline bool try_to_expire_client(struct nfs4_client *clp)
> +{
> +	cmpxchg(&clp->cl_state, NFSD4_COURTESY, NFSD4_EXPIRABLE);
> +	return clp->cl_state == NFSD4_EXPIRABLE;
> +}
>  #endif   /* NFSD4_STATE_H */
> -- 
> 2.9.5


* Re: [PATCH RFC v24 2/7] NFSD: add support for share reservation conflict to courteous server
  2022-05-01 17:38 ` [PATCH RFC v24 2/7] NFSD: add support for share reservation conflict to courteous server Dai Ngo
@ 2022-05-02 15:31   ` J. Bruce Fields
  0 siblings, 0 replies; 16+ messages in thread
From: J. Bruce Fields @ 2022-05-02 15:31 UTC (permalink / raw)
  To: Dai Ngo; +Cc: chuck.lever, jlayton, viro, linux-nfs, linux-fsdevel

On Sun, May 01, 2022 at 10:38:11AM -0700, Dai Ngo wrote:
> This patch allows an expired client with open state to be in COURTESY
> state. A share/access conflict with a COURTESY client is resolved by
> setting the COURTESY client to EXPIRABLE state, scheduling the laundromat
> to run, and returning nfserr_jukebox to the requesting client.
> 

Looks good to me.

Reviewed-by: J. Bruce Fields <bfields@fieldses.org>

--b.

> Signed-off-by: Dai Ngo <dai.ngo@oracle.com>
> ---
>  fs/nfsd/nfs4state.c | 109 ++++++++++++++++++++++++++++++++++++++++++++++++----
>  1 file changed, 101 insertions(+), 8 deletions(-)
> 
> diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
> index 917eaab45999..0e98e9c16e3e 100644
> --- a/fs/nfsd/nfs4state.c
> +++ b/fs/nfsd/nfs4state.c
> @@ -694,6 +694,57 @@ static unsigned int file_hashval(struct svc_fh *fh)
>  
>  static struct hlist_head file_hashtbl[FILE_HASH_SIZE];
>  
> +/*
> + * Check if courtesy clients have conflicting access and resolve it if possible
> + *
> + * access:  is op_share_access if share_access is true.
> + *	    Check if access mode, op_share_access, would conflict with
> + *	    the current deny mode of the file 'fp'.
> + * access:  is op_share_deny if share_access is false.
> + *	    Check if the deny mode, op_share_deny, would conflict with
> + *	    current access of the file 'fp'.
> + * stp:     skip checking this entry.
> + * new_stp: normal open, not open upgrade.
> + *
> + * Function returns:
> + *	false - access/deny mode conflict with normal client.
> + *	true  - no conflict or conflict with courtesy client(s) is resolved.
> + */
> +static bool
> +nfs4_resolve_deny_conflicts_locked(struct nfs4_file *fp, bool new_stp,
> +		struct nfs4_ol_stateid *stp, u32 access, bool share_access)
> +{
> +	struct nfs4_ol_stateid *st;
> +	bool resolvable = true;
> +	unsigned char bmap;
> +	struct nfsd_net *nn;
> +	struct nfs4_client *clp;
> +
> +	lockdep_assert_held(&fp->fi_lock);
> +	list_for_each_entry(st, &fp->fi_stateids, st_perfile) {
> +		/* ignore lock stateid */
> +		if (st->st_openstp)
> +			continue;
> +		if (st == stp && new_stp)
> +			continue;
> +		/* check file access against deny mode or vice versa */
> +		bmap = share_access ? st->st_deny_bmap : st->st_access_bmap;
> +		if (!(access & bmap_to_share_mode(bmap)))
> +			continue;
> +		clp = st->st_stid.sc_client;
> +		if (try_to_expire_client(clp))
> +			continue;
> +		resolvable = false;
> +		break;
> +	}
> +	if (resolvable) {
> +		clp = stp->st_stid.sc_client;
> +		nn = net_generic(clp->net, nfsd_net_id);
> +		mod_delayed_work(laundry_wq, &nn->laundromat_work, 0);
> +	}
> +	return resolvable;
> +}
> +
>  static void
>  __nfs4_file_get_access(struct nfs4_file *fp, u32 access)
>  {
> @@ -4969,7 +5020,7 @@ nfsd4_truncate(struct svc_rqst *rqstp, struct svc_fh *fh,
>  
>  static __be32 nfs4_get_vfs_file(struct svc_rqst *rqstp, struct nfs4_file *fp,
>  		struct svc_fh *cur_fh, struct nfs4_ol_stateid *stp,
> -		struct nfsd4_open *open)
> +		struct nfsd4_open *open, bool new_stp)
>  {
>  	struct nfsd_file *nf = NULL;
>  	__be32 status;
> @@ -4985,6 +5036,13 @@ static __be32 nfs4_get_vfs_file(struct svc_rqst *rqstp, struct nfs4_file *fp,
>  	 */
>  	status = nfs4_file_check_deny(fp, open->op_share_deny);
>  	if (status != nfs_ok) {
> +		if (status != nfserr_share_denied) {
> +			spin_unlock(&fp->fi_lock);
> +			goto out;
> +		}
> +		if (nfs4_resolve_deny_conflicts_locked(fp, new_stp,
> +				stp, open->op_share_deny, false))
> +			status = nfserr_jukebox;
>  		spin_unlock(&fp->fi_lock);
>  		goto out;
>  	}
> @@ -4992,6 +5050,13 @@ static __be32 nfs4_get_vfs_file(struct svc_rqst *rqstp, struct nfs4_file *fp,
>  	/* set access to the file */
>  	status = nfs4_file_get_access(fp, open->op_share_access);
>  	if (status != nfs_ok) {
> +		if (status != nfserr_share_denied) {
> +			spin_unlock(&fp->fi_lock);
> +			goto out;
> +		}
> +		if (nfs4_resolve_deny_conflicts_locked(fp, new_stp,
> +				stp, open->op_share_access, true))
> +			status = nfserr_jukebox;
>  		spin_unlock(&fp->fi_lock);
>  		goto out;
>  	}
> @@ -5038,21 +5103,29 @@ static __be32 nfs4_get_vfs_file(struct svc_rqst *rqstp, struct nfs4_file *fp,
>  }
>  
>  static __be32
> -nfs4_upgrade_open(struct svc_rqst *rqstp, struct nfs4_file *fp, struct svc_fh *cur_fh, struct nfs4_ol_stateid *stp, struct nfsd4_open *open)
> +nfs4_upgrade_open(struct svc_rqst *rqstp, struct nfs4_file *fp,
> +		struct svc_fh *cur_fh, struct nfs4_ol_stateid *stp,
> +		struct nfsd4_open *open)
>  {
>  	__be32 status;
>  	unsigned char old_deny_bmap = stp->st_deny_bmap;
>  
>  	if (!test_access(open->op_share_access, stp))
> -		return nfs4_get_vfs_file(rqstp, fp, cur_fh, stp, open);
> +		return nfs4_get_vfs_file(rqstp, fp, cur_fh, stp, open, false);
>  
>  	/* test and set deny mode */
>  	spin_lock(&fp->fi_lock);
>  	status = nfs4_file_check_deny(fp, open->op_share_deny);
>  	if (status == nfs_ok) {
> -		set_deny(open->op_share_deny, stp);
> -		fp->fi_share_deny |=
> +		if (status != nfserr_share_denied) {
> +			set_deny(open->op_share_deny, stp);
> +			fp->fi_share_deny |=
>  				(open->op_share_deny & NFS4_SHARE_DENY_BOTH);
> +		} else {
> +			if (nfs4_resolve_deny_conflicts_locked(fp, false,
> +					stp, open->op_share_deny, false))
> +				status = nfserr_jukebox;
> +		}
>  	}
>  	spin_unlock(&fp->fi_lock);
>  
> @@ -5393,7 +5466,7 @@ nfsd4_process_open2(struct svc_rqst *rqstp, struct svc_fh *current_fh, struct nf
>  			goto out;
>  		}
>  	} else {
> -		status = nfs4_get_vfs_file(rqstp, fp, current_fh, stp, open);
> +		status = nfs4_get_vfs_file(rqstp, fp, current_fh, stp, open, true);
>  		if (status) {
>  			stp->st_stid.sc_type = NFS4_CLOSED_STID;
>  			release_open_stateid(stp);
> @@ -5627,6 +5700,26 @@ static void nfsd4_ssc_expire_umount(struct nfsd_net *nn)
>  }
>  #endif
>  
> +static bool
> +nfs4_has_any_locks(struct nfs4_client *clp)
> +{
> +	int i;
> +	struct nfs4_stateowner *so;
> +
> +	spin_lock(&clp->cl_lock);
> +	for (i = 0; i < OWNER_HASH_SIZE; i++) {
> +		list_for_each_entry(so, &clp->cl_ownerstr_hashtbl[i],
> +				so_strhash) {
> +			if (so->so_is_open_owner)
> +				continue;
> +			spin_unlock(&clp->cl_lock);
> +			return true;
> +		}
> +	}
> +	spin_unlock(&clp->cl_lock);
> +	return false;
> +}
> +
>  /*
>   * place holder for now, no check for lock blockers yet
>   */
> @@ -5634,8 +5727,8 @@ static bool
>  nfs4_anylock_blockers(struct nfs4_client *clp)
>  {
>  	if (atomic_read(&clp->cl_delegs_in_recall) ||
> -			client_has_openowners(clp)  ||
> -			!list_empty(&clp->async_copies))
> +			!list_empty(&clp->async_copies) ||
> +			nfs4_has_any_locks(clp))
>  		return true;
>  	return false;
>  }
> -- 
> 2.9.5


* Re: [PATCH RFC v24 3/7] NFSD: move create/destroy of laundry_wq to init_nfsd and exit_nfsd
  2022-05-01 17:38 ` [PATCH RFC v24 3/7] NFSD: move create/destroy of laundry_wq to init_nfsd and exit_nfsd Dai Ngo
@ 2022-05-02 15:35   ` J. Bruce Fields
  0 siblings, 0 replies; 16+ messages in thread
From: J. Bruce Fields @ 2022-05-02 15:35 UTC (permalink / raw)
  To: Dai Ngo; +Cc: chuck.lever, jlayton, viro, linux-nfs, linux-fsdevel

On Sun, May 01, 2022 at 10:38:12AM -0700, Dai Ngo wrote:
> This patch moves create/destroy of laundry_wq from nfs4_state_start
> and nfs4_state_shutdown_net to init_nfsd and exit_nfsd to prevent
> the laundromat from being freed while a thread is processing a
> conflicting lock.

Reviewed-by: J. Bruce Fields <bfields@fieldses.org>

> 
> Signed-off-by: Dai Ngo <dai.ngo@oracle.com>
> ---
>  fs/nfsd/nfs4state.c | 28 ++++++++++++++++------------
>  fs/nfsd/nfsctl.c    |  4 ++++
>  fs/nfsd/nfsd.h      |  4 ++++
>  3 files changed, 24 insertions(+), 12 deletions(-)
> 
> diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
> index 0e98e9c16e3e..f369142da79f 100644
> --- a/fs/nfsd/nfs4state.c
> +++ b/fs/nfsd/nfs4state.c
> @@ -127,6 +127,21 @@ static const struct nfsd4_callback_ops nfsd4_cb_notify_lock_ops;
>  
>  static struct workqueue_struct *laundry_wq;
>  
> +int nfsd4_create_laundry_wq(void)
> +{
> +	int rc = 0;
> +
> +	laundry_wq = alloc_workqueue("%s", WQ_UNBOUND, 0, "nfsd4");
> +	if (laundry_wq == NULL)
> +		rc = -ENOMEM;
> +	return rc;
> +}
> +
> +void nfsd4_destroy_laundry_wq(void)
> +{
> +	destroy_workqueue(laundry_wq);
> +}
> +
>  static bool is_session_dead(struct nfsd4_session *ses)
>  {
>  	return ses->se_flags & NFS4_SESSION_DEAD;
> @@ -7748,22 +7763,12 @@ nfs4_state_start(void)
>  {
>  	int ret;
>  
> -	laundry_wq = alloc_workqueue("%s", WQ_UNBOUND, 0, "nfsd4");
> -	if (laundry_wq == NULL) {
> -		ret = -ENOMEM;
> -		goto out;
> -	}
>  	ret = nfsd4_create_callback_queue();
>  	if (ret)
> -		goto out_free_laundry;
> +		return ret;
>  
>  	set_max_delegations();
>  	return 0;
> -
> -out_free_laundry:
> -	destroy_workqueue(laundry_wq);
> -out:
> -	return ret;
>  }
>  
>  void
> @@ -7800,7 +7805,6 @@ nfs4_state_shutdown_net(struct net *net)
>  void
>  nfs4_state_shutdown(void)
>  {
> -	destroy_workqueue(laundry_wq);
>  	nfsd4_destroy_callback_queue();
>  }
>  
> diff --git a/fs/nfsd/nfsctl.c b/fs/nfsd/nfsctl.c
> index 16920e4512bd..322a208878f2 100644
> --- a/fs/nfsd/nfsctl.c
> +++ b/fs/nfsd/nfsctl.c
> @@ -1544,6 +1544,9 @@ static int __init init_nfsd(void)
>  	retval = register_cld_notifier();
>  	if (retval)
>  		goto out_free_all;
> +	retval = nfsd4_create_laundry_wq();
> +	if (retval)
> +		goto out_free_all;
>  	return 0;
>  out_free_all:
>  	unregister_pernet_subsys(&nfsd_net_ops);
> @@ -1566,6 +1569,7 @@ static int __init init_nfsd(void)
>  
>  static void __exit exit_nfsd(void)
>  {
> +	nfsd4_destroy_laundry_wq();
>  	unregister_cld_notifier();
>  	unregister_pernet_subsys(&nfsd_net_ops);
>  	nfsd_drc_slab_free();
> diff --git a/fs/nfsd/nfsd.h b/fs/nfsd/nfsd.h
> index 23996c6ca75e..847b482155ae 100644
> --- a/fs/nfsd/nfsd.h
> +++ b/fs/nfsd/nfsd.h
> @@ -162,6 +162,8 @@ void nfs4_state_shutdown_net(struct net *net);
>  int nfs4_reset_recoverydir(char *recdir);
>  char * nfs4_recoverydir(void);
>  bool nfsd4_spo_must_allow(struct svc_rqst *rqstp);
> +int nfsd4_create_laundry_wq(void);
> +void nfsd4_destroy_laundry_wq(void);
>  #else
>  static inline int nfsd4_init_slabs(void) { return 0; }
>  static inline void nfsd4_free_slabs(void) { }
> @@ -175,6 +177,8 @@ static inline bool nfsd4_spo_must_allow(struct svc_rqst *rqstp)
>  {
>  	return false;
>  }
> +static inline int nfsd4_create_laundry_wq(void) { return 0; };
> +static inline void nfsd4_destroy_laundry_wq(void) {};
>  #endif
>  
>  /*
> -- 
> 2.9.5


* Re: [PATCH RFC v24 4/7] fs/lock: add helper locks_owner_has_blockers to check for blockers
  2022-05-01 17:38 ` [PATCH RFC v24 4/7] fs/lock: add helper locks_owner_has_blockers to check for blockers Dai Ngo
@ 2022-05-02 15:43   ` J. Bruce Fields
  0 siblings, 0 replies; 16+ messages in thread
From: J. Bruce Fields @ 2022-05-02 15:43 UTC (permalink / raw)
  To: Dai Ngo; +Cc: chuck.lever, jlayton, viro, linux-nfs, linux-fsdevel

On Sun, May 01, 2022 at 10:38:13AM -0700, Dai Ngo wrote:
> Add helper locks_owner_has_blockers to check if there are any blockers
> for a given lockowner.

Reviewed-by: J. Bruce Fields <bfields@fieldses.org>

> 
> Signed-off-by: Dai Ngo <dai.ngo@oracle.com>
> ---
>  fs/locks.c         | 28 ++++++++++++++++++++++++++++
>  include/linux/fs.h |  7 +++++++
>  2 files changed, 35 insertions(+)
> 
> diff --git a/fs/locks.c b/fs/locks.c
> index 8c6df10cd9ed..c369841ef7d1 100644
> --- a/fs/locks.c
> +++ b/fs/locks.c
> @@ -300,6 +300,34 @@ void locks_release_private(struct file_lock *fl)
>  }
>  EXPORT_SYMBOL_GPL(locks_release_private);
>  
> +/**
> + * locks_owner_has_blockers - Check for blocking lock requests
> + * @flctx: file lock context
> + * @owner: lock owner
> + *
> + * Return values:
> + *   %true: @owner has at least one blocker
> + *   %false: @owner has no blockers
> + */
> +bool locks_owner_has_blockers(struct file_lock_context *flctx,
> +		fl_owner_t owner)
> +{
> +	struct file_lock *fl;
> +
> +	spin_lock(&flctx->flc_lock);
> +	list_for_each_entry(fl, &flctx->flc_posix, fl_list) {
> +		if (fl->fl_owner != owner)
> +			continue;
> +		if (!list_empty(&fl->fl_blocked_requests)) {
> +			spin_unlock(&flctx->flc_lock);
> +			return true;
> +		}
> +	}
> +	spin_unlock(&flctx->flc_lock);
> +	return false;
> +}
> +EXPORT_SYMBOL_GPL(locks_owner_has_blockers);
> +
>  /* Free a lock which is not in use. */
>  void locks_free_lock(struct file_lock *fl)
>  {
> diff --git a/include/linux/fs.h b/include/linux/fs.h
> index bbde95387a23..b8ed7f974fb4 100644
> --- a/include/linux/fs.h
> +++ b/include/linux/fs.h
> @@ -1174,6 +1174,8 @@ extern void lease_unregister_notifier(struct notifier_block *);
>  struct files_struct;
>  extern void show_fd_locks(struct seq_file *f,
>  			 struct file *filp, struct files_struct *files);
> +extern bool locks_owner_has_blockers(struct file_lock_context *flctx,
> +			fl_owner_t owner);
>  #else /* !CONFIG_FILE_LOCKING */
>  static inline int fcntl_getlk(struct file *file, unsigned int cmd,
>  			      struct flock __user *user)
> @@ -1309,6 +1311,11 @@ static inline int lease_modify(struct file_lock *fl, int arg,
>  struct files_struct;
>  static inline void show_fd_locks(struct seq_file *f,
>  			struct file *filp, struct files_struct *files) {}
> +static inline bool locks_owner_has_blockers(struct file_lock_context *flctx,
> +			fl_owner_t owner)
> +{
> +	return false;
> +}
>  #endif /* !CONFIG_FILE_LOCKING */
>  
>  static inline struct inode *file_inode(const struct file *f)
> -- 
> 2.9.5


* Re: [PATCH RFC v24 5/7] fs/lock: add 2 callbacks to lock_manager_operations to resolve conflict
  2022-05-01 17:38 ` [PATCH RFC v24 5/7] fs/lock: add 2 callbacks to lock_manager_operations to resolve conflict Dai Ngo
@ 2022-05-02 15:53   ` J. Bruce Fields
  0 siblings, 0 replies; 16+ messages in thread
From: J. Bruce Fields @ 2022-05-02 15:53 UTC (permalink / raw)
  To: Dai Ngo; +Cc: chuck.lever, jlayton, viro, linux-nfs, linux-fsdevel

On Sun, May 01, 2022 at 10:38:14AM -0700, Dai Ngo wrote:
> Add 2 new callbacks, lm_lock_expirable and lm_expire_lock, to
> lock_manager_operations to allow the lock manager to take appropriate
> action to resolve the lock conflict if possible.
> 
> A new field, lm_mod_owner, is also added to lock_manager_operations.
> The lm_mod_owner field is used by the fs/lock code to make sure the lock
> manager module, such as nfsd, is not freed while a lock conflict is being
> resolved.
> 
> lm_lock_expirable checks whether the lock conflict can be resolved,
> returning true if it can and false otherwise. This callback is called
> with the flc_lock held, so it must not block.
> 
> lm_expire_lock is called to resolve the lock conflict when
> lm_lock_expirable returns true. This callback is called without
> the flc_lock held since it is allowed to block. Upon return from
> this callback, the lock conflict should be resolved and the caller is
> expected to restart the conflict check from the beginning of the list.
> 
> A lock manager, such as the NFSv4 courteous server, uses this callback to
> resolve the conflict by destroying the lock owner, or the NFSv4 courtesy
> client (a client that has expired but is allowed to maintain its state)
> that owns the lock.
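
(Just to illustrate the shape of the hookup: a lock manager opting in would
fill in the new fields roughly like this; the example_* names are made up,
and nfsd's real ops table is wired up in patch 6/7:

	static const struct lock_manager_operations example_mng_ops = {
		.lm_mod_owner      = THIS_MODULE,
		.lm_lock_expirable = example_lock_expirable,
		.lm_expire_lock    = example_expire_lock,
	};
)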
> 
> Signed-off-by: Dai Ngo <dai.ngo@oracle.com>
> ---
>  Documentation/filesystems/locking.rst |  4 ++++
>  fs/locks.c                            | 45 ++++++++++++++++++++++++++++++++---
>  include/linux/fs.h                    |  3 +++
>  3 files changed, 49 insertions(+), 3 deletions(-)
> 
> diff --git a/Documentation/filesystems/locking.rst b/Documentation/filesystems/locking.rst
> index c26d854275a0..0997a258361a 100644
> --- a/Documentation/filesystems/locking.rst
> +++ b/Documentation/filesystems/locking.rst
> @@ -428,6 +428,8 @@ prototypes::
>  	void (*lm_break)(struct file_lock *); /* break_lease callback */
>  	int (*lm_change)(struct file_lock **, int);
>  	bool (*lm_breaker_owns_lease)(struct file_lock *);
> +        bool (*lm_lock_expirable)(struct file_lock *);
> +        void (*lm_expire_lock)(void);
>  
>  locking rules:
>  
> @@ -439,6 +441,8 @@ lm_grant:		no		no			no
>  lm_break:		yes		no			no
>  lm_change		yes		no			no
>  lm_breaker_owns_lease:	yes     	no			no
> +lm_lock_expirable	yes		no			no
> +lm_expire_lock		no		no			yes
>  ======================	=============	=================	=========
>  
>  buffer_head
> diff --git a/fs/locks.c b/fs/locks.c
> index c369841ef7d1..17917da06463 100644
> --- a/fs/locks.c
> +++ b/fs/locks.c
> @@ -902,6 +902,9 @@ posix_test_lock(struct file *filp, struct file_lock *fl)
>  	struct file_lock *cfl;
>  	struct file_lock_context *ctx;
>  	struct inode *inode = locks_inode(filp);
> +	void *owner;
> +	bool ret;
> +	void (*func)(void);
>  
>  	ctx = smp_load_acquire(&inode->i_flctx);
>  	if (!ctx || list_empty_careful(&ctx->flc_posix)) {
> @@ -909,12 +912,28 @@ posix_test_lock(struct file *filp, struct file_lock *fl)
>  		return;
>  	}
>  
> +retry:
>  	spin_lock(&ctx->flc_lock);
>  	list_for_each_entry(cfl, &ctx->flc_posix, fl_list) {
> -		if (posix_locks_conflict(fl, cfl)) {
> -			locks_copy_conflock(fl, cfl);
> -			goto out;
> +		if (!posix_locks_conflict(fl, cfl))
> +			continue;
> +		if (cfl->fl_lmops && cfl->fl_lmops->lm_mod_owner &&
> +				cfl->fl_lmops->lm_lock_expirable &&
> +				cfl->fl_lmops->lm_expire_lock) {
> +			ret = (*cfl->fl_lmops->lm_lock_expirable)(cfl);
> +			if (!ret)
> +				goto conflict;
> +			owner = cfl->fl_lmops->lm_mod_owner;
> +			func = cfl->fl_lmops->lm_expire_lock;
> +			__module_get(owner);
> +			spin_unlock(&ctx->flc_lock);
> +			(*func)();
> +			module_put(owner);
> +			goto retry;
>  		}
> +conflict:
> +		locks_copy_conflock(fl, cfl);
> +		goto out;
>  	}
>  	fl->fl_type = F_UNLCK;
>  out:
> @@ -1088,6 +1107,9 @@ static int posix_lock_inode(struct inode *inode, struct file_lock *request,
>  	int error;
>  	bool added = false;
>  	LIST_HEAD(dispose);
> +	void *owner;
> +	bool ret;
> +	void (*func)(void);
>  
>  	ctx = locks_get_lock_context(inode, request->fl_type);
>  	if (!ctx)
> @@ -1106,6 +1128,7 @@ static int posix_lock_inode(struct inode *inode, struct file_lock *request,
>  		new_fl2 = locks_alloc_lock();
>  	}
>  
> +retry:
>  	percpu_down_read(&file_rwsem);
>  	spin_lock(&ctx->flc_lock);
>  	/*
> @@ -1117,6 +1140,22 @@ static int posix_lock_inode(struct inode *inode, struct file_lock *request,
>  		list_for_each_entry(fl, &ctx->flc_posix, fl_list) {
>  			if (!posix_locks_conflict(request, fl))
>  				continue;
> +			if (fl->fl_lmops && fl->fl_lmops->lm_mod_owner &&

The check for lm_mod_owner isn't necessary.

> +					fl->fl_lmops->lm_lock_expirable &&
> +					fl->fl_lmops->lm_expire_lock) {

Let's also drop the check for lm_expire_lock.  Any lock manager defining
one of those methods must define the other.

> +				ret = (*fl->fl_lmops->lm_lock_expirable)(fl);
> +				if (!ret)
> +					goto conflict;

So, I would make this:

			if (fl->fl_lmops && fl->fl_lmops->lm_lock_expirable
				&& fl->fl_lmops->lm_lock_expirable(fl))

and leave out the "conflict" goto.
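
I.e., something like this (untested; ret and the conflict label go away,
and the rest of the conflict handling is unchanged):

			if (fl->fl_lmops && fl->fl_lmops->lm_lock_expirable &&
			    fl->fl_lmops->lm_lock_expirable(fl)) {
				owner = fl->fl_lmops->lm_mod_owner;
				func = fl->fl_lmops->lm_expire_lock;
				__module_get(owner);
				spin_unlock(&ctx->flc_lock);
				percpu_up_read(&file_rwsem);
				(*func)();
				module_put(owner);
				goto retry;
			}
			if (conflock)
				locks_copy_conflock(conflock, fl);
			error = -EAGAIN;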

With that change, feel free to add

Reviewed-by: J. Bruce Fields <bfields@fieldses.org>

--b.

> +				owner = fl->fl_lmops->lm_mod_owner;
> +				func = fl->fl_lmops->lm_expire_lock;
> +				__module_get(owner);
> +				spin_unlock(&ctx->flc_lock);
> +				percpu_up_read(&file_rwsem);
> +				(*func)();
> +				module_put(owner);
> +				goto retry;
> +			}
> +conflict:
>  			if (conflock)
>  				locks_copy_conflock(conflock, fl);
>  			error = -EAGAIN;
> diff --git a/include/linux/fs.h b/include/linux/fs.h
> index b8ed7f974fb4..aa6c1bbdb8c4 100644
> --- a/include/linux/fs.h
> +++ b/include/linux/fs.h
> @@ -1029,6 +1029,7 @@ struct file_lock_operations {
>  };
>  
>  struct lock_manager_operations {
> +	void *lm_mod_owner;
>  	fl_owner_t (*lm_get_owner)(fl_owner_t);
>  	void (*lm_put_owner)(fl_owner_t);
>  	void (*lm_notify)(struct file_lock *);	/* unblock callback */
> @@ -1037,6 +1038,8 @@ struct lock_manager_operations {
>  	int (*lm_change)(struct file_lock *, int, struct list_head *);
>  	void (*lm_setup)(struct file_lock *, void **);
>  	bool (*lm_breaker_owns_lease)(struct file_lock *);
> +	bool (*lm_lock_expirable)(struct file_lock *cfl);
> +	void (*lm_expire_lock)(void);
>  };
>  
>  struct lock_manager {
> -- 
> 2.9.5


* Re: [PATCH RFC v24 6/7] NFSD: add support for lock conflict to courteous server
  2022-05-01 17:38 ` [PATCH RFC v24 6/7] NFSD: add support for lock conflict to courteous server Dai Ngo
@ 2022-05-02 16:00   ` J. Bruce Fields
  0 siblings, 0 replies; 16+ messages in thread
From: J. Bruce Fields @ 2022-05-02 16:00 UTC (permalink / raw)
  To: Dai Ngo; +Cc: chuck.lever, jlayton, viro, linux-nfs, linux-fsdevel

On Sun, May 01, 2022 at 10:38:15AM -0700, Dai Ngo wrote:
> This patch allows an expired client with lock state to be in COURTESY
> state. A lock conflict with a COURTESY client is resolved by the fs/lock
> code using the lm_lock_expirable and lm_expire_lock callbacks in
> struct lock_manager_operations.
> 
> If the conflicting client is in COURTESY state, it is set to EXPIRABLE
> and the laundromat is scheduled to run immediately to expire the client.
> The callback lm_expire_lock waits for the laundromat to flush its work
> queue before returning to the caller.

Reviewed-by: J. Bruce Fields <bfields@fieldses.org>

(These searches over hash tables that we're adding in a few places are
inefficient, but I'm assuming it won't matter.  And I don't have a
better idea off the top of my head.  So I'm fine with just doing this
instead of optimizing prematurely.)

--b.

> 
> Signed-off-by: Dai Ngo <dai.ngo@oracle.com>
> ---
>  fs/nfsd/nfs4state.c | 70 +++++++++++++++++++++++++++++++++++++++++------------
>  1 file changed, 54 insertions(+), 16 deletions(-)
> 
> diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
> index f369142da79f..4ab7dda44f38 100644
> --- a/fs/nfsd/nfs4state.c
> +++ b/fs/nfsd/nfs4state.c
> @@ -5715,39 +5715,51 @@ static void nfsd4_ssc_expire_umount(struct nfsd_net *nn)
>  }
>  #endif
>  
> +/* Check if any lock belonging to this lockowner has any blockers */
>  static bool
> -nfs4_has_any_locks(struct nfs4_client *clp)
> +nfs4_lockowner_has_blockers(struct nfs4_lockowner *lo)
> +{
> +	struct file_lock_context *ctx;
> +	struct nfs4_ol_stateid *stp;
> +	struct nfs4_file *nf;
> +
> +	list_for_each_entry(stp, &lo->lo_owner.so_stateids, st_perstateowner) {
> +		nf = stp->st_stid.sc_file;
> +		ctx = nf->fi_inode->i_flctx;
> +		if (!ctx)
> +			continue;
> +		if (locks_owner_has_blockers(ctx, lo))
> +			return true;
> +	}
> +	return false;
> +}
> +
> +static bool
> +nfs4_anylock_blockers(struct nfs4_client *clp)
>  {
>  	int i;
>  	struct nfs4_stateowner *so;
> +	struct nfs4_lockowner *lo;
>  
> +	if (atomic_read(&clp->cl_delegs_in_recall))
> +		return true;
>  	spin_lock(&clp->cl_lock);
>  	for (i = 0; i < OWNER_HASH_SIZE; i++) {
>  		list_for_each_entry(so, &clp->cl_ownerstr_hashtbl[i],
>  				so_strhash) {
>  			if (so->so_is_open_owner)
>  				continue;
> -			spin_unlock(&clp->cl_lock);
> -			return true;
> +			lo = lockowner(so);
> +			if (nfs4_lockowner_has_blockers(lo)) {
> +				spin_unlock(&clp->cl_lock);
> +				return true;
> +			}
>  		}
>  	}
>  	spin_unlock(&clp->cl_lock);
>  	return false;
>  }
>  
> -/*
> - * place holder for now, no check for lock blockers yet
> - */
> -static bool
> -nfs4_anylock_blockers(struct nfs4_client *clp)
> -{
> -	if (atomic_read(&clp->cl_delegs_in_recall) ||
> -			!list_empty(&clp->async_copies) ||
> -			nfs4_has_any_locks(clp))
> -		return true;
> -	return false;
> -}
> -
>  static void
>  nfs4_get_client_reaplist(struct nfsd_net *nn, struct list_head *reaplist,
>  				struct laundry_time *lt)
> @@ -6712,6 +6724,29 @@ nfsd4_lm_put_owner(fl_owner_t owner)
>  		nfs4_put_stateowner(&lo->lo_owner);
>  }
>  
> +/* return pointer to struct nfs4_client if client is expirable */
> +static bool
> +nfsd4_lm_lock_expirable(struct file_lock *cfl)
> +{
> +	struct nfs4_lockowner *lo = (struct nfs4_lockowner *)cfl->fl_owner;
> +	struct nfs4_client *clp = lo->lo_owner.so_client;
> +	struct nfsd_net *nn;
> +
> +	if (try_to_expire_client(clp)) {
> +		nn = net_generic(clp->net, nfsd_net_id);
> +		mod_delayed_work(laundry_wq, &nn->laundromat_work, 0);
> +		return true;
> +	}
> +	return false;
> +}
> +
> +/* schedule laundromat to run immediately and wait for it to complete */
> +static void
> +nfsd4_lm_expire_lock(void)
> +{
> +	flush_workqueue(laundry_wq);
> +}
> +
>  static void
>  nfsd4_lm_notify(struct file_lock *fl)
>  {
> @@ -6738,9 +6773,12 @@ nfsd4_lm_notify(struct file_lock *fl)
>  }
>  
>  static const struct lock_manager_operations nfsd_posix_mng_ops  = {
> +	.lm_mod_owner = THIS_MODULE,
>  	.lm_notify = nfsd4_lm_notify,
>  	.lm_get_owner = nfsd4_lm_get_owner,
>  	.lm_put_owner = nfsd4_lm_put_owner,
> +	.lm_lock_expirable = nfsd4_lm_lock_expirable,
> +	.lm_expire_lock = nfsd4_lm_expire_lock,
>  };
>  
>  static inline void
> -- 
> 2.9.5


* Re: [PATCH RFC v24 7/7] NFSD: Show state of courtesy client in client info
  2022-05-01 17:38 ` [PATCH RFC v24 7/7] NFSD: Show state of courtesy client in client info Dai Ngo
@ 2022-05-02 16:06   ` J. Bruce Fields
  0 siblings, 0 replies; 16+ messages in thread
From: J. Bruce Fields @ 2022-05-02 16:06 UTC (permalink / raw)
  To: Dai Ngo; +Cc: chuck.lever, jlayton, viro, linux-nfs, linux-fsdevel

On Sun, May 01, 2022 at 10:38:16AM -0700, Dai Ngo wrote:
> Update client_info_show to show state of courtesy client
> and time since last renew.

At this point I may be borderline woodshedding, but: for simplicity's
sake, let's just keep that time as a number of seconds.  I'm thinking
that'll make it marginally easier for people processing the output and
doing comparisons and such.
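
E.g., something like (untested):

	seq_printf(m, "seconds from last renew: %lld\n",
		   ktime_get_boottime_seconds() - clp->cl_time);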

--b.

> 
> Signed-off-by: Dai Ngo <dai.ngo@oracle.com>
> ---
>  fs/nfsd/nfs4state.c | 14 ++++++++++++--
>  1 file changed, 12 insertions(+), 2 deletions(-)
> 
> diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
> index 4ab7dda44f38..9cff06fc3600 100644
> --- a/fs/nfsd/nfs4state.c
> +++ b/fs/nfsd/nfs4state.c
> @@ -2473,7 +2473,8 @@ static int client_info_show(struct seq_file *m, void *v)
>  {
>  	struct inode *inode = m->private;
>  	struct nfs4_client *clp;
> -	u64 clid;
> +	u64 clid, hrs;
> +	u32 mins, secs;
>  
>  	clp = get_nfsdfs_clp(inode);
>  	if (!clp)
> @@ -2481,10 +2482,19 @@ static int client_info_show(struct seq_file *m, void *v)
>  	memcpy(&clid, &clp->cl_clientid, sizeof(clid));
>  	seq_printf(m, "clientid: 0x%llx\n", clid);
>  	seq_printf(m, "address: \"%pISpc\"\n", (struct sockaddr *)&clp->cl_addr);
> -	if (test_bit(NFSD4_CLIENT_CONFIRMED, &clp->cl_flags))
> +
> +	if (clp->cl_state == NFSD4_COURTESY)
> +		seq_puts(m, "status: courtesy\n");
> +	else if (clp->cl_state == NFSD4_EXPIRABLE)
> +		seq_puts(m, "status: expirable\n");
> +	else if (test_bit(NFSD4_CLIENT_CONFIRMED, &clp->cl_flags))
>  		seq_puts(m, "status: confirmed\n");
>  	else
>  		seq_puts(m, "status: unconfirmed\n");
> +	hrs = div_u64_rem(ktime_get_boottime_seconds() - clp->cl_time,
> +				3600, &secs);
> +	mins = div_u64_rem((u64)secs, 60, &secs);
> +	seq_printf(m, "time since last renew: %llu:%02u:%02u\n", hrs, mins, secs);
>  	seq_printf(m, "name: ");
>  	seq_quote_mem(m, clp->cl_name.data, clp->cl_name.len);
>  	seq_printf(m, "\nminor version: %d\n", clp->cl_minorversion);
> -- 
> 2.9.5


* Re: [PATCH RFC v24 0/7] NFSD: Initial implementation of NFSv4 Courteous Server
  2022-05-01 17:38 [PATCH RFC v24 0/7] NFSD: Initial implementation of NFSv4 Courteous Server Dai Ngo
                   ` (6 preceding siblings ...)
  2022-05-01 17:38 ` [PATCH RFC v24 7/7] NFSD: Show state of courtesy client in client info Dai Ngo
@ 2022-05-02 16:07 ` J. Bruce Fields
  7 siblings, 0 replies; 16+ messages in thread
From: J. Bruce Fields @ 2022-05-02 16:07 UTC (permalink / raw)
  To: Dai Ngo; +Cc: chuck.lever, jlayton, viro, linux-nfs, linux-fsdevel

On Sun, May 01, 2022 at 10:38:09AM -0700, Dai Ngo wrote:
> Hi Chuck, Bruce
> 
> This series of patches implement the NFSv4 Courteous Server.

Looks good, my only remaining comments (see other email) are minor--with
those fixed, feel free to add my reviewed-by: for the series.

--b.

> 
> A server which does not immediately expunge the state on lease expiration
> is known as a Courteous Server.  A Courteous Server continues to recognize
> previously generated state tokens as valid until conflict arises between
> the expired state and the requests from another client, or the server
> reboots.
> 
> v2:
> 
> . add new callback, lm_expire_lock, to lock_manager_operations to
>   allow the lock manager to take appropriate action with conflict lock.
> 
> . handle conflicts of NFSv4 locks with NFSv3/NLM and local locks.
> 
> . expire courtesy client after 24hr if client has not reconnected.
> 
> . do not allow expired client to become courtesy client if there are
>   waiters for client's locks.
> 
> . modify client_info_show to show courtesy client and seconds from
>   last renew.
> 
> . fix a problem with NFSv4.1 server where the it keeps returning
>   SEQ4_STATUS_CB_PATH_DOWN in the successful SEQUENCE reply, after
>   the courtesy client reconnects, causing the client to keep sending
>   BCTS requests to server.
> 
> v3:
> 
> . modified posix_test_lock to check and resolve conflict locks
>   to handle NLM TEST and NFSv4 LOCKT requests.
> 
> . separate out fix for back channel stuck in SEQ4_STATUS_CB_PATH_DOWN.
> 
> v4:
> 
> . rework nfsd_check_courtesy to avoid dead lock of fl_lock and client_lock
>   by asking the laudromat thread to destroy the courtesy client.
> 
> . handle NFSv4 share reservation conflicts with courtesy client. This
>   includes conflicts between access mode and deny mode and vice versa.
> 
> . drop the patch for back channel stuck in SEQ4_STATUS_CB_PATH_DOWN.
> 
> v5:
> 
> . fix recursive locking of file_rwsem from posix_lock_file. 
> 
> . retest with LOCKDEP enabled.
> 
> v6:
> 
> . merge witn 5.15-rc7
> 
> . fix a bug in nfs4_check_deny_bmap that did not check for matched
>   nfs4_file before checking for access/deny conflict. This bug causes
>   pynfs OPEN18 to fail since the server taking too long to release
>   lots of un-conflict clients' state.
> 
> . enhance share reservation conflict handler to handle case where
>   a large number of conflict courtesy clients need to be expired.
>   The 1st 100 clients are expired synchronously and the rest are
>   expired in the background by the laundromat and NFS4ERR_DELAY
>   is returned to the NFS client. This is needed to prevent the
>   NFS client from timing out waiting got the reply.
> 
> v7:
> 
> . Fix race condition in posix_test_lock and posix_lock_inode after
>   dropping spinlock.
> 
> . Enhance nfsd4_fl_expire_lock to work with with new lm_expire_lock
>   callback
> 
> . Always resolve share reservation conflicts asynchrously.
> 
> . Fix bug in nfs4_laundromat where spinlock is not used when
>   scanning cl_ownerstr_hashtbl.
> 
> . Fix bug in nfs4_laundromat where idr_get_next was called
>   with incorrect 'id'. 
> 
> . Merge nfs4_destroy_courtesy_client into nfsd4_fl_expire_lock.
> 
> v8:
> 
> . Fix warning in nfsd4_fl_expire_lock reported by test robot.
> 
> v9:
> 
> . Simplify lm_expire_lock API by (1) remove the 'testonly' flag
>   and (2) specifying return value as true/false to indicate
>   whether conflict was succesfully resolved.
> 
> . Rework nfsd4_fl_expire_lock to mark client with
>   NFSD4_DESTROY_COURTESY_CLIENT then tell the laundromat to expire
>   the client in the background.
> 
> . Add a spinlock in nfs4_client to synchronize access to the
>   NFSD4_COURTESY_CLIENT and NFSD4_DESTROY_COURTESY_CLIENT flag to
>   handle race conditions when resolving lock and share reservation
>   conflict.
> 
> . A courtesy client that was marked as NFSD4_DESTROY_COURTESY_CLIENT
>   is now considered 'dead', waiting for the laundromat to expire
>   it. This client is no longer allowed to use its states if it
>   reconnects before the laundromat finishes expiring the client.
> 
>   For v4.1 client, the detection is done in the processing of the
>   SEQUENCE op and returns NFS4ERR_BAD_SESSION to force the client
>   to re-establish new clientid and session.
>   For v4.0 client, the detection is done in the processing of the
>   RENEW and state-related ops and return NFS4ERR_EXPIRE to force
>   the client to re-establish new clientid.
> 
> v10:
> 
>   Resolve deadlock in v9 by avoiding getting cl_client and
>   cl_cs_lock together. The laundromat needs to determine whether
>   the expired client has any state and also has no blockers on
>   its locks. Both of these conditions are allowed to change after
>   the laundromat transits an expired client to courtesy client.
>   When this happens, the laundromat will detect it on the next
>   run and expire the courtesy client.
> 
>   Remove client persistent record before marking it as COURTESY_CLIENT
>   and add client persistent record before clearing the COURTESY_CLIENT
>   flag to allow the courtesy client to transition to a normal client and
>   continue to use its state.
> 
>   Lock/delegation/share reservation conflict with a courtesy client is
>   resolved by marking the courtesy client as DESTROY_COURTESY_CLIENT,
>   effectively disabling it, then allowing the current request to proceed
>   immediately.
>   
>   A courtesy client marked as DESTROY_COURTESY_CLIENT is not allowed
>   to reconnect to reuse its state. It is expired by the laundromat
>   asynchronously in the background.
> 
>   Move processing of expired clients from nfs4_laundromat to a
>   separate function, nfs4_get_client_reaplist, that creates the
>   reaplist and also to process courtesy clients.
> 
>   Update Documentation/filesystems/locking.rst to include new
>   lm_lock_conflict call.
> 
>   Modify leases_conflict to call lm_breaker_owns_lease only if
>   there is a real conflict.  This is to allow the lock manager to
>   resolve the delegation conflict if possible.
> 
> v11:
> 
>   Add comment for lm_lock_conflict callback.
> 
>   Replace static const courtesy_client_expiry with macro.
> 
>   Remove courtesy_clnt argument from find_in_sessionid_hashtbl.
>   Callers use nfs4_client->cl_cs_client boolean to determined if
>   Callers use the nfs4_client->cl_cs_client boolean to determine if
> 
>   Rename NFSD4_COURTESY_CLIENT and NFSD4_DESTROY_COURTESY_CLIENT
>   with NFSD4_CLIENT_COURTESY and NFSD4_CLIENT_DESTROY_COURTESY.
> 
> v12:
> 
>   Remove unnecessary comment in nfs4_get_client_reaplist.
> 
>   Replace nfs4_client->cl_cs_client boolean with
>   NFSD4_CLIENT_COURTESY_CLNT flag.
> 
>   Remove courtesy_clnt argument from find_client_in_id_table and
>   find_clp_in_name_tree. Callers use NFSD4_CLIENT_COURTESY_CLNT to
>   determine if it's the courtesy client and take appropriate actions.
> 
> v13:
> 
>   Merge with 5.17-rc3.
> 
>   Cleanup Documentation/filesystems/locking.rst: replace i_lock
>   with flc_lock, update APIs that use flc_lock.
> 
>   Rename lm_lock_conflict to lm_lock_expired().
> 
>   Remove comment of lm_lock_expired API in lock_manager_operations.
>   Same information is in patch description.
> 
>   Update commit messages of 4/4.
> 
>   Add some comment for NFSD4_CLIENT_COURTESY_CLNT.
> 
>   Add nfsd4_discard_courtesy_clnt() to eliminate duplicate code of
>   discarding courtesy client; setting NFSD4_DESTROY_COURTESY_CLIENT.
> 
> v14:
> 
> . merge with Chuck's public for-next branch.
> 
> . remove courtesy_client_expiry, use client's last renew time.
> 
> . simplify comment of nfs4_check_access_deny_bmap.
> 
> . add comment about race condition in nfs4_get_client_reaplist.
> 
> . add list_del when walking cslist in nfs4_get_client_reaplist.
> 
> . remove duplicate INIT_LIST_HEAD(&reaplist) from nfs4_laundromat
> 
> . Modify find_confirmed_client and find_confirmed_client_by_name
>   to detect courtesy client and destroy it.
> 
> . refactor lookup_clientid to use find_client_in_id_table
>   directly instead of find_confirmed_client.
> 
> . refactor nfsd4_setclientid to call find_clp_in_name_tree
>   directly instead of find_confirmed_client_by_name.
> 
> . remove comment of NFSD4_CLIENT_COURTESY.
> 
> . replace NFSD4_CLIENT_DESTROY_COURTESY with NFSD4_CLIENT_EXPIRED.
> 
> . replace NFSD4_CLIENT_COURTESY_CLNT with NFSD4_CLIENT_RECONNECTED.
> 
> v15:
> 
> . add helper locks_has_blockers_locked in fs.h to check for
>   lock blockers
> 
> . rename nfs4_conflict_clients to nfs4_resolve_deny_conflicts_locked
> 
> . update nfs4_upgrade_open() to handle courtesy clients.
> 
> . add helper nfs4_check_and_expire_courtesy_client and
>   nfs4_is_courtesy_client_expired to deduplicate some code.
> 
> . update nfs4_anylock_blocker:
>    . replace list_for_each_entry_safe with list_for_each_entry
>    . break nfs4_anylock_blocker into 2 smaller functions.
> 
> . update nfs4_get_client_reaplist:
>    . remove unnecessary comments
>    . acquire cl_cs_lock before setting NFSD4_CLIENT_COURTESY flag
> 
> . update client_info_show to show 'time since last renew: 00:00:38'
>   instead of 'seconds from last renew: 38'.
> 
> v16:
> 
> . update client_info_show to display 'status' as
>   'confirmed/unconfirmed/courtesy'
> 
> . replace helper locks_has_blockers_locked in fs.h in v15 with new
>   locks_owner_has_blockers call in fs/locks.c
> 
> . update nfs4_lockowner_has_blockers to use locks_owner_has_blockers
> 
> . move nfs4_check_and_expire_courtesy_client from 5/11 to 4/11
> 
> . remove unnecessary check for NULL clp in find_in_sessionid_hashtbl
> 
> . fix typo in commit messages
> 
> v17:
> 
> . replace flags used for courtesy client with enum courtesy_client_state
> 
> . add state table in nfsd/state.h
> 
> . make nfsd4_expire_courtesy_clnt, nfsd4_discard_courtesy_clnt and
>   nfsd4_courtesy_clnt_expired as static inline.
> 
> . update nfsd_breaker_owns_lease to use dl->dl_stid.sc_client directly
> 
> . fix kernel test robot warning when CONFIG_FILE_LOCKING not defined.
> 
> v18:
> 
> . modify 0005-NFSD-Update-nfs4_get_vfs_file-to-handle-courtesy-cli.patch to:
> 
>     . remove nfs4_check_access_deny_bmap, fold this functionality
>       into nfs4_resolve_deny_conflicts_locked by making use of
>       bmap_to_share_mode.
> 
>     . move nfs4_resolve_deny_conflicts_locked into nfs4_file_get_access
>       and nfs4_file_check_deny. 
> 
> v19:
> 
> . modify 0002-NFSD-Add-courtesy-client-state-macro-and-spinlock-to.patch to
> 
>     . add NFSD4_CLIENT_ACTIVE
> 
>     . redo Courtesy client state table
> 
> . modify 0007-NFSD-Update-find_in_sessionid_hashtbl-to-handle-cour.patch and
>   0008-NFSD-Update-find_client_in_id_table-to-handle-courte.patch to:
> 
> 	. set cl_cs_client_state to NFSD4_CLIENT_ACTIVE when reactivating a
>       courtesy client
> 
> v20:
> 
> . modify 0006-NFSD-Update-find_clp_in_name_tree-to-handle-courtesy.patch to:
> 	. add nfsd4_discard_reconnect_clnt
> 	. replace call to nfsd4_discard_courtesy_clnt with
> 	  nfsd4_discard_reconnect_clnt
> 
> . modify 0007-NFSD-Update-find_in_sessionid_hashtbl-to-handle-cour.patch to:
> 	. replace call to nfsd4_discard_courtesy_clnt with
> 	  nfsd4_discard_reconnect_clnt
>           
> . modify 0008-NFSD-Update-find_client_in_id_table-to-handle-courte.patch
> 	. replace call to nfsd4_discard_courtesy_clnt with
> 	  nfsd4_discard_reconnect_clnt
> 
> v21:
> 
> . merged with 5.18.0-rc3
> 
> . Redo based on Bruce's suggestion by breaking the patches up by functionality
>   and also not removing the client record of a courtesy client until the client
>   is actually expired.
> 
>   0001: courteous server framework with support for client with delegation only.
>         This patch also handles COURTESY and EXPIRABLE reconnect.
>         Conflict is resolved by setting the courtesy client to EXPIRABLE,
>         letting the laundromat expire the client on its next run, and returning
>         NFS4ERR_DELAY to the OPEN request.
> 
>   0002: add support for opens/share reservation to courteous server
>         Conflict is resolved by setting the courtesy client to EXPIRABLE,
>         letting the laundromat expire the client on its next run, and returning
>         NFS4ERR_DELAY to the OPEN request.
> 
>   0003: move creation/destruction of the laundromat workqueue from nfs4_state_start
>         and nfs4_state_shutdown_net to init_nfsd and exit_nfsd.
> 
>   0004: fs/lock: add locks_owner_has_blockers helper
>   
>   0005: add 2 callbacks to lock_manager_operations for resolving lock conflict
>   
>   0006: add support for locks to courteous server, making use of 0004 and 0005
>         Conflict is resolved by setting the courtesy client to EXPIRABLE, running
>         the laundromat immediately, and waiting for it to complete before returning
>         to the fs/lock code to recheck the lock list from the beginning.
> 
>         NOTE: I could not get queue_work/queue_delayed_work and flush_workqueue
>         to work as expected; I had to use mod_delayed_work and flush_workqueue
>         to get the laundromat to run immediately.
> 
>         When we check for blockers in nfs4_anylock_blockers, we do not check
>         for clients with a delegation conflict. This is because we already hold
>         the client_lock, and checking for delegation conflicts would require the
>         state_lock and scanning the del_recall_lru list each time. So to avoid
>         this overhead and potential deadlock (not sure about the ordering of
>         these locks) we check and set COURTESY clients with delegations being
>         recalled to EXPIRABLE later in nfs4_laundromat.
> 
>   0007: show state of courtesy client in client info.
> 
> v22:
> 
> . modify 0001:
> 	. allow EXPIRABLE client to reconnect.
>         . modify try_to_expire_client to return false if cl_state is
>           either COURTESY or EXPIRABLE.
>         . remove try_to_activate_client and set cl_state to ACTIVE in
>           get_client_locked and renew_client_locked.
>         . remove unnecessary cl_cs_lock. Synchronization between expiring
>           client and client reconnect is provided by mark_client_expired_locked
>           and get_client_locked or renew_client_locked
> 
> . modify 0003:
>         . fix 'ld' error with laundry_wq when CONFIG_NFSD is defined
>           and CONFIG_NFSD_V4 is not defined.
> 
> v23:
> 	. rework try_to_expire_client to return true when cl_state is EXPIRABLE
> 	  and update its callers to work accordingly.
> 
> 	. add missing mod_delayed_work call in nfsd4_lm_lock_expirable.
> 
> 	. add check for cl_rpc_users before setting client state to COURTESY
> 	  in nfs4_get_client_reaplist.
> 
>         . set the client to COURTESY before nfs4_anylock_blockers to handle the
>           race between the laundromat and the thread resolving a lock conflict.
> 
> 	. clean up the 2 fs/lock callbacks: lm_lock_expirable now returns bool and
> 	  lm_expire_lock now takes no argument.
> v24:
> 	. add a new counter, cl_delegs_in_recall, in nfs4_client to track
> 	  delegation recalls; it is checked by nfs4_anylock_blockers.
> 
> 	. remove resolve_lock_conflict_locked and move its logic into the
> 	  callers posix_lock_inode and posix_test_lock for clarity.
> 
> 	. rename 'conflict' to 'resolvable' in nfs4_resolve_deny_conflicts_locked.
> 
> 	. fix kernel robot test warning about missing semicolon in nfsd.h
> 
