* [PATCH 00/14] Candidates for 3.6
@ 2012-07-09 15:43 Chuck Lever
  2012-07-09 15:43 ` [PATCH 01/14] NFS: Clean up TEST_STATEID and FREE_STATEID proc error reporting Chuck Lever
                   ` (13 more replies)
  0 siblings, 14 replies; 31+ messages in thread
From: Chuck Lever @ 2012-07-09 15:43 UTC
  To: trond.myklebust; +Cc: linux-nfs

Hi-

Here are the TEST_STATEID clean-ups and patches to implement UCS
(the uniform client string approach).
These have been unit tested, but I'm sure I haven't tried every use
case.  I'd like you to consider these for 3.6.

There is one last bug to fix: rpcauth_create() returns -EEXIST when
replacing an RPC client's rpc_auth, if both the existing and new
rpc_auth are RPC_AUTH_GSS.  I'm actively working on that now, and should
post a solution for review this week or next.

The final patch in this series is a hack, posted for discussion.

---

Chuck Lever (14):
      NFS: Slow down state manager after an unhandled error
      NFS: Clean up debugging messages in fs/nfs/client.c
      NFS: Add nfs4_unique_id boot parameter
      NFS: Discover NFSv4 server trunking when mounting
      NFS: Use the same nfs_client_id4 for every server
      NFS: Introduce "migration" mount option
      SUNRPC: Add rpcauth_list_flavors()
      NFS: Clean up nfs4_proc_setclientid() and friends
      NFS: Treat NFS4ERR_CLID_INUSE as a fatal error
      NFS: nfs_getaclargs.acl_len is a size_t
      NFS: Refactor nfs41_check_expired_stateid()
      NFS: State reclaim clears OPEN and LOCK state
      NFS: Properly sort TEST_STATEID results
      NFS: Clean up TEST_STATEID and FREE_STATEID proc error reporting


 Documentation/filesystems/nfs/nfs.txt |   44 +++++
 Documentation/kernel-parameters.txt   |    5 +
 fs/nfs/client.c                       |  282 +++++++++++++++++++++++++++++++
 fs/nfs/idmap.c                        |    6 +
 fs/nfs/internal.h                     |    6 +
 fs/nfs/nfs4_fs.h                      |    8 +
 fs/nfs/nfs4proc.c                     |  296 +++++++++++++++++++++++++--------
 fs/nfs/nfs4state.c                    |  198 +++++++++++++++++++++-
 fs/nfs/super.c                        |   20 ++
 include/linux/nfs_fs_sb.h             |    5 -
 include/linux/sunrpc/auth.h           |    2 
 include/linux/sunrpc/gss_api.h        |    3 
 net/sunrpc/auth.c                     |   54 ++++++
 net/sunrpc/auth_gss/auth_gss.c        |    1 
 net/sunrpc/auth_gss/gss_mech_switch.c |   18 ++
 15 files changed, 852 insertions(+), 96 deletions(-)

-- 
Chuck Lever


* [PATCH 01/14] NFS: Clean up TEST_STATEID and FREE_STATEID proc error reporting
  2012-07-09 15:43 [PATCH 00/14] Candidates for 3.6 Chuck Lever
@ 2012-07-09 15:43 ` Chuck Lever
  2012-07-09 19:36   ` Myklebust, Trond
  2012-07-09 15:44 ` [PATCH 02/14] NFS: Properly sort TEST_STATEID results Chuck Lever
                   ` (12 subsequent siblings)
  13 siblings, 1 reply; 31+ messages in thread
From: Chuck Lever @ 2012-07-09 15:43 UTC
  To: trond.myklebust; +Cc: linux-nfs

The TEST_STATEID and FREE_STATEID operations can return
NFS4ERR_BAD_STATEID, NFS4ERR_OLD_STATEID, or NFS4ERR_DEADSESSION.
These error values should not be passed to nfs4_handle_exception(),
since that will recursively kick off state recovery, resulting in a
deadlock.  The reason to continue using nfs4_handle_exception() is
to deal with NFS4ERR_DELAY.

Moreover, even when the TEST_STATEID operation itself returns
NFS4_OK, res.status can contain one of these errors.
_nfs41_test_stateid() replaces NFS4_OK with the value in res.status,
which is then returned to callers.

But res.status is not passed through nfs4_stat_to_errno(), and thus is
a positive NFS4ERR value.  Currently callers are only interested in
!NFS4_OK, and nfs4_handle_exception() ignores positive values, so this
oversight hasn't bitten us so far.

Bryan agrees the original intent was to return res.status as a
negative NFS4ERR value to callers of nfs41_test_stateid(), as long as
no state ID recovery is attempted.
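
Concretely: if the server's TEST_STATEID result for a state ID is
NFS4ERR_BAD_STATEID (10025), callers of nfs41_test_stateid() now see
-10025, i.e. -NFS4ERR_BAD_STATEID, rather than the raw positive
value.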

As a finishing touch, add appropriate documenting comments and some
debugging printks.

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---

 fs/nfs/nfs4proc.c |   51 ++++++++++++++++++++++++++++++++++++++++-----------
 1 files changed, 40 insertions(+), 11 deletions(-)

diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
index 15fc7e4..971ec8c 100644
--- a/fs/nfs/nfs4proc.c
+++ b/fs/nfs/nfs4proc.c
@@ -6587,22 +6587,36 @@ static int _nfs41_test_stateid(struct nfs_server *server, nfs4_stateid *stateid)
 		.rpc_resp = &res,
 	};
 
+	dprintk("NFS call  test_stateid %p\n", stateid);
 	nfs41_init_sequence(&args.seq_args, &res.seq_res, 0);
 	status = nfs4_call_sync_sequence(server->client, server, &msg, &args.seq_args, &res.seq_res, 1);
-
-	if (status == NFS_OK)
-		return res.status;
-	return status;
+	if (status != NFS_OK) {
+		dprintk("NFS reply test_stateid: failed, %d\n", status);
+		return status;
+	}
+	dprintk("NFS reply test_stateid: succeeded, %d\n", -res.status);
+	return -res.status;
 }
 
+/**
+ * nfs41_test_stateid - perform a TEST_STATEID operation
+ *
+ * @server: NFS server that may know about "stateid"
+ * @stateid: state ID to test
+ *
+ * Returns NFS_OK if the server recognizes the state ID as valid.
+ * Otherwise a negative NFS4ERR value is returned if the operation
+ * failed or the state ID is not currently valid.
+ */
 static int nfs41_test_stateid(struct nfs_server *server, nfs4_stateid *stateid)
 {
 	struct nfs4_exception exception = { };
 	int err;
 	do {
-		err = nfs4_handle_exception(server,
-				_nfs41_test_stateid(server, stateid),
-				&exception);
+		err = _nfs41_test_stateid(server, stateid);
+		if (err != -NFS4ERR_DELAY)
+			break;
+		nfs4_handle_exception(server, err, &exception);
 	} while (exception.retry);
 	return err;
 }
@@ -6618,19 +6632,34 @@ static int _nfs4_free_stateid(struct nfs_server *server, nfs4_stateid *stateid)
 		.rpc_argp = &args,
 		.rpc_resp = &res,
 	};
+	int status;
 
+	dprintk("NFS call  free_stateid %p\n", stateid);
 	nfs41_init_sequence(&args.seq_args, &res.seq_res, 0);
-	return nfs4_call_sync_sequence(server->client, server, &msg, &args.seq_args, &res.seq_res, 1);
+	status = nfs4_call_sync_sequence(server->client, server, &msg,
+					 &args.seq_args, &res.seq_res, 1);
+	dprintk("NFS reply free_stateid: %d\n", status);
+	return status;
 }
 
+/**
+ * nfs41_free_stateid - perform a FREE_STATEID operation
+ *
+ * @server: NFS server that may know about "stateid"
+ * @stateid: state ID to release
+ *
+ * Returns NFS_OK if the server freed the state ID.  Otherwise a
+ * negative NFS4ERR value is returned if the operation failed.
+ */
 static int nfs41_free_stateid(struct nfs_server *server, nfs4_stateid *stateid)
 {
 	struct nfs4_exception exception = { };
 	int err;
 	do {
-		err = nfs4_handle_exception(server,
-				_nfs4_free_stateid(server, stateid),
-				&exception);
+		err = _nfs4_free_stateid(server, stateid);
+		if (err != -NFS4ERR_DELAY)
+			break;
+		nfs4_handle_exception(server, err, &exception);
 	} while (exception.retry);
 	return err;
 }



* [PATCH 02/14] NFS: Properly sort TEST_STATEID results
  2012-07-09 15:43 [PATCH 00/14] Candidates for 3.6 Chuck Lever
  2012-07-09 15:43 ` [PATCH 01/14] NFS: Clean up TEST_STATEID and FREE_STATEID proc error reporting Chuck Lever
@ 2012-07-09 15:44 ` Chuck Lever
  2012-07-09 19:34   ` Myklebust, Trond
  2012-07-09 15:44 ` [PATCH 03/14] NFS: State reclaim clears OPEN and LOCK state Chuck Lever
                   ` (11 subsequent siblings)
  13 siblings, 1 reply; 31+ messages in thread
From: Chuck Lever @ 2012-07-09 15:44 UTC
  To: trond.myklebust; +Cc: linux-nfs

The result of a TEST_STATEID operation can indicate a few different
things:

  o If NFS_OK is returned, then the client can continue using the
    state ID under test, and skip recovery.

  o RFC 5661 says that if and only if the state ID was revoked, then
    the client must perform an explicit FREE_STATEID before trying to
    re-open.

  o If the server doesn't recognize the state ID at all, then no
    FREE_STATEID is needed, and the client can immediately continue
    with open recovery.
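
The switch statements added below encode this directly:
-NFS4ERR_ADMIN_REVOKED and -NFS4ERR_DELEG_REVOKED are the "revoked"
cases that invoke FREE_STATEID and then fall through to clear the
state flags; any other error skips FREE_STATEID and goes straight to
clearing the flags so recovery proceeds.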

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---

 fs/nfs/nfs4proc.c |   36 ++++++++++++++++++++++++++++++++++--
 1 files changed, 34 insertions(+), 2 deletions(-)

diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
index 971ec8c..60a320c 100644
--- a/fs/nfs/nfs4proc.c
+++ b/fs/nfs/nfs4proc.c
@@ -1756,6 +1756,14 @@ static int nfs4_open_expired(struct nfs4_state_owner *sp, struct nfs4_state *sta
 }
 
 #if defined(CONFIG_NFS_V4_1)
+/**
+ * nfs41_check_expired_stateid - does a state ID need recovery?
+ *
+ * @state: NFSv4 open state for an inode
+ *
+ * Returns NFS_OK if recovery for this state ID is now finished.
+ * Otherwise a negative NFS4ERR value is returned.
+ */
 static int nfs41_check_expired_stateid(struct nfs4_state *state, nfs4_stateid *stateid, unsigned int flags)
 {
 	int status = NFS_OK;
@@ -1763,8 +1771,16 @@ static int nfs41_check_expired_stateid(struct nfs4_state *state, nfs4_stateid *s
 
 	if (state->flags & flags) {
 		status = nfs41_test_stateid(server, stateid);
-		if (status != NFS_OK) {
+		switch (status) {
+		case NFS_OK:
+			/* server recognizes this one, don't recover */
+			break;
+		case -NFS4ERR_ADMIN_REVOKED:
+		case -NFS4ERR_DELEG_REVOKED:
+			/* state was revoked, free then re-open */
 			nfs41_free_stateid(server, stateid);
+		default:
+			/* anything else: just re-open it */
 			state->flags &= ~flags;
 		}
 	}
@@ -4698,6 +4714,14 @@ out:
 }
 
 #if defined(CONFIG_NFS_V4_1)
+/**
+ * nfs41_check_expired_locks - clear lock state IDs
+ *
+ * @state: NFSv4 open state for a file
+ *
+ * Returns NFS_OK if recovery for this state ID is now finished.
+ * Otherwise a negative NFS4ERR value is returned.
+ */
 static int nfs41_check_expired_locks(struct nfs4_state *state)
 {
 	int status, ret = NFS_OK;
@@ -4707,8 +4731,16 @@ static int nfs41_check_expired_locks(struct nfs4_state *state)
 	list_for_each_entry(lsp, &state->lock_states, ls_locks) {
 		if (lsp->ls_flags & NFS_LOCK_INITIALIZED) {
 			status = nfs41_test_stateid(server, &lsp->ls_stateid);
-			if (status != NFS_OK) {
+			switch (status) {
+			case NFS_OK:
+				/* server recognizes this one, don't re-lock */
+				break;
+			case -NFS4ERR_ADMIN_REVOKED:
+			case -NFS4ERR_DELEG_REVOKED:
+				/* lock was revoked, free then re-lock */
 				nfs41_free_stateid(server, &lsp->ls_stateid);
+			default:
+				/* anything else: just re-lock it */
 				lsp->ls_flags &= ~NFS_LOCK_INITIALIZED;
 				ret = status;
 			}



* [PATCH 03/14] NFS: State reclaim clears OPEN and LOCK state
  2012-07-09 15:43 [PATCH 00/14] Candidates for 3.6 Chuck Lever
  2012-07-09 15:43 ` [PATCH 01/14] NFS: Clean up TEST_STATEID and FREE_STATEID proc error reporting Chuck Lever
  2012-07-09 15:44 ` [PATCH 02/14] NFS: Properly sort TEST_STATEID results Chuck Lever
@ 2012-07-09 15:44 ` Chuck Lever
  2012-07-09 20:03   ` Myklebust, Trond
  2012-07-09 15:44 ` [PATCH 04/14] NFS: Refactor nfs41_check_expired_stateid() Chuck Lever
                   ` (10 subsequent siblings)
  13 siblings, 1 reply; 31+ messages in thread
From: Chuck Lever @ 2012-07-09 15:44 UTC
  To: trond.myklebust; +Cc: linux-nfs

Before beginning state recovery, the state manager zaps open and lock
state for each open file; thus the "state->flags & flags" test in
nfs41_{open,lock}_expired() always fails during reclaim.  But open
recovery is still needed for these files.

To force a call to nfs4_open_expired(), change the default return
value for nfs41_check_expired_stateid() to force open recovery, and
the default return value for nfs41_check_expired_locks() to force lock
recovery, if the requested flags are clear.  Fix suggested by Bryan
Schumaker.

The presence of a delegation state ID must not prevent normal open
recovery.  The delegation state ID must be cleared if it was revoked,
but once cleared I don't think its presence or absence has any
bearing on whether open recovery is still needed.
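
In other words, with the default return value now
-NFS4ERR_BAD_STATEID, a file whose flags were zapped skips the
TEST_STATEID call entirely and falls through to nfs4_open_expired()
or nfs4_lock_expired().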

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---

 fs/nfs/nfs4proc.c |   23 ++++++++++++-----------
 1 files changed, 12 insertions(+), 11 deletions(-)

diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
index 60a320c..4bc21a3 100644
--- a/fs/nfs/nfs4proc.c
+++ b/fs/nfs/nfs4proc.c
@@ -1766,8 +1766,8 @@ static int nfs4_open_expired(struct nfs4_state_owner *sp, struct nfs4_state *sta
  */
 static int nfs41_check_expired_stateid(struct nfs4_state *state, nfs4_stateid *stateid, unsigned int flags)
 {
-	int status = NFS_OK;
 	struct nfs_server *server = NFS_SERVER(state->inode);
+	int status = -NFS4ERR_BAD_STATEID;
 
 	if (state->flags & flags) {
 		status = nfs41_test_stateid(server, stateid);
@@ -1789,16 +1789,17 @@ static int nfs41_check_expired_stateid(struct nfs4_state *state, nfs4_stateid *s
 
 static int nfs41_open_expired(struct nfs4_state_owner *sp, struct nfs4_state *state)
 {
-	int deleg_status, open_status;
 	int deleg_flags = 1 << NFS_DELEGATED_STATE;
 	int open_flags = (1 << NFS_O_RDONLY_STATE) | (1 << NFS_O_WRONLY_STATE) | (1 << NFS_O_RDWR_STATE);
+	int status;
 
-	deleg_status = nfs41_check_expired_stateid(state, &state->stateid, deleg_flags);
-	open_status = nfs41_check_expired_stateid(state,  &state->open_stateid, open_flags);
+	nfs41_check_expired_stateid(state, &state->stateid, deleg_flags);
+	status = nfs41_check_expired_stateid(state, &state->open_stateid,
+							open_flags);
 
-	if ((deleg_status == NFS_OK) && (open_status == NFS_OK))
-		return NFS_OK;
-	return nfs4_open_expired(sp, state);
+	if (status != NFS_OK)
+		status = nfs4_open_expired(sp, state);
+	return status;
 }
 #endif
 
@@ -4724,7 +4725,7 @@ out:
  */
 static int nfs41_check_expired_locks(struct nfs4_state *state)
 {
-	int status, ret = NFS_OK;
+	int status, ret = -NFS4ERR_BAD_STATEID;
 	struct nfs4_lock_state *lsp;
 	struct nfs_server *server = NFS_SERVER(state->inode);
 
@@ -4756,9 +4757,9 @@ static int nfs41_lock_expired(struct nfs4_state *state, struct file_lock *reques
 
 	if (test_bit(LK_STATE_IN_USE, &state->flags))
 		status = nfs41_check_expired_locks(state);
-	if (status == NFS_OK)
-		return status;
-	return nfs4_lock_expired(state, request);
+	if (status != NFS_OK)
+		status = nfs4_lock_expired(state, request);
+	return status;
 }
 #endif
 



* [PATCH 04/14] NFS: Refactor nfs41_check_expired_stateid()
  2012-07-09 15:43 [PATCH 00/14] Candidates for 3.6 Chuck Lever
                   ` (2 preceding siblings ...)
  2012-07-09 15:44 ` [PATCH 03/14] NFS: State reclaim clears OPEN and LOCK state Chuck Lever
@ 2012-07-09 15:44 ` Chuck Lever
  2012-07-09 15:44 ` [PATCH 05/14] NFS: nfs_getaclargs.acl_len is a size_t Chuck Lever
                   ` (9 subsequent siblings)
  13 siblings, 0 replies; 31+ messages in thread
From: Chuck Lever @ 2012-07-09 15:44 UTC
  To: trond.myklebust; +Cc: linux-nfs

Clean up: The logic for dealing with a delegation state ID and an
open state ID is now sufficiently distinct that I've refactored
nfs41_check_expired_stateid() into two separate functions.

Instead of open-coded flag manipulation, use test_bit() and
clear_bit() just like all other accessors of the state->flags field.

This also eliminates several unnecessary implicit integer type
conversions.

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---

 fs/nfs/nfs4proc.c |   44 +++++++++++++++++++++++++++++++++++---------
 1 files changed, 35 insertions(+), 9 deletions(-)

diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
index 4bc21a3..f368e37 100644
--- a/fs/nfs/nfs4proc.c
+++ b/fs/nfs/nfs4proc.c
@@ -1757,19 +1757,46 @@ static int nfs4_open_expired(struct nfs4_state_owner *sp, struct nfs4_state *sta
 
 #if defined(CONFIG_NFS_V4_1)
 /**
- * nfs41_check_expired_stateid - does a state ID need recovery?
+ * nfs41_clear_delegation_stateid - possibly clear revoked delegation state ID
+ *
+ * @state: NFSv4 open state for an inode
+ */
+static void nfs41_clear_delegation_stateid(struct nfs4_state *state)
+{
+	struct nfs_server *server = NFS_SERVER(state->inode);
+	nfs4_stateid *stateid = &state->stateid;
+
+	if (test_bit(NFS_DELEGATED_STATE, &state->flags) != 0)
+		switch (nfs41_test_stateid(server, stateid)) {
+		case NFS_OK:
+			/* delegation still OK, preserve it */
+			break;
+		case -NFS4ERR_ADMIN_REVOKED:
+		case -NFS4ERR_DELEG_REVOKED:
+			/* delegation was revoked, must free */
+			nfs41_free_stateid(server, stateid);
+		default:
+			clear_bit(NFS_DELEGATED_STATE, &state->flags);
+		}
+}
+
+/**
+ * nfs41_check_open_stateid - clear open state ID
  *
  * @state: NFSv4 open state for an inode
  *
  * Returns NFS_OK if recovery for this state ID is now finished.
  * Otherwise a negative NFS4ERR value is returned.
  */
-static int nfs41_check_expired_stateid(struct nfs4_state *state, nfs4_stateid *stateid, unsigned int flags)
+static int nfs41_check_open_stateid(struct nfs4_state *state)
 {
 	struct nfs_server *server = NFS_SERVER(state->inode);
+	nfs4_stateid *stateid = &state->stateid;
 	int status = -NFS4ERR_BAD_STATEID;
 
-	if (state->flags & flags) {
+	if ((test_bit(NFS_O_RDONLY_STATE, &state->flags) != 0) ||
+	    (test_bit(NFS_O_WRONLY_STATE, &state->flags) != 0) ||
+	    (test_bit(NFS_O_RDWR_STATE, &state->flags) != 0)) {
 		status = nfs41_test_stateid(server, stateid);
 		switch (status) {
 		case NFS_OK:
@@ -1781,7 +1808,9 @@ static int nfs41_check_expired_stateid(struct nfs4_state *state, nfs4_stateid *s
 			nfs41_free_stateid(server, stateid);
 		default:
 			/* anything else: just re-open it */
-			state->flags &= ~flags;
+			clear_bit(NFS_O_RDONLY_STATE, &state->flags);
+			clear_bit(NFS_O_WRONLY_STATE, &state->flags);
+			clear_bit(NFS_O_RDWR_STATE, &state->flags);
 		}
 	}
 	return status;
@@ -1789,13 +1818,10 @@ static int nfs41_check_expired_stateid(struct nfs4_state *state, nfs4_stateid *s
 
 static int nfs41_open_expired(struct nfs4_state_owner *sp, struct nfs4_state *state)
 {
-	int deleg_flags = 1 << NFS_DELEGATED_STATE;
-	int open_flags = (1 << NFS_O_RDONLY_STATE) | (1 << NFS_O_WRONLY_STATE) | (1 << NFS_O_RDWR_STATE);
 	int status;
 
-	nfs41_check_expired_stateid(state, &state->stateid, deleg_flags);
-	status = nfs41_check_expired_stateid(state, &state->open_stateid,
-							open_flags);
+	nfs41_clear_delegation_stateid(state);
+	status = nfs41_check_open_stateid(state);
 
 	if (status != NFS_OK)
 		status = nfs4_open_expired(sp, state);



* [PATCH 05/14] NFS: nfs_getaclargs.acl_len is a size_t
  2012-07-09 15:43 [PATCH 00/14] Candidates for 3.6 Chuck Lever
                   ` (3 preceding siblings ...)
  2012-07-09 15:44 ` [PATCH 04/14] NFS: Refactor nfs41_check_expired_stateid() Chuck Lever
@ 2012-07-09 15:44 ` Chuck Lever
  2012-07-09 15:44 ` [PATCH 06/14] NFS: Treat NFS4ERR_CLID_INUSE as a fatal error Chuck Lever
                   ` (8 subsequent siblings)
  13 siblings, 0 replies; 31+ messages in thread
From: Chuck Lever @ 2012-07-09 15:44 UTC
  To: trond.myklebust; +Cc: linux-nfs

Squelch compiler warnings:

fs/nfs/nfs4proc.c: In function ‘__nfs4_get_acl_uncached’:
fs/nfs/nfs4proc.c:3811:14: warning: comparison between signed and
	unsigned integer expressions [-Wsign-compare]
fs/nfs/nfs4proc.c:3818:15: warning: comparison between signed and
	unsigned integer expressions [-Wsign-compare]

Introduced by commit bf118a34 "NFSv4: include bitmap in nfsv4 get
acl data", Dec 7, 2011.
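
For reference, here is a minimal stand-alone sketch (not from the
patch; the function name is made up) that draws the same warning when
built with -Wsign-compare:

	#include <stddef.h>

	int acl_fits(size_t buflen)
	{
		int acl_len = 0;		/* signed ... */

		return acl_len < buflen;	/* ... compared with unsigned */
	}

Declaring acl_len as size_t, as the patch does, makes both operands
unsigned and silences the warning.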

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---

 fs/nfs/nfs4proc.c |    3 ++-
 1 files changed, 2 insertions(+), 1 deletions(-)

diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
index f368e37..7adb705 100644
--- a/fs/nfs/nfs4proc.c
+++ b/fs/nfs/nfs4proc.c
@@ -3792,7 +3792,8 @@ static ssize_t __nfs4_get_acl_uncached(struct inode *inode, void *buf, size_t bu
 		.rpc_argp = &args,
 		.rpc_resp = &res,
 	};
-	int ret = -ENOMEM, npages, i, acl_len = 0;
+	int ret = -ENOMEM, npages, i;
+	size_t acl_len = 0;
 
 	npages = (buflen + PAGE_SIZE - 1) >> PAGE_SHIFT;
 	/* As long as we're doing a round trip to the server anyway,



* [PATCH 06/14] NFS: Treat NFS4ERR_CLID_INUSE as a fatal error
  2012-07-09 15:43 [PATCH 00/14] Candidates for 3.6 Chuck Lever
                   ` (4 preceding siblings ...)
  2012-07-09 15:44 ` [PATCH 05/14] NFS: nfs_getaclargs.acl_len is a size_t Chuck Lever
@ 2012-07-09 15:44 ` Chuck Lever
  2012-07-09 20:10   ` Myklebust, Trond
  2012-07-09 15:44 ` [PATCH 07/14] NFS: Clean up nfs4_proc_setclientid() and friends Chuck Lever
                   ` (7 subsequent siblings)
  13 siblings, 1 reply; 31+ messages in thread
From: Chuck Lever @ 2012-07-09 15:44 UTC
  To: trond.myklebust; +Cc: linux-nfs

For NFSv4 minor version 0, currently the cl_id_uniquifier allows the
Linux client to generate a unique nfs_client_id4 string whenever a
server replies with NFS4ERR_CLID_INUSE.

This implementation seems to be based on a flawed reading of RFC
3530.  NFS4ERR_CLID_INUSE actually means that the client has presented
this nfs_client_id4 string with a different principal at some time in
the past, and that lease is still in use on the server.

For a Linux client this might be rather difficult to achieve: the
authentication flavor is named right in the nfs_client_id4.id
string.  If we change flavors, we change strings automatically.

So, practically speaking, NFS4ERR_CLID_INUSE means there is some other
client using our string.  There is not much that can be done to
recover automatically.  Let's make it a permanent error.

Remove the recovery logic in nfs4_proc_setclientid(), and remove the
cl_id_uniquifier field from the nfs_client data structure.  And,
remove the authentication flavor from the nfs_client_id4 string.

Keeping the authentication flavor in the nfs_client_id4.id string
means that we could have a separate lease for each authentication
flavor used by mounts on the client.  But we want just one lease for
all the mounts on this client.
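
As an illustration (addresses hypothetical): a client at 192.168.1.10
talking to a server at 10.0.0.5 over TCP used to send something like
"192.168.1.10/10.0.0.5 tcp <flavor> 0"; with this patch it sends
"192.168.1.10/10.0.0.5 tcp" no matter which flavor the mount uses.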

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---

 fs/nfs/nfs4proc.c         |   54 ++++++++++++++++++---------------------------
 fs/nfs/nfs4state.c        |    7 +++++-
 include/linux/nfs_fs_sb.h |    3 +--
 3 files changed, 29 insertions(+), 35 deletions(-)

diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
index 7adb705..63a3cf0 100644
--- a/fs/nfs/nfs4proc.c
+++ b/fs/nfs/nfs4proc.c
@@ -259,7 +259,12 @@ static int nfs4_wait_clnt_recover(struct nfs_client *clp)
 
 	res = wait_on_bit(&clp->cl_state, NFS4CLNT_MANAGER_RUNNING,
 			nfs_wait_bit_killable, TASK_KILLABLE);
-	return res;
+	if (res)
+		return res;
+
+	if (clp->cl_cons_state < 0)
+		return clp->cl_cons_state;
+	return 0;
 }
 
 static int nfs4_delay(struct rpc_clnt *clnt, long *timeout)
@@ -4038,42 +4043,28 @@ int nfs4_proc_setclientid(struct nfs_client *clp, u32 program,
 		.rpc_resp = res,
 		.rpc_cred = cred,
 	};
-	int loop = 0;
-	int status;
 
+	/* nfs_client_id4 */
 	nfs4_init_boot_verifier(clp, &sc_verifier);
-
-	for(;;) {
-		rcu_read_lock();
-		setclientid.sc_name_len = scnprintf(setclientid.sc_name,
-				sizeof(setclientid.sc_name), "%s/%s %s %s %u",
-				clp->cl_ipaddr,
-				rpc_peeraddr2str(clp->cl_rpcclient,
-							RPC_DISPLAY_ADDR),
-				rpc_peeraddr2str(clp->cl_rpcclient,
-							RPC_DISPLAY_PROTO),
-				clp->cl_rpcclient->cl_auth->au_ops->au_name,
-				clp->cl_id_uniquifier);
-		setclientid.sc_netid_len = scnprintf(setclientid.sc_netid,
+	rcu_read_lock();
+	setclientid.sc_name_len = scnprintf(setclientid.sc_name,
+			sizeof(setclientid.sc_name), "%s/%s %s",
+			clp->cl_ipaddr,
+			rpc_peeraddr2str(clp->cl_rpcclient,
+						RPC_DISPLAY_ADDR),
+			rpc_peeraddr2str(clp->cl_rpcclient,
+						RPC_DISPLAY_PROTO));
+	/* cb_client4 */
+	setclientid.sc_netid_len = scnprintf(setclientid.sc_netid,
 				sizeof(setclientid.sc_netid),
 				rpc_peeraddr2str(clp->cl_rpcclient,
 							RPC_DISPLAY_NETID));
-		setclientid.sc_uaddr_len = scnprintf(setclientid.sc_uaddr,
+	rcu_read_unlock();
+	setclientid.sc_uaddr_len = scnprintf(setclientid.sc_uaddr,
 				sizeof(setclientid.sc_uaddr), "%s.%u.%u",
 				clp->cl_ipaddr, port >> 8, port & 255);
-		rcu_read_unlock();
 
-		status = rpc_call_sync(clp->cl_rpcclient, &msg, RPC_TASK_TIMEOUT);
-		if (status != -NFS4ERR_CLID_INUSE)
-			break;
-		if (loop != 0) {
-			++clp->cl_id_uniquifier;
-			break;
-		}
-		++loop;
-		ssleep(clp->cl_lease_time / HZ + 1);
-	}
-	return status;
+	return rpc_call_sync(clp->cl_rpcclient, &msg, RPC_TASK_TIMEOUT);
 }
 
 int nfs4_proc_setclientid_confirm(struct nfs_client *clp,
@@ -5275,10 +5266,9 @@ int nfs4_proc_exchange_id(struct nfs_client *clp, struct rpc_cred *cred)
 	nfs4_init_boot_verifier(clp, &verifier);
 
 	args.id_len = scnprintf(args.id, sizeof(args.id),
-				"%s/%s/%u",
+				"%s/%s",
 				clp->cl_ipaddr,
-				clp->cl_rpcclient->cl_nodename,
-				clp->cl_rpcclient->cl_auth->au_flavor);
+				clp->cl_rpcclient->cl_nodename);
 
 	res.server_owner = kzalloc(sizeof(struct nfs41_server_owner),
 					GFP_NOFS);
diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c
index f38300e..a2c1153 100644
--- a/fs/nfs/nfs4state.c
+++ b/fs/nfs/nfs4state.c
@@ -1606,10 +1606,15 @@ static int nfs4_handle_reclaim_lease_error(struct nfs_client *clp, int status)
 			return -ESERVERFAULT;
 		/* Lease confirmation error: retry after purging the lease */
 		ssleep(1);
-	case -NFS4ERR_CLID_INUSE:
 	case -NFS4ERR_STALE_CLIENTID:
 		clear_bit(NFS4CLNT_LEASE_CONFIRM, &clp->cl_state);
 		break;
+	case -NFS4ERR_CLID_INUSE:
+		printk(KERN_ERR "NFS: Server %s: client ID already in use\n",
+			clp->cl_hostname);
+		nfs_mark_client_ready(clp, -EPERM);
+		clear_bit(NFS4CLNT_LEASE_CONFIRM, &clp->cl_state);
+		return -EPERM;
 	case -EACCES:
 		if (clp->cl_machine_cred == NULL)
 			return -EACCES;
diff --git a/include/linux/nfs_fs_sb.h b/include/linux/nfs_fs_sb.h
index f58325a..6532765 100644
--- a/include/linux/nfs_fs_sb.h
+++ b/include/linux/nfs_fs_sb.h
@@ -69,10 +69,9 @@ struct nfs_client {
 	struct idmap *		cl_idmap;
 
 	/* Our own IP address, as a null-terminated string.
-	 * This is used to generate the clientid, and the callback address.
+	 * This is used to generate the mv0 callback address.
 	 */
 	char			cl_ipaddr[48];
-	unsigned char		cl_id_uniquifier;
 	u32			cl_cb_ident;	/* v4.0 callback identifier */
 	const struct nfs4_minor_version_ops *cl_mvops;
 



* [PATCH 07/14] NFS: Clean up nfs4_proc_setclientid() and friends
  2012-07-09 15:43 [PATCH 00/14] Candidates for 3.6 Chuck Lever
                   ` (5 preceding siblings ...)
  2012-07-09 15:44 ` [PATCH 06/14] NFS: Treat NFS4ERR_CLID_INUSE as a fatal error Chuck Lever
@ 2012-07-09 15:44 ` Chuck Lever
  2012-07-09 15:44 ` [PATCH 08/14] SUNRPC: Add rpcauth_list_flavors() Chuck Lever
                   ` (6 subsequent siblings)
  13 siblings, 0 replies; 31+ messages in thread
From: Chuck Lever @ 2012-07-09 15:44 UTC
  To: trond.myklebust; +Cc: linux-nfs

Add documenting comments and appropriate debugging messages.

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---

 fs/nfs/nfs4proc.c  |   45 +++++++++++++++++++++++++++++++++++++--------
 fs/nfs/nfs4state.c |    4 ++++
 2 files changed, 41 insertions(+), 8 deletions(-)

diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
index 63a3cf0..1c27121 100644
--- a/fs/nfs/nfs4proc.c
+++ b/fs/nfs/nfs4proc.c
@@ -4027,6 +4027,16 @@ static void nfs4_init_boot_verifier(const struct nfs_client *clp,
 	memcpy(bootverf->data, verf, sizeof(bootverf->data));
 }
 
+/**
+ * nfs4_proc_setclientid - Negotiate client ID
+ * @clp: state data structure
+ * @program: RPC program for NFSv4 callback service
+ * @port: IP port number for NFS4 callback service
+ * @cred: RPC credential to use for this call
+ * @res: where to place the result
+ *
+ * Returns zero, a negative errno, or a negative NFS4ERR status code.
+ */
 int nfs4_proc_setclientid(struct nfs_client *clp, u32 program,
 		unsigned short port, struct rpc_cred *cred,
 		struct nfs4_setclientid_res *res)
@@ -4043,6 +4053,7 @@ int nfs4_proc_setclientid(struct nfs_client *clp, u32 program,
 		.rpc_resp = res,
 		.rpc_cred = cred,
 	};
+	int status;
 
 	/* nfs_client_id4 */
 	nfs4_init_boot_verifier(clp, &sc_verifier);
@@ -4064,9 +4075,22 @@ int nfs4_proc_setclientid(struct nfs_client *clp, u32 program,
 				sizeof(setclientid.sc_uaddr), "%s.%u.%u",
 				clp->cl_ipaddr, port >> 8, port & 255);
 
-	return rpc_call_sync(clp->cl_rpcclient, &msg, RPC_TASK_TIMEOUT);
+	dprintk("NFS call  setclientid auth=%s, '%.*s'\n",
+		clp->cl_rpcclient->cl_auth->au_ops->au_name,
+		setclientid.sc_name_len, setclientid.sc_name);
+	status = rpc_call_sync(clp->cl_rpcclient, &msg, RPC_TASK_TIMEOUT);
+	dprintk("NFS reply setclientid: %d\n", status);
+	return status;
 }
 
+/**
+ * nfs4_proc_setclientid_confirm - Confirm client ID
+ * @clp: state data structure
+ * @res: result of a previous SETCLIENTID
+ * @cred: RPC credential to use for this call
+ *
+ * Returns zero, a negative errno, or a negative NFS4ERR status code.
+ */
 int nfs4_proc_setclientid_confirm(struct nfs_client *clp,
 		struct nfs4_setclientid_res *arg,
 		struct rpc_cred *cred)
@@ -4081,6 +4105,9 @@ int nfs4_proc_setclientid_confirm(struct nfs_client *clp,
 	unsigned long now;
 	int status;
 
+	dprintk("NFS call  setclientid_confirm auth=%s, (client ID %llx)\n",
+		clp->cl_rpcclient->cl_auth->au_ops->au_name,
+		clp->cl_clientid);
 	now = jiffies;
 	status = rpc_call_sync(clp->cl_rpcclient, &msg, RPC_TASK_TIMEOUT);
 	if (status == 0) {
@@ -4089,6 +4116,7 @@ int nfs4_proc_setclientid_confirm(struct nfs_client *clp,
 		clp->cl_last_renewal = now;
 		spin_unlock(&clp->cl_lock);
 	}
+	dprintk("NFS reply setclientid_confirm: %d\n", status);
 	return status;
 }
 
@@ -5236,6 +5264,8 @@ out:
 /*
  * nfs4_proc_exchange_id()
  *
+ * Returns zero, a negative errno, or a negative NFS4ERR status code.
+ *
  * Since the clientid has expired, all compounds using sessions
  * associated with the stale clientid will be returning
  * NFS4ERR_BADSESSION in the sequence operation, and will therefore
@@ -5260,15 +5290,14 @@ int nfs4_proc_exchange_id(struct nfs_client *clp, struct rpc_cred *cred)
 		.rpc_cred = cred,
 	};
 
-	dprintk("--> %s\n", __func__);
-	BUG_ON(clp == NULL);
-
 	nfs4_init_boot_verifier(clp, &verifier);
-
 	args.id_len = scnprintf(args.id, sizeof(args.id),
 				"%s/%s",
 				clp->cl_ipaddr,
 				clp->cl_rpcclient->cl_nodename);
+	dprintk("NFS call  exchange_id auth=%s, '%.*s'\n",
+		clp->cl_rpcclient->cl_auth->au_ops->au_name,
+		args.id_len, args.id);
 
 	res.server_owner = kzalloc(sizeof(struct nfs41_server_owner),
 					GFP_NOFS);
@@ -5331,12 +5360,12 @@ out_server_scope:
 	kfree(res.server_scope);
 out:
 	if (clp->cl_implid != NULL)
-		dprintk("%s: Server Implementation ID: "
+		dprintk("NFS reply exchange_id: Server Implementation ID: "
 			"domain: %s, name: %s, date: %llu,%u\n",
-			__func__, clp->cl_implid->domain, clp->cl_implid->name,
+			clp->cl_implid->domain, clp->cl_implid->name,
 			clp->cl_implid->date.seconds,
 			clp->cl_implid->date.nseconds);
-	dprintk("<-- %s status= %d\n", __func__, status);
+	dprintk("NFS reply exchange_id: %d\n", status);
 	return status;
 }
 
diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c
index a2c1153..d6fe6f7 100644
--- a/fs/nfs/nfs4state.c
+++ b/fs/nfs/nfs4state.c
@@ -1647,6 +1647,10 @@ static int nfs4_handle_reclaim_lease_error(struct nfs_client *clp, int status)
 	return 0;
 }
 
+/*
+ * Returns zero or a negative errno.  NFS4ERR values are converted
+ * to local errno values.
+ */
 static int nfs4_reclaim_lease(struct nfs_client *clp)
 {
 	struct rpc_cred *cred;



* [PATCH 08/14] SUNRPC: Add rpcauth_list_flavors()
  2012-07-09 15:43 [PATCH 00/14] Candidates for 3.6 Chuck Lever
                   ` (6 preceding siblings ...)
  2012-07-09 15:44 ` [PATCH 07/14] NFS: Clean up nfs4_proc_setclientid() and friends Chuck Lever
@ 2012-07-09 15:44 ` Chuck Lever
  2012-07-09 15:45 ` [PATCH 09/14] NFS: Introduce "migration" mount option Chuck Lever
                   ` (5 subsequent siblings)
  13 siblings, 0 replies; 31+ messages in thread
From: Chuck Lever @ 2012-07-09 15:44 UTC
  To: trond.myklebust; +Cc: linux-nfs

The gss_mech_list_pseudoflavors() function provides a list of
currently registered GSS pseudoflavors.  This list does not include
any non-GSS flavors that have been registered with the RPC client.
nfs4_find_root_sec() currently adds these extra flavors by hand.

Instead, nfs4_find_root_sec() should be looking at the set of flavors
that have been explicitly registered via rpcauth_register().  And,
other areas of code will soon need the same kind of list that contains
all flavors the kernel currently knows about.

So, rather than cloning the open-coded logic in nfs4_find_root_sec()
to those new places, introduce a generic RPC function that generates a
full list of registered auth flavors and pseudoflavors.

A new virtual rpc_authops method is added that lists a flavor's
pseudoflavors, if it has any.  I encountered an interesting module
loader loop when I tried to get the RPC client to invoke
gss_mech_list_pseudoflavors() by name.
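
A hedged sketch of a caller (not part of this patch; the dprintk is
just a placeholder), assuming it sits inside a function that returns
int:

	rpc_authflavor_t flavors[NFS_MAX_SECFLAVORS];
	int i, len;

	len = rpcauth_list_flavors(flavors, ARRAY_SIZE(flavors));
	if (len < 0)
		return len;	/* e.g. -ENOMEM: "flavors" was too small */
	for (i = 0; i < len; i++)
		dprintk("NFS: flavor %u is registered\n", flavors[i]);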

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---

 fs/nfs/nfs4proc.c                     |   11 ++++---
 include/linux/sunrpc/auth.h           |    2 +
 include/linux/sunrpc/gss_api.h        |    3 +-
 net/sunrpc/auth.c                     |   54 +++++++++++++++++++++++++++++++++
 net/sunrpc/auth_gss/auth_gss.c        |    1 +
 net/sunrpc/auth_gss/gss_mech_switch.c |   18 +++++++++--
 6 files changed, 80 insertions(+), 9 deletions(-)

diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
index 1c27121..1d3b28d 100644
--- a/fs/nfs/nfs4proc.c
+++ b/fs/nfs/nfs4proc.c
@@ -43,7 +43,6 @@
 #include <linux/printk.h>
 #include <linux/slab.h>
 #include <linux/sunrpc/clnt.h>
-#include <linux/sunrpc/gss_api.h>
 #include <linux/nfs.h>
 #include <linux/nfs4.h>
 #include <linux/nfs_fs.h>
@@ -2423,11 +2422,15 @@ static int nfs4_find_root_sec(struct nfs_server *server, struct nfs_fh *fhandle,
 	int i, len, status = 0;
 	rpc_authflavor_t flav_array[NFS_MAX_SECFLAVORS];
 
-	len = gss_mech_list_pseudoflavors(&flav_array[0]);
-	flav_array[len] = RPC_AUTH_NULL;
-	len += 1;
+	len = rpcauth_list_flavors(flav_array, ARRAY_SIZE(flav_array));
+	BUG_ON(len < 0);
 
 	for (i = 0; i < len; i++) {
+		/* AUTH_UNIX is the default flavor if none was specified,
+		 * thus has already been tried. */
+		if (flav_array[i] == RPC_AUTH_UNIX)
+			continue;
+
 		status = nfs4_lookup_root_sec(server, fhandle, info, flav_array[i]);
 		if (status == -NFS4ERR_WRONGSEC || status == -EACCES)
 			continue;
diff --git a/include/linux/sunrpc/auth.h b/include/linux/sunrpc/auth.h
index 492a36d..f25ba92 100644
--- a/include/linux/sunrpc/auth.h
+++ b/include/linux/sunrpc/auth.h
@@ -101,6 +101,7 @@ struct rpc_authops {
 	struct rpc_cred *	(*crcreate)(struct rpc_auth*, struct auth_cred *, int);
 	int			(*pipes_create)(struct rpc_auth *);
 	void			(*pipes_destroy)(struct rpc_auth *);
+	int			(*list_pseudoflavors)(rpc_authflavor_t *, int);
 };
 
 struct rpc_credops {
@@ -135,6 +136,7 @@ int			rpcauth_register(const struct rpc_authops *);
 int			rpcauth_unregister(const struct rpc_authops *);
 struct rpc_auth *	rpcauth_create(rpc_authflavor_t, struct rpc_clnt *);
 void			rpcauth_release(struct rpc_auth *);
+int			rpcauth_list_flavors(rpc_authflavor_t *, int);
 struct rpc_cred *	rpcauth_lookup_credcache(struct rpc_auth *, struct auth_cred *, int);
 void			rpcauth_init_cred(struct rpc_cred *, const struct auth_cred *, struct rpc_auth *, const struct rpc_credops *);
 struct rpc_cred *	rpcauth_lookupcred(struct rpc_auth *, int);
diff --git a/include/linux/sunrpc/gss_api.h b/include/linux/sunrpc/gss_api.h
index 332da61..a19e254 100644
--- a/include/linux/sunrpc/gss_api.h
+++ b/include/linux/sunrpc/gss_api.h
@@ -14,6 +14,7 @@
 
 #ifdef __KERNEL__
 #include <linux/sunrpc/xdr.h>
+#include <linux/sunrpc/msg_prot.h>
 #include <linux/uio.h>
 
 /* The mechanism-independent gss-api context: */
@@ -127,7 +128,7 @@ struct gss_api_mech *gss_mech_get_by_name(const char *);
 struct gss_api_mech *gss_mech_get_by_pseudoflavor(u32);
 
 /* Fill in an array with a list of supported pseudoflavors */
-int gss_mech_list_pseudoflavors(u32 *);
+int gss_mech_list_pseudoflavors(rpc_authflavor_t *, int);
 
 /* Just increments the mechanism's reference count and returns its input: */
 struct gss_api_mech * gss_mech_get(struct gss_api_mech *);
diff --git a/net/sunrpc/auth.c b/net/sunrpc/auth.c
index 727e506..b5c067b 100644
--- a/net/sunrpc/auth.c
+++ b/net/sunrpc/auth.c
@@ -13,6 +13,7 @@
 #include <linux/errno.h>
 #include <linux/hash.h>
 #include <linux/sunrpc/clnt.h>
+#include <linux/sunrpc/gss_api.h>
 #include <linux/spinlock.h>
 
 #ifdef RPC_DEBUG
@@ -122,6 +123,59 @@ rpcauth_unregister(const struct rpc_authops *ops)
 }
 EXPORT_SYMBOL_GPL(rpcauth_unregister);
 
+/**
+ * rpcauth_list_flavors - discover registered flavors and pseudoflavors
+ * @array: array to fill in
+ * @size: size of "array"
+ *
+ * Returns the number of array items filled in, or a negative errno.
+ *
+ * The returned array is not sorted by any policy.  Callers should not
+ * rely on the order of the items in the returned array.
+ */
+int
+rpcauth_list_flavors(rpc_authflavor_t *array, int size)
+{
+	rpc_authflavor_t flavor;
+	int result = 0;
+
+	spin_lock(&rpc_authflavor_lock);
+	for (flavor = 0; flavor < RPC_AUTH_MAXFLAVOR; flavor++) {
+		const struct rpc_authops *ops = auth_flavors[flavor];
+		rpc_authflavor_t pseudos[4];
+		int i, len;
+
+		if (result >= size) {
+			result = -ENOMEM;
+			break;
+		}
+
+		if (ops == NULL)
+			continue;
+		if (ops->list_pseudoflavors == NULL) {
+			array[result++] = ops->au_flavor;
+			continue;
+		}
+		len = ops->list_pseudoflavors(pseudos, ARRAY_SIZE(pseudos));
+		if (len < 0) {
+			result = len;
+			break;
+		}
+		for (i = 0; i < len; i++) {
+			if (result >= size) {
+				result = -ENOMEM;
+				break;
+			}
+			array[result++] = pseudos[i];
+		}
+	}
+	spin_unlock(&rpc_authflavor_lock);
+
+	dprintk("RPC:       %s returns %d\n", __func__, result);
+	return result;
+}
+EXPORT_SYMBOL_GPL(rpcauth_list_flavors);
+
 struct rpc_auth *
 rpcauth_create(rpc_authflavor_t pseudoflavor, struct rpc_clnt *clnt)
 {
diff --git a/net/sunrpc/auth_gss/auth_gss.c b/net/sunrpc/auth_gss/auth_gss.c
index d3ad81f..34c5220 100644
--- a/net/sunrpc/auth_gss/auth_gss.c
+++ b/net/sunrpc/auth_gss/auth_gss.c
@@ -1619,6 +1619,7 @@ static const struct rpc_authops authgss_ops = {
 	.crcreate	= gss_create_cred,
 	.pipes_create	= gss_pipes_dentries_create,
 	.pipes_destroy	= gss_pipes_dentries_destroy,
+	.list_pseudoflavors = gss_mech_list_pseudoflavors,
 };
 
 static const struct rpc_credops gss_credops = {
diff --git a/net/sunrpc/auth_gss/gss_mech_switch.c b/net/sunrpc/auth_gss/gss_mech_switch.c
index 782bfe1..6ac5dfc 100644
--- a/net/sunrpc/auth_gss/gss_mech_switch.c
+++ b/net/sunrpc/auth_gss/gss_mech_switch.c
@@ -239,14 +239,26 @@ gss_mech_get_by_pseudoflavor(u32 pseudoflavor)
 
 EXPORT_SYMBOL_GPL(gss_mech_get_by_pseudoflavor);
 
-int gss_mech_list_pseudoflavors(rpc_authflavor_t *array_ptr)
+/**
+ * gss_mech_list_pseudoflavors - Discover registered GSS pseudoflavors
+ * @array: array to fill in
+ * @size: size of "array"
+ *
+ * Returns the number of array items filled in, or a negative errno.
+ *
+ * The returned array is not sorted by any policy.  Callers should not
+ * rely on the order of the items in the returned array.
+ */
+int gss_mech_list_pseudoflavors(rpc_authflavor_t *array_ptr, int size)
 {
 	struct gss_api_mech *pos = NULL;
 	int j, i = 0;
 
 	spin_lock(&registered_mechs_lock);
 	list_for_each_entry(pos, &registered_mechs, gm_list) {
-		for (j=0; j < pos->gm_pf_num; j++) {
+		for (j = 0; j < pos->gm_pf_num; j++) {
+			if (i >= size) {
+				spin_unlock(&registered_mechs_lock);
+				return -ENOMEM;
+			}
 			array_ptr[i++] = pos->gm_pfs[j].pseudoflavor;
 		}
 	}
@@ -254,8 +266,6 @@ int gss_mech_list_pseudoflavors(rpc_authflavor_t *array_ptr)
 	return i;
 }
 
-EXPORT_SYMBOL_GPL(gss_mech_list_pseudoflavors);
-
 u32
 gss_svc_to_pseudoflavor(struct gss_api_mech *gm, u32 service)
 {



* [PATCH 09/14] NFS: Introduce "migration" mount option
  2012-07-09 15:43 [PATCH 00/14] Candidates for 3.6 Chuck Lever
                   ` (7 preceding siblings ...)
  2012-07-09 15:44 ` [PATCH 08/14] SUNRPC: Add rpcauth_list_flavors() Chuck Lever
@ 2012-07-09 15:45 ` Chuck Lever
  2012-07-09 15:45 ` [PATCH 10/14] NFS: Use the same nfs_client_id4 for every server Chuck Lever
                   ` (4 subsequent siblings)
  13 siblings, 0 replies; 31+ messages in thread
From: Chuck Lever @ 2012-07-09 15:45 UTC
  To: trond.myklebust; +Cc: linux-nfs

Currently, the Linux client uses a distinct nfs_client_id4 string
for each NFS server it identifies itself to.

To support transparent state migration, the Linux client will have to
use the same nfs_client_id4 string for all servers it communicates
with (also known as the "uniform client string" approach).  Otherwise
NFS servers will not be able to recognize that open and lock state
need to be merged after a migration event.

Unfortunately, there are some NFSv4.0 servers currently in the field
that do not tolerate the uniform client string approach.

So, by default, our NFSv4.0 mounts will continue to use the current
approach, and we add a mount option that switches them to use the
uniform model.  Client administrators must identify which servers can
be mounted with this option.

The first mount of a server controls the behavior for all subsequent
mounts of that server, for as long as that set of mounts exists.  After
the last mount of that server is gone, the client erases the data
structure that tracks the lease.  A subsequent lease may then honor
a different "migration" setting.

This patch adds only the infrastructure for parsing the new mount
option.
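
For example (illustrative only), an administrator who knows a server
tolerates the uniform client string could opt in with:

	mount -o vers=4,migration server:/export /mnt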

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---

 fs/nfs/client.c           |    2 ++
 fs/nfs/super.c            |   20 ++++++++++++++++++++
 include/linux/nfs_fs_sb.h |    2 ++
 3 files changed, 24 insertions(+), 0 deletions(-)

diff --git a/fs/nfs/client.c b/fs/nfs/client.c
index f005b5b..5bad694 100644
--- a/fs/nfs/client.c
+++ b/fs/nfs/client.c
@@ -1451,6 +1451,8 @@ static int nfs4_set_client(struct nfs_server *server,
 
 	if (server->flags & NFS_MOUNT_NORESVPORT)
 		set_bit(NFS_CS_NORESVPORT, &cl_init.init_flags);
+	if (server->options & NFS_OPTION_MIGRATION)
+		set_bit(NFS_CS_MIGRATION, &cl_init.init_flags);
 
 	/* Allocate or find a client reference we can use */
 	clp = nfs_get_client(&cl_init, timeparms, ip_addr, authflavour);
diff --git a/fs/nfs/super.c b/fs/nfs/super.c
index 906f09c..63a75eb 100644
--- a/fs/nfs/super.c
+++ b/fs/nfs/super.c
@@ -87,6 +87,7 @@ enum {
 	Opt_sharecache, Opt_nosharecache,
 	Opt_resvport, Opt_noresvport,
 	Opt_fscache, Opt_nofscache,
+	Opt_migration, Opt_nomigration,
 
 	/* Mount options that take integer arguments */
 	Opt_port,
@@ -146,6 +147,8 @@ static const match_table_t nfs_mount_option_tokens = {
 	{ Opt_noresvport, "noresvport" },
 	{ Opt_fscache, "fsc" },
 	{ Opt_nofscache, "nofsc" },
+	{ Opt_migration, "migration" },
+	{ Opt_nomigration, "nomigration" },
 
 	{ Opt_port, "port=%s" },
 	{ Opt_rsize, "rsize=%s" },
@@ -734,6 +737,9 @@ static void nfs_show_mount_options(struct seq_file *m, struct nfs_server *nfss,
 	if (nfss->options & NFS_OPTION_FSCACHE)
 		seq_printf(m, ",fsc");
 
+	if (nfss->options & NFS_OPTION_MIGRATION)
+		seq_printf(m, ",migration");
+
 	if (nfss->flags & NFS_MOUNT_LOOKUP_CACHE_NONEG) {
 		if (nfss->flags & NFS_MOUNT_LOOKUP_CACHE_NONE)
 			seq_printf(m, ",lookupcache=none");
@@ -1296,6 +1302,12 @@ static int nfs_parse_mount_options(char *raw,
 			kfree(mnt->fscache_uniq);
 			mnt->fscache_uniq = NULL;
 			break;
+		case Opt_migration:
+			mnt->options |= NFS_OPTION_MIGRATION;
+			break;
+		case Opt_nomigration:
+			mnt->options &= ~NFS_OPTION_MIGRATION;
+			break;
 
 		/*
 		 * options that take numeric values
@@ -1588,6 +1600,10 @@ static int nfs_parse_mount_options(char *raw,
 	if (mnt->minorversion && mnt->version != 4)
 		goto out_minorversion_mismatch;
 
+	if (mnt->options & NFS_OPTION_MIGRATION &&
+	    (mnt->version != 4 || mnt->minorversion != 0))
+		goto out_migration_misuse;
+
 	/*
 	 * verify that any proto=/mountproto= options match the address
 	 * families in the addr=/mountaddr= options.
@@ -1625,6 +1641,10 @@ out_minorversion_mismatch:
 	printk(KERN_INFO "NFS: mount option vers=%u does not support "
 			 "minorversion=%u\n", mnt->version, mnt->minorversion);
 	return 0;
+out_migration_misuse:
+	printk(KERN_INFO
+		"NFS: 'migration' not supported for this NFS version\n");
+	return 0;
 out_nomem:
 	printk(KERN_INFO "NFS: not enough memory to parse option\n");
 	return 0;
diff --git a/include/linux/nfs_fs_sb.h b/include/linux/nfs_fs_sb.h
index 6532765..abaa47a 100644
--- a/include/linux/nfs_fs_sb.h
+++ b/include/linux/nfs_fs_sb.h
@@ -39,6 +39,7 @@ struct nfs_client {
 	unsigned long		cl_flags;	/* behavior switches */
 #define NFS_CS_NORESVPORT	0		/* - use ephemeral src port */
 #define NFS_CS_DISCRTRY		1		/* - disconnect on RPC retry */
+#define NFS_CS_MIGRATION	2		/* - transparent state migr */
 	struct sockaddr_storage	cl_addr;	/* server identifier */
 	size_t			cl_addrlen;
 	char *			cl_hostname;	/* hostname of server */
@@ -124,6 +125,7 @@ struct nfs_server {
 	unsigned int		namelen;
 	unsigned int		options;	/* extra options enabled by mount */
 #define NFS_OPTION_FSCACHE	0x00000001	/* - local caching enabled */
+#define NFS_OPTION_MIGRATION	0x00000002	/* - NFSv4 migration enabled */
 
 	struct nfs_fsid		fsid;
 	__u64			maxfilesize;	/* maximum file size */



* [PATCH 10/14] NFS: Use the same nfs_client_id4 for every server
  2012-07-09 15:43 [PATCH 00/14] Candidates for 3.6 Chuck Lever
                   ` (8 preceding siblings ...)
  2012-07-09 15:45 ` [PATCH 09/14] NFS: Introduce "migration" mount option Chuck Lever
@ 2012-07-09 15:45 ` Chuck Lever
  2012-07-09 15:45 ` [PATCH 11/14] NFS: Discover NFSv4 server trunking when mounting Chuck Lever
                   ` (3 subsequent siblings)
  13 siblings, 0 replies; 31+ messages in thread
From: Chuck Lever @ 2012-07-09 15:45 UTC
  To: trond.myklebust; +Cc: linux-nfs

Currently, when identifying itself to NFS servers, the Linux NFS
client uses a unique nfs_client_id4.id string for each server IP
address it talks with.  For example, when client A talks to server X,
the client identifies itself using a string like "AX".

These strings are opaque to servers.  Each client is free to choose
any content, as long as it is unique from other client instances.  A
server must not parse the contents of the string, it can only test
these strings for equality.  These requirements are specified in
detail by RFC 3530 (and bis).

This form of client identification presents a problem for Transparent
State Migration.  When client A's state on server X is migrated to
server Y, it continues to be associated with string "AX."  But,
according to the rules of client string construction above, client
A will present string "AY" when communicating with server Y.

Server Y thus has no way to know that client A should be associated
with the state migrated from server X.  "AX" is all but abandoned,
interfering with establishing fresh state for A on server Y.

To support transparent state migration, then, NFSv4.0 clients must
instead use the same nfs_client_id4.id string to identify themselves
to every NFS server; something like "A".

As part of a migration event, when state associated with string "A"
shows up at server Y, client A identifies itself as "A" and server Y
will know immediately that the state associated with "A," whether it
is native or migrated, is owned by client A.

As a pre-requisite to adding support for NFSv4 migration to the Linux
NFS client, this patch changes the way Linux identifies itself to NFS
servers via the SETCLIENTID (NFSv4 minor version 0) and EXCHANGE_ID
(NFSv4 minor version 1) operations.

In addition to removing the server's IP address from nfs_client_id4,
the Linux NFS client will also no longer use its own source IP address
as part of the nfs_client_id4 string.  On multi-homed clients, the
value of this address depends on the address family and network
routing used to contact the server, thus it can be different for each
server.
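
With the format strings below (host name and addresses hypothetical),
the non-uniform string looks like "Linux NFSv4.0 192.168.1.10/10.0.0.5
tcp", while the uniform string is simply "Linux NFSv4.0 client1" and
is identical for every server this client contacts.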

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---

 fs/nfs/nfs4proc.c |   51 +++++++++++++++++++++++++++++++++++++++------------
 1 files changed, 39 insertions(+), 12 deletions(-)

diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
index 1d3b28d..214e1e0 100644
--- a/fs/nfs/nfs4proc.c
+++ b/fs/nfs/nfs4proc.c
@@ -4030,6 +4030,32 @@ static void nfs4_init_boot_verifier(const struct nfs_client *clp,
 	memcpy(bootverf->data, verf, sizeof(bootverf->data));
 }
 
+static unsigned int
+nfs4_init_nonuniform_client_string(const struct nfs_client *clp,
+				   char *buf, size_t len)
+{
+	unsigned int result;
+
+	rcu_read_lock();
+	result = scnprintf(buf, len, "Linux NFSv4.0 %s/%s %s",
+				clp->cl_ipaddr,
+				rpc_peeraddr2str(clp->cl_rpcclient,
+							RPC_DISPLAY_ADDR),
+				rpc_peeraddr2str(clp->cl_rpcclient,
+							RPC_DISPLAY_PROTO));
+	rcu_read_unlock();
+	return result;
+}
+
+static unsigned int
+nfs4_init_uniform_client_string(const struct nfs_client *clp,
+				char *buf, size_t len)
+{
+	return scnprintf(buf, len, "Linux NFSv%u.%u %s",
+				clp->rpc_ops->version, clp->cl_minorversion,
+				clp->cl_rpcclient->cl_nodename);
+}
+
 /**
  * nfs4_proc_setclientid - Negotiate client ID
  * @clp: state data structure
@@ -4060,15 +4086,18 @@ int nfs4_proc_setclientid(struct nfs_client *clp, u32 program,
 
 	/* nfs_client_id4 */
 	nfs4_init_boot_verifier(clp, &sc_verifier);
-	rcu_read_lock();
-	setclientid.sc_name_len = scnprintf(setclientid.sc_name,
-			sizeof(setclientid.sc_name), "%s/%s %s",
-			clp->cl_ipaddr,
-			rpc_peeraddr2str(clp->cl_rpcclient,
-						RPC_DISPLAY_ADDR),
-			rpc_peeraddr2str(clp->cl_rpcclient,
-						RPC_DISPLAY_PROTO));
+	if (test_bit(NFS_CS_MIGRATION, &clp->cl_flags))
+		setclientid.sc_name_len =
+				nfs4_init_uniform_client_string(clp,
+						setclientid.sc_name,
+						sizeof(setclientid.sc_name));
+	else
+		setclientid.sc_name_len =
+				nfs4_init_nonuniform_client_string(clp,
+						setclientid.sc_name,
+						sizeof(setclientid.sc_name));
 	/* cb_client4 */
+	rcu_read_lock();
 	setclientid.sc_netid_len = scnprintf(setclientid.sc_netid,
 				sizeof(setclientid.sc_netid),
 				rpc_peeraddr2str(clp->cl_rpcclient,
@@ -5294,10 +5323,8 @@ int nfs4_proc_exchange_id(struct nfs_client *clp, struct rpc_cred *cred)
 	};
 
 	nfs4_init_boot_verifier(clp, &verifier);
-	args.id_len = scnprintf(args.id, sizeof(args.id),
-				"%s/%s",
-				clp->cl_ipaddr,
-				clp->cl_rpcclient->cl_nodename);
+	args.id_len = nfs4_init_uniform_client_string(clp, args.id,
+							sizeof(args.id));
 	dprintk("NFS call  exchange_id auth=%s, '%.*s'\n",
 		clp->cl_rpcclient->cl_auth->au_ops->au_name,
 		args.id_len, args.id);



* [PATCH 11/14] NFS: Discover NFSv4 server trunking when mounting
  2012-07-09 15:43 [PATCH 00/14] Candidates for 3.6 Chuck Lever
                   ` (9 preceding siblings ...)
  2012-07-09 15:45 ` [PATCH 10/14] NFS: Use the same nfs_client_id4 for every server Chuck Lever
@ 2012-07-09 15:45 ` Chuck Lever
  2012-07-09 15:45 ` [PATCH 12/14] NFS: Add nfs4_unique_id boot parameter Chuck Lever
                   ` (2 subsequent siblings)
  13 siblings, 0 replies; 31+ messages in thread
From: Chuck Lever @ 2012-07-09 15:45 UTC
  To: trond.myklebust; +Cc: linux-nfs

"Server trunking" is a fancy name for a multi-homed NFS server.
Trunking might occur if a client sends NFS requests for a single
workload to multiple network interfaces on the same server.  There
are some implications for NFSv4 state management that make it useful
to know if a single NFSv4 server instance is multi-homed.

If a client cares about server trunking, no NFSv4 operations can
proceed until that client determines who it is talking to.  Thus
server IP trunking discovery must be done when the client first
encounters an unfamiliar server IP address.

The nfs_get_client() function walks the nfs_client_list and matches on
server IP address.  The outcome of that walk tells us immediately if
we have an unfamiliar server IP address.  It invokes nfs_init_client()
in this case.  Thus, nfs4_init_client() is a good spot to perform
trunking discovery.

Discovery requires a client to establish a fresh client ID, so our
client will now send SETCLIENTID or EXCHANGE_ID as the first NFS
operation after a successful ping, rather than waiting for an
application to perform an operation that requires NFSv4 state.

The exact process for detecting trunking is different for NFSv4.0 and
NFSv4.1, so a minorversion-specific init_client callout is introduced.

CLID_INUSE recovery is important for the trunking discovery process.
CLID_INUSE is a sign the server recognizes the client's nfs_client_id4
id string, but the client is using the wrong principal this time.
The SETCLIENTID must be retried with a series of different principals
until one works, and the rest of trunking discovery can proceed.
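
In outline: after a successful ping, nfs4_init_client() establishes a
fresh client ID (SETCLIENTID for v4.0, EXCHANGE_ID for v4.1), then the
minorversion-specific callout invokes nfs40_walk_client_list() or
nfs41_walk_client_list() to compare the result against every
previously known nfs_client.  A match means the unfamiliar server IP
address is just another interface on an already-known server instance.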

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---

 fs/nfs/client.c    |  237 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 fs/nfs/internal.h  |    6 +
 fs/nfs/nfs4_fs.h   |    8 ++
 fs/nfs/nfs4proc.c  |    2 
 fs/nfs/nfs4state.c |  186 ++++++++++++++++++++++++++++++++++++++++-
 5 files changed, 433 insertions(+), 6 deletions(-)

diff --git a/fs/nfs/client.c b/fs/nfs/client.c
index 5bad694..ea1e423 100644
--- a/fs/nfs/client.c
+++ b/fs/nfs/client.c
@@ -576,7 +576,8 @@ nfs_get_client(const struct nfs_client_initdata *cl_init,
 			return nfs_found_client(cl_init, clp);
 		}
 		if (new) {
-			list_add(&new->cl_share_link, &nn->nfs_client_list);
+			list_add_tail(&new->cl_share_link,
+					&nn->nfs_client_list);
 			spin_unlock(&nn->nfs_client_lock);
 			new->cl_flags = cl_init->init_flags;
 			return cl_init->rpc_ops->init_client(new,
@@ -594,6 +595,228 @@ nfs_get_client(const struct nfs_client_initdata *cl_init,
 	return new;
 }
 
+#ifdef CONFIG_NFS_V4
+/*
+ * Returns true if the client IDs match
+ */
+static bool nfs4_match_clientids(struct nfs_client *a, struct nfs_client *b)
+{
+	if (a->cl_clientid != b->cl_clientid) {
+		dprintk("NFS: --> %s client ID %llx does not match %llx\n",
+			__func__, a->cl_clientid, b->cl_clientid);
+		return false;
+	}
+	dprintk("NFS: --> %s client ID %llx matches %llx\n",
+		__func__, a->cl_clientid, b->cl_clientid);
+	return true;
+}
+
+/*
+ * SETCLIENTID just did a callback update with the callback ident in
+ * "drop," but server trunking discovery claims "drop" and "keep" are
+ * actually the same server.  Swap the callback IDs so that "keep"
+ * will continue to use the callback ident the server now knows about,
+ * and so that "keep"'s original callback ident is destroyed when
+ * "drop" is freed.
+ */
+static void nfs4_swap_callback_idents(struct nfs_client *keep,
+				      struct nfs_client *drop)
+{
+	struct nfs_net *nn = net_generic(keep->cl_net, nfs_net_id);
+	unsigned int save = keep->cl_cb_ident;
+
+	if (keep->cl_cb_ident == drop->cl_cb_ident)
+		return;
+
+	dprintk("%s: keeping callback ident %u and dropping ident %u\n",
+		__func__, keep->cl_cb_ident, drop->cl_cb_ident);
+
+	spin_lock(&nn->nfs_client_lock);
+
+	idr_replace(&nn->cb_ident_idr, keep, drop->cl_cb_ident);
+	keep->cl_cb_ident = drop->cl_cb_ident;
+
+	idr_replace(&nn->cb_ident_idr, drop, save);
+	drop->cl_cb_ident = save;
+
+	spin_unlock(&nn->nfs_client_lock);
+}
+
+/**
+ * nfs40_walk_client_list - Find server that recognizes a client ID
+ *
+ * @new: nfs_client with client ID to test
+ * @result: OUT: found nfs_client, or new
+ * @cred: credential to use for trunking test
+ *
+ * Returns zero, a negative errno, or a negative NFS4ERR status.
+ * If zero is returned, an nfs_client pointer is planted in "result."
+ *
+ * NB: nfs40_walk_client_list() relies on the new nfs_client being
+ *     the last nfs_client on the list.
+ */
+int nfs40_walk_client_list(struct nfs_client *new,
+			   struct nfs_client **result,
+			   struct rpc_cred *cred)
+{
+	struct nfs_net *nn = net_generic(new->cl_net, nfs_net_id);
+	struct nfs_client *pos, *prev = NULL;
+	struct nfs4_setclientid_res clid = {
+		.clientid	= new->cl_clientid,
+		.confirm	= new->cl_confirm,
+	};
+	int status;
+
+	spin_lock(&nn->nfs_client_lock);
+
+	list_for_each_entry(pos, &nn->nfs_client_list, cl_share_link) {
+		if (pos->cl_cons_state < 0)
+			continue;
+
+		if (pos->rpc_ops != new->rpc_ops)
+			continue;
+
+		if (pos->cl_proto != new->cl_proto)
+			continue;
+
+		if (pos->cl_minorversion != new->cl_minorversion)
+			continue;
+
+		if (pos->cl_clientid != new->cl_clientid)
+			continue;
+
+		atomic_inc(&pos->cl_count);
+		spin_unlock(&nn->nfs_client_lock);
+
+		if (prev)
+			nfs_put_client(prev);
+
+		status = nfs4_proc_setclientid_confirm(pos, &clid, cred);
+		if (status == 0) {
+			nfs4_swap_callback_idents(pos, new);
+
+			nfs_put_client(pos);
+			*result = pos;
+			dprintk("NFS: <-- %s using nfs_client = %p ({%d})\n",
+				__func__, pos, atomic_read(&pos->cl_count));
+			return 0;
+		}
+		if (status != -NFS4ERR_STALE_CLIENTID) {
+			nfs_put_client(pos);
+			dprintk("NFS: <-- %s status = %d, no result\n",
+				__func__, status);
+			return status;
+		}
+
+		spin_lock(&nn->nfs_client_lock);
+		prev = pos;
+	}
+
+	/*
+	 * No matching nfs_client found.  This should be impossible,
+	 * because the new nfs_client has already been added to
+	 * nfs_client_list by nfs_get_client().
+	 *
+	 * Don't BUG(), since the caller is holding a mutex.
+	 */
+	spin_unlock(&nn->nfs_client_lock);
+	pr_err("NFS: %s Error: no matching nfs_client found\n", __func__);
+	return -NFS4ERR_STALE_CLIENTID;
+}
+
+#ifdef CONFIG_NFS_V4_1
+/*
+ * Returns true if the server owners match
+ */
+static bool
+nfs4_match_serverowners(struct nfs_client *a, struct nfs_client *b)
+{
+	struct nfs41_server_owner *o1 = a->cl_serverowner;
+	struct nfs41_server_owner *o2 = b->cl_serverowner;
+
+	if (o1->minor_id != o2->minor_id) {
+		dprintk("NFS: --> %s server owner minor IDs do not match\n",
+			__func__);
+		return false;
+	}
+
+	if (o1->major_id_sz != o2->major_id_sz)
+		goto out_major_mismatch;
+	if (memcmp(o1->major_id, o2->major_id, o1->major_id_sz) != 0)
+		goto out_major_mismatch;
+
+	dprintk("NFS: --> %s server owners match\n", __func__);
+	return true;
+
+out_major_mismatch:
+	dprintk("NFS: --> %s server owner major IDs do not match\n",
+		__func__);
+	return false;
+}
+
+/**
+ * nfs41_walk_client_list - Find nfs_client that matches a client/server owner
+ *
+ * @new: nfs_client with client ID to test
+ * @result: OUT: found nfs_client, or new
+ * @cred: credential to use for trunking test
+ *
+ * Returns zero, a negative errno, or a negative NFS4ERR status.
+ * If zero is returned, an nfs_client pointer is planted in "result."
+ *
+ * NB: nfs41_walk_client_list() relies on the new nfs_client being
+ *     the last nfs_client on the list.
+ */
+int nfs41_walk_client_list(struct nfs_client *new,
+			   struct nfs_client **result,
+			   struct rpc_cred *cred)
+{
+	struct nfs_net *nn = net_generic(new->cl_net, nfs_net_id);
+	struct nfs_client *pos;
+
+	spin_lock(&nn->nfs_client_lock);
+
+	list_for_each_entry(pos, &nn->nfs_client_list, cl_share_link) {
+		if (pos->cl_cons_state < 0)
+			continue;
+
+		if (pos->rpc_ops != new->rpc_ops)
+			continue;
+
+		if (pos->cl_proto != new->cl_proto)
+			continue;
+
+		if (pos->cl_minorversion != new->cl_minorversion)
+			continue;
+
+		if (!nfs4_match_clientids(pos, new))
+			continue;
+
+		if (!nfs4_match_serverowners(pos, new))
+			continue;
+
+		spin_unlock(&nn->nfs_client_lock);
+		dprintk("NFS: <-- %s using nfs_client = %p ({%d})\n",
+			__func__, pos, atomic_read(&pos->cl_count));
+
+		*result = pos;
+		return 0;
+	}
+
+	/*
+	 * No matching nfs_client found.  This should be impossible,
+	 * because the new nfs_client has already been added to
+	 * nfs_client_list by nfs_get_client().
+	 *
+	 * Don't BUG(), since the caller is holding a mutex.
+	 */
+	spin_unlock(&nn->nfs_client_lock);
+	pr_err("NFS: %s Error: no matching nfs_client found\n", __func__);
+	return -NFS4ERR_STALE_CLIENTID;
+}
+#endif	/* CONFIG_NFS_V4_1 */
+#endif	/* CONFIG_NFS_V4 */
+
 /*
  * Mark a server as ready or failed
  */
@@ -1369,6 +1592,7 @@ struct nfs_client *nfs4_init_client(struct nfs_client *clp,
 				    rpc_authflavor_t authflavour)
 {
 	char buf[INET6_ADDRSTRLEN + 1];
+	struct nfs_client *old;
 	int error;
 
 	if (clp->cl_cons_state == NFS_CS_READY) {
@@ -1414,6 +1638,17 @@ struct nfs_client *nfs4_init_client(struct nfs_client *clp,
 
 	if (!nfs4_has_session(clp))
 		nfs_mark_client_ready(clp, NFS_CS_READY);
+
+	error = nfs4_discover_server_trunking(clp, &old);
+	if (error < 0)
+		goto error;
+	if (clp != old) {
+		nfs_mark_client_ready(clp, NFS_CS_READY);
+		nfs_put_client(clp);
+		clp = old;
+		atomic_inc(&clp->cl_count);
+	}
+
 	return clp;
 
 error:
diff --git a/fs/nfs/internal.h b/fs/nfs/internal.h
index 92d1a5e..305a549 100644
--- a/fs/nfs/internal.h
+++ b/fs/nfs/internal.h
@@ -155,6 +155,12 @@ extern struct nfs_client *nfs4_find_client_ident(struct net *, int);
 extern struct nfs_client *
 nfs4_find_client_sessionid(struct net *, const struct sockaddr *,
 				struct nfs4_sessionid *);
+extern int nfs40_walk_client_list(struct nfs_client *clp,
+				struct nfs_client **result,
+				struct rpc_cred *cred);
+extern int nfs41_walk_client_list(struct nfs_client *clp,
+				struct nfs_client **result,
+				struct rpc_cred *cred);
 extern struct nfs_server *nfs_create_server(
 					const struct nfs_parsed_mount_data *,
 					struct nfs_fh *);
diff --git a/fs/nfs/nfs4_fs.h b/fs/nfs/nfs4_fs.h
index cc5900a..2df6668 100644
--- a/fs/nfs/nfs4_fs.h
+++ b/fs/nfs/nfs4_fs.h
@@ -191,6 +191,8 @@ struct nfs4_state_recovery_ops {
 	int (*establish_clid)(struct nfs_client *, struct rpc_cred *);
 	struct rpc_cred * (*get_clid_cred)(struct nfs_client *);
 	int (*reclaim_complete)(struct nfs_client *);
+	int (*detect_trunking)(struct nfs_client *, struct nfs_client **,
+		struct rpc_cred *);
 };
 
 struct nfs4_state_maintenance_ops {
@@ -310,9 +312,15 @@ extern void nfs4_renew_state(struct work_struct *);
 /* nfs4state.c */
 struct rpc_cred *nfs4_get_setclientid_cred(struct nfs_client *clp);
 struct rpc_cred *nfs4_get_renew_cred_locked(struct nfs_client *clp);
+int nfs4_discover_server_trunking(struct nfs_client *clp,
+			struct nfs_client **);
+int nfs40_discover_server_trunking(struct nfs_client *clp,
+			struct nfs_client **, struct rpc_cred *);
 #if defined(CONFIG_NFS_V4_1)
 struct rpc_cred *nfs4_get_machine_cred_locked(struct nfs_client *clp);
 struct rpc_cred *nfs4_get_exchange_id_cred(struct nfs_client *clp);
+int nfs41_discover_server_trunking(struct nfs_client *clp,
+			struct nfs_client **, struct rpc_cred *);
 extern void nfs4_schedule_session_recovery(struct nfs4_session *, int);
 #else
 static inline void nfs4_schedule_session_recovery(struct nfs4_session *session, int err)
diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
index 214e1e0..d454e3b 100644
--- a/fs/nfs/nfs4proc.c
+++ b/fs/nfs/nfs4proc.c
@@ -6803,6 +6803,7 @@ static const struct nfs4_state_recovery_ops nfs40_reboot_recovery_ops = {
 	.recover_lock	= nfs4_lock_reclaim,
 	.establish_clid = nfs4_init_clientid,
 	.get_clid_cred	= nfs4_get_setclientid_cred,
+	.detect_trunking = nfs40_discover_server_trunking,
 };
 
 #if defined(CONFIG_NFS_V4_1)
@@ -6814,6 +6815,7 @@ static const struct nfs4_state_recovery_ops nfs41_reboot_recovery_ops = {
 	.establish_clid = nfs41_init_clientid,
 	.get_clid_cred	= nfs4_get_exchange_id_cred,
 	.reclaim_complete = nfs41_proc_reclaim_complete,
+	.detect_trunking = nfs41_discover_server_trunking,
 };
 #endif /* CONFIG_NFS_V4_1 */
 
diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c
index d6fe6f7..3a8563c 100644
--- a/fs/nfs/nfs4state.c
+++ b/fs/nfs/nfs4state.c
@@ -51,6 +51,8 @@
 #include <linux/bitops.h>
 #include <linux/jiffies.h>
 
+#include <linux/sunrpc/clnt.h>
+
 #include "nfs4_fs.h"
 #include "callback.h"
 #include "delegation.h"
@@ -62,7 +64,7 @@
 #define OPENOWNER_POOL_SIZE	8
 
 const nfs4_stateid zero_stateid;
-
+static DEFINE_MUTEX(nfs_clid_init_mutex);
 static LIST_HEAD(nfs4_clientid_list);
 
 int nfs4_init_clientid(struct nfs_client *clp, struct rpc_cred *cred)
@@ -96,6 +98,52 @@ out:
 	return status;
 }
 
+/**
+ * nfs40_discover_server_trunking - Detect server IP address trunking (mv0)
+ *
+ * @clp: nfs_client under test
+ * @result: OUT: found nfs_client, or clp
+ * @cred: credential to use for trunking test
+ *
+ * Returns zero, a negative errno, or a negative NFS4ERR status.
+ * If zero is returned, an nfs_client pointer is planted in
+ * "result".
+ */
+int nfs40_discover_server_trunking(struct nfs_client *clp,
+				   struct nfs_client **result,
+				   struct rpc_cred *cred)
+{
+	struct nfs4_setclientid_res clid = {
+		.clientid = clp->cl_clientid,
+		.confirm = clp->cl_confirm,
+	};
+	unsigned short port;
+	int status;
+
+	port = nfs_callback_tcpport;
+	if (clp->cl_addr.ss_family == AF_INET6)
+		port = nfs_callback_tcpport6;
+
+	status = nfs4_proc_setclientid(clp, NFS4_CALLBACK, port, cred, &clid);
+	if (status != 0)
+		goto out;
+	clp->cl_clientid = clid.clientid;
+	clp->cl_confirm = clid.confirm;
+
+	status = nfs40_walk_client_list(clp, result, cred);
+	if (status == -NFS4ERR_STALE_CLIENTID)
+		set_bit(NFS4CLNT_LEASE_CONFIRM, &clp->cl_state);
+
+	/*
+	 * Sustain the lease, even if it's empty.  If the clientid4
+	 * goes stale it's of no use for trunking discovery.
+	 */
+	nfs4_schedule_state_renewal(*result);
+
+out:
+	return status;
+}
+
 struct rpc_cred *nfs4_get_machine_cred_locked(struct nfs_client *clp)
 {
 	struct rpc_cred *cred = NULL;
@@ -275,6 +323,46 @@ out:
 	return status;
 }
 
+/**
+ * nfs41_discover_server_trunking - Detect server IP address trunking (mv1)
+ *
+ * @clp: nfs_client under test
+ * @result: OUT: found nfs_client, or clp
+ * @cred: credential to use for trunking test
+ *
+ * Returns NFS4_OK, a negative errno, or a negative NFS4ERR status.
+ * If NFS4_OK is returned, an nfs_client pointer is planted in
+ * "result".
+ */
+int nfs41_discover_server_trunking(struct nfs_client *clp,
+				   struct nfs_client **result,
+				   struct rpc_cred *cred)
+{
+	struct nfs_client *trunked;
+	int status;
+
+	nfs4_begin_drain_session(clp);
+	status = nfs4_proc_exchange_id(clp, cred);
+	if (status != NFS4_OK)
+		goto out;
+
+	status = nfs41_walk_client_list(clp, &trunked, cred);
+	if (status != NFS4_OK)
+		goto out;
+
+	set_bit(NFS4CLNT_LEASE_CONFIRM, &trunked->cl_state);
+	status = nfs4_proc_create_session(trunked, cred);
+	if (status != NFS4_OK)
+		goto out;
+	clear_bit(NFS4CLNT_LEASE_CONFIRM, &trunked->cl_state);
+	nfs41_setup_state_renewal(trunked);
+	nfs_mark_client_ready(trunked, NFS_CS_READY);
+	*result = trunked;
+out:
+	nfs4_end_drain_session(clp);
+	return status;
+}
+
 struct rpc_cred *nfs4_get_exchange_id_cred(struct nfs_client *clp)
 {
 	struct rpc_cred *cred;
@@ -1656,16 +1744,104 @@ static int nfs4_reclaim_lease(struct nfs_client *clp)
 	struct rpc_cred *cred;
 	const struct nfs4_state_recovery_ops *ops =
 		clp->cl_mvops->reboot_recovery_ops;
-	int status;
+	int status = -ENOENT;
 
+	mutex_lock(&nfs_clid_init_mutex);
 	cred = ops->get_clid_cred(clp);
 	if (cred == NULL)
-		return -ENOENT;
+		goto out;
 	status = ops->establish_clid(clp, cred);
 	put_rpccred(cred);
 	if (status != 0)
-		return nfs4_handle_reclaim_lease_error(clp, status);
-	return 0;
+		status = nfs4_handle_reclaim_lease_error(clp, status);
+out:
+	mutex_unlock(&nfs_clid_init_mutex);
+	return status;
+}
+
+/**
+ * nfs4_discover_server_trunking - Detect server IP address trunking
+ *
+ * @clp: nfs_client under test
+ * @result: OUT: found nfs_client, or clp
+ *
+ * Returns zero or a negative errno.  If zero is returned,
+ * an nfs_client pointer is planted in "result".
+ *
+ * Note: since we are invoked in process context, and
+ * not from inside the state manager, we cannot use
+ * nfs4_handle_reclaim_lease_error().
+ */
+int nfs4_discover_server_trunking(struct nfs_client *clp,
+				  struct nfs_client **result)
+{
+	const struct nfs4_state_recovery_ops *ops =
+				clp->cl_mvops->reboot_recovery_ops;
+	rpc_authflavor_t flav, flavors[NFS_MAX_SECFLAVORS];
+	struct rpc_auth *auth;
+	struct rpc_cred *cred;
+	int i, len, status;
+
+	dprintk("NFS: %s: testing '%s'\n", __func__, clp->cl_hostname);
+	mutex_lock(&nfs_clid_init_mutex);
+
+	i = 0;
+	len = rpcauth_list_flavors(flavors, ARRAY_SIZE(flavors));
+
+	status  = -ENOENT;
+again:
+	cred = ops->get_clid_cred(clp);
+	if (cred == NULL)
+		goto out;
+
+	status = ops->detect_trunking(clp, result, cred);
+	put_rpccred(cred);
+	switch (status) {
+	case 0:
+		break;
+
+	case -EACCES:
+		if (clp->cl_machine_cred == NULL)
+			break;
+		/* Handle case where the user hasn't set up machine creds */
+		nfs4_clear_machine_cred(clp);
+	case -NFS4ERR_DELAY:
+	case -ETIMEDOUT:
+	case -EAGAIN:
+		ssleep(1);
+		dprintk("NFS: %s after status %d, retrying\n",
+			__func__, status);
+		goto again;
+
+	case -NFS4ERR_CLID_INUSE:
+	case -NFS4ERR_WRONGSEC:
+		status = -EPERM;
+		if (i >= len)
+			break;
+
+		flav = flavors[i++];
+		auth = rpcauth_create(flav, clp->cl_rpcclient);
+		if (IS_ERR(auth)) {
+			status = PTR_ERR(auth);
+			break;
+		}
+		goto again;
+
+	case -NFS4ERR_MINOR_VERS_MISMATCH:
+		status = -EPROTONOSUPPORT;
+		break;
+
+	case -EKEYEXPIRED:
+		nfs4_warn_keyexpired(clp->cl_hostname);
+	case -NFS4ERR_NOT_SAME: /* FixMe: implement recovery
+				 * in nfs4_exchange_id */
+		status = -EKEYEXPIRED;
+	}
+
+out:
+	mutex_unlock(&nfs_clid_init_mutex);
+	dprintk("NFS: %s: status = %d\n", __func__, status);
+	return status;
 }
 
 #ifdef CONFIG_NFS_V4_1


^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [PATCH 12/14] NFS: Add nfs4_unique_id boot parameter
  2012-07-09 15:43 [PATCH 00/14] Candidates for 3.6 Chuck Lever
                   ` (10 preceding siblings ...)
  2012-07-09 15:45 ` [PATCH 11/14] NFS: Discover NFSv4 server trunking when mounting Chuck Lever
@ 2012-07-09 15:45 ` Chuck Lever
  2012-07-09 15:45 ` [PATCH 13/14] NFS: Clean up debugging messages in fs/nfs/client.c Chuck Lever
  2012-07-09 15:45 ` [PATCH 14/14] NFS: Slow down state manager after an unhandled error Chuck Lever
  13 siblings, 0 replies; 31+ messages in thread
From: Chuck Lever @ 2012-07-09 15:45 UTC (permalink / raw)
  To: trond.myklebust; +Cc: linux-nfs

An optional boot parameter is introduced to allow client
administrators to specify a string that the Linux NFS client can
insert into its nfs_client_id4 id string, to make it both more
globally unique, and to ensure that it doesn't change even if the
client's nodename changes.

If this boot parameter is not specified, the client's nodename is
used, as before.

Client installation procedures can create a unique string (typically,
a UUID) which remains unchanged during the lifetime of that client
instance.  This works just like creating a UUID for the label of the
system's root and boot volumes.
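
For illustration only (the file names and the UUID below are made-up
examples; any standard way of setting a module parameter works), an
administrator might pin the string like this:

	# On the kernel command line, e.g. in grub.conf:
	nfs.nfs4_unique_id=0f6a8c2e-3b7d-4f2a-9c1e-5d8b2a7e4f10

	# Or, when nfs is built as a module, in /etc/modprobe.d/nfs.conf:
	options nfs nfs4_unique_id=0f6a8c2e-3b7d-4f2a-9c1e-5d8b2a7e4f10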

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---

 Documentation/filesystems/nfs/nfs.txt |   44 +++++++++++++++++++++++++++++++--
 Documentation/kernel-parameters.txt   |    5 ++++
 fs/nfs/nfs4proc.c                     |   12 ++++++++-
 3 files changed, 57 insertions(+), 4 deletions(-)

diff --git a/Documentation/filesystems/nfs/nfs.txt b/Documentation/filesystems/nfs/nfs.txt
index f50f26c..f2571c8 100644
--- a/Documentation/filesystems/nfs/nfs.txt
+++ b/Documentation/filesystems/nfs/nfs.txt
@@ -12,9 +12,47 @@ and work is in progress on adding support for minor version 1 of the NFSv4
 protocol.
 
 The purpose of this document is to provide information on some of the
-upcall interfaces that are used in order to provide the NFS client with
-some of the information that it requires in order to fully comply with
-the NFS spec.
+special features of the NFS client that can be configured by system
+administrators.
+
+
+The nfs4_unique_id parameter
+============================
+
+NFSv4 requires clients to identify themselves to servers with a unique
+string.  File open and lock state shared between one client and one server
+is associated with this identity.  To support robust NFSv4 state recovery
+and transparent state migration, this identity string must not change
+across client reboots.
+
+Without any other intervention, the Linux client uses a string that contains
+the local system's node name.  System administrators, however, often do not
+take care to ensure that node names are fully qualified and do not change
+over the lifetime of a client system.  Node names may also be subject
+to other administrative requirements that make them a poor fit for use
+as part of an nfs_client_id4 string.
+
+The nfs.nfs4_unique_id boot parameter specifies a unique string that can be
+used instead of a system's node name when an NFS client identifies itself to
+a server.  Thus, if the system's node name is not unique, or it changes, its
+nfs.nfs4_unique_id stays the same, preventing collision with other clients
+or loss of state during NFS reboot recovery or transparent state migration.
+
+The nfs.nfs4_unique_id string is typically a UUID, though it can contain
+anything that is believed to be unique across all NFS clients.  An
+nfs4_unique_id string should be chosen when a client system is installed,
+just as a system's root file system gets a fresh UUID in its label at
+install time.
+
+The string should remain fixed for the lifetime of the client.  It can be
+changed safely if care is taken that the client shuts down cleanly and all
+outstanding NFSv4 state has expired, to prevent loss of NFSv4 state.
+
+This string can be stored in an NFS client's grub.conf, or it can be provided
+via a net boot facility such as PXE.  It may also be specified as an nfs.ko
+module parameter.  Specifying a uniquifier string is not supported for NFS
+clients running in containers.
+
 
 The DNS resolver
 ================
diff --git a/Documentation/kernel-parameters.txt b/Documentation/kernel-parameters.txt
index a92c5eb..733b5ab 100644
--- a/Documentation/kernel-parameters.txt
+++ b/Documentation/kernel-parameters.txt
@@ -1720,6 +1720,11 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
 			will be autodetected by the client, and it will fall
 			back to using the idmapper.
 			To turn off this behaviour, set the value to '0'.
+	nfs.nfs4_unique_id=
+			[NFS4] Specify an additional fixed unique
+			identification string that NFSv4 clients can insert
+			into their nfs_client_id4 string.  This is typically
+			a UUID that is generated at system install time.
 
 	nfs.send_implementation_id =
 			[NFSv4.1] Send client implementation identification
diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
index d454e3b..586d88b 100644
--- a/fs/nfs/nfs4proc.c
+++ b/fs/nfs/nfs4proc.c
@@ -74,6 +74,12 @@
 
 static unsigned short max_session_slots = NFS4_DEF_SLOT_TABLE_SIZE;
 
+#define NFS4_CLIENT_ID_UNIQ_LEN		(64)
+static char nfs4_client_id_uniquifier[NFS4_CLIENT_ID_UNIQ_LEN] = "";
+module_param_string(nfs4_unique_id, nfs4_client_id_uniquifier,
+			NFS4_CLIENT_ID_UNIQ_LEN, 0600);
+MODULE_PARM_DESC(nfs4_unique_id, "nfs_client_id4 uniquifier string");
+
 struct nfs4_opendata;
 static int _nfs4_proc_open(struct nfs4_opendata *data);
 static int _nfs4_recover_proc_open(struct nfs4_opendata *data);
@@ -4051,9 +4057,13 @@ static unsigned int
 nfs4_init_uniform_client_string(const struct nfs_client *clp,
 				char *buf, size_t len)
 {
+	char *nodename = clp->cl_rpcclient->cl_nodename;
+
+	if (nfs4_client_id_uniquifier[0] != '\0')
+		nodename = nfs4_client_id_uniquifier;
 	return scnprintf(buf, len, "Linux NFSv%u.%u %s",
 				clp->rpc_ops->version, clp->cl_minorversion,
-				clp->cl_rpcclient->cl_nodename);
+				nodename);
 }
 
 /**


^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [PATCH 13/14] NFS: Clean up debugging messages in fs/nfs/client.c
  2012-07-09 15:43 [PATCH 00/14] Candidates for 3.6 Chuck Lever
                   ` (11 preceding siblings ...)
  2012-07-09 15:45 ` [PATCH 12/14] NFS: Add nfs4_unique_id boot parameter Chuck Lever
@ 2012-07-09 15:45 ` Chuck Lever
  2012-07-09 15:45 ` [PATCH 14/14] NFS: Slow down state manager after an unhandled error Chuck Lever
  13 siblings, 0 replies; 31+ messages in thread
From: Chuck Lever @ 2012-07-09 15:45 UTC (permalink / raw)
  To: trond.myklebust; +Cc: linux-nfs

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---

 fs/nfs/client.c |   43 +++++++++++++++++++++++++++++++++++++------
 fs/nfs/idmap.c  |    6 ++++++
 2 files changed, 43 insertions(+), 6 deletions(-)

diff --git a/fs/nfs/client.c b/fs/nfs/client.c
index ea1e423..47ac301 100644
--- a/fs/nfs/client.c
+++ b/fs/nfs/client.c
@@ -61,6 +61,9 @@ static DECLARE_WAIT_QUEUE_HEAD(nfs_client_active_wq);
 /*
  * Get a unique NFSv4.0 callback identifier which will be used
  * by the V4.0 callback service to lookup the nfs_client struct
+ *
+ * Note: A return of zero means no ident was allocated, and thus
+ *       none should be released when "clp" is destroyed.
  */
 static int nfs_get_cb_ident_idr(struct nfs_client *clp, int minorversion)
 {
@@ -70,13 +73,18 @@ static int nfs_get_cb_ident_idr(struct nfs_client *clp, int minorversion)
 	if (clp->rpc_ops->version != 4 || minorversion != 0)
 		return ret;
 retry:
-	if (!idr_pre_get(&nn->cb_ident_idr, GFP_KERNEL))
+	if (!idr_pre_get(&nn->cb_ident_idr, GFP_KERNEL)) {
+		dprintk("NFS: %s idr_pre_get failed\n", __func__);
 		return -ENOMEM;
+	}
 	spin_lock(&nn->nfs_client_lock);
 	ret = idr_get_new(&nn->cb_ident_idr, clp, &clp->cl_cb_ident);
 	spin_unlock(&nn->nfs_client_lock);
 	if (ret == -EAGAIN)
 		goto retry;
+
+	dprintk("NFS: %s nfs_client %p gets callback ident %d\n",
+		__func__, clp, ret);
 	return ret;
 }
 #endif /* CONFIG_NFS_V4 */
@@ -194,6 +202,8 @@ static struct nfs_client *nfs_alloc_client(const struct nfs_client_initdata *cl_
 		clp->cl_machine_cred = cred;
 	nfs_fscache_get_client_cookie(clp);
 
+	dprintk("NFS: %s returning new nfs_client = %p ({1})\n",
+		__func__, clp);
 	return clp;
 
 error_cleanup:
@@ -255,8 +265,11 @@ static void nfs_cb_idr_remove_locked(struct nfs_client *clp)
 {
 	struct nfs_net *nn = net_generic(clp->cl_net, nfs_net_id);
 
-	if (clp->cl_cb_ident)
+	if (clp->cl_cb_ident) {
 		idr_remove(&nn->cb_ident_idr, clp->cl_cb_ident);
+		dprintk("NFS: %s removed ident %d for nfs_client %p\n",
+			__func__, clp->cl_cb_ident, clp);
+	}
 }
 
 static void pnfs_init_server(struct nfs_server *server)
@@ -293,7 +306,7 @@ static void pnfs_init_server(struct nfs_server *server)
  */
 static void nfs_free_client(struct nfs_client *clp)
 {
-	dprintk("--> nfs_free_client(%u)\n", clp->rpc_ops->version);
+	dprintk("--> %s destroying nfs_client = %p\n", __func__, clp);
 
 	nfs4_shutdown_client(clp);
 
@@ -310,7 +323,7 @@ static void nfs_free_client(struct nfs_client *clp)
 	kfree(clp->cl_hostname);
 	kfree(clp);
 
-	dprintk("<-- nfs_free_client()\n");
+	dprintk("<-- %s done\n", __func__);
 }
 
 /*
@@ -323,7 +336,8 @@ void nfs_put_client(struct nfs_client *clp)
 	if (!clp)
 		return;
 
-	dprintk("--> nfs_put_client({%d})\n", atomic_read(&clp->cl_count));
+	dprintk("--> %s nfs_client = %p ({%d})\n",
+		__func__, clp, atomic_read(&clp->cl_count));
 	nn = net_generic(clp->cl_net, nfs_net_id);
 
 	if (atomic_dec_and_lock(&clp->cl_count, &nn->nfs_client_lock)) {
@@ -504,6 +518,8 @@ static struct nfs_client *nfs_match_client(const struct nfs_client_initdata *dat
 			continue;
 
 		atomic_inc(&clp->cl_count);
+		dprintk("%s nfs_client = %p ({%d})\n",
+			__func__, clp, atomic_read(&clp->cl_count));
 		return clp;
 	}
 	return NULL;
@@ -1461,11 +1477,21 @@ nfs4_find_client_ident(struct net *net, int cb_ident)
 	struct nfs_client *clp;
 	struct nfs_net *nn = net_generic(net, nfs_net_id);
 
+	dprintk("NFS: --> %s looking for %d in net %p\n",
+		__func__, cb_ident, net);
+
 	spin_lock(&nn->nfs_client_lock);
 	clp = idr_find(&nn->cb_ident_idr, cb_ident);
-	if (clp)
+	if (clp) {
 		atomic_inc(&clp->cl_count);
+	}
 	spin_unlock(&nn->nfs_client_lock);
+
+	if (clp)
+		dprintk("NFS: <-- %s found nfs_client = %p ({%d})\n",
+			__func__, clp, atomic_read(&clp->cl_count));
+	else
+		dprintk("NFS: <-- %s no matching cb_ident\n", __func__);
 	return clp;
 }
 
@@ -1498,6 +1524,8 @@ nfs4_find_client_sessionid(struct net *net, const struct sockaddr *addr,
 			continue;
 
 		atomic_inc(&clp->cl_count);
+		dprintk("NFS: <-- %s using nfs_client = %p ({%d})\n",
+			__func__, clp, atomic_read(&clp->cl_count));
 		spin_unlock(&nn->nfs_client_lock);
 		return clp;
 	}
@@ -2008,6 +2036,9 @@ struct nfs_server *nfs_clone_server(struct nfs_server *source,
 	server->nfs_client = source->nfs_client;
 	server->destroy = source->destroy;
 	atomic_inc(&server->nfs_client->cl_count);
+	dprintk("NFS: <-- %s using nfs_client = %p ({%d})\n",
+		__func__, server->nfs_client,
+		atomic_read(&server->nfs_client->cl_count));
 	nfs_server_copy_userdata(server, source);
 
 	server->fsid = fattr->fsid;
diff --git a/fs/nfs/idmap.c b/fs/nfs/idmap.c
index 864c51e..d8a0856 100644
--- a/fs/nfs/idmap.c
+++ b/fs/nfs/idmap.c
@@ -50,6 +50,8 @@
 #include "internal.h"
 #include "netns.h"
 
+#define NFSDBG_FACILITY		NFSDBG_CLIENT
+
 #define NFS_UINT_MAXLEN 11
 
 /* Default cache timeout is 10 minutes */
@@ -559,6 +561,8 @@ restart:
 		    ((event == RPC_PIPEFS_UMOUNT) && !cl_dentry))
 			continue;
 		atomic_inc(&clp->cl_count);
+		dprintk("NFS: <-- %s returning nfs_client = %p ({%d})\n",
+			__func__, clp, atomic_read(&clp->cl_count));
 		spin_unlock(&nn->nfs_client_lock);
 		return clp;
 	}
@@ -573,6 +577,8 @@ static int rpc_pipefs_event(struct notifier_block *nb, unsigned long event,
 	struct nfs_client *clp;
 	int error = 0;
 
+	dprintk("%s\n", __func__);
+
 	if (!try_module_get(THIS_MODULE))
 		return 0;
 


^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [PATCH 14/14] NFS: Slow down state manager after an unhandled error
  2012-07-09 15:43 [PATCH 00/14] Candidates for 3.6 Chuck Lever
                   ` (12 preceding siblings ...)
  2012-07-09 15:45 ` [PATCH 13/14] NFS: Clean up debugging messages in fs/nfs/client.c Chuck Lever
@ 2012-07-09 15:45 ` Chuck Lever
  13 siblings, 0 replies; 31+ messages in thread
From: Chuck Lever @ 2012-07-09 15:45 UTC (permalink / raw)
  To: trond.myklebust; +Cc: linux-nfs

If the state manager thread is not actually able to fully recover from
some situation, it wakes up waiters, who kick off a new state manager
thread.  Quite often the fresh invocation of the state manager is just
as unsuccessful.

This results in a livelock as the client dumps thousands of NFS
requests a second on the network in a vain attempt to recover.  Not
very friendly.

To mitigate this situation, add a delay in the state manager after
an unhandled error, so that the client sends just a few requests
every second in this case.
---

 fs/nfs/nfs4state.c |    1 +
 1 files changed, 1 insertions(+), 0 deletions(-)

diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c
index 3a8563c..a5844e1 100644
--- a/fs/nfs/nfs4state.c
+++ b/fs/nfs/nfs4state.c
@@ -2151,6 +2151,7 @@ static void nfs4_state_manager(struct nfs_client *clp)
 out_error:
 	pr_warn_ratelimited("NFS: state manager failed on NFSv4 server %s"
 			" with error %d\n", clp->cl_hostname, -status);
+	ssleep(1);
 	nfs4_end_drain_session(clp);
 	nfs4_clear_state_manager_bit(clp);
 }


^ permalink raw reply related	[flat|nested] 31+ messages in thread

* Re: [PATCH 02/14] NFS: Properly sort TEST_STATEID results
  2012-07-09 15:44 ` [PATCH 02/14] NFS: Properly sort TEST_STATEID results Chuck Lever
@ 2012-07-09 19:34   ` Myklebust, Trond
  2012-07-09 19:47     ` Chuck Lever
  0 siblings, 1 reply; 31+ messages in thread
From: Myklebust, Trond @ 2012-07-09 19:34 UTC (permalink / raw)
  To: Chuck Lever; +Cc: linux-nfs

On Mon, 2012-07-09 at 11:44 -0400, Chuck Lever wrote:
> The result of a TEST_STATEID operation can indicate a few different
> things:
> 
>   o If NFS_OK is returned, then the client can continue using the
>     state ID under test, and skip recovery.
> 
>   o RFC 5661 says that if and only if the state ID was revoked, then
>     the client must perform an explicit FREE_STATEID before trying to
>     re-open.
> 
>   o If the server doesn't recognize the state ID at all, then no
>     FREE_STATEID is needed, and the client can immediately continue
>     with open recovery.
> 
> Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
> ---
> 
>  fs/nfs/nfs4proc.c |   36 ++++++++++++++++++++++++++++++++++--
>  1 files changed, 34 insertions(+), 2 deletions(-)
> 
> diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
> index 971ec8c..60a320c 100644
> --- a/fs/nfs/nfs4proc.c
> +++ b/fs/nfs/nfs4proc.c
> @@ -1756,6 +1756,14 @@ static int nfs4_open_expired(struct nfs4_state_owner *sp, struct nfs4_state *sta
>  }
>  
>  #if defined(CONFIG_NFS_V4_1)
> +/**
> + * nfs41_check_expired_stateid - does a state ID need recovery?
> + *
> + * @state: NFSv4 open state for an inode
> + *
> + * Returns NFS_OK if recovery for this state ID is now finished.
> + * Otherwise a negative NFS4ERR value is returned.
> + */
>  static int nfs41_check_expired_stateid(struct nfs4_state *state, nfs4_stateid *stateid, unsigned int flags)
>  {
>  	int status = NFS_OK;
> @@ -1763,8 +1771,16 @@ static int nfs41_check_expired_stateid(struct nfs4_state *state, nfs4_stateid *s
>  
>  	if (state->flags & flags) {
>  		status = nfs41_test_stateid(server, stateid);
> -		if (status != NFS_OK) {
> +		switch (status) {
> +		case NFS_OK:
> +			/* server recognizes this one, don't recover */
> +			break;
> +		case -NFS4ERR_ADMIN_REVOKED:
> +		case -NFS4ERR_DELEG_REVOKED:

What about -NFS4ERR_BAD_STATEID and/or -NFS4ERR_EXPIRED?

> +			/* state was revoked, free then re-open */
>  			nfs41_free_stateid(server, stateid);
> +		default:
> +			/* anything else: just re-open it */
>  			state->flags &= ~flags;
>  		}
>  	}
> @@ -4698,6 +4714,14 @@ out:
>  }
>  
>  #if defined(CONFIG_NFS_V4_1)
> +/**
> + * nfs41_check_expired_locks - clear lock state IDs
> + *
> + * @state: NFSv4 open state for a file
> + *
> + * Returns NFS_OK if recovery for this state ID is now finished.
> + * Otherwise a negative NFS4ERR value is returned.
> + */
>  static int nfs41_check_expired_locks(struct nfs4_state *state)
>  {
>  	int status, ret = NFS_OK;
> @@ -4707,8 +4731,16 @@ static int nfs41_check_expired_locks(struct nfs4_state *state)
>  	list_for_each_entry(lsp, &state->lock_states, ls_locks) {
>  		if (lsp->ls_flags & NFS_LOCK_INITIALIZED) {
>  			status = nfs41_test_stateid(server, &lsp->ls_stateid);
> -			if (status != NFS_OK) {
> +			switch (status) {
> +			case NFS_OK:
> +				/* server recognizes this one, don't re-lock */
> +				break;
> +			case -NFS4ERR_ADMIN_REVOKED:
> +			case -NFS4ERR_DELEG_REVOKED:

Ditto

> +				/* lock was revoked, free then re-lock */
>  				nfs41_free_stateid(server, &lsp->ls_stateid);
> +			default:
> +				/* anything else: just re-lock it */
>  				lsp->ls_flags &= ~NFS_LOCK_INITIALIZED;
>  				ret = status;
>  			}
> 

-- 
Trond Myklebust
Linux NFS client maintainer

NetApp
Trond.Myklebust@netapp.com
www.netapp.com

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH 01/14] NFS: Clean up TEST_STATEID and FREE_STATEID proc error reporting
  2012-07-09 15:43 ` [PATCH 01/14] NFS: Clean up TEST_STATEID and FREE_STATEID proc error reporting Chuck Lever
@ 2012-07-09 19:36   ` Myklebust, Trond
  2012-07-09 19:51     ` Chuck Lever
  0 siblings, 1 reply; 31+ messages in thread
From: Myklebust, Trond @ 2012-07-09 19:36 UTC (permalink / raw)
  To: Chuck Lever; +Cc: linux-nfs

On Mon, 2012-07-09 at 11:43 -0400, Chuck Lever wrote:
> The TEST_STATEID and FREE_STATEID operations can return
> NFS4ERR_BAD_STATEID, NFS4ERR_OLD_STATEID, or NFS4ERR_DEADSESSION.
> These error values should not be passed to nfs4_handle_exception(),
> since that will recursively kick off state recovery, resulting in a
> deadlock.  The reason to continue using nfs4_handle_exception() is
> to deal with NFS4ERR_DELAY.
> 
> Moreover, specifically when the TEST_STATEID operation returns
> NFS4_OK, res.status can contain one of these errors.
> _nfs41_test_stateid() replaces NFS4_OK with the value in res.status,
> which is then returned to callers.
> 
> But res.status is not passed through nfs4_stat_to_errno(), and thus is
> a positive NFS4ERR value.  Currently callers are only interested in
> !NFS4_OK, and nfs4_handle_exception() ignores positive values, so this
> oversight hasn't bitten us so far.
> 
> Bryan agrees the original intent was to return res.status as a
> negative NFS4ERR value to callers of nfs41_test_stateid(), as long as
> no state ID recovery is attempted.

None of this describes a cleanup... AFAICS it is all a bugfix

> As a finishing touch, add appropriate documenting comments and some
> debugging printk's.

This should be a separate patch so that we can send the bugfix part to
stable@vger if needed...

> Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
> ---

-- 
Trond Myklebust
Linux NFS client maintainer

NetApp
Trond.Myklebust@netapp.com
www.netapp.com

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH 02/14] NFS: Properly sort TEST_STATEID results
  2012-07-09 19:34   ` Myklebust, Trond
@ 2012-07-09 19:47     ` Chuck Lever
  0 siblings, 0 replies; 31+ messages in thread
From: Chuck Lever @ 2012-07-09 19:47 UTC (permalink / raw)
  To: Myklebust, Trond; +Cc: linux-nfs


On Jul 9, 2012, at 3:34 PM, Myklebust, Trond wrote:

> On Mon, 2012-07-09 at 11:44 -0400, Chuck Lever wrote:
>> The result of a TEST_STATEID operation can indicate a few different
>> things:
>> 
>>  o If NFS_OK is returned, then the client can continue using the
>>    state ID under test, and skip recovery.
>> 
>>  o RFC 5661 says that if and only if the state ID was revoked, then
>>    the client must perform an explicit FREE_STATEID before trying to
>>    re-open.
>> 
>>  o If the server doesn't recognize the state ID at all, then no
>>    FREE_STATEID is needed, and the client can immediately continue
>>    with open recovery.
>> 
>> Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
>> ---
>> 
>> fs/nfs/nfs4proc.c |   36 ++++++++++++++++++++++++++++++++++--
>> 1 files changed, 34 insertions(+), 2 deletions(-)
>> 
>> diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
>> index 971ec8c..60a320c 100644
>> --- a/fs/nfs/nfs4proc.c
>> +++ b/fs/nfs/nfs4proc.c
>> @@ -1756,6 +1756,14 @@ static int nfs4_open_expired(struct nfs4_state_owner *sp, struct nfs4_state *sta
>> }
>> 
>> #if defined(CONFIG_NFS_V4_1)
>> +/**
>> + * nfs41_check_expired_stateid - does a state ID need recovery?
>> + *
>> + * @state: NFSv4 open state for an inode
>> + *
>> + * Returns NFS_OK if recovery for this state ID is now finished.
>> + * Otherwise a negative NFS4ERR value is returned.
>> + */
>> static int nfs41_check_expired_stateid(struct nfs4_state *state, nfs4_stateid *stateid, unsigned int flags)
>> {
>> 	int status = NFS_OK;
>> @@ -1763,8 +1771,16 @@ static int nfs41_check_expired_stateid(struct nfs4_state *state, nfs4_stateid *s
>> 
>> 	if (state->flags & flags) {
>> 		status = nfs41_test_stateid(server, stateid);
>> -		if (status != NFS_OK) {
>> +		switch (status) {
>> +		case NFS_OK:
>> +			/* server recognizes this one, don't recover */
>> +			break;
>> +		case -NFS4ERR_ADMIN_REVOKED:
>> +		case -NFS4ERR_DELEG_REVOKED:
> 
> What about -NFS4ERR_BAD_STATEID and/or -NFS4ERR_EXPIRED?

My impression from RFC 5661 was that no FREE_STATEID was needed in those cases.  Those errors would be handled by the default arm.

I don't pretend to be an expert on this, though.  What is your thought?

> 
>> +			/* state was revoked, free then re-open */
>> 			nfs41_free_stateid(server, stateid);
>> +		default:
>> +			/* anything else: just re-open it */
>> 			state->flags &= ~flags;
>> 		}
>> 	}
>> @@ -4698,6 +4714,14 @@ out:
>> }
>> 
>> #if defined(CONFIG_NFS_V4_1)
>> +/**
>> + * nfs41_check_expired_locks - clear lock state IDs
>> + *
>> + * @state: NFSv4 open state for a file
>> + *
>> + * Returns NFS_OK if recovery for this state ID is now finished.
>> + * Otherwise a negative NFS4ERR value is returned.
>> + */
>> static int nfs41_check_expired_locks(struct nfs4_state *state)
>> {
>> 	int status, ret = NFS_OK;
>> @@ -4707,8 +4731,16 @@ static int nfs41_check_expired_locks(struct nfs4_state *state)
>> 	list_for_each_entry(lsp, &state->lock_states, ls_locks) {
>> 		if (lsp->ls_flags & NFS_LOCK_INITIALIZED) {
>> 			status = nfs41_test_stateid(server, &lsp->ls_stateid);
>> -			if (status != NFS_OK) {
>> +			switch (status) {
>> +			case NFS_OK:
>> +				/* server recognizes this one, don't re-lock */
>> +				break;
>> +			case -NFS4ERR_ADMIN_REVOKED:
>> +			case -NFS4ERR_DELEG_REVOKED:
> 
> Ditto
> 
>> +				/* lock was revoked, free then re-lock */
>> 				nfs41_free_stateid(server, &lsp->ls_stateid);
>> +			default:
>> +				/* anything else: just re-lock it */
>> 				lsp->ls_flags &= ~NFS_LOCK_INITIALIZED;
>> 				ret = status;
>> 			}
>> 
> 
> -- 
> Trond Myklebust
> Linux NFS client maintainer
> 
> NetApp
> Trond.Myklebust@netapp.com
> www.netapp.com
> 
> --
> To unsubscribe from this list: send the line "unsubscribe linux-nfs" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> 

-- 
Chuck Lever
chuck[dot]lever[at]oracle[dot]com





^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH 01/14] NFS: Clean up TEST_STATEID and FREE_STATEID proc error reporting
  2012-07-09 19:36   ` Myklebust, Trond
@ 2012-07-09 19:51     ` Chuck Lever
  0 siblings, 0 replies; 31+ messages in thread
From: Chuck Lever @ 2012-07-09 19:51 UTC (permalink / raw)
  To: Myklebust, Trond; +Cc: linux-nfs


On Jul 9, 2012, at 3:36 PM, Myklebust, Trond wrote:

> On Mon, 2012-07-09 at 11:43 -0400, Chuck Lever wrote:
>> The TEST_STATEID and FREE_STATEID operations can return
>> NFS4ERR_BAD_STATEID, NFS4ERR_OLD_STATEID, or NFS4ERR_DEADSESSION.
>> These error values should not be passed to nfs4_handle_exception(),
>> since that will recursively kick off state recovery, resulting in a
>> deadlock.  The reason to continue using nfs4_handle_exception() is
>> to deal with NFS4ERR_DELAY.
>> 
>> Moreover, specifically when the TEST_STATEID operation returns
>> NFS4_OK, res.status can contain one of these errors.
>> _nfs41_test_stateid() replaces NFS4_OK with the value in res.status,
>> which is then returned to callers.
>> 
>> But res.status is not passed through nfs4_stat_to_errno(), and thus is
>> a positive NFS4ERR value.  Currently callers are only interested in
>> !NFS4_OK, and nfs4_handle_exception() ignores positive values, so this
>> oversight hasn't bitten us so far.
>> 
>> Bryan agrees the original intent was to return res.status as a
>> negative NFS4ERR value to callers of nfs41_test_stateid(), as long as
>> no state ID recovery is attempted.
> 
> None of this describes a cleanup... AFAICS it is all a bugfix

IIRC behavior isn't supposed to change, so I don't think this should go to -stable.  The subsequent patches rely on the error path being corrected, though.

>> As a finishing touch, add appropriate documenting comments and some
>> debugging printk's.
> 
> This should be a separate patch so that we can send the bugfix part to
> stable@vger if needed...

I can redrive as two patches, but we should be careful about back porting, IMO.

-- 
Chuck Lever
chuck[dot]lever[at]oracle[dot]com





^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH 03/14] NFS: State reclaim clears OPEN and LOCK state
  2012-07-09 15:44 ` [PATCH 03/14] NFS: State reclaim clears OPEN and LOCK state Chuck Lever
@ 2012-07-09 20:03   ` Myklebust, Trond
  2012-07-09 20:15     ` Chuck Lever
  0 siblings, 1 reply; 31+ messages in thread
From: Myklebust, Trond @ 2012-07-09 20:03 UTC (permalink / raw)
  To: Chuck Lever; +Cc: linux-nfs

On Mon, 2012-07-09 at 11:44 -0400, Chuck Lever wrote:
> Before beginning state recovery, the state manager zaps open and lock
> state for each open file, thus the "state->flags & flags" test in
> nfs41_{open,lock}_expired() always fails during reclaim.  But open
> recovery is still needed for these files.
> 
> To force a call to nfs4_open_expired(), change the default return
> value for nfs41_check_expired_stateid() to force open recovery, and
> the default return value for nfs41_check_locks() to force lock
> recovery, if the requested flags are clear.  Fix suggested by Bryan
> Schumaker.

This makes absolutely no sense...

The point of these functions should be to test if the stateid is still
valid. If it is, then we need no recovery.
If the stateid is _not_ valid, then we free it, and then decide if
recovery is needed. The only exception to that rule is if TEST_STATEID
returns NFS4ERR_BAD_STATEID, (in which case we don't need to free the
stateid but just recover immediately).

IOW: The state->flags and lsp->ls_flags tests should just be removed.

-- 
Trond Myklebust
Linux NFS client maintainer

NetApp
Trond.Myklebust@netapp.com
www.netapp.com
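
A minimal sketch of the decision flow described above (this is not the
code as merged; the variables are those of nfs41_check_expired_stateid()
from the patch under discussion):

	status = nfs41_test_stateid(server, stateid);
	switch (status) {
	case NFS_OK:
		/* stateid is still valid: nothing to recover */
		break;
	case -NFS4ERR_BAD_STATEID:
		/* server has no record of this stateid: recover
		 * immediately, no FREE_STATEID needed */
		state->flags &= ~flags;
		break;
	default:
		/* otherwise free the stateid, then decide on recovery */
		nfs41_free_stateid(server, stateid);
		state->flags &= ~flags;
		break;
	}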

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH 06/14] NFS: Treat NFS4ERR_CLID_INUSE as a fatal error
  2012-07-09 15:44 ` [PATCH 06/14] NFS: Treat NFS4ERR_CLID_INUSE as a fatal error Chuck Lever
@ 2012-07-09 20:10   ` Myklebust, Trond
  2012-07-09 20:12     ` Chuck Lever
  0 siblings, 1 reply; 31+ messages in thread
From: Myklebust, Trond @ 2012-07-09 20:10 UTC (permalink / raw)
  To: Chuck Lever; +Cc: linux-nfs

On Mon, 2012-07-09 at 11:44 -0400, Chuck Lever wrote:
> For NFSv4 minor version 0, currently the cl_id_uniquifier allows the
> Linux client to generate a unique nfs_client_id4 string whenever a
> server replies with NFS4ERR_CLID_INUSE.
> 
> This implementation seems to be based on a flawed reading of RFC
> 3530.  NFS4ERR_CLID_INUSE actually means that the client has presented
> this nfs_client_id4 string with a different principal at some time in
> the past, and that lease is still in use on the server.
> 
> For a Linux client this might be rather difficult to achieve: the
> authentication flavor is named right in the nfs_client_id4.id
> string.  If we change flavors, we change strings automatically.
> 
> So, practically speaking, NFS4ERR_CLID_INUSE means there is some other
> client using our string.  There is not much that can be done to
> recover automatically.  Let's make it a permanent error.
> 
> Remove the recovery logic in nfs4_proc_setclientid(), and remove the
> cl_id_uniquifier field from the nfs_client data structure.  And,
> remove the authentication flavor from the nfs_client_id4 string.
> 
> Keeping the authentication flavor in the nfs_client_id4.id string
> means that we could have a separate lease for each authentication
> flavor used by mounts on the client.  But we want just one lease for
> all the mounts on this client.
> 
> Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
> ---
> 
>  fs/nfs/nfs4proc.c         |   54 ++++++++++++++++++---------------------------
>  fs/nfs/nfs4state.c        |    7 +++++-
>  include/linux/nfs_fs_sb.h |    3 +--
>  3 files changed, 29 insertions(+), 35 deletions(-)
> 
> diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
> index 7adb705..63a3cf0 100644
> --- a/fs/nfs/nfs4proc.c
> +++ b/fs/nfs/nfs4proc.c
> @@ -259,7 +259,12 @@ static int nfs4_wait_clnt_recover(struct nfs_client *clp)
>  
>  	res = wait_on_bit(&clp->cl_state, NFS4CLNT_MANAGER_RUNNING,
>  			nfs_wait_bit_killable, TASK_KILLABLE);
> -	return res;
> +	if (res)
> +		return res;
> +
> +	if (clp->cl_cons_state < 0)
> +		return clp->cl_cons_state;
> +	return 0;
>  }

A) This should be a separate patch.

B) Why is it needed in the first place? All places that call
nfs4_wait_clnt_recover lie outside the mount code path.


>  static int nfs4_delay(struct rpc_clnt *clnt, long *timeout)
> @@ -4038,42 +4043,28 @@ int nfs4_proc_setclientid(struct nfs_client *clp, u32 program,
>  		.rpc_resp = res,
>  		.rpc_cred = cred,
>  	};
> -	int loop = 0;
> -	int status;
>  
> +	/* nfs_client_id4 */
>  	nfs4_init_boot_verifier(clp, &sc_verifier);
> -
> -	for(;;) {
> -		rcu_read_lock();
> -		setclientid.sc_name_len = scnprintf(setclientid.sc_name,
> -				sizeof(setclientid.sc_name), "%s/%s %s %s %u",
> -				clp->cl_ipaddr,
> -				rpc_peeraddr2str(clp->cl_rpcclient,
> -							RPC_DISPLAY_ADDR),
> -				rpc_peeraddr2str(clp->cl_rpcclient,
> -							RPC_DISPLAY_PROTO),
> -				clp->cl_rpcclient->cl_auth->au_ops->au_name,
> -				clp->cl_id_uniquifier);
> -		setclientid.sc_netid_len = scnprintf(setclientid.sc_netid,
> +	rcu_read_lock();
> +	setclientid.sc_name_len = scnprintf(setclientid.sc_name,
> +			sizeof(setclientid.sc_name), "%s/%s %s",
> +			clp->cl_ipaddr,
> +			rpc_peeraddr2str(clp->cl_rpcclient,
> +						RPC_DISPLAY_ADDR),
> +			rpc_peeraddr2str(clp->cl_rpcclient,
> +						RPC_DISPLAY_PROTO));
> +	/* cb_client4 */
> +	setclientid.sc_netid_len = scnprintf(setclientid.sc_netid,
>  				sizeof(setclientid.sc_netid),
>  				rpc_peeraddr2str(clp->cl_rpcclient,
>  							RPC_DISPLAY_NETID));
> -		setclientid.sc_uaddr_len = scnprintf(setclientid.sc_uaddr,
> +	rcu_read_unlock();
> +	setclientid.sc_uaddr_len = scnprintf(setclientid.sc_uaddr,
>  				sizeof(setclientid.sc_uaddr), "%s.%u.%u",
>  				clp->cl_ipaddr, port >> 8, port & 255);
> -		rcu_read_unlock();
>  
> -		status = rpc_call_sync(clp->cl_rpcclient, &msg, RPC_TASK_TIMEOUT);
> -		if (status != -NFS4ERR_CLID_INUSE)
> -			break;
> -		if (loop != 0) {
> -			++clp->cl_id_uniquifier;
> -			break;
> -		}
> -		++loop;
> -		ssleep(clp->cl_lease_time / HZ + 1);
> -	}
> -	return status;
> +	return rpc_call_sync(clp->cl_rpcclient, &msg, RPC_TASK_TIMEOUT);
>  }
>  
>  int nfs4_proc_setclientid_confirm(struct nfs_client *clp,
> @@ -5275,10 +5266,9 @@ int nfs4_proc_exchange_id(struct nfs_client *clp, struct rpc_cred *cred)
>  	nfs4_init_boot_verifier(clp, &verifier);
>  
>  	args.id_len = scnprintf(args.id, sizeof(args.id),
> -				"%s/%s/%u",
> +				"%s/%s",
>  				clp->cl_ipaddr,
> -				clp->cl_rpcclient->cl_nodename,
> -				clp->cl_rpcclient->cl_auth->au_flavor);
> +				clp->cl_rpcclient->cl_nodename);
>  
>  	res.server_owner = kzalloc(sizeof(struct nfs41_server_owner),
>  					GFP_NOFS);
> diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c
> index f38300e..a2c1153 100644
> --- a/fs/nfs/nfs4state.c
> +++ b/fs/nfs/nfs4state.c
> @@ -1606,10 +1606,15 @@ static int nfs4_handle_reclaim_lease_error(struct nfs_client *clp, int status)
>  			return -ESERVERFAULT;
>  		/* Lease confirmation error: retry after purging the lease */
>  		ssleep(1);
> -	case -NFS4ERR_CLID_INUSE:
>  	case -NFS4ERR_STALE_CLIENTID:
>  		clear_bit(NFS4CLNT_LEASE_CONFIRM, &clp->cl_state);
>  		break;
> +	case -NFS4ERR_CLID_INUSE:
> +		printk(KERN_ERR "NFS: Server %s: client ID already in use\n",
> +			clp->cl_hostname);
> +		nfs_mark_client_ready(clp, -EPERM);
> +		clear_bit(NFS4CLNT_LEASE_CONFIRM, &clp->cl_state);
> +		return -EPERM;

How do we guarantee that this won't occur on an existing mount?

>  	case -EACCES:
>  		if (clp->cl_machine_cred == NULL)
>  			return -EACCES;
> diff --git a/include/linux/nfs_fs_sb.h b/include/linux/nfs_fs_sb.h
> index f58325a..6532765 100644
> --- a/include/linux/nfs_fs_sb.h
> +++ b/include/linux/nfs_fs_sb.h
> @@ -69,10 +69,9 @@ struct nfs_client {
>  	struct idmap *		cl_idmap;
>  
>  	/* Our own IP address, as a null-terminated string.
> -	 * This is used to generate the clientid, and the callback address.
> +	 * This is used to generate the mv0 callback address.
>  	 */
>  	char			cl_ipaddr[48];
> -	unsigned char		cl_id_uniquifier;
>  	u32			cl_cb_ident;	/* v4.0 callback identifier */
>  	const struct nfs4_minor_version_ops *cl_mvops;
>  
> 

-- 
Trond Myklebust
Linux NFS client maintainer

NetApp
Trond.Myklebust@netapp.com
www.netapp.com

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH 06/14] NFS: Treat NFS4ERR_CLID_INUSE as a fatal error
  2012-07-09 20:10   ` Myklebust, Trond
@ 2012-07-09 20:12     ` Chuck Lever
  2012-07-09 20:25       ` Chuck Lever
  2012-07-09 20:34       ` Myklebust, Trond
  0 siblings, 2 replies; 31+ messages in thread
From: Chuck Lever @ 2012-07-09 20:12 UTC (permalink / raw)
  To: Myklebust, Trond; +Cc: linux-nfs


On Jul 9, 2012, at 4:10 PM, Myklebust, Trond wrote:

> On Mon, 2012-07-09 at 11:44 -0400, Chuck Lever wrote:
>> For NFSv4 minor version 0, currently the cl_id_uniquifier allows the
>> Linux client to generate a unique nfs_client_id4 string whenever a
>> server replies with NFS4ERR_CLID_INUSE.
>> 
>> This implementation seems to be based on a flawed reading of RFC
>> 3530.  NFS4ERR_CLID_INUSE actually means that the client has presented
>> this nfs_client_id4 string with a different principal at some time in
>> the past, and that lease is still in use on the server.
>> 
>> For a Linux client this might be rather difficult to achieve: the
>> authentication flavor is named right in the nfs_client_id4.id
>> string.  If we change flavors, we change strings automatically.
>> 
>> So, practically speaking, NFS4ERR_CLID_INUSE means there is some other
>> client using our string.  There is not much that can be done to
>> recover automatically.  Let's make it a permanent error.
>> 
>> Remove the recovery logic in nfs4_proc_setclientid(), and remove the
>> cl_id_uniquifier field from the nfs_client data structure.  And,
>> remove the authentication flavor from the nfs_client_id4 string.
>> 
>> Keeping the authentication flavor in the nfs_client_id4.id string
>> means that we could have a separate lease for each authentication
>> flavor used by mounts on the client.  But we want just one lease for
>> all the mounts on this client.
>> 
>> Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
>> ---
>> 
>> fs/nfs/nfs4proc.c         |   54 ++++++++++++++++++---------------------------
>> fs/nfs/nfs4state.c        |    7 +++++-
>> include/linux/nfs_fs_sb.h |    3 +--
>> 3 files changed, 29 insertions(+), 35 deletions(-)
>> 
>> diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
>> index 7adb705..63a3cf0 100644
>> --- a/fs/nfs/nfs4proc.c
>> +++ b/fs/nfs/nfs4proc.c
>> @@ -259,7 +259,12 @@ static int nfs4_wait_clnt_recover(struct nfs_client *clp)
>> 
>> 	res = wait_on_bit(&clp->cl_state, NFS4CLNT_MANAGER_RUNNING,
>> 			nfs_wait_bit_killable, TASK_KILLABLE);
>> -	return res;
>> +	if (res)
>> +		return res;
>> +
>> +	if (clp->cl_cons_state < 0)
>> +		return clp->cl_cons_state;
>> +	return 0;
>> }
> 
> A) This should be a separate patch.

OK.

> 
> B) Why is it needed in the first place? All places that call
> nfs4_wait_clnt_recover lie outside the mount code path.

The problem is that this is no longer the case once we have trunking discovery in the mount path.

We discussed this.  This is what you told me to do.

>> static int nfs4_delay(struct rpc_clnt *clnt, long *timeout)
>> @@ -4038,42 +4043,28 @@ int nfs4_proc_setclientid(struct nfs_client *clp, u32 program,
>> 		.rpc_resp = res,
>> 		.rpc_cred = cred,
>> 	};
>> -	int loop = 0;
>> -	int status;
>> 
>> +	/* nfs_client_id4 */
>> 	nfs4_init_boot_verifier(clp, &sc_verifier);
>> -
>> -	for(;;) {
>> -		rcu_read_lock();
>> -		setclientid.sc_name_len = scnprintf(setclientid.sc_name,
>> -				sizeof(setclientid.sc_name), "%s/%s %s %s %u",
>> -				clp->cl_ipaddr,
>> -				rpc_peeraddr2str(clp->cl_rpcclient,
>> -							RPC_DISPLAY_ADDR),
>> -				rpc_peeraddr2str(clp->cl_rpcclient,
>> -							RPC_DISPLAY_PROTO),
>> -				clp->cl_rpcclient->cl_auth->au_ops->au_name,
>> -				clp->cl_id_uniquifier);
>> -		setclientid.sc_netid_len = scnprintf(setclientid.sc_netid,
>> +	rcu_read_lock();
>> +	setclientid.sc_name_len = scnprintf(setclientid.sc_name,
>> +			sizeof(setclientid.sc_name), "%s/%s %s",
>> +			clp->cl_ipaddr,
>> +			rpc_peeraddr2str(clp->cl_rpcclient,
>> +						RPC_DISPLAY_ADDR),
>> +			rpc_peeraddr2str(clp->cl_rpcclient,
>> +						RPC_DISPLAY_PROTO));
>> +	/* cb_client4 */
>> +	setclientid.sc_netid_len = scnprintf(setclientid.sc_netid,
>> 				sizeof(setclientid.sc_netid),
>> 				rpc_peeraddr2str(clp->cl_rpcclient,
>> 							RPC_DISPLAY_NETID));
>> -		setclientid.sc_uaddr_len = scnprintf(setclientid.sc_uaddr,
>> +	rcu_read_unlock();
>> +	setclientid.sc_uaddr_len = scnprintf(setclientid.sc_uaddr,
>> 				sizeof(setclientid.sc_uaddr), "%s.%u.%u",
>> 				clp->cl_ipaddr, port >> 8, port & 255);
>> -		rcu_read_unlock();
>> 
>> -		status = rpc_call_sync(clp->cl_rpcclient, &msg, RPC_TASK_TIMEOUT);
>> -		if (status != -NFS4ERR_CLID_INUSE)
>> -			break;
>> -		if (loop != 0) {
>> -			++clp->cl_id_uniquifier;
>> -			break;
>> -		}
>> -		++loop;
>> -		ssleep(clp->cl_lease_time / HZ + 1);
>> -	}
>> -	return status;
>> +	return rpc_call_sync(clp->cl_rpcclient, &msg, RPC_TASK_TIMEOUT);
>> }
>> 
>> int nfs4_proc_setclientid_confirm(struct nfs_client *clp,
>> @@ -5275,10 +5266,9 @@ int nfs4_proc_exchange_id(struct nfs_client *clp, struct rpc_cred *cred)
>> 	nfs4_init_boot_verifier(clp, &verifier);
>> 
>> 	args.id_len = scnprintf(args.id, sizeof(args.id),
>> -				"%s/%s/%u",
>> +				"%s/%s",
>> 				clp->cl_ipaddr,
>> -				clp->cl_rpcclient->cl_nodename,
>> -				clp->cl_rpcclient->cl_auth->au_flavor);
>> +				clp->cl_rpcclient->cl_nodename);
>> 
>> 	res.server_owner = kzalloc(sizeof(struct nfs41_server_owner),
>> 					GFP_NOFS);
>> diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c
>> index f38300e..a2c1153 100644
>> --- a/fs/nfs/nfs4state.c
>> +++ b/fs/nfs/nfs4state.c
>> @@ -1606,10 +1606,15 @@ static int nfs4_handle_reclaim_lease_error(struct nfs_client *clp, int status)
>> 			return -ESERVERFAULT;
>> 		/* Lease confirmation error: retry after purging the lease */
>> 		ssleep(1);
>> -	case -NFS4ERR_CLID_INUSE:
>> 	case -NFS4ERR_STALE_CLIENTID:
>> 		clear_bit(NFS4CLNT_LEASE_CONFIRM, &clp->cl_state);
>> 		break;
>> +	case -NFS4ERR_CLID_INUSE:
>> +		printk(KERN_ERR "NFS: Server %s: client ID already in use\n",
>> +			clp->cl_hostname);
>> +		nfs_mark_client_ready(clp, -EPERM);
>> +		clear_bit(NFS4CLNT_LEASE_CONFIRM, &clp->cl_state);
>> +		return -EPERM;
> 
> How do we guarantee that this won't occur on an existing mount?
> 
>> 	case -EACCES:
>> 		if (clp->cl_machine_cred == NULL)
>> 			return -EACCES;
>> diff --git a/include/linux/nfs_fs_sb.h b/include/linux/nfs_fs_sb.h
>> index f58325a..6532765 100644
>> --- a/include/linux/nfs_fs_sb.h
>> +++ b/include/linux/nfs_fs_sb.h
>> @@ -69,10 +69,9 @@ struct nfs_client {
>> 	struct idmap *		cl_idmap;
>> 
>> 	/* Our own IP address, as a null-terminated string.
>> -	 * This is used to generate the clientid, and the callback address.
>> +	 * This is used to generate the mv0 callback address.
>> 	 */
>> 	char			cl_ipaddr[48];
>> -	unsigned char		cl_id_uniquifier;
>> 	u32			cl_cb_ident;	/* v4.0 callback identifier */
>> 	const struct nfs4_minor_version_ops *cl_mvops;
>> 
>> 
> 
> -- 
> Trond Myklebust
> Linux NFS client maintainer
> 
> NetApp
> Trond.Myklebust@netapp.com
> www.netapp.com
> 
> --
> To unsubscribe from this list: send the line "unsubscribe linux-nfs" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> 

-- 
Chuck Lever
chuck[dot]lever[at]oracle[dot]com





^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH 03/14] NFS: State reclaim clears OPEN and LOCK state
  2012-07-09 20:03   ` Myklebust, Trond
@ 2012-07-09 20:15     ` Chuck Lever
  2012-07-09 20:19       ` Chuck Lever
  0 siblings, 1 reply; 31+ messages in thread
From: Chuck Lever @ 2012-07-09 20:15 UTC (permalink / raw)
  To: Trond Myklebust, Bryan Schumaker; +Cc: linux-nfs


On Jul 9, 2012, at 4:03 PM, Myklebust, Trond wrote:

> On Mon, 2012-07-09 at 11:44 -0400, Chuck Lever wrote:
>> Before beginning state recovery, the state manager zaps open and lock
>> state for each open file, thus the "state->flags & flags" test in
>> nfs41_{open,lock}_expired() always fails during reclaim.  But open
>> recovery is still needed for these files.
>> 
>> To force a call to nfs4_open_expired(), change the default return
>> value for nfs41_check_expired_stateid() to force open recovery, and
>> the default return value for nfs41_check_locks() to force lock
>> recovery, if the requested flags are clear.  Fix suggested by Bryan
>> Schumaker.
> 
> This makes absolutely no sense...
> 
> The point of these functions should be to test if the stateid is still
> valid. If it is, then we need no recovery.
> If the stateid is _not_ valid, then we free it, and then decide if
> recovery is needed. The only exception to that rule is if TEST_STATEID
> returns NFS4ERR_BAD_STATEID, (in which case we don't need to free the
> stateid but just recover immediately).

So you'd like to change the sense of the switch statement in my earlier patch:

   NFS4_OK:
     do nothing
   NFS4ERR_BAD_STATEID:
     just open
   default:
     free the state ID, then open

I still don't think we need a FREE_STATEID call for NFS4ERR_EXPIRED, for example.  It seems to me the only cases where FREE_STATEID is needed are the _REVOKED cases.
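
Spelled out, that switch would look something like the sketch below.
This is a sketch only, assuming the nfs41_check_expired_stateid()
context: sp, state, server, and stateid in scope, "status" holding the
TEST_STATEID result, and the existing nfs4_open_expired() and
nfs41_free_stateid() helpers doing the recovery work:

	switch (status) {
	case NFS4_OK:
		/* Server still recognizes this stateid: nothing to do. */
		return 0;
	case -NFS4ERR_BAD_STATEID:
		/* Server has no record of it: skip FREE_STATEID. */
		break;
	default:
		/* _REVOKED and similar: release the server's record,
		 * then recover. */
		nfs41_free_stateid(server, stateid);
	}
	return nfs4_open_expired(sp, state);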

> IOW: The state->flags and lsp->ls_flags tests should just be removed.

Bryan had a reason to leave those tests in, but I don't remember what it was.

> 
> -- 
> Trond Myklebust
> Linux NFS client maintainer
> 
> NetApp
> Trond.Myklebust@netapp.com
> www.netapp.com
> 
> --
> To unsubscribe from this list: send the line "unsubscribe linux-nfs" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html

-- 
Chuck Lever
chuck[dot]lever[at]oracle[dot]com





^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH 03/14] NFS: State reclaim clears OPEN and LOCK state
  2012-07-09 20:15     ` Chuck Lever
@ 2012-07-09 20:19       ` Chuck Lever
  2012-07-09 20:35         ` Myklebust, Trond
  0 siblings, 1 reply; 31+ messages in thread
From: Chuck Lever @ 2012-07-09 20:19 UTC (permalink / raw)
  To: Trond Myklebust, Bryan Schumaker; +Cc: linux-nfs


On Jul 9, 2012, at 4:15 PM, Chuck Lever wrote:

> 
> On Jul 9, 2012, at 4:03 PM, Myklebust, Trond wrote:
> 
>> On Mon, 2012-07-09 at 11:44 -0400, Chuck Lever wrote:
>>> Before beginning state recovery, the state manager zaps open and lock
>>> state for each open file, thus the "state->flags & flags" test in
>>> nfs41_{open,lock}_expired() always fails during reclaim.  But open
>>> recovery is still needed for these files.
>>> 
>>> To force a call to nfs4_open_expired(), change the default return
>>> value for nfs41_check_expired_stateid() to force open recovery, and
>>> the default return value for nfs41_check_locks() to force lock
>>> recovery, if the requested flags are clear.  Fix suggested by Bryan
>>> Schumaker.
>> 
>> This makes absolutely no sense...
>> 
>> The point of these functions should be to test if the stateid is still
>> valid. If it is, then we need no recovery.
>> If the stateid is _not_ valid, then we free it, and then decide if
>> recovery is needed. The only exception to that rule is if TEST_STATEID
>> returns NFS4ERR_BAD_STATEID, (in which case we don't need to free the
>> stateid but just recover immediately).
> 
> So you'd like to change the sense of the switch statement in my earlier patch:
> 
>   NFS4_OK:
>     do nothing
>   NFS4ERR_BAD_STATEID:
>     just open
>   default:
>     free the state ID, then open
> 
> I still don't think we need a FREE_STATEID call for NFS4ERR_EXPIRED, for example.  It seems to me the only cases where FREE_STATEID is needed are the _REVOKED cases.
> 
>> IOW: The state->flags and lsp->ls_flags tests should just be removed.
> 
> Bryan had a reason to leave those tests in, but I don't remember what it was.

Ah.

I think it was that if those flags are set, then the client assumes the server still has that state, thus no recovery actions should be done.
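
For reference, the guard in question looks roughly like this
(paraphrasing the 3.5-era nfs41_check_expired_stateid(); the point is
that TEST_STATEID is only sent while the corresponding flag is still
set):

	if (state->flags & flags) {
		status = nfs41_test_stateid(server, stateid);
		if (status != NFS_OK) {
			/* stateid no longer valid on the server */
			nfs41_free_stateid(server, stateid);
			state->flags &= ~flags;
		}
	}
	return status;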

-- 
Chuck Lever
chuck[dot]lever[at]oracle[dot]com





^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH 06/14] NFS: Treat NFS4ERR_CLID_INUSE as a fatal error
  2012-07-09 20:12     ` Chuck Lever
@ 2012-07-09 20:25       ` Chuck Lever
  2012-07-09 20:34       ` Myklebust, Trond
  1 sibling, 0 replies; 31+ messages in thread
From: Chuck Lever @ 2012-07-09 20:25 UTC (permalink / raw)
  To: Myklebust, Trond; +Cc: linux-nfs


On Jul 9, 2012, at 4:12 PM, Chuck Lever wrote:

> 
> On Jul 9, 2012, at 4:10 PM, Myklebust, Trond wrote:
> 
>> On Mon, 2012-07-09 at 11:44 -0400, Chuck Lever wrote:
>>> For NFSv4 minor version 0, currently the cl_id_uniquifier allows the
>>> Linux client to generate a unique nfs_client_id4 string whenever a
>>> server replies with NFS4ERR_CLID_INUSE.
>>> 
>>> This implementation seems to be based on a flawed reading of RFC
>>> 3530.  NFS4ERR_CLID_INUSE actually means that the client has presented
>>> this nfs_client_id4 string with a different principal at some time in
>>> the past, and that lease is still in use on the server.
>>> 
>>> For a Linux client this might be rather difficult to achieve: the
>>> authentication flavor is named right in the nfs_client_id4.id
>>> string.  If we change flavors, we change strings automatically.
>>> 
>>> So, practically speaking, NFS4ERR_CLID_INUSE means there is some other
>>> client using our string.  There is not much that can be done to
>>> recover automatically.  Let's make it a permanent error.
>>> 
>>> Remove the recovery logic in nfs4_proc_setclientid(), and remove the
>>> cl_id_uniquifier field from the nfs_client data structure.  And,
>>> remove the authentication flavor from the nfs_client_id4 string.
>>> 
>>> Keeping the authentication flavor in the nfs_client_id4.id string
>>> means that we could have a separate lease for each authentication
>>> flavor used by mounts on the client.  But we want just one lease for
>>> all the mounts on this client.
>>> 
>>> Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
>>> ---
>>> 
>>> fs/nfs/nfs4proc.c         |   54 ++++++++++++++++++---------------------------
>>> fs/nfs/nfs4state.c        |    7 +++++-
>>> include/linux/nfs_fs_sb.h |    3 +--
>>> 3 files changed, 29 insertions(+), 35 deletions(-)
>>> 
>>> diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
>>> index 7adb705..63a3cf0 100644
>>> --- a/fs/nfs/nfs4proc.c
>>> +++ b/fs/nfs/nfs4proc.c
>>> @@ -259,7 +259,12 @@ static int nfs4_wait_clnt_recover(struct nfs_client *clp)
>>> 
>>> 	res = wait_on_bit(&clp->cl_state, NFS4CLNT_MANAGER_RUNNING,
>>> 			nfs_wait_bit_killable, TASK_KILLABLE);
>>> -	return res;
>>> +	if (res)
>>> +		return res;
>>> +
>>> +	if (clp->cl_cons_state < 0)
>>> +		return clp->cl_cons_state;
>>> +	return 0;
>>> }
>> 
>> A) This should be a separate patch.
> 
> OK.
> 
>> 
>> B) Why is it needed in the first place? All places that call
>> nfs4_wait_clnt_recover lie outside the mount code path.
> 
> The problem is that this is no longer the case once we have trunking discovery in the mount path.

Alright, that's not the problem.  The problem is we decided to make CLID_INUSE a permanent error during state recovery.  If the state manager doesn't cause waiting operations to fail, then we get an infinite loop, right?
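
Roughly, the loop in question (a simplified sketch of a caller like
nfs4_client_recover_expired_lease(), not the literal code):

	int ret;

	for (;;) {
		ret = nfs4_wait_clnt_recover(clp);
		if (ret != 0)
			break;
		if (!test_bit(NFS4CLNT_LEASE_EXPIRED, &clp->cl_state))
			break;
		/*
		 * Lease recovery did not complete; kick the state
		 * manager and wait again.  With CLID_INUSE now a
		 * permanent error, nfs4_wait_clnt_recover() has to
		 * fail (via cl_cons_state) or this loop never exits.
		 */
		nfs4_schedule_state_manager(clp);
	}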

> We discussed this.  This is what you told me to do.
> 
>>> static int nfs4_delay(struct rpc_clnt *clnt, long *timeout)
>>> @@ -4038,42 +4043,28 @@ int nfs4_proc_setclientid(struct nfs_client *clp, u32 program,
>>> 		.rpc_resp = res,
>>> 		.rpc_cred = cred,
>>> 	};
>>> -	int loop = 0;
>>> -	int status;
>>> 
>>> +	/* nfs_client_id4 */
>>> 	nfs4_init_boot_verifier(clp, &sc_verifier);
>>> -
>>> -	for(;;) {
>>> -		rcu_read_lock();
>>> -		setclientid.sc_name_len = scnprintf(setclientid.sc_name,
>>> -				sizeof(setclientid.sc_name), "%s/%s %s %s %u",
>>> -				clp->cl_ipaddr,
>>> -				rpc_peeraddr2str(clp->cl_rpcclient,
>>> -							RPC_DISPLAY_ADDR),
>>> -				rpc_peeraddr2str(clp->cl_rpcclient,
>>> -							RPC_DISPLAY_PROTO),
>>> -				clp->cl_rpcclient->cl_auth->au_ops->au_name,
>>> -				clp->cl_id_uniquifier);
>>> -		setclientid.sc_netid_len = scnprintf(setclientid.sc_netid,
>>> +	rcu_read_lock();
>>> +	setclientid.sc_name_len = scnprintf(setclientid.sc_name,
>>> +			sizeof(setclientid.sc_name), "%s/%s %s",
>>> +			clp->cl_ipaddr,
>>> +			rpc_peeraddr2str(clp->cl_rpcclient,
>>> +						RPC_DISPLAY_ADDR),
>>> +			rpc_peeraddr2str(clp->cl_rpcclient,
>>> +						RPC_DISPLAY_PROTO));
>>> +	/* cb_client4 */
>>> +	setclientid.sc_netid_len = scnprintf(setclientid.sc_netid,
>>> 				sizeof(setclientid.sc_netid),
>>> 				rpc_peeraddr2str(clp->cl_rpcclient,
>>> 							RPC_DISPLAY_NETID));
>>> -		setclientid.sc_uaddr_len = scnprintf(setclientid.sc_uaddr,
>>> +	rcu_read_unlock();
>>> +	setclientid.sc_uaddr_len = scnprintf(setclientid.sc_uaddr,
>>> 				sizeof(setclientid.sc_uaddr), "%s.%u.%u",
>>> 				clp->cl_ipaddr, port >> 8, port & 255);
>>> -		rcu_read_unlock();
>>> 
>>> -		status = rpc_call_sync(clp->cl_rpcclient, &msg, RPC_TASK_TIMEOUT);
>>> -		if (status != -NFS4ERR_CLID_INUSE)
>>> -			break;
>>> -		if (loop != 0) {
>>> -			++clp->cl_id_uniquifier;
>>> -			break;
>>> -		}
>>> -		++loop;
>>> -		ssleep(clp->cl_lease_time / HZ + 1);
>>> -	}
>>> -	return status;
>>> +	return rpc_call_sync(clp->cl_rpcclient, &msg, RPC_TASK_TIMEOUT);
>>> }
>>> 
>>> int nfs4_proc_setclientid_confirm(struct nfs_client *clp,
>>> @@ -5275,10 +5266,9 @@ int nfs4_proc_exchange_id(struct nfs_client *clp, struct rpc_cred *cred)
>>> 	nfs4_init_boot_verifier(clp, &verifier);
>>> 
>>> 	args.id_len = scnprintf(args.id, sizeof(args.id),
>>> -				"%s/%s/%u",
>>> +				"%s/%s",
>>> 				clp->cl_ipaddr,
>>> -				clp->cl_rpcclient->cl_nodename,
>>> -				clp->cl_rpcclient->cl_auth->au_flavor);
>>> +				clp->cl_rpcclient->cl_nodename);
>>> 
>>> 	res.server_owner = kzalloc(sizeof(struct nfs41_server_owner),
>>> 					GFP_NOFS);
>>> diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c
>>> index f38300e..a2c1153 100644
>>> --- a/fs/nfs/nfs4state.c
>>> +++ b/fs/nfs/nfs4state.c
>>> @@ -1606,10 +1606,15 @@ static int nfs4_handle_reclaim_lease_error(struct nfs_client *clp, int status)
>>> 			return -ESERVERFAULT;
>>> 		/* Lease confirmation error: retry after purging the lease */
>>> 		ssleep(1);
>>> -	case -NFS4ERR_CLID_INUSE:
>>> 	case -NFS4ERR_STALE_CLIENTID:
>>> 		clear_bit(NFS4CLNT_LEASE_CONFIRM, &clp->cl_state);
>>> 		break;
>>> +	case -NFS4ERR_CLID_INUSE:
>>> +		printk(KERN_ERR "NFS: Server %s: client ID already in use\n",
>>> +			clp->cl_hostname);
>>> +		nfs_mark_client_ready(clp, -EPERM);
>>> +		clear_bit(NFS4CLNT_LEASE_CONFIRM, &clp->cl_state);
>>> +		return -EPERM;
>> 
>> How do we guarantee that this won't occur on an existing mount?
>> 
>>> 	case -EACCES:
>>> 		if (clp->cl_machine_cred == NULL)
>>> 			return -EACCES;
>>> diff --git a/include/linux/nfs_fs_sb.h b/include/linux/nfs_fs_sb.h
>>> index f58325a..6532765 100644
>>> --- a/include/linux/nfs_fs_sb.h
>>> +++ b/include/linux/nfs_fs_sb.h
>>> @@ -69,10 +69,9 @@ struct nfs_client {
>>> 	struct idmap *		cl_idmap;
>>> 
>>> 	/* Our own IP address, as a null-terminated string.
>>> -	 * This is used to generate the clientid, and the callback address.
>>> +	 * This is used to generate the mv0 callback address.
>>> 	 */
>>> 	char			cl_ipaddr[48];
>>> -	unsigned char		cl_id_uniquifier;
>>> 	u32			cl_cb_ident;	/* v4.0 callback identifier */
>>> 	const struct nfs4_minor_version_ops *cl_mvops;
>>> 
>>> 
>> 
>> -- 
>> Trond Myklebust
>> Linux NFS client maintainer
>> 
>> NetApp
>> Trond.Myklebust@netapp.com
>> www.netapp.com
>> 
>> --
>> To unsubscribe from this list: send the line "unsubscribe linux-nfs" in
>> the body of a message to majordomo@vger.kernel.org
>> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>> 
> 
> -- 
> Chuck Lever
> chuck[dot]lever[at]oracle[dot]com
> 
> 
> 
> 
> --
> To unsubscribe from this list: send the line "unsubscribe linux-nfs" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html

-- 
Chuck Lever
chuck[dot]lever[at]oracle[dot]com





^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH 06/14] NFS: Treat NFS4ERR_CLID_INUSE as a fatal error
  2012-07-09 20:12     ` Chuck Lever
  2012-07-09 20:25       ` Chuck Lever
@ 2012-07-09 20:34       ` Myklebust, Trond
  1 sibling, 0 replies; 31+ messages in thread
From: Myklebust, Trond @ 2012-07-09 20:34 UTC (permalink / raw)
  To: Chuck Lever; +Cc: linux-nfs

On Mon, 2012-07-09 at 16:12 -0400, Chuck Lever wrote:
> On Jul 9, 2012, at 4:10 PM, Myklebust, Trond wrote:
> 
> > On Mon, 2012-07-09 at 11:44 -0400, Chuck Lever wrote:
> >> For NFSv4 minor version 0, currently the cl_id_uniquifier allows the
> >> Linux client to generate a unique nfs_client_id4 string whenever a
> >> server replies with NFS4ERR_CLID_INUSE.
> >> 
> >> This implementation seems to be based on a flawed reading of RFC
> >> 3530.  NFS4ERR_CLID_INUSE actually means that the client has presented
> >> this nfs_client_id4 string with a different principal at some time in
> >> the past, and that lease is still in use on the server.
> >> 
> >> For a Linux client this might be rather difficult to achieve: the
> >> authentication flavor is named right in the nfs_client_id4.id
> >> string.  If we change flavors, we change strings automatically.
> >> 
> >> So, practically speaking, NFS4ERR_CLID_INUSE means there is some other
> >> client using our string.  There is not much that can be done to
> >> recover automatically.  Let's make it a permanent error.
> >> 
> >> Remove the recovery logic in nfs4_proc_setclientid(), and remove the
> >> cl_id_uniquifier field from the nfs_client data structure.  And,
> >> remove the authentication flavor from the nfs_client_id4 string.
> >> 
> >> Keeping the authentication flavor in the nfs_client_id4.id string
> >> means that we could have a separate lease for each authentication
> >> flavor used by mounts on the client.  But we want just one lease for
> >> all the mounts on this client.
> >> 
> >> Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
> >> ---
> >> 
> >> fs/nfs/nfs4proc.c         |   54 ++++++++++++++++++---------------------
> >> fs/nfs/nfs4state.c        |    7 +++++-
> >> include/linux/nfs_fs_sb.h |    3 +--
> >> 3 files changed, 29 insertions(+), 35 deletions(-)
> >> 
> >> diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
> >> index 7adb705..63a3cf0 100644
> >> --- a/fs/nfs/nfs4proc.c
> >> +++ b/fs/nfs/nfs4proc.c
> >> @@ -259,7 +259,12 @@ static int nfs4_wait_clnt_recover(struct nfs_client *clp)
> >> 
> >> 	res = wait_on_bit(&clp->cl_state, NFS4CLNT_MANAGER_RUNNING,
> >> 			nfs_wait_bit_killable, TASK_KILLABLE);
> >> -	return res;
> >> +	if (res)
> >> +		return res;
> >> +
> >> +	if (clp->cl_cons_state < 0)
> >> +		return clp->cl_cons_state;
> >> +	return 0;
> >> }
> > 
> > A) This should be a separate patch.
> 
> OK.
> 
> > 
> > B) Why is it needed in the first place? All places that call
> > nfs4_wait_clnt_recover lie outside the mount code path.
> 
> The problem is that this is no longer the case once we have trunking discovery in the mount path.
> 
> We discussed this.  This is what you told me to do.

The only new error condition your trunking detection is potentially
introducing here is CLID_INUSE. Why does that warrant this change?

-- 
Trond Myklebust
Linux NFS client maintainer

NetApp
Trond.Myklebust@netapp.com
www.netapp.com

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH 03/14] NFS: State reclaim clears OPEN and LOCK state
  2012-07-09 20:19       ` Chuck Lever
@ 2012-07-09 20:35         ` Myklebust, Trond
  2012-07-09 20:37           ` Chuck Lever
  0 siblings, 1 reply; 31+ messages in thread
From: Myklebust, Trond @ 2012-07-09 20:35 UTC (permalink / raw)
  To: Chuck Lever; +Cc: Schumaker, Bryan, linux-nfs

On Mon, 2012-07-09 at 16:19 -0400, Chuck Lever wrote:
> On Jul 9, 2012, at 4:15 PM, Chuck Lever wrote:
> 
> > 
> > On Jul 9, 2012, at 4:03 PM, Myklebust, Trond wrote:
> > 
> >> On Mon, 2012-07-09 at 11:44 -0400, Chuck Lever wrote:
> >>> Before beginning state recovery, the state manager zaps open and lock
> >>> state for each open file, thus the "state->flags & flags" test in
> >>> nfs41_{open,lock}_expired() always fails during reclaim.  But open
> >>> recovery is still needed for these files.
> >>> 
> >>> To force a call to nfs4_open_expired(), change the default return
> >>> value for nfs41_check_expired_stateid() to force open recovery, and
> >>> the default return value for nfs41_check_locks() to force lock
> >>> recovery, if the requested flags are clear.  Fix suggested by Bryan
> >>> Schumaker.
> >> 
> >> This makes absolutely no sense...
> >> 
> >> The point of these functions should be to test if the stateid is still
> >> valid. If it is, then we need no recovery.
> >> If the stateid is _not_ valid, then we free it, and then decide if
> >> recovery is needed. The only exception to that rule is if TEST_STATEID
> >> returns NFS4ERR_BAD_STATEID, (in which case we don't need to free the
> >> stateid but just recover immediately).
> > 
> > So you'd like to change the sense of the switch statement in my earlier patch:
> > 
> >   NFS4_OK:
> >     do nothing
> >   NFS4ERR_BAD_STATEID:
> >     just open
> >   default:
> >     free the state ID, then open
> > 
> > I still don't think we need a FREE_STATEID call for NFS4ERR_EXPIRED, for example.  It seems to me the only cases where FREE_STATEID is needed are the _REVOKED cases.
> > 
> >> IOW: The state->flags and lsp->ls_flags tests should just be removed.
> > 
> > Bryan had a reason to leave those tests in, but I don't remember what it was.
> 
> Ah.
> 
> I think it was that if those flags are set, then the client assumes the server still has that state, thus no recovery actions should be done.

??? If the server is still holding the state, then the TEST_STATEID will
return '0'. Why do we need extra tests?

-- 
Trond Myklebust
Linux NFS client maintainer

NetApp
Trond.Myklebust@netapp.com
www.netapp.com

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH 03/14] NFS: State reclaim clears OPEN and LOCK state
  2012-07-09 20:35         ` Myklebust, Trond
@ 2012-07-09 20:37           ` Chuck Lever
  2012-07-09 21:31             ` Myklebust, Trond
  0 siblings, 1 reply; 31+ messages in thread
From: Chuck Lever @ 2012-07-09 20:37 UTC (permalink / raw)
  To: Myklebust, Trond; +Cc: Schumaker, Bryan, linux-nfs


On Jul 9, 2012, at 4:35 PM, Myklebust, Trond wrote:

> On Mon, 2012-07-09 at 16:19 -0400, Chuck Lever wrote:
>> On Jul 9, 2012, at 4:15 PM, Chuck Lever wrote:
>> 
>>> 
>>> On Jul 9, 2012, at 4:03 PM, Myklebust, Trond wrote:
>>> 
>>>> On Mon, 2012-07-09 at 11:44 -0400, Chuck Lever wrote:
>>>>> Before beginning state recovery, the state manager zaps open and lock
>>>>> state for each open file, thus the "state->flags & flags" test in
>>>>> nfs41_{open,lock}_expired() always fails during reclaim.  But open
>>>>> recovery is still needed for these files.
>>>>> 
>>>>> To force a call to nfs4_open_expired(), change the default return
>>>>> value for nfs41_check_expired_stateid() to force open recovery, and
>>>>> the default return value for nfs41_check_locks() to force lock
>>>>> recovery, if the requested flags are clear.  Fix suggested by Bryan
>>>>> Schumaker.
>>>> 
>>>> This makes absolutely no sense...
>>>> 
>>>> The point of these functions should be to test if the stateid is still
>>>> valid. If it is, then we need no recovery.
>>>> If the stateid is _not_ valid, then we free it, and then decide if
>>>> recovery is needed. The only exception to that rule is if TEST_STATEID
>>>> returns NFS4ERR_BAD_STATEID, (in which case we don't need to free the
>>>> stateid but just recover immediately).
>>> 
>>> So you'd like to change the sense of the switch statement in my earlier patch:
>>> 
>>>  NFS4_OK:
>>>    do nothing
>>>  NFS4ERR_BAD_STATEID:
>>>    just open
>>>  default:
>>>    free the state ID, then open
>>> 
>>> I still don't think we need a FREE_STATEID call for NFS4ERR_EXPIRED, for example.  It seems to me the only cases where FREE_STATEID is needed are the _REVOKED cases.
>>> 
>>>> IOW: The state->flags and lsp->ls_flags tests should just be removed.
>>> 
>>> Bryan had a reason to leave those tests in, but I don't remember what it was.
>> 
>> Ah.
>> 
>> I think it was that if those flags are set, then the client assumes the server still has that state, thus no recovery actions should be done.
> 
> ??? If the server is still holding the state, then the TEST_STATEID will
> return '0'. Why do we need extra tests?

This is Bryan's design.  But my understanding is that recovery always visits every state ID, and these flags prevent tests from going to the server for all but the state ID to be recovered.

Is that correct, Bryan?

-- 
Chuck Lever
chuck[dot]lever[at]oracle[dot]com





^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH 03/14] NFS: State reclaim clears OPEN and LOCK state
  2012-07-09 20:37           ` Chuck Lever
@ 2012-07-09 21:31             ` Myklebust, Trond
  2012-07-09 21:45               ` Chuck Lever
  0 siblings, 1 reply; 31+ messages in thread
From: Myklebust, Trond @ 2012-07-09 21:31 UTC (permalink / raw)
  To: Chuck Lever; +Cc: Schumaker, Bryan, linux-nfs

On Mon, 2012-07-09 at 16:37 -0400, Chuck Lever wrote:
> On Jul 9, 2012, at 4:35 PM, Myklebust, Trond wrote:
> 
> > On Mon, 2012-07-09 at 16:19 -0400, Chuck Lever wrote:
> >> On Jul 9, 2012, at 4:15 PM, Chuck Lever wrote:
> >> 
> >>> 
> >>> On Jul 9, 2012, at 4:03 PM, Myklebust, Trond wrote:
> >>> 
> >>>> On Mon, 2012-07-09 at 11:44 -0400, Chuck Lever wrote:
> >>>>> Before beginning state recovery, the state manager zaps open and lock
> >>>>> state for each open file, thus the "state->flags & flags" test in
> >>>>> nfs41_{open,lock}_expired() always fails during reclaim.  But open
> >>>>> recovery is still needed for these files.
> >>>>> 
> >>>>> To force a call to nfs4_open_expired(), change the default return
> >>>>> value for nfs41_check_expired_stateid() to force open recovery, and
> >>>>> the default return value for nfs41_check_locks() to force lock
> >>>>> recovery, if the requested flags are clear.  Fix suggested by Bryan
> >>>>> Schumaker.
> >>>> 
> >>>> This makes absolutely no sense...
> >>>> 
> >>>> The point of these functions should be to test if the stateid is still
> >>>> valid. If it is, then we need no recovery.
> >>>> If the stateid is _not_ valid, then we free it, and then decide if
> >>>> recovery is needed. The only exception to that rule is if TEST_STATEID
> >>>> returns NFS4ERR_BAD_STATEID, (in which case we don't need to free the
> >>>> stateid but just recover immediately).
> >>> 
> >>> So you'd like to change the sense of the switch statement in my earlier patch:
> >>> 
> >>>  NFS4_OK:
> >>>    do nothing
> >>>  NFS4ERR_BAD_STATEID:
> >>>    just open
> >>>  default:
> >>>    free the state ID, then open
> >>> 
> >>> I still don't think we need a FREE_STATEID call for NFS4ERR_EXPIRED, for example.  It seems to me the only cases where FREE_STATEID is needed are the _REVOKED cases.

I'm not sure about what the rules are for NFS4ERR_EXPIRED. As far as I'm
aware, NFS4ERR_EXPIRED is NFSv4.0-speak for *_STATE_REVOKED; returning
it as part of a TEST_STATEID makes little sense to me, so I'm taking the
conservative approach to it.

> >>>> IOW: The state->flags and lsp->ls_flags tests should just be removed.
> >>> 
> >>> Bryan had a reason to leave those tests in, but I don't remember what it was.
> >> 
> >> Ah.
> >> 
> >> I think it was that if those flags are set, then the client assumes the server still has that state, thus no recovery actions should be done.
> > 
> > ??? If the server is still holding the state, then the TEST_STATEID will
> > return '0'. Why do we need extra tests?
> 
> This is Bryan's design.  But my understanding is that recovery always visits every state ID, and these flags prevent tests from going to the server for all but the state ID to be recovered.
> 
> Is that correct, Bryan?

OK. I think I understand what the issue is.

If we call nfs4_state_start_reclaim_nograce() (due to a lease purge or a
NFS4ERR_EXPIRED on nfs4_check_lease()), then that resets the sequence
ids and does a full reset of all state. In that case, we do indeed not
want to call TEST_STATEID, since the best it can do is return
NFS4ERR_OLD_STATEID.

OK. This needs some comments in the code...

-- 
Trond Myklebust
Linux NFS client maintainer

NetApp
Trond.Myklebust@netapp.com
www.netapp.com

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH 03/14] NFS: State reclaim clears OPEN and LOCK state
  2012-07-09 21:31             ` Myklebust, Trond
@ 2012-07-09 21:45               ` Chuck Lever
  2012-07-09 21:51                 ` Myklebust, Trond
  0 siblings, 1 reply; 31+ messages in thread
From: Chuck Lever @ 2012-07-09 21:45 UTC (permalink / raw)
  To: Myklebust, Trond; +Cc: Schumaker, Bryan, linux-nfs


On Jul 9, 2012, at 5:31 PM, Myklebust, Trond wrote:

> On Mon, 2012-07-09 at 16:37 -0400, Chuck Lever wrote:
>> On Jul 9, 2012, at 4:35 PM, Myklebust, Trond wrote:
>> 
>>> On Mon, 2012-07-09 at 16:19 -0400, Chuck Lever wrote:
>>>> On Jul 9, 2012, at 4:15 PM, Chuck Lever wrote:
>>>> 
>>>>> 
>>>>> On Jul 9, 2012, at 4:03 PM, Myklebust, Trond wrote:
>>>>> 
>>>>>> On Mon, 2012-07-09 at 11:44 -0400, Chuck Lever wrote:
>>>>>>> Before beginning state recovery, the state manager zaps open and lock
>>>>>>> state for each open file, thus the "state->flags & flags" test in
>>>>>>> nfs41_{open,lock}_expired() always fails during reclaim.  But open
>>>>>>> recovery is still needed for these files.
>>>>>>> 
>>>>>>> To force a call to nfs4_open_expired(), change the default return
>>>>>>> value for nfs41_check_expired_stateid() to force open recovery, and
>>>>>>> the default return value for nfs41_check_locks() to force lock
>>>>>>> recovery, if the requested flags are clear.  Fix suggested by Bryan
>>>>>>> Schumaker.
>>>>>> 
>>>>>> This makes absolutely no sense...
>>>>>> 
>>>>>> The point of these functions should be to test if the stateid is still
>>>>>> valid. If it is, then we need no recovery.
>>>>>> If the stateid is _not_ valid, then we free it, and then decide if
>>>>>> recovery is needed. The only exception to that rule is if TEST_STATEID
>>>>>> returns NFS4ERR_BAD_STATEID, (in which case we don't need to free the
>>>>>> stateid but just recover immediately).
>>>>> 
>>>>> So you'd like to change the sense of the switch statement in my earlier patch:
>>>>> 
>>>>> NFS4_OK:
>>>>>   do nothing
>>>>> NFS4ERR_BAD_STATEID:
>>>>>   just open
>>>>> default:
>>>>>   free the state ID, then open
>>>>> 
>>>>> I still don't think we need a FREE_STATEID call for NFS4ERR_EXPIRED, for example.  It seems to me the only cases where FREE_STATEID is needed are the _REVOKED cases.
> 
> I'm not sure about what the rules are for NFS4ERR_EXPIRED. As far as I'm
> aware, NFS4ERR_EXPIRED is NFSv4.0-speak for *_STATE_REVOKED; returning
> it as part of a TEST_STATEID makes little sense to me, so I'm taking the
> conservative approach to it.

Based on your review comments, we have an opportunity to make this error handling clearer; I'd like to settle on the details before redriving the patch.

I'm not quite sure what "conservative approach" means.  Do you want to include NFS4ERR_EXPIRED with the _REVOKED arms, treat it as evidence of a broken server, or ignore it entirely?

Do you want to leave the default: arm as it is, or do you have a preference for adding additional explicit error codes in the switch statement?  Are there cases which report that something has gone awry versus cases which simply report the state ID is rotten?  My feeling when I wrote this was to get the switch statement in there, and then we can introduce more error codes as needed.


> 
>>>>>> IOW: The state->flags and lsp->ls_flags tests should just be removed.
>>>>> 
>>>>> Bryan had a reason to leave those tests in, but I don't remember what it was.
>>>> 
>>>> Ah.
>>>> 
>>>> I think it was that if those flags are set, then the client assumes the server still has that state, thus no recovery actions should be done.
>>> 
>>> ??? If the server is still holding the state, then the TEST_STATEID will
>>> return '0'. Why do we need extra tests?
>> 
>> This is Bryan's design.  But my understanding is that recovery always visits every state ID, and these flags prevent tests from going to the server for all but the state ID to be recovered.
>> 
>> Is that correct, Bryan?
> 
> OK. I think I understand what the issue is.
> 
> If we call nfs4_state_start_reclaim_nograce() (due to a lease purge or a
> NFS4ERR_EXPIRED on nfs4_check_lease()), then that resets the sequence
> ids and does a full reset of all state. In that case, we do indeed not
> want to call TEST_STATEID, since the best it can do is return
> NFS4ERR_OLD_STATEID.
> 
> OK. This needs some comments in the code...

I can add comments in both the patch description and in the code itself.
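
For instance, something along these lines at the top of the nfs41
expired-state checks (placement and wording illustrative only):

	/*
	 * A nograce reclaim (lease purge, or NFS4ERR_EXPIRED from
	 * nfs4_check_lease()) has already reset the seqids and zapped
	 * all open and lock state, so TEST_STATEID could at best
	 * return NFS4ERR_OLD_STATEID here.  Skip the RPC and go
	 * straight to open/lock recovery.
	 */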

-- 
Chuck Lever
chuck[dot]lever[at]oracle[dot]com





^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH 03/14] NFS: State reclaim clears OPEN and LOCK state
  2012-07-09 21:45               ` Chuck Lever
@ 2012-07-09 21:51                 ` Myklebust, Trond
  0 siblings, 0 replies; 31+ messages in thread
From: Myklebust, Trond @ 2012-07-09 21:51 UTC (permalink / raw)
  To: Chuck Lever; +Cc: Schumaker, Bryan, linux-nfs

On Mon, 2012-07-09 at 17:45 -0400, Chuck Lever wrote:
> On Jul 9, 2012, at 5:31 PM, Myklebust, Trond wrote:
> 
> > On Mon, 2012-07-09 at 16:37 -0400, Chuck Lever wrote:
> >> On Jul 9, 2012, at 4:35 PM, Myklebust, Trond wrote:
> >> 
> >>> On Mon, 2012-07-09 at 16:19 -0400, Chuck Lever wrote:
> >>>> On Jul 9, 2012, at 4:15 PM, Chuck Lever wrote:
> >>>> 
> >>>>> 
> >>>>> On Jul 9, 2012, at 4:03 PM, Myklebust, Trond wrote:
> >>>>> 
> >>>>>> On Mon, 2012-07-09 at 11:44 -0400, Chuck Lever wrote:
> >>>>>>> Before beginning state recovery, the state manager zaps open and lock
> >>>>>>> state for each open file, thus the "state->flags & flags" test in
> >>>>>>> nfs41_{open,lock}_expired() always fails during reclaim.  But open
> >>>>>>> recovery is still needed for these files.
> >>>>>>> 
> >>>>>>> To force a call to nfs4_open_expired(), change the default return
> >>>>>>> value for nfs41_check_expired_stateid() to force open recovery, and
> >>>>>>> the default return value for nfs41_check_locks() to force lock
> >>>>>>> recovery, if the requested flags are clear.  Fix suggested by Bryan
> >>>>>>> Schumaker.
> >>>>>> 
> >>>>>> This makes absolutely no sense...
> >>>>>> 
> >>>>>> The point of these functions should be to test if the stateid is still
> >>>>>> valid. If it is, then we need no recovery.
> >>>>>> If the stateid is _not_ valid, then we free it, and then decide if
> >>>>>> recovery is needed. The only exception to that rule is if TEST_STATEID
> >>>>>> returns NFS4ERR_BAD_STATEID, (in which case we don't need to free the
> >>>>>> stateid but just recover immediately).
> >>>>> 
> >>>>> So you'd like to change the sense of the switch statement in my earlier patch:
> >>>>> 
> >>>>> NFS4_OK:
> >>>>>   do nothing
> >>>>> NFS4ERR_BAD_STATEID:
> >>>>>   just open
> >>>>> default:
> >>>>>   free the state ID, then open
> >>>>> 
> >>>>> I still don't think we need a FREE_STATEID call for NFS4ERR_EXPIRED, for example.  It seems to me the only cases where FREE_STATEID is needed are the _REVOKED cases.
> > 
> > I'm not sure about what the rules are for NFS4ERR_EXPIRED. As far as I'm
> > aware, NFS4ERR_EXPIRED is NFSv4.0-speak for *_STATE_REVOKED; returning
> > it as part of a TEST_STATEID makes little sense to me, so I'm taking the
> > conservative approach to it.
> 
> Based on your review comments, we have an opportunity to make this error handling clearer; I'd like to settle on the details before redriving the patch.
> 
> I'm not quite sure what "conservative approach" means.  Do you want to include NFS4ERR_EXPIRED with the _REVOKED arms, treat it as evidence of a broken server, or ignore it entirely?

I mean that calling FREE_STATEID in cases where it isn't strictly needed
because the stateid is already invalid is safe. Not calling it when it
is needed ties up resources unnecessarily on the server.

-- 
Trond Myklebust
Linux NFS client maintainer

NetApp
Trond.Myklebust@netapp.com
www.netapp.com

^ permalink raw reply	[flat|nested] 31+ messages in thread

end of thread

Thread overview: 31+ messages
2012-07-09 15:43 [PATCH 00/14] Candidates for 3.6 Chuck Lever
2012-07-09 15:43 ` [PATCH 01/14] NFS: Clean up TEST_STATEID and FREE_STATEID proc error reporting Chuck Lever
2012-07-09 19:36   ` Myklebust, Trond
2012-07-09 19:51     ` Chuck Lever
2012-07-09 15:44 ` [PATCH 02/14] NFS: Properly sort TEST_STATEID results Chuck Lever
2012-07-09 19:34   ` Myklebust, Trond
2012-07-09 19:47     ` Chuck Lever
2012-07-09 15:44 ` [PATCH 03/14] NFS: State reclaim clears OPEN and LOCK state Chuck Lever
2012-07-09 20:03   ` Myklebust, Trond
2012-07-09 20:15     ` Chuck Lever
2012-07-09 20:19       ` Chuck Lever
2012-07-09 20:35         ` Myklebust, Trond
2012-07-09 20:37           ` Chuck Lever
2012-07-09 21:31             ` Myklebust, Trond
2012-07-09 21:45               ` Chuck Lever
2012-07-09 21:51                 ` Myklebust, Trond
2012-07-09 15:44 ` [PATCH 04/14] NFS: Refactor nfs41_check_expired_stateid() Chuck Lever
2012-07-09 15:44 ` [PATCH 05/14] NFS: nfs_getaclargs.acl_len is a size_t Chuck Lever
2012-07-09 15:44 ` [PATCH 06/14] NFS: Treat NFS4ERR_CLID_INUSE as a fatal error Chuck Lever
2012-07-09 20:10   ` Myklebust, Trond
2012-07-09 20:12     ` Chuck Lever
2012-07-09 20:25       ` Chuck Lever
2012-07-09 20:34       ` Myklebust, Trond
2012-07-09 15:44 ` [PATCH 07/14] NFS: Clean up nfs4_proc_setclientid() and friends Chuck Lever
2012-07-09 15:44 ` [PATCH 08/14] SUNRPC: Add rpcauth_list_flavors() Chuck Lever
2012-07-09 15:45 ` [PATCH 09/14] NFS: Introduce "migration" mount option Chuck Lever
2012-07-09 15:45 ` [PATCH 10/14] NFS: Use the same nfs_client_id4 for every server Chuck Lever
2012-07-09 15:45 ` [PATCH 11/14] NFS: Discover NFSv4 server trunking when mounting Chuck Lever
2012-07-09 15:45 ` [PATCH 12/14] NFS: Add nfs4_unique_id boot parameter Chuck Lever
2012-07-09 15:45 ` [PATCH 13/14] NFS: Clean up debugging messages in fs/nfs/client.c Chuck Lever
2012-07-09 15:45 ` [PATCH 14/14] NFS: Slow down state manager after an unhandled error Chuck Lever
