* [PATCH_V4 0/9] NFSv4 callback find client fix Version 4
@ 2010-12-17 18:20 andros
  2010-12-17 18:20 ` [PATCH_V4 1/9] SUNRPC new transport for the NFSv4.1 shared back channel andros
  0 siblings, 1 reply; 21+ messages in thread
From: andros @ 2010-12-17 18:20 UTC (permalink / raw)
  To: trond.myklebust; +Cc: bfields, linux-nfs


Version 4 of these patches.
Applies to Trond's nfs-for-next branch.

The back channel server's RPC layer pg_authenticate call
[nfs_callback_authenticate], invoked from svc_process_common, is shared by
both the NFSv4.0 and the NFSv4.1 callback threads. It authenticates the
incoming request by finding (and referencing) an nfs_client struct based on
the incoming request address and the NFS version (4). This is akin to the
NFS server, which authenticates requests by matching the incoming address
against the exports file client list.
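
The shared lookup boils down to the following (a simplified sketch of the
current pg_authenticate path; see the nfs_find_client() calls removed later
in this series):

	/* match only the peer address and NFS version 4 - no minorversion */
	clp = nfs_find_client(svc_addr(rqstp), 4);
	if (clp == NULL)
		return SVC_DROP;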

Since there is no minorversion in the search, it may find the wrong
nfs_client struct. For the NFSv4.0 callback service thread, this means it
could find an NFSv4.1 nfs_client. For the NFSv4.1 callback service thread, it
could find an NFSv4.0 nfs_client instead of a v4.1 one, or find an NFSv4.1
nfs_client with the wrong session.

The nfs_client reference is dropped at the end of pg_authenticate. Another
nfs_find_client call is made after decoding, in the NFS layer per-operation
dispatcher routines for NFSv4.0 and in the cb_sequence operation dispatcher
routine for NFSv4.1.

This means the callback server could start processing a callback, passing
the pg_authenticate test, and have the nfs_client struct freed between the
pg_authenticate call and the dispatcher operation call. Or, it could have
found the wrong nfs_client in the pg_authenticate call.

The current code behaves as follows: if the nfs_client is not found in
pg_authenticate, the request is simply dropped (SVC_DROP). If an nfs_client
is not found in the dispatcher routines, NFS4ERR_BADSESSION is returned for
v4.1 requests and NFS4ERR_BADHANDLE for v4.0 requests.

The fix is to implement the v4.0 SETCLIENTID callback_ident and use it to find
the one and only client for v4.0 callbacks, and to associate the sessionid
with the sessions-based callback service (v4.1) and use it to identify the
one and only client. This can be done in the NFS layer, in CB_COMPOUND header
processing for v4.0 and in CB_SEQUENCE processing for v4.1.
In both cases, a reference to the found client is held across CB_COMPOUND
processing.
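
In the NFS layer the lookups become (a condensed sketch using the helpers
added by this series):

	/* v4.0: the CB_COMPOUND header carries the callback_ident returned
	 * by SETCLIENTID
	 */
	clp = nfs4_find_client_ident(svc_addr(rqstp), hdr_arg.cb_ident);

	/* v4.1: CB_SEQUENCE carries the sessionid, which must match the one
	 * recorded on the back channel transport after CREATE_SESSION
	 */
	clp = nfs4_find_client_sessionid(args->csa_addr,
					 &args->csa_sessionid, 1);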

The pg_authenticate method does not have the callback_ident for CB_NULL or
CB_COMPOUND v4.0 processing, so just the server address, NFS version, and
minorversion are used for the client search.

For the sessions-based callback service, the sessionid can't be set until the
CREATE_SESSION call returns, yet the server can initiate a CB_NULL ping upon
receipt of the CREATE_SESSION call, before the response is sent. So for
CB_NULL the sessionid is (usually) not yet set, and it is not used for
CB_NULL pg_authenticate client searches.

BUG fixes
0001-SUNRPC-new-transport-for-the-NFSv4.1-shared-back-cha.patch
0002-NFS-use-svc_create_xprt-for-NFSv4.1-callback-service.patch
0003-NFS-do-not-clear-minor-version-at-nfs_client-free.patch

New callback functionality
0004-NFS-implement-v4.0-callback_ident.patch
0005-NFS-associate-sessionid-with-callback-connection.patch
0006-NFS-reference-nfs_client-across-cb_compound-processi.patch
0007-NFS-RPC_AUTH_GSS-unsupported-on-v4.1-back-channel.patch

Rename
0008-NFS-add-session-back-channel-draining.patch
0009-NFS-rename-client-back-channel-transport-field.patch

-->Andy



* [PATCH_V4 1/9] SUNRPC new transport for the NFSv4.1 shared back channel
  2010-12-17 18:20 [PATCH_V4 0/9] NFSv4 callback find client fix Version 4 andros
@ 2010-12-17 18:20 ` andros
  2010-12-17 18:20   ` [PATCH_V4 2/9] NFS use svc_create_xprt for NFSv4.1 callback service andros
  2010-12-17 18:40   ` [PATCH_V4 1/9] SUNRPC new transport for the NFSv4.1 shared back channel J. Bruce Fields
  0 siblings, 2 replies; 21+ messages in thread
From: andros @ 2010-12-17 18:20 UTC (permalink / raw)
  To: trond.myklebust; +Cc: bfields, linux-nfs, Andy Adamson

From: Andy Adamson <andros@netapp.com>

Create a new transport for the shared back channel. Move the current sock
create and destroy routines into the new transport ops.

Reference the back channel transport across message processing (recv/send).

Signed-off-by: Andy Adamson <andros@netapp.com>
---
 include/linux/sunrpc/svcsock.h |    1 +
 net/sunrpc/svc.c               |   18 +++---
 net/sunrpc/svcsock.c           |  122 +++++++++++++++++++++++++++++++++-------
 3 files changed, 112 insertions(+), 29 deletions(-)

diff --git a/include/linux/sunrpc/svcsock.h b/include/linux/sunrpc/svcsock.h
index 1b353a7..3a45a80 100644
--- a/include/linux/sunrpc/svcsock.h
+++ b/include/linux/sunrpc/svcsock.h
@@ -45,6 +45,7 @@ int		svc_sock_names(struct svc_serv *serv, char *buf,
 int		svc_addsock(struct svc_serv *serv, const int fd,
 					char *name_return, const size_t len);
 void		svc_init_xprt_sock(void);
+void		svc_init_bc_xprt_sock(void);
 void		svc_cleanup_xprt_sock(void);
 struct svc_xprt *svc_sock_create(struct svc_serv *serv, int prot);
 void		svc_sock_destroy(struct svc_xprt *);
diff --git a/net/sunrpc/svc.c b/net/sunrpc/svc.c
index 6359c42..3520cb3 100644
--- a/net/sunrpc/svc.c
+++ b/net/sunrpc/svc.c
@@ -488,10 +488,6 @@ svc_destroy(struct svc_serv *serv)
 	if (svc_serv_is_pooled(serv))
 		svc_pool_map_put();
 
-#if defined(CONFIG_NFS_V4_1)
-	svc_sock_destroy(serv->bc_xprt);
-#endif /* CONFIG_NFS_V4_1 */
-
 	svc_unregister(serv);
 	kfree(serv->sv_pools);
 	kfree(serv);
@@ -1264,7 +1260,7 @@ bc_svc_process(struct svc_serv *serv, struct rpc_rqst *req,
 {
 	struct kvec	*argv = &rqstp->rq_arg.head[0];
 	struct kvec	*resv = &rqstp->rq_res.head[0];
-	int 		error;
+	int		ret;
 
 	/* Build the svc_rqst used by the common processing routine */
 	rqstp->rq_xprt = serv->bc_xprt;
@@ -1292,12 +1288,16 @@ bc_svc_process(struct svc_serv *serv, struct rpc_rqst *req,
 	svc_getu32(argv);	/* XID */
 	svc_getnl(argv);	/* CALLDIR */
 
-	error = svc_process_common(rqstp, argv, resv);
-	if (error <= 0)
-		return error;
+	/* Hold a reference to the back channel socket across the call */
+	svc_xprt_get(serv->bc_xprt);
+	ret = svc_process_common(rqstp, argv, resv);
+	if (ret <= 0)
+		return ret;
 
 	memcpy(&req->rq_snd_buf, &rqstp->rq_res, sizeof(req->rq_snd_buf));
-	return bc_send(req);
+	ret = bc_send(req);
+	svc_xprt_put(serv->bc_xprt);
+	return ret;
 }
 EXPORT_SYMBOL(bc_svc_process);
 #endif /* CONFIG_NFS_V4_1 */
diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
index 07919e1..5875551 100644
--- a/net/sunrpc/svcsock.c
+++ b/net/sunrpc/svcsock.c
@@ -66,6 +66,13 @@ static void		svc_sock_free(struct svc_xprt *);
 static struct svc_xprt *svc_create_socket(struct svc_serv *, int,
 					  struct net *, struct sockaddr *,
 					  int, int);
+#if defined(CONFIG_NFS_V4_1)
+static struct svc_xprt *svc_bc_create_socket(struct svc_serv *, int,
+					     struct net *, struct sockaddr *,
+					     int, int);
+static void svc_bc_sock_free(struct svc_xprt *xprt);
+#endif /* CONFIG_NFS_V4_1 */
+
 #ifdef CONFIG_DEBUG_LOCK_ALLOC
 static struct lock_class_key svc_key[2];
 static struct lock_class_key svc_slock_key[2];
@@ -1184,6 +1191,79 @@ static struct svc_xprt *svc_tcp_create(struct svc_serv *serv,
 	return svc_create_socket(serv, IPPROTO_TCP, net, sa, salen, flags);
 }
 
+#if defined(CONFIG_NFS_V4_1)
+static struct svc_xprt *svc_bc_create_socket(struct svc_serv *, int,
+					     struct net *, struct sockaddr *,
+					     int, int);
+static void svc_bc_sock_free(struct svc_xprt *xprt);
+
+static struct svc_xprt *svc_bc_tcp_create(struct svc_serv *serv,
+				       struct net *net,
+				       struct sockaddr *sa, int salen,
+				       int flags)
+{
+	return svc_bc_create_socket(serv, IPPROTO_TCP, net, sa, salen, flags);
+}
+
+static void svc_bc_tcp_sock_detach(struct svc_xprt *xprt)
+{
+}
+
+/* These bc ops are never called. The shared fore channel is used instead */
+static int svc_bc_tcp_recvfrom(struct svc_rqst *rqstp)
+{
+	return 0;
+}
+
+static int svc_bc_tcp_sendto(struct svc_rqst *rqstp)
+{
+	return 0;
+}
+
+static void svc_bc_tcp_prep_reply_hdr(struct svc_rqst *rqstp)
+{
+}
+
+static void svc_bc_release_skb(struct svc_rqst *rqstp)
+{
+}
+
+static int svc_bc_tcp_has_wspace(struct svc_xprt *xprt)
+{
+	return 0;
+}
+
+static struct svc_xprt *svc_bc_tcp_accept(struct svc_xprt *xprt)
+{
+	return NULL;
+}
+
+static struct svc_xprt_ops svc_tcp_bc_ops = {
+	.xpo_create = svc_bc_tcp_create,
+	.xpo_recvfrom = svc_bc_tcp_recvfrom,
+	.xpo_sendto = svc_bc_tcp_sendto,
+	.xpo_release_rqst = svc_bc_release_skb,
+	.xpo_detach = svc_bc_tcp_sock_detach,
+	.xpo_free = svc_bc_sock_free,
+	.xpo_prep_reply_hdr = svc_bc_tcp_prep_reply_hdr,
+	.xpo_has_wspace = svc_bc_tcp_has_wspace,
+	.xpo_accept = svc_bc_tcp_accept,
+};
+
+static struct svc_xprt_class svc_tcp_bc_class = {
+	.xcl_name = "tcp-bc",
+	.xcl_owner = THIS_MODULE,
+	.xcl_ops = &svc_tcp_bc_ops,
+	.xcl_max_payload = RPCSVC_MAXPAYLOAD_TCP,
+};
+
+void svc_init_bc_xprt_sock(void)
+{
+	svc_reg_xprt_class(&svc_tcp_bc_class);
+}
+EXPORT_SYMBOL_GPL(svc_init_bc_xprt_sock);
+#endif /* CONFIG_NFS_V4_1 */
+
 static struct svc_xprt_ops svc_tcp_ops = {
 	.xpo_create = svc_tcp_create,
 	.xpo_recvfrom = svc_tcp_recvfrom,
@@ -1509,41 +1589,43 @@ static void svc_sock_free(struct svc_xprt *xprt)
 	kfree(svsk);
 }
 
+#if defined(CONFIG_NFS_V4_1)
 /*
- * Create a svc_xprt.
- *
- * For internal use only (e.g. nfsv4.1 backchannel).
- * Callers should typically use the xpo_create() method.
+ * Create a back channel svc_xprt which shares the fore channel socket.
  */
-struct svc_xprt *svc_sock_create(struct svc_serv *serv, int prot)
+static struct svc_xprt *svc_bc_create_socket(struct svc_serv *serv,
+					     int protocol,
+					     struct net *net,
+					     struct sockaddr *sin, int len,
+					     int flags)
 {
 	struct svc_sock *svsk;
-	struct svc_xprt *xprt = NULL;
+	struct svc_xprt *xprt;
+
+	if (protocol != IPPROTO_TCP) {
+		printk(KERN_WARNING "svc: only TCP sockets"
+			" supported on shared back channel\n");
+		return ERR_PTR(-EINVAL);
+	}
 
-	dprintk("svc: %s\n", __func__);
 	svsk = kzalloc(sizeof(*svsk), GFP_KERNEL);
 	if (!svsk)
-		goto out;
+		return ERR_PTR(-ENOMEM);
 
 	xprt = &svsk->sk_xprt;
-	if (prot == IPPROTO_TCP)
-		svc_xprt_init(&svc_tcp_class, xprt, serv);
-	else if (prot == IPPROTO_UDP)
-		svc_xprt_init(&svc_udp_class, xprt, serv);
-	else
-		BUG();
-out:
-	dprintk("svc: %s return %p\n", __func__, xprt);
+	svc_xprt_init(&svc_tcp_bc_class, xprt, serv);
+
+	serv->bc_xprt = xprt;
+
 	return xprt;
 }
-EXPORT_SYMBOL_GPL(svc_sock_create);
 
 /*
- * Destroy a svc_sock.
+ * Free a back channel svc_sock.
  */
-void svc_sock_destroy(struct svc_xprt *xprt)
+static void svc_bc_sock_free(struct svc_xprt *xprt)
 {
 	if (xprt)
 		kfree(container_of(xprt, struct svc_sock, sk_xprt));
 }
-EXPORT_SYMBOL_GPL(svc_sock_destroy);
+#endif /* CONFIG_NFS_V4_1 */
-- 
1.6.6



* [PATCH_V4 2/9] NFS use svc_create_xprt for NFSv4.1 callback service
  2010-12-17 18:20 ` [PATCH_V4 1/9] SUNRPC new transport for the NFSv4.1 shared back channel andros
@ 2010-12-17 18:20   ` andros
  2010-12-17 18:20     ` [PATCH_V4 3/9] NFS do not clear minor version at nfs_client free andros
  2010-12-17 18:40   ` [PATCH_V4 1/9] SUNRPC new transport for the NFSv4.1 shared back channel J. Bruce Fields
  1 sibling, 1 reply; 21+ messages in thread
From: andros @ 2010-12-17 18:20 UTC (permalink / raw)
  To: trond.myklebust; +Cc: bfields, linux-nfs, Andy Adamson

From: Andy Adamson <andros@netapp.com>

Fixes a bug where the svc_xprt_put() call in svc_drop would try to free the
bc_xprt, and the TCP transport xpo_free routine would then hit a NULL
svc_sock pointer. SVC_DROP should release the xprt, but not free it;
svc_xprt_free should call the back channel free routine.

Signed-off-by: Andy Adamson <andros@netapp.com>
---
 fs/nfs/callback.c |   30 +++++++++++++++++++++---------
 1 files changed, 21 insertions(+), 9 deletions(-)

diff --git a/fs/nfs/callback.c b/fs/nfs/callback.c
index 93a8b3b..06f0ad9 100644
--- a/fs/nfs/callback.c
+++ b/fs/nfs/callback.c
@@ -177,30 +177,42 @@ nfs41_callback_svc(void *vrqstp)
 struct svc_rqst *
 nfs41_callback_up(struct svc_serv *serv, struct rpc_xprt *xprt)
 {
-	struct svc_xprt *bc_xprt;
-	struct svc_rqst *rqstp = ERR_PTR(-ENOMEM);
+	struct svc_rqst *rqstp;
+	int ret;
 
 	dprintk("--> %s\n", __func__);
-	/* Create a svc_sock for the service */
-	bc_xprt = svc_sock_create(serv, xprt->prot);
-	if (!bc_xprt)
+
+	/* register the back channel transport class */
+	svc_init_bc_xprt_sock();
+
+	/*
+	 * Create an svc_sock for the back channel service that shares the
+	 * fore channel connection.
+	 * Returns the input port (0) and sets the svc_serv bc_xprt on success
+	 */
+	ret = svc_create_xprt(serv, "tcp-bc", &init_net, PF_INET, 0,
+			      SVC_SOCK_ANONYMOUS);
+	if (ret < 0) {
+		rqstp = ERR_PTR(ret);
 		goto out;
+	}
 
 	/*
 	 * Save the svc_serv in the transport so that it can
 	 * be referenced when the session backchannel is initialized
 	 */
-	serv->bc_xprt = bc_xprt;
 	xprt->bc_serv = serv;
 
 	INIT_LIST_HEAD(&serv->sv_cb_list);
 	spin_lock_init(&serv->sv_cb_lock);
 	init_waitqueue_head(&serv->sv_cb_waitq);
 	rqstp = svc_prepare_thread(serv, &serv->sv_pools[0]);
-	if (IS_ERR(rqstp))
-		svc_sock_destroy(bc_xprt);
+	if (IS_ERR(rqstp)) {
+		svc_xprt_put(serv->bc_xprt);
+		serv->bc_xprt = NULL;
+	}
 out:
-	dprintk("--> %s return %p\n", __func__, rqstp);
+	dprintk("--> %s return %ld\n", __func__, PTR_ERR(rqstp));
 	return rqstp;
 }
 
-- 
1.6.6



* [PATCH_V4 3/9] NFS do not clear minor version at nfs_client free
  2010-12-17 18:20   ` [PATCH_V4 2/9] NFS use svc_create_xprt for NFSv4.1 callback service andros
@ 2010-12-17 18:20     ` andros
  2010-12-17 18:20       ` [PATCH_V4 4/9] NFS implement v4.0 callback_ident andros
  0 siblings, 1 reply; 21+ messages in thread
From: andros @ 2010-12-17 18:20 UTC (permalink / raw)
  To: trond.myklebust; +Cc: bfields, linux-nfs, Andy Adamson

From: Andy Adamson <andros@netapp.com>

Resetting the client minor version operations causes nfs4_destroy_callback
to fail to shut down the NFSv4.1 callback service.

There is no reason to reset the client minorversion operations when the
nfs_client struct is being freed.

Remove the minorversion reset and rename the function.

Signed-off-by: Andy Adamson <andros@netapp.com>
---
 fs/nfs/client.c |   22 +++++++++-------------
 1 files changed, 9 insertions(+), 13 deletions(-)

diff --git a/fs/nfs/client.c b/fs/nfs/client.c
index 0870d0d..855add6 100644
--- a/fs/nfs/client.c
+++ b/fs/nfs/client.c
@@ -170,21 +170,17 @@ error_0:
 }
 
 #ifdef CONFIG_NFS_V4
-/*
- * Clears/puts all minor version specific parts from an nfs_client struct
- * reverting it to minorversion 0.
- */
-static void nfs4_clear_client_minor_version(struct nfs_client *clp)
-{
 #ifdef CONFIG_NFS_V4_1
-	if (nfs4_has_session(clp)) {
+static void nfs4_shutdown_session(struct nfs_client *clp)
+{
+	if (nfs4_has_session(clp))
 		nfs4_destroy_session(clp->cl_session);
-		clp->cl_session = NULL;
-	}
-
-	clp->cl_mvops = nfs_v4_minor_ops[0];
-#endif /* CONFIG_NFS_V4_1 */
 }
+#else /* CONFIG_NFS_V4_1 */
+static void nfs4_shutdown_session(struct nfs_client *clp)
+{
+}
+#endif /* CONFIG_NFS_V4_1 */
 
 /*
  * Destroy the NFS4 callback service
@@ -199,7 +195,7 @@ static void nfs4_shutdown_client(struct nfs_client *clp)
 {
 	if (__test_and_clear_bit(NFS_CS_RENEWD, &clp->cl_res_state))
 		nfs4_kill_renewd(clp);
-	nfs4_clear_client_minor_version(clp);
+	nfs4_shutdown_session(clp);
 	nfs4_destroy_callback(clp);
 	if (__test_and_clear_bit(NFS_CS_IDMAP, &clp->cl_res_state))
 		nfs_idmap_delete(clp);
-- 
1.6.6



* [PATCH_V4 4/9] NFS implement v4.0 callback_ident
  2010-12-17 18:20     ` [PATCH_V4 3/9] NFS do not clear minor version at nfs_client free andros
@ 2010-12-17 18:20       ` andros
  2010-12-17 18:20         ` [PATCH_V4 5/9] NFS associate sessionid with callback connection andros
  0 siblings, 1 reply; 21+ messages in thread
From: andros @ 2010-12-17 18:20 UTC (permalink / raw)
  To: trond.myklebust; +Cc: bfields, linux-nfs, Andy Adamson

From: Andy Adamson <andros@netapp.com>

Provide a unique callback identifier per SETCLIENTID call; it is used to
identify the v4.0 callback service associated with the clientid.

Do not worry about wrap-around. A zero value means unset.

Signed-off-by: Andy Adamson <andros@netapp.com>
---
 fs/nfs/nfs4proc.c         |    5 +++++
 fs/nfs/nfs4state.c        |    1 +
 include/linux/nfs_fs_sb.h |    1 +
 include/linux/nfs_xdr.h   |    1 +
 4 files changed, 8 insertions(+), 0 deletions(-)

diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
index 4435e5e..ab5a2c4 100644
--- a/fs/nfs/nfs4proc.c
+++ b/fs/nfs/nfs4proc.c
@@ -3470,6 +3470,8 @@ do_state_recovery:
 	return -EAGAIN;
 }
 
+static u32 current_cb_ident = 1;
+
 int nfs4_proc_setclientid(struct nfs_client *clp, u32 program,
 		unsigned short port, struct rpc_cred *cred,
 		struct nfs4_setclientid_res *res)
@@ -3478,6 +3480,7 @@ int nfs4_proc_setclientid(struct nfs_client *clp, u32 program,
 	struct nfs4_setclientid setclientid = {
 		.sc_verifier = &sc_verifier,
 		.sc_prog = program,
+		.sc_cb_ident = current_cb_ident++,
 	};
 	struct rpc_message msg = {
 		.rpc_proc = &nfs4_procedures[NFSPROC4_CLNT_SETCLIENTID],
@@ -3522,6 +3525,8 @@ int nfs4_proc_setclientid(struct nfs_client *clp, u32 program,
 			if (++clp->cl_id_uniquifier == 0)
 				break;
 	}
+	if (status == NFS_OK)
+		res->cb_ident = setclientid.sc_cb_ident;
 	return status;
 }
 
diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c
index f575a31..fe61422 100644
--- a/fs/nfs/nfs4state.c
+++ b/fs/nfs/nfs4state.c
@@ -79,6 +79,7 @@ int nfs4_init_clientid(struct nfs_client *clp, struct rpc_cred *cred)
 	if (status != 0)
 		goto out;
 	clp->cl_clientid = clid.clientid;
+	clp->cl_cb_ident = clid.cb_ident;
 	nfs4_schedule_state_renewal(clp);
 out:
 	return status;
diff --git a/include/linux/nfs_fs_sb.h b/include/linux/nfs_fs_sb.h
index 452d964..1eaa054 100644
--- a/include/linux/nfs_fs_sb.h
+++ b/include/linux/nfs_fs_sb.h
@@ -71,6 +71,7 @@ struct nfs_client {
 	 */
 	char			cl_ipaddr[48];
 	unsigned char		cl_id_uniquifier;
+	u32			cl_cb_ident;	/* v4.0 callback identifier */
 	const struct nfs4_minor_version_ops *cl_mvops;
 #endif /* CONFIG_NFS_V4 */
 
diff --git a/include/linux/nfs_xdr.h b/include/linux/nfs_xdr.h
index 80f0719..24e77a6 100644
--- a/include/linux/nfs_xdr.h
+++ b/include/linux/nfs_xdr.h
@@ -868,6 +868,7 @@ struct nfs4_setclientid {
 struct nfs4_setclientid_res {
 	u64				clientid;
 	nfs4_verifier			confirm;
+	u32				cb_ident;
 };
 
 struct nfs4_statfs_arg {
-- 
1.6.6



* [PATCH_V4 5/9] NFS associate sessionid with callback connection
  2010-12-17 18:20       ` [PATCH_V4 4/9] NFS implement v4.0 callback_ident andros
@ 2010-12-17 18:20         ` andros
  2010-12-17 18:20           ` [PATCH_V4 6/9] NFS reference nfs_client across cb_compound processing andros
  0 siblings, 1 reply; 21+ messages in thread
From: andros @ 2010-12-17 18:20 UTC (permalink / raw)
  To: trond.myklebust; +Cc: bfields, linux-nfs, Andy Adamson

From: Andy Adamson <andros@netapp.com>

The sessions-based callback service is started prior to the CREATE_SESSION
call so that it can handle CB_NULL requests, which can be sent before the
CREATE_SESSION call returns and the session ID is known.

Set the callback sessionid after a successful CREATE_SESSION.

Signed-off-by: Andy Adamson <andros@netapp.com>
---
 fs/nfs/callback.c               |   31 +++++++++++++++++++++++++++++++
 fs/nfs/callback.h               |    1 +
 fs/nfs/nfs4state.c              |    6 ++++++
 include/linux/sunrpc/svc_xprt.h |    1 +
 net/sunrpc/svcsock.c            |    1 +
 5 files changed, 40 insertions(+), 0 deletions(-)

diff --git a/fs/nfs/callback.c b/fs/nfs/callback.c
index 06f0ad9..b2fab85 100644
--- a/fs/nfs/callback.c
+++ b/fs/nfs/callback.c
@@ -137,6 +137,33 @@ out_err:
 
 #if defined(CONFIG_NFS_V4_1)
 /*
+ * CB_SEQUENCE operations will fail until the callback sessionid is set.
+ */
+int nfs4_set_callback_sessionid(struct nfs_client *clp)
+{
+	struct svc_serv *serv = clp->cl_rpcclient->cl_xprt->bc_serv;
+	struct nfs4_sessionid *bc_sid;
+
+	if (!serv->bc_xprt)
+		return -EINVAL;
+
+	/* on success freed in xprt_free */
+	bc_sid = kmalloc(sizeof(struct nfs4_sessionid), GFP_KERNEL);
+	if (!bc_sid)
+		return -ENOMEM;
+	memcpy(bc_sid->data, &clp->cl_session->sess_id.data,
+		NFS4_MAX_SESSIONID_LEN);
+	spin_lock_bh(&serv->sv_cb_lock);
+	serv->bc_xprt->xpt_bc_sid = bc_sid;
+	spin_unlock_bh(&serv->sv_cb_lock);
+	dprintk("%s set xpt_bc_sid=%u:%u:%u:%u for bc_xprt %p\n", __func__,
+		((u32 *)bc_sid->data)[0], ((u32 *)bc_sid->data)[1],
+		((u32 *)bc_sid->data)[2], ((u32 *)bc_sid->data)[3],
+		serv->bc_xprt);
+	return 0;
+}
+
+/*
  * The callback service for NFSv4.1 callbacks
  */
 static int
@@ -245,6 +272,10 @@ static inline void nfs_callback_bc_serv(u32 minorversion, struct rpc_xprt *xprt,
 		struct nfs_callback_data *cb_info)
 {
 }
+int nfs4_set_callback_sessionid(struct nfs_client *clp)
+{
+	return 0;
+}
 #endif /* CONFIG_NFS_V4_1 */
 
 /*
diff --git a/fs/nfs/callback.h b/fs/nfs/callback.h
index 85a7cfd..58d61a8 100644
--- a/fs/nfs/callback.h
+++ b/fs/nfs/callback.h
@@ -137,6 +137,7 @@ extern int nfs_callback_up(u32 minorversion, struct rpc_xprt *xprt);
 extern void nfs_callback_down(int minorversion);
 extern int nfs4_validate_delegation_stateid(struct nfs_delegation *delegation,
 					    const nfs4_stateid *stateid);
+extern int nfs4_set_callback_sessionid(struct nfs_client *clp);
 #endif /* CONFIG_NFS_V4 */
 /*
  * nfs41: Callbacks are expected to not cause substantial latency,
diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c
index fe61422..11290de 100644
--- a/fs/nfs/nfs4state.c
+++ b/fs/nfs/nfs4state.c
@@ -193,6 +193,12 @@ int nfs41_init_clientid(struct nfs_client *clp, struct rpc_cred *cred)
 	status = nfs4_proc_create_session(clp);
 	if (status != 0)
 		goto out;
+	status = nfs4_set_callback_sessionid(clp);
+	if (status != 0) {
+		printk(KERN_WARNING "Sessionid not set. No callback service\n");
+		nfs_callback_down(1);
+		status = 0;
+	}
 	nfs41_setup_state_renewal(clp);
 	nfs_mark_client_ready(clp, NFS_CS_READY);
 out:
diff --git a/include/linux/sunrpc/svc_xprt.h b/include/linux/sunrpc/svc_xprt.h
index aea0d43..357da5e 100644
--- a/include/linux/sunrpc/svc_xprt.h
+++ b/include/linux/sunrpc/svc_xprt.h
@@ -78,6 +78,7 @@ struct svc_xprt {
 	size_t			xpt_remotelen;	/* length of address */
 	struct rpc_wait_queue	xpt_bc_pending;	/* backchannel wait queue */
 	struct list_head	xpt_users;	/* callbacks on free */
+	void			*xpt_bc_sid;	/* back channel session ID */
 
 	struct net		*xpt_net;
 };
diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
index 5875551..f8b5ca9 100644
--- a/net/sunrpc/svcsock.c
+++ b/net/sunrpc/svcsock.c
@@ -1626,6 +1626,7 @@ static struct svc_xprt *svc_bc_create_socket(struct svc_serv *serv,
 static void svc_bc_sock_free(struct svc_xprt *xprt)
 {
 	if (xprt)
+		kfree(xprt->xpt_bc_sid);
 		kfree(container_of(xprt, struct svc_sock, sk_xprt));
 }
 #endif /* CONFIG_NFS_V4_1 */
-- 
1.6.6



* [PATCH_V4 6/9] NFS reference nfs_client across cb_compound processing
  2010-12-17 18:20         ` [PATCH_V4 5/9] NFS associate sessionid with callback connection andros
@ 2010-12-17 18:20           ` andros
  2010-12-17 18:20             ` [PATCH_V4 7/9] NFS RPC_AUTH_GSS unsupported on v4.1 back channel andros
                               ` (2 more replies)
  0 siblings, 3 replies; 21+ messages in thread
From: andros @ 2010-12-17 18:20 UTC (permalink / raw)
  To: trond.myklebust; +Cc: bfields, linux-nfs, Andy Adamson

From: Andy Adamson <andros@netapp.com>

In the NFS layer, V4.0 clients are found using the callback_ident field in the
CB_COMPOUND header.  V4.1 clients are found using the sessionID in the
CB_SEQUENCE operation, which is also compared against the sessionID assigned
at back channel thread creation.

Each of these methods finds the one and only nfs_client associated
with the incoming callback request - so nfs_find_client_next is not needed.

Pass the referenced nfs_client to each CB_COMPOUND operation being processed
via the new cb_process_state structure.

In the RPC layer, the v4.0 callback identifier is not known for the
pg_authenticate call, so a zero value (unset) is used to find the nfs_client.
The sessionid for the sessions-based callback service has (usually) not been
set when pg_authenticate runs for a CB_NULL call, so the sessionid is not used
to find the client in pg_authenticate for CB_NULL calls. It is used for
pg_authenticate CB_COMPOUND calls.

Use the new cb_process_state struct to move the NFS4ERR_RETRY_UNCACHED_REP
processing from process_op into nfs4_callback_sequence where it belongs.
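
The reference lifecycle, condensed from the hunks below:

	struct cb_process_state cps = { .drc_status = 0, .clp = NULL, };

	/* nfs4_callback_compound(): v4.0 takes the reference up front */
	if (hdr_arg.minorversion == 0)
		cps.clp = nfs4_find_client_ident(svc_addr(rqstp),
						 hdr_arg.cb_ident);

	/* v4.1: nfs4_callback_sequence() takes the reference after the
	 * sessionid check and stores it in cps->clp for later operations
	 */

	/* each operation uses cps->clp; the compound processor drops it */
	nfs_put_client(cps.clp);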

Signed-off-by: Andy Adamson <andros@netapp.com>
---
 fs/nfs/callback.c              |   21 ++++-
 fs/nfs/callback.h              |   24 +++++-
 fs/nfs/callback_proc.c         |  167 ++++++++++++++++------------------------
 fs/nfs/callback_xdr.c          |   40 ++++++----
 fs/nfs/client.c                |   97 ++++++++++++++++-------
 fs/nfs/internal.h              |    7 +-
 include/linux/sunrpc/bc_xprt.h |   13 +++
 7 files changed, 214 insertions(+), 155 deletions(-)

diff --git a/fs/nfs/callback.c b/fs/nfs/callback.c
index b2fab85..1033eae 100644
--- a/fs/nfs/callback.c
+++ b/fs/nfs/callback.c
@@ -16,9 +16,7 @@
 #include <linux/freezer.h>
 #include <linux/kthread.h>
 #include <linux/sunrpc/svcauth_gss.h>
-#if defined(CONFIG_NFS_V4_1)
 #include <linux/sunrpc/bc_xprt.h>
-#endif
 
 #include <net/inet_sock.h>
 
@@ -388,6 +386,23 @@ static int check_gss_callback_principal(struct nfs_client *clp,
 	return SVC_OK;
 }
 
+/* pg_authenticate method helper */
+static struct nfs_client *nfs_cb_find_client(struct svc_rqst *rqstp)
+{
+	struct nfs4_sessionid *sessionid = bc_xprt_sid(rqstp);
+	int is_cb_compound = rqstp->rq_proc == CB_COMPOUND ? 1 : 0;
+
+	dprintk("--> %s rq_proc %d\n", __func__, rqstp->rq_proc);
+	if (svc_is_backchannel(rqstp))
+		/* Sessionid (usually) set after CB_NULL ping */
+		return nfs4_find_client_sessionid(svc_addr(rqstp), sessionid,
+						  is_cb_compound);
+	else
+		/* No callback identifier in pg_authenticate */
+		return nfs4_find_client_ident(svc_addr(rqstp), 0);
+}
+
+/* pg_authenticate method for nfsv4 callback threads. */
 static int nfs_callback_authenticate(struct svc_rqst *rqstp)
 {
 	struct nfs_client *clp;
@@ -395,7 +410,7 @@ static int nfs_callback_authenticate(struct svc_rqst *rqstp)
 	int ret = SVC_OK;
 
 	/* Don't talk to strangers */
-	clp = nfs_find_client(svc_addr(rqstp), 4);
+	clp = nfs_cb_find_client(rqstp);
 	if (clp == NULL)
 		return SVC_DROP;
 
diff --git a/fs/nfs/callback.h b/fs/nfs/callback.h
index 58d61a8..d700d47 100644
--- a/fs/nfs/callback.h
+++ b/fs/nfs/callback.h
@@ -34,10 +34,17 @@ enum nfs4_callback_opnum {
 	OP_CB_ILLEGAL = 10044,
 };
 
+struct cb_process_state {
+	__be32			drc_status;
+	struct nfs_client	*clp;
+	struct nfs4_sessionid	*svc_sid; /* v4.1 callback service sessionid */
+};
+
 struct cb_compound_hdr_arg {
 	unsigned int taglen;
 	const char *tag;
 	unsigned int minorversion;
+	unsigned int cb_ident; /* v4.0 callback identifier */
 	unsigned nops;
 };
 
@@ -104,7 +111,8 @@ struct cb_sequenceres {
 };
 
 extern unsigned nfs4_callback_sequence(struct cb_sequenceargs *args,
-				       struct cb_sequenceres *res);
+				       struct cb_sequenceres *res,
+				       struct cb_process_state *cps);
 
 extern int nfs41_validate_delegation_stateid(struct nfs_delegation *delegation,
 					     const nfs4_stateid *stateid);
@@ -118,19 +126,25 @@ struct cb_recallanyargs {
 	uint32_t	craa_type_mask;
 };
 
-extern unsigned nfs4_callback_recallany(struct cb_recallanyargs *args, void *dummy);
+extern unsigned nfs4_callback_recallany(struct cb_recallanyargs *args,
+					void *dummy,
+					struct cb_process_state *cps);
 
 struct cb_recallslotargs {
 	struct sockaddr	*crsa_addr;
 	uint32_t	crsa_target_max_slots;
 };
 extern unsigned nfs4_callback_recallslot(struct cb_recallslotargs *args,
-					  void *dummy);
+					 void *dummy,
+					 struct cb_process_state *cps);
 
 #endif /* CONFIG_NFS_V4_1 */
 
-extern __be32 nfs4_callback_getattr(struct cb_getattrargs *args, struct cb_getattrres *res);
-extern __be32 nfs4_callback_recall(struct cb_recallargs *args, void *dummy);
+extern __be32 nfs4_callback_getattr(struct cb_getattrargs *args,
+				    struct cb_getattrres *res,
+				    struct cb_process_state *cps);
+extern __be32 nfs4_callback_recall(struct cb_recallargs *args, void *dummy,
+				   struct cb_process_state *cps);
 
 #ifdef CONFIG_NFS_V4
 extern int nfs_callback_up(u32 minorversion, struct rpc_xprt *xprt);
diff --git a/fs/nfs/callback_proc.c b/fs/nfs/callback_proc.c
index 2950fca..b70e46d 100644
--- a/fs/nfs/callback_proc.c
+++ b/fs/nfs/callback_proc.c
@@ -16,26 +16,28 @@
 #ifdef NFS_DEBUG
 #define NFSDBG_FACILITY NFSDBG_CALLBACK
 #endif
- 
-__be32 nfs4_callback_getattr(struct cb_getattrargs *args, struct cb_getattrres *res)
+
+__be32 nfs4_callback_getattr(struct cb_getattrargs *args,
+			     struct cb_getattrres *res,
+			     struct cb_process_state *cps)
 {
-	struct nfs_client *clp;
 	struct nfs_delegation *delegation;
 	struct nfs_inode *nfsi;
 	struct inode *inode;
 
+	res->status = htonl(NFS4ERR_OP_NOT_IN_SESSION);
+	if (!cps->clp) /* Always set for v4.0. Set in cb_sequence for v4.1 */
+		goto out;
+
 	res->bitmap[0] = res->bitmap[1] = 0;
 	res->status = htonl(NFS4ERR_BADHANDLE);
-	clp = nfs_find_client(args->addr, 4);
-	if (clp == NULL)
-		goto out;
 
 	dprintk("NFS: GETATTR callback request from %s\n",
-		rpc_peeraddr2str(clp->cl_rpcclient, RPC_DISPLAY_ADDR));
+		rpc_peeraddr2str(cps->clp->cl_rpcclient, RPC_DISPLAY_ADDR));
 
-	inode = nfs_delegation_find_inode(clp, &args->fh);
+	inode = nfs_delegation_find_inode(cps->clp, &args->fh);
 	if (inode == NULL)
-		goto out_putclient;
+		goto out;
 	nfsi = NFS_I(inode);
 	rcu_read_lock();
 	delegation = rcu_dereference(nfsi->delegation);
@@ -55,49 +57,41 @@ __be32 nfs4_callback_getattr(struct cb_getattrargs *args, struct cb_getattrres *
 out_iput:
 	rcu_read_unlock();
 	iput(inode);
-out_putclient:
-	nfs_put_client(clp);
 out:
 	dprintk("%s: exit with status = %d\n", __func__, ntohl(res->status));
 	return res->status;
 }
 
-__be32 nfs4_callback_recall(struct cb_recallargs *args, void *dummy)
+__be32 nfs4_callback_recall(struct cb_recallargs *args, void *dummy,
+			    struct cb_process_state *cps)
 {
-	struct nfs_client *clp;
 	struct inode *inode;
 	__be32 res;
 	
-	res = htonl(NFS4ERR_BADHANDLE);
-	clp = nfs_find_client(args->addr, 4);
-	if (clp == NULL)
+	res = htonl(NFS4ERR_OP_NOT_IN_SESSION);
+	if (!cps->clp) /* Always set for v4.0. Set in cb_sequence for v4.1 */
 		goto out;
 
 	dprintk("NFS: RECALL callback request from %s\n",
-		rpc_peeraddr2str(clp->cl_rpcclient, RPC_DISPLAY_ADDR));
-
-	do {
-		struct nfs_client *prev = clp;
-
-		inode = nfs_delegation_find_inode(clp, &args->fh);
-		if (inode != NULL) {
-			/* Set up a helper thread to actually return the delegation */
-			switch (nfs_async_inode_return_delegation(inode, &args->stateid)) {
-				case 0:
-					res = 0;
-					break;
-				case -ENOENT:
-					if (res != 0)
-						res = htonl(NFS4ERR_BAD_STATEID);
-					break;
-				default:
-					res = htonl(NFS4ERR_RESOURCE);
-			}
-			iput(inode);
-		}
-		clp = nfs_find_client_next(prev);
-		nfs_put_client(prev);
-	} while (clp != NULL);
+		rpc_peeraddr2str(cps->clp->cl_rpcclient, RPC_DISPLAY_ADDR));
+
+	res = htonl(NFS4ERR_BADHANDLE);
+	inode = nfs_delegation_find_inode(cps->clp, &args->fh);
+	if (inode == NULL)
+		goto out;
+	/* Set up a helper thread to actually return the delegation */
+	switch (nfs_async_inode_return_delegation(inode, &args->stateid)) {
+	case 0:
+		res = 0;
+		break;
+	case -ENOENT:
+		if (res != 0)
+			res = htonl(NFS4ERR_BAD_STATEID);
+		break;
+	default:
+		res = htonl(NFS4ERR_RESOURCE);
+	}
+	iput(inode);
 out:
 	dprintk("%s: exit with status = %d\n", __func__, ntohl(res));
 	return res;
@@ -185,42 +179,6 @@ validate_seqid(struct nfs4_slot_table *tbl, struct cb_sequenceargs * args)
 }
 
 /*
- * Returns a pointer to a held 'struct nfs_client' that matches the server's
- * address, major version number, and session ID.  It is the caller's
- * responsibility to release the returned reference.
- *
- * Returns NULL if there are no connections with sessions, or if no session
- * matches the one of interest.
- */
- static struct nfs_client *find_client_with_session(
-	const struct sockaddr *addr, u32 nfsversion,
-	struct nfs4_sessionid *sessionid)
-{
-	struct nfs_client *clp;
-
-	clp = nfs_find_client(addr, 4);
-	if (clp == NULL)
-		return NULL;
-
-	do {
-		struct nfs_client *prev = clp;
-
-		if (clp->cl_session != NULL) {
-			if (memcmp(clp->cl_session->sess_id.data,
-					sessionid->data,
-					NFS4_MAX_SESSIONID_LEN) == 0) {
-				/* Returns a held reference to clp */
-				return clp;
-			}
-		}
-		clp = nfs_find_client_next(prev);
-		nfs_put_client(prev);
-	} while (clp != NULL);
-
-	return NULL;
-}
-
-/*
  * For each referring call triple, check the session's slot table for
  * a match.  If the slot is in use and the sequence numbers match, the
  * client is still waiting for a response to the original request.
@@ -276,20 +234,28 @@ out:
 }
 
 __be32 nfs4_callback_sequence(struct cb_sequenceargs *args,
-				struct cb_sequenceres *res)
+			      struct cb_sequenceres *res,
+			      struct cb_process_state *cps)
 {
 	struct nfs_client *clp;
 	int i;
 	__be32 status;
 
+	cps->clp = NULL;
+
 	status = htonl(NFS4ERR_BADSESSION);
-	clp = find_client_with_session(args->csa_addr, 4, &args->csa_sessionid);
+	/* Incoming session must match the callback session */
+	if (memcmp(&args->csa_sessionid, cps->svc_sid, NFS4_MAX_SESSIONID_LEN))
+		goto out;
+
+	clp = nfs4_find_client_sessionid(args->csa_addr,
+					 &args->csa_sessionid, 1);
 	if (clp == NULL)
 		goto out;
 
 	status = validate_seqid(&clp->cl_session->bc_slot_table, args);
 	if (status)
-		goto out_putclient;
+		goto out;
 
 	/*
 	 * Check for pending referring calls.  If a match is found, a
@@ -298,7 +264,7 @@ __be32 nfs4_callback_sequence(struct cb_sequenceargs *args,
 	 */
 	if (referring_call_exists(clp, args->csa_nrclists, args->csa_rclists)) {
 		status = htonl(NFS4ERR_DELAY);
-		goto out_putclient;
+		goto out;
 	}
 
 	memcpy(&res->csr_sessionid, &args->csa_sessionid,
@@ -307,36 +273,36 @@ __be32 nfs4_callback_sequence(struct cb_sequenceargs *args,
 	res->csr_slotid = args->csa_slotid;
 	res->csr_highestslotid = NFS41_BC_MAX_CALLBACKS - 1;
 	res->csr_target_highestslotid = NFS41_BC_MAX_CALLBACKS - 1;
+	cps->clp = clp; /* put in nfs4_callback_compound */
 
-out_putclient:
-	nfs_put_client(clp);
 out:
 	for (i = 0; i < args->csa_nrclists; i++)
 		kfree(args->csa_rclists[i].rcl_refcalls);
 	kfree(args->csa_rclists);
 
-	if (status == htonl(NFS4ERR_RETRY_UNCACHED_REP))
-		res->csr_status = 0;
-	else
+	if (status == htonl(NFS4ERR_RETRY_UNCACHED_REP)) {
+		cps->drc_status = status;
+		status = 0;
+	} else
 		res->csr_status = status;
+
 	dprintk("%s: exit with status = %d res->csr_status %d\n", __func__,
 		ntohl(status), ntohl(res->csr_status));
 	return status;
 }
 
-__be32 nfs4_callback_recallany(struct cb_recallanyargs *args, void *dummy)
+__be32 nfs4_callback_recallany(struct cb_recallanyargs *args, void *dummy,
+			       struct cb_process_state *cps)
 {
-	struct nfs_client *clp;
 	__be32 status;
 	fmode_t flags = 0;
 
 	status = htonl(NFS4ERR_OP_NOT_IN_SESSION);
-	clp = nfs_find_client(args->craa_addr, 4);
-	if (clp == NULL)
+	if (!cps->clp) /* set in cb_sequence */
 		goto out;
 
 	dprintk("NFS: RECALL_ANY callback request from %s\n",
-		rpc_peeraddr2str(clp->cl_rpcclient, RPC_DISPLAY_ADDR));
+		rpc_peeraddr2str(cps->clp->cl_rpcclient, RPC_DISPLAY_ADDR));
 
 	if (test_bit(RCA4_TYPE_MASK_RDATA_DLG, (const unsigned long *)
 		     &args->craa_type_mask))
@@ -346,7 +312,7 @@ __be32 nfs4_callback_recallany(struct cb_recallanyargs *args, void *dummy)
 		flags |= FMODE_WRITE;
 
 	if (flags)
-		nfs_expire_all_delegation_types(clp, flags);
+		nfs_expire_all_delegation_types(cps->clp, flags);
 	status = htonl(NFS4_OK);
 out:
 	dprintk("%s: exit with status = %d\n", __func__, ntohl(status));
@@ -354,36 +320,33 @@ out:
 }
 
 /* Reduce the fore channel's max_slots to the target value */
-__be32 nfs4_callback_recallslot(struct cb_recallslotargs *args, void *dummy)
+__be32 nfs4_callback_recallslot(struct cb_recallslotargs *args, void *dummy,
+				struct cb_process_state *cps)
 {
-	struct nfs_client *clp;
 	struct nfs4_slot_table *fc_tbl;
 	__be32 status;
 
 	status = htonl(NFS4ERR_OP_NOT_IN_SESSION);
-	clp = nfs_find_client(args->crsa_addr, 4);
-	if (clp == NULL)
+	if (!cps->clp) /* set in cb_sequence */
 		goto out;
 
 	dprintk("NFS: CB_RECALL_SLOT request from %s target max slots %d\n",
-		rpc_peeraddr2str(clp->cl_rpcclient, RPC_DISPLAY_ADDR),
+		rpc_peeraddr2str(cps->clp->cl_rpcclient, RPC_DISPLAY_ADDR),
 		args->crsa_target_max_slots);
 
-	fc_tbl = &clp->cl_session->fc_slot_table;
+	fc_tbl = &cps->clp->cl_session->fc_slot_table;
 
 	status = htonl(NFS4ERR_BAD_HIGH_SLOT);
 	if (args->crsa_target_max_slots > fc_tbl->max_slots ||
 	    args->crsa_target_max_slots < 1)
-		goto out_putclient;
+		goto out;
 
 	status = htonl(NFS4_OK);
 	if (args->crsa_target_max_slots == fc_tbl->max_slots)
-		goto out_putclient;
+		goto out;
 
 	fc_tbl->target_max_slots = args->crsa_target_max_slots;
-	nfs41_handle_recall_slot(clp);
-out_putclient:
-	nfs_put_client(clp);	/* balance nfs_find_client */
+	nfs41_handle_recall_slot(cps->clp);
 out:
 	dprintk("%s: exit with status = %d\n", __func__, ntohl(status));
 	return status;
diff --git a/fs/nfs/callback_xdr.c b/fs/nfs/callback_xdr.c
index 05af212..85cbb8f 100644
--- a/fs/nfs/callback_xdr.c
+++ b/fs/nfs/callback_xdr.c
@@ -10,8 +10,10 @@
 #include <linux/nfs4.h>
 #include <linux/nfs_fs.h>
 #include <linux/slab.h>
+#include <linux/sunrpc/bc_xprt.h>
 #include "nfs4_fs.h"
 #include "callback.h"
+#include "internal.h"
 
 #define CB_OP_TAGLEN_MAXSZ	(512)
 #define CB_OP_HDR_RES_MAXSZ	(2 + CB_OP_TAGLEN_MAXSZ)
@@ -33,7 +35,8 @@
 /* Internal error code */
 #define NFS4ERR_RESOURCE_HDR	11050
 
-typedef __be32 (*callback_process_op_t)(void *, void *);
+typedef __be32 (*callback_process_op_t)(void *, void *,
+					struct cb_process_state *);
 typedef __be32 (*callback_decode_arg_t)(struct svc_rqst *, struct xdr_stream *, void *);
 typedef __be32 (*callback_encode_res_t)(struct svc_rqst *, struct xdr_stream *, void *);
 
@@ -160,7 +163,7 @@ static __be32 decode_compound_hdr_arg(struct xdr_stream *xdr, struct cb_compound
 	hdr->minorversion = ntohl(*p++);
 	/* Check minor version is zero or one. */
 	if (hdr->minorversion <= 1) {
-		p++;	/* skip callback_ident */
+		hdr->cb_ident = ntohl(*p++); /* ignored by v4.1 */
 	} else {
 		printk(KERN_WARNING "%s: NFSv4 server callback with "
 			"illegal minor version %u!\n",
@@ -621,7 +624,8 @@ preprocess_nfs4_op(unsigned int op_nr, struct callback_op **op)
 static __be32 process_op(uint32_t minorversion, int nop,
 		struct svc_rqst *rqstp,
 		struct xdr_stream *xdr_in, void *argp,
-		struct xdr_stream *xdr_out, void *resp, int* drc_status)
+		struct xdr_stream *xdr_out, void *resp,
+		struct cb_process_state *cps)
 {
 	struct callback_op *op = &callback_ops[0];
 	unsigned int op_nr;
@@ -644,8 +648,8 @@ static __be32 process_op(uint32_t minorversion, int nop,
 	if (status)
 		goto encode_hdr;
 
-	if (*drc_status) {
-		status = *drc_status;
+	if (cps->drc_status) {
+		status = cps->drc_status;
 		goto encode_hdr;
 	}
 
@@ -653,16 +657,10 @@ static __be32 process_op(uint32_t minorversion, int nop,
 	if (maxlen > 0 && maxlen < PAGE_SIZE) {
 		status = op->decode_args(rqstp, xdr_in, argp);
 		if (likely(status == 0))
-			status = op->process_op(argp, resp);
+			status = op->process_op(argp, resp, cps);
 	} else
 		status = htonl(NFS4ERR_RESOURCE);
 
-	/* Only set by OP_CB_SEQUENCE processing */
-	if (status == htonl(NFS4ERR_RETRY_UNCACHED_REP)) {
-		*drc_status = status;
-		status = 0;
-	}
-
 encode_hdr:
 	res = encode_op_hdr(xdr_out, op_nr, status);
 	if (unlikely(res))
@@ -681,8 +679,11 @@ static __be32 nfs4_callback_compound(struct svc_rqst *rqstp, void *argp, void *r
 	struct cb_compound_hdr_arg hdr_arg = { 0 };
 	struct cb_compound_hdr_res hdr_res = { NULL };
 	struct xdr_stream xdr_in, xdr_out;
-	__be32 *p;
-	__be32 status, drc_status = 0;
+	__be32 *p, status;
+	struct cb_process_state cps = {
+		.drc_status = 0,
+		.clp = NULL,
+	};
 	unsigned int nops = 0;
 
 	dprintk("%s: start\n", __func__);
@@ -696,6 +697,14 @@ static __be32 nfs4_callback_compound(struct svc_rqst *rqstp, void *argp, void *r
 	if (status == __constant_htonl(NFS4ERR_RESOURCE))
 		return rpc_garbage_args;
 
+	if (hdr_arg.minorversion == 0) {
+		cps.clp = nfs4_find_client_ident(svc_addr(rqstp),
+						 hdr_arg.cb_ident);
+		if (!cps.clp)
+			return rpc_drop_reply;
+	} else
+		cps.svc_sid = bc_xprt_sid(rqstp);
+
 	hdr_res.taglen = hdr_arg.taglen;
 	hdr_res.tag = hdr_arg.tag;
 	if (encode_compound_hdr_res(&xdr_out, &hdr_res) != 0)
@@ -703,7 +712,7 @@ static __be32 nfs4_callback_compound(struct svc_rqst *rqstp, void *argp, void *r
 
 	while (status == 0 && nops != hdr_arg.nops) {
 		status = process_op(hdr_arg.minorversion, nops, rqstp,
-				    &xdr_in, argp, &xdr_out, resp, &drc_status);
+				    &xdr_in, argp, &xdr_out, resp, &cps);
 		nops++;
 	}
 
@@ -716,6 +725,7 @@ static __be32 nfs4_callback_compound(struct svc_rqst *rqstp, void *argp, void *r
 
 	*hdr_res.status = status;
 	*hdr_res.nops = htonl(nops);
+	nfs_put_client(cps.clp);
 	dprintk("%s: done, status = %u\n", __func__, ntohl(status));
 	return rpc_success;
 }
diff --git a/fs/nfs/client.c b/fs/nfs/client.c
index 855add6..5443e0e 100644
--- a/fs/nfs/client.c
+++ b/fs/nfs/client.c
@@ -359,29 +359,53 @@ static int nfs_sockaddr_cmp(const struct sockaddr *sa1,
 	return 0;
 }
 
+/* Common match routine for v4.0 and v4.1 callback services */
+bool
+nfs4_cb_match_client(const struct sockaddr *addr, struct nfs_client *clp,
+		     u32 minorversion)
+{
+	struct sockaddr *clap = (struct sockaddr *)&clp->cl_addr;
+
+	/* Don't match clients that failed to initialise */
+	if (!(clp->cl_cons_state == NFS_CS_READY ||
+	    clp->cl_cons_state == NFS_CS_SESSION_INITING))
+		return false;
+
+	/* Match the version and minorversion */
+	if (clp->rpc_ops->version != 4 ||
+	    clp->cl_minorversion != minorversion)
+		return false;
+
+	/* Match only the IP address, not the port number */
+	if (!nfs_sockaddr_match_ipaddr(addr, clap))
+		return false;
+
+	return true;
+}
+
 /*
- * Find a client by IP address and protocol version
- * - returns NULL if no such client
+ * NFSv4.0 callback thread helper
+ *
+ * Find a client by IP address, protocol version, minorversion, and callback
+ * identifier.
+ *
+ * Called from the pg_authenticate method with a zero callback_ident
+ * because the callback identifier has not been decoded.
+ *
+ * Returns NULL if no such client
  */
-struct nfs_client *nfs_find_client(const struct sockaddr *addr, u32 nfsversion)
+struct nfs_client *
+nfs4_find_client_ident(const struct sockaddr *addr, u32 callback_ident)
 {
 	struct nfs_client *clp;
 
 	spin_lock(&nfs_client_lock);
 	list_for_each_entry(clp, &nfs_client_list, cl_share_link) {
-		struct sockaddr *clap = (struct sockaddr *)&clp->cl_addr;
-
-		/* Don't match clients that failed to initialise properly */
-		if (!(clp->cl_cons_state == NFS_CS_READY ||
-		      clp->cl_cons_state == NFS_CS_SESSION_INITING))
-			continue;
-
-		/* Different NFS versions cannot share the same nfs_client */
-		if (clp->rpc_ops->version != nfsversion)
+		if (nfs4_cb_match_client(addr, clp, 0) == false)
 			continue;
 
-		/* Match only the IP address, not the port number */
-		if (!nfs_sockaddr_match_ipaddr(addr, clap))
+		/* Match the non-zero callback identifier */
+		if (callback_ident != 0 && clp->cl_cb_ident != callback_ident)
 			continue;
 
 		atomic_inc(&clp->cl_count);
@@ -392,29 +416,36 @@ struct nfs_client *nfs_find_client(const struct sockaddr *addr, u32 nfsversion)
 	return NULL;
 }
 
+#if defined(CONFIG_NFS_V4_1)
 /*
- * Find a client by IP address and protocol version
- * - returns NULL if no such client
+ * NFSv4.1 callback thread helper
+ * For CB_COMPOUND calls, find a client by IP address, protocol version,
+ * minorversion, and sessionID
+ *
+ * CREATE_SESSION triggers a CB_NULL ping from servers. The callback service
+ * sessionid can only be set after the CREATE_SESSION return, so a CB_NULL
+ * can arrive before the callback sessionid is set. For CB_NULL calls,
+ * find a client by IP address protocol version, and minorversion.
+ *
+ * Returns NULL if no such client
  */
-struct nfs_client *nfs_find_client_next(struct nfs_client *clp)
+struct nfs_client *
+nfs4_find_client_sessionid(const struct sockaddr *addr,
+			   struct nfs4_sessionid *sid, int is_cb_compound)
 {
-	struct sockaddr *sap = (struct sockaddr *)&clp->cl_addr;
-	u32 nfsvers = clp->rpc_ops->version;
+	struct nfs_client *clp;
 
 	spin_lock(&nfs_client_lock);
-	list_for_each_entry_continue(clp, &nfs_client_list, cl_share_link) {
-		struct sockaddr *clap = (struct sockaddr *)&clp->cl_addr;
-
-		/* Don't match clients that failed to initialise properly */
-		if (clp->cl_cons_state != NFS_CS_READY)
+	list_for_each_entry(clp, &nfs_client_list, cl_share_link) {
+		if (nfs4_cb_match_client(addr, clp, 1) == false)
 			continue;
 
-		/* Different NFS versions cannot share the same nfs_client */
-		if (clp->rpc_ops->version != nfsvers)
+		if (!nfs4_has_session(clp))
 			continue;
 
-		/* Match only the IP address, not the port number */
-		if (!nfs_sockaddr_match_ipaddr(sap, clap))
+		/* Match sessionid unless cb_null call */
+		if (is_cb_compound && (memcmp(clp->cl_session->sess_id.data,
+		    sid->data, NFS4_MAX_SESSIONID_LEN) != 0))
 			continue;
 
 		atomic_inc(&clp->cl_count);
@@ -425,6 +456,16 @@ struct nfs_client *nfs_find_client_next(struct nfs_client *clp)
 	return NULL;
 }
 
+#else /* CONFIG_NFS_V4_1 */
+
+struct nfs_client *
+nfs4_find_client_sessionid(const struct sockaddr *addr,
+			   struct nfs4_sessionid *sid, int is_cb_compound)
+{
+	return NULL;
+}
+
+#endif /* CONFIG_NFS_V4_1 */
 /*
  * Find an nfs_client on the list that matches the initialisation data
  * that is supplied.
diff --git a/fs/nfs/internal.h b/fs/nfs/internal.h
index e6356b7..f65602c 100644
--- a/fs/nfs/internal.h
+++ b/fs/nfs/internal.h
@@ -129,8 +129,11 @@ extern void nfs_umount(const struct nfs_mount_request *info);
 extern struct rpc_program nfs_program;
 
 extern void nfs_put_client(struct nfs_client *);
-extern struct nfs_client *nfs_find_client(const struct sockaddr *, u32);
-extern struct nfs_client *nfs_find_client_next(struct nfs_client *);
+extern struct nfs_client *nfs4_find_client_ident(const struct sockaddr *,
+						    u32);
+extern struct nfs_client *
+nfs4_find_client_sessionid(const struct sockaddr *, struct nfs4_sessionid *,
+			   int);
 extern struct nfs_server *nfs_create_server(
 					const struct nfs_parsed_mount_data *,
 					struct nfs_fh *);
diff --git a/include/linux/sunrpc/bc_xprt.h b/include/linux/sunrpc/bc_xprt.h
index 7c91260..2c60e09 100644
--- a/include/linux/sunrpc/bc_xprt.h
+++ b/include/linux/sunrpc/bc_xprt.h
@@ -47,6 +47,14 @@ static inline int svc_is_backchannel(const struct svc_rqst *rqstp)
 		return 1;
 	return 0;
 }
+static inline struct nfs4_sessionid *bc_xprt_sid(struct svc_rqst *rqstp)
+{
+	if (svc_is_backchannel(rqstp))
+		return (struct nfs4_sessionid *)
+					rqstp->rq_server->bc_xprt->xpt_bc_sid;
+	return NULL;
+}
+
 #else /* CONFIG_NFS_V4_1 */
 static inline int xprt_setup_backchannel(struct rpc_xprt *xprt,
 					 unsigned int min_reqs)
@@ -59,6 +67,11 @@ static inline int svc_is_backchannel(const struct svc_rqst *rqstp)
 	return 0;
 }
 
+static inline struct nfs4_sessionid *bc_xprt_sid(struct svc_rqst *rqstp)
+{
+	return NULL;
+}
+
 static inline void xprt_free_bc_request(struct rpc_rqst *req)
 {
 }
-- 
1.6.6



* [PATCH_V4 7/9] NFS RPC_AUTH_GSS unsupported on v4.1 back channel
  2010-12-17 18:20           ` [PATCH_V4 6/9] NFS reference nfs_client across cb_compound processing andros
@ 2010-12-17 18:20             ` andros
  2010-12-17 18:20               ` [PATCH_V4 8/9] NFS add session back channel draining andros
  2010-12-20 14:05             ` [PATCH_V4 6/9] NFS reference nfs_client across cb_compound processing Benny Halevy
  2010-12-20 15:44             ` Fred Isaman
  2 siblings, 1 reply; 21+ messages in thread
From: andros @ 2010-12-17 18:20 UTC (permalink / raw)
  To: trond.myklebust; +Cc: bfields, linux-nfs, Andy Adamson

From: Andy Adamson <andros@netapp.com>

Signed-off-by: Andy Adamson <andros@netapp.com>
---
 fs/nfs/callback.c |    3 +++
 1 files changed, 3 insertions(+), 0 deletions(-)

diff --git a/fs/nfs/callback.c b/fs/nfs/callback.c
index 1033eae..52e6695 100644
--- a/fs/nfs/callback.c
+++ b/fs/nfs/callback.c
@@ -369,6 +369,9 @@ static int check_gss_callback_principal(struct nfs_client *clp,
 	struct rpc_clnt *r = clp->cl_rpcclient;
 	char *p = svc_gss_principal(rqstp);
 
+	/* No RPC_AUTH_GSS on NFSv4.1 back channel yet */
+	if (clp->cl_minorversion != 0)
+		return SVC_DENIED;
 	/*
 	 * It might just be a normal user principal, in which case
 	 * userspace won't bother to tell us the name at all.
-- 
1.6.6



* [PATCH_V4 8/9] NFS add session back channel draining
  2010-12-17 18:20             ` [PATCH_V4 7/9] NFS RPC_AUTH_GSS unsupported on v4.1 back channel andros
@ 2010-12-17 18:20               ` andros
  2010-12-17 18:20                 ` [PATCH_V4 9/9] NFS rename client back channel transport field andros
  0 siblings, 1 reply; 21+ messages in thread
From: andros @ 2010-12-17 18:20 UTC (permalink / raw)
  To: trond.myklebust; +Cc: bfields, linux-nfs, Andy Adamson

From: Andy Adamson <andros@netapp.com>

Currently, session draining only drains the fore channel. Back channel
processing must also be drained.

Use the back channel highest_used_slotid to indicate that a callback is being
processed by the callback thread.  Move the session completion to be per
channel.

When the session is draining, wait for any current back channel processing
to complete and stop all new back channel processing by returning
NFS4ERR_DELAY to the back channel client.

Drain the back channel, then the fore channel.
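
In outline (condensed from the hunks below):

	/* state manager: nfs4_begin_drain_session() */
	set_bit(NFS4_SESSION_DRAINING, &ses->session_state);
	nfs4_wait_on_slot_tbl(&ses->bc_slot_table);	/* back channel first */
	nfs4_wait_on_slot_tbl(&ses->fc_slot_table);	/* then fore channel */

	/* callback thread: at the end of CB_COMPOUND processing */
	tbl->highest_used_slotid--;		/* single bc slot: 0 -> -1 */
	nfs4_check_drain_bc_complete(session);	/* wakes the state manager */

New CB_SEQUENCE requests that arrive while NFS4_SESSION_DRAINING is set are
answered with NFS4ERR_DELAY, so the back channel quiesces.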

Signed-off-by: Andy Adamson <andros@netapp.com>
---
 fs/nfs/callback.h         |    2 +-
 fs/nfs/callback_proc.c    |    6 ++++++
 fs/nfs/callback_xdr.c     |   24 ++++++++++++++++++++++++
 fs/nfs/nfs4proc.c         |   26 +++++++++++++++++++-------
 fs/nfs/nfs4state.c        |   29 ++++++++++++++++++++++-------
 include/linux/nfs_fs_sb.h |    2 +-
 6 files changed, 73 insertions(+), 16 deletions(-)

diff --git a/fs/nfs/callback.h b/fs/nfs/callback.h
index d700d47..2d927d4 100644
--- a/fs/nfs/callback.h
+++ b/fs/nfs/callback.h
@@ -138,6 +138,7 @@ extern unsigned nfs4_callback_recallslot(struct cb_recallslotargs *args,
 					 void *dummy,
 					 struct cb_process_state *cps);
 
+extern void nfs4_check_drain_bc_complete(struct nfs4_session *ses);
 #endif /* CONFIG_NFS_V4_1 */
 
 extern __be32 nfs4_callback_getattr(struct cb_getattrargs *args,
@@ -145,7 +146,6 @@ extern __be32 nfs4_callback_getattr(struct cb_getattrargs *args,
 				    struct cb_process_state *cps);
 extern __be32 nfs4_callback_recall(struct cb_recallargs *args, void *dummy,
 				   struct cb_process_state *cps);
-
 #ifdef CONFIG_NFS_V4
 extern int nfs_callback_up(u32 minorversion, struct rpc_xprt *xprt);
 extern void nfs_callback_down(int minorversion);
diff --git a/fs/nfs/callback_proc.c b/fs/nfs/callback_proc.c
index b70e46d..a6af836 100644
--- a/fs/nfs/callback_proc.c
+++ b/fs/nfs/callback_proc.c
@@ -253,6 +253,12 @@ __be32 nfs4_callback_sequence(struct cb_sequenceargs *args,
 	if (clp == NULL)
 		goto out;
 
+	/* state manager is resetting the session */
+	if (test_bit(NFS4_SESSION_DRAINING, &clp->cl_session->session_state)) {
+		status = NFS4ERR_DELAY;
+		goto out;
+	}
+
 	status = validate_seqid(&clp->cl_session->bc_slot_table, args);
 	if (status)
 		goto out;
diff --git a/fs/nfs/callback_xdr.c b/fs/nfs/callback_xdr.c
index 85cbb8f..09ca7c4 100644
--- a/fs/nfs/callback_xdr.c
+++ b/fs/nfs/callback_xdr.c
@@ -596,6 +596,26 @@ preprocess_nfs41_op(int nop, unsigned int op_nr, struct callback_op **op)
 	return htonl(NFS_OK);
 }
 
+static void nfs4_callback_free_slot(struct nfs4_session *session)
+{
+	struct nfs4_slot_table *tbl = &session->bc_slot_table;
+
+	spin_lock(&tbl->slot_tbl_lock);
+	/*
+	 * Let the state manager know callback processing done.
+	 * A single slot, so highest used slotid is either 0 or -1
+	 */
+	tbl->highest_used_slotid--;
+	nfs4_check_drain_bc_complete(session);
+	spin_unlock(&tbl->slot_tbl_lock);
+}
+
+static void nfs4_cb_free_slot(struct nfs_client *clp)
+{
+	if (clp && clp->cl_session)
+		nfs4_callback_free_slot(clp->cl_session);
+}
+
 #else /* CONFIG_NFS_V4_1 */
 
 static __be32
@@ -604,6 +624,9 @@ preprocess_nfs41_op(int nop, unsigned int op_nr, struct callback_op **op)
 	return htonl(NFS4ERR_MINOR_VERS_MISMATCH);
 }
 
+static void nfs4_cb_free_slot(struct nfs_client *clp)
+{
+}
 #endif /* CONFIG_NFS_V4_1 */
 
 static __be32
@@ -725,6 +748,7 @@ static __be32 nfs4_callback_compound(struct svc_rqst *rqstp, void *argp, void *r
 
 	*hdr_res.status = status;
 	*hdr_res.nops = htonl(nops);
+	nfs4_cb_free_slot(cps.clp);
 	nfs_put_client(cps.clp);
 	dprintk("%s: done, status = %u\n", __func__, ntohl(status));
 	return rpc_success;
diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
index ab5a2c4..ecb2a5e 100644
--- a/fs/nfs/nfs4proc.c
+++ b/fs/nfs/nfs4proc.c
@@ -355,9 +355,9 @@ nfs4_free_slot(struct nfs4_slot_table *tbl, struct nfs4_slot *free_slot)
 }
 
 /*
- * Signal state manager thread if session is drained
+ * Signal state manager thread if session fore channel is drained
  */
-static void nfs41_check_drain_session_complete(struct nfs4_session *ses)
+static void nfs4_check_drain_fc_complete(struct nfs4_session *ses)
 {
 	struct rpc_task *task;
 
@@ -371,8 +371,20 @@ static void nfs41_check_drain_session_complete(struct nfs4_session *ses)
 	if (ses->fc_slot_table.highest_used_slotid != -1)
 		return;
 
-	dprintk("%s COMPLETE: Session Drained\n", __func__);
-	complete(&ses->complete);
+	dprintk("%s COMPLETE: Session Fore Channel Drained\n", __func__);
+	complete(&ses->fc_slot_table.complete);
+}
+
+/*
+ * Signal state manager thread if session back channel is drained
+ */
+void nfs4_check_drain_bc_complete(struct nfs4_session *ses)
+{
+	if (!test_bit(NFS4_SESSION_DRAINING, &ses->session_state) ||
+	    ses->bc_slot_table.highest_used_slotid != -1)
+		return;
+	dprintk("%s COMPLETE: Session Back Channel Drained\n", __func__);
+	complete(&ses->bc_slot_table.complete);
 }
 
 static void nfs41_sequence_free_slot(struct nfs4_sequence_res *res)
@@ -389,7 +401,7 @@ static void nfs41_sequence_free_slot(struct nfs4_sequence_res *res)
 
 	spin_lock(&tbl->slot_tbl_lock);
 	nfs4_free_slot(tbl, res->sr_slot);
-	nfs41_check_drain_session_complete(res->sr_session);
+	nfs4_check_drain_fc_complete(res->sr_session);
 	spin_unlock(&tbl->slot_tbl_lock);
 	res->sr_slot = NULL;
 }
@@ -4781,17 +4793,17 @@ struct nfs4_session *nfs4_alloc_session(struct nfs_client *clp)
 	if (!session)
 		return NULL;
 
-	init_completion(&session->complete);
-
 	tbl = &session->fc_slot_table;
 	tbl->highest_used_slotid = -1;
 	spin_lock_init(&tbl->slot_tbl_lock);
 	rpc_init_priority_wait_queue(&tbl->slot_tbl_waitq, "ForeChannel Slot table");
+	init_completion(&tbl->complete);
 
 	tbl = &session->bc_slot_table;
 	tbl->highest_used_slotid = -1;
 	spin_lock_init(&tbl->slot_tbl_lock);
 	rpc_init_wait_queue(&tbl->slot_tbl_waitq, "BackChannel Slot table");
+	init_completion(&tbl->complete);
 
 	session->session_state = 1<<NFS4_SESSION_INITING;
 
diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c
index 11290de..fb23a32 100644
--- a/fs/nfs/nfs4state.c
+++ b/fs/nfs/nfs4state.c
@@ -143,6 +143,11 @@ static int nfs41_setup_state_renewal(struct nfs_client *clp)
 	return status;
 }
 
+/*
+ * Back channel returns NFS4ERR_DELAY for new requests when
+ * NFS4_SESSION_DRAINING is set so there is no work to be done when draining
+ * is ended.
+ */
 static void nfs4_end_drain_session(struct nfs_client *clp)
 {
 	struct nfs4_session *ses = clp->cl_session;
@@ -166,22 +171,32 @@ static void nfs4_end_drain_session(struct nfs_client *clp)
 	}
 }
 
-static int nfs4_begin_drain_session(struct nfs_client *clp)
+static int nfs4_wait_on_slot_tbl(struct nfs4_slot_table *tbl)
 {
-	struct nfs4_session *ses = clp->cl_session;
-	struct nfs4_slot_table *tbl = &ses->fc_slot_table;
-
 	spin_lock(&tbl->slot_tbl_lock);
-	set_bit(NFS4_SESSION_DRAINING, &ses->session_state);
 	if (tbl->highest_used_slotid != -1) {
-		INIT_COMPLETION(ses->complete);
+		INIT_COMPLETION(tbl->complete);
 		spin_unlock(&tbl->slot_tbl_lock);
-		return wait_for_completion_interruptible(&ses->complete);
+		return wait_for_completion_interruptible(&tbl->complete);
 	}
 	spin_unlock(&tbl->slot_tbl_lock);
 	return 0;
 }
 
+static int nfs4_begin_drain_session(struct nfs_client *clp)
+{
+	struct nfs4_session *ses = clp->cl_session;
+	int ret = 0;
+
+	set_bit(NFS4_SESSION_DRAINING, &ses->session_state);
+	/* back channel */
+	ret = nfs4_wait_on_slot_tbl(&ses->bc_slot_table);
+	if (ret)
+		return ret;
+	/* fore channel */
+	return nfs4_wait_on_slot_tbl(&ses->fc_slot_table);
+}
+
 int nfs41_init_clientid(struct nfs_client *clp, struct rpc_cred *cred)
 {
 	int status;
diff --git a/include/linux/nfs_fs_sb.h b/include/linux/nfs_fs_sb.h
index 1eaa054..e93ada0 100644
--- a/include/linux/nfs_fs_sb.h
+++ b/include/linux/nfs_fs_sb.h
@@ -197,6 +197,7 @@ struct nfs4_slot_table {
 						 * op for dynamic resizing */
 	int		target_max_slots;	/* Set by CB_RECALL_SLOT as
 						 * the new max_slots */
+	struct completion complete;
 };
 
 static inline int slot_idx(struct nfs4_slot_table *tbl, struct nfs4_slot *sp)
@@ -213,7 +214,6 @@ struct nfs4_session {
 	unsigned long			session_state;
 	u32				hash_alg;
 	u32				ssv_len;
-	struct completion		complete;
 
 	/* The fore and back channel */
 	struct nfs4_channel_attrs	fc_attrs;
-- 
1.6.6



* [PATCH_V4 9/9] NFS rename client back channel transport field
  2010-12-17 18:20               ` [PATCH_V4 8/9] NFS add session back channel draining andros
@ 2010-12-17 18:20                 ` andros
  0 siblings, 0 replies; 21+ messages in thread
From: andros @ 2010-12-17 18:20 UTC (permalink / raw)
  To: trond.myklebust; +Cc: bfields, linux-nfs, Andy Adamson

From: Andy Adamson <andros@netapp.com>

Rename the svc_serv back channel transport field from bc_xprt to sv_bc_xprt
to differentiate it from the server backchannel.

Signed-off-by: Andy Adamson <andros@netapp.com>
---
 fs/nfs/callback.c              |   12 ++++++------
 include/linux/sunrpc/bc_xprt.h |    4 ++--
 include/linux/sunrpc/svc.h     |    2 +-
 net/sunrpc/svc.c               |    6 +++---
 net/sunrpc/svcsock.c           |    2 +-
 5 files changed, 13 insertions(+), 13 deletions(-)

diff --git a/fs/nfs/callback.c b/fs/nfs/callback.c
index 52e6695..e9a4453 100644
--- a/fs/nfs/callback.c
+++ b/fs/nfs/callback.c
@@ -142,7 +142,7 @@ int nfs4_set_callback_sessionid(struct nfs_client *clp)
 	struct svc_serv *serv = clp->cl_rpcclient->cl_xprt->bc_serv;
 	struct nfs4_sessionid *bc_sid;
 
-	if (!serv->bc_xprt)
+	if (!serv->sv_bc_xprt)
 		return -EINVAL;
 
 	/* on success freed in xprt_free */
@@ -152,12 +152,12 @@ int nfs4_set_callback_sessionid(struct nfs_client *clp)
 	memcpy(bc_sid->data, &clp->cl_session->sess_id.data,
 		NFS4_MAX_SESSIONID_LEN);
 	spin_lock_bh(&serv->sv_cb_lock);
-	serv->bc_xprt->xpt_bc_sid = bc_sid;
+	serv->sv_bc_xprt->xpt_bc_sid = bc_sid;
 	spin_unlock_bh(&serv->sv_cb_lock);
-	dprintk("%s set xpt_bc_sid=%u:%u:%u:%u for bc_xprt %p\n", __func__,
+	dprintk("%s set xpt_bc_sid=%u:%u:%u:%u for sv_bc_xprt %p\n", __func__,
 		((u32 *)bc_sid->data)[0], ((u32 *)bc_sid->data)[1],
 		((u32 *)bc_sid->data)[2], ((u32 *)bc_sid->data)[3],
-		serv->bc_xprt);
+		serv->sv_bc_xprt);
 	return 0;
 }
 
@@ -233,8 +233,8 @@ nfs41_callback_up(struct svc_serv *serv, struct rpc_xprt *xprt)
 	init_waitqueue_head(&serv->sv_cb_waitq);
 	rqstp = svc_prepare_thread(serv, &serv->sv_pools[0]);
 	if (IS_ERR(rqstp)) {
-		svc_xprt_put(serv->bc_xprt);
-		serv->bc_xprt = NULL;
+		svc_xprt_put(serv->sv_bc_xprt);
+		serv->sv_bc_xprt = NULL;
 	}
 out:
 	dprintk("--> %s return %ld\n", __func__, PTR_ERR(rqstp));
diff --git a/include/linux/sunrpc/bc_xprt.h b/include/linux/sunrpc/bc_xprt.h
index 2c60e09..c50b458 100644
--- a/include/linux/sunrpc/bc_xprt.h
+++ b/include/linux/sunrpc/bc_xprt.h
@@ -43,7 +43,7 @@ int bc_send(struct rpc_rqst *req);
  */
 static inline int svc_is_backchannel(const struct svc_rqst *rqstp)
 {
-	if (rqstp->rq_server->bc_xprt)
+	if (rqstp->rq_server->sv_bc_xprt)
 		return 1;
 	return 0;
 }
@@ -51,7 +51,7 @@ static inline struct nfs4_sessionid *bc_xprt_sid(struct svc_rqst *rqstp)
 {
 	if (svc_is_backchannel(rqstp))
 		return (struct nfs4_sessionid *)
-					rqstp->rq_server->bc_xprt->xpt_bc_sid;
+			rqstp->rq_server->sv_bc_xprt->xpt_bc_sid;
 	return NULL;
 }
 
diff --git a/include/linux/sunrpc/svc.h b/include/linux/sunrpc/svc.h
index 5a3085b..c81d4d8 100644
--- a/include/linux/sunrpc/svc.h
+++ b/include/linux/sunrpc/svc.h
@@ -99,7 +99,7 @@ struct svc_serv {
 	spinlock_t		sv_cb_lock;	/* protects the svc_cb_list */
 	wait_queue_head_t	sv_cb_waitq;	/* sleep here if there are no
 						 * entries in the svc_cb_list */
-	struct svc_xprt		*bc_xprt;
+	struct svc_xprt		*sv_bc_xprt;	/* callback on fore channel */
 #endif /* CONFIG_NFS_V4_1 */
 };
 
diff --git a/net/sunrpc/svc.c b/net/sunrpc/svc.c
index 3520cb3..5f7397b 100644
--- a/net/sunrpc/svc.c
+++ b/net/sunrpc/svc.c
@@ -1263,7 +1263,7 @@ bc_svc_process(struct svc_serv *serv, struct rpc_rqst *req,
 	int		ret;
 
 	/* Build the svc_rqst used by the common processing routine */
-	rqstp->rq_xprt = serv->bc_xprt;
+	rqstp->rq_xprt = serv->sv_bc_xprt;
 	rqstp->rq_xid = req->rq_xid;
 	rqstp->rq_prot = req->rq_xprt->prot;
 	rqstp->rq_server = serv;
@@ -1289,14 +1289,14 @@ bc_svc_process(struct svc_serv *serv, struct rpc_rqst *req,
 	svc_getnl(argv);	/* CALLDIR */
 
 	/* Hold a reference to the back channel socket across the call */
-	svc_xprt_get(serv->bc_xprt);
+	svc_xprt_get(serv->sv_bc_xprt);
 	ret = svc_process_common(rqstp, argv, resv);
 	if (ret <= 0)
 		return ret;
 
 	memcpy(&req->rq_snd_buf, &rqstp->rq_res, sizeof(req->rq_snd_buf));
 	ret = bc_send(req);
-	svc_xprt_put(serv->bc_xprt);
+	svc_xprt_put(serv->sv_bc_xprt);
 	return ret;
 }
 EXPORT_SYMBOL(bc_svc_process);
diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
index f8b5ca9..e495a45 100644
--- a/net/sunrpc/svcsock.c
+++ b/net/sunrpc/svcsock.c
@@ -1615,7 +1615,7 @@ static struct svc_xprt *svc_bc_create_socket(struct svc_serv *serv,
 	xprt = &svsk->sk_xprt;
 	svc_xprt_init(&svc_tcp_bc_class, xprt, serv);
 
-	serv->bc_xprt = xprt;
+	serv->sv_bc_xprt = xprt;
 
 	return xprt;
 }
-- 
1.6.6



* Re: [PATCH_V4 1/9] SUNRPC new transport for the NFSv4.1 shared back channel
  2010-12-17 18:20 ` [PATCH_V4 1/9] SUNRPC new transport for the NFSv4.1 shared back channel andros
  2010-12-17 18:20   ` [PATCH_V4 2/9] NFS use svc_create_xprt for NFSv4.1 callback service andros
@ 2010-12-17 18:40   ` J. Bruce Fields
  2010-12-17 18:54     ` William A. (Andy) Adamson
  1 sibling, 1 reply; 21+ messages in thread
From: J. Bruce Fields @ 2010-12-17 18:40 UTC (permalink / raw)
  To: andros; +Cc: trond.myklebust, bfields, linux-nfs

On Fri, Dec 17, 2010 at 01:20:02PM -0500, andros@netapp.com wrote:
> From: Andy Adamson <andros@netapp.com>
> 
> Create a new transport for the shared back channel. Move the current sock
> create and destroy routines into the new transport ops.
> 
> Reference the back channel transport across message processing (recv/send).
> 
> Signed-off-by: Andy Adamson <andros@netapp.com>
> ---
>  include/linux/sunrpc/svcsock.h |    1 +
>  net/sunrpc/svc.c               |   18 +++---
>  net/sunrpc/svcsock.c           |  122 +++++++++++++++++++++++++++++++++-------
>  3 files changed, 112 insertions(+), 29 deletions(-)
> 
> diff --git a/include/linux/sunrpc/svcsock.h b/include/linux/sunrpc/svcsock.h
> index 1b353a7..3a45a80 100644
> --- a/include/linux/sunrpc/svcsock.h
> +++ b/include/linux/sunrpc/svcsock.h
> @@ -45,6 +45,7 @@ int		svc_sock_names(struct svc_serv *serv, char *buf,
>  int		svc_addsock(struct svc_serv *serv, const int fd,
>  					char *name_return, const size_t len);
>  void		svc_init_xprt_sock(void);
> +void		svc_init_bc_xprt_sock(void);
>  void		svc_cleanup_xprt_sock(void);
>  struct svc_xprt *svc_sock_create(struct svc_serv *serv, int prot);
>  void		svc_sock_destroy(struct svc_xprt *);
> diff --git a/net/sunrpc/svc.c b/net/sunrpc/svc.c
> index 6359c42..3520cb3 100644
> --- a/net/sunrpc/svc.c
> +++ b/net/sunrpc/svc.c
> @@ -488,10 +488,6 @@ svc_destroy(struct svc_serv *serv)
>  	if (svc_serv_is_pooled(serv))
>  		svc_pool_map_put();
>  
> -#if defined(CONFIG_NFS_V4_1)
> -	svc_sock_destroy(serv->bc_xprt);
> -#endif /* CONFIG_NFS_V4_1 */
> -
>  	svc_unregister(serv);
>  	kfree(serv->sv_pools);
>  	kfree(serv);
> @@ -1264,7 +1260,7 @@ bc_svc_process(struct svc_serv *serv, struct rpc_rqst *req,
>  {
>  	struct kvec	*argv = &rqstp->rq_arg.head[0];
>  	struct kvec	*resv = &rqstp->rq_res.head[0];
> -	int 		error;
> +	int		ret;
>  
>  	/* Build the svc_rqst used by the common processing routine */
>  	rqstp->rq_xprt = serv->bc_xprt;
> @@ -1292,12 +1288,16 @@ bc_svc_process(struct svc_serv *serv, struct rpc_rqst *req,
>  	svc_getu32(argv);	/* XID */
>  	svc_getnl(argv);	/* CALLDIR */
>  
> -	error = svc_process_common(rqstp, argv, resv);
> -	if (error <= 0)
> -		return error;
> +	/* Hold a reference to the back channel socket across the call */
> +	svc_xprt_get(serv->bc_xprt);

Either we already have a reference, in which case this is unnecessary,
or we don't, in which case this is risky.

The nfs client code knows when it wants to create and destroy the
backchannel, so I think all you want is to create the svc_xprt when you
create the backchannel and then put it once when you bring it down.
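
A minimal sketch of that lifetime, for illustration only: the setup/teardown
names below are placeholders rather than existing kernel API; the real pieces
are svc_xprt_put() and the fact that svc_xprt_init() starts xpt_ref at 1.

#include <linux/sunrpc/svc.h>
#include <linux/sunrpc/svc_xprt.h>

/*
 * Placeholder setup: the xpo_create method has already run
 * svc_xprt_init(), which leaves xpt_ref at 1.  Keep that single
 * reference for the whole lifetime of the backchannel.
 */
static void bc_setup_sketch(struct svc_serv *serv, struct svc_xprt *xprt)
{
	serv->bc_xprt = xprt;
}

/*
 * Placeholder teardown, called once when the backchannel is brought
 * down and no bc_svc_process() call can still be running: drop the
 * creation-time reference, which ends up calling xpo_free().
 */
static void bc_teardown_sketch(struct svc_serv *serv)
{
	struct svc_xprt *xprt = serv->bc_xprt;

	serv->bc_xprt = NULL;
	svc_xprt_put(xprt);
}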

--b.

> +	ret = svc_process_common(rqstp, argv, resv);
> +	if (ret <= 0)
> +		return ret;
>  
>  	memcpy(&req->rq_snd_buf, &rqstp->rq_res, sizeof(req->rq_snd_buf));
> -	return bc_send(req);
> +	ret = bc_send(req);
> +	svc_xprt_put(serv->bc_xprt);
> +	return ret;
>  }
>  EXPORT_SYMBOL(bc_svc_process);
>  #endif /* CONFIG_NFS_V4_1 */
> diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
> index 07919e1..5875551 100644
> --- a/net/sunrpc/svcsock.c
> +++ b/net/sunrpc/svcsock.c
> @@ -66,6 +66,13 @@ static void		svc_sock_free(struct svc_xprt *);
>  static struct svc_xprt *svc_create_socket(struct svc_serv *, int,
>  					  struct net *, struct sockaddr *,
>  					  int, int);
> +#if defined(CONFIG_NFS_V4_1)
> +static struct svc_xprt *svc_bc_create_socket(struct svc_serv *, int,
> +					     struct net *, struct sockaddr *,
> +					     int, int);
> +static void svc_bc_sock_free(struct svc_xprt *xprt);
> +#endif /* CONFIG_NFS_V4_1 */
> +
>  #ifdef CONFIG_DEBUG_LOCK_ALLOC
>  static struct lock_class_key svc_key[2];
>  static struct lock_class_key svc_slock_key[2];
> @@ -1184,6 +1191,79 @@ static struct svc_xprt *svc_tcp_create(struct svc_serv *serv,
>  	return svc_create_socket(serv, IPPROTO_TCP, net, sa, salen, flags);
>  }
>  
> +#if defined(CONFIG_NFS_V4_1)
> +static struct svc_xprt *svc_bc_create_socket(struct svc_serv *, int,
> +					     struct net *, struct sockaddr *,
> +					     int, int);
> +static void svc_bc_sock_free(struct svc_xprt *xprt);
> +
> +static struct svc_xprt *svc_bc_tcp_create(struct svc_serv *serv,
> +				       struct net *net,
> +				       struct sockaddr *sa, int salen,
> +				       int flags)
> +{
> +	return svc_bc_create_socket(serv, IPPROTO_TCP, net, sa, salen, flags);
> +}
> +
> +static void svc_bc_tcp_sock_detach(struct svc_xprt *xprt)
> +{
> +}
> +
> +/* These bc ops are never called. The shared fore channel is used instead */
> +static int svc_bc_tcp_recvfrom(struct svc_rqst *rqstp)
> +{
> +	return 0;
> +}
> +
> +static int svc_bc_tcp_sendto(struct svc_rqst *rqstp)
> +{
> +	return 0;
> +}
> +
> +static void svc_bc_tcp_prep_reply_hdr(struct svc_rqst *rqstp)
> +{
> +}
> +
> +static void svc_bc_release_skb(struct svc_rqst *rqstp)
> +{
> +}
> +
> +static int svc_bc_tcp_has_wspace(struct svc_xprt *xprt)
> +{
> +	return 0;
> +}
> +
> +static struct svc_xprt *svc_bc_tcp_accept(struct svc_xprt *xprt)
> +{
> +	return NULL;
> +}
> +
> +static struct svc_xprt_ops svc_tcp_bc_ops = {
> +	.xpo_create = svc_bc_tcp_create,
> +	.xpo_recvfrom = svc_bc_tcp_recvfrom,
> +	.xpo_sendto = svc_bc_tcp_sendto,
> +	.xpo_release_rqst = svc_bc_release_skb,
> +	.xpo_detach = svc_bc_tcp_sock_detach,
> +	.xpo_free = svc_bc_sock_free,
> +	.xpo_prep_reply_hdr = svc_bc_tcp_prep_reply_hdr,
> +	.xpo_has_wspace = svc_bc_tcp_has_wspace,
> +	.xpo_accept = svc_bc_tcp_accept,
> +};
> +
> +static struct svc_xprt_class svc_tcp_bc_class = {
> +	.xcl_name = "tcp-bc",
> +	.xcl_owner = THIS_MODULE,
> +	.xcl_ops = &svc_tcp_bc_ops,
> +	.xcl_max_payload = RPCSVC_MAXPAYLOAD_TCP,
> +};
> +
> +void svc_init_bc_xprt_sock(void)
> +{
> +	svc_reg_xprt_class(&svc_tcp_bc_class);
> +}
> +EXPORT_SYMBOL_GPL(svc_init_bc_xprt_sock);
> +#endif /* CONFIG_NFS_V4_1 */
> +
>  static struct svc_xprt_ops svc_tcp_ops = {
>  	.xpo_create = svc_tcp_create,
>  	.xpo_recvfrom = svc_tcp_recvfrom,
> @@ -1509,41 +1589,43 @@ static void svc_sock_free(struct svc_xprt *xprt)
>  	kfree(svsk);
>  }
>  
> +#if defined(CONFIG_NFS_V4_1)
>  /*
> - * Create a svc_xprt.
> - *
> - * For internal use only (e.g. nfsv4.1 backchannel).
> - * Callers should typically use the xpo_create() method.
> + * Create a back channel svc_xprt which shares the fore channel socket.
>   */
> -struct svc_xprt *svc_sock_create(struct svc_serv *serv, int prot)
> +static struct svc_xprt *svc_bc_create_socket(struct svc_serv *serv,
> +					     int protocol,
> +					     struct net *net,
> +					     struct sockaddr *sin, int len,
> +					     int flags)
>  {
>  	struct svc_sock *svsk;
> -	struct svc_xprt *xprt = NULL;
> +	struct svc_xprt *xprt;
> +
> +	if (protocol != IPPROTO_TCP) {
> +		printk(KERN_WARNING "svc: only TCP sockets"
> +			" supported on shared back channel\n");
> +		return ERR_PTR(-EINVAL);
> +	}
>  
> -	dprintk("svc: %s\n", __func__);
>  	svsk = kzalloc(sizeof(*svsk), GFP_KERNEL);
>  	if (!svsk)
> -		goto out;
> +		return ERR_PTR(-ENOMEM);
>  
>  	xprt = &svsk->sk_xprt;
> -	if (prot == IPPROTO_TCP)
> -		svc_xprt_init(&svc_tcp_class, xprt, serv);
> -	else if (prot == IPPROTO_UDP)
> -		svc_xprt_init(&svc_udp_class, xprt, serv);
> -	else
> -		BUG();
> -out:
> -	dprintk("svc: %s return %p\n", __func__, xprt);
> +	svc_xprt_init(&svc_tcp_bc_class, xprt, serv);
> +
> +	serv->bc_xprt = xprt;
> +
>  	return xprt;
>  }
> -EXPORT_SYMBOL_GPL(svc_sock_create);
>  
>  /*
> - * Destroy a svc_sock.
> + * Free a back channel svc_sock.
>   */
> -void svc_sock_destroy(struct svc_xprt *xprt)
> +static void svc_bc_sock_free(struct svc_xprt *xprt)
>  {
>  	if (xprt)
>  		kfree(container_of(xprt, struct svc_sock, sk_xprt));
>  }
> -EXPORT_SYMBOL_GPL(svc_sock_destroy);
> +#endif /* CONFIG_NFS_V4_1 */
> -- 
> 1.6.6
> 


* Re: [PATCH_V4 1/9] SUNRPC new transport for the NFSv4.1 shared back channel
  2010-12-17 18:40   ` [PATCH_V4 1/9] SUNRPC new transport for the NFSv4.1 shared back channel J. Bruce Fields
@ 2010-12-17 18:54     ` William A. (Andy) Adamson
  2010-12-17 18:57       ` William A. (Andy) Adamson
  0 siblings, 1 reply; 21+ messages in thread
From: William A. (Andy) Adamson @ 2010-12-17 18:54 UTC (permalink / raw)
  To: J. Bruce Fields; +Cc: trond.myklebust, bfields, linux-nfs

On Fri, Dec 17, 2010 at 1:40 PM, J. Bruce Fields <bfields@fieldses.org> wrote:
> On Fri, Dec 17, 2010 at 01:20:02PM -0500, andros@netapp.com wrote:
>> From: Andy Adamson <andros@netapp.com>
>>
>> Create a new transport for the shared back channel. Move the current sock
>> create and destroy routines into the new transport ops.
>>
>> Reference the back channel transport across message processing (recv/send).
>>
>> Signed-off-by: Andy Adamson <andros@netapp.com>
>> ---
>>  include/linux/sunrpc/svcsock.h |    1 +
>>  net/sunrpc/svc.c               |   18 +++---
>>  net/sunrpc/svcsock.c           |  122 +++++++++++++++++++++++++++++++++-------
>>  3 files changed, 112 insertions(+), 29 deletions(-)
>>
>> diff --git a/include/linux/sunrpc/svcsock.h b/include/linux/sunrpc/svcsock.h
>> index 1b353a7..3a45a80 100644
>> --- a/include/linux/sunrpc/svcsock.h
>> +++ b/include/linux/sunrpc/svcsock.h
>> @@ -45,6 +45,7 @@ int         svc_sock_names(struct svc_serv *serv, char *buf,
>>  int          svc_addsock(struct svc_serv *serv, const int fd,
>>                                       char *name_return, const size_t len);
>>  void         svc_init_xprt_sock(void);
>> +void         svc_init_bc_xprt_sock(void);
>>  void         svc_cleanup_xprt_sock(void);
>>  struct svc_xprt *svc_sock_create(struct svc_serv *serv, int prot);
>>  void         svc_sock_destroy(struct svc_xprt *);
>> diff --git a/net/sunrpc/svc.c b/net/sunrpc/svc.c
>> index 6359c42..3520cb3 100644
>> --- a/net/sunrpc/svc.c
>> +++ b/net/sunrpc/svc.c
>> @@ -488,10 +488,6 @@ svc_destroy(struct svc_serv *serv)
>>       if (svc_serv_is_pooled(serv))
>>               svc_pool_map_put();
>>
>> -#if defined(CONFIG_NFS_V4_1)
>> -     svc_sock_destroy(serv->bc_xprt);
>> -#endif /* CONFIG_NFS_V4_1 */
>> -
>>       svc_unregister(serv);
>>       kfree(serv->sv_pools);
>>       kfree(serv);
>> @@ -1264,7 +1260,7 @@ bc_svc_process(struct svc_serv *serv, struct rpc_rqst *req,
>>  {
>>       struct kvec     *argv = &rqstp->rq_arg.head[0];
>>       struct kvec     *resv = &rqstp->rq_res.head[0];
>> -     int             error;
>> +     int             ret;
>>
>>       /* Build the svc_rqst used by the common processing routine */
>>       rqstp->rq_xprt = serv->bc_xprt;
>> @@ -1292,12 +1288,16 @@ bc_svc_process(struct svc_serv *serv, struct rpc_rqst *req,
>>       svc_getu32(argv);       /* XID */
>>       svc_getnl(argv);        /* CALLDIR */
>>
>> -     error = svc_process_common(rqstp, argv, resv);
>> -     if (error <= 0)
>> -             return error;
>> +     /* Hold a reference to the back channel socket across the call */
>> +     svc_xprt_get(serv->bc_xprt);
>
> Either we already have a reference, in which case this is unnecessary,
> or we don't, in which case this is risky.

This is done so that when svc_xprt_put is called due to an svc_drop
(svc_xprt_release, svc_xprt_put) it is not freed.  It's not risky
because AFAICS it's the way the rpc server code is intended to work.
Note that svc_recv calls svc_xprt_get, and svc_send calls svc_xprt_put
for the same reason.
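
For illustration, a simplified paraphrase of that pairing; the real
svc_recv()/svc_send()/svc_drop() do much more, and the *_sketch names are
placeholders:

#include <linux/sunrpc/svc.h>
#include <linux/sunrpc/svc_xprt.h>

/* Take a reference for the lifetime of one in-flight request. */
static void request_start_sketch(struct svc_rqst *rqstp, struct svc_xprt *xprt)
{
	svc_xprt_get(xprt);	/* what svc_recv() does, conceptually */
	rqstp->rq_xprt = xprt;
}

/* Both the send path and the drop path end by releasing that reference. */
static void request_end_sketch(struct svc_rqst *rqstp)
{
	struct svc_xprt *xprt = rqstp->rq_xprt;

	rqstp->rq_xprt = NULL;
	svc_xprt_put(xprt);	/* svc_send()/svc_drop(), conceptually */
}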

> The nfs client code knows when it wants to create and destroy the
> backchannel, so I think all you want is to create the svc_xprt when you
> create the backchannel and then put it once when you bring it down.

Correct, and without the references in bc_svc_process, svc_drop == freeing.

-->Andy
>
> --b.
>
>> +     ret = svc_process_common(rqstp, argv, resv);
>> +     if (ret <= 0)
>> +             return ret;
>>
>>       memcpy(&req->rq_snd_buf, &rqstp->rq_res, sizeof(req->rq_snd_buf));
>> -     return bc_send(req);
>> +     ret = bc_send(req);
>> +     svc_xprt_put(serv->bc_xprt);
>> +     return ret;
>>  }
>>  EXPORT_SYMBOL(bc_svc_process);
>>  #endif /* CONFIG_NFS_V4_1 */
>> diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
>> index 07919e1..5875551 100644
>> --- a/net/sunrpc/svcsock.c
>> +++ b/net/sunrpc/svcsock.c
>> @@ -66,6 +66,13 @@ static void                svc_sock_free(struct svc_xprt *);
>>  static struct svc_xprt *svc_create_socket(struct svc_serv *, int,
>>                                         struct net *, struct sockaddr *,
>>                                         int, int);
>> +#if defined(CONFIG_NFS_V4_1)
>> +static struct svc_xprt *svc_bc_create_socket(struct svc_serv *, int,
>> +                                          struct net *, struct sockaddr *,
>> +                                          int, int);
>> +static void svc_bc_sock_free(struct svc_xprt *xprt);
>> +#endif /* CONFIG_NFS_V4_1 */
>> +
>>  #ifdef CONFIG_DEBUG_LOCK_ALLOC
>>  static struct lock_class_key svc_key[2];
>>  static struct lock_class_key svc_slock_key[2];
>> @@ -1184,6 +1191,79 @@ static struct svc_xprt *svc_tcp_create(struct svc_serv *serv,
>>       return svc_create_socket(serv, IPPROTO_TCP, net, sa, salen, flags);
>>  }
>>
>> +#if defined(CONFIG_NFS_V4_1)
>> +static struct svc_xprt *svc_bc_create_socket(struct svc_serv *, int,
>> +                                          struct net *, struct sockaddr *,
>> +                                          int, int);
>> +static void svc_bc_sock_free(struct svc_xprt *xprt);
>> +
>> +static struct svc_xprt *svc_bc_tcp_create(struct svc_serv *serv,
>> +                                    struct net *net,
>> +                                    struct sockaddr *sa, int salen,
>> +                                    int flags)
>> +{
>> +     return svc_bc_create_socket(serv, IPPROTO_TCP, net, sa, salen, flags);
>> +}
>> +
>> +static void svc_bc_tcp_sock_detach(struct svc_xprt *xprt)
>> +{
>> +}
>> +
>> +/* These bc ops are never called. The shared fore channel is used instead */
>> +static int svc_bc_tcp_recvfrom(struct svc_rqst *rqstp)
>> +{
>> +     return 0;
>> +}
>> +
>> +static int svc_bc_tcp_sendto(struct svc_rqst *rqstp)
>> +{
>> +     return 0;
>> +}
>> +
>> +static void svc_bc_tcp_prep_reply_hdr(struct svc_rqst *rqstp)
>> +{
>> +}
>> +
>> +static void svc_bc_release_skb(struct svc_rqst *rqstp)
>> +{
>> +}
>> +
>> +static int svc_bc_tcp_has_wspace(struct svc_xprt *xprt)
>> +{
>> +     return 0;
>> +}
>> +
>> +static struct svc_xprt *svc_bc_tcp_accept(struct svc_xprt *xprt)
>> +{
>> +     return NULL;
>> +}
>> +
>> +static struct svc_xprt_ops svc_tcp_bc_ops = {
>> +     .xpo_create = svc_bc_tcp_create,
>> +     .xpo_recvfrom = svc_bc_tcp_recvfrom,
>> +     .xpo_sendto = svc_bc_tcp_sendto,
>> +     .xpo_release_rqst = svc_bc_release_skb,
>> +     .xpo_detach = svc_bc_tcp_sock_detach,
>> +     .xpo_free = svc_bc_sock_free,
>> +     .xpo_prep_reply_hdr = svc_bc_tcp_prep_reply_hdr,
>> +     .xpo_has_wspace = svc_bc_tcp_has_wspace,
>> +     .xpo_accept = svc_bc_tcp_accept,
>> +};
>> +
>> +static struct svc_xprt_class svc_tcp_bc_class = {
>> +     .xcl_name = "tcp-bc",
>> +     .xcl_owner = THIS_MODULE,
>> +     .xcl_ops = &svc_tcp_bc_ops,
>> +     .xcl_max_payload = RPCSVC_MAXPAYLOAD_TCP,
>> +};
>> +
>> +void svc_init_bc_xprt_sock(void)
>> +{
>> +     svc_reg_xprt_class(&svc_tcp_bc_class);
>> +}
>> +EXPORT_SYMBOL_GPL(svc_init_bc_xprt_sock);
>> +#endif /* CONFIG_NFS_V4_1 */
>> +
>>  static struct svc_xprt_ops svc_tcp_ops = {
>>       .xpo_create = svc_tcp_create,
>>       .xpo_recvfrom = svc_tcp_recvfrom,
>> @@ -1509,41 +1589,43 @@ static void svc_sock_free(struct svc_xprt *xprt)
>>       kfree(svsk);
>>  }
>>
>> +#if defined(CONFIG_NFS_V4_1)
>>  /*
>> - * Create a svc_xprt.
>> - *
>> - * For internal use only (e.g. nfsv4.1 backchannel).
>> - * Callers should typically use the xpo_create() method.
>> + * Create a back channel svc_xprt which shares the fore channel socket.
>>   */
>> -struct svc_xprt *svc_sock_create(struct svc_serv *serv, int prot)
>> +static struct svc_xprt *svc_bc_create_socket(struct svc_serv *serv,
>> +                                          int protocol,
>> +                                          struct net *net,
>> +                                          struct sockaddr *sin, int len,
>> +                                          int flags)
>>  {
>>       struct svc_sock *svsk;
>> -     struct svc_xprt *xprt = NULL;
>> +     struct svc_xprt *xprt;
>> +
>> +     if (protocol != IPPROTO_TCP) {
>> +             printk(KERN_WARNING "svc: only TCP sockets"
>> +                     " supported on shared back channel\n");
>> +             return ERR_PTR(-EINVAL);
>> +     }
>>
>> -     dprintk("svc: %s\n", __func__);
>>       svsk = kzalloc(sizeof(*svsk), GFP_KERNEL);
>>       if (!svsk)
>> -             goto out;
>> +             return ERR_PTR(-ENOMEM);
>>
>>       xprt = &svsk->sk_xprt;
>> -     if (prot == IPPROTO_TCP)
>> -             svc_xprt_init(&svc_tcp_class, xprt, serv);
>> -     else if (prot == IPPROTO_UDP)
>> -             svc_xprt_init(&svc_udp_class, xprt, serv);
>> -     else
>> -             BUG();
>> -out:
>> -     dprintk("svc: %s return %p\n", __func__, xprt);
>> +     svc_xprt_init(&svc_tcp_bc_class, xprt, serv);
>> +
>> +     serv->bc_xprt = xprt;
>> +
>>       return xprt;
>>  }
>> -EXPORT_SYMBOL_GPL(svc_sock_create);
>>
>>  /*
>> - * Destroy a svc_sock.
>> + * Free a back channel svc_sock.
>>   */
>> -void svc_sock_destroy(struct svc_xprt *xprt)
>> +static void svc_bc_sock_free(struct svc_xprt *xprt)
>>  {
>>       if (xprt)
>>               kfree(container_of(xprt, struct svc_sock, sk_xprt));
>>  }
>> -EXPORT_SYMBOL_GPL(svc_sock_destroy);
>> +#endif /* CONFIG_NFS_V4_1 */
>> --
>> 1.6.6
>>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-nfs" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>


* Re: [PATCH_V4 1/9] SUNRPC new transport for the NFSv4.1 shared back channel
  2010-12-17 18:54     ` William A. (Andy) Adamson
@ 2010-12-17 18:57       ` William A. (Andy) Adamson
  2010-12-17 19:10         ` J. Bruce Fields
  0 siblings, 1 reply; 21+ messages in thread
From: William A. (Andy) Adamson @ 2010-12-17 18:57 UTC (permalink / raw)
  To: J. Bruce Fields; +Cc: trond.myklebust, bfields, linux-nfs

On Fri, Dec 17, 2010 at 1:54 PM, William A. (Andy) Adamson
<androsadamson@gmail.com> wrote:
> On Fri, Dec 17, 2010 at 1:40 PM, J. Bruce Fields <bfields@fieldses.org> wrote:
>> On Fri, Dec 17, 2010 at 01:20:02PM -0500, andros@netapp.com wrote:
>>> From: Andy Adamson <andros@netapp.com>
>>>
>>> Create a new transport for the shared back channel. Move the current sock
>>> create and destroy routines into the new transport ops.
>>>
>>> Reference the back channel transport across message processing (recv/send).
>>>
>>> Signed-off-by: Andy Adamson <andros@netapp.com>
>>> ---
>>>  include/linux/sunrpc/svcsock.h |    1 +
>>>  net/sunrpc/svc.c               |   18 +++---
>>>  net/sunrpc/svcsock.c           |  122 +++++++++++++++++++++++++++++++++-------
>>>  3 files changed, 112 insertions(+), 29 deletions(-)
>>>
>>> diff --git a/include/linux/sunrpc/svcsock.h b/include/linux/sunrpc/svcsock.h
>>> index 1b353a7..3a45a80 100644
>>> --- a/include/linux/sunrpc/svcsock.h
>>> +++ b/include/linux/sunrpc/svcsock.h
>>> @@ -45,6 +45,7 @@ int         svc_sock_names(struct svc_serv *serv, char *buf,
>>>  int          svc_addsock(struct svc_serv *serv, const int fd,
>>>                                       char *name_return, const size_t len);
>>>  void         svc_init_xprt_sock(void);
>>> +void         svc_init_bc_xprt_sock(void);
>>>  void         svc_cleanup_xprt_sock(void);
>>>  struct svc_xprt *svc_sock_create(struct svc_serv *serv, int prot);
>>>  void         svc_sock_destroy(struct svc_xprt *);
>>> diff --git a/net/sunrpc/svc.c b/net/sunrpc/svc.c
>>> index 6359c42..3520cb3 100644
>>> --- a/net/sunrpc/svc.c
>>> +++ b/net/sunrpc/svc.c
>>> @@ -488,10 +488,6 @@ svc_destroy(struct svc_serv *serv)
>>>       if (svc_serv_is_pooled(serv))
>>>               svc_pool_map_put();
>>>
>>> -#if defined(CONFIG_NFS_V4_1)
>>> -     svc_sock_destroy(serv->bc_xprt);
>>> -#endif /* CONFIG_NFS_V4_1 */
>>> -
>>>       svc_unregister(serv);
>>>       kfree(serv->sv_pools);
>>>       kfree(serv);
>>> @@ -1264,7 +1260,7 @@ bc_svc_process(struct svc_serv *serv, struct rpc_rqst *req,
>>>  {
>>>       struct kvec     *argv = &rqstp->rq_arg.head[0];
>>>       struct kvec     *resv = &rqstp->rq_res.head[0];
>>> -     int             error;
>>> +     int             ret;
>>>
>>>       /* Build the svc_rqst used by the common processing routine */
>>>       rqstp->rq_xprt = serv->bc_xprt;
>>> @@ -1292,12 +1288,16 @@ bc_svc_process(struct svc_serv *serv, struct rpc_rqst *req,
>>>       svc_getu32(argv);       /* XID */
>>>       svc_getnl(argv);        /* CALLDIR */
>>>
>>> -     error = svc_process_common(rqstp, argv, resv);
>>> -     if (error <= 0)
>>> -             return error;
>>> +     /* Hold a reference to the back channel socket across the call */
>>> +     svc_xprt_get(serv->bc_xprt);
>>
>> Either we already have a reference, in which case this is unnecessary,
>> or we don't, in which case this is risky.
>
> This is done so that when svc_xprt_put is called due to an svc_drop
> (svc_xprt_release, svc_xprt_put) it is not freed.  It's not risky
> because AFAICS it's the way the rpc server code is intended to work.
> Note that svc_recv calls svc_xprt_get, and svc_send calls svc_xprt_put
> for the same reason.

I take it back - as written, it might be risky :) Perhaps I should
make sure that the xpt_ref = 1 before returning from bc_svc_process.

-->Andy
>
>> The nfs client code knows when it wants to create and destroy the
>> backchannel, so I think all you want is to create the svc_xprt when you
>> create the backchannel and then put it once when you bring it down.
>
> Correct, and without the references in bc_svc_process, svc_drop == freeing.
>
> -->Andy
>>
>> --b.
>>
>>> +     ret = svc_process_common(rqstp, argv, resv);
>>> +     if (ret <= 0)
>>> +             return ret;
>>>
>>>       memcpy(&req->rq_snd_buf, &rqstp->rq_res, sizeof(req->rq_snd_buf));
>>> -     return bc_send(req);
>>> +     ret = bc_send(req);
>>> +     svc_xprt_put(serv->bc_xprt);
>>> +     return ret;
>>>  }
>>>  EXPORT_SYMBOL(bc_svc_process);
>>>  #endif /* CONFIG_NFS_V4_1 */
>>> diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
>>> index 07919e1..5875551 100644
>>> --- a/net/sunrpc/svcsock.c
>>> +++ b/net/sunrpc/svcsock.c
>>> @@ -66,6 +66,13 @@ static void                svc_sock_free(struct svc_xprt *);
>>>  static struct svc_xprt *svc_create_socket(struct svc_serv *, int,
>>>                                         struct net *, struct sockaddr *,
>>>                                         int, int);
>>> +#if defined(CONFIG_NFS_V4_1)
>>> +static struct svc_xprt *svc_bc_create_socket(struct svc_serv *, int,
>>> +                                          struct net *, struct sockaddr *,
>>> +                                          int, int);
>>> +static void svc_bc_sock_free(struct svc_xprt *xprt);
>>> +#endif /* CONFIG_NFS_V4_1 */
>>> +
>>>  #ifdef CONFIG_DEBUG_LOCK_ALLOC
>>>  static struct lock_class_key svc_key[2];
>>>  static struct lock_class_key svc_slock_key[2];
>>> @@ -1184,6 +1191,79 @@ static struct svc_xprt *svc_tcp_create(struct svc_serv *serv,
>>>       return svc_create_socket(serv, IPPROTO_TCP, net, sa, salen, flags);
>>>  }
>>>
>>> +#if defined(CONFIG_NFS_V4_1)
>>> +static struct svc_xprt *svc_bc_create_socket(struct svc_serv *, int,
>>> +                                          struct net *, struct sockaddr *,
>>> +                                          int, int);
>>> +static void svc_bc_sock_free(struct svc_xprt *xprt);
>>> +
>>> +static struct svc_xprt *svc_bc_tcp_create(struct svc_serv *serv,
>>> +                                    struct net *net,
>>> +                                    struct sockaddr *sa, int salen,
>>> +                                    int flags)
>>> +{
>>> +     return svc_bc_create_socket(serv, IPPROTO_TCP, net, sa, salen, flags);
>>> +}
>>> +
>>> +static void svc_bc_tcp_sock_detach(struct svc_xprt *xprt)
>>> +{
>>> +}
>>> +
>>> +/* These bc ops are never called. The shared fore channel is used instead */
>>> +static int svc_bc_tcp_recvfrom(struct svc_rqst *rqstp)
>>> +{
>>> +     return 0;
>>> +}
>>> +
>>> +static int svc_bc_tcp_sendto(struct svc_rqst *rqstp)
>>> +{
>>> +     return 0;
>>> +}
>>> +
>>> +static void svc_bc_tcp_prep_reply_hdr(struct svc_rqst *rqstp)
>>> +{
>>> +}
>>> +
>>> +static void svc_bc_release_skb(struct svc_rqst *rqstp)
>>> +{
>>> +}
>>> +
>>> +static int svc_bc_tcp_has_wspace(struct svc_xprt *xprt)
>>> +{
>>> +     return 0;
>>> +}
>>> +
>>> +static struct svc_xprt *svc_bc_tcp_accept(struct svc_xprt *xprt)
>>> +{
>>> +     return NULL;
>>> +}
>>> +
>>> +static struct svc_xprt_ops svc_tcp_bc_ops = {
>>> +     .xpo_create = svc_bc_tcp_create,
>>> +     .xpo_recvfrom = svc_bc_tcp_recvfrom,
>>> +     .xpo_sendto = svc_bc_tcp_sendto,
>>> +     .xpo_release_rqst = svc_bc_release_skb,
>>> +     .xpo_detach = svc_bc_tcp_sock_detach,
>>> +     .xpo_free = svc_bc_sock_free,
>>> +     .xpo_prep_reply_hdr = svc_bc_tcp_prep_reply_hdr,
>>> +     .xpo_has_wspace = svc_bc_tcp_has_wspace,
>>> +     .xpo_accept = svc_bc_tcp_accept,
>>> +};
>>> +
>>> +static struct svc_xprt_class svc_tcp_bc_class = {
>>> +     .xcl_name = "tcp-bc",
>>> +     .xcl_owner = THIS_MODULE,
>>> +     .xcl_ops = &svc_tcp_bc_ops,
>>> +     .xcl_max_payload = RPCSVC_MAXPAYLOAD_TCP,
>>> +};
>>> +
>>> +void svc_init_bc_xprt_sock(void)
>>> +{
>>> +     svc_reg_xprt_class(&svc_tcp_bc_class);
>>> +}
>>> +EXPORT_SYMBOL_GPL(svc_init_bc_xprt_sock);
>>> +#endif /* CONFIG_NFS_V4_1 */
>>> +
>>>  static struct svc_xprt_ops svc_tcp_ops = {
>>>       .xpo_create = svc_tcp_create,
>>>       .xpo_recvfrom = svc_tcp_recvfrom,
>>> @@ -1509,41 +1589,43 @@ static void svc_sock_free(struct svc_xprt *xprt)
>>>       kfree(svsk);
>>>  }
>>>
>>> +#if defined(CONFIG_NFS_V4_1)
>>>  /*
>>> - * Create a svc_xprt.
>>> - *
>>> - * For internal use only (e.g. nfsv4.1 backchannel).
>>> - * Callers should typically use the xpo_create() method.
>>> + * Create a back channel svc_xprt which shares the fore channel socket.
>>>   */
>>> -struct svc_xprt *svc_sock_create(struct svc_serv *serv, int prot)
>>> +static struct svc_xprt *svc_bc_create_socket(struct svc_serv *serv,
>>> +                                          int protocol,
>>> +                                          struct net *net,
>>> +                                          struct sockaddr *sin, int len,
>>> +                                          int flags)
>>>  {
>>>       struct svc_sock *svsk;
>>> -     struct svc_xprt *xprt = NULL;
>>> +     struct svc_xprt *xprt;
>>> +
>>> +     if (protocol != IPPROTO_TCP) {
>>> +             printk(KERN_WARNING "svc: only TCP sockets"
>>> +                     " supported on shared back channel\n");
>>> +             return ERR_PTR(-EINVAL);
>>> +     }
>>>
>>> -     dprintk("svc: %s\n", __func__);
>>>       svsk = kzalloc(sizeof(*svsk), GFP_KERNEL);
>>>       if (!svsk)
>>> -             goto out;
>>> +             return ERR_PTR(-ENOMEM);
>>>
>>>       xprt = &svsk->sk_xprt;
>>> -     if (prot == IPPROTO_TCP)
>>> -             svc_xprt_init(&svc_tcp_class, xprt, serv);
>>> -     else if (prot == IPPROTO_UDP)
>>> -             svc_xprt_init(&svc_udp_class, xprt, serv);
>>> -     else
>>> -             BUG();
>>> -out:
>>> -     dprintk("svc: %s return %p\n", __func__, xprt);
>>> +     svc_xprt_init(&svc_tcp_bc_class, xprt, serv);
>>> +
>>> +     serv->bc_xprt = xprt;
>>> +
>>>       return xprt;
>>>  }
>>> -EXPORT_SYMBOL_GPL(svc_sock_create);
>>>
>>>  /*
>>> - * Destroy a svc_sock.
>>> + * Free a back channel svc_sock.
>>>   */
>>> -void svc_sock_destroy(struct svc_xprt *xprt)
>>> +static void svc_bc_sock_free(struct svc_xprt *xprt)
>>>  {
>>>       if (xprt)
>>>               kfree(container_of(xprt, struct svc_sock, sk_xprt));
>>>  }
>>> -EXPORT_SYMBOL_GPL(svc_sock_destroy);
>>> +#endif /* CONFIG_NFS_V4_1 */
>>> --
>>> 1.6.6
>>>
>> --
>> To unsubscribe from this list: send the line "unsubscribe linux-nfs" in
>> the body of a message to majordomo@vger.kernel.org
>> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>>
>


* Re: [PATCH_V4 1/9] SUNRPC new transport for the NFSv4.1 shared back channel
  2010-12-17 18:57       ` William A. (Andy) Adamson
@ 2010-12-17 19:10         ` J. Bruce Fields
  2010-12-17 19:16           ` William A. (Andy) Adamson
  0 siblings, 1 reply; 21+ messages in thread
From: J. Bruce Fields @ 2010-12-17 19:10 UTC (permalink / raw)
  To: William A. (Andy) Adamson; +Cc: trond.myklebust, bfields, linux-nfs

On Fri, Dec 17, 2010 at 01:57:50PM -0500, William A. (Andy) Adamson wrote:
> On Fri, Dec 17, 2010 at 1:54 PM, William A. (Andy) Adamson
> <androsadamson@gmail.com> wrote:
> > On Fri, Dec 17, 2010 at 1:40 PM, J. Bruce Fields <bfields@fieldses.org> wrote:
> >> On Fri, Dec 17, 2010 at 01:20:02PM -0500, andros@netapp.com wrote:
> >>> From: Andy Adamson <andros@netapp.com>
> >>>
> >>> Create a new transport for the shared back channel. Move the current sock
> >>> create and destroy routines into the new transport ops.
> >>>
> >>> Reference the back channel transport across message processing (recv/send).
> >>>
> >>> Signed-off-by: Andy Adamson <andros@netapp.com>
> >>> ---
> >>>  include/linux/sunrpc/svcsock.h |    1 +
> >>>  net/sunrpc/svc.c               |   18 +++---
> >>>  net/sunrpc/svcsock.c           |  122 +++++++++++++++++++++++++++++++++-------
> >>>  3 files changed, 112 insertions(+), 29 deletions(-)
> >>>
> >>> diff --git a/include/linux/sunrpc/svcsock.h b/include/linux/sunrpc/svcsock.h
> >>> index 1b353a7..3a45a80 100644
> >>> --- a/include/linux/sunrpc/svcsock.h
> >>> +++ b/include/linux/sunrpc/svcsock.h
> >>> @@ -45,6 +45,7 @@ int         svc_sock_names(struct svc_serv *serv, char *buf,
> >>>  int          svc_addsock(struct svc_serv *serv, const int fd,
> >>>                                       char *name_return, const size_t len);
> >>>  void         svc_init_xprt_sock(void);
> >>> +void         svc_init_bc_xprt_sock(void);
> >>>  void         svc_cleanup_xprt_sock(void);
> >>>  struct svc_xprt *svc_sock_create(struct svc_serv *serv, int prot);
> >>>  void         svc_sock_destroy(struct svc_xprt *);
> >>> diff --git a/net/sunrpc/svc.c b/net/sunrpc/svc.c
> >>> index 6359c42..3520cb3 100644
> >>> --- a/net/sunrpc/svc.c
> >>> +++ b/net/sunrpc/svc.c
> >>> @@ -488,10 +488,6 @@ svc_destroy(struct svc_serv *serv)
> >>>       if (svc_serv_is_pooled(serv))
> >>>               svc_pool_map_put();
> >>>
> >>> -#if defined(CONFIG_NFS_V4_1)
> >>> -     svc_sock_destroy(serv->bc_xprt);
> >>> -#endif /* CONFIG_NFS_V4_1 */
> >>> -
> >>>       svc_unregister(serv);
> >>>       kfree(serv->sv_pools);
> >>>       kfree(serv);
> >>> @@ -1264,7 +1260,7 @@ bc_svc_process(struct svc_serv *serv, struct rpc_rqst *req,
> >>>  {
> >>>       struct kvec     *argv = &rqstp->rq_arg.head[0];
> >>>       struct kvec     *resv = &rqstp->rq_res.head[0];
> >>> -     int             error;
> >>> +     int             ret;
> >>>
> >>>       /* Build the svc_rqst used by the common processing routine */
> >>>       rqstp->rq_xprt = serv->bc_xprt;
> >>> @@ -1292,12 +1288,16 @@ bc_svc_process(struct svc_serv *serv, struct rpc_rqst *req,
> >>>       svc_getu32(argv);       /* XID */
> >>>       svc_getnl(argv);        /* CALLDIR */
> >>>
> >>> -     error = svc_process_common(rqstp, argv, resv);
> >>> -     if (error <= 0)
> >>> -             return error;
> >>> +     /* Hold a reference to the back channel socket across the call */
> >>> +     svc_xprt_get(serv->bc_xprt);
> >>
> >> Either we already have a reference, in which case this is unnecessary,
> >> or we don't, in which case this is risky.
> >
> > This is done so that when svc_xprt_put is called due to an svc_drop
> > (svc_xprt_release, svc_xprt_put) it is not freed.  It's not risky
> > because AFAICS it's the way the rpc server code is intended to work.
> > Note that svc_recv calls svc_xprt_get, and svc_send calls svc_xprt_put
> > for the same reason.

The svc_xprt_put()'s in svc_send/svc_drop balance the get in svc_recv().
They're needed because on a normal server connections can come and go
(because clients come and go) during the lifetime of the server.

As I understand it, the client is just creating a new server for each
backchannel.  So the server will never outlive the one connection it
uses.  So you don't need that stuff.  It's harmless--just leave it
alone--but definitely don't try to add additional reference counting
during the processing.
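
Concretely that just means keeping bc_svc_process() shaped like the pre-patch
code in the hunk quoted above; a sketch (as it would sit in net/sunrpc/svc.c,
with the XID/CALLDIR header handling omitted), not the actual patch:

/*
 * No per-call svc_xprt_get()/svc_xprt_put(): serv->bc_xprt is assumed
 * to stay pinned by the NFS client for the lifetime of the connection.
 */
static int bc_svc_process_sketch(struct svc_serv *serv, struct rpc_rqst *req,
				 struct svc_rqst *rqstp)
{
	struct kvec *argv = &rqstp->rq_arg.head[0];
	struct kvec *resv = &rqstp->rq_res.head[0];
	int ret;

	rqstp->rq_xprt = serv->bc_xprt;

	ret = svc_process_common(rqstp, argv, resv);
	if (ret <= 0)
		return ret;

	memcpy(&req->rq_snd_buf, &rqstp->rq_res, sizeof(req->rq_snd_buf));
	return bc_send(req);
}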

--b.

> 
> I take it back - as written, it might be risky :) Perhaps I should
> make sure that the xpt_ref = 1 before returning from bc_svc_process.
> 
> -->Andy
> >
> >> The nfs client code knows when it wants to create and destroy the
> >> backchannel, so I think all you want is to create the svc_xprt when you
> >> create the backchannel and then put it once when you bring it down.
> >
> > Correct, and without the references in bc_svc_process, svc_drop == freeing.
> >
> > -->Andy
> >>
> >> --b.
> >>
> >>> +     ret = svc_process_common(rqstp, argv, resv);
> >>> +     if (ret <= 0)
> >>> +             return ret;
> >>>
> >>>       memcpy(&req->rq_snd_buf, &rqstp->rq_res, sizeof(req->rq_snd_buf));
> >>> -     return bc_send(req);
> >>> +     ret = bc_send(req);
> >>> +     svc_xprt_put(serv->bc_xprt);
> >>> +     return ret;
> >>>  }
> >>>  EXPORT_SYMBOL(bc_svc_process);
> >>>  #endif /* CONFIG_NFS_V4_1 */
> >>> diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
> >>> index 07919e1..5875551 100644
> >>> --- a/net/sunrpc/svcsock.c
> >>> +++ b/net/sunrpc/svcsock.c
> >>> @@ -66,6 +66,13 @@ static void                svc_sock_free(struct svc_xprt *);
> >>>  static struct svc_xprt *svc_create_socket(struct svc_serv *, int,
> >>>                                         struct net *, struct sockaddr *,
> >>>                                         int, int);
> >>> +#if defined(CONFIG_NFS_V4_1)
> >>> +static struct svc_xprt *svc_bc_create_socket(struct svc_serv *, int,
> >>> +                                          struct net *, struct sockaddr *,
> >>> +                                          int, int);
> >>> +static void svc_bc_sock_free(struct svc_xprt *xprt);
> >>> +#endif /* CONFIG_NFS_V4_1 */
> >>> +
> >>>  #ifdef CONFIG_DEBUG_LOCK_ALLOC
> >>>  static struct lock_class_key svc_key[2];
> >>>  static struct lock_class_key svc_slock_key[2];
> >>> @@ -1184,6 +1191,79 @@ static struct svc_xprt *svc_tcp_create(struct svc_serv *serv,
> >>>       return svc_create_socket(serv, IPPROTO_TCP, net, sa, salen, flags);
> >>>  }
> >>>
> >>> +#if defined(CONFIG_NFS_V4_1)
> >>> +static struct svc_xprt *svc_bc_create_socket(struct svc_serv *, int,
> >>> +                                          struct net *, struct sockaddr *,
> >>> +                                          int, int);
> >>> +static void svc_bc_sock_free(struct svc_xprt *xprt);
> >>> +
> >>> +static struct svc_xprt *svc_bc_tcp_create(struct svc_serv *serv,
> >>> +                                    struct net *net,
> >>> +                                    struct sockaddr *sa, int salen,
> >>> +                                    int flags)
> >>> +{
> >>> +     return svc_bc_create_socket(serv, IPPROTO_TCP, net, sa, salen, flags);
> >>> +}
> >>> +
> >>> +static void svc_bc_tcp_sock_detach(struct svc_xprt *xprt)
> >>> +{
> >>> +}
> >>> +
> >>> +/* These bc ops are never called. The shared fore channel is used instead */
> >>> +static int svc_bc_tcp_recvfrom(struct svc_rqst *rqstp)
> >>> +{
> >>> +     return 0;
> >>> +}
> >>> +
> >>> +static int svc_bc_tcp_sendto(struct svc_rqst *rqstp)
> >>> +{
> >>> +     return 0;
> >>> +}
> >>> +
> >>> +static void svc_bc_tcp_prep_reply_hdr(struct svc_rqst *rqstp)
> >>> +{
> >>> +}
> >>> +
> >>> +static void svc_bc_release_skb(struct svc_rqst *rqstp)
> >>> +{
> >>> +}
> >>> +
> >>> +static int svc_bc_tcp_has_wspace(struct svc_xprt *xprt)
> >>> +{
> >>> +     return 0;
> >>> +}
> >>> +
> >>> +static struct svc_xprt *svc_bc_tcp_accept(struct svc_xprt *xprt)
> >>> +{
> >>> +     return NULL;
> >>> +}
> >>> +
> >>> +static struct svc_xprt_ops svc_tcp_bc_ops = {
> >>> +     .xpo_create = svc_bc_tcp_create,
> >>> +     .xpo_recvfrom = svc_bc_tcp_recvfrom,
> >>> +     .xpo_sendto = svc_bc_tcp_sendto,
> >>> +     .xpo_release_rqst = svc_bc_release_skb,
> >>> +     .xpo_detach = svc_bc_tcp_sock_detach,
> >>> +     .xpo_free = svc_bc_sock_free,
> >>> +     .xpo_prep_reply_hdr = svc_bc_tcp_prep_reply_hdr,
> >>> +     .xpo_has_wspace = svc_bc_tcp_has_wspace,
> >>> +     .xpo_accept = svc_bc_tcp_accept,
> >>> +};
> >>> +
> >>> +static struct svc_xprt_class svc_tcp_bc_class = {
> >>> +     .xcl_name = "tcp-bc",
> >>> +     .xcl_owner = THIS_MODULE,
> >>> +     .xcl_ops = &svc_tcp_bc_ops,
> >>> +     .xcl_max_payload = RPCSVC_MAXPAYLOAD_TCP,
> >>> +};
> >>> +
> >>> +void svc_init_bc_xprt_sock(void)
> >>> +{
> >>> +     svc_reg_xprt_class(&svc_tcp_bc_class);
> >>> +}
> >>> +EXPORT_SYMBOL_GPL(svc_init_bc_xprt_sock);
> >>> +#endif /* CONFIG_NFS_V4_1 */
> >>> +
> >>>  static struct svc_xprt_ops svc_tcp_ops = {
> >>>       .xpo_create = svc_tcp_create,
> >>>       .xpo_recvfrom = svc_tcp_recvfrom,
> >>> @@ -1509,41 +1589,43 @@ static void svc_sock_free(struct svc_xprt *xprt)
> >>>       kfree(svsk);
> >>>  }
> >>>
> >>> +#if defined(CONFIG_NFS_V4_1)
> >>>  /*
> >>> - * Create a svc_xprt.
> >>> - *
> >>> - * For internal use only (e.g. nfsv4.1 backchannel).
> >>> - * Callers should typically use the xpo_create() method.
> >>> + * Create a back channel svc_xprt which shares the fore channel socket.
> >>>   */
> >>> -struct svc_xprt *svc_sock_create(struct svc_serv *serv, int prot)
> >>> +static struct svc_xprt *svc_bc_create_socket(struct svc_serv *serv,
> >>> +                                          int protocol,
> >>> +                                          struct net *net,
> >>> +                                          struct sockaddr *sin, int len,
> >>> +                                          int flags)
> >>>  {
> >>>       struct svc_sock *svsk;
> >>> -     struct svc_xprt *xprt = NULL;
> >>> +     struct svc_xprt *xprt;
> >>> +
> >>> +     if (protocol != IPPROTO_TCP) {
> >>> +             printk(KERN_WARNING "svc: only TCP sockets"
> >>> +                     " supported on shared back channel\n");
> >>> +             return ERR_PTR(-EINVAL);
> >>> +     }
> >>>
> >>> -     dprintk("svc: %s\n", __func__);
> >>>       svsk = kzalloc(sizeof(*svsk), GFP_KERNEL);
> >>>       if (!svsk)
> >>> -             goto out;
> >>> +             return ERR_PTR(-ENOMEM);
> >>>
> >>>       xprt = &svsk->sk_xprt;
> >>> -     if (prot == IPPROTO_TCP)
> >>> -             svc_xprt_init(&svc_tcp_class, xprt, serv);
> >>> -     else if (prot == IPPROTO_UDP)
> >>> -             svc_xprt_init(&svc_udp_class, xprt, serv);
> >>> -     else
> >>> -             BUG();
> >>> -out:
> >>> -     dprintk("svc: %s return %p\n", __func__, xprt);
> >>> +     svc_xprt_init(&svc_tcp_bc_class, xprt, serv);
> >>> +
> >>> +     serv->bc_xprt = xprt;
> >>> +
> >>>       return xprt;
> >>>  }
> >>> -EXPORT_SYMBOL_GPL(svc_sock_create);
> >>>
> >>>  /*
> >>> - * Destroy a svc_sock.
> >>> + * Free a back channel svc_sock.
> >>>   */
> >>> -void svc_sock_destroy(struct svc_xprt *xprt)
> >>> +static void svc_bc_sock_free(struct svc_xprt *xprt)
> >>>  {
> >>>       if (xprt)
> >>>               kfree(container_of(xprt, struct svc_sock, sk_xprt));
> >>>  }
> >>> -EXPORT_SYMBOL_GPL(svc_sock_destroy);
> >>> +#endif /* CONFIG_NFS_V4_1 */
> >>> --
> >>> 1.6.6
> >>>
> >> --
> >> To unsubscribe from this list: send the line "unsubscribe linux-nfs" in
> >> the body of a message to majordomo@vger.kernel.org
> >> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> >>
> >


* Re: [PATCH_V4 1/9] SUNRPC new transport for the NFSv4.1 shared back channel
  2010-12-17 19:10         ` J. Bruce Fields
@ 2010-12-17 19:16           ` William A. (Andy) Adamson
  2010-12-17 19:33             ` J. Bruce Fields
  0 siblings, 1 reply; 21+ messages in thread
From: William A. (Andy) Adamson @ 2010-12-17 19:16 UTC (permalink / raw)
  To: J. Bruce Fields; +Cc: trond.myklebust, bfields, linux-nfs

On Fri, Dec 17, 2010 at 2:10 PM, J. Bruce Fields <bfields@fieldses.org> wrote:
> On Fri, Dec 17, 2010 at 01:57:50PM -0500, William A. (Andy) Adamson wrote:
>> On Fri, Dec 17, 2010 at 1:54 PM, William A. (Andy) Adamson
>> <androsadamson@gmail.com> wrote:
>> > On Fri, Dec 17, 2010 at 1:40 PM, J. Bruce Fields <bfields@fieldses.org> wrote:
>> >> On Fri, Dec 17, 2010 at 01:20:02PM -0500, andros@netapp.com wrote:
>> >>> From: Andy Adamson <andros@netapp.com>
>> >>>
>> >>> Create a new transport for the shared back channel. Move the current sock
>> >>> create and destroy routines into the new transport ops.
>> >>>
>> >>> Reference the back channel transport across message processing (recv/send).
>> >>>
>> >>> Signed-off-by: Andy Adamson <andros@netapp.com>
>> >>> ---
>> >>>  include/linux/sunrpc/svcsock.h |    1 +
>> >>>  net/sunrpc/svc.c               |   18 +++---
>> >>>  net/sunrpc/svcsock.c           |  122 +++++++++++++++++++++++++++++++++-------
>> >>>  3 files changed, 112 insertions(+), 29 deletions(-)
>> >>>
>> >>> diff --git a/include/linux/sunrpc/svcsock.h b/include/linux/sunrpc/svcsock.h
>> >>> index 1b353a7..3a45a80 100644
>> >>> --- a/include/linux/sunrpc/svcsock.h
>> >>> +++ b/include/linux/sunrpc/svcsock.h
>> >>> @@ -45,6 +45,7 @@ int         svc_sock_names(struct svc_serv *serv, char *buf,
>> >>>  int          svc_addsock(struct svc_serv *serv, const int fd,
>> >>>                                       char *name_return, const size_t len);
>> >>>  void         svc_init_xprt_sock(void);
>> >>> +void         svc_init_bc_xprt_sock(void);
>> >>>  void         svc_cleanup_xprt_sock(void);
>> >>>  struct svc_xprt *svc_sock_create(struct svc_serv *serv, int prot);
>> >>>  void         svc_sock_destroy(struct svc_xprt *);
>> >>> diff --git a/net/sunrpc/svc.c b/net/sunrpc/svc.c
>> >>> index 6359c42..3520cb3 100644
>> >>> --- a/net/sunrpc/svc.c
>> >>> +++ b/net/sunrpc/svc.c
>> >>> @@ -488,10 +488,6 @@ svc_destroy(struct svc_serv *serv)
>> >>>       if (svc_serv_is_pooled(serv))
>> >>>               svc_pool_map_put();
>> >>>
>> >>> -#if defined(CONFIG_NFS_V4_1)
>> >>> -     svc_sock_destroy(serv->bc_xprt);
>> >>> -#endif /* CONFIG_NFS_V4_1 */
>> >>> -
>> >>>       svc_unregister(serv);
>> >>>       kfree(serv->sv_pools);
>> >>>       kfree(serv);
>> >>> @@ -1264,7 +1260,7 @@ bc_svc_process(struct svc_serv *serv, struct rpc_rqst *req,
>> >>>  {
>> >>>       struct kvec     *argv = &rqstp->rq_arg.head[0];
>> >>>       struct kvec     *resv = &rqstp->rq_res.head[0];
>> >>> -     int             error;
>> >>> +     int             ret;
>> >>>
>> >>>       /* Build the svc_rqst used by the common processing routine */
>> >>>       rqstp->rq_xprt = serv->bc_xprt;
>> >>> @@ -1292,12 +1288,16 @@ bc_svc_process(struct svc_serv *serv, struct rpc_rqst *req,
>> >>>       svc_getu32(argv);       /* XID */
>> >>>       svc_getnl(argv);        /* CALLDIR */
>> >>>
>> >>> -     error = svc_process_common(rqstp, argv, resv);
>> >>> -     if (error <= 0)
>> >>> -             return error;
>> >>> +     /* Hold a reference to the back channel socket across the call */
>> >>> +     svc_xprt_get(serv->bc_xprt);
>> >>
>> >> Either we already have a reference, in which case this is unnecessary,
>> >> or we don't, in which case this is risky.
>> >
>> > This is done so that when svc_xprt_put is called due to an svc_drop
>> > (svc_xprt_release, svc_xprt_put) it is not freed.  It's not risky
>> > because AFAICS it's the way the rpc server code is intended to work.
>> > Note that svc_recv calls svc_xprt_get, and svc_send calls svc_xprt_put
>> > for the same reason.
>
> The svc_xprt_put()'s in svc_send/svc_drop balance the get in svc_recv().
> They're needed because on a normal server connections can come and go
> (because clients come and go) during the lifetime of the server.
>
> As I understand it, the client is just creating a new server for each
> backchannel.  So the server will never outlive the one connection it
> uses.  So you don't need that stuff.  It's harmless--just leave it
> alone--but definitely don't try to add additional reference counting
> during the processing.

If I don't add an svc_xprt_get prior to the svc_process_common call, the
svc_xprt_put called in the svc_drop case will free the bc_xprt, which
is not what I want.
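
For reference, the put in question comes from the drop path, which (per the
chain mentioned above, svc_drop() -> svc_xprt_release() -> svc_xprt_put())
boils down to roughly this simplified sketch:

/*
 * With only the creation-time reference on the backchannel xprt
 * (xpt_ref == 1), this final put drops it to 0 and xpo_free()
 * releases the transport.
 */
static void drop_path_sketch(struct svc_rqst *rqstp)
{
	struct svc_xprt *xprt = rqstp->rq_xprt;	/* here: serv->bc_xprt */

	rqstp->rq_xprt = NULL;
	svc_xprt_put(xprt);
}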

-->Andy
>
> --b.
>
>>
>> I take it back - as written, it might be risky :) Perhaps I should
>> make sure that the xpt_ref = 1 before returning from bc_svc_process.
>>
>> -->Andy
>> >
>> >> The nfs client code knows when it wants to create and destroy the
>> >> backchannel, so I think all you want is to create the svc_xprt when you
>> >> create the backchannel and then put it once when you bring it down.
>> >
>> > Correct, and without the references in bc_svc_process, svc_drop == freeing.
>> >
>> > -->Andy
>> >>
>> >> --b.
>> >>
>> >>> +     ret = svc_process_common(rqstp, argv, resv);
>> >>> +     if (ret <= 0)
>> >>> +             return ret;
>> >>>
>> >>>       memcpy(&req->rq_snd_buf, &rqstp->rq_res, sizeof(req->rq_snd_buf));
>> >>> -     return bc_send(req);
>> >>> +     ret = bc_send(req);
>> >>> +     svc_xprt_put(serv->bc_xprt);
>> >>> +     return ret;
>> >>>  }
>> >>>  EXPORT_SYMBOL(bc_svc_process);
>> >>>  #endif /* CONFIG_NFS_V4_1 */
>> >>> diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
>> >>> index 07919e1..5875551 100644
>> >>> --- a/net/sunrpc/svcsock.c
>> >>> +++ b/net/sunrpc/svcsock.c
>> >>> @@ -66,6 +66,13 @@ static void                svc_sock_free(struct svc_xprt *);
>> >>>  static struct svc_xprt *svc_create_socket(struct svc_serv *, int,
>> >>>                                         struct net *, struct sockaddr *,
>> >>>                                         int, int);
>> >>> +#if defined(CONFIG_NFS_V4_1)
>> >>> +static struct svc_xprt *svc_bc_create_socket(struct svc_serv *, int,
>> >>> +                                          struct net *, struct sockaddr *,
>> >>> +                                          int, int);
>> >>> +static void svc_bc_sock_free(struct svc_xprt *xprt);
>> >>> +#endif /* CONFIG_NFS_V4_1 */
>> >>> +
>> >>>  #ifdef CONFIG_DEBUG_LOCK_ALLOC
>> >>>  static struct lock_class_key svc_key[2];
>> >>>  static struct lock_class_key svc_slock_key[2];
>> >>> @@ -1184,6 +1191,79 @@ static struct svc_xprt *svc_tcp_create(struct svc_serv *serv,
>> >>>       return svc_create_socket(serv, IPPROTO_TCP, net, sa, salen, flags);
>> >>>  }
>> >>>
>> >>> +#if defined(CONFIG_NFS_V4_1)
>> >>> +static struct svc_xprt *svc_bc_create_socket(struct svc_serv *, int,
>> >>> +                                          struct net *, struct sockaddr *,
>> >>> +                                          int, int);
>> >>> +static void svc_bc_sock_free(struct svc_xprt *xprt);
>> >>> +
>> >>> +static struct svc_xprt *svc_bc_tcp_create(struct svc_serv *serv,
>> >>> +                                    struct net *net,
>> >>> +                                    struct sockaddr *sa, int salen,
>> >>> +                                    int flags)
>> >>> +{
>> >>> +     return svc_bc_create_socket(serv, IPPROTO_TCP, net, sa, salen, flags);
>> >>> +}
>> >>> +
>> >>> +static void svc_bc_tcp_sock_detach(struct svc_xprt *xprt)
>> >>> +{
>> >>> +}
>> >>> +
>> >>> +/* These bc ops are never called. The shared fore channel is used instead */
>> >>> +static int svc_bc_tcp_recvfrom(struct svc_rqst *rqstp)
>> >>> +{
>> >>> +     return 0;
>> >>> +}
>> >>> +
>> >>> +static int svc_bc_tcp_sendto(struct svc_rqst *rqstp)
>> >>> +{
>> >>> +     return 0;
>> >>> +}
>> >>> +
>> >>> +static void svc_bc_tcp_prep_reply_hdr(struct svc_rqst *rqstp)
>> >>> +{
>> >>> +}
>> >>> +
>> >>> +static void svc_bc_release_skb(struct svc_rqst *rqstp)
>> >>> +{
>> >>> +}
>> >>> +
>> >>> +static int svc_bc_tcp_has_wspace(struct svc_xprt *xprt)
>> >>> +{
>> >>> +     return 0;
>> >>> +}
>> >>> +
>> >>> +static struct svc_xprt *svc_bc_tcp_accept(struct svc_xprt *xprt)
>> >>> +{
>> >>> +     return NULL;
>> >>> +}
>> >>> +
>> >>> +static struct svc_xprt_ops svc_tcp_bc_ops = {
>> >>> +     .xpo_create = svc_bc_tcp_create,
>> >>> +     .xpo_recvfrom = svc_bc_tcp_recvfrom,
>> >>> +     .xpo_sendto = svc_bc_tcp_sendto,
>> >>> +     .xpo_release_rqst = svc_bc_release_skb,
>> >>> +     .xpo_detach = svc_bc_tcp_sock_detach,
>> >>> +     .xpo_free = svc_bc_sock_free,
>> >>> +     .xpo_prep_reply_hdr = svc_bc_tcp_prep_reply_hdr,
>> >>> +     .xpo_has_wspace = svc_bc_tcp_has_wspace,
>> >>> +     .xpo_accept = svc_bc_tcp_accept,
>> >>> +};
>> >>> +
>> >>> +static struct svc_xprt_class svc_tcp_bc_class = {
>> >>> +     .xcl_name = "tcp-bc",
>> >>> +     .xcl_owner = THIS_MODULE,
>> >>> +     .xcl_ops = &svc_tcp_bc_ops,
>> >>> +     .xcl_max_payload = RPCSVC_MAXPAYLOAD_TCP,
>> >>> +};
>> >>> +
>> >>> +void svc_init_bc_xprt_sock(void)
>> >>> +{
>> >>> +     svc_reg_xprt_class(&svc_tcp_bc_class);
>> >>> +}
>> >>> +EXPORT_SYMBOL_GPL(svc_init_bc_xprt_sock);
>> >>> +#endif /* CONFIG_NFS_V4_1 */
>> >>> +
>> >>>  static struct svc_xprt_ops svc_tcp_ops = {
>> >>>       .xpo_create = svc_tcp_create,
>> >>>       .xpo_recvfrom = svc_tcp_recvfrom,
>> >>> @@ -1509,41 +1589,43 @@ static void svc_sock_free(struct svc_xprt *xprt)
>> >>>       kfree(svsk);
>> >>>  }
>> >>>
>> >>> +#if defined(CONFIG_NFS_V4_1)
>> >>>  /*
>> >>> - * Create a svc_xprt.
>> >>> - *
>> >>> - * For internal use only (e.g. nfsv4.1 backchannel).
>> >>> - * Callers should typically use the xpo_create() method.
>> >>> + * Create a back channel svc_xprt which shares the fore channel socket.
>> >>>   */
>> >>> -struct svc_xprt *svc_sock_create(struct svc_serv *serv, int prot)
>> >>> +static struct svc_xprt *svc_bc_create_socket(struct svc_serv *serv,
>> >>> +                                          int protocol,
>> >>> +                                          struct net *net,
>> >>> +                                          struct sockaddr *sin, int len,
>> >>> +                                          int flags)
>> >>>  {
>> >>>       struct svc_sock *svsk;
>> >>> -     struct svc_xprt *xprt = NULL;
>> >>> +     struct svc_xprt *xprt;
>> >>> +
>> >>> +     if (protocol != IPPROTO_TCP) {
>> >>> +             printk(KERN_WARNING "svc: only TCP sockets"
>> >>> +                     " supported on shared back channel\n");
>> >>> +             return ERR_PTR(-EINVAL);
>> >>> +     }
>> >>>
>> >>> -     dprintk("svc: %s\n", __func__);
>> >>>       svsk = kzalloc(sizeof(*svsk), GFP_KERNEL);
>> >>>       if (!svsk)
>> >>> -             goto out;
>> >>> +             return ERR_PTR(-ENOMEM);
>> >>>
>> >>>       xprt = &svsk->sk_xprt;
>> >>> -     if (prot == IPPROTO_TCP)
>> >>> -             svc_xprt_init(&svc_tcp_class, xprt, serv);
>> >>> -     else if (prot == IPPROTO_UDP)
>> >>> -             svc_xprt_init(&svc_udp_class, xprt, serv);
>> >>> -     else
>> >>> -             BUG();
>> >>> -out:
>> >>> -     dprintk("svc: %s return %p\n", __func__, xprt);
>> >>> +     svc_xprt_init(&svc_tcp_bc_class, xprt, serv);
>> >>> +
>> >>> +     serv->bc_xprt = xprt;
>> >>> +
>> >>>       return xprt;
>> >>>  }
>> >>> -EXPORT_SYMBOL_GPL(svc_sock_create);
>> >>>
>> >>>  /*
>> >>> - * Destroy a svc_sock.
>> >>> + * Free a back channel svc_sock.
>> >>>   */
>> >>> -void svc_sock_destroy(struct svc_xprt *xprt)
>> >>> +static void svc_bc_sock_free(struct svc_xprt *xprt)
>> >>>  {
>> >>>       if (xprt)
>> >>>               kfree(container_of(xprt, struct svc_sock, sk_xprt));
>> >>>  }
>> >>> -EXPORT_SYMBOL_GPL(svc_sock_destroy);
>> >>> +#endif /* CONFIG_NFS_V4_1 */
>> >>> --
>> >>> 1.6.6
>> >>>
>> >> --
>> >> To unsubscribe from this list: send the line "unsubscribe linux-nfs" in
>> >> the body of a message to majordomo@vger.kernel.org
>> >> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>> >>
>> >
>

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH_V4 1/9] SUNRPC new transport for the NFSv4.1 shared back channel
  2010-12-17 19:16           ` William A. (Andy) Adamson
@ 2010-12-17 19:33             ` J. Bruce Fields
  2010-12-17 19:55               ` William A. (Andy) Adamson
  0 siblings, 1 reply; 21+ messages in thread
From: J. Bruce Fields @ 2010-12-17 19:33 UTC (permalink / raw)
  To: William A. (Andy) Adamson; +Cc: trond.myklebust, bfields, linux-nfs

On Fri, Dec 17, 2010 at 02:16:35PM -0500, William A. (Andy) Adamson wrote:
> On Fri, Dec 17, 2010 at 2:10 PM, J. Bruce Fields <bfields@fieldses.org> wrote:
> > On Fri, Dec 17, 2010 at 01:57:50PM -0500, William A. (Andy) Adamson wrote:
> >> On Fri, Dec 17, 2010 at 1:54 PM, William A. (Andy) Adamson
> >> <androsadamson@gmail.com> wrote:
> >> > On Fri, Dec 17, 2010 at 1:40 PM, J. Bruce Fields <bfields@fieldses.org> wrote:
> >> >> On Fri, Dec 17, 2010 at 01:20:02PM -0500, andros@netapp.com wrote:
> >> >>> From: Andy Adamson <andros@netapp.com>
> >> >>>
> >> >>> Create a new transport for the shared back channel. Move the current sock
> >> >>> create and destroy routines into the new transport ops.
> >> >>>
> >> >>> Reference the back channel transport across message processing (recv/send).
> >> >>>
> >> >>> Signed-off-by: Andy Adamson <andros@netapp.com>
> >> >>> ---
> >> >>>  include/linux/sunrpc/svcsock.h |    1 +
> >> >>>  net/sunrpc/svc.c               |   18 +++---
> >> >>>  net/sunrpc/svcsock.c           |  122 +++++++++++++++++++++++++++++++++-------
> >> >>>  3 files changed, 112 insertions(+), 29 deletions(-)
> >> >>>
> >> >>> diff --git a/include/linux/sunrpc/svcsock.h b/include/linux/sunrpc/svcsock.h
> >> >>> index 1b353a7..3a45a80 100644
> >> >>> --- a/include/linux/sunrpc/svcsock.h
> >> >>> +++ b/include/linux/sunrpc/svcsock.h
> >> >>> @@ -45,6 +45,7 @@ int         svc_sock_names(struct svc_serv *serv, char *buf,
> >> >>>  int          svc_addsock(struct svc_serv *serv, const int fd,
> >> >>>                                       char *name_return, const size_t len);
> >> >>>  void         svc_init_xprt_sock(void);
> >> >>> +void         svc_init_bc_xprt_sock(void);
> >> >>>  void         svc_cleanup_xprt_sock(void);
> >> >>>  struct svc_xprt *svc_sock_create(struct svc_serv *serv, int prot);
> >> >>>  void         svc_sock_destroy(struct svc_xprt *);
> >> >>> diff --git a/net/sunrpc/svc.c b/net/sunrpc/svc.c
> >> >>> index 6359c42..3520cb3 100644
> >> >>> --- a/net/sunrpc/svc.c
> >> >>> +++ b/net/sunrpc/svc.c
> >> >>> @@ -488,10 +488,6 @@ svc_destroy(struct svc_serv *serv)
> >> >>>       if (svc_serv_is_pooled(serv))
> >> >>>               svc_pool_map_put();
> >> >>>
> >> >>> -#if defined(CONFIG_NFS_V4_1)
> >> >>> -     svc_sock_destroy(serv->bc_xprt);
> >> >>> -#endif /* CONFIG_NFS_V4_1 */
> >> >>> -
> >> >>>       svc_unregister(serv);
> >> >>>       kfree(serv->sv_pools);
> >> >>>       kfree(serv);
> >> >>> @@ -1264,7 +1260,7 @@ bc_svc_process(struct svc_serv *serv, struct rpc_rqst *req,
> >> >>>  {
> >> >>>       struct kvec     *argv = &rqstp->rq_arg.head[0];
> >> >>>       struct kvec     *resv = &rqstp->rq_res.head[0];
> >> >>> -     int             error;
> >> >>> +     int             ret;
> >> >>>
> >> >>>       /* Build the svc_rqst used by the common processing routine */
> >> >>>       rqstp->rq_xprt = serv->bc_xprt;
> >> >>> @@ -1292,12 +1288,16 @@ bc_svc_process(struct svc_serv *serv, struct rpc_rqst *req,
> >> >>>       svc_getu32(argv);       /* XID */
> >> >>>       svc_getnl(argv);        /* CALLDIR */
> >> >>>
> >> >>> -     error = svc_process_common(rqstp, argv, resv);
> >> >>> -     if (error <= 0)
> >> >>> -             return error;
> >> >>> +     /* Hold a reference to the back channel socket across the call */
> >> >>> +     svc_xprt_get(serv->bc_xprt);
> >> >>
> >> >> Either we already have a reference, in which case this is unnecessary,
> >> >> or we don't, in which case this is risky.
> >> >
> >> > This is done so that when svc_xprt_put is called due to an svc_drop
> >> > (svc_xprt_release, svc_xprt_put) it is not freed.  It's not risky
> >> > because AFAICS it's the way the rpc server code is intended to work.
> >> > Note that svc_recv calls svc_xprt_get, and svc_send calls svc_xprt_put
> >> > for the same reason.
> >
> > The svc_xprt_put()'s in svc_send/svc_drop balance the get in svc_recv().
> > They're needed because on a normal server connections can come and go
> > (because clients come and go) during the lifetime of the server.
> >
> > As I understand it, the client is just creating a new server for each
> > backchannel.  So the server will never outlive the one connection it
> > uses.  So you don't need that stuff.  It's harmless--just leave it
> > alone--but definitely don't try to add additional reference counting
> > during the processing.
> 
> If I don't add an svc_xprt_get prior to the svc_process_call, the
> svc_xprt_put called in the svc_drop case will free the bc_xprt, which
> is not what I want.

Looking at the code some more....  Oh, right, because the backchannel
doesn't call svc_recv, it calls its own bc_send, which doesn't do a put.

Oh, yuch, I see: svc_process_common returns "1" to mean send, "0" to
mean drop, and leaves it up to the caller to do the put in the send
case.  That's confusing.

Maybe it would be simpler to have the caller be made responsible for
both cases?  So:

	if (svc_process_common(rqstp))
		send it
	else
		drop it

?

--b.
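
(To make the suggestion concrete, here is a rough sketch of how the tail of
bc_svc_process() could look with the caller owning both outcomes. This is
illustration only, not code from the series: bc_svc_drop() is a hypothetical
helper, and the memcpy/bc_send() step is assumed to match the existing reply
path in net/sunrpc/svc.c.)

	/* dispatch the callback, then decide: send the reply or drop it */
	if (svc_process_common(rqstp, argv, resv)) {
		/* reply path: hand the filled-in rq_res back to the RPC layer */
		memcpy(&req->rq_snd_buf, &rqstp->rq_res, sizeof(req->rq_snd_buf));
		return bc_send(req);
	}
	/* drop path: clean up the request without an svc_xprt_put() */
	bc_svc_drop(rqstp);
	return 0;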

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH_V4 1/9] SUNRPC new transport for the NFSv4.1 shared back channel
  2010-12-17 19:33             ` J. Bruce Fields
@ 2010-12-17 19:55               ` William A. (Andy) Adamson
  2010-12-17 20:02                 ` J. Bruce Fields
  0 siblings, 1 reply; 21+ messages in thread
From: William A. (Andy) Adamson @ 2010-12-17 19:55 UTC (permalink / raw)
  To: J. Bruce Fields; +Cc: trond.myklebust, bfields, linux-nfs

On Fri, Dec 17, 2010 at 2:33 PM, J. Bruce Fields <bfields@fieldses.org> wrote:
> On Fri, Dec 17, 2010 at 02:16:35PM -0500, William A. (Andy) Adamson wrote:
>> On Fri, Dec 17, 2010 at 2:10 PM, J. Bruce Fields <bfields@fieldses.org> wrote:
>> > On Fri, Dec 17, 2010 at 01:57:50PM -0500, William A. (Andy) Adamson wrote:
>> >> On Fri, Dec 17, 2010 at 1:54 PM, William A. (Andy) Adamson
>> >> <androsadamson@gmail.com> wrote:
>> >> > On Fri, Dec 17, 2010 at 1:40 PM, J. Bruce Fields <bfields@fieldses.org> wrote:
>> >> >> On Fri, Dec 17, 2010 at 01:20:02PM -0500, andros@netapp.com wrote:
>> >> >>> From: Andy Adamson <andros@netapp.com>
>> >> >>>
>> >> >>> Create a new transport for the shared back channel. Move the current sock
>> >> >>> create and destroy routines into the new transport ops.
>> >> >>>
>> >> >>> Reference the back channel transport across message processing (recv/send).
>> >> >>>
>> >> >>> Signed-off-by: Andy Adamson <andros@netapp.com>
>> >> >>> ---
>> >> >>>  include/linux/sunrpc/svcsock.h |    1 +
>> >> >>>  net/sunrpc/svc.c               |   18 +++---
>> >> >>>  net/sunrpc/svcsock.c           |  122 +++++++++++++++++++++++++++++++++-------
>> >> >>>  3 files changed, 112 insertions(+), 29 deletions(-)
>> >> >>>
>> >> >>> diff --git a/include/linux/sunrpc/svcsock.h b/include/linux/sunrpc/svcsock.h
>> >> >>> index 1b353a7..3a45a80 100644
>> >> >>> --- a/include/linux/sunrpc/svcsock.h
>> >> >>> +++ b/include/linux/sunrpc/svcsock.h
>> >> >>> @@ -45,6 +45,7 @@ int         svc_sock_names(struct svc_serv *serv, char *buf,
>> >> >>>  int          svc_addsock(struct svc_serv *serv, const int fd,
>> >> >>>                                       char *name_return, const size_t len);
>> >> >>>  void         svc_init_xprt_sock(void);
>> >> >>> +void         svc_init_bc_xprt_sock(void);
>> >> >>>  void         svc_cleanup_xprt_sock(void);
>> >> >>>  struct svc_xprt *svc_sock_create(struct svc_serv *serv, int prot);
>> >> >>>  void         svc_sock_destroy(struct svc_xprt *);
>> >> >>> diff --git a/net/sunrpc/svc.c b/net/sunrpc/svc.c
>> >> >>> index 6359c42..3520cb3 100644
>> >> >>> --- a/net/sunrpc/svc.c
>> >> >>> +++ b/net/sunrpc/svc.c
>> >> >>> @@ -488,10 +488,6 @@ svc_destroy(struct svc_serv *serv)
>> >> >>>       if (svc_serv_is_pooled(serv))
>> >> >>>               svc_pool_map_put();
>> >> >>>
>> >> >>> -#if defined(CONFIG_NFS_V4_1)
>> >> >>> -     svc_sock_destroy(serv->bc_xprt);
>> >> >>> -#endif /* CONFIG_NFS_V4_1 */
>> >> >>> -
>> >> >>>       svc_unregister(serv);
>> >> >>>       kfree(serv->sv_pools);
>> >> >>>       kfree(serv);
>> >> >>> @@ -1264,7 +1260,7 @@ bc_svc_process(struct svc_serv *serv, struct rpc_rqst *req,
>> >> >>>  {
>> >> >>>       struct kvec     *argv = &rqstp->rq_arg.head[0];
>> >> >>>       struct kvec     *resv = &rqstp->rq_res.head[0];
>> >> >>> -     int             error;
>> >> >>> +     int             ret;
>> >> >>>
>> >> >>>       /* Build the svc_rqst used by the common processing routine */
>> >> >>>       rqstp->rq_xprt = serv->bc_xprt;
>> >> >>> @@ -1292,12 +1288,16 @@ bc_svc_process(struct svc_serv *serv, struct rpc_rqst *req,
>> >> >>>       svc_getu32(argv);       /* XID */
>> >> >>>       svc_getnl(argv);        /* CALLDIR */
>> >> >>>
>> >> >>> -     error = svc_process_common(rqstp, argv, resv);
>> >> >>> -     if (error <= 0)
>> >> >>> -             return error;
>> >> >>> +     /* Hold a reference to the back channel socket across the call */
>> >> >>> +     svc_xprt_get(serv->bc_xprt);
>> >> >>
>> >> >> Either we already have a reference, in which case this is unnecessary,
>> >> >> or we don't, in which case this is risky.
>> >> >
>> >> > This is done so that when svc_xprt_put is called due to an svc_drop
>> >> > (svc_xprt_release, svc_xprt_put) it is not freed.  It's not risky
>> >> > because AFAICS it's the way the rpc server code is intended to work.
>> >> > Note that svc_recv calls svc_xprt_get, and svc_send calls svc_xprt_put
>> >> > for the same reason.
>> >
>> > The svc_xprt_put()'s in svc_send/svc_drop balance the get in svc_recv().
>> > They're needed because on a normal server connections can come and go
>> > (because clients come and go) during the lifetime of the server.
>> >
>> > As I understand it, the client is just creating a new server for each
>> > backchannel.  So the server will never outlive the one connection it
>> > uses.  So you don't need that stuff.  It's harmless--just leave it
>> > alone--but definitely don't try to add additional reference counting
>> > during the processing.
>>
>> If I don't add an svc_xprt_get prior to the svc_process_call, the
>> svc_xprt_put called in the svc_drop case will free the bc_xprt, which
>> is not what I want.
>
> Looking at the code some more....  Oh, right, because the backchannel
> doesn't call svc_recv, it calls its own bc_send, which doesn't do a put.
>
> Oh, yuch, I see: svc_process_common returns "1" to mean send, "0" to
> mean drop, and leaves it up to the caller to do the put in the send
> case.  That's confusing.
>
> Maybe it would be simpler to have the caller be made responsible for
> both cases?  So:
>
>        if (svc_process_common(rqstp))
>                send it
>        else
>                drop it

That would work well. I'll send a patch

-->Andy

>
> ?
>
> --b.
>

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH_V4 1/9] SUNRPC new transport for the NFSv4.1 shared back channel
  2010-12-17 19:55               ` William A. (Andy) Adamson
@ 2010-12-17 20:02                 ` J. Bruce Fields
  2010-12-17 20:11                   ` William A. (Andy) Adamson
  0 siblings, 1 reply; 21+ messages in thread
From: J. Bruce Fields @ 2010-12-17 20:02 UTC (permalink / raw)
  To: William A. (Andy) Adamson; +Cc: trond.myklebust, bfields, linux-nfs

On Fri, Dec 17, 2010 at 02:55:11PM -0500, William A. (Andy) Adamson wrote:
> On Fri, Dec 17, 2010 at 2:33 PM, J. Bruce Fields <bfields@fieldses.org> wrote:
> > On Fri, Dec 17, 2010 at 02:16:35PM -0500, William A. (Andy) Adamson wrote:
> >> On Fri, Dec 17, 2010 at 2:10 PM, J. Bruce Fields <bfields@fieldses.org> wrote:
> >> > On Fri, Dec 17, 2010 at 01:57:50PM -0500, William A. (Andy) Adamson wrote:
> >> >> On Fri, Dec 17, 2010 at 1:54 PM, William A. (Andy) Adamson
> >> >> <androsadamson@gmail.com> wrote:
> >> >> > On Fri, Dec 17, 2010 at 1:40 PM, J. Bruce Fields <bfields@fieldses.org> wrote:
> >> >> >> On Fri, Dec 17, 2010 at 01:20:02PM -0500, andros@netapp.com wrote:
> >> >> >>> From: Andy Adamson <andros@netapp.com>
> >> >> >>>
> >> >> >>> Create a new transport for the shared back channel. Move the current sock
> >> >> >>> create and destroy routines into the new transport ops.
> >> >> >>>
> >> >> >>> Reference the back channel transport across message processing (recv/send).
> >> >> >>>
> >> >> >>> Signed-off-by: Andy Adamson <andros@netapp.com>
> >> >> >>> ---
> >> >> >>>  include/linux/sunrpc/svcsock.h |    1 +
> >> >> >>>  net/sunrpc/svc.c               |   18 +++---
> >> >> >>>  net/sunrpc/svcsock.c           |  122 +++++++++++++++++++++++++++++++++-------
> >> >> >>>  3 files changed, 112 insertions(+), 29 deletions(-)
> >> >> >>>
> >> >> >>> diff --git a/include/linux/sunrpc/svcsock.h b/include/linux/sunrpc/svcsock.h
> >> >> >>> index 1b353a7..3a45a80 100644
> >> >> >>> --- a/include/linux/sunrpc/svcsock.h
> >> >> >>> +++ b/include/linux/sunrpc/svcsock.h
> >> >> >>> @@ -45,6 +45,7 @@ int         svc_sock_names(struct svc_serv *serv, char *buf,
> >> >> >>>  int          svc_addsock(struct svc_serv *serv, const int fd,
> >> >> >>>                                       char *name_return, const size_t len);
> >> >> >>>  void         svc_init_xprt_sock(void);
> >> >> >>> +void         svc_init_bc_xprt_sock(void);
> >> >> >>>  void         svc_cleanup_xprt_sock(void);
> >> >> >>>  struct svc_xprt *svc_sock_create(struct svc_serv *serv, int prot);
> >> >> >>>  void         svc_sock_destroy(struct svc_xprt *);
> >> >> >>> diff --git a/net/sunrpc/svc.c b/net/sunrpc/svc.c
> >> >> >>> index 6359c42..3520cb3 100644
> >> >> >>> --- a/net/sunrpc/svc.c
> >> >> >>> +++ b/net/sunrpc/svc.c
> >> >> >>> @@ -488,10 +488,6 @@ svc_destroy(struct svc_serv *serv)
> >> >> >>>       if (svc_serv_is_pooled(serv))
> >> >> >>>               svc_pool_map_put();
> >> >> >>>
> >> >> >>> -#if defined(CONFIG_NFS_V4_1)
> >> >> >>> -     svc_sock_destroy(serv->bc_xprt);
> >> >> >>> -#endif /* CONFIG_NFS_V4_1 */
> >> >> >>> -
> >> >> >>>       svc_unregister(serv);
> >> >> >>>       kfree(serv->sv_pools);
> >> >> >>>       kfree(serv);
> >> >> >>> @@ -1264,7 +1260,7 @@ bc_svc_process(struct svc_serv *serv, struct rpc_rqst *req,
> >> >> >>>  {
> >> >> >>>       struct kvec     *argv = &rqstp->rq_arg.head[0];
> >> >> >>>       struct kvec     *resv = &rqstp->rq_res.head[0];
> >> >> >>> -     int             error;
> >> >> >>> +     int             ret;
> >> >> >>>
> >> >> >>>       /* Build the svc_rqst used by the common processing routine */
> >> >> >>>       rqstp->rq_xprt = serv->bc_xprt;
> >> >> >>> @@ -1292,12 +1288,16 @@ bc_svc_process(struct svc_serv *serv, struct rpc_rqst *req,
> >> >> >>>       svc_getu32(argv);       /* XID */
> >> >> >>>       svc_getnl(argv);        /* CALLDIR */
> >> >> >>>
> >> >> >>> -     error = svc_process_common(rqstp, argv, resv);
> >> >> >>> -     if (error <= 0)
> >> >> >>> -             return error;
> >> >> >>> +     /* Hold a reference to the back channel socket across the call */
> >> >> >>> +     svc_xprt_get(serv->bc_xprt);
> >> >> >>
> >> >> >> Either we already have a reference, in which case this is unnecessary,
> >> >> >> or we don't, in which case this is risky.
> >> >> >
> >> >> > This is done so that when svc_xprt_put is called due to an svc_drop
> >> >> > (svc_xprt_release, svc_xprt_put) it is not freed.  It's not risky
> >> >> > because AFAICS it's the way the rpc server code is intended to work.
> >> >> > Note that svc_recv calls svc_xprt_get, and svc_send calls svc_xprt_put
> >> >> > for the same reason.
> >> >
> >> > The svc_xprt_put()'s in svc_send/svc_drop balance the get in svc_recv().
> >> > They're needed because on a normal server connections can come and go
> >> > (because clients come and go) during the lifetime of the server.
> >> >
> >> > As I understand it, the client is just creating a new server for each
> >> > backchannel.  So the server will never outlive the one connection it
> >> > uses.  So you don't need that stuff.  It's harmless--just leave it
> >> > alone--but definitely don't try to add additional reference counting
> >> > during the processing.
> >>
> >> If I don't add an svc_xprt_get prior to the svc_process_call, the
> >> svc_xprt_put called in the svc_drop case will free the bc_xprt, which
> >> is not what I want.
> >
> > Looking at the code some more....  Oh, right, because the backchannel
> > doesn't call svc_recv, it calls its own bc_send, which doesn't do a put.
> >
> > Oh, yuch, I see: svc_process_common returns "1" to mean send, "0" to
> > mean drop, and leaves it up to the caller to do the put in the send
> > case.  That's confusing.
> >
> > Maybe it would be simpler to have the caller be made responsible for
> > both cases?  So:
> >
> >        if (svc_process_common(rqstp))
> >                send it
> >        else
> >                drop it
> 
> That would work well. I'll send a patch

If you do that you may want your own bc_svc_drop that doesn't bother
with the put?

You'd want to look through svc_xprt_release() to see if any of the rest
of that makes sense in the bc case.

--b.
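
(Sketching that idea, under the assumption that only part of
svc_xprt_release() applies here: a hypothetical bc_svc_drop() would release
the per-request state but deliberately skip the svc_xprt_put() that would
free the shared bc_xprt. Field and helper names below follow
svc_xprt_release(); none of this is code from the posted series.)

	static void bc_svc_drop(struct svc_rqst *rqstp)
	{
		/* release per-request state, as svc_xprt_release() would */
		kfree(rqstp->rq_deferred);
		rqstp->rq_deferred = NULL;
		svc_free_res_pages(rqstp);
		/* but no svc_xprt_put(rqstp->rq_xprt): the bc_xprt is long-lived */
	}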

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH_V4 1/9] SUNRPC new transport for the NFSv4.1 shared back channel
  2010-12-17 20:02                 ` J. Bruce Fields
@ 2010-12-17 20:11                   ` William A. (Andy) Adamson
  0 siblings, 0 replies; 21+ messages in thread
From: William A. (Andy) Adamson @ 2010-12-17 20:11 UTC (permalink / raw)
  To: J. Bruce Fields; +Cc: trond.myklebust, bfields, linux-nfs

On Fri, Dec 17, 2010 at 3:02 PM, J. Bruce Fields <bfields@fieldses.org> wrote:
> On Fri, Dec 17, 2010 at 02:55:11PM -0500, William A. (Andy) Adamson wrote:
>> On Fri, Dec 17, 2010 at 2:33 PM, J. Bruce Fields <bfields@fieldses.org> wrote:
>> > On Fri, Dec 17, 2010 at 02:16:35PM -0500, William A. (Andy) Adamson wrote:
>> >> If I don't add an svc_xprt_get prior to the svc_process_call, the
>> >> svc_xprt_put called in the svc_drop case will free the bc_xprt, which
>> >> is not what I want.
>> >
>> > Looking at the code some more....  Oh, right, because the backchannel
>> > doesn't call svc_recv, it calls its own bc_send, which doesn't do a put.
>> >
>> > Oh, yuch, I see: svc_process_common returns "1" to mean send, "0" to
>> > mean drop, and leaves it up to the caller to do the put in the send
>> > case.  That's confusing.
>> >
>> > Maybe it would be simpler to have the caller be made responsible for
>> > both cases?  So:
>> >
>> >        if (svc_process_common(rqstp))
>> >                send it
>> >        else
>> >                drop it
>>
>> That would work well. I'll send a patch
>
> If you do that you may want your own bc_svc_drop that doesn't bother
> with the put?
>
> You'd want to look through svc_xprt_release() to see if any of the rest
> of that makes sense in the bc case.

Yep :)

-->Andy
> --b.
>

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH_V4 6/9] NFS reference nfs_client across cb_compound processing
  2010-12-17 18:20           ` [PATCH_V4 6/9] NFS reference nfs_client across cb_compound processing andros
  2010-12-17 18:20             ` [PATCH_V4 7/9] NFS RPC_AUTH_GSS unsupported on v4.1 back channel andros
@ 2010-12-20 14:05             ` Benny Halevy
  2010-12-20 15:44             ` Fred Isaman
  2 siblings, 0 replies; 21+ messages in thread
From: Benny Halevy @ 2010-12-20 14:05 UTC (permalink / raw)
  To: andros; +Cc: trond.myklebust, bfields, linux-nfs

On 2010-12-17 20:20, andros@netapp.com wrote:
> From: Andy Adamson <andros@netapp.com>
> 
> In the NFS layer, V4.0 clients are found using the callback_ident field in the
> CB_COMPOUND header.  V4.1 clients are found using the sessionID in the
> CB_SEQUENCE operation which is also compared against the sessionID assigned at
> back channel thread creation.
> 
> Each of these methods finds the one and only nfs_client associated
> with the incoming callback request - so nfs_find_client_next is not needed.
> 
> Pass the referenced nfs_client to each CB_COMPOUND operation being processed
> via the new cb_process_state structure.
> 
> In the RPC layer, the v4.0 callback identifier is not known for the
> pg_authenticate call, so a zero value (unset) is used to find the nfs_client.
> The sessionid for the sessions based callback service has (usually) not been
> set for the pg_authenticate on a CB_NULL call, so the sessionid is not used to
> find the client in pg_authenticate for CB_NULL calls. It is used for
> pg_authenticate CB_COMPOUND calls.
> 
> Use the new cb_process_state struct to move the NFS4ERR_RETRY_UNCACHED_REP
> processing from process_op into nfs4_callback_sequence where it belongs.
> 
> Signed-off-by: Andy Adamson <andros@netapp.com>
> ---
>  fs/nfs/callback.c              |   21 ++++-
>  fs/nfs/callback.h              |   24 +++++-
>  fs/nfs/callback_proc.c         |  167 ++++++++++++++++------------------------
>  fs/nfs/callback_xdr.c          |   40 ++++++----
>  fs/nfs/client.c                |   97 ++++++++++++++++-------
>  fs/nfs/internal.h              |    7 +-
>  include/linux/sunrpc/bc_xprt.h |   13 +++
>  7 files changed, 214 insertions(+), 155 deletions(-)
> 
> diff --git a/fs/nfs/callback.c b/fs/nfs/callback.c
> index b2fab85..1033eae 100644
> --- a/fs/nfs/callback.c
> +++ b/fs/nfs/callback.c
> @@ -16,9 +16,7 @@
>  #include <linux/freezer.h>
>  #include <linux/kthread.h>
>  #include <linux/sunrpc/svcauth_gss.h>
> -#if defined(CONFIG_NFS_V4_1)
>  #include <linux/sunrpc/bc_xprt.h>
> -#endif
>  
>  #include <net/inet_sock.h>
>  
> @@ -388,6 +386,23 @@ static int check_gss_callback_principal(struct nfs_client *clp,
>  	return SVC_OK;
>  }
>  
> +/* pg_authenticate method helper */
> +static struct nfs_client *nfs_cb_find_client(struct svc_rqst *rqstp)
> +{
> +	struct nfs4_sessionid *sessionid = bc_xprt_sid(rqstp);
> +	int is_cb_compound = rqstp->rq_proc == CB_COMPOUND ? 1 : 0;
> +
> +	dprintk("--> %s rq_proc %d\n", __func__, rqstp->rq_proc);
> +	if (svc_is_backchannel(rqstp))
> +		/* Sessionid (usually) set after CB_NULL ping */
> +		return nfs4_find_client_sessionid(svc_addr(rqstp), sessionid,
> +						  is_cb_compound);
> +	else
> +		/* No callback identifier in pg_authenticate */
> +		return nfs4_find_client_ident(svc_addr(rqstp), 0);
> +}
> +
> +/* pg_authenticate method for nfsv4 callback threads. */
>  static int nfs_callback_authenticate(struct svc_rqst *rqstp)
>  {
>  	struct nfs_client *clp;
> @@ -395,7 +410,7 @@ static int nfs_callback_authenticate(struct svc_rqst *rqstp)
>  	int ret = SVC_OK;
>  
>  	/* Don't talk to strangers */
> -	clp = nfs_find_client(svc_addr(rqstp), 4);
> +	clp = nfs_cb_find_client(rqstp);
>  	if (clp == NULL)
>  		return SVC_DROP;
>  
> diff --git a/fs/nfs/callback.h b/fs/nfs/callback.h
> index 58d61a8..d700d47 100644
> --- a/fs/nfs/callback.h
> +++ b/fs/nfs/callback.h
> @@ -34,10 +34,17 @@ enum nfs4_callback_opnum {
>  	OP_CB_ILLEGAL = 10044,
>  };
>  
> +struct cb_process_state {
> +	__be32			drc_status;
> +	struct nfs_client	*clp;
> +	struct nfs4_sessionid	*svc_sid; /* v4.1 callback service sessionid */
> +};
> +
>  struct cb_compound_hdr_arg {
>  	unsigned int taglen;
>  	const char *tag;
>  	unsigned int minorversion;
> +	unsigned int cb_ident; /* v4.0 callback identifier */
>  	unsigned nops;
>  };
>  
> @@ -104,7 +111,8 @@ struct cb_sequenceres {
>  };
>  
>  extern unsigned nfs4_callback_sequence(struct cb_sequenceargs *args,
> -				       struct cb_sequenceres *res);
> +				       struct cb_sequenceres *res,
> +				       struct cb_process_state *cps);

nit: the function actually returns __be32
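
(presumably, to match the definition in callback_proc.c, the prototype
would become:

	extern __be32 nfs4_callback_sequence(struct cb_sequenceargs *args,
					     struct cb_sequenceres *res,
					     struct cb_process_state *cps);

and likewise for the two prototypes marked "ditto" below)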

>  
>  extern int nfs41_validate_delegation_stateid(struct nfs_delegation *delegation,
>  					     const nfs4_stateid *stateid);
> @@ -118,19 +126,25 @@ struct cb_recallanyargs {
>  	uint32_t	craa_type_mask;
>  };
>  
> -extern unsigned nfs4_callback_recallany(struct cb_recallanyargs *args, void *dummy);
> +extern unsigned nfs4_callback_recallany(struct cb_recallanyargs *args,
> +					void *dummy,
> +					struct cb_process_state *cps);

ditto

>  
>  struct cb_recallslotargs {
>  	struct sockaddr	*crsa_addr;
>  	uint32_t	crsa_target_max_slots;
>  };
>  extern unsigned nfs4_callback_recallslot(struct cb_recallslotargs *args,
> -					  void *dummy);
> +					 void *dummy,
> +					 struct cb_process_state *cps);

ditto

Benny

>  
>  #endif /* CONFIG_NFS_V4_1 */
>  
> -extern __be32 nfs4_callback_getattr(struct cb_getattrargs *args, struct cb_getattrres *res);
> -extern __be32 nfs4_callback_recall(struct cb_recallargs *args, void *dummy);
> +extern __be32 nfs4_callback_getattr(struct cb_getattrargs *args,
> +				    struct cb_getattrres *res,
> +				    struct cb_process_state *cps);
> +extern __be32 nfs4_callback_recall(struct cb_recallargs *args, void *dummy,
> +				   struct cb_process_state *cps);
>  
>  #ifdef CONFIG_NFS_V4
>  extern int nfs_callback_up(u32 minorversion, struct rpc_xprt *xprt);
> diff --git a/fs/nfs/callback_proc.c b/fs/nfs/callback_proc.c
> index 2950fca..b70e46d 100644
> --- a/fs/nfs/callback_proc.c
> +++ b/fs/nfs/callback_proc.c
> @@ -16,26 +16,28 @@
>  #ifdef NFS_DEBUG
>  #define NFSDBG_FACILITY NFSDBG_CALLBACK
>  #endif
> - 
> -__be32 nfs4_callback_getattr(struct cb_getattrargs *args, struct cb_getattrres *res)
> +
> +__be32 nfs4_callback_getattr(struct cb_getattrargs *args,
> +			     struct cb_getattrres *res,
> +			     struct cb_process_state *cps)
>  {
> -	struct nfs_client *clp;
>  	struct nfs_delegation *delegation;
>  	struct nfs_inode *nfsi;
>  	struct inode *inode;
>  
> +	res->status = htonl(NFS4ERR_OP_NOT_IN_SESSION);
> +	if (!cps->clp) /* Always set for v4.0. Set in cb_sequence for v4.1 */
> +		goto out;
> +
>  	res->bitmap[0] = res->bitmap[1] = 0;
>  	res->status = htonl(NFS4ERR_BADHANDLE);
> -	clp = nfs_find_client(args->addr, 4);
> -	if (clp == NULL)
> -		goto out;
>  
>  	dprintk("NFS: GETATTR callback request from %s\n",
> -		rpc_peeraddr2str(clp->cl_rpcclient, RPC_DISPLAY_ADDR));
> +		rpc_peeraddr2str(cps->clp->cl_rpcclient, RPC_DISPLAY_ADDR));
>  
> -	inode = nfs_delegation_find_inode(clp, &args->fh);
> +	inode = nfs_delegation_find_inode(cps->clp, &args->fh);
>  	if (inode == NULL)
> -		goto out_putclient;
> +		goto out;
>  	nfsi = NFS_I(inode);
>  	rcu_read_lock();
>  	delegation = rcu_dereference(nfsi->delegation);
> @@ -55,49 +57,41 @@ __be32 nfs4_callback_getattr(struct cb_getattrargs *args, struct cb_getattrres *
>  out_iput:
>  	rcu_read_unlock();
>  	iput(inode);
> -out_putclient:
> -	nfs_put_client(clp);
>  out:
>  	dprintk("%s: exit with status = %d\n", __func__, ntohl(res->status));
>  	return res->status;
>  }
>  
> -__be32 nfs4_callback_recall(struct cb_recallargs *args, void *dummy)
> +__be32 nfs4_callback_recall(struct cb_recallargs *args, void *dummy,
> +			    struct cb_process_state *cps)
>  {
> -	struct nfs_client *clp;
>  	struct inode *inode;
>  	__be32 res;
>  	
> -	res = htonl(NFS4ERR_BADHANDLE);
> -	clp = nfs_find_client(args->addr, 4);
> -	if (clp == NULL)
> +	res = htonl(NFS4ERR_OP_NOT_IN_SESSION);
> +	if (!cps->clp) /* Always set for v4.0. Set in cb_sequence for v4.1 */
>  		goto out;
>  
>  	dprintk("NFS: RECALL callback request from %s\n",
> -		rpc_peeraddr2str(clp->cl_rpcclient, RPC_DISPLAY_ADDR));
> -
> -	do {
> -		struct nfs_client *prev = clp;
> -
> -		inode = nfs_delegation_find_inode(clp, &args->fh);
> -		if (inode != NULL) {
> -			/* Set up a helper thread to actually return the delegation */
> -			switch (nfs_async_inode_return_delegation(inode, &args->stateid)) {
> -				case 0:
> -					res = 0;
> -					break;
> -				case -ENOENT:
> -					if (res != 0)
> -						res = htonl(NFS4ERR_BAD_STATEID);
> -					break;
> -				default:
> -					res = htonl(NFS4ERR_RESOURCE);
> -			}
> -			iput(inode);
> -		}
> -		clp = nfs_find_client_next(prev);
> -		nfs_put_client(prev);
> -	} while (clp != NULL);
> +		rpc_peeraddr2str(cps->clp->cl_rpcclient, RPC_DISPLAY_ADDR));
> +
> +	res = htonl(NFS4ERR_BADHANDLE);
> +	inode = nfs_delegation_find_inode(cps->clp, &args->fh);
> +	if (inode == NULL)
> +		goto out;
> +	/* Set up a helper thread to actually return the delegation */
> +	switch (nfs_async_inode_return_delegation(inode, &args->stateid)) {
> +	case 0:
> +		res = 0;
> +		break;
> +	case -ENOENT:
> +		if (res != 0)
> +			res = htonl(NFS4ERR_BAD_STATEID);
> +		break;
> +	default:
> +		res = htonl(NFS4ERR_RESOURCE);
> +	}
> +	iput(inode);
>  out:
>  	dprintk("%s: exit with status = %d\n", __func__, ntohl(res));
>  	return res;
> @@ -185,42 +179,6 @@ validate_seqid(struct nfs4_slot_table *tbl, struct cb_sequenceargs * args)
>  }
>  
>  /*
> - * Returns a pointer to a held 'struct nfs_client' that matches the server's
> - * address, major version number, and session ID.  It is the caller's
> - * responsibility to release the returned reference.
> - *
> - * Returns NULL if there are no connections with sessions, or if no session
> - * matches the one of interest.
> - */
> - static struct nfs_client *find_client_with_session(
> -	const struct sockaddr *addr, u32 nfsversion,
> -	struct nfs4_sessionid *sessionid)
> -{
> -	struct nfs_client *clp;
> -
> -	clp = nfs_find_client(addr, 4);
> -	if (clp == NULL)
> -		return NULL;
> -
> -	do {
> -		struct nfs_client *prev = clp;
> -
> -		if (clp->cl_session != NULL) {
> -			if (memcmp(clp->cl_session->sess_id.data,
> -					sessionid->data,
> -					NFS4_MAX_SESSIONID_LEN) == 0) {
> -				/* Returns a held reference to clp */
> -				return clp;
> -			}
> -		}
> -		clp = nfs_find_client_next(prev);
> -		nfs_put_client(prev);
> -	} while (clp != NULL);
> -
> -	return NULL;
> -}
> -
> -/*
>   * For each referring call triple, check the session's slot table for
>   * a match.  If the slot is in use and the sequence numbers match, the
>   * client is still waiting for a response to the original request.
> @@ -276,20 +234,28 @@ out:
>  }
>  
>  __be32 nfs4_callback_sequence(struct cb_sequenceargs *args,
> -				struct cb_sequenceres *res)
> +			      struct cb_sequenceres *res,
> +			      struct cb_process_state *cps)
>  {
>  	struct nfs_client *clp;
>  	int i;
>  	__be32 status;
>  
> +	cps->clp = NULL;
> +
>  	status = htonl(NFS4ERR_BADSESSION);
> -	clp = find_client_with_session(args->csa_addr, 4, &args->csa_sessionid);
> +	/* Incoming session must match the callback session */
> +	if (memcmp(&args->csa_sessionid, cps->svc_sid, NFS4_MAX_SESSIONID_LEN))
> +		goto out;
> +
> +	clp = nfs4_find_client_sessionid(args->csa_addr,
> +					 &args->csa_sessionid, 1);
>  	if (clp == NULL)
>  		goto out;
>  
>  	status = validate_seqid(&clp->cl_session->bc_slot_table, args);
>  	if (status)
> -		goto out_putclient;
> +		goto out;
>  
>  	/*
>  	 * Check for pending referring calls.  If a match is found, a
> @@ -298,7 +264,7 @@ __be32 nfs4_callback_sequence(struct cb_sequenceargs *args,
>  	 */
>  	if (referring_call_exists(clp, args->csa_nrclists, args->csa_rclists)) {
>  		status = htonl(NFS4ERR_DELAY);
> -		goto out_putclient;
> +		goto out;
>  	}
>  
>  	memcpy(&res->csr_sessionid, &args->csa_sessionid,
> @@ -307,36 +273,36 @@ __be32 nfs4_callback_sequence(struct cb_sequenceargs *args,
>  	res->csr_slotid = args->csa_slotid;
>  	res->csr_highestslotid = NFS41_BC_MAX_CALLBACKS - 1;
>  	res->csr_target_highestslotid = NFS41_BC_MAX_CALLBACKS - 1;
> +	cps->clp = clp; /* put in nfs4_callback_compound */
>  
> -out_putclient:
> -	nfs_put_client(clp);
>  out:
>  	for (i = 0; i < args->csa_nrclists; i++)
>  		kfree(args->csa_rclists[i].rcl_refcalls);
>  	kfree(args->csa_rclists);
>  
> -	if (status == htonl(NFS4ERR_RETRY_UNCACHED_REP))
> -		res->csr_status = 0;
> -	else
> +	if (status == htonl(NFS4ERR_RETRY_UNCACHED_REP)) {
> +		cps->drc_status = status;
> +		status = 0;
> +	} else
>  		res->csr_status = status;
> +
>  	dprintk("%s: exit with status = %d res->csr_status %d\n", __func__,
>  		ntohl(status), ntohl(res->csr_status));
>  	return status;
>  }
>  
> -__be32 nfs4_callback_recallany(struct cb_recallanyargs *args, void *dummy)
> +__be32 nfs4_callback_recallany(struct cb_recallanyargs *args, void *dummy,
> +			       struct cb_process_state *cps)
>  {
> -	struct nfs_client *clp;
>  	__be32 status;
>  	fmode_t flags = 0;
>  
>  	status = htonl(NFS4ERR_OP_NOT_IN_SESSION);
> -	clp = nfs_find_client(args->craa_addr, 4);
> -	if (clp == NULL)
> +	if (!cps->clp) /* set in cb_sequence */
>  		goto out;
>  
>  	dprintk("NFS: RECALL_ANY callback request from %s\n",
> -		rpc_peeraddr2str(clp->cl_rpcclient, RPC_DISPLAY_ADDR));
> +		rpc_peeraddr2str(cps->clp->cl_rpcclient, RPC_DISPLAY_ADDR));
>  
>  	if (test_bit(RCA4_TYPE_MASK_RDATA_DLG, (const unsigned long *)
>  		     &args->craa_type_mask))
> @@ -346,7 +312,7 @@ __be32 nfs4_callback_recallany(struct cb_recallanyargs *args, void *dummy)
>  		flags |= FMODE_WRITE;
>  
>  	if (flags)
> -		nfs_expire_all_delegation_types(clp, flags);
> +		nfs_expire_all_delegation_types(cps->clp, flags);
>  	status = htonl(NFS4_OK);
>  out:
>  	dprintk("%s: exit with status = %d\n", __func__, ntohl(status));
> @@ -354,36 +320,33 @@ out:
>  }
>  
>  /* Reduce the fore channel's max_slots to the target value */
> -__be32 nfs4_callback_recallslot(struct cb_recallslotargs *args, void *dummy)
> +__be32 nfs4_callback_recallslot(struct cb_recallslotargs *args, void *dummy,
> +				struct cb_process_state *cps)
>  {
> -	struct nfs_client *clp;
>  	struct nfs4_slot_table *fc_tbl;
>  	__be32 status;
>  
>  	status = htonl(NFS4ERR_OP_NOT_IN_SESSION);
> -	clp = nfs_find_client(args->crsa_addr, 4);
> -	if (clp == NULL)
> +	if (!cps->clp) /* set in cb_sequence */
>  		goto out;
>  
>  	dprintk("NFS: CB_RECALL_SLOT request from %s target max slots %d\n",
> -		rpc_peeraddr2str(clp->cl_rpcclient, RPC_DISPLAY_ADDR),
> +		rpc_peeraddr2str(cps->clp->cl_rpcclient, RPC_DISPLAY_ADDR),
>  		args->crsa_target_max_slots);
>  
> -	fc_tbl = &clp->cl_session->fc_slot_table;
> +	fc_tbl = &cps->clp->cl_session->fc_slot_table;
>  
>  	status = htonl(NFS4ERR_BAD_HIGH_SLOT);
>  	if (args->crsa_target_max_slots > fc_tbl->max_slots ||
>  	    args->crsa_target_max_slots < 1)
> -		goto out_putclient;
> +		goto out;
>  
>  	status = htonl(NFS4_OK);
>  	if (args->crsa_target_max_slots == fc_tbl->max_slots)
> -		goto out_putclient;
> +		goto out;
>  
>  	fc_tbl->target_max_slots = args->crsa_target_max_slots;
> -	nfs41_handle_recall_slot(clp);
> -out_putclient:
> -	nfs_put_client(clp);	/* balance nfs_find_client */
> +	nfs41_handle_recall_slot(cps->clp);
>  out:
>  	dprintk("%s: exit with status = %d\n", __func__, ntohl(status));
>  	return status;
> diff --git a/fs/nfs/callback_xdr.c b/fs/nfs/callback_xdr.c
> index 05af212..85cbb8f 100644
> --- a/fs/nfs/callback_xdr.c
> +++ b/fs/nfs/callback_xdr.c
> @@ -10,8 +10,10 @@
>  #include <linux/nfs4.h>
>  #include <linux/nfs_fs.h>
>  #include <linux/slab.h>
> +#include <linux/sunrpc/bc_xprt.h>
>  #include "nfs4_fs.h"
>  #include "callback.h"
> +#include "internal.h"
>  
>  #define CB_OP_TAGLEN_MAXSZ	(512)
>  #define CB_OP_HDR_RES_MAXSZ	(2 + CB_OP_TAGLEN_MAXSZ)
> @@ -33,7 +35,8 @@
>  /* Internal error code */
>  #define NFS4ERR_RESOURCE_HDR	11050
>  
> -typedef __be32 (*callback_process_op_t)(void *, void *);
> +typedef __be32 (*callback_process_op_t)(void *, void *,
> +					struct cb_process_state *);
>  typedef __be32 (*callback_decode_arg_t)(struct svc_rqst *, struct xdr_stream *, void *);
>  typedef __be32 (*callback_encode_res_t)(struct svc_rqst *, struct xdr_stream *, void *);
>  
> @@ -160,7 +163,7 @@ static __be32 decode_compound_hdr_arg(struct xdr_stream *xdr, struct cb_compound
>  	hdr->minorversion = ntohl(*p++);
>  	/* Check minor version is zero or one. */
>  	if (hdr->minorversion <= 1) {
> -		p++;	/* skip callback_ident */
> +		hdr->cb_ident = ntohl(*p++); /* ignored by v4.1 */
>  	} else {
>  		printk(KERN_WARNING "%s: NFSv4 server callback with "
>  			"illegal minor version %u!\n",
> @@ -621,7 +624,8 @@ preprocess_nfs4_op(unsigned int op_nr, struct callback_op **op)
>  static __be32 process_op(uint32_t minorversion, int nop,
>  		struct svc_rqst *rqstp,
>  		struct xdr_stream *xdr_in, void *argp,
> -		struct xdr_stream *xdr_out, void *resp, int* drc_status)
> +		struct xdr_stream *xdr_out, void *resp,
> +		struct cb_process_state *cps)
>  {
>  	struct callback_op *op = &callback_ops[0];
>  	unsigned int op_nr;
> @@ -644,8 +648,8 @@ static __be32 process_op(uint32_t minorversion, int nop,
>  	if (status)
>  		goto encode_hdr;
>  
> -	if (*drc_status) {
> -		status = *drc_status;
> +	if (cps->drc_status) {
> +		status = cps->drc_status;
>  		goto encode_hdr;
>  	}
>  
> @@ -653,16 +657,10 @@ static __be32 process_op(uint32_t minorversion, int nop,
>  	if (maxlen > 0 && maxlen < PAGE_SIZE) {
>  		status = op->decode_args(rqstp, xdr_in, argp);
>  		if (likely(status == 0))
> -			status = op->process_op(argp, resp);
> +			status = op->process_op(argp, resp, cps);
>  	} else
>  		status = htonl(NFS4ERR_RESOURCE);
>  
> -	/* Only set by OP_CB_SEQUENCE processing */
> -	if (status == htonl(NFS4ERR_RETRY_UNCACHED_REP)) {
> -		*drc_status = status;
> -		status = 0;
> -	}
> -
>  encode_hdr:
>  	res = encode_op_hdr(xdr_out, op_nr, status);
>  	if (unlikely(res))
> @@ -681,8 +679,11 @@ static __be32 nfs4_callback_compound(struct svc_rqst *rqstp, void *argp, void *r
>  	struct cb_compound_hdr_arg hdr_arg = { 0 };
>  	struct cb_compound_hdr_res hdr_res = { NULL };
>  	struct xdr_stream xdr_in, xdr_out;
> -	__be32 *p;
> -	__be32 status, drc_status = 0;
> +	__be32 *p, status;
> +	struct cb_process_state cps = {
> +		.drc_status = 0,
> +		.clp = NULL,
> +	};
>  	unsigned int nops = 0;
>  
>  	dprintk("%s: start\n", __func__);
> @@ -696,6 +697,14 @@ static __be32 nfs4_callback_compound(struct svc_rqst *rqstp, void *argp, void *r
>  	if (status == __constant_htonl(NFS4ERR_RESOURCE))
>  		return rpc_garbage_args;
>  
> +	if (hdr_arg.minorversion == 0) {
> +		cps.clp = nfs4_find_client_ident(svc_addr(rqstp),
> +						 hdr_arg.cb_ident);
> +		if (!cps.clp)
> +			return rpc_drop_reply;
> +	} else
> +		cps.svc_sid = bc_xprt_sid(rqstp);
> +
>  	hdr_res.taglen = hdr_arg.taglen;
>  	hdr_res.tag = hdr_arg.tag;
>  	if (encode_compound_hdr_res(&xdr_out, &hdr_res) != 0)
> @@ -703,7 +712,7 @@ static __be32 nfs4_callback_compound(struct svc_rqst *rqstp, void *argp, void *r
>  
>  	while (status == 0 && nops != hdr_arg.nops) {
>  		status = process_op(hdr_arg.minorversion, nops, rqstp,
> -				    &xdr_in, argp, &xdr_out, resp, &drc_status);
> +				    &xdr_in, argp, &xdr_out, resp, &cps);
>  		nops++;
>  	}
>  
> @@ -716,6 +725,7 @@ static __be32 nfs4_callback_compound(struct svc_rqst *rqstp, void *argp, void *r
>  
>  	*hdr_res.status = status;
>  	*hdr_res.nops = htonl(nops);
> +	nfs_put_client(cps.clp);
>  	dprintk("%s: done, status = %u\n", __func__, ntohl(status));
>  	return rpc_success;
>  }
> diff --git a/fs/nfs/client.c b/fs/nfs/client.c
> index 855add6..5443e0e 100644
> --- a/fs/nfs/client.c
> +++ b/fs/nfs/client.c
> @@ -359,29 +359,53 @@ static int nfs_sockaddr_cmp(const struct sockaddr *sa1,
>  	return 0;
>  }
>  
> +/* Common match routine for v4.0 and v4.1 callback services */
> +bool
> +nfs4_cb_match_client(const struct sockaddr *addr, struct nfs_client *clp,
> +		     u32 minorversion)
> +{
> +	struct sockaddr *clap = (struct sockaddr *)&clp->cl_addr;
> +
> +	/* Don't match clients that failed to initialise */
> +	if (!(clp->cl_cons_state == NFS_CS_READY ||
> +	    clp->cl_cons_state == NFS_CS_SESSION_INITING))
> +		return false;
> +
> +	/* Match the version and minorversion */
> +	if (clp->rpc_ops->version != 4 ||
> +	    clp->cl_minorversion != minorversion)
> +		return false;
> +
> +	/* Match only the IP address, not the port number */
> +	if (!nfs_sockaddr_match_ipaddr(addr, clap))
> +		return false;
> +
> +	return true;
> +}
> +
>  /*
> - * Find a client by IP address and protocol version
> - * - returns NULL if no such client
> + * NFSv4.0 callback thread helper
> + *
> + * Find a client by IP address, protocol version, minorversion, and callback
> + * identifier.
> + *
> + * Called from the pg_authenticate method with a zero callback_ident
> + * because the callback identifier has not been decoded.
> + *
> + * Returns NULL if no such client
>   */
> -struct nfs_client *nfs_find_client(const struct sockaddr *addr, u32 nfsversion)
> +struct nfs_client *
> +nfs4_find_client_ident(const struct sockaddr *addr, u32 callback_ident)
>  {
>  	struct nfs_client *clp;
>  
>  	spin_lock(&nfs_client_lock);
>  	list_for_each_entry(clp, &nfs_client_list, cl_share_link) {
> -		struct sockaddr *clap = (struct sockaddr *)&clp->cl_addr;
> -
> -		/* Don't match clients that failed to initialise properly */
> -		if (!(clp->cl_cons_state == NFS_CS_READY ||
> -		      clp->cl_cons_state == NFS_CS_SESSION_INITING))
> -			continue;
> -
> -		/* Different NFS versions cannot share the same nfs_client */
> -		if (clp->rpc_ops->version != nfsversion)
> +		if (nfs4_cb_match_client(addr, clp, 0) == false)
>  			continue;
>  
> -		/* Match only the IP address, not the port number */
> -		if (!nfs_sockaddr_match_ipaddr(addr, clap))
> +		/* Match the non-zero callback identifier */
> +		if (callback_ident != 0 && clp->cl_cb_ident != callback_ident)
>  			continue;
>  
>  		atomic_inc(&clp->cl_count);
> @@ -392,29 +416,36 @@ struct nfs_client *nfs_find_client(const struct sockaddr *addr, u32 nfsversion)
>  	return NULL;
>  }
>  
> +#if defined(CONFIG_NFS_V4_1)
>  /*
> - * Find a client by IP address and protocol version
> - * - returns NULL if no such client
> + * NFSv4.1 callback thread helper
> + * For CB_COMPOUND calls, find a client by IP address, protocol version,
> + * minorversion, and sessionID
> + *
> + * CREATE_SESSION triggers a CB_NULL ping from servers. The callback service
> + * sessionid can only be set after the CREATE_SESSION return, so a CB_NULL
> + * can arrive before the callback sessionid is set. For CB_NULL calls,
> + * find a client by IP address protocol version, and minorversion.
> + *
> + * Returns NULL if no such client
>   */
> -struct nfs_client *nfs_find_client_next(struct nfs_client *clp)
> +struct nfs_client *
> +nfs4_find_client_sessionid(const struct sockaddr *addr,
> +			   struct nfs4_sessionid *sid, int is_cb_compound)
>  {
> -	struct sockaddr *sap = (struct sockaddr *)&clp->cl_addr;
> -	u32 nfsvers = clp->rpc_ops->version;
> +	struct nfs_client *clp;
>  
>  	spin_lock(&nfs_client_lock);
> -	list_for_each_entry_continue(clp, &nfs_client_list, cl_share_link) {
> -		struct sockaddr *clap = (struct sockaddr *)&clp->cl_addr;
> -
> -		/* Don't match clients that failed to initialise properly */
> -		if (clp->cl_cons_state != NFS_CS_READY)
> +	list_for_each_entry(clp, &nfs_client_list, cl_share_link) {
> +		if (nfs4_cb_match_client(addr, clp, 1) == false)
>  			continue;
>  
> -		/* Different NFS versions cannot share the same nfs_client */
> -		if (clp->rpc_ops->version != nfsvers)
> +		if (!nfs4_has_session(clp))
>  			continue;
>  
> -		/* Match only the IP address, not the port number */
> -		if (!nfs_sockaddr_match_ipaddr(sap, clap))
> +		/* Match sessionid unless cb_null call */
> +		if (is_cb_compound && (memcmp(clp->cl_session->sess_id.data,
> +		    sid->data, NFS4_MAX_SESSIONID_LEN) != 0))
>  			continue;
>  
>  		atomic_inc(&clp->cl_count);
> @@ -425,6 +456,16 @@ struct nfs_client *nfs_find_client_next(struct nfs_client *clp)
>  	return NULL;
>  }
>  
> +#else /* CONFIG_NFS_V4_1 */
> +
> +struct nfs_client *
> +nfs4_find_client_sessionid(const struct sockaddr *addr,
> +			   struct nfs4_sessionid *sid, int is_cb_compound)
> +{
> +	return NULL;
> +}
> +
> +#endif /* CONFIG_NFS_V4_1 */
>  /*
>   * Find an nfs_client on the list that matches the initialisation data
>   * that is supplied.
> diff --git a/fs/nfs/internal.h b/fs/nfs/internal.h
> index e6356b7..f65602c 100644
> --- a/fs/nfs/internal.h
> +++ b/fs/nfs/internal.h
> @@ -129,8 +129,11 @@ extern void nfs_umount(const struct nfs_mount_request *info);
>  extern struct rpc_program nfs_program;
>  
>  extern void nfs_put_client(struct nfs_client *);
> -extern struct nfs_client *nfs_find_client(const struct sockaddr *, u32);
> -extern struct nfs_client *nfs_find_client_next(struct nfs_client *);
> +extern struct nfs_client *nfs4_find_client_ident(const struct sockaddr *,
> +						    u32);
> +extern struct nfs_client *
> +nfs4_find_client_sessionid(const struct sockaddr *, struct nfs4_sessionid *,
> +			   int);
>  extern struct nfs_server *nfs_create_server(
>  					const struct nfs_parsed_mount_data *,
>  					struct nfs_fh *);
> diff --git a/include/linux/sunrpc/bc_xprt.h b/include/linux/sunrpc/bc_xprt.h
> index 7c91260..2c60e09 100644
> --- a/include/linux/sunrpc/bc_xprt.h
> +++ b/include/linux/sunrpc/bc_xprt.h
> @@ -47,6 +47,14 @@ static inline int svc_is_backchannel(const struct svc_rqst *rqstp)
>  		return 1;
>  	return 0;
>  }
> +static inline struct nfs4_sessionid *bc_xprt_sid(struct svc_rqst *rqstp)
> +{
> +	if (svc_is_backchannel(rqstp))
> +		return (struct nfs4_sessionid *)
> +					rqstp->rq_server->bc_xprt->xpt_bc_sid;
> +	return NULL;
> +}
> +
>  #else /* CONFIG_NFS_V4_1 */
>  static inline int xprt_setup_backchannel(struct rpc_xprt *xprt,
>  					 unsigned int min_reqs)
> @@ -59,6 +67,11 @@ static inline int svc_is_backchannel(const struct svc_rqst *rqstp)
>  	return 0;
>  }
>  
> +static inline struct nfs4_sessionid *bc_xprt_sid(struct svc_rqst *rqstp)
> +{
> +	return NULL;
> +}
> +
>  static inline void xprt_free_bc_request(struct rpc_rqst *req)
>  {
>  }
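
(Pulling the quoted hunks together, the client reference now travels through
a CB_COMPOUND roughly as follows; this fragment is assembled from the patch
above, with comments added, and is meant only as a summary of the flow.)

	struct cb_process_state cps = {
		.drc_status = 0,
		.clp = NULL,
	};

	/* v4.0: the CB_COMPOUND header's callback_ident names the one client */
	if (hdr_arg.minorversion == 0) {
		cps.clp = nfs4_find_client_ident(svc_addr(rqstp),
						 hdr_arg.cb_ident);
		if (!cps.clp)
			return rpc_drop_reply;
	} else	/* v4.1: CB_SEQUENCE matches the sessionid and sets cps.clp */
		cps.svc_sid = bc_xprt_sid(rqstp);

	while (status == 0 && nops != hdr_arg.nops) {
		status = process_op(hdr_arg.minorversion, nops, rqstp,
				    &xdr_in, argp, &xdr_out, resp, &cps);
		nops++;
	}

	nfs_put_client(cps.clp);	/* reference held across the whole compound */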

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH_V4 6/9] NFS reference nfs_client across cb_compound processing
  2010-12-17 18:20           ` [PATCH_V4 6/9] NFS reference nfs_client across cb_compound processing andros
  2010-12-17 18:20             ` [PATCH_V4 7/9] NFS RPC_AUTH_GSS unsupported on v4.1 back channel andros
  2010-12-20 14:05             ` [PATCH_V4 6/9] NFS reference nfs_client across cb_compound processing Benny Halevy
@ 2010-12-20 15:44             ` Fred Isaman
  2 siblings, 0 replies; 21+ messages in thread
From: Fred Isaman @ 2010-12-20 15:44 UTC (permalink / raw)
  To: andros; +Cc: trond.myklebust, bfields, linux-nfs

On Fri, Dec 17, 2010 at 1:20 PM,  <andros@netapp.com> wrote:
> From: Andy Adamson <andros@netapp.com>
>
> In the NFS layer, V4.0 clients are found using the callback_ident field in the
> CB_COMPOUND header.  V4.1 clients are found using the sessionID in the
> CB_SEQUENCE operation which is also compared against the sessionID assigned at
> back channel thread creation.
>
> Each of these methods finds the one and only nfs_client associated
> with the incoming callback request - so nfs_find_client_next is not needed.
>
> Pass the referenced nfs_client to each CB_COMPOUND operation being processed
> via the new cb_process_state structure.
>
> In the RPC layer, the v4.0 callback identifier is not known for the
> pg_authenticate call, so a zero value (unset) is used to find the nfs_client.
> The sessionid for the sessions based callback service has (usually) not been
> set for the pg_authenticate on a CB_NULL call, so the sessionid is not used to
> find the client in pg_authenticate for CB_NULL calls. It is used for
> pg_authenticate CB_COMPOUND calls.
>
> Use the new cb_process_state struct to move the NFS4ERR_RETRY_UNCACHED_REP
> processing from process_op into nfs4_callback_sequence where it belongs.
>
> Signed-off-by: Andy Adamson <andros@netapp.com>
> ---
>  fs/nfs/callback.c              |   21 ++++-
>  fs/nfs/callback.h              |   24 +++++-
>  fs/nfs/callback_proc.c         |  167 ++++++++++++++++------------------------
>  fs/nfs/callback_xdr.c          |   40 ++++++----
>  fs/nfs/client.c                |   97 ++++++++++++++++-------
>  fs/nfs/internal.h              |    7 +-
>  include/linux/sunrpc/bc_xprt.h |   13 +++
>  7 files changed, 214 insertions(+), 155 deletions(-)
>
> diff --git a/fs/nfs/callback.c b/fs/nfs/callback.c
> index b2fab85..1033eae 100644
> --- a/fs/nfs/callback.c
> +++ b/fs/nfs/callback.c
> @@ -16,9 +16,7 @@
>  #include <linux/freezer.h>
>  #include <linux/kthread.h>
>  #include <linux/sunrpc/svcauth_gss.h>
> -#if defined(CONFIG_NFS_V4_1)
>  #include <linux/sunrpc/bc_xprt.h>
> -#endif
>
>  #include <net/inet_sock.h>
>
> @@ -388,6 +386,23 @@ static int check_gss_callback_principal(struct nfs_client *clp,
>        return SVC_OK;
>  }
>
> +/* pg_authenticate method helper */
> +static struct nfs_client *nfs_cb_find_client(struct svc_rqst *rqstp)
> +{
> +       struct nfs4_sessionid *sessionid = bc_xprt_sid(rqstp);
> +       int is_cb_compound = rqstp->rq_proc == CB_COMPOUND ? 1 : 0;
> +
> +       dprintk("--> %s rq_proc %d\n", __func__, rqstp->rq_proc);
> +       if (svc_is_backchannel(rqstp))
> +               /* Sessionid (usually) set after CB_NULL ping */
> +               return nfs4_find_client_sessionid(svc_addr(rqstp), sessionid,
> +                                                 is_cb_compound);
> +       else
> +               /* No callback identifier in pg_authenticate */
> +               return nfs4_find_client_ident(svc_addr(rqstp), 0);
> +}
> +
> +/* pg_authenticate method for nfsv4 callback threads. */
>  static int nfs_callback_authenticate(struct svc_rqst *rqstp)
>  {
>        struct nfs_client *clp;
> @@ -395,7 +410,7 @@ static int nfs_callback_authenticate(struct svc_rqst *rqstp)
>        int ret = SVC_OK;
>
>        /* Don't talk to strangers */
> -       clp = nfs_find_client(svc_addr(rqstp), 4);
> +       clp = nfs_cb_find_client(rqstp);
>        if (clp == NULL)
>                return SVC_DROP;
>
> diff --git a/fs/nfs/callback.h b/fs/nfs/callback.h
> index 58d61a8..d700d47 100644
> --- a/fs/nfs/callback.h
> +++ b/fs/nfs/callback.h
> @@ -34,10 +34,17 @@ enum nfs4_callback_opnum {
>        OP_CB_ILLEGAL = 10044,
>  };
>
> +struct cb_process_state {
> +       __be32                  drc_status;
> +       struct nfs_client       *clp;
> +       struct nfs4_sessionid   *svc_sid; /* v4.1 callback service sessionid */
> +};
> +
>  struct cb_compound_hdr_arg {
>        unsigned int taglen;
>        const char *tag;
>        unsigned int minorversion;
> +       unsigned int cb_ident; /* v4.0 callback identifier */
>        unsigned nops;
>  };
>
> @@ -104,7 +111,8 @@ struct cb_sequenceres {
>  };
>
>  extern unsigned nfs4_callback_sequence(struct cb_sequenceargs *args,
> -                                      struct cb_sequenceres *res);
> +                                      struct cb_sequenceres *res,
> +                                      struct cb_process_state *cps);
>
>  extern int nfs41_validate_delegation_stateid(struct nfs_delegation *delegation,
>                                             const nfs4_stateid *stateid);
> @@ -118,19 +126,25 @@ struct cb_recallanyargs {
>        uint32_t        craa_type_mask;
>  };
>
> -extern unsigned nfs4_callback_recallany(struct cb_recallanyargs *args, void *dummy);
> +extern unsigned nfs4_callback_recallany(struct cb_recallanyargs *args,
> +                                       void *dummy,
> +                                       struct cb_process_state *cps);
>
>  struct cb_recallslotargs {
>        struct sockaddr *crsa_addr;
>        uint32_t        crsa_target_max_slots;
>  };
>  extern unsigned nfs4_callback_recallslot(struct cb_recallslotargs *args,
> -                                         void *dummy);
> +                                        void *dummy,
> +                                        struct cb_process_state *cps);
>
>  #endif /* CONFIG_NFS_V4_1 */
>
> -extern __be32 nfs4_callback_getattr(struct cb_getattrargs *args, struct cb_getattrres *res);
> -extern __be32 nfs4_callback_recall(struct cb_recallargs *args, void *dummy);
> +extern __be32 nfs4_callback_getattr(struct cb_getattrargs *args,
> +                                   struct cb_getattrres *res,
> +                                   struct cb_process_state *cps);
> +extern __be32 nfs4_callback_recall(struct cb_recallargs *args, void *dummy,
> +                                  struct cb_process_state *cps);
>
>  #ifdef CONFIG_NFS_V4
>  extern int nfs_callback_up(u32 minorversion, struct rpc_xprt *xprt);
> diff --git a/fs/nfs/callback_proc.c b/fs/nfs/callback_proc.c
> index 2950fca..b70e46d 100644
> --- a/fs/nfs/callback_proc.c
> +++ b/fs/nfs/callback_proc.c
> @@ -16,26 +16,28 @@
>  #ifdef NFS_DEBUG
>  #define NFSDBG_FACILITY NFSDBG_CALLBACK
>  #endif
> -
> -__be32 nfs4_callback_getattr(struct cb_getattrargs *args, struct cb_getattrres *res)
> +
> +__be32 nfs4_callback_getattr(struct cb_getattrargs *args,
> +                            struct cb_getattrres *res,
> +                            struct cb_process_state *cps)
>  {
> -       struct nfs_client *clp;
>        struct nfs_delegation *delegation;
>        struct nfs_inode *nfsi;
>        struct inode *inode;
>
> +       res->status = htonl(NFS4ERR_OP_NOT_IN_SESSION);
> +       if (!cps->clp) /* Always set for v4.0. Set in cb_sequence for v4.1 */
> +               goto out;
> +
>        res->bitmap[0] = res->bitmap[1] = 0;
>        res->status = htonl(NFS4ERR_BADHANDLE);
> -       clp = nfs_find_client(args->addr, 4);
> -       if (clp == NULL)
> -               goto out;
>
>        dprintk("NFS: GETATTR callback request from %s\n",
> -               rpc_peeraddr2str(clp->cl_rpcclient, RPC_DISPLAY_ADDR));
> +               rpc_peeraddr2str(cps->clp->cl_rpcclient, RPC_DISPLAY_ADDR));
>
> -       inode = nfs_delegation_find_inode(clp, &args->fh);
> +       inode = nfs_delegation_find_inode(cps->clp, &args->fh);
>        if (inode == NULL)
> -               goto out_putclient;
> +               goto out;
>        nfsi = NFS_I(inode);
>        rcu_read_lock();
>        delegation = rcu_dereference(nfsi->delegation);
> @@ -55,49 +57,41 @@ __be32 nfs4_callback_getattr(struct cb_getattrargs *args, struct cb_getattrres *
>  out_iput:
>        rcu_read_unlock();
>        iput(inode);
> -out_putclient:
> -       nfs_put_client(clp);
>  out:
>        dprintk("%s: exit with status = %d\n", __func__, ntohl(res->status));
>        return res->status;
>  }
>
> -__be32 nfs4_callback_recall(struct cb_recallargs *args, void *dummy)
> +__be32 nfs4_callback_recall(struct cb_recallargs *args, void *dummy,
> +                           struct cb_process_state *cps)
>  {
> -       struct nfs_client *clp;
>        struct inode *inode;
>        __be32 res;
>
> -       res = htonl(NFS4ERR_BADHANDLE);
> -       clp = nfs_find_client(args->addr, 4);
> -       if (clp == NULL)
> +       res = htonl(NFS4ERR_OP_NOT_IN_SESSION);
> +       if (!cps->clp) /* Always set for v4.0. Set in cb_sequence for v4.1 */
>                goto out;
>
>        dprintk("NFS: RECALL callback request from %s\n",
> -               rpc_peeraddr2str(clp->cl_rpcclient, RPC_DISPLAY_ADDR));
> -
> -       do {
> -               struct nfs_client *prev = clp;
> -
> -               inode = nfs_delegation_find_inode(clp, &args->fh);
> -               if (inode != NULL) {
> -                       /* Set up a helper thread to actually return the delegation */
> -                       switch (nfs_async_inode_return_delegation(inode, &args->stateid)) {
> -                               case 0:
> -                                       res = 0;
> -                                       break;
> -                               case -ENOENT:
> -                                       if (res != 0)
> -                                               res = htonl(NFS4ERR_BAD_STATEID);
> -                                       break;
> -                               default:
> -                                       res = htonl(NFS4ERR_RESOURCE);
> -                       }
> -                       iput(inode);
> -               }
> -               clp = nfs_find_client_next(prev);
> -               nfs_put_client(prev);
> -       } while (clp != NULL);
> +               rpc_peeraddr2str(cps->clp->cl_rpcclient, RPC_DISPLAY_ADDR));
> +
> +       res = htonl(NFS4ERR_BADHANDLE);
> +       inode = nfs_delegation_find_inode(cps->clp, &args->fh);
> +       if (inode == NULL)
> +               goto out;
> +       /* Set up a helper thread to actually return the delegation */
> +       switch (nfs_async_inode_return_delegation(inode, &args->stateid)) {
> +       case 0:
> +               res = 0;
> +               break;
> +       case -ENOENT:
> +               if (res != 0)
> +                       res = htonl(NFS4ERR_BAD_STATEID);
> +               break;
> +       default:
> +               res = htonl(NFS4ERR_RESOURCE);
> +       }
> +       iput(inode);
>  out:
>        dprintk("%s: exit with status = %d\n", __func__, ntohl(res));
>        return res;
> @@ -185,42 +179,6 @@ validate_seqid(struct nfs4_slot_table *tbl, struct cb_sequenceargs * args)
>  }
>
>  /*
> - * Returns a pointer to a held 'struct nfs_client' that matches the server's
> - * address, major version number, and session ID.  It is the caller's
> - * responsibility to release the returned reference.
> - *
> - * Returns NULL if there are no connections with sessions, or if no session
> - * matches the one of interest.
> - */
> - static struct nfs_client *find_client_with_session(
> -       const struct sockaddr *addr, u32 nfsversion,
> -       struct nfs4_sessionid *sessionid)
> -{
> -       struct nfs_client *clp;
> -
> -       clp = nfs_find_client(addr, 4);
> -       if (clp == NULL)
> -               return NULL;
> -
> -       do {
> -               struct nfs_client *prev = clp;
> -
> -               if (clp->cl_session != NULL) {
> -                       if (memcmp(clp->cl_session->sess_id.data,
> -                                       sessionid->data,
> -                                       NFS4_MAX_SESSIONID_LEN) == 0) {
> -                               /* Returns a held reference to clp */
> -                               return clp;
> -                       }
> -               }
> -               clp = nfs_find_client_next(prev);
> -               nfs_put_client(prev);
> -       } while (clp != NULL);
> -
> -       return NULL;
> -}
> -
> -/*
>  * For each referring call triple, check the session's slot table for
>  * a match.  If the slot is in use and the sequence numbers match, the
>  * client is still waiting for a response to the original request.
> @@ -276,20 +234,28 @@ out:
>  }
>
>  __be32 nfs4_callback_sequence(struct cb_sequenceargs *args,
> -                               struct cb_sequenceres *res)
> +                             struct cb_sequenceres *res,
> +                             struct cb_process_state *cps)
>  {
>        struct nfs_client *clp;
>        int i;
>        __be32 status;
>
> +       cps->clp = NULL;
> +
>        status = htonl(NFS4ERR_BADSESSION);
> -       clp = find_client_with_session(args->csa_addr, 4, &args->csa_sessionid);
> +       /* Incoming session must match the callback session */
> +       if (memcmp(&args->csa_sessionid, cps->svc_sid, NFS4_MAX_SESSIONID_LEN))
> +               goto out;
> +
> +       clp = nfs4_find_client_sessionid(args->csa_addr,
> +                                        &args->csa_sessionid, 1);
>        if (clp == NULL)
>                goto out;
>
>        status = validate_seqid(&clp->cl_session->bc_slot_table, args);
>        if (status)
> -               goto out_putclient;
> +               goto out;
>
>        /*
>         * Check for pending referring calls.  If a match is found, a
> @@ -298,7 +264,7 @@ __be32 nfs4_callback_sequence(struct cb_sequenceargs *args,
>         */
>        if (referring_call_exists(clp, args->csa_nrclists, args->csa_rclists)) {
>                status = htonl(NFS4ERR_DELAY);
> -               goto out_putclient;
> +               goto out;
>        }
>
>        memcpy(&res->csr_sessionid, &args->csa_sessionid,
> @@ -307,36 +273,36 @@ __be32 nfs4_callback_sequence(struct cb_sequenceargs *args,
>        res->csr_slotid = args->csa_slotid;
>        res->csr_highestslotid = NFS41_BC_MAX_CALLBACKS - 1;
>        res->csr_target_highestslotid = NFS41_BC_MAX_CALLBACKS - 1;
> +       cps->clp = clp; /* put in nfs4_callback_compound */
>
> -out_putclient:
> -       nfs_put_client(clp);
>  out:
>        for (i = 0; i < args->csa_nrclists; i++)
>                kfree(args->csa_rclists[i].rcl_refcalls);
>        kfree(args->csa_rclists);
>
> -       if (status == htonl(NFS4ERR_RETRY_UNCACHED_REP))
> -               res->csr_status = 0;
> -       else
> +       if (status == htonl(NFS4ERR_RETRY_UNCACHED_REP)) {
> +               cps->drc_status = status;
> +               status = 0;
> +       } else
>                res->csr_status = status;
> +
>        dprintk("%s: exit with status = %d res->csr_status %d\n", __func__,
>                ntohl(status), ntohl(res->csr_status));
>        return status;
>  }
>
> -__be32 nfs4_callback_recallany(struct cb_recallanyargs *args, void *dummy)
> +__be32 nfs4_callback_recallany(struct cb_recallanyargs *args, void *dummy,
> +                              struct cb_process_state *cps)
>  {
> -       struct nfs_client *clp;
>        __be32 status;
>        fmode_t flags = 0;
>
>        status = htonl(NFS4ERR_OP_NOT_IN_SESSION);
> -       clp = nfs_find_client(args->craa_addr, 4);
> -       if (clp == NULL)
> +       if (!cps->clp) /* set in cb_sequence */
>                goto out;
>
>        dprintk("NFS: RECALL_ANY callback request from %s\n",
> -               rpc_peeraddr2str(clp->cl_rpcclient, RPC_DISPLAY_ADDR));
> +               rpc_peeraddr2str(cps->clp->cl_rpcclient, RPC_DISPLAY_ADDR));
>
>        if (test_bit(RCA4_TYPE_MASK_RDATA_DLG, (const unsigned long *)
>                     &args->craa_type_mask))
> @@ -346,7 +312,7 @@ __be32 nfs4_callback_recallany(struct cb_recallanyargs *args, void *dummy)
>                flags |= FMODE_WRITE;
>
>        if (flags)
> -               nfs_expire_all_delegation_types(clp, flags);
> +               nfs_expire_all_delegation_types(cps->clp, flags);
>        status = htonl(NFS4_OK);
>  out:
>        dprintk("%s: exit with status = %d\n", __func__, ntohl(status));
> @@ -354,36 +320,33 @@ out:
>  }
>
>  /* Reduce the fore channel's max_slots to the target value */
> -__be32 nfs4_callback_recallslot(struct cb_recallslotargs *args, void *dummy)
> +__be32 nfs4_callback_recallslot(struct cb_recallslotargs *args, void *dummy,
> +                               struct cb_process_state *cps)
>  {
> -       struct nfs_client *clp;
>        struct nfs4_slot_table *fc_tbl;
>        __be32 status;
>
>        status = htonl(NFS4ERR_OP_NOT_IN_SESSION);
> -       clp = nfs_find_client(args->crsa_addr, 4);
> -       if (clp == NULL)
> +       if (!cps->clp) /* set in cb_sequence */
>                goto out;
>
>        dprintk("NFS: CB_RECALL_SLOT request from %s target max slots %d\n",
> -               rpc_peeraddr2str(clp->cl_rpcclient, RPC_DISPLAY_ADDR),
> +               rpc_peeraddr2str(cps->clp->cl_rpcclient, RPC_DISPLAY_ADDR),
>                args->crsa_target_max_slots);
>
> -       fc_tbl = &clp->cl_session->fc_slot_table;
> +       fc_tbl = &cps->clp->cl_session->fc_slot_table;
>
>        status = htonl(NFS4ERR_BAD_HIGH_SLOT);
>        if (args->crsa_target_max_slots > fc_tbl->max_slots ||
>            args->crsa_target_max_slots < 1)
> -               goto out_putclient;
> +               goto out;
>
>        status = htonl(NFS4_OK);
>        if (args->crsa_target_max_slots == fc_tbl->max_slots)
> -               goto out_putclient;
> +               goto out;
>
>        fc_tbl->target_max_slots = args->crsa_target_max_slots;
> -       nfs41_handle_recall_slot(clp);
> -out_putclient:
> -       nfs_put_client(clp);    /* balance nfs_find_client */
> +       nfs41_handle_recall_slot(cps->clp);
>  out:
>        dprintk("%s: exit with status = %d\n", __func__, ntohl(status));
>        return status;
> diff --git a/fs/nfs/callback_xdr.c b/fs/nfs/callback_xdr.c
> index 05af212..85cbb8f 100644
> --- a/fs/nfs/callback_xdr.c
> +++ b/fs/nfs/callback_xdr.c
> @@ -10,8 +10,10 @@
>  #include <linux/nfs4.h>
>  #include <linux/nfs_fs.h>
>  #include <linux/slab.h>
> +#include <linux/sunrpc/bc_xprt.h>
>  #include "nfs4_fs.h"
>  #include "callback.h"
> +#include "internal.h"
>
>  #define CB_OP_TAGLEN_MAXSZ     (512)
>  #define CB_OP_HDR_RES_MAXSZ    (2 + CB_OP_TAGLEN_MAXSZ)
> @@ -33,7 +35,8 @@
>  /* Internal error code */
>  #define NFS4ERR_RESOURCE_HDR   11050
>
> -typedef __be32 (*callback_process_op_t)(void *, void *);
> +typedef __be32 (*callback_process_op_t)(void *, void *,
> +                                       struct cb_process_state *);
>  typedef __be32 (*callback_decode_arg_t)(struct svc_rqst *, struct xdr_stream *, void *);
>  typedef __be32 (*callback_encode_res_t)(struct svc_rqst *, struct xdr_stream *, void *);
>
> @@ -160,7 +163,7 @@ static __be32 decode_compound_hdr_arg(struct xdr_stream *xdr, struct cb_compound
>        hdr->minorversion = ntohl(*p++);
>        /* Check minor version is zero or one. */
>        if (hdr->minorversion <= 1) {
> -               p++;    /* skip callback_ident */
> +               hdr->cb_ident = ntohl(*p++); /* ignored by v4.1 */
>        } else {
>                printk(KERN_WARNING "%s: NFSv4 server callback with "
>                        "illegal minor version %u!\n",
> @@ -621,7 +624,8 @@ preprocess_nfs4_op(unsigned int op_nr, struct callback_op **op)
>  static __be32 process_op(uint32_t minorversion, int nop,
>                struct svc_rqst *rqstp,
>                struct xdr_stream *xdr_in, void *argp,
> -               struct xdr_stream *xdr_out, void *resp, int* drc_status)
> +               struct xdr_stream *xdr_out, void *resp,
> +               struct cb_process_state *cps)
>  {
>        struct callback_op *op = &callback_ops[0];
>        unsigned int op_nr;
> @@ -644,8 +648,8 @@ static __be32 process_op(uint32_t minorversion, int nop,
>        if (status)
>                goto encode_hdr;
>
> -       if (*drc_status) {
> -               status = *drc_status;
> +       if (cps->drc_status) {
> +               status = cps->drc_status;
>                goto encode_hdr;
>        }
>
> @@ -653,16 +657,10 @@ static __be32 process_op(uint32_t minorversion, int nop,
>        if (maxlen > 0 && maxlen < PAGE_SIZE) {
>                status = op->decode_args(rqstp, xdr_in, argp);
>                if (likely(status == 0))
> -                       status = op->process_op(argp, resp);
> +                       status = op->process_op(argp, resp, cps);
>        } else
>                status = htonl(NFS4ERR_RESOURCE);
>
> -       /* Only set by OP_CB_SEQUENCE processing */
> -       if (status == htonl(NFS4ERR_RETRY_UNCACHED_REP)) {
> -               *drc_status = status;
> -               status = 0;
> -       }
> -
>  encode_hdr:
>        res = encode_op_hdr(xdr_out, op_nr, status);
>        if (unlikely(res))
> @@ -681,8 +679,11 @@ static __be32 nfs4_callback_compound(struct svc_rqst *rqstp, void *argp, void *r
>        struct cb_compound_hdr_arg hdr_arg = { 0 };
>        struct cb_compound_hdr_res hdr_res = { NULL };
>        struct xdr_stream xdr_in, xdr_out;
> -       __be32 *p;
> -       __be32 status, drc_status = 0;
> +       __be32 *p, status;
> +       struct cb_process_state cps = {
> +               .drc_status = 0,
> +               .clp = NULL,
> +       };
>        unsigned int nops = 0;
>
>        dprintk("%s: start\n", __func__);
> @@ -696,6 +697,14 @@ static __be32 nfs4_callback_compound(struct svc_rqst *rqstp, void *argp, void *r
>        if (status == __constant_htonl(NFS4ERR_RESOURCE))
>                return rpc_garbage_args;
>
> +       if (hdr_arg.minorversion == 0) {
> +               cps.clp = nfs4_find_client_ident(svc_addr(rqstp),
> +                                                hdr_arg.cb_ident);
> +               if (!cps.clp)
> +                       return rpc_drop_reply;
> +       } else
> +               cps.svc_sid = bc_xprt_sid(rqstp);
> +
>        hdr_res.taglen = hdr_arg.taglen;
>        hdr_res.tag = hdr_arg.tag;
>        if (encode_compound_hdr_res(&xdr_out, &hdr_res) != 0)
> @@ -703,7 +712,7 @@ static __be32 nfs4_callback_compound(struct svc_rqst *rqstp, void *argp, void *r
>
>        while (status == 0 && nops != hdr_arg.nops) {
>                status = process_op(hdr_arg.minorversion, nops, rqstp,
> -                                   &xdr_in, argp, &xdr_out, resp, &drc_status);
> +                                   &xdr_in, argp, &xdr_out, resp, &cps);
>                nops++;
>        }
>
> @@ -716,6 +725,7 @@ static __be32 nfs4_callback_compound(struct svc_rqst *rqstp, void *argp, void *r
>
>        *hdr_res.status = status;
>        *hdr_res.nops = htonl(nops);
> +       nfs_put_client(cps.clp);
>        dprintk("%s: done, status = %u\n", __func__, ntohl(status));
>        return rpc_success;
>  }
> diff --git a/fs/nfs/client.c b/fs/nfs/client.c
> index 855add6..5443e0e 100644
> --- a/fs/nfs/client.c
> +++ b/fs/nfs/client.c
> @@ -359,29 +359,53 @@ static int nfs_sockaddr_cmp(const struct sockaddr *sa1,
>        return 0;
>  }
>
> +/* Common match routine for v4.0 and v4.1 callback services */
> +bool
> +nfs4_cb_match_client(const struct sockaddr *addr, struct nfs_client *clp,
> +                    u32 minorversion)
> +{
> +       struct sockaddr *clap = (struct sockaddr *)&clp->cl_addr;
> +
> +       /* Don't match clients that failed to initialise */
> +       if (!(clp->cl_cons_state == NFS_CS_READY ||
> +           clp->cl_cons_state == NFS_CS_SESSION_INITING))
> +               return false;
> +
> +       /* Match the version and minorversion */
> +       if (clp->rpc_ops->version != 4 ||
> +           clp->cl_minorversion != minorversion)
> +               return false;
> +
> +       /* Match only the IP address, not the port number */
> +       if (!nfs_sockaddr_match_ipaddr(addr, clap))
> +               return false;
> +
> +       return true;
> +}
> +
>  /*
> - * Find a client by IP address and protocol version
> - * - returns NULL if no such client
> + * NFSv4.0 callback thread helper
> + *
> + * Find a client by IP address, protocol version, minorversion, and callback
> + * identifier.
> + *
> + * Called from the pg_authenticate method with a zero callback_ident
> + * because the callback identifier has not been decoded.
> + *
> + * Returns NULL if no such client
>  */
> -struct nfs_client *nfs_find_client(const struct sockaddr *addr, u32 nfsversion)
> +struct nfs_client *
> +nfs4_find_client_ident(const struct sockaddr *addr, u32 callback_ident)
>  {
>        struct nfs_client *clp;
>
>        spin_lock(&nfs_client_lock);
>        list_for_each_entry(clp, &nfs_client_list, cl_share_link) {
> -               struct sockaddr *clap = (struct sockaddr *)&clp->cl_addr;
> -
> -               /* Don't match clients that failed to initialise properly */
> -               if (!(clp->cl_cons_state == NFS_CS_READY ||
> -                     clp->cl_cons_state == NFS_CS_SESSION_INITING))
> -                       continue;
> -
> -               /* Different NFS versions cannot share the same nfs_client */
> -               if (clp->rpc_ops->version != nfsversion)
> +               if (nfs4_cb_match_client(addr, clp, 0) == false)
>                        continue;
>
> -               /* Match only the IP address, not the port number */
> -               if (!nfs_sockaddr_match_ipaddr(addr, clap))
> +               /* Match the non-zero callback identifier */
> +               if (callback_ident != 0 && clp->cl_cb_ident != callback_ident)

This fails to compile with CONFIG_NFS_V4 turned off, as the cl_cb_ident
field exists only conditionally.

Fred
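
For context: cl_cb_ident is presumably declared in struct nfs_client under the
CONFIG_NFS_V4 guard in include/linux/nfs_fs_sb.h, while fs/nfs/client.c is also
built for v2/v3-only configurations, so the unconditional reference cannot
compile there. A minimal sketch of one way to keep such builds working -- the
guard placement below is illustrative only, not necessarily what was applied in
later versions of the series:

	/*
	 * Sketch only: assumes cl_cb_ident is visible only when
	 * CONFIG_NFS_V4 is defined, so the comparison must be
	 * compiled out of v2/v3-only builds.
	 */
#if defined(CONFIG_NFS_V4)
		/* Match the non-zero callback identifier */
		if (callback_ident != 0 && clp->cl_cb_ident != callback_ident)
			continue;
#endif

Alternatively, nfs4_cb_match_client() and nfs4_find_client_ident() could live
under a CONFIG_NFS_V4 section of client.c; either way, client.c cannot
reference the conditionally declared field unconditionally.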

>                        continue;
>
>                atomic_inc(&clp->cl_count);
> @@ -392,29 +416,36 @@ struct nfs_client *nfs_find_client(const struct sockaddr *addr, u32 nfsversion)
>        return NULL;
>  }
>
> +#if defined(CONFIG_NFS_V4_1)
>  /*
> - * Find a client by IP address and protocol version
> - * - returns NULL if no such client
> + * NFSv4.1 callback thread helper
> + * For CB_COMPOUND calls, find a client by IP address, protocol version,
> + * minorversion, and sessionID
> + *
> + * CREATE_SESSION triggers a CB_NULL ping from servers. The callback service
> + * sessionid can only be set after the CREATE_SESSION return, so a CB_NULL
> + * can arrive before the callback sessionid is set. For CB_NULL calls,
> + * find a client by IP address, protocol version, and minorversion.
> + *
> + * Returns NULL if no such client
>  */
> -struct nfs_client *nfs_find_client_next(struct nfs_client *clp)
> +struct nfs_client *
> +nfs4_find_client_sessionid(const struct sockaddr *addr,
> +                          struct nfs4_sessionid *sid, int is_cb_compound)
>  {
> -       struct sockaddr *sap = (struct sockaddr *)&clp->cl_addr;
> -       u32 nfsvers = clp->rpc_ops->version;
> +       struct nfs_client *clp;
>
>        spin_lock(&nfs_client_lock);
> -       list_for_each_entry_continue(clp, &nfs_client_list, cl_share_link) {
> -               struct sockaddr *clap = (struct sockaddr *)&clp->cl_addr;
> -
> -               /* Don't match clients that failed to initialise properly */
> -               if (clp->cl_cons_state != NFS_CS_READY)
> +       list_for_each_entry(clp, &nfs_client_list, cl_share_link) {
> +               if (nfs4_cb_match_client(addr, clp, 1) == false)
>                        continue;
>
> -               /* Different NFS versions cannot share the same nfs_client */
> -               if (clp->rpc_ops->version != nfsvers)
> +               if (!nfs4_has_session(clp))
>                        continue;
>
> -               /* Match only the IP address, not the port number */
> -               if (!nfs_sockaddr_match_ipaddr(sap, clap))
> +               /* Match sessionid unless cb_null call */
> +               if (is_cb_compound && (memcmp(clp->cl_session->sess_id.data,
> +                   sid->data, NFS4_MAX_SESSIONID_LEN) != 0))
>                        continue;
>
>                atomic_inc(&clp->cl_count);
> @@ -425,6 +456,16 @@ struct nfs_client *nfs_find_client_next(struct nfs_client *clp)
>        return NULL;
>  }
>
> +#else /* CONFIG_NFS_V4_1 */
> +
> +struct nfs_client *
> +nfs4_find_client_sessionid(const struct sockaddr *addr,
> +                          struct nfs4_sessionid *sid, int is_cb_compound)
> +{
> +       return NULL;
> +}
> +
> +#endif /* CONFIG_NFS_V4_1 */
>  /*
>  * Find an nfs_client on the list that matches the initialisation data
>  * that is supplied.
> diff --git a/fs/nfs/internal.h b/fs/nfs/internal.h
> index e6356b7..f65602c 100644
> --- a/fs/nfs/internal.h
> +++ b/fs/nfs/internal.h
> @@ -129,8 +129,11 @@ extern void nfs_umount(const struct nfs_mount_request *info);
>  extern struct rpc_program nfs_program;
>
>  extern void nfs_put_client(struct nfs_client *);
> -extern struct nfs_client *nfs_find_client(const struct sockaddr *, u32);
> -extern struct nfs_client *nfs_find_client_next(struct nfs_client *);
> +extern struct nfs_client *nfs4_find_client_ident(const struct sockaddr *,
> +                                                   u32);
> +extern struct nfs_client *
> +nfs4_find_client_sessionid(const struct sockaddr *, struct nfs4_sessionid *,
> +                          int);
>  extern struct nfs_server *nfs_create_server(
>                                        const struct nfs_parsed_mount_data *,
>                                        struct nfs_fh *);
> diff --git a/include/linux/sunrpc/bc_xprt.h b/include/linux/sunrpc/bc_xprt.h
> index 7c91260..2c60e09 100644
> --- a/include/linux/sunrpc/bc_xprt.h
> +++ b/include/linux/sunrpc/bc_xprt.h
> @@ -47,6 +47,14 @@ static inline int svc_is_backchannel(const struct svc_rqst *rqstp)
>                return 1;
>        return 0;
>  }
> +static inline struct nfs4_sessionid *bc_xprt_sid(struct svc_rqst *rqstp)
> +{
> +       if (svc_is_backchannel(rqstp))
> +               return (struct nfs4_sessionid *)
> +                                       rqstp->rq_server->bc_xprt->xpt_bc_sid;
> +       return NULL;
> +}
> +
>  #else /* CONFIG_NFS_V4_1 */
>  static inline int xprt_setup_backchannel(struct rpc_xprt *xprt,
>                                         unsigned int min_reqs)
> @@ -59,6 +67,11 @@ static inline int svc_is_backchannel(const struct svc_rqst *rqstp)
>        return 0;
>  }
>
> +static inline struct nfs4_sessionid *bc_xprt_sid(struct svc_rqst *rqstp)
> +{
> +       return NULL;
> +}
> +
>  static inline void xprt_free_bc_request(struct rpc_rqst *req)
>  {
>  }
> --
> 1.6.6
>
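
Read together, the hunks above give each CB_COMPOUND roughly this shape (a
condensed sketch of the patched nfs4_callback_compound(), not a literal
excerpt; declarations, XDR setup and most error handling are elided):

	struct cb_process_state cps = { .drc_status = 0, .clp = NULL, };

	if (hdr_arg.minorversion == 0) {
		/* v4.0: the CB_COMPOUND header's callback_ident names the client */
		cps.clp = nfs4_find_client_ident(svc_addr(rqstp), hdr_arg.cb_ident);
		if (cps.clp == NULL)
			return rpc_drop_reply;
	} else {
		/* v4.1: stash the service sessionid; CB_SEQUENCE must match it
		 * and sets cps.clp itself */
		cps.svc_sid = bc_xprt_sid(rqstp);
	}

	while (status == 0 && nops != hdr_arg.nops) {
		/* every operation sees the same referenced client via &cps */
		status = process_op(hdr_arg.minorversion, nops, rqstp,
				    &xdr_in, argp, &xdr_out, resp, &cps);
		nops++;
	}

	nfs_put_client(cps.clp);	/* drops the v4.0 or CB_SEQUENCE reference */

Within CB_COMPOUND processing the client lookup thus happens once, and the
reference it takes is dropped only after every operation in the compound has
run.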


end of thread, other threads:[~2010-12-20 15:44 UTC | newest]

Thread overview: 21+ messages
2010-12-17 18:20 [PATCH_V4 0/9] NFSv4 callback find client fix Version 4 andros
2010-12-17 18:20 ` [PATCH_V4 1/9] SUNRPC new transport for the NFSv4.1 shared back channel andros
2010-12-17 18:20   ` [PATCH_V4 2/9] NFS use svc_create_xprt for NFSv4.1 callback service andros
2010-12-17 18:20     ` [PATCH_V4 3/9] NFS do not clear minor version at nfs_client free andros
2010-12-17 18:20       ` [PATCH_V4 4/9] NFS implement v4.0 callback_ident andros
2010-12-17 18:20         ` [PATCH_V4 5/9] NFS associate sessionid with callback connection andros
2010-12-17 18:20           ` [PATCH_V4 6/9] NFS reference nfs_client across cb_compound processing andros
2010-12-17 18:20             ` [PATCH_V4 7/9] NFS RPC_AUTH_GSS unsupported on v4.1 back channel andros
2010-12-17 18:20               ` [PATCH_V4 8/9] NFS add session back channel draining andros
2010-12-17 18:20                 ` [PATCH_V4 9/9] NFS rename client back channel transport field andros
2010-12-20 14:05             ` [PATCH_V4 6/9] NFS reference nfs_client across cb_compound processing Benny Halevy
2010-12-20 15:44             ` Fred Isaman
2010-12-17 18:40   ` [PATCH_V4 1/9] SUNRPC new transport for the NFSv4.1 shared back channel J. Bruce Fields
2010-12-17 18:54     ` William A. (Andy) Adamson
2010-12-17 18:57       ` William A. (Andy) Adamson
2010-12-17 19:10         ` J. Bruce Fields
2010-12-17 19:16           ` William A. (Andy) Adamson
2010-12-17 19:33             ` J. Bruce Fields
2010-12-17 19:55               ` William A. (Andy) Adamson
2010-12-17 20:02                 ` J. Bruce Fields
2010-12-17 20:11                   ` William A. (Andy) Adamson
