* [PATCH v7 0/5] nfsd: overhaul the client name tracking code
@ 2012-02-29 17:15 Jeff Layton
  2012-02-29 17:15 ` [PATCH v7 1/5] nfsd: add nfsd4_client_tracking_ops struct and a way to set it Jeff Layton
                   ` (4 more replies)
  0 siblings, 5 replies; 11+ messages in thread
From: Jeff Layton @ 2012-02-29 17:15 UTC (permalink / raw)
  To: bfields; +Cc: linux-nfs, skinsbursky

This is the seventh iteration of this patchset. The primary motivation
for the respin here is to deal with the changes introduced by Stanislav's
"namespacification" of rpc_pipefs.

My conversion here is fairly simple-minded. Since most of the existing
nfsd code works under the aegis of "init_net", this patchset makes the
new code do basically the same thing.

I've not done any real work to make this use different pipes for different
namespaces. As a result, the client tracking initialization will currently
fail if rpc_pipefs isn't mounted.

Comments or suggestions on this are welcome. I'm still trying to wrap
my brain around how all of this namespace stuff is supposed to work,
so it's quite possible I've overlooked something. In particular, I'd
like Stanislav's feedback since he's done the bulk of the rpc_pipefs
namespace work so far.

An earlier version of this patchset can be viewed here. That set also
contains a more comprehensive description of the rationale for doing
this:

    http://www.spinics.net/lists/linux-nfs/msg26324.html

Jeff Layton (5):
  nfsd: add nfsd4_client_tracking_ops struct and a way to set it
  sunrpc: create nfsd dir in rpc_pipefs
  nfsd: convert nfs4_client->cl_cb_flags to a generic flags field
  nfsd: add a header describing upcall to nfsdcld
  nfsd: add the infrastructure to handle the cld upcall

 fs/nfsd/nfs4callback.c   |   14 +-
 fs/nfsd/nfs4proc.c       |    3 +-
 fs/nfsd/nfs4recover.c    |  515 ++++++++++++++++++++++++++++++++++++++++++++--
 fs/nfsd/nfs4state.c      |   50 ++---
 fs/nfsd/state.h          |   24 ++-
 include/linux/nfsd/cld.h |   56 +++++
 net/sunrpc/rpc_pipe.c    |    5 +
 7 files changed, 604 insertions(+), 63 deletions(-)
 create mode 100644 include/linux/nfsd/cld.h

-- 
1.7.7.6



* [PATCH v7 1/5] nfsd: add nfsd4_client_tracking_ops struct and a way to set it
  2012-02-29 17:15 [PATCH v7 0/5] nfsd: overhaul the client name tracking code Jeff Layton
@ 2012-02-29 17:15 ` Jeff Layton
  2012-02-29 17:15 ` [PATCH v7 2/5] sunrpc: create nfsd dir in rpc_pipefs Jeff Layton
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 11+ messages in thread
From: Jeff Layton @ 2012-02-29 17:15 UTC (permalink / raw)
  To: bfields; +Cc: linux-nfs, skinsbursky

Abstract out the mechanism that we use to track clients into a set of
client name tracking functions.

This gives us a mechanism to plug in a new set of client tracking
functions without disturbing the callers. It also gives us a way to
decide on what tracking scheme to use at runtime.

For now, this just looks like pointless abstraction, but later we'll
add a new alternate scheme for tracking clients on stable storage.
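
As an aside, here is what a hypothetical alternate tracker would look like
(illustration only, not part of this series; all of the names below are
invented):

	/* a do-nothing tracker, just to show the shape of a plug-in */
	static int nfsd4_null_init(void) { return 0; }
	static void nfsd4_null_exit(void) { }
	static void nfsd4_null_create(struct nfs4_client *clp) { }
	static void nfsd4_null_remove(struct nfs4_client *clp) { }
	static int nfsd4_null_check(struct nfs4_client *clp) { return 0; }
	static void nfsd4_null_grace_done(time_t boot_time) { }

	static struct nfsd4_client_tracking_ops nfsd4_null_tracking_ops = {
		.init		= nfsd4_null_init,
		.exit		= nfsd4_null_exit,
		.create		= nfsd4_null_create,
		.remove		= nfsd4_null_remove,
		.check		= nfsd4_null_check,
		.grace_done	= nfsd4_null_grace_done,
	};

Pointing client_tracking_ops at such a table is all that's needed; the
generic nfsd4_client_record_* wrappers don't change.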

Signed-off-by: Jeff Layton <jlayton@redhat.com>
---
 fs/nfsd/nfs4recover.c |  121 +++++++++++++++++++++++++++++++++++++++++++++----
 fs/nfsd/nfs4state.c   |   46 +++++++------------
 fs/nfsd/state.h       |   14 +++---
 3 files changed, 136 insertions(+), 45 deletions(-)

diff --git a/fs/nfsd/nfs4recover.c b/fs/nfsd/nfs4recover.c
index 0b3e875..1dda803 100644
--- a/fs/nfsd/nfs4recover.c
+++ b/fs/nfsd/nfs4recover.c
@@ -43,9 +43,20 @@
 
 #define NFSDDBG_FACILITY                NFSDDBG_PROC
 
+/* Declarations */
+struct nfsd4_client_tracking_ops {
+	int (*init)(void);
+	void (*exit)(void);
+	void (*create)(struct nfs4_client *);
+	void (*remove)(struct nfs4_client *);
+	int (*check)(struct nfs4_client *);
+	void (*grace_done)(time_t);
+};
+
 /* Globals */
 static struct file *rec_file;
 static char user_recovery_dirname[PATH_MAX] = "/var/lib/nfs/v4recovery";
+static struct nfsd4_client_tracking_ops *client_tracking_ops;
 
 static int
 nfs4_save_creds(const struct cred **original_creds)
@@ -117,7 +128,8 @@ out_no_tfm:
 	return status;
 }
 
-void nfsd4_create_clid_dir(struct nfs4_client *clp)
+static void
+nfsd4_create_clid_dir(struct nfs4_client *clp)
 {
 	const struct cred *original_cred;
 	char *dname = clp->cl_recdir;
@@ -265,7 +277,7 @@ out_unlock:
 	return status;
 }
 
-void
+static void
 nfsd4_remove_clid_dir(struct nfs4_client *clp)
 {
 	const struct cred *original_cred;
@@ -292,7 +304,6 @@ out:
 	if (status)
 		printk("NFSD: Failed to remove expired client state directory"
 				" %.*s\n", HEXDIR_LEN, clp->cl_recdir);
-	return;
 }
 
 static int
@@ -311,8 +322,8 @@ purge_old(struct dentry *parent, struct dentry *child)
 	return 0;
 }
 
-void
-nfsd4_recdir_purge_old(void) {
+static void
+nfsd4_recdir_purge_old(time_t boot_time __attribute__ ((unused))) {
 	int status;
 
 	if (!rec_file)
@@ -343,7 +354,7 @@ load_recdir(struct dentry *parent, struct dentry *child)
 	return 0;
 }
 
-int
+static int
 nfsd4_recdir_load(void) {
 	int status;
 
@@ -361,8 +372,8 @@ nfsd4_recdir_load(void) {
  * Hold reference to the recovery directory.
  */
 
-void
-nfsd4_init_recdir()
+static int
+nfsd4_init_recdir(void)
 {
 	const struct cred *original_cred;
 	int status;
@@ -377,20 +388,37 @@ nfsd4_init_recdir()
 		printk("NFSD: Unable to change credentials to find recovery"
 		       " directory: error %d\n",
 		       status);
-		return;
+		return status;
 	}
 
 	rec_file = filp_open(user_recovery_dirname, O_RDONLY | O_DIRECTORY, 0);
 	if (IS_ERR(rec_file)) {
 		printk("NFSD: unable to find recovery directory %s\n",
 				user_recovery_dirname);
+		status = PTR_ERR(rec_file);
 		rec_file = NULL;
 	}
 
 	nfs4_reset_creds(original_cred);
+	return status;
 }
 
-void
+static int
+nfsd4_load_reboot_recovery_data(void)
+{
+	int status;
+
+	nfs4_lock_state();
+	status = nfsd4_init_recdir();
+	if (!status)
+		status = nfsd4_recdir_load();
+	nfs4_unlock_state();
+	if (status)
+		printk(KERN_ERR "NFSD: Failure reading reboot recovery data\n");
+	return status;
+}
+
+static void
 nfsd4_shutdown_recdir(void)
 {
 	if (!rec_file)
@@ -425,3 +453,76 @@ nfs4_recoverydir(void)
 {
 	return user_recovery_dirname;
 }
+
+static int
+nfsd4_check_legacy_client(struct nfs4_client *clp)
+{
+	if (nfsd4_find_reclaim_client(clp) != NULL)
+		return 0;
+	else
+		return -ENOENT;
+}
+
+static struct nfsd4_client_tracking_ops nfsd4_legacy_tracking_ops = {
+	.init		= nfsd4_load_reboot_recovery_data,
+	.exit		= nfsd4_shutdown_recdir,
+	.create		= nfsd4_create_clid_dir,
+	.remove		= nfsd4_remove_clid_dir,
+	.check		= nfsd4_check_legacy_client,
+	.grace_done	= nfsd4_recdir_purge_old,
+};
+
+int
+nfsd4_client_tracking_init(void)
+{
+	int status;
+
+	client_tracking_ops = &nfsd4_legacy_tracking_ops;
+
+	status = client_tracking_ops->init();
+	if (status) {
+		printk(KERN_WARNING "NFSD: Unable to initialize client "
+				    "recovery tracking! (%d)\n", status);
+		client_tracking_ops = NULL;
+	}
+	return status;
+}
+
+void
+nfsd4_client_tracking_exit(void)
+{
+	if (client_tracking_ops) {
+		client_tracking_ops->exit();
+		client_tracking_ops = NULL;
+	}
+}
+
+void
+nfsd4_client_record_create(struct nfs4_client *clp)
+{
+	if (client_tracking_ops)
+		client_tracking_ops->create(clp);
+}
+
+void
+nfsd4_client_record_remove(struct nfs4_client *clp)
+{
+	if (client_tracking_ops)
+		client_tracking_ops->remove(clp);
+}
+
+int
+nfsd4_client_record_check(struct nfs4_client *clp)
+{
+	if (client_tracking_ops)
+		return client_tracking_ops->check(clp);
+
+	return -EOPNOTSUPP;
+}
+
+void
+nfsd4_record_grace_done(time_t boot_time)
+{
+	if (client_tracking_ops)
+		client_tracking_ops->grace_done(boot_time);
+}
diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index c5cddd6..0c7ac26 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -2045,7 +2045,7 @@ nfsd4_reclaim_complete(struct svc_rqst *rqstp, struct nfsd4_compound_state *csta
 		goto out;
 
 	status = nfs_ok;
-	nfsd4_create_clid_dir(cstate->session->se_client);
+	nfsd4_client_record_create(cstate->session->se_client);
 out:
 	nfs4_unlock_state();
 	return status;
@@ -2240,7 +2240,7 @@ nfsd4_setclientid_confirm(struct svc_rqst *rqstp,
 			conf = find_confirmed_client_by_str(unconf->cl_recdir,
 							    hash);
 			if (conf) {
-				nfsd4_remove_clid_dir(conf);
+				nfsd4_client_record_remove(conf);
 				expire_client(conf);
 			}
 			move_to_confirmed(unconf);
@@ -3066,7 +3066,7 @@ static void
 nfsd4_end_grace(void)
 {
 	dprintk("NFSD: end of grace period\n");
-	nfsd4_recdir_purge_old();
+	nfsd4_record_grace_done(boot_time);
 	locks_end_grace(&nfsd4_manager);
 	/*
 	 * Now that every NFSv4 client has had the chance to recover and
@@ -3115,7 +3115,7 @@ nfs4_laundromat(void)
 		clp = list_entry(pos, struct nfs4_client, cl_lru);
 		dprintk("NFSD: purging unused client (clientid %08x)\n",
 			clp->cl_clientid.cl_id);
-		nfsd4_remove_clid_dir(clp);
+		nfsd4_client_record_remove(clp);
 		expire_client(clp);
 	}
 	spin_lock(&recall_lock);
@@ -3539,7 +3539,7 @@ nfsd4_open_confirm(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
 	dprintk("NFSD: %s: success, seqid=%d stateid=" STATEID_FMT "\n",
 		__func__, oc->oc_seqid, STATEID_VAL(&stp->st_stid.sc_stateid));
 
-	nfsd4_create_clid_dir(oo->oo_owner.so_client);
+	nfsd4_client_record_create(oo->oo_owner.so_client);
 	status = nfs_ok;
 out:
 	if (!cstate->replay_owner)
@@ -4397,19 +4397,13 @@ nfs4_release_reclaim(void)
 
 /*
  * called from OPEN, CLAIM_PREVIOUS with a new clientid. */
-static struct nfs4_client_reclaim *
-nfs4_find_reclaim_client(clientid_t *clid)
+struct nfs4_client_reclaim *
+nfsd4_find_reclaim_client(struct nfs4_client *clp)
 {
 	unsigned int strhashval;
-	struct nfs4_client *clp;
 	struct nfs4_client_reclaim *crp = NULL;
 
 
-	/* find clientid in conf_id_hashtbl */
-	clp = find_confirmed_client(clid);
-	if (clp == NULL)
-		return NULL;
-
 	dprintk("NFSD: nfs4_find_reclaim_client for %.*s with recdir %s\n",
 		            clp->cl_name.len, clp->cl_name.data,
 			    clp->cl_recdir);
@@ -4430,7 +4424,14 @@ nfs4_find_reclaim_client(clientid_t *clid)
 __be32
 nfs4_check_open_reclaim(clientid_t *clid)
 {
-	return nfs4_find_reclaim_client(clid) ? nfs_ok : nfserr_reclaim_bad;
+	struct nfs4_client *clp;
+
+	/* find clientid in conf_id_hashtbl */
+	clp = find_confirmed_client(clid);
+	if (clp == NULL)
+		return nfserr_reclaim_bad;
+
+	return nfsd4_client_record_check(clp) ? nfserr_reclaim_bad : nfs_ok;
 }
 
 #ifdef CONFIG_NFSD_FAULT_INJECTION
@@ -4577,19 +4578,6 @@ nfs4_state_init(void)
 	reclaim_str_hashtbl_size = 0;
 }
 
-static void
-nfsd4_load_reboot_recovery_data(void)
-{
-	int status;
-
-	nfs4_lock_state();
-	nfsd4_init_recdir();
-	status = nfsd4_recdir_load();
-	nfs4_unlock_state();
-	if (status)
-		printk("NFSD: Failure reading reboot recovery data\n");
-}
-
 /*
  * Since the lifetime of a delegation isn't limited to that of an open, a
  * client may quite reasonably hang on to a delegation as long as it has
@@ -4642,7 +4630,7 @@ out_free_laundry:
 int
 nfs4_state_start(void)
 {
-	nfsd4_load_reboot_recovery_data();
+	nfsd4_client_tracking_init();
 	return __nfs4_state_start();
 }
 
@@ -4676,7 +4664,7 @@ __nfs4_state_shutdown(void)
 		unhash_delegation(dp);
 	}
 
-	nfsd4_shutdown_recdir();
+	nfsd4_client_tracking_exit();
 }
 
 void
diff --git a/fs/nfsd/state.h b/fs/nfsd/state.h
index ffb5df1..e22f90f 100644
--- a/fs/nfsd/state.h
+++ b/fs/nfsd/state.h
@@ -463,6 +463,7 @@ extern __be32 nfs4_preprocess_stateid_op(struct nfsd4_compound_state *cstate,
 extern void nfs4_lock_state(void);
 extern void nfs4_unlock_state(void);
 extern int nfs4_in_grace(void);
+extern struct nfs4_client_reclaim *nfsd4_find_reclaim_client(struct nfs4_client *crp);
 extern __be32 nfs4_check_open_reclaim(clientid_t *clid);
 extern void nfs4_free_openowner(struct nfs4_openowner *);
 extern void nfs4_free_lockowner(struct nfs4_lockowner *);
@@ -477,16 +478,17 @@ extern void nfsd4_destroy_callback_queue(void);
 extern void nfsd4_shutdown_callback(struct nfs4_client *);
 extern void nfs4_put_delegation(struct nfs4_delegation *dp);
 extern __be32 nfs4_make_rec_clidname(char *clidname, struct xdr_netobj *clname);
-extern void nfsd4_init_recdir(void);
-extern int nfsd4_recdir_load(void);
-extern void nfsd4_shutdown_recdir(void);
 extern int nfs4_client_to_reclaim(const char *name);
 extern int nfs4_has_reclaimed_state(const char *name, bool use_exchange_id);
-extern void nfsd4_recdir_purge_old(void);
-extern void nfsd4_create_clid_dir(struct nfs4_client *clp);
-extern void nfsd4_remove_clid_dir(struct nfs4_client *clp);
 extern void release_session_client(struct nfsd4_session *);
 extern __be32 nfs4_validate_stateid(struct nfs4_client *, stateid_t *);
 extern void nfsd4_purge_closed_stateid(struct nfs4_stateowner *);
 
+/* nfs4recover operations */
+extern int nfsd4_client_tracking_init(void);
+extern void nfsd4_client_tracking_exit(void);
+extern void nfsd4_client_record_create(struct nfs4_client *clp);
+extern void nfsd4_client_record_remove(struct nfs4_client *clp);
+extern int nfsd4_client_record_check(struct nfs4_client *clp);
+extern void nfsd4_record_grace_done(time_t boot_time);
 #endif   /* NFSD4_STATE_H */
-- 
1.7.7.6



* [PATCH v7 2/5] sunrpc: create nfsd dir in rpc_pipefs
  2012-02-29 17:15 [PATCH v7 0/5] nfsd: overhaul the client name tracking code Jeff Layton
  2012-02-29 17:15 ` [PATCH v7 1/5] nfsd: add nfsd4_client_tracking_ops struct and a way to set it Jeff Layton
@ 2012-02-29 17:15 ` Jeff Layton
  2012-02-29 17:15 ` [PATCH v7 3/5] nfsd: convert nfs4_client->cl_cb_flags to a generic flags field Jeff Layton
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 11+ messages in thread
From: Jeff Layton @ 2012-02-29 17:15 UTC (permalink / raw)
  To: bfields; +Cc: linux-nfs, skinsbursky

Add a new top-level dir in rpc_pipefs to hold the pipe for the clientid
upcall.

Signed-off-by: Jeff Layton <jlayton@redhat.com>
---
 net/sunrpc/rpc_pipe.c |    5 +++++
 1 files changed, 5 insertions(+), 0 deletions(-)

diff --git a/net/sunrpc/rpc_pipe.c b/net/sunrpc/rpc_pipe.c
index 6873c9b..c81915d 100644
--- a/net/sunrpc/rpc_pipe.c
+++ b/net/sunrpc/rpc_pipe.c
@@ -992,6 +992,7 @@ enum {
 	RPCAUTH_statd,
 	RPCAUTH_nfsd4_cb,
 	RPCAUTH_cache,
+	RPCAUTH_nfsd,
 	RPCAUTH_RootEOF
 };
 
@@ -1024,6 +1025,10 @@ static const struct rpc_filelist files[] = {
 		.name = "cache",
 		.mode = S_IFDIR | S_IRUGO | S_IXUGO,
 	},
+	[RPCAUTH_nfsd] = {
+		.name = "nfsd",
+		.mode = S_IFDIR | S_IRUGO | S_IXUGO,
+	},
 };
 
 /*
-- 
1.7.7.6



* [PATCH v7 3/5] nfsd: convert nfs4_client->cl_cb_flags to a generic flags field
  2012-02-29 17:15 [PATCH v7 0/5] nfsd: overhaul the client name tracking code Jeff Layton
  2012-02-29 17:15 ` [PATCH v7 1/5] nfsd: add nfsd4_client_tracking_ops struct and a way to set it Jeff Layton
  2012-02-29 17:15 ` [PATCH v7 2/5] sunrpc: create nfsd dir in rpc_pipefs Jeff Layton
@ 2012-02-29 17:15 ` Jeff Layton
  2012-02-29 17:15 ` [PATCH v7 4/5] nfsd: add a header describing upcall to nfsdcld Jeff Layton
  2012-02-29 17:15 ` [PATCH v7 5/5] nfsd: add the infrastructure to handle the cld upcall Jeff Layton
  4 siblings, 0 replies; 11+ messages in thread
From: Jeff Layton @ 2012-02-29 17:15 UTC (permalink / raw)
  To: bfields; +Cc: linux-nfs, skinsbursky

We'll need a way to flag the nfs4_client as already being recorded on
stable storage so that we don't continually upcall. Currently, that's
recorded in the cl_firststate field of the client struct. Using an
entire u32 to store a flag is rather wasteful though.

The cl_cb_flags field is only using 2 bits right now, so repurpose that
to a generic flags field. Rename NFSD4_CLIENT_KILL to
NFSD4_CLIENT_CB_KILL to make it evident that it's part of the callback
flags. Add a mask that we can use for existing checks that look to see
whether any flags are set, so that the new flags don't interfere.

Convert all references to cl_firststate to the NFSD4_CLIENT_STABLE flag,
and add a new NFSD4_CLIENT_RECLAIM_COMPLETE flag. I believe there's an
existing bug here too that this should fix:

nfs4_set_claim_prev sets cl_firststate on the first CLAIM_PREV open.
nfsd4_reclaim_complete looks for that flag though, and returns
NFS4ERR_COMPLETE_ALREADY if it's set. The upshot here is that once a
client does a CLAIM_PREV open, the RECLAIM_COMPLETE call will fail.
Let's fix this by adding a new RECLAIM_COMPLETE flag on the client to
indicate that that's already been done.
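
To make the failure concrete, here's the (hypothetical) sequence a v4.1
client reclaiming state would hit before this patch:

	EXCHANGE_ID + CREATE_SESSION
	OPEN (CLAIM_PREVIOUS)	-> nfs4_set_claim_prev() sets cl_firststate
	RECLAIM_COMPLETE	-> cl_firststate is already set, so the
				   server wrongly returns
				   NFS4ERR_COMPLETE_ALREADY

With separate NFSD4_CLIENT_STABLE and NFSD4_CLIENT_RECLAIM_COMPLETE bits,
the two meanings can no longer collide.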

Signed-off-by: Jeff Layton <jlayton@redhat.com>
---
 fs/nfsd/nfs4callback.c |   14 +++++++-------
 fs/nfsd/nfs4proc.c     |    3 ++-
 fs/nfsd/nfs4recover.c  |   20 +++++++++++++-------
 fs/nfsd/nfs4state.c    |    4 ++--
 fs/nfsd/state.h        |   10 ++++++----
 5 files changed, 30 insertions(+), 21 deletions(-)

diff --git a/fs/nfsd/nfs4callback.c b/fs/nfsd/nfs4callback.c
index 0e262f3..a09dcc4 100644
--- a/fs/nfsd/nfs4callback.c
+++ b/fs/nfsd/nfs4callback.c
@@ -754,9 +754,9 @@ static void do_probe_callback(struct nfs4_client *clp)
  */
 void nfsd4_probe_callback(struct nfs4_client *clp)
 {
-	/* XXX: atomicity?  Also, should we be using cl_cb_flags? */
+	/* XXX: atomicity?  Also, should we be using cl_flags? */
 	clp->cl_cb_state = NFSD4_CB_UNKNOWN;
-	set_bit(NFSD4_CLIENT_CB_UPDATE, &clp->cl_cb_flags);
+	set_bit(NFSD4_CLIENT_CB_UPDATE, &clp->cl_flags);
 	do_probe_callback(clp);
 }
 
@@ -915,7 +915,7 @@ void nfsd4_destroy_callback_queue(void)
 /* must be called under the state lock */
 void nfsd4_shutdown_callback(struct nfs4_client *clp)
 {
-	set_bit(NFSD4_CLIENT_KILL, &clp->cl_cb_flags);
+	set_bit(NFSD4_CLIENT_CB_KILL, &clp->cl_flags);
 	/*
 	 * Note this won't actually result in a null callback;
 	 * instead, nfsd4_do_callback_rpc() will detect the killed
@@ -966,15 +966,15 @@ static void nfsd4_process_cb_update(struct nfsd4_callback *cb)
 		svc_xprt_put(clp->cl_cb_conn.cb_xprt);
 		clp->cl_cb_conn.cb_xprt = NULL;
 	}
-	if (test_bit(NFSD4_CLIENT_KILL, &clp->cl_cb_flags))
+	if (test_bit(NFSD4_CLIENT_CB_KILL, &clp->cl_flags))
 		return;
 	spin_lock(&clp->cl_lock);
 	/*
 	 * Only serialized callback code is allowed to clear these
 	 * flags; main nfsd code can only set them:
 	 */
-	BUG_ON(!clp->cl_cb_flags);
-	clear_bit(NFSD4_CLIENT_CB_UPDATE, &clp->cl_cb_flags);
+	BUG_ON(!(clp->cl_flags & NFSD4_CLIENT_CB_FLAG_MASK));
+	clear_bit(NFSD4_CLIENT_CB_UPDATE, &clp->cl_flags);
 	memcpy(&conn, &cb->cb_clp->cl_cb_conn, sizeof(struct nfs4_cb_conn));
 	c = __nfsd4_find_backchannel(clp);
 	if (c) {
@@ -1000,7 +1000,7 @@ void nfsd4_do_callback_rpc(struct work_struct *w)
 	struct nfs4_client *clp = cb->cb_clp;
 	struct rpc_clnt *clnt;
 
-	if (clp->cl_cb_flags)
+	if (clp->cl_flags & NFSD4_CLIENT_CB_FLAG_MASK)
 		nfsd4_process_cb_update(cb);
 
 	clnt = clp->cl_cb_client;
diff --git a/fs/nfsd/nfs4proc.c b/fs/nfsd/nfs4proc.c
index 896da74..af2c3e8 100644
--- a/fs/nfsd/nfs4proc.c
+++ b/fs/nfsd/nfs4proc.c
@@ -319,7 +319,8 @@ nfsd4_open(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
 	 * Before RECLAIM_COMPLETE done, server should deny new lock
 	 */
 	if (nfsd4_has_session(cstate) &&
-	    !cstate->session->se_client->cl_firststate &&
+	    !test_bit(NFSD4_CLIENT_RECLAIM_COMPLETE,
+		      &cstate->session->se_client->cl_flags) &&
 	    open->op_claim_type != NFS4_OPEN_CLAIM_PREVIOUS)
 		return nfserr_grace;
 
diff --git a/fs/nfsd/nfs4recover.c b/fs/nfsd/nfs4recover.c
index 1dda803..ccd9b97 100644
--- a/fs/nfsd/nfs4recover.c
+++ b/fs/nfsd/nfs4recover.c
@@ -138,9 +138,8 @@ nfsd4_create_clid_dir(struct nfs4_client *clp)
 
 	dprintk("NFSD: nfsd4_create_clid_dir for \"%s\"\n", dname);
 
-	if (clp->cl_firststate)
+	if (test_and_set_bit(NFSD4_CLIENT_STABLE, &clp->cl_flags))
 		return;
-	clp->cl_firststate = 1;
 	if (!rec_file)
 		return;
 	status = nfs4_save_creds(&original_cred);
@@ -283,13 +282,13 @@ nfsd4_remove_clid_dir(struct nfs4_client *clp)
 	const struct cred *original_cred;
 	int status;
 
-	if (!rec_file || !clp->cl_firststate)
+	if (!rec_file || !test_bit(NFSD4_CLIENT_STABLE, &clp->cl_flags))
 		return;
 
 	status = mnt_want_write_file(rec_file);
 	if (status)
 		goto out;
-	clp->cl_firststate = 0;
+	clear_bit(NFSD4_CLIENT_STABLE, &clp->cl_flags);
 
 	status = nfs4_save_creds(&original_cred);
 	if (status < 0)
@@ -457,10 +456,17 @@ nfs4_recoverydir(void)
 static int
 nfsd4_check_legacy_client(struct nfs4_client *clp)
 {
-	if (nfsd4_find_reclaim_client(clp) != NULL)
+	/* did we already find that this client is stable? */
+	if (test_bit(NFSD4_CLIENT_STABLE, &clp->cl_flags))
 		return 0;
-	else
-		return -ENOENT;
+
+	/* look for it in the reclaim hashtable otherwise */
+	if (nfsd4_find_reclaim_client(clp)) {
+		set_bit(NFSD4_CLIENT_STABLE, &clp->cl_flags);
+		return 0;
+	}
+
+	return -ENOENT;
 }
 
 static struct nfsd4_client_tracking_ops nfsd4_legacy_tracking_ops = {
diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index 0c7ac26..e831ac6 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -2030,7 +2030,8 @@ nfsd4_reclaim_complete(struct svc_rqst *rqstp, struct nfsd4_compound_state *csta
 
 	nfs4_lock_state();
 	status = nfserr_complete_already;
-	if (cstate->session->se_client->cl_firststate)
+	if (test_and_set_bit(NFSD4_CLIENT_RECLAIM_COMPLETE,
+			     &cstate->session->se_client->cl_flags))
 		goto out;
 
 	status = nfserr_stale_clientid;
@@ -2779,7 +2780,6 @@ static void
 nfs4_set_claim_prev(struct nfsd4_open *open)
 {
 	open->op_openowner->oo_flags |= NFS4_OO_CONFIRMED;
-	open->op_openowner->oo_owner.so_client->cl_firststate = 1;
 }
 
 /* Should we give out recallable state?: */
diff --git a/fs/nfsd/state.h b/fs/nfsd/state.h
index e22f90f..fac413b 100644
--- a/fs/nfsd/state.h
+++ b/fs/nfsd/state.h
@@ -245,14 +245,16 @@ struct nfs4_client {
 	struct svc_cred		cl_cred; 	/* setclientid principal */
 	clientid_t		cl_clientid;	/* generated by server */
 	nfs4_verifier		cl_confirm;	/* generated by server */
-	u32			cl_firststate;	/* recovery dir creation */
 	u32			cl_minorversion;
 
 	/* for v4.0 and v4.1 callbacks: */
 	struct nfs4_cb_conn	cl_cb_conn;
-#define NFSD4_CLIENT_CB_UPDATE	1
-#define NFSD4_CLIENT_KILL	2
-	unsigned long		cl_cb_flags;
+#define NFSD4_CLIENT_CB_UPDATE		(0)
+#define NFSD4_CLIENT_CB_KILL		(1)
+#define NFSD4_CLIENT_STABLE		(2)	/* client on stable storage */
+#define NFSD4_CLIENT_RECLAIM_COMPLETE	(3)	/* reclaim_complete done */
+#define NFSD4_CLIENT_CB_FLAG_MASK	(0x3)
+	unsigned long		cl_flags;
 	struct rpc_clnt		*cl_cb_client;
 	u32			cl_cb_ident;
 #define NFSD4_CB_UP		0
-- 
1.7.7.6



* [PATCH v7 4/5] nfsd: add a header describing upcall to nfsdcld
  2012-02-29 17:15 [PATCH v7 0/5] nfsd: overhaul the client name tracking code Jeff Layton
                   ` (2 preceding siblings ...)
  2012-02-29 17:15 ` [PATCH v7 3/5] nfsd: convert nfs4_client->cl_cb_flags to a generic flags field Jeff Layton
@ 2012-02-29 17:15 ` Jeff Layton
  2012-02-29 17:15 ` [PATCH v7 5/5] nfsd: add the infrastructure to handle the cld upcall Jeff Layton
  4 siblings, 0 replies; 11+ messages in thread
From: Jeff Layton @ 2012-02-29 17:15 UTC (permalink / raw)
  To: bfields; +Cc: linux-nfs, skinsbursky

The daemon takes a versioned binary struct. Hopefully this should allow
us to revise the struct later if it becomes necessary.
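
For illustration, the daemon side might consume one of these messages
roughly as in the sketch below. This is assumed code: the real nfsdcld
daemon is not part of this series, and do_command() is an invented
dispatcher.

	#include <stdio.h>
	#include <unistd.h>
	#include <errno.h>
	#include <linux/nfsd/cld.h>

	int do_command(struct cld_msg *msg);	/* invented dispatcher */

	void handle_one_upcall(int pipefd)
	{
		struct cld_msg msg;

		if (read(pipefd, &msg, sizeof(msg)) != sizeof(msg))
			return;			/* short read; ignore */

		if (msg.cm_vers != CLD_UPCALL_VERSION)
			msg.cm_status = -EOPNOTSUPP;	/* unknown version */
		else
			msg.cm_status = do_command(&msg);

		/* the downcall echoes the struct back, preserving cm_xid */
		if (write(pipefd, &msg, sizeof(msg)) != sizeof(msg))
			perror("cld downcall");
	}

The kernel matches a downcall to the waiting upcall by cm_xid, so the whole
struct must be written back with that field untouched.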

Signed-off-by: Jeff Layton <jlayton@redhat.com>
---
 include/linux/nfsd/cld.h |   56 ++++++++++++++++++++++++++++++++++++++++++++++
 1 files changed, 56 insertions(+), 0 deletions(-)
 create mode 100644 include/linux/nfsd/cld.h

diff --git a/include/linux/nfsd/cld.h b/include/linux/nfsd/cld.h
new file mode 100644
index 0000000..f14a9ab
--- /dev/null
+++ b/include/linux/nfsd/cld.h
@@ -0,0 +1,56 @@
+/*
+ * Upcall description for nfsdcld communication
+ *
+ * Copyright (c) 2012 Red Hat, Inc.
+ * Author(s): Jeff Layton <jlayton@redhat.com>
+ *
+ *  This program is free software; you can redistribute it and/or modify
+ *  it under the terms of the GNU General Public License as published by
+ *  the Free Software Foundation; either version 2 of the License, or
+ *  (at your option) any later version.
+ *
+ *  This program is distributed in the hope that it will be useful,
+ *  but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *  GNU General Public License for more details.
+ *
+ *  You should have received a copy of the GNU General Public License
+ *  along with this program; if not, write to the Free Software
+ *  Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+ */
+
+#ifndef _NFSD_CLD_H
+#define _NFSD_CLD_H
+
+/* latest upcall version available */
+#define CLD_UPCALL_VERSION 1
+
+/* defined by RFC3530 */
+#define NFS4_OPAQUE_LIMIT 1024
+
+enum cld_command {
+	Cld_Create,		/* create a record for this cm_id */
+	Cld_Remove,		/* remove record of this cm_id */
+	Cld_Check,		/* is this cm_id allowed? */
+	Cld_GraceDone,		/* grace period is complete */
+};
+
+/* representation of long-form NFSv4 client ID */
+struct cld_name {
+	uint16_t	cn_len;				/* length of cm_id */
+	unsigned char	cn_id[NFS4_OPAQUE_LIMIT];	/* client-provided */
+} __attribute__((packed));
+
+/* message struct for communication with userspace */
+struct cld_msg {
+	uint8_t		cm_vers;		/* upcall version */
+	uint8_t		cm_cmd;			/* upcall command */
+	int16_t		cm_status;		/* return code */
+	uint32_t	cm_xid;			/* transaction id */
+	union {
+		int64_t		cm_gracetime;	/* grace period start time */
+		struct cld_name	cm_name;
+	} __attribute__((packed)) cm_u;
+} __attribute__((packed));
+
+#endif /* !_NFSD_CLD_H */
-- 
1.7.7.6



* [PATCH v7 5/5] nfsd: add the infrastructure to handle the cld upcall
  2012-02-29 17:15 [PATCH v7 0/5] nfsd: overhaul the client name tracking code Jeff Layton
                   ` (3 preceding siblings ...)
  2012-02-29 17:15 ` [PATCH v7 4/5] nfsd: add a header describing upcall to nfsdcld Jeff Layton
@ 2012-02-29 17:15 ` Jeff Layton
  2012-02-29 18:39   ` Stanislav Kinsbursky
  4 siblings, 1 reply; 11+ messages in thread
From: Jeff Layton @ 2012-02-29 17:15 UTC (permalink / raw)
  To: bfields; +Cc: linux-nfs, skinsbursky

...and add a mechanism for switching between the "legacy" tracker and
the new one. The decision is made by looking to see whether the
v4recoverydir exists. If it does, then the legacy client tracker is
used.

If it doesn't, then the kernel will create a "cld" pipe in rpc_pipefs.
That pipe is used to talk to a daemon for handling the upcall.
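
For reference, a (hypothetical) daemon would open the pipe through the
rpc_pipefs mount point and loop on it, building on the handle_one_upcall()
sketch in the previous patch description. The mount point below is the
conventional one and is an assumption; it can differ:

	#include <fcntl.h>

	void handle_one_upcall(int pipefd);	/* from the earlier sketch */

	int main(void)
	{
		int pipefd = open("/var/lib/nfs/rpc_pipefs/nfsd/cld", O_RDWR);

		if (pipefd < 0)
			return 1;
		for (;;)
			handle_one_upcall(pipefd);
	}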

Signed-off-by: Jeff Layton <jlayton@redhat.com>
---
 fs/nfsd/nfs4recover.c |  382 ++++++++++++++++++++++++++++++++++++++++++++++++-
 1 files changed, 381 insertions(+), 1 deletions(-)

diff --git a/fs/nfsd/nfs4recover.c b/fs/nfsd/nfs4recover.c
index ccd9b97..34fc843 100644
--- a/fs/nfsd/nfs4recover.c
+++ b/fs/nfsd/nfs4recover.c
@@ -1,5 +1,6 @@
 /*
 *  Copyright (c) 2004 The Regents of the University of Michigan.
+*  Copyright (c) 2012 Jeff Layton <jlayton@redhat.com>
 *  All rights reserved.
 *
 *  Andy Adamson <andros@citi.umich.edu>
@@ -36,6 +37,11 @@
 #include <linux/namei.h>
 #include <linux/crypto.h>
 #include <linux/sched.h>
+#include <linux/fs.h>
+#include <net/net_namespace.h>
+#include <linux/sunrpc/rpc_pipe_fs.h>
+#include <linux/sunrpc/clnt.h>
+#include <linux/nfsd/cld.h>
 
 #include "nfsd.h"
 #include "state.h"
@@ -478,12 +484,386 @@ static struct nfsd4_client_tracking_ops nfsd4_legacy_tracking_ops = {
 	.grace_done	= nfsd4_recdir_purge_old,
 };
 
+/* Globals */
+#define NFSD_PIPE_DIR		"nfsd"
+#define NFSD_CLD_PIPE		"cld"
+
+static struct rpc_pipe *cld_pipe;
+
+/* list of cld_msg's that are currently in use */
+static DEFINE_SPINLOCK(cld_lock);
+static LIST_HEAD(cld_list);
+static unsigned int cld_xid;
+
+struct cld_upcall {
+	struct list_head	cu_list;
+	struct task_struct	*cu_task;
+	struct cld_msg	cu_msg;
+};
+
+static int
+__cld_pipe_upcall(struct cld_msg *cmsg)
+{
+	int ret;
+	struct rpc_pipe_msg msg;
+
+	memset(&msg, 0, sizeof(msg));
+	msg.data = cmsg;
+	msg.len = sizeof(*cmsg);
+
+	/*
+	 * Set task state before we queue the upcall. That prevents
+	 * wake_up_process in the downcall from racing with schedule.
+	 */
+	set_current_state(TASK_UNINTERRUPTIBLE);
+	ret = rpc_queue_upcall(cld_pipe, &msg);
+	if (ret < 0) {
+		set_current_state(TASK_RUNNING);
+		goto out;
+	}
+
+	schedule();
+	set_current_state(TASK_RUNNING);
+
+	if (msg.errno < 0)
+		ret = msg.errno;
+out:
+	return ret;
+}
+
+static int
+cld_pipe_upcall(struct cld_msg *cmsg)
+{
+	int ret;
+
+	/*
+	 * -EAGAIN occurs when pipe is closed and reopened while there are
+	 *  upcalls queued.
+	 */
+	do {
+		ret = __cld_pipe_upcall(cmsg);
+	} while (ret == -EAGAIN);
+
+	return ret;
+}
+
+static ssize_t
+cld_pipe_downcall(struct file *filp, const char __user *src, size_t mlen)
+{
+	struct cld_upcall *tmp, *cup;
+	struct cld_msg __user *cmsg = (struct cld_msg __user *)src;
+	uint32_t xid;
+
+	if (mlen != sizeof(*cmsg)) {
+		dprintk("%s: got %lu bytes, expected %lu\n", __func__, mlen,
+			sizeof(*cmsg));
+		return -EINVAL;
+	}
+
+	/* copy just the xid so we can try to find that */
+	if (copy_from_user(&xid, &cmsg->cm_xid, sizeof(xid)) != 0) {
+		dprintk("%s: error when copying xid from userspace", __func__);
+		return -EFAULT;
+	}
+
+	/* walk the list and find corresponding xid */
+	cup = NULL;
+	spin_lock(&cld_lock);
+	list_for_each_entry(tmp, &cld_list, cu_list) {
+		if (get_unaligned(&tmp->cu_msg.cm_xid) == xid) {
+			cup = tmp;
+			list_del_init(&cup->cu_list);
+			break;
+		}
+	}
+	spin_unlock(&cld_lock);
+
+	/* couldn't find upcall? */
+	if (!cup) {
+		dprintk("%s: couldn't find upcall -- xid=%u\n", __func__,
+			xid);
+		return -EINVAL;
+	}
+
+	if (copy_from_user(&cup->cu_msg, src, mlen) != 0)
+		return -EFAULT;
+
+	wake_up_process(cup->cu_task);
+	return mlen;
+}
+
+static void
+cld_pipe_destroy_msg(struct rpc_pipe_msg *msg)
+{
+	struct cld_msg *cmsg = msg->data;
+	struct cld_upcall *cup = container_of(cmsg, struct cld_upcall,
+						 cu_msg);
+
+	/* errno >= 0 means we got a downcall */
+	if (msg->errno >= 0)
+		return;
+
+	wake_up_process(cup->cu_task);
+}
+
+static const struct rpc_pipe_ops cld_upcall_ops = {
+	.upcall		= rpc_pipe_generic_upcall,
+	.downcall	= cld_pipe_downcall,
+	.destroy_msg	= cld_pipe_destroy_msg,
+};
+
+/* Initialize rpc_pipefs pipe for communication with client tracking daemon */
+static int
+nfsd4_init_cld_pipe(void)
+{
+	int ret;
+	struct dentry *dir, *dentry;
+	struct super_block *sb;
+
+	if (cld_pipe)
+		return 0;
+
+	cld_pipe = rpc_mkpipe_data(&cld_upcall_ops, RPC_PIPE_WAIT_FOR_OPEN);
+	if (IS_ERR(cld_pipe)) {
+		ret = PTR_ERR(cld_pipe);
+		goto err;
+	}
+
+	/* since this is a global thing for now, hang it off init_net ns */
+	sb = rpc_get_sb_net(&init_net);
+	if (!sb) {
+		ret = -ENOMEM;
+		goto err_destroy_data;
+	}
+
+	dir = rpc_d_lookup_sb(sb, NFSD_PIPE_DIR);
+	rpc_put_sb_net(&init_net);
+	if (!dir) {
+		ret = -ENOENT;
+		goto err_destroy_data;
+	}
+
+	dentry = rpc_mkpipe_dentry(dir, NFSD_CLD_PIPE, NULL, cld_pipe);
+	dput(dir);
+	if (IS_ERR(dentry)) {
+		ret = PTR_ERR(dentry);
+		goto err_destroy_data;
+	}
+
+	cld_pipe->dentry = dentry;
+	return 0;
+
+err_destroy_data:
+	rpc_destroy_pipe_data(cld_pipe);
+err:
+	cld_pipe = NULL;
+	printk(KERN_ERR "NFSD: unable to create nfsdcld upcall pipe (%d)\n",
+			ret);
+	return ret;
+}
+
+static void
+nfsd4_remove_cld_pipe(void)
+{
+	int ret;
+
+	ret = rpc_unlink(cld_pipe->dentry);
+	if (ret)
+		printk(KERN_ERR "NFSD: error removing cld pipe: %d\n", ret);
+	cld_pipe = NULL;
+}
+
+static struct cld_upcall *
+alloc_cld_upcall(void)
+{
+	struct cld_upcall *new, *tmp;
+
+	new = kzalloc(sizeof(*new), GFP_KERNEL);
+	if (!new)
+		return new;
+
+	/* FIXME: hard cap on number in flight? */
+restart_search:
+	spin_lock(&cld_lock);
+	list_for_each_entry(tmp, &cld_list, cu_list) {
+		if (tmp->cu_msg.cm_xid == cld_xid) {
+			cld_xid++;
+			spin_unlock(&cld_lock);
+			goto restart_search;
+		}
+	}
+	new->cu_task = current;
+	new->cu_msg.cm_vers = CLD_UPCALL_VERSION;
+	put_unaligned(cld_xid++, &new->cu_msg.cm_xid);
+	list_add(&new->cu_list, &cld_list);
+	spin_unlock(&cld_lock);
+
+	dprintk("%s: allocated xid %u\n", __func__, new->cu_msg.cm_xid);
+
+	return new;
+}
+
+static void
+free_cld_upcall(struct cld_upcall *victim)
+{
+	spin_lock(&cld_lock);
+	list_del(&victim->cu_list);
+	spin_unlock(&cld_lock);
+	kfree(victim);
+}
+
+/* Ask daemon to create a new record */
+static void
+nfsd4_cld_create(struct nfs4_client *clp)
+{
+	int ret;
+	struct cld_upcall *cup;
+
+	/* Don't upcall if it's already stored */
+	if (test_bit(NFSD4_CLIENT_STABLE, &clp->cl_flags))
+		return;
+
+	cup = alloc_cld_upcall();
+	if (!cup) {
+		ret = -ENOMEM;
+		goto out_err;
+	}
+
+	cup->cu_msg.cm_cmd = Cld_Create;
+	cup->cu_msg.cm_u.cm_name.cn_len = clp->cl_name.len;
+	memcpy(cup->cu_msg.cm_u.cm_name.cn_id, clp->cl_name.data,
+			clp->cl_name.len);
+
+	ret = cld_pipe_upcall(&cup->cu_msg);
+	if (!ret) {
+		ret = cup->cu_msg.cm_status;
+		set_bit(NFSD4_CLIENT_STABLE, &clp->cl_flags);
+	}
+
+	free_cld_upcall(cup);
+out_err:
+	if (ret)
+		printk(KERN_ERR "NFSD: Unable to create client "
+				"record on stable storage: %d\n", ret);
+}
+
+/* Ask daemon to create a new record */
+static void
+nfsd4_cld_remove(struct nfs4_client *clp)
+{
+	int ret;
+	struct cld_upcall *cup;
+
+	/* Don't upcall if it's already removed */
+	if (!test_bit(NFSD4_CLIENT_STABLE, &clp->cl_flags))
+		return;
+
+	cup = alloc_cld_upcall();
+	if (!cup) {
+		ret = -ENOMEM;
+		goto out_err;
+	}
+
+	cup->cu_msg.cm_cmd = Cld_Remove;
+	cup->cu_msg.cm_u.cm_name.cn_len = clp->cl_name.len;
+	memcpy(cup->cu_msg.cm_u.cm_name.cn_id, clp->cl_name.data,
+			clp->cl_name.len);
+
+	ret = cld_pipe_upcall(&cup->cu_msg);
+	if (!ret) {
+		ret = cup->cu_msg.cm_status;
+		clear_bit(NFSD4_CLIENT_STABLE, &clp->cl_flags);
+	}
+
+	free_cld_upcall(cup);
+out_err:
+	if (ret)
+		printk(KERN_ERR "NFSD: Unable to remove client "
+				"record from stable storage: %d\n", ret);
+}
+
+/* Check for presence of a record, and update its timestamp */
+static int
+nfsd4_cld_check(struct nfs4_client *clp)
+{
+	int ret;
+	struct cld_upcall *cup;
+
+	/* Don't upcall if one was already stored during this grace pd */
+	if (test_bit(NFSD4_CLIENT_STABLE, &clp->cl_flags))
+		return 0;
+
+	cup = alloc_cld_upcall();
+	if (!cup) {
+		printk(KERN_ERR "NFSD: Unable to check client record on "
+				"stable storage: %d\n", -ENOMEM);
+		return -ENOMEM;
+	}
+
+	cup->cu_msg.cm_cmd = Cld_Check;
+	cup->cu_msg.cm_u.cm_name.cn_len = clp->cl_name.len;
+	memcpy(cup->cu_msg.cm_u.cm_name.cn_id, clp->cl_name.data,
+			clp->cl_name.len);
+
+	ret = cld_pipe_upcall(&cup->cu_msg);
+	if (!ret) {
+		ret = cup->cu_msg.cm_status;
+		set_bit(NFSD4_CLIENT_STABLE, &clp->cl_flags);
+	}
+
+	free_cld_upcall(cup);
+	return ret;
+}
+
+static void
+nfsd4_cld_grace_done(time_t boot_time)
+{
+	int ret;
+	struct cld_upcall *cup;
+
+	cup = alloc_cld_upcall();
+	if (!cup) {
+		ret = -ENOMEM;
+		goto out_err;
+	}
+
+	cup->cu_msg.cm_cmd = Cld_GraceDone;
+	cup->cu_msg.cm_u.cm_gracetime = (int64_t)boot_time;
+	ret = cld_pipe_upcall(&cup->cu_msg);
+	if (!ret)
+		ret = cup->cu_msg.cm_status;
+
+	free_cld_upcall(cup);
+out_err:
+	if (ret)
+		printk(KERN_ERR "NFSD: Unable to end grace period: %d\n", ret);
+}
+
+static struct nfsd4_client_tracking_ops nfsd4_cld_tracking_ops = {
+	.init		= nfsd4_init_cld_pipe,
+	.exit		= nfsd4_remove_cld_pipe,
+	.create		= nfsd4_cld_create,
+	.remove		= nfsd4_cld_remove,
+	.check		= nfsd4_cld_check,
+	.grace_done	= nfsd4_cld_grace_done,
+};
+
 int
 nfsd4_client_tracking_init(void)
 {
 	int status;
+	struct path path;
 
-	client_tracking_ops = &nfsd4_legacy_tracking_ops;
+	if (!client_tracking_ops) {
+		client_tracking_ops = &nfsd4_cld_tracking_ops;
+		status = kern_path(nfs4_recoverydir(), LOOKUP_FOLLOW, &path);
+		if (!status) {
+			if (S_ISDIR(path.dentry->d_inode->i_mode))
+				client_tracking_ops =
+						&nfsd4_legacy_tracking_ops;
+			path_put(&path);
+		}
+	}
 
 	status = client_tracking_ops->init();
 	if (status) {
-- 
1.7.7.6



* Re: [PATCH v7 5/5] nfsd: add the infrastructure to handle the cld upcall
  2012-02-29 17:15 ` [PATCH v7 5/5] nfsd: add the infrastructure to handle the cld upcall Jeff Layton
@ 2012-02-29 18:39   ` Stanislav Kinsbursky
  2012-02-29 19:45     ` Jeff Layton
  0 siblings, 1 reply; 11+ messages in thread
From: Stanislav Kinsbursky @ 2012-02-29 18:39 UTC (permalink / raw)
  To: Jeff Layton; +Cc: bfields, linux-nfs

Hi, Jeff.
I'll try to explain how all this namespace-aware SUNRPC PipeFS works.
The main idea is that today we have 2 types of independently created and
destroyed but linked objects:
1) the pipe data itself. This object is created whenever the kernel requires it
(on mount, module install, whatever).
2) the dentry/inode for this pipe. This object is created on PipeFS superblock
creation.
This means any kernel user of SUNRPC pipes has to provide two mechanisms:
1) A safe call (in per-net operations, or globally if the object is global, as
in your case) that creates the PipeFS dentry/inode pair. This is done in
nfsd4_init_cld_pipe().
2) A notifier callback for PipeFS mount/umount operations. Note: this callback
is made from the SUNRPC module (i.e. you have to take a reference on the nfsd
module); this callback is safe, i.e. you can be sure that the superblock is
valid.

Important: the notifier callback has to be registered prior to the per-net
operations, because otherwise you get races like the one below:

CPU0						CPU1
----------------------------------		-------------------------------
per-net ops register (no PipeFS sb)
						PipeFS mount - event sent
notifier register (event lost)

Unregistration has to be done in the same (not reversed!) order, to prevent
races like the one below:

CPU0						CPU1
----------------------------------		-------------------------------
per-net ops unregister (no PipeFS sb)
						PipeFS mount - dentry created
notifier unregister (stale dentry)
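
In code, the required ordering is just (a sketch with invented nfsd names;
the blocklayout example mentioned below does exactly this dance):

	/* module init: notifier first, then per-net ops */
	ret = rpc_pipefs_notifier_register(&nfsd4_cld_block);
	if (ret)
		return ret;
	ret = register_pernet_subsys(&nfsd4_cld_net_ops);
	if (ret)
		rpc_pipefs_notifier_unregister(&nfsd4_cld_block);

	/* module exit: same order - notifier first, then per-net ops */
	rpc_pipefs_notifier_unregister(&nfsd4_cld_block);
	unregister_pernet_subsys(&nfsd4_cld_net_ops);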

The best example for you is in fs/nfs/blocklayout/blocklayout.c.
Feel free to ask questions.
Also, have a look at my comments below.


On 29.02.2012 21:15, Jeff Layton wrote:
> ...and add a mechanism for switching between the "legacy" tracker and
> the new one. The decision is made by looking to see whether the
> v4recoverydir exists. If it does, then the legacy client tracker is
> used.
>
> If it doesn't, then the kernel will create a "cld" pipe in rpc_pipefs.
> That pipe is used to talk to a daemon for handling the upcall.
>
> Signed-off-by: Jeff Layton <jlayton@redhat.com>
> ---
>   fs/nfsd/nfs4recover.c |  382 ++++++++++++++++++++++++++++++++++++++++++++++++-
>   1 files changed, 381 insertions(+), 1 deletions(-)
>
> diff --git a/fs/nfsd/nfs4recover.c b/fs/nfsd/nfs4recover.c
> index ccd9b97..34fc843 100644
> --- a/fs/nfsd/nfs4recover.c
> +++ b/fs/nfsd/nfs4recover.c
> @@ -1,5 +1,6 @@
>   /*
>   *  Copyright (c) 2004 The Regents of the University of Michigan.
> +*  Copyright (c) 2012 Jeff Layton <jlayton@redhat.com>
>   *  All rights reserved.
>   *
>   *  Andy Adamson <andros@citi.umich.edu>
> @@ -36,6 +37,11 @@
>   #include <linux/namei.h>
>   #include <linux/crypto.h>
>   #include <linux/sched.h>
> +#include <linux/fs.h>
> +#include <net/net_namespace.h>
> +#include <linux/sunrpc/rpc_pipe_fs.h>
> +#include <linux/sunrpc/clnt.h>
> +#include <linux/nfsd/cld.h>
>
>   #include "nfsd.h"
>   #include "state.h"
> @@ -478,12 +484,386 @@ static struct nfsd4_client_tracking_ops nfsd4_legacy_tracking_ops = {
>   	.grace_done	= nfsd4_recdir_purge_old,
>   };
>
> +/* Globals */
> +#define NFSD_PIPE_DIR		"nfsd"
> +#define NFSD_CLD_PIPE		"cld"
> +
> +static struct rpc_pipe *cld_pipe;
> +
> +/* list of cld_msg's that are currently in use */
> +static DEFINE_SPINLOCK(cld_lock);
> +static LIST_HEAD(cld_list);
> +static unsigned int cld_xid;
> +
> +struct cld_upcall {
> +	struct list_head	cu_list;
> +	struct task_struct	*cu_task;
> +	struct cld_msg	cu_msg;
> +};
> +
> +static int
> +__cld_pipe_upcall(struct cld_msg *cmsg)
> +{
> +	int ret;
> +	struct rpc_pipe_msg msg;
> +
> +	memset(&msg, 0, sizeof(msg));
> +	msg.data = cmsg;
> +	msg.len = sizeof(*cmsg);
> +
> +	/*
> +	 * Set task state before we queue the upcall. That prevents
> +	 * wake_up_process in the downcall from racing with schedule.
> +	 */
> +	set_current_state(TASK_UNINTERRUPTIBLE);
> +	ret = rpc_queue_upcall(cld_pipe, &msg);
> +	if (ret < 0) {
> +		set_current_state(TASK_RUNNING);
> +		goto out;
> +	}
> +
> +	schedule();
> +	set_current_state(TASK_RUNNING);
> +
> +	if (msg.errno < 0)
> +		ret = msg.errno;
> +out:
> +	return ret;
> +}
> +
> +static int
> +cld_pipe_upcall(struct cld_msg *cmsg)
> +{
> +	int ret;
> +
> +	/*
> +	 * -EAGAIN occurs when pipe is closed and reopened while there are
> +	 *  upcalls queued.
> +	 */
> +	do {
> +		ret = __cld_pipe_upcall(cmsg);
> +	} while (ret == -EAGAIN);
> +
> +	return ret;
> +}
> +
> +static ssize_t
> +cld_pipe_downcall(struct file *filp, const char __user *src, size_t mlen)
> +{
> +	struct cld_upcall *tmp, *cup;
> +	struct cld_msg __user *cmsg = (struct cld_msg __user *)src;
> +	uint32_t xid;
> +
> +	if (mlen != sizeof(*cmsg)) {
> +		dprintk("%s: got %lu bytes, expected %lu\n", __func__, mlen,
> +			sizeof(*cmsg));
> +		return -EINVAL;
> +	}
> +
> +	/* copy just the xid so we can try to find that */
> +	if (copy_from_user(&xid, &cmsg->cm_xid, sizeof(xid)) != 0) {
> +		dprintk("%s: error when copying xid from userspace", __func__);
> +		return -EFAULT;
> +	}
> +
> +	/* walk the list and find corresponding xid */
> +	cup = NULL;
> +	spin_lock(&cld_lock);
> +	list_for_each_entry(tmp, &cld_list, cu_list) {
> +		if (get_unaligned(&tmp->cu_msg.cm_xid) == xid) {
> +			cup = tmp;
> +			list_del_init(&cup->cu_list);
> +			break;
> +		}
> +	}
> +	spin_unlock(&cld_lock);
> +
> +	/* couldn't find upcall? */
> +	if (!cup) {
> +		dprintk("%s: couldn't find upcall -- xid=%u\n", __func__,
> +			xid);
> +		return -EINVAL;
> +	}
> +
> +	if (copy_from_user(&cup->cu_msg, src, mlen) != 0)
> +		return -EFAULT;
> +
> +	wake_up_process(cup->cu_task);
> +	return mlen;
> +}
> +
> +static void
> +cld_pipe_destroy_msg(struct rpc_pipe_msg *msg)
> +{
> +	struct cld_msg *cmsg = msg->data;
> +	struct cld_upcall *cup = container_of(cmsg, struct cld_upcall,
> +						 cu_msg);
> +
> +	/* errno >= 0 means we got a downcall */
> +	if (msg->errno >= 0)
> +		return;
> +
> +	wake_up_process(cup->cu_task);
> +}
> +
> +static const struct rpc_pipe_ops cld_upcall_ops = {
> +	.upcall		= rpc_pipe_generic_upcall,
> +	.downcall	= cld_pipe_downcall,
> +	.destroy_msg	= cld_pipe_destroy_msg,
> +};
> +
> +/* Initialize rpc_pipefs pipe for communication with client tracking daemon */
> +static int
> +nfsd4_init_cld_pipe(void)
> +{

1) Please pass the network context to the function instead of hard-coding
init_net in its body.
2) Please move the dentry creation to a separate function, and split this new
function into an unsafe one (for the notifier callback) and a safe one, which
wraps the unsafe one with rpc_get_sb_net() and rpc_put_sb_net().

> +	int ret;
> +	struct dentry *dir, *dentry;
> +	struct super_block *sb;
> +
> +	if (cld_pipe)
> +		return 0;
> +
> +	cld_pipe = rpc_mkpipe_data(&cld_upcall_ops, RPC_PIPE_WAIT_FOR_OPEN);
> +	if (IS_ERR(cld_pipe)) {
> +		ret = PTR_ERR(cld_pipe);
> +		goto err;
> +	}
> +
> +	/* since this is a global thing for now, hang it off init_net ns */
> +	sb = rpc_get_sb_net(&init_net);
> +	if (!sb) {
> +		ret = -ENOMEM;
> +		goto err_destroy_data;

You don't need to fail and destroy here; instead you have to implement a
notifier callback for the PipeFS MOUNT/UMOUNT operations.
Don't worry about this dentry - this is not an NFSd problem.

> +	}
> +
> +	dir = rpc_d_lookup_sb(sb, NFSD_PIPE_DIR);
> +	rpc_put_sb_net(&init_net);

You can't release the PipeFS SB here - it will be dereferenced in
rpc_mkpipe_dentry(), and you are not protected against umount.

> +	if (!dir) {
> +		ret = -ENOENT;
> +		goto err_destroy_data;
> +	}
> +
> +	dentry = rpc_mkpipe_dentry(dir, NFSD_CLD_PIPE, NULL, cld_pipe);
> +	dput(dir);
> +	if (IS_ERR(dentry)) {
> +		ret = PTR_ERR(dentry);
> +		goto err_destroy_data;
> +	}
> +
> +	cld_pipe->dentry = dentry;
> +	return 0;
> +
> +err_destroy_data:
> +	rpc_destroy_pipe_data(cld_pipe);
> +err:
> +	cld_pipe = NULL;
> +	printk(KERN_ERR "NFSD: unable to create nfsdcld upcall pipe (%d)\n",
> +			ret);
> +	return ret;
> +}
> +
> +static void
> +nfsd4_remove_cld_pipe(void)
> +{
> +	int ret;
> +
> +	ret = rpc_unlink(cld_pipe->dentry);

You have to unlink cld_pipe->dentry only if it's non-NULL

> +	if (ret)
> +		printk(KERN_ERR "NFSD: error removing cld pipe: %d\n", ret);
> +	cld_pipe = NULL;
> +}
> +
> +static struct cld_upcall *
> +alloc_cld_upcall(void)
> +{
> +	struct cld_upcall *new, *tmp;
> +
> +	new = kzalloc(sizeof(*new), GFP_KERNEL);
> +	if (!new)
> +		return new;
> +
> +	/* FIXME: hard cap on number in flight? */
> +restart_search:
> +	spin_lock(&cld_lock);
> +	list_for_each_entry(tmp, &cld_list, cu_list) {
> +		if (tmp->cu_msg.cm_xid == cld_xid) {
> +			cld_xid++;
> +			spin_unlock(&cld_lock);
> +			goto restart_search;
> +		}
> +	}
> +	new->cu_task = current;
> +	new->cu_msg.cm_vers = CLD_UPCALL_VERSION;
> +	put_unaligned(cld_xid++, &new->cu_msg.cm_xid);
> +	list_add(&new->cu_list, &cld_list);
> +	spin_unlock(&cld_lock);
> +
> +	dprintk("%s: allocated xid %u\n", __func__, new->cu_msg.cm_xid);
> +
> +	return new;
> +}
> +
> +static void
> +free_cld_upcall(struct cld_upcall *victim)
> +{
> +	spin_lock(&cld_lock);
> +	list_del(&victim->cu_list);
> +	spin_unlock(&cld_lock);
> +	kfree(victim);
> +}
> +
> +/* Ask daemon to create a new record */
> +static void
> +nfsd4_cld_create(struct nfs4_client *clp)
> +{
> +	int ret;
> +	struct cld_upcall *cup;
> +
> +	/* Don't upcall if it's already stored */
> +	if (test_bit(NFSD4_CLIENT_STABLE, &clp->cl_flags))
> +		return;
> +
> +	cup = alloc_cld_upcall();
> +	if (!cup) {
> +		ret = -ENOMEM;
> +		goto out_err;
> +	}
> +
> +	cup->cu_msg.cm_cmd = Cld_Create;
> +	cup->cu_msg.cm_u.cm_name.cn_len = clp->cl_name.len;
> +	memcpy(cup->cu_msg.cm_u.cm_name.cn_id, clp->cl_name.data,
> +			clp->cl_name.len);
> +
> +	ret = cld_pipe_upcall(&cup->cu_msg);
> +	if (!ret) {
> +		ret = cup->cu_msg.cm_status;
> +		set_bit(NFSD4_CLIENT_STABLE, &clp->cl_flags);
> +	}
> +
> +	free_cld_upcall(cup);
> +out_err:
> +	if (ret)
> +		printk(KERN_ERR "NFSD: Unable to create client "
> +				"record on stable storage: %d\n", ret);
> +}
> +
> +/* Ask daemon to create a new record */
> +static void
> +nfsd4_cld_remove(struct nfs4_client *clp)
> +{
> +	int ret;
> +	struct cld_upcall *cup;
> +
> +	/* Don't upcall if it's already removed */
> +	if (!test_bit(NFSD4_CLIENT_STABLE, &clp->cl_flags))
> +		return;
> +
> +	cup = alloc_cld_upcall();
> +	if (!cup) {
> +		ret = -ENOMEM;
> +		goto out_err;
> +	}
> +
> +	cup->cu_msg.cm_cmd = Cld_Remove;
> +	cup->cu_msg.cm_u.cm_name.cn_len = clp->cl_name.len;
> +	memcpy(cup->cu_msg.cm_u.cm_name.cn_id, clp->cl_name.data,
> +			clp->cl_name.len);
> +
> +	ret = cld_pipe_upcall(&cup->cu_msg);
> +	if (!ret) {
> +		ret = cup->cu_msg.cm_status;
> +		clear_bit(NFSD4_CLIENT_STABLE, &clp->cl_flags);
> +	}
> +
> +	free_cld_upcall(cup);
> +out_err:
> +	if (ret)
> +		printk(KERN_ERR "NFSD: Unable to remove client "
> +				"record from stable storage: %d\n", ret);
> +}
> +
> +/* Check for presence of a record, and update its timestamp */
> +static int
> +nfsd4_cld_check(struct nfs4_client *clp)
> +{
> +	int ret;
> +	struct cld_upcall *cup;
> +
> +	/* Don't upcall if one was already stored during this grace pd */
> +	if (test_bit(NFSD4_CLIENT_STABLE, &clp->cl_flags))
> +		return 0;
> +
> +	cup = alloc_cld_upcall();
> +	if (!cup) {
> +		printk(KERN_ERR "NFSD: Unable to check client record on "
> +				"stable storage: %d\n", -ENOMEM);
> +		return -ENOMEM;
> +	}
> +
> +	cup->cu_msg.cm_cmd = Cld_Check;
> +	cup->cu_msg.cm_u.cm_name.cn_len = clp->cl_name.len;
> +	memcpy(cup->cu_msg.cm_u.cm_name.cn_id, clp->cl_name.data,
> +			clp->cl_name.len);
> +
> +	ret = cld_pipe_upcall(&cup->cu_msg);
> +	if (!ret) {
> +		ret = cup->cu_msg.cm_status;
> +		set_bit(NFSD4_CLIENT_STABLE, &clp->cl_flags);
> +	}
> +
> +	free_cld_upcall(cup);
> +	return ret;
> +}
> +
> +static void
> +nfsd4_cld_grace_done(time_t boot_time)
> +{
> +	int ret;
> +	struct cld_upcall *cup;
> +
> +	cup = alloc_cld_upcall();
> +	if (!cup) {
> +		ret = -ENOMEM;
> +		goto out_err;
> +	}
> +
> +	cup->cu_msg.cm_cmd = Cld_GraceDone;
> +	cup->cu_msg.cm_u.cm_gracetime = (int64_t)boot_time;
> +	ret = cld_pipe_upcall(&cup->cu_msg);
> +	if (!ret)
> +		ret = cup->cu_msg.cm_status;
> +
> +	free_cld_upcall(cup);
> +out_err:
> +	if (ret)
> +		printk(KERN_ERR "NFSD: Unable to end grace period: %d\n", ret);
> +}
> +
> +static struct nfsd4_client_tracking_ops nfsd4_cld_tracking_ops = {
> +	.init		= nfsd4_init_cld_pipe,
> +	.exit		= nfsd4_remove_cld_pipe,
> +	.create		= nfsd4_cld_create,
> +	.remove		= nfsd4_cld_remove,
> +	.check		= nfsd4_cld_check,
> +	.grace_done	= nfsd4_cld_grace_done,
> +};
> +
>   int
>   nfsd4_client_tracking_init(void)
>   {
>   	int status;
> +	struct path path;
>
> -	client_tracking_ops = &nfsd4_legacy_tracking_ops;
> +	if (!client_tracking_ops) {
> +		client_tracking_ops = &nfsd4_cld_tracking_ops;
> +		status = kern_path(nfs4_recoverydir(), LOOKUP_FOLLOW, &path);
> +		if (!status) {
> +			if (S_ISDIR(path.dentry->d_inode->i_mode))
> +				client_tracking_ops =
> +						&nfsd4_legacy_tracking_ops;
> +			path_put(&path);
> +		}
> +	}
>
>   	status = client_tracking_ops->init();
>   	if (status) {


-- 
Best regards,
Stanislav Kinsbursky


* Re: [PATCH v7 5/5] nfsd: add the infrastructure to handle the cld upcall
  2012-02-29 18:39   ` Stanislav Kinsbursky
@ 2012-02-29 19:45     ` Jeff Layton
  2012-02-29 21:44       ` bfields
  2012-03-01  7:29       ` Stanislav Kinsbursky
  0 siblings, 2 replies; 11+ messages in thread
From: Jeff Layton @ 2012-02-29 19:45 UTC (permalink / raw)
  To: Stanislav Kinsbursky; +Cc: bfields, linux-nfs

On Wed, 29 Feb 2012 22:39:01 +0400
Stanislav Kinsbursky <skinsbursky@parallels.com> wrote:

> Hi, Jeff.
> I'll try to explain how all this namespace-aware SUNRPC PipeFS works.
> The main idea is that today we have 2 types of independently created and
> destroyed but linked objects:
> 1) the pipe data itself. This object is created whenever the kernel requires
> it (on mount, module install, whatever).
> 2) the dentry/inode for this pipe. This object is created on PipeFS
> superblock creation.
> This means any kernel user of SUNRPC pipes has to provide two mechanisms:
> 1) A safe call (in per-net operations, or globally if the object is global,
> as in your case) that creates the PipeFS dentry/inode pair. This is done in
> nfsd4_init_cld_pipe().
>
> 2) A notifier callback for PipeFS mount/umount operations. Note: this
> callback is made from the SUNRPC module (i.e. you have to take a reference
> on the nfsd module); this callback is safe, i.e. you can be sure that the
> superblock is valid.
> 

This is the part I'm having a hard time with. IIUC, the existing
examples are fairly clear since you have a 1:1 ratio of objects: a
per-net object (the pipe data) that is hooked up to a dentry/inode in a
per-net sb.

Here though, we don't really have that. We have a global pipe object
and I don't see how you can hook that up to multiple dentries/inodes.

One possibility is to completely "namespacify" this code -- don't use a
global rpc_pipe object and make it per-net instead. If I do that though,
then I suppose I'll also need to make all of the
nfsd4_client_tracking_ops take a struct net arg as well?

Thanks for the help so far...
-- 
Jeff Layton <jlayton@redhat.com>


* Re: [PATCH v7 5/5] nfsd: add the infrastructure to handle the cld upcall
  2012-02-29 19:45     ` Jeff Layton
@ 2012-02-29 21:44       ` bfields
  2012-03-01  7:31         ` Stanislav Kinsbursky
  2012-03-01  7:29       ` Stanislav Kinsbursky
  1 sibling, 1 reply; 11+ messages in thread
From: bfields @ 2012-02-29 21:44 UTC (permalink / raw)
  To: Jeff Layton; +Cc: Stanislav Kinsbursky, linux-nfs

On Wed, Feb 29, 2012 at 02:45:12PM -0500, Jeff Layton wrote:
> On Wed, 29 Feb 2012 22:39:01 +0400
> Stanislav Kinsbursky <skinsbursky@parallels.com> wrote:
> 
> > Hi, Jeff.
> > I'll try to explain how all this namespace-aware SUNRPC PipeFS works.
> > The main idea is that today we have 2 types of independently created and
> > destroyed but linked objects:
> > 1) the pipe data itself. This object is created whenever the kernel
> > requires it (on mount, module install, whatever).
> > 2) the dentry/inode for this pipe. This object is created on PipeFS
> > superblock creation.
> > This means any kernel user of SUNRPC pipes has to provide two mechanisms:
> > 1) A safe call (in per-net operations, or globally if the object is
> > global, as in your case) that creates the PipeFS dentry/inode pair. This
> > is done in nfsd4_init_cld_pipe().
> >
> > 2) A notifier callback for PipeFS mount/umount operations. Note: this
> > callback is made from the SUNRPC module (i.e. you have to take a reference
> > on the nfsd module); this callback is safe, i.e. you can be sure that the
> > superblock is valid.
> > 
> 
> This is the part I'm having a hard time with. IIUC, the existing
> examples are fairly clear since you have a 1:1 ratio of objects: a
> per-net object (the pipe data) that is hooked up to a dentry/inode in a
> per-net sb.
> 
> Here though, we don't really have that. We have a global pipe object
> and I don't see how you can hook that up to multiple dentries/inodes.
> 
> One possibility is to completely "namespacify" this code -- don't use a
> global rpc_pipe object and make it per-net instead. If I do that though,
> then I suppose I'll also need to make all of the
> nfsd4_client_tracking_ops take a struct net arg as well?

That does sound like what we'll want eventually.

And callers will pass svc_rqst->rq_xprt->xpt_net there, I assume.

(Or actually we may need a pointer to the netns from the nfs4 client?)
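
Something like this, I'd guess (hypothetical sketch of the signature
change; not settled):

	/* per-net variant of the record-check wrapper */
	extern int nfsd4_client_record_check(struct nfs4_client *clp,
					     struct net *net);

	/* a caller that has the request at hand: */
	status = nfsd4_client_record_check(clp, rqstp->rq_xprt->xpt_net);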

--b.


* Re: [PATCH v7 5/5] nfsd: add the infrastructure to handle the cld upcall
  2012-02-29 19:45     ` Jeff Layton
  2012-02-29 21:44       ` bfields
@ 2012-03-01  7:29       ` Stanislav Kinsbursky
  1 sibling, 0 replies; 11+ messages in thread
From: Stanislav Kinsbursky @ 2012-03-01  7:29 UTC (permalink / raw)
  To: Jeff Layton; +Cc: bfields, linux-nfs

On 29.02.2012 23:45, Jeff Layton wrote:
> On Wed, 29 Feb 2012 22:39:01 +0400
> Stanislav Kinsbursky <skinsbursky@parallels.com> wrote:
>
>> Hi, Jeff.
>> I'll try to explain how all this namespace-aware SUNRPC PipeFS works.
>> The main idea is that today we have 2 types of independently created and
>> destroyed but linked objects:
>> 1) the pipe data itself. This object is created whenever the kernel requires
>> it (on mount, module install, whatever).
>> 2) the dentry/inode for this pipe. This object is created on PipeFS
>> superblock creation.
>> This means any kernel user of SUNRPC pipes has to provide two mechanisms:
>> 1) A safe call (in per-net operations, or globally if the object is global,
>> as in your case) that creates the PipeFS dentry/inode pair. This is done in
>> nfsd4_init_cld_pipe().
>>
>> 2) A notifier callback for PipeFS mount/umount operations. Note: this
>> callback is made from the SUNRPC module (i.e. you have to take a reference
>> on the nfsd module); this callback is safe, i.e. you can be sure that the
>> superblock is valid.
>>
>
> This is the part I'm having a hard time with. IIUC, the existing
> examples are fairly clear since you have a 1:1 ratio of objects: a
> per-net object (the pipe data) that is hooked up to a dentry/inode in a
> per-net sb.
>
> Here though, we don't really have that. We have a global pipe object
> and I don't see how you can hook that up to multiple dentries/inodes.
>
> One possibility is to completely "namespacify" this code -- don't use a
> global rpc_pipe object and make it per-net instead. If I do that though,
> then I suppose I'll also need to make all of the
> nfsd4_client_tracking_ops take a struct net arg as well?
>
> Thanks for the help so far...

There is no problem at all.
It would be great (for me - eventually I'll update it to namespace-aware code)
if you converted your code into something like this:
1) Safe and unsafe dentry creation routines, parametrized by net (sketched
below). Don't use per-net data - hard-code your global pointer.
2) A pipe data creation routine, parametrized by net. Call this one only for
"init_net" for now - no per-net ops required. Also call the safe dentry
creation routine here.
3) A notifier callback - just ignore any callbacks for nets other than
init_net. Otherwise call the unsafe dentry creation routine.

If you do so, it will be very easy to make this feature network namespace
aware.
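
Roughly like this for the dentry half (a sketch with invented function
names; the global cld_pipe pointer stays hard-coded, as in your patch):

	/* unsafe: caller must keep the PipeFS sb pinned (notifier path) */
	static int __nfsd4_create_cld_dentry(struct super_block *sb)
	{
		struct dentry *dir, *dentry;

		dir = rpc_d_lookup_sb(sb, NFSD_PIPE_DIR);
		if (!dir)
			return -ENOENT;
		dentry = rpc_mkpipe_dentry(dir, NFSD_CLD_PIPE, NULL, cld_pipe);
		dput(dir);
		if (IS_ERR(dentry))
			return PTR_ERR(dentry);
		cld_pipe->dentry = dentry;
		return 0;
	}

	/* safe: pins the sb itself, for the pipe-data creation path */
	static int nfsd4_create_cld_dentry(struct net *net)
	{
		struct super_block *sb;
		int ret = 0;

		sb = rpc_get_sb_net(net);
		if (sb) {
			ret = __nfsd4_create_cld_dentry(sb);
			rpc_put_sb_net(net);
		}
		return ret;
	}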

-- 
Best regards,
Stanislav Kinsbursky


* Re: [PATCH v7 5/5] nfsd: add the infrastructure to handle the cld upcall
  2012-02-29 21:44       ` bfields
@ 2012-03-01  7:31         ` Stanislav Kinsbursky
  0 siblings, 0 replies; 11+ messages in thread
From: Stanislav Kinsbursky @ 2012-03-01  7:31 UTC (permalink / raw)
  To: bfields; +Cc: Jeff Layton, linux-nfs

On 01.03.2012 01:44, bfields@fieldses.org wrote:
> On Wed, Feb 29, 2012 at 02:45:12PM -0500, Jeff Layton wrote:
>> On Wed, 29 Feb 2012 22:39:01 +0400
>> Stanislav Kinsbursky <skinsbursky@parallels.com> wrote:
>>
>>> Hi, Jeff.
>>> I'll try to explain how all this namespace-aware SUNRPC PipeFS works.
>>> The main idea is that today we have 2 types of independently created and
>>> destroyed but linked objects:
>>> 1) the pipe data itself. This object is created whenever the kernel
>>> requires it (on mount, module install, whatever).
>>> 2) the dentry/inode for this pipe. This object is created on PipeFS
>>> superblock creation.
>>> This means any kernel user of SUNRPC pipes has to provide two mechanisms:
>>> 1) A safe call (in per-net operations, or globally if the object is
>>> global, as in your case) that creates the PipeFS dentry/inode pair. This
>>> is done in nfsd4_init_cld_pipe().
>>>
>>> 2) A notifier callback for PipeFS mount/umount operations. Note: this
>>> callback is made from the SUNRPC module (i.e. you have to take a reference
>>> on the nfsd module); this callback is safe, i.e. you can be sure that the
>>> superblock is valid.
>>>
>>
>> This is the part I'm having a hard time with. IIUC, the existing
>> examples are fairly clear since you have a 1:1 ratio of objects: a
>> per-net object (the pipe data) that is hooked up to a dentry/inode in a
>> per-net sb.
>>
>> Here though, we don't really have that. We have a global pipe object
>> and I don't see how you can hook that up to multiple dentries/inodes.
>>
>> One possibility is to completely "namespacify" this code -- don't use a
>> global rpc_pipe object and make it per-net instead. If I do that though,
>> then I suppose I'll also need to make all of the
>> nfsd4_client_tracking_ops take a struct net arg as well?
>
> That does sound like what we'll want eventually.
>
> And callers will pass svc_rqst->rq_xprt->xpt_net there, I assume.
>
> (Or actually we may need a pointer to the netns from the nfs4 client?)
>

Hi, Bruce. I haven't thought about the whole structure yet, but I'm going to
work on NFSd containerization soon.
I hope for your assistance.

-- 
Best regards,
Stanislav Kinsbursky


end of thread

Thread overview: 11+ messages
2012-02-29 17:15 [PATCH v7 0/5] nfsd: overhaul the client name tracking code Jeff Layton
2012-02-29 17:15 ` [PATCH v7 1/5] nfsd: add nfsd4_client_tracking_ops struct and a way to set it Jeff Layton
2012-02-29 17:15 ` [PATCH v7 2/5] sunrpc: create nfsd dir in rpc_pipefs Jeff Layton
2012-02-29 17:15 ` [PATCH v7 3/5] nfsd: convert nfs4_client->cl_cb_flags to a generic flags field Jeff Layton
2012-02-29 17:15 ` [PATCH v7 4/5] nfsd: add a header describing upcall to nfsdcld Jeff Layton
2012-02-29 17:15 ` [PATCH v7 5/5] nfsd: add the infrastructure to handle the cld upcall Jeff Layton
2012-02-29 18:39   ` Stanislav Kinsbursky
2012-02-29 19:45     ` Jeff Layton
2012-02-29 21:44       ` bfields
2012-03-01  7:31         ` Stanislav Kinsbursky
2012-03-01  7:29       ` Stanislav Kinsbursky
