* [PATCH 0/5] sunrpc: not refcounting svc_serv
@ 2023-10-30  1:08 NeilBrown
  2023-10-30  1:08 ` [PATCH 1/5] nfsd: call nfsd_last_thread() before final nfsd_put() NeilBrown
                   ` (5 more replies)
  0 siblings, 6 replies; 21+ messages in thread
From: NeilBrown @ 2023-10-30  1:08 UTC (permalink / raw)
  To: Chuck Lever, Jeff Layton
  Cc: linux-nfs, Olga Kornievskaia, Dai Ngo, Tom Talpey

This patch set continues earlier work of improving how threads and
services are managed.  Specifically, it drops the refcount.

The refcount is always changed under the mutex, and almost always is
exactly equal to the number of threads.  The few cases where it is
more than the number of threads can usefully be handled in other ways,
as seen in the patches.
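The observation above can be modelled in a few lines of userspace C.  This is a sketch of the invariant being claimed, not kernel code; the names and the pthread mutex standing in for nfsd_mutex are illustrative assumptions:

```c
#include <assert.h>
#include <pthread.h>

/* Userspace model: every refcount change happens under one mutex, and
 * the count moves in lockstep with the number of threads, so the
 * refcount carries no information beyond sv_nrthreads. */
static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static int nrthreads;
static int refcnt;

void thread_start(void)
{
	pthread_mutex_lock(&m);
	nrthreads++;
	refcnt++;		/* models svc_get() */
	assert(refcnt == nrthreads);
	pthread_mutex_unlock(&m);
}

void thread_exit(void)
{
	pthread_mutex_lock(&m);
	refcnt--;		/* models svc_put() */
	nrthreads--;
	assert(refcnt == nrthreads);
	pthread_mutex_unlock(&m);
}

int threads_running(void)
{
	return nrthreads;
}
```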

The first patch fixes a potential use-after-free when adding a socket
fails.  This might be the UAF that Jeff mentioned recently.

The second patch, which removes the use of a refcount in pool_stats
handling, is more complex than I would have liked, but I think it is
worth it for the substantial simplification seen in 4/5.

NeilBrown




^ permalink raw reply	[flat|nested] 21+ messages in thread

* [PATCH 1/5] nfsd: call nfsd_last_thread() before final nfsd_put()
  2023-10-30  1:08 [PATCH 0/5] sunrpc: not refcounting svc_serv NeilBrown
@ 2023-10-30  1:08 ` NeilBrown
  2023-10-30 13:15   ` Jeff Layton
  2023-10-30  1:08 ` [PATCH 2/5] svc: don't hold reference for poolstats, only mutex NeilBrown
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 21+ messages in thread
From: NeilBrown @ 2023-10-30  1:08 UTC (permalink / raw)
  To: Chuck Lever, Jeff Layton
  Cc: linux-nfs, Olga Kornievskaia, Dai Ngo, Tom Talpey

If write_ports_addfd or write_ports_addxprt fail, they call nfsd_put()
without calling nfsd_last_thread().  This leaves nn->nfsd_serv pointing
to a structure that has been freed.

So export nfsd_last_thread() and call it when the nfsd_serv is about to
be destroyed.
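The shape of the bug and the fix can be sketched in userspace C.  This is a simplified model under stated assumptions, not the nfsd code: last_thread() stands in for nfsd_last_thread(net), and the global pointer stands in for nn->nfsd_serv:

```c
#include <assert.h>
#include <stdlib.h>

/* Model of the bug: when adding a socket fails and no threads exist,
 * the serv must be torn down and the per-net pointer cleared, or the
 * pointer is left dangling at freed memory. */
struct serv { int nrthreads; };
static struct serv *nfsd_serv;		/* models nn->nfsd_serv */

static void last_thread(void)		/* models nfsd_last_thread(net) */
{
	free(nfsd_serv);
	nfsd_serv = NULL;
}

int add_socket(int fail)
{
	if (!nfsd_serv)
		nfsd_serv = calloc(1, sizeof(*nfsd_serv));
	if (fail) {
		/* the fix: clean up on the error path too */
		if (nfsd_serv->nrthreads == 0)
			last_thread();
		return -1;
	}
	return 0;
}

int serv_present(void)
{
	return nfsd_serv != NULL;
}
```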

Signed-off-by: NeilBrown <neilb@suse.de>
---
 fs/nfsd/nfsctl.c | 9 +++++++--
 fs/nfsd/nfsd.h   | 1 +
 fs/nfsd/nfssvc.c | 2 +-
 3 files changed, 9 insertions(+), 3 deletions(-)

diff --git a/fs/nfsd/nfsctl.c b/fs/nfsd/nfsctl.c
index 739ed5bf71cd..79efb1075f38 100644
--- a/fs/nfsd/nfsctl.c
+++ b/fs/nfsd/nfsctl.c
@@ -705,8 +705,10 @@ static ssize_t __write_ports_addfd(char *buf, struct net *net, const struct cred
 
 	err = svc_addsock(nn->nfsd_serv, net, fd, buf, SIMPLE_TRANSACTION_LIMIT, cred);
 
-	if (err >= 0 &&
-	    !nn->nfsd_serv->sv_nrthreads && !xchg(&nn->keep_active, 1))
+	if (err < 0 && !nn->nfsd_serv->sv_nrthreads && !nn->keep_active)
+		nfsd_last_thread(net);
+	else if (err >= 0 &&
+		 !nn->nfsd_serv->sv_nrthreads && !xchg(&nn->keep_active, 1))
 		svc_get(nn->nfsd_serv);
 
 	nfsd_put(net);
@@ -757,6 +759,9 @@ static ssize_t __write_ports_addxprt(char *buf, struct net *net, const struct cr
 		svc_xprt_put(xprt);
 	}
 out_err:
+	if (!nn->nfsd_serv->sv_nrthreads && !nn->keep_active)
+		nfsd_last_thread(net);
+
 	nfsd_put(net);
 	return err;
 }
diff --git a/fs/nfsd/nfsd.h b/fs/nfsd/nfsd.h
index f5ff42f41ee7..3286ffacbc56 100644
--- a/fs/nfsd/nfsd.h
+++ b/fs/nfsd/nfsd.h
@@ -155,6 +155,7 @@ int nfsd_vers(struct nfsd_net *nn, int vers, enum vers_op change);
 int nfsd_minorversion(struct nfsd_net *nn, u32 minorversion, enum vers_op change);
 void nfsd_reset_versions(struct nfsd_net *nn);
 int nfsd_create_serv(struct net *net);
+void nfsd_last_thread(struct net *net);
 
 extern int nfsd_max_blksize;
 
diff --git a/fs/nfsd/nfssvc.c b/fs/nfsd/nfssvc.c
index d6122bb2d167..6c968c02cc29 100644
--- a/fs/nfsd/nfssvc.c
+++ b/fs/nfsd/nfssvc.c
@@ -542,7 +542,7 @@ static struct notifier_block nfsd_inet6addr_notifier = {
 /* Only used under nfsd_mutex, so this atomic may be overkill: */
 static atomic_t nfsd_notifier_refcount = ATOMIC_INIT(0);
 
-static void nfsd_last_thread(struct net *net)
+void nfsd_last_thread(struct net *net)
 {
 	struct nfsd_net *nn = net_generic(net, nfsd_net_id);
 	struct svc_serv *serv = nn->nfsd_serv;
-- 
2.42.0


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH 2/5] svc: don't hold reference for poolstats, only mutex.
  2023-10-30  1:08 [PATCH 0/5] sunrpc: not refcounting svc_serv NeilBrown
  2023-10-30  1:08 ` [PATCH 1/5] nfsd: call nfsd_last_thread() before final nfsd_put() NeilBrown
@ 2023-10-30  1:08 ` NeilBrown
  2023-10-30 13:01   ` Jeff Layton
  2023-10-30  1:08 ` [PATCH 3/5] nfsd: hold nfsd_mutex across entire netlink operation NeilBrown
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 21+ messages in thread
From: NeilBrown @ 2023-10-30  1:08 UTC (permalink / raw)
  To: Chuck Lever, Jeff Layton
  Cc: linux-nfs, Olga Kornievskaia, Dai Ngo, Tom Talpey

A future patch will remove refcounting on svc_serv as it is of little
use.
It is currently used to keep the svc around while the pool_stats file is
open.
Change this to get the pointer, protected by the mutex, only in
seq_start, and then release the mutex in seq_stop.
This means that if the nfsd server is stopped and restarted while the
pool_stats file is open, then some pool stats info could be from the
first instance and some from the second.  This might appear odd, but is
unlikely to be a problem in practice.
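The lock-in-start, unlock-in-stop pairing can be sketched in userspace, with a pthread mutex standing in for nfsd_mutex.  The names are illustrative, not the patch's exact code:

```c
#include <pthread.h>
#include <stddef.h>

/* Model of the seq_file pattern above: the serv pointer is only valid
 * between start (lock + fetch) and stop (unlock). */
struct serv { int dummy; };
static pthread_mutex_t nfsd_mutex = PTHREAD_MUTEX_INITIALIZER;
static struct serv the_serv;
static struct serv *nfsd_serv = &the_serv;

struct serv *get_serv(int start)	/* models nfsd_get_serv() */
{
	if (start) {
		pthread_mutex_lock(&nfsd_mutex);
		return nfsd_serv;	/* may be NULL if nfsd is down */
	}
	pthread_mutex_unlock(&nfsd_mutex);
	return NULL;
}
```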

Signed-off-by: NeilBrown <neilb@suse.de>
---
 fs/nfsd/nfsctl.c           |  2 +-
 fs/nfsd/nfssvc.c           | 30 ++++++++---------------
 include/linux/sunrpc/svc.h |  5 +++-
 net/sunrpc/svc_xprt.c      | 49 ++++++++++++++++++++++++++++++++------
 4 files changed, 57 insertions(+), 29 deletions(-)

diff --git a/fs/nfsd/nfsctl.c b/fs/nfsd/nfsctl.c
index 79efb1075f38..d78ae4452946 100644
--- a/fs/nfsd/nfsctl.c
+++ b/fs/nfsd/nfsctl.c
@@ -179,7 +179,7 @@ static const struct file_operations pool_stats_operations = {
 	.open		= nfsd_pool_stats_open,
 	.read		= seq_read,
 	.llseek		= seq_lseek,
-	.release	= nfsd_pool_stats_release,
+	.release	= svc_pool_stats_release,
 };
 
 DEFINE_SHOW_ATTRIBUTE(nfsd_reply_cache_stats);
diff --git a/fs/nfsd/nfssvc.c b/fs/nfsd/nfssvc.c
index 6c968c02cc29..203e1cfc1cad 100644
--- a/fs/nfsd/nfssvc.c
+++ b/fs/nfsd/nfssvc.c
@@ -1072,30 +1072,20 @@ bool nfssvc_encode_voidres(struct svc_rqst *rqstp, struct xdr_stream *xdr)
 	return true;
 }
 
-int nfsd_pool_stats_open(struct inode *inode, struct file *file)
+static struct svc_serv *nfsd_get_serv(struct seq_file *s, bool start)
 {
-	int ret;
-	struct nfsd_net *nn = net_generic(inode->i_sb->s_fs_info, nfsd_net_id);
-
-	mutex_lock(&nfsd_mutex);
-	if (nn->nfsd_serv == NULL) {
+	struct nfsd_net *nn = net_generic(file_inode(s->file)->i_sb->s_fs_info,
+					  nfsd_net_id);
+	if (start) {
+		mutex_lock(&nfsd_mutex);
+		return nn->nfsd_serv;
+	} else {
 		mutex_unlock(&nfsd_mutex);
-		return -ENODEV;
+		return NULL;
 	}
-	svc_get(nn->nfsd_serv);
-	ret = svc_pool_stats_open(nn->nfsd_serv, file);
-	mutex_unlock(&nfsd_mutex);
-	return ret;
 }
 
-int nfsd_pool_stats_release(struct inode *inode, struct file *file)
+int nfsd_pool_stats_open(struct inode *inode, struct file *file)
 {
-	struct seq_file *seq = file->private_data;
-	struct svc_serv *serv = seq->private;
-	int ret = seq_release(inode, file);
-
-	mutex_lock(&nfsd_mutex);
-	svc_put(serv);
-	mutex_unlock(&nfsd_mutex);
-	return ret;
+	return svc_pool_stats_open(nfsd_get_serv, file);
 }
diff --git a/include/linux/sunrpc/svc.h b/include/linux/sunrpc/svc.h
index b10f987509cc..11acad6988a2 100644
--- a/include/linux/sunrpc/svc.h
+++ b/include/linux/sunrpc/svc.h
@@ -433,7 +433,10 @@ void		   svc_exit_thread(struct svc_rqst *);
 struct svc_serv *  svc_create_pooled(struct svc_program *, unsigned int,
 				     int (*threadfn)(void *data));
 int		   svc_set_num_threads(struct svc_serv *, struct svc_pool *, int);
-int		   svc_pool_stats_open(struct svc_serv *serv, struct file *file);
+int		   svc_pool_stats_open(struct svc_serv *(*get_serv)(struct seq_file *, bool),
+				       struct file *file);
+int		   svc_pool_stats_release(struct inode *inode,
+					  struct file *file);
 void		   svc_process(struct svc_rqst *rqstp);
 void		   svc_process_bc(struct rpc_rqst *req, struct svc_rqst *rqstp);
 int		   svc_register(const struct svc_serv *, struct net *, const int,
diff --git a/net/sunrpc/svc_xprt.c b/net/sunrpc/svc_xprt.c
index fee83d1024bc..2f99f7475b7b 100644
--- a/net/sunrpc/svc_xprt.c
+++ b/net/sunrpc/svc_xprt.c
@@ -1366,26 +1366,38 @@ EXPORT_SYMBOL_GPL(svc_xprt_names);
 
 /*----------------------------------------------------------------------------*/
 
+struct pool_private {
+	struct svc_serv *(*get_serv)(struct seq_file *, bool);
+	struct svc_serv *serv;
+};
+
 static void *svc_pool_stats_start(struct seq_file *m, loff_t *pos)
 {
 	unsigned int pidx = (unsigned int)*pos;
-	struct svc_serv *serv = m->private;
+	struct pool_private *pp = m->private;
 
 	dprintk("svc_pool_stats_start, *pidx=%u\n", pidx);
 
+	pp->serv = pp->get_serv(m, true);
+
 	if (!pidx)
 		return SEQ_START_TOKEN;
-	return (pidx > serv->sv_nrpools ? NULL : &serv->sv_pools[pidx-1]);
+	if (!pp->serv)
+		return NULL;
+	return (pidx > pp->serv->sv_nrpools ? NULL : &pp->serv->sv_pools[pidx-1]);
 }
 
 static void *svc_pool_stats_next(struct seq_file *m, void *p, loff_t *pos)
 {
 	struct svc_pool *pool = p;
-	struct svc_serv *serv = m->private;
+	struct pool_private *pp = m->private;
+	struct svc_serv *serv = pp->serv;
 
 	dprintk("svc_pool_stats_next, *pos=%llu\n", *pos);
 
-	if (p == SEQ_START_TOKEN) {
+	if (!serv) {
+		pool = NULL;
+	} else if (p == SEQ_START_TOKEN) {
 		pool = &serv->sv_pools[0];
 	} else {
 		unsigned int pidx = (pool - &serv->sv_pools[0]);
@@ -1400,6 +1412,9 @@ static void *svc_pool_stats_next(struct seq_file *m, void *p, loff_t *pos)
 
 static void svc_pool_stats_stop(struct seq_file *m, void *p)
 {
+	struct pool_private *pp = m->private;
+
+	pp->get_serv(m, false);
 }
 
 static int svc_pool_stats_show(struct seq_file *m, void *p)
@@ -1427,15 +1442,35 @@ static const struct seq_operations svc_pool_stats_seq_ops = {
 	.show	= svc_pool_stats_show,
 };
 
-int svc_pool_stats_open(struct svc_serv *serv, struct file *file)
+int svc_pool_stats_open(struct svc_serv *(*get_serv)(struct seq_file *, bool),
+			struct file *file)
 {
+	struct pool_private *pp;
 	int err;
 
+	pp = kmalloc(sizeof(*pp), GFP_KERNEL);
+	if (!pp)
+		return -ENOMEM;
+
 	err = seq_open(file, &svc_pool_stats_seq_ops);
-	if (!err)
-		((struct seq_file *) file->private_data)->private = serv;
+	if (!err) {
+		pp->get_serv = get_serv;
+		((struct seq_file *) file->private_data)->private = pp;
+	} else
+		kfree(pp);
+
 	return err;
 }
 EXPORT_SYMBOL(svc_pool_stats_open);
 
+int svc_pool_stats_release(struct inode *inode, struct file *file)
+{
+	struct seq_file *seq = file->private_data;
+
+	kfree(seq->private);
+	seq->private = NULL;
+	return seq_release(inode, file);
+}
+EXPORT_SYMBOL(svc_pool_stats_release);
+
 /*----------------------------------------------------------------------------*/
-- 
2.42.0


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH 3/5] nfsd: hold nfsd_mutex across entire netlink operation
  2023-10-30  1:08 [PATCH 0/5] sunrpc: not refcounting svc_serv NeilBrown
  2023-10-30  1:08 ` [PATCH 1/5] nfsd: call nfsd_last_thread() before final nfsd_put() NeilBrown
  2023-10-30  1:08 ` [PATCH 2/5] svc: don't hold reference for poolstats, only mutex NeilBrown
@ 2023-10-30  1:08 ` NeilBrown
  2023-10-30 13:03   ` Jeff Layton
  2023-10-30  1:08 ` [PATCH 4/5] SUNRPC: discard sv_refcnt, and svc_get/svc_put NeilBrown
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 21+ messages in thread
From: NeilBrown @ 2023-10-30  1:08 UTC (permalink / raw)
  To: Chuck Lever, Jeff Layton
  Cc: linux-nfs, Olga Kornievskaia, Dai Ngo, Tom Talpey

Rather than using svc_get() and svc_put() to hold a stable reference to
the nfsd_svc for netlink lookups, simply hold the mutex for the entire
time.

The "entire" time isn't very long, and the mutex is not often contended.

This makes way for us to remove the refcounting of svc, which is more
confusing than useful.
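The asymmetric lock lifetime this patch sets up (the mutex stays held from _start to _done on success, and is dropped in _start only on failure) can be sketched in userspace.  This is a model with illustrative names, not the netlink handlers themselves:

```c
#include <errno.h>
#include <pthread.h>

/* Model of holding nfsd_mutex across a whole dump operation instead of
 * pinning the serv with a refcount. */
static pthread_mutex_t nfsd_mutex = PTHREAD_MUTEX_INITIALIZER;
static int nfsd_up = 1;		/* models nn->nfsd_serv != NULL */

int status_get_start(void)
{
	pthread_mutex_lock(&nfsd_mutex);
	if (nfsd_up)
		return 0;	/* success: mutex stays held until _done */
	pthread_mutex_unlock(&nfsd_mutex);
	return -ENODEV;
}

int status_get_done(void)
{
	pthread_mutex_unlock(&nfsd_mutex);
	return 0;
}
```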

Signed-off-by: NeilBrown <neilb@suse.de>
---
 fs/nfsd/nfsctl.c | 9 +++------
 1 file changed, 3 insertions(+), 6 deletions(-)

diff --git a/fs/nfsd/nfsctl.c b/fs/nfsd/nfsctl.c
index d78ae4452946..8f644f1d157c 100644
--- a/fs/nfsd/nfsctl.c
+++ b/fs/nfsd/nfsctl.c
@@ -1515,11 +1515,10 @@ int nfsd_nl_rpc_status_get_start(struct netlink_callback *cb)
 	int ret = -ENODEV;
 
 	mutex_lock(&nfsd_mutex);
-	if (nn->nfsd_serv) {
-		svc_get(nn->nfsd_serv);
+	if (nn->nfsd_serv)
 		ret = 0;
-	}
-	mutex_unlock(&nfsd_mutex);
+	else
+		mutex_unlock(&nfsd_mutex);
 
 	return ret;
 }
@@ -1691,8 +1690,6 @@ int nfsd_nl_rpc_status_get_dumpit(struct sk_buff *skb,
  */
 int nfsd_nl_rpc_status_get_done(struct netlink_callback *cb)
 {
-	mutex_lock(&nfsd_mutex);
-	nfsd_put(sock_net(cb->skb->sk));
 	mutex_unlock(&nfsd_mutex);
 
 	return 0;
-- 
2.42.0


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH 4/5] SUNRPC: discard sv_refcnt, and svc_get/svc_put
  2023-10-30  1:08 [PATCH 0/5] sunrpc: not refcounting svc_serv NeilBrown
                   ` (2 preceding siblings ...)
  2023-10-30  1:08 ` [PATCH 3/5] nfsd: hold nfsd_mutex across entire netlink operation NeilBrown
@ 2023-10-30  1:08 ` NeilBrown
  2023-10-30 13:15   ` Jeff Layton
  2023-10-30  1:08 ` [PATCH 5/5] nfsd: rename nfsd_last_thread() to nfsd_destroy_serv() NeilBrown
  2023-10-31 15:39 ` [PATCH 0/5] sunrpc: not refcounting svc_serv Chuck Lever III
  5 siblings, 1 reply; 21+ messages in thread
From: NeilBrown @ 2023-10-30  1:08 UTC (permalink / raw)
  To: Chuck Lever, Jeff Layton
  Cc: linux-nfs, Olga Kornievskaia, Dai Ngo, Tom Talpey

sv_refcnt is no longer useful.
lockd and nfs-cb only ever have the svc active when there are a non-zero
number of threads, so sv_refcnt mirrors sv_nrthreads.

nfsd also keeps the svc active between when a socket is added and when
the first thread is started, but we don't really need a refcount for
that.  We can simply not destroy the svc after adding a socket.

So remove sv_refcnt and the get/put functions.
Instead of a final call to svc_put(), call svc_destroy() instead.
This is changed to also store NULL in the passed-in pointer to make it
easier to avoid use-after-free situations.
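The pointer-clearing convention described above can be shown in a few lines of userspace C.  This mirrors the new svc_destroy() signature but is a standalone sketch, not the kernel implementation:

```c
#include <stdlib.h>

/* Model of svc_destroy(&serv): taking the address of the caller's
 * pointer lets destroy clear it, so code running after the call cannot
 * dereference the freed serv by accident. */
struct svc_serv { int sv_nrthreads; };

void svc_destroy(struct svc_serv **servp)
{
	struct svc_serv *serv = *servp;

	*servp = NULL;		/* clear the caller's pointer first */
	free(serv);
}
```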

Signed-off-by: NeilBrown <neilb@suse.de>
---
 fs/lockd/svc.c             | 10 ++++------
 fs/nfs/callback.c          | 13 ++++++-------
 fs/nfsd/netns.h            |  7 -------
 fs/nfsd/nfsctl.c           | 15 ++++-----------
 fs/nfsd/nfsd.h             |  7 -------
 fs/nfsd/nfssvc.c           | 26 ++++----------------------
 include/linux/sunrpc/svc.h | 27 +--------------------------
 net/sunrpc/svc.c           | 13 ++++---------
 8 files changed, 23 insertions(+), 95 deletions(-)

diff --git a/fs/lockd/svc.c b/fs/lockd/svc.c
index 81be07c1d3d1..0d6cb3fdc0e1 100644
--- a/fs/lockd/svc.c
+++ b/fs/lockd/svc.c
@@ -345,10 +345,10 @@ static int lockd_get(void)
 
 	serv->sv_maxconn = nlm_max_connections;
 	error = svc_set_num_threads(serv, NULL, 1);
-	/* The thread now holds the only reference */
-	svc_put(serv);
-	if (error < 0)
+	if (error < 0) {
+		svc_destroy(&serv);
 		return error;
+	}
 
 	nlmsvc_serv = serv;
 	register_inetaddr_notifier(&lockd_inetaddr_notifier);
@@ -372,11 +372,9 @@ static void lockd_put(void)
 	unregister_inet6addr_notifier(&lockd_inet6addr_notifier);
 #endif
 
-	svc_get(nlmsvc_serv);
 	svc_set_num_threads(nlmsvc_serv, NULL, 0);
-	svc_put(nlmsvc_serv);
 	timer_delete_sync(&nlmsvc_retry);
-	nlmsvc_serv = NULL;
+	svc_destroy(&nlmsvc_serv);
 	dprintk("lockd_down: service destroyed\n");
 }
 
diff --git a/fs/nfs/callback.c b/fs/nfs/callback.c
index 4ffa1f469e90..760d27dd7225 100644
--- a/fs/nfs/callback.c
+++ b/fs/nfs/callback.c
@@ -187,7 +187,7 @@ static struct svc_serv *nfs_callback_create_svc(int minorversion)
 	 * Check whether we're already up and running.
 	 */
 	if (cb_info->serv)
-		return svc_get(cb_info->serv);
+		return cb_info->serv;
 
 	/*
 	 * Sanity check: if there's no task,
@@ -245,9 +245,10 @@ int nfs_callback_up(u32 minorversion, struct rpc_xprt *xprt)
 
 	cb_info->users++;
 err_net:
-	if (!cb_info->users)
-		cb_info->serv = NULL;
-	svc_put(serv);
+	if (!cb_info->users) {
+		svc_set_num_threads(cb_info->serv, NULL, 0);
+		svc_destroy(&cb_info->serv);
+	}
 err_create:
 	mutex_unlock(&nfs_callback_mutex);
 	return ret;
@@ -271,11 +272,9 @@ void nfs_callback_down(int minorversion, struct net *net)
 	nfs_callback_down_net(minorversion, serv, net);
 	cb_info->users--;
 	if (cb_info->users == 0) {
-		svc_get(serv);
 		svc_set_num_threads(serv, NULL, 0);
-		svc_put(serv);
 		dprintk("nfs_callback_down: service destroyed\n");
-		cb_info->serv = NULL;
+		svc_destroy(&cb_info->serv);
 	}
 	mutex_unlock(&nfs_callback_mutex);
 }
diff --git a/fs/nfsd/netns.h b/fs/nfsd/netns.h
index ec49b200b797..7fc39aca5363 100644
--- a/fs/nfsd/netns.h
+++ b/fs/nfsd/netns.h
@@ -124,13 +124,6 @@ struct nfsd_net {
 	u32 clverifier_counter;
 
 	struct svc_serv *nfsd_serv;
-	/* When a listening socket is added to nfsd, keep_active is set
-	 * and this justifies a reference on nfsd_serv.  This stops
-	 * nfsd_serv from being freed.  When the number of threads is
-	 * set, keep_active is cleared and the reference is dropped.  So
-	 * when the last thread exits, the service will be destroyed.
-	 */
-	int keep_active;
 
 	/*
 	 * clientid and stateid data for construction of net unique COPY
diff --git a/fs/nfsd/nfsctl.c b/fs/nfsd/nfsctl.c
index 8f644f1d157c..86cab5281fd2 100644
--- a/fs/nfsd/nfsctl.c
+++ b/fs/nfsd/nfsctl.c
@@ -705,13 +705,10 @@ static ssize_t __write_ports_addfd(char *buf, struct net *net, const struct cred
 
 	err = svc_addsock(nn->nfsd_serv, net, fd, buf, SIMPLE_TRANSACTION_LIMIT, cred);
 
-	if (err < 0 && !nn->nfsd_serv->sv_nrthreads && !nn->keep_active)
+	if (!nn->nfsd_serv->sv_nrthreads &&
+	    list_empty(&nn->nfsd_serv->sv_permsocks))
 		nfsd_last_thread(net);
-	else if (err >= 0 &&
-		 !nn->nfsd_serv->sv_nrthreads && !xchg(&nn->keep_active, 1))
-		svc_get(nn->nfsd_serv);
 
-	nfsd_put(net);
 	return err;
 }
 
@@ -747,10 +744,6 @@ static ssize_t __write_ports_addxprt(char *buf, struct net *net, const struct cr
 	if (err < 0 && err != -EAFNOSUPPORT)
 		goto out_close;
 
-	if (!nn->nfsd_serv->sv_nrthreads && !xchg(&nn->keep_active, 1))
-		svc_get(nn->nfsd_serv);
-
-	nfsd_put(net);
 	return 0;
 out_close:
 	xprt = svc_find_xprt(nn->nfsd_serv, transport, net, PF_INET, port);
@@ -759,10 +752,10 @@ static ssize_t __write_ports_addxprt(char *buf, struct net *net, const struct cr
 		svc_xprt_put(xprt);
 	}
 out_err:
-	if (!nn->nfsd_serv->sv_nrthreads && !nn->keep_active)
+	if (!nn->nfsd_serv->sv_nrthreads &&
+	    list_empty(&nn->nfsd_serv->sv_permsocks))
 		nfsd_last_thread(net);
 
-	nfsd_put(net);
 	return err;
 }
 
diff --git a/fs/nfsd/nfsd.h b/fs/nfsd/nfsd.h
index 3286ffacbc56..9ed0e08d16c2 100644
--- a/fs/nfsd/nfsd.h
+++ b/fs/nfsd/nfsd.h
@@ -113,13 +113,6 @@ int		nfsd_pool_stats_open(struct inode *, struct file *);
 int		nfsd_pool_stats_release(struct inode *, struct file *);
 void		nfsd_shutdown_threads(struct net *net);
 
-static inline void nfsd_put(struct net *net)
-{
-	struct nfsd_net *nn = net_generic(net, nfsd_net_id);
-
-	svc_put(nn->nfsd_serv);
-}
-
 bool		i_am_nfsd(void);
 
 struct nfsdfs_client {
diff --git a/fs/nfsd/nfssvc.c b/fs/nfsd/nfssvc.c
index 203e1cfc1cad..61a1d966ca48 100644
--- a/fs/nfsd/nfssvc.c
+++ b/fs/nfsd/nfssvc.c
@@ -59,15 +59,6 @@ static __be32			nfsd_init_request(struct svc_rqst *,
  * nfsd_mutex protects nn->nfsd_serv -- both the pointer itself and some members
  * of the svc_serv struct such as ->sv_temp_socks and ->sv_permsocks.
  *
- * If (out side the lock) nn->nfsd_serv is non-NULL, then it must point to a
- * properly initialised 'struct svc_serv' with ->sv_nrthreads > 0 (unless
- * nn->keep_active is set).  That number of nfsd threads must
- * exist and each must be listed in ->sp_all_threads in some entry of
- * ->sv_pools[].
- *
- * Each active thread holds a counted reference on nn->nfsd_serv, as does
- * the nn->keep_active flag and various transient calls to svc_get().
- *
  * Finally, the nfsd_mutex also protects some of the global variables that are
  * accessed when nfsd starts and that are settable via the write_* routines in
  * nfsctl.c. In particular:
@@ -573,6 +564,7 @@ void nfsd_last_thread(struct net *net)
 
 	nfsd_shutdown_net(net);
 	nfsd_export_flush(net);
+	svc_destroy(&serv);
 }
 
 void nfsd_reset_versions(struct nfsd_net *nn)
@@ -647,11 +639,9 @@ void nfsd_shutdown_threads(struct net *net)
 		return;
 	}
 
-	svc_get(serv);
 	/* Kill outstanding nfsd threads */
 	svc_set_num_threads(serv, NULL, 0);
 	nfsd_last_thread(net);
-	svc_put(serv);
 	mutex_unlock(&nfsd_mutex);
 }
 
@@ -667,10 +657,9 @@ int nfsd_create_serv(struct net *net)
 	struct svc_serv *serv;
 
 	WARN_ON(!mutex_is_locked(&nfsd_mutex));
-	if (nn->nfsd_serv) {
-		svc_get(nn->nfsd_serv);
+	if (nn->nfsd_serv)
 		return 0;
-	}
+
 	if (nfsd_max_blksize == 0)
 		nfsd_max_blksize = nfsd_get_default_max_blksize();
 	nfsd_reset_versions(nn);
@@ -681,7 +670,7 @@ int nfsd_create_serv(struct net *net)
 	serv->sv_maxconn = nn->max_connections;
 	error = svc_bind(serv, net);
 	if (error < 0) {
-		svc_put(serv);
+		svc_destroy(&serv);
 		return error;
 	}
 	spin_lock(&nfsd_notifier_lock);
@@ -764,7 +753,6 @@ int nfsd_set_nrthreads(int n, int *nthreads, struct net *net)
 		nthreads[0] = 1;
 
 	/* apply the new numbers */
-	svc_get(nn->nfsd_serv);
 	for (i = 0; i < n; i++) {
 		err = svc_set_num_threads(nn->nfsd_serv,
 					  &nn->nfsd_serv->sv_pools[i],
@@ -772,7 +760,6 @@ int nfsd_set_nrthreads(int n, int *nthreads, struct net *net)
 		if (err)
 			break;
 	}
-	svc_put(nn->nfsd_serv);
 	return err;
 }
 
@@ -814,13 +801,8 @@ nfsd_svc(int nrservs, struct net *net, const struct cred *cred)
 		goto out_put;
 	error = serv->sv_nrthreads;
 out_put:
-	/* Threads now hold service active */
-	if (xchg(&nn->keep_active, 0))
-		svc_put(serv);
-
 	if (serv->sv_nrthreads == 0)
 		nfsd_last_thread(net);
-	svc_put(serv);
 out:
 	mutex_unlock(&nfsd_mutex);
 	return error;
diff --git a/include/linux/sunrpc/svc.h b/include/linux/sunrpc/svc.h
index 11acad6988a2..e50e12ed4d12 100644
--- a/include/linux/sunrpc/svc.h
+++ b/include/linux/sunrpc/svc.h
@@ -69,7 +69,6 @@ struct svc_serv {
 	struct svc_program *	sv_program;	/* RPC program */
 	struct svc_stat *	sv_stats;	/* RPC statistics */
 	spinlock_t		sv_lock;
-	struct kref		sv_refcnt;
 	unsigned int		sv_nrthreads;	/* # of server threads */
 	unsigned int		sv_maxconn;	/* max connections allowed or
 						 * '0' causing max to be based
@@ -97,31 +96,7 @@ struct svc_serv {
 #endif /* CONFIG_SUNRPC_BACKCHANNEL */
 };
 
-/**
- * svc_get() - increment reference count on a SUNRPC serv
- * @serv:  the svc_serv to have count incremented
- *
- * Returns: the svc_serv that was passed in.
- */
-static inline struct svc_serv *svc_get(struct svc_serv *serv)
-{
-	kref_get(&serv->sv_refcnt);
-	return serv;
-}
-
-void svc_destroy(struct kref *);
-
-/**
- * svc_put - decrement reference count on a SUNRPC serv
- * @serv:  the svc_serv to have count decremented
- *
- * When the reference count reaches zero, svc_destroy()
- * is called to clean up and free the serv.
- */
-static inline void svc_put(struct svc_serv *serv)
-{
-	kref_put(&serv->sv_refcnt, svc_destroy);
-}
+void svc_destroy(struct svc_serv **svcp);
 
 /*
  * Maximum payload size supported by a kernel RPC server.
diff --git a/net/sunrpc/svc.c b/net/sunrpc/svc.c
index 3f2ea7a0496f..0b5889e058c5 100644
--- a/net/sunrpc/svc.c
+++ b/net/sunrpc/svc.c
@@ -463,7 +463,6 @@ __svc_create(struct svc_program *prog, unsigned int bufsize, int npools,
 		return NULL;
 	serv->sv_name      = prog->pg_name;
 	serv->sv_program   = prog;
-	kref_init(&serv->sv_refcnt);
 	serv->sv_stats     = prog->pg_stats;
 	if (bufsize > RPCSVC_MAXPAYLOAD)
 		bufsize = RPCSVC_MAXPAYLOAD;
@@ -564,11 +563,13 @@ EXPORT_SYMBOL_GPL(svc_create_pooled);
  * protect sv_permsocks and sv_tempsocks.
  */
 void
-svc_destroy(struct kref *ref)
+svc_destroy(struct svc_serv **servp)
 {
-	struct svc_serv *serv = container_of(ref, struct svc_serv, sv_refcnt);
+	struct svc_serv *serv = *servp;
 	unsigned int i;
 
+	*servp = NULL;
+
 	dprintk("svc: svc_destroy(%s)\n", serv->sv_program->pg_name);
 	timer_shutdown_sync(&serv->sv_temptimer);
 
@@ -675,7 +676,6 @@ svc_prepare_thread(struct svc_serv *serv, struct svc_pool *pool, int node)
 	if (!rqstp)
 		return ERR_PTR(-ENOMEM);
 
-	svc_get(serv);
 	spin_lock_bh(&serv->sv_lock);
 	serv->sv_nrthreads += 1;
 	spin_unlock_bh(&serv->sv_lock);
@@ -935,11 +935,6 @@ svc_exit_thread(struct svc_rqst *rqstp)
 
 	svc_rqst_free(rqstp);
 
-	svc_put(serv);
-	/* That svc_put() cannot be the last, because the thread
-	 * waiting for SP_VICTIM_REMAINS to clear must hold
-	 * a reference. So it is still safe to access pool.
-	 */
 	clear_and_wake_up_bit(SP_VICTIM_REMAINS, &pool->sp_flags);
 }
 EXPORT_SYMBOL_GPL(svc_exit_thread);
-- 
2.42.0


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH 5/5] nfsd: rename nfsd_last_thread() to nfsd_destroy_serv()
  2023-10-30  1:08 [PATCH 0/5] sunrpc: not refcounting svc_serv NeilBrown
                   ` (3 preceding siblings ...)
  2023-10-30  1:08 ` [PATCH 4/5] SUNRPC: discard sv_refcnt, and svc_get/svc_put NeilBrown
@ 2023-10-30  1:08 ` NeilBrown
  2023-10-30 13:15   ` Jeff Layton
  2023-10-31 15:39 ` [PATCH 0/5] sunrpc: not refcounting svc_serv Chuck Lever III
  5 siblings, 1 reply; 21+ messages in thread
From: NeilBrown @ 2023-10-30  1:08 UTC (permalink / raw)
  To: Chuck Lever, Jeff Layton
  Cc: linux-nfs, Olga Kornievskaia, Dai Ngo, Tom Talpey

As this function now destroys the svc_serv, this is a better name.

Signed-off-by: NeilBrown <neilb@suse.de>
---
 fs/nfsd/nfsctl.c | 4 ++--
 fs/nfsd/nfsd.h   | 2 +-
 fs/nfsd/nfssvc.c | 8 ++++----
 3 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/fs/nfsd/nfsctl.c b/fs/nfsd/nfsctl.c
index 86cab5281fd2..d603e672d568 100644
--- a/fs/nfsd/nfsctl.c
+++ b/fs/nfsd/nfsctl.c
@@ -707,7 +707,7 @@ static ssize_t __write_ports_addfd(char *buf, struct net *net, const struct cred
 
 	if (!nn->nfsd_serv->sv_nrthreads &&
 	    list_empty(&nn->nfsd_serv->sv_permsocks))
-		nfsd_last_thread(net);
+		nfsd_destroy_serv(net);
 
 	return err;
 }
@@ -754,7 +754,7 @@ static ssize_t __write_ports_addxprt(char *buf, struct net *net, const struct cr
 out_err:
 	if (!nn->nfsd_serv->sv_nrthreads &&
 	    list_empty(&nn->nfsd_serv->sv_permsocks))
-		nfsd_last_thread(net);
+		nfsd_destroy_serv(net);
 
 	return err;
 }
diff --git a/fs/nfsd/nfsd.h b/fs/nfsd/nfsd.h
index 9ed0e08d16c2..304e9728b929 100644
--- a/fs/nfsd/nfsd.h
+++ b/fs/nfsd/nfsd.h
@@ -148,7 +148,7 @@ int nfsd_vers(struct nfsd_net *nn, int vers, enum vers_op change);
 int nfsd_minorversion(struct nfsd_net *nn, u32 minorversion, enum vers_op change);
 void nfsd_reset_versions(struct nfsd_net *nn);
 int nfsd_create_serv(struct net *net);
-void nfsd_last_thread(struct net *net);
+void nfsd_destroy_serv(struct net *net);
 
 extern int nfsd_max_blksize;
 
diff --git a/fs/nfsd/nfssvc.c b/fs/nfsd/nfssvc.c
index 61a1d966ca48..88c2e2c94829 100644
--- a/fs/nfsd/nfssvc.c
+++ b/fs/nfsd/nfssvc.c
@@ -533,7 +533,7 @@ static struct notifier_block nfsd_inet6addr_notifier = {
 /* Only used under nfsd_mutex, so this atomic may be overkill: */
 static atomic_t nfsd_notifier_refcount = ATOMIC_INIT(0);
 
-void nfsd_last_thread(struct net *net)
+void nfsd_destroy_serv(struct net *net)
 {
 	struct nfsd_net *nn = net_generic(net, nfsd_net_id);
 	struct svc_serv *serv = nn->nfsd_serv;
@@ -555,7 +555,7 @@ void nfsd_last_thread(struct net *net)
 	/*
 	 * write_ports can create the server without actually starting
 	 * any threads--if we get shut down before any threads are
-	 * started, then nfsd_last_thread will be run before any of this
+	 * started, then nfsd_destroy_serv will be run before any of this
 	 * other initialization has been done except the rpcb information.
 	 */
 	svc_rpcb_cleanup(serv, net);
@@ -641,7 +641,7 @@ void nfsd_shutdown_threads(struct net *net)
 
 	/* Kill outstanding nfsd threads */
 	svc_set_num_threads(serv, NULL, 0);
-	nfsd_last_thread(net);
+	nfsd_destroy_serv(net);
 	mutex_unlock(&nfsd_mutex);
 }
 
@@ -802,7 +802,7 @@ nfsd_svc(int nrservs, struct net *net, const struct cred *cred)
 	error = serv->sv_nrthreads;
 out_put:
 	if (serv->sv_nrthreads == 0)
-		nfsd_last_thread(net);
+		nfsd_destroy_serv(net);
 out:
 	mutex_unlock(&nfsd_mutex);
 	return error;
-- 
2.42.0


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* Re: [PATCH 2/5] svc: don't hold reference for poolstats, only mutex.
  2023-10-30  1:08 ` [PATCH 2/5] svc: don't hold reference for poolstats, only mutex NeilBrown
@ 2023-10-30 13:01   ` Jeff Layton
  2023-10-30 21:48     ` NeilBrown
  0 siblings, 1 reply; 21+ messages in thread
From: Jeff Layton @ 2023-10-30 13:01 UTC (permalink / raw)
  To: NeilBrown, Chuck Lever; +Cc: linux-nfs, Olga Kornievskaia, Dai Ngo, Tom Talpey

On Mon, 2023-10-30 at 12:08 +1100, NeilBrown wrote:
> A future patch will remove refcounting on svc_serv as it is of little
> use.
> It is currently used to keep the svc around while the pool_stats file is
> open.
> Change this to get the pointer, protected by the mutex, only in
> seq_start, and then release the mutex in seq_stop.
> This means that if the nfsd server is stopped and restarted while the
> pool_stats file is open, then some pool stats info could be from the
> first instance and some from the second.  This might appear odd, but is
> unlikely to be a problem in practice.
> 
> Signed-off-by: NeilBrown <neilb@suse.de>
> ---
>  fs/nfsd/nfsctl.c           |  2 +-
>  fs/nfsd/nfssvc.c           | 30 ++++++++---------------
>  include/linux/sunrpc/svc.h |  5 +++-
>  net/sunrpc/svc_xprt.c      | 49 ++++++++++++++++++++++++++++++++------
>  4 files changed, 57 insertions(+), 29 deletions(-)
> 
> diff --git a/fs/nfsd/nfsctl.c b/fs/nfsd/nfsctl.c
> index 79efb1075f38..d78ae4452946 100644
> --- a/fs/nfsd/nfsctl.c
> +++ b/fs/nfsd/nfsctl.c
> @@ -179,7 +179,7 @@ static const struct file_operations pool_stats_operations = {
>  	.open		= nfsd_pool_stats_open,
>  	.read		= seq_read,
>  	.llseek		= seq_lseek,
> -	.release	= nfsd_pool_stats_release,
> +	.release	= svc_pool_stats_release,
>  };
>  
>  DEFINE_SHOW_ATTRIBUTE(nfsd_reply_cache_stats);
> diff --git a/fs/nfsd/nfssvc.c b/fs/nfsd/nfssvc.c
> index 6c968c02cc29..203e1cfc1cad 100644
> --- a/fs/nfsd/nfssvc.c
> +++ b/fs/nfsd/nfssvc.c
> @@ -1072,30 +1072,20 @@ bool nfssvc_encode_voidres(struct svc_rqst *rqstp, struct xdr_stream *xdr)
>  	return true;
>  }
>  
> -int nfsd_pool_stats_open(struct inode *inode, struct file *file)
> +static struct svc_serv *nfsd_get_serv(struct seq_file *s, bool start)
>  {
> -	int ret;
> -	struct nfsd_net *nn = net_generic(inode->i_sb->s_fs_info, nfsd_net_id);
> -
> -	mutex_lock(&nfsd_mutex);
> -	if (nn->nfsd_serv == NULL) {
> +	struct nfsd_net *nn = net_generic(file_inode(s->file)->i_sb->s_fs_info,
> +					  nfsd_net_id);
> +	if (start) {
> +		mutex_lock(&nfsd_mutex);
> +		return nn->nfsd_serv;
> +	} else {
>  		mutex_unlock(&nfsd_mutex);
> -		return -ENODEV;
> +		return NULL;
>  	}
> -	svc_get(nn->nfsd_serv);
> -	ret = svc_pool_stats_open(nn->nfsd_serv, file);
> -	mutex_unlock(&nfsd_mutex);
> -	return ret;
>  }
>  
> -int nfsd_pool_stats_release(struct inode *inode, struct file *file)
> +int nfsd_pool_stats_open(struct inode *inode, struct file *file)
>  {
> -	struct seq_file *seq = file->private_data;
> -	struct svc_serv *serv = seq->private;
> -	int ret = seq_release(inode, file);
> -
> -	mutex_lock(&nfsd_mutex);
> -	svc_put(serv);
> -	mutex_unlock(&nfsd_mutex);
> -	return ret;
> +	return svc_pool_stats_open(nfsd_get_serv, file);
>  }
> diff --git a/include/linux/sunrpc/svc.h b/include/linux/sunrpc/svc.h
> index b10f987509cc..11acad6988a2 100644
> --- a/include/linux/sunrpc/svc.h
> +++ b/include/linux/sunrpc/svc.h
> @@ -433,7 +433,10 @@ void		   svc_exit_thread(struct svc_rqst *);
>  struct svc_serv *  svc_create_pooled(struct svc_program *, unsigned int,
>  				     int (*threadfn)(void *data));
>  int		   svc_set_num_threads(struct svc_serv *, struct svc_pool *, int);
> -int		   svc_pool_stats_open(struct svc_serv *serv, struct file *file);
> +int		   svc_pool_stats_open(struct svc_serv *(*get_serv)(struct seq_file *, bool),
> +				       struct file *file);
> +int		   svc_pool_stats_release(struct inode *inode,
> +					  struct file *file);
>  void		   svc_process(struct svc_rqst *rqstp);
>  void		   svc_process_bc(struct rpc_rqst *req, struct svc_rqst *rqstp);
>  int		   svc_register(const struct svc_serv *, struct net *, const int,
> diff --git a/net/sunrpc/svc_xprt.c b/net/sunrpc/svc_xprt.c
> index fee83d1024bc..2f99f7475b7b 100644
> --- a/net/sunrpc/svc_xprt.c
> +++ b/net/sunrpc/svc_xprt.c
> @@ -1366,26 +1366,38 @@ EXPORT_SYMBOL_GPL(svc_xprt_names);
>  
>  /*----------------------------------------------------------------------------*/
>  
> +struct pool_private {
> +	struct svc_serv *(*get_serv)(struct seq_file *, bool);

This bool is pretty ugly. I think I'd rather see two operations here
(get_serv/put_serv). Also, this could use a kerneldoc comment.

> +	struct svc_serv *serv;
> +};
> +
>  static void *svc_pool_stats_start(struct seq_file *m, loff_t *pos)
>  {
>  	unsigned int pidx = (unsigned int)*pos;
> -	struct svc_serv *serv = m->private;
> +	struct pool_private *pp = m->private;
>  
>  	dprintk("svc_pool_stats_start, *pidx=%u\n", pidx);
>  
> +	pp->serv = pp->get_serv(m, true);
> +
>  	if (!pidx)
>  		return SEQ_START_TOKEN;
> -	return (pidx > serv->sv_nrpools ? NULL : &serv->sv_pools[pidx-1]);
> +	if (!pp->serv)
> +		return NULL;
> +	return (pidx > pp->serv->sv_nrpools ? NULL : &pp->serv->sv_pools[pidx-1]);
>  }
>  
>  static void *svc_pool_stats_next(struct seq_file *m, void *p, loff_t *pos)
>  {
>  	struct svc_pool *pool = p;
> -	struct svc_serv *serv = m->private;
> +	struct pool_private *pp = m->private;
> +	struct svc_serv *serv = pp->serv;
>  
>  	dprintk("svc_pool_stats_next, *pos=%llu\n", *pos);
>  
> -	if (p == SEQ_START_TOKEN) {
> +	if (!serv) {
> +		pool = NULL;
> +	} else if (p == SEQ_START_TOKEN) {
>  		pool = &serv->sv_pools[0];
>  	} else {
>  		unsigned int pidx = (pool - &serv->sv_pools[0]);
> @@ -1400,6 +1412,9 @@ static void *svc_pool_stats_next(struct seq_file *m, void *p, loff_t *pos)
>  
>  static void svc_pool_stats_stop(struct seq_file *m, void *p)
>  {
> +	struct pool_private *pp = m->private;
> +
> +	pp->get_serv(m, false);
>  }
>  
>  static int svc_pool_stats_show(struct seq_file *m, void *p)
> @@ -1427,15 +1442,35 @@ static const struct seq_operations svc_pool_stats_seq_ops = {
>  	.show	= svc_pool_stats_show,
>  };
>  
> -int svc_pool_stats_open(struct svc_serv *serv, struct file *file)
> +int svc_pool_stats_open(struct svc_serv *(*get_serv)(struct seq_file *, bool),
> +			struct file *file)
>  {
> +	struct pool_private *pp;
>  	int err;
>  
> +	pp = kmalloc(sizeof(*pp), GFP_KERNEL);
> +	if (!pp)
> +		return -ENOMEM;
> +
>  	err = seq_open(file, &svc_pool_stats_seq_ops);
> -	if (!err)
> -		((struct seq_file *) file->private_data)->private = serv;
> +	if (!err) {
> +		pp->get_serv = get_serv;
> +		((struct seq_file *) file->private_data)->private = pp;
> +	} else
> +		kfree(pp);
> +
>  	return err;
>  }
>  EXPORT_SYMBOL(svc_pool_stats_open);
>  
> +int svc_pool_stats_release(struct inode *inode, struct file *file)
> +{
> +	struct seq_file *seq = file->private_data;
> +
> +	kfree(seq->private);
> +	seq->private = NULL;
> +	return seq_release(inode, file);
> +}
> +EXPORT_SYMBOL(svc_pool_stats_release);
> +
>  /*----------------------------------------------------------------------------*/

-- 
Jeff Layton <jlayton@kernel.org>

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH 3/5] nfsd: hold nfsd_mutex across entire netlink operation
  2023-10-30  1:08 ` [PATCH 3/5] nfsd: hold nfsd_mutex across entire netlink operation NeilBrown
@ 2023-10-30 13:03   ` Jeff Layton
  2023-10-30 13:27     ` Lorenzo Bianconi
  0 siblings, 1 reply; 21+ messages in thread
From: Jeff Layton @ 2023-10-30 13:03 UTC (permalink / raw)
  To: NeilBrown, Chuck Lever
  Cc: linux-nfs, Olga Kornievskaia, Dai Ngo, Tom Talpey, Lorenzo Bianconi

On Mon, 2023-10-30 at 12:08 +1100, NeilBrown wrote:
> Rather than using svc_get() and svc_put() to hold a stable reference to
> the nfsd_svc for netlink lookups, simply hold the mutex for the entire
> time.
> 
> The "entire" time isn't very long, and the mutex is not often contended.
> 
> This makes way for us to remove the refcount on svc, which is more
> confusing than useful.
> 
> Signed-off-by: NeilBrown <neilb@suse.de>
> ---
>  fs/nfsd/nfsctl.c | 9 +++------
>  1 file changed, 3 insertions(+), 6 deletions(-)
> 
> diff --git a/fs/nfsd/nfsctl.c b/fs/nfsd/nfsctl.c
> index d78ae4452946..8f644f1d157c 100644
> --- a/fs/nfsd/nfsctl.c
> +++ b/fs/nfsd/nfsctl.c
> @@ -1515,11 +1515,10 @@ int nfsd_nl_rpc_status_get_start(struct netlink_callback *cb)
>  	int ret = -ENODEV;
>  
>  	mutex_lock(&nfsd_mutex);
> -	if (nn->nfsd_serv) {
> -		svc_get(nn->nfsd_serv);
> +	if (nn->nfsd_serv)
>  		ret = 0;
> -	}
> -	mutex_unlock(&nfsd_mutex);
> +	else
> +		mutex_unlock(&nfsd_mutex);
>  
>  	return ret;
>  }
> @@ -1691,8 +1690,6 @@ int nfsd_nl_rpc_status_get_dumpit(struct sk_buff *skb,
>   */
>  int nfsd_nl_rpc_status_get_done(struct netlink_callback *cb)
>  {
> -	mutex_lock(&nfsd_mutex);
> -	nfsd_put(sock_net(cb->skb->sk));
>  	mutex_unlock(&nfsd_mutex);
>  
>  	return 0;

(cc'ing Lorenzo since he wrote this)

I think Lorenzo did it this way originally, and I convinced him to take
a reference instead. This should be fine though.

Reviewed-by: Jeff Layton <jlayton@kernel.org>

* Re: [PATCH 1/5] nfsd: call nfsd_last_thread() before final nfsd_put()
  2023-10-30  1:08 ` [PATCH 1/5] nfsd: call nfsd_last_thread() before final nfsd_put() NeilBrown
@ 2023-10-30 13:15   ` Jeff Layton
  2023-10-30 15:52     ` Chuck Lever
  0 siblings, 1 reply; 21+ messages in thread
From: Jeff Layton @ 2023-10-30 13:15 UTC (permalink / raw)
  To: NeilBrown, Chuck Lever; +Cc: linux-nfs, Olga Kornievskaia, Dai Ngo, Tom Talpey

On Mon, 2023-10-30 at 12:08 +1100, NeilBrown wrote:
> If write_ports_addfd or write_ports_addxprt fail, they call nfsd_put()
> without calling nfsd_last_thread().  This leaves nn->nfsd_serv pointing
> to a structure that has been freed.
> 
> So export nfsd_last_thread() and call it when the nfsd_serv is about to
> be destroyed.
> 
> Signed-off-by: NeilBrown <neilb@suse.de>
> ---
>  fs/nfsd/nfsctl.c | 9 +++++++--
>  fs/nfsd/nfsd.h   | 1 +
>  fs/nfsd/nfssvc.c | 2 +-
>  3 files changed, 9 insertions(+), 3 deletions(-)
> 
> diff --git a/fs/nfsd/nfsctl.c b/fs/nfsd/nfsctl.c
> index 739ed5bf71cd..79efb1075f38 100644
> --- a/fs/nfsd/nfsctl.c
> +++ b/fs/nfsd/nfsctl.c
> @@ -705,8 +705,10 @@ static ssize_t __write_ports_addfd(char *buf, struct net *net, const struct cred
>  
>  	err = svc_addsock(nn->nfsd_serv, net, fd, buf, SIMPLE_TRANSACTION_LIMIT, cred);
>  
> -	if (err >= 0 &&
> -	    !nn->nfsd_serv->sv_nrthreads && !xchg(&nn->keep_active, 1))
> +	if (err < 0 && !nn->nfsd_serv->sv_nrthreads && !nn->keep_active)
> +		nfsd_last_thread(net);
> +	else if (err >= 0 &&
> +		 !nn->nfsd_serv->sv_nrthreads && !xchg(&nn->keep_active, 1))
>  		svc_get(nn->nfsd_serv);
>  
>  	nfsd_put(net);
> @@ -757,6 +759,9 @@ static ssize_t __write_ports_addxprt(char *buf, struct net *net, const struct cr
>  		svc_xprt_put(xprt);
>  	}
>  out_err:
> +	if (!nn->nfsd_serv->sv_nrthreads && !nn->keep_active)
> +		nfsd_last_thread(net);
> +
>  	nfsd_put(net);
>  	return err;
>  }
> diff --git a/fs/nfsd/nfsd.h b/fs/nfsd/nfsd.h
> index f5ff42f41ee7..3286ffacbc56 100644
> --- a/fs/nfsd/nfsd.h
> +++ b/fs/nfsd/nfsd.h
> @@ -155,6 +155,7 @@ int nfsd_vers(struct nfsd_net *nn, int vers, enum vers_op change);
>  int nfsd_minorversion(struct nfsd_net *nn, u32 minorversion, enum vers_op change);
>  void nfsd_reset_versions(struct nfsd_net *nn);
>  int nfsd_create_serv(struct net *net);
> +void nfsd_last_thread(struct net *net);
>  
>  extern int nfsd_max_blksize;
>  
> diff --git a/fs/nfsd/nfssvc.c b/fs/nfsd/nfssvc.c
> index d6122bb2d167..6c968c02cc29 100644
> --- a/fs/nfsd/nfssvc.c
> +++ b/fs/nfsd/nfssvc.c
> @@ -542,7 +542,7 @@ static struct notifier_block nfsd_inet6addr_notifier = {
>  /* Only used under nfsd_mutex, so this atomic may be overkill: */
>  static atomic_t nfsd_notifier_refcount = ATOMIC_INIT(0);
>  
> -static void nfsd_last_thread(struct net *net)
> +void nfsd_last_thread(struct net *net)
>  {
>  	struct nfsd_net *nn = net_generic(net, nfsd_net_id);
>  	struct svc_serv *serv = nn->nfsd_serv;

This patch should fix the problem that I was seeing with write_ports,
but it won't fix the hinky error cleanup in nfsd_svc. It looks like that
does get fixed in patch #4 though, so I'm not too concerned.

Reviewed-by: Jeff Layton <jlayton@kernel.org>

* Re: [PATCH 4/5] SUNRPC: discard sv_refcnt, and svc_get/svc_put
  2023-10-30  1:08 ` [PATCH 4/5] SUNRPC: discard sv_refcnt, and svc_get/svc_put NeilBrown
@ 2023-10-30 13:15   ` Jeff Layton
  0 siblings, 0 replies; 21+ messages in thread
From: Jeff Layton @ 2023-10-30 13:15 UTC (permalink / raw)
  To: NeilBrown, Chuck Lever; +Cc: linux-nfs, Olga Kornievskaia, Dai Ngo, Tom Talpey

On Mon, 2023-10-30 at 12:08 +1100, NeilBrown wrote:
> sv_refcnt is no longer useful.
> lockd and nfs-cb only ever have the svc active when there are a non-zero
> number of threads, so sv_refcnt mirrors sv_nrthreads.
> 
> nfsd also keeps the svc active between when a socket is added and when
> the first thread is started, but we don't really need a refcount for
> that.  We can simply not destroy the svc after adding a socket.
> 
> So remove sv_refcnt and the get/put functions.
> Instead of a final call to svc_put(), call svc_destroy() instead.
> This is changed to also store NULL in the passed-in pointer to make it
> easier to avoid use-after-free situations.
> 
> Signed-off-by: NeilBrown <neilb@suse.de>
> ---
>  fs/lockd/svc.c             | 10 ++++------
>  fs/nfs/callback.c          | 13 ++++++-------
>  fs/nfsd/netns.h            |  7 -------
>  fs/nfsd/nfsctl.c           | 15 ++++-----------
>  fs/nfsd/nfsd.h             |  7 -------
>  fs/nfsd/nfssvc.c           | 26 ++++----------------------
>  include/linux/sunrpc/svc.h | 27 +--------------------------
>  net/sunrpc/svc.c           | 13 ++++---------
>  8 files changed, 23 insertions(+), 95 deletions(-)
> 
> diff --git a/fs/lockd/svc.c b/fs/lockd/svc.c
> index 81be07c1d3d1..0d6cb3fdc0e1 100644
> --- a/fs/lockd/svc.c
> +++ b/fs/lockd/svc.c
> @@ -345,10 +345,10 @@ static int lockd_get(void)
>  
>  	serv->sv_maxconn = nlm_max_connections;
>  	error = svc_set_num_threads(serv, NULL, 1);
> -	/* The thread now holds the only reference */
> -	svc_put(serv);
> -	if (error < 0)
> +	if (error < 0) {
> +		svc_destroy(&serv);
>  		return error;
> +	}
>  
>  	nlmsvc_serv = serv;
>  	register_inetaddr_notifier(&lockd_inetaddr_notifier);
> @@ -372,11 +372,9 @@ static void lockd_put(void)
>  	unregister_inet6addr_notifier(&lockd_inet6addr_notifier);
>  #endif
>  
> -	svc_get(nlmsvc_serv);
>  	svc_set_num_threads(nlmsvc_serv, NULL, 0);
> -	svc_put(nlmsvc_serv);
>  	timer_delete_sync(&nlmsvc_retry);
> -	nlmsvc_serv = NULL;
> +	svc_destroy(&nlmsvc_serv);
>  	dprintk("lockd_down: service destroyed\n");
>  }
>  
> diff --git a/fs/nfs/callback.c b/fs/nfs/callback.c
> index 4ffa1f469e90..760d27dd7225 100644
> --- a/fs/nfs/callback.c
> +++ b/fs/nfs/callback.c
> @@ -187,7 +187,7 @@ static struct svc_serv *nfs_callback_create_svc(int minorversion)
>  	 * Check whether we're already up and running.
>  	 */
>  	if (cb_info->serv)
> -		return svc_get(cb_info->serv);
> +		return cb_info->serv;
>  
>  	/*
>  	 * Sanity check: if there's no task,
> @@ -245,9 +245,10 @@ int nfs_callback_up(u32 minorversion, struct rpc_xprt *xprt)
>  
>  	cb_info->users++;
>  err_net:
> -	if (!cb_info->users)
> -		cb_info->serv = NULL;
> -	svc_put(serv);
> +	if (!cb_info->users) {
> +		svc_set_num_threads(cb_info->serv, NULL, 0);
> +		svc_destroy(&cb_info->serv);
> +	}
>  err_create:
>  	mutex_unlock(&nfs_callback_mutex);
>  	return ret;
> @@ -271,11 +272,9 @@ void nfs_callback_down(int minorversion, struct net *net)
>  	nfs_callback_down_net(minorversion, serv, net);
>  	cb_info->users--;
>  	if (cb_info->users == 0) {
> -		svc_get(serv);
>  		svc_set_num_threads(serv, NULL, 0);
> -		svc_put(serv);
>  		dprintk("nfs_callback_down: service destroyed\n");
> -		cb_info->serv = NULL;
> +		svc_destroy(&cb_info->serv);
>  	}
>  	mutex_unlock(&nfs_callback_mutex);
>  }
> diff --git a/fs/nfsd/netns.h b/fs/nfsd/netns.h
> index ec49b200b797..7fc39aca5363 100644
> --- a/fs/nfsd/netns.h
> +++ b/fs/nfsd/netns.h
> @@ -124,13 +124,6 @@ struct nfsd_net {
>  	u32 clverifier_counter;
>  
>  	struct svc_serv *nfsd_serv;
> -	/* When a listening socket is added to nfsd, keep_active is set
> -	 * and this justifies a reference on nfsd_serv.  This stops
> -	 * nfsd_serv from being freed.  When the number of threads is
> -	 * set, keep_active is cleared and the reference is dropped.  So
> -	 * when the last thread exits, the service will be destroyed.
> -	 */
> -	int keep_active;
>  
>  	/*
>  	 * clientid and stateid data for construction of net unique COPY
> diff --git a/fs/nfsd/nfsctl.c b/fs/nfsd/nfsctl.c
> index 8f644f1d157c..86cab5281fd2 100644
> --- a/fs/nfsd/nfsctl.c
> +++ b/fs/nfsd/nfsctl.c
> @@ -705,13 +705,10 @@ static ssize_t __write_ports_addfd(char *buf, struct net *net, const struct cred
>  
>  	err = svc_addsock(nn->nfsd_serv, net, fd, buf, SIMPLE_TRANSACTION_LIMIT, cred);
>  
> -	if (err < 0 && !nn->nfsd_serv->sv_nrthreads && !nn->keep_active)
> +	if (!nn->nfsd_serv->sv_nrthreads &&
> +	    list_empty(&nn->nfsd_serv->sv_permsocks))
>  		nfsd_last_thread(net);
> -	else if (err >= 0 &&
> -		 !nn->nfsd_serv->sv_nrthreads && !xchg(&nn->keep_active, 1))
> -		svc_get(nn->nfsd_serv);
>  
> -	nfsd_put(net);
>  	return err;
>  }
>  
> @@ -747,10 +744,6 @@ static ssize_t __write_ports_addxprt(char *buf, struct net *net, const struct cr
>  	if (err < 0 && err != -EAFNOSUPPORT)
>  		goto out_close;
>  
> -	if (!nn->nfsd_serv->sv_nrthreads && !xchg(&nn->keep_active, 1))
> -		svc_get(nn->nfsd_serv);
> -
> -	nfsd_put(net);
>  	return 0;
>  out_close:
>  	xprt = svc_find_xprt(nn->nfsd_serv, transport, net, PF_INET, port);
> @@ -759,10 +752,10 @@ static ssize_t __write_ports_addxprt(char *buf, struct net *net, const struct cr
>  		svc_xprt_put(xprt);
>  	}
>  out_err:
> -	if (!nn->nfsd_serv->sv_nrthreads && !nn->keep_active)
> +	if (!nn->nfsd_serv->sv_nrthreads &&
> +	    list_empty(&nn->nfsd_serv->sv_permsocks))
>  		nfsd_last_thread(net);
>  
> -	nfsd_put(net);
>  	return err;
>  }
>  
> diff --git a/fs/nfsd/nfsd.h b/fs/nfsd/nfsd.h
> index 3286ffacbc56..9ed0e08d16c2 100644
> --- a/fs/nfsd/nfsd.h
> +++ b/fs/nfsd/nfsd.h
> @@ -113,13 +113,6 @@ int		nfsd_pool_stats_open(struct inode *, struct file *);
>  int		nfsd_pool_stats_release(struct inode *, struct file *);
>  void		nfsd_shutdown_threads(struct net *net);
>  
> -static inline void nfsd_put(struct net *net)
> -{
> -	struct nfsd_net *nn = net_generic(net, nfsd_net_id);
> -
> -	svc_put(nn->nfsd_serv);
> -}
> -
>  bool		i_am_nfsd(void);
>  
>  struct nfsdfs_client {
> diff --git a/fs/nfsd/nfssvc.c b/fs/nfsd/nfssvc.c
> index 203e1cfc1cad..61a1d966ca48 100644
> --- a/fs/nfsd/nfssvc.c
> +++ b/fs/nfsd/nfssvc.c
> @@ -59,15 +59,6 @@ static __be32			nfsd_init_request(struct svc_rqst *,
>   * nfsd_mutex protects nn->nfsd_serv -- both the pointer itself and some members
>   * of the svc_serv struct such as ->sv_temp_socks and ->sv_permsocks.
>   *
> - * If (out side the lock) nn->nfsd_serv is non-NULL, then it must point to a
> - * properly initialised 'struct svc_serv' with ->sv_nrthreads > 0 (unless
> - * nn->keep_active is set).  That number of nfsd threads must
> - * exist and each must be listed in ->sp_all_threads in some entry of
> - * ->sv_pools[].
> - *
> - * Each active thread holds a counted reference on nn->nfsd_serv, as does
> - * the nn->keep_active flag and various transient calls to svc_get().
> - *
>   * Finally, the nfsd_mutex also protects some of the global variables that are
>   * accessed when nfsd starts and that are settable via the write_* routines in
>   * nfsctl.c. In particular:
> @@ -573,6 +564,7 @@ void nfsd_last_thread(struct net *net)
>  
>  	nfsd_shutdown_net(net);
>  	nfsd_export_flush(net);
> +	svc_destroy(&serv);
>  }
>  
>  void nfsd_reset_versions(struct nfsd_net *nn)
> @@ -647,11 +639,9 @@ void nfsd_shutdown_threads(struct net *net)
>  		return;
>  	}
>  
> -	svc_get(serv);
>  	/* Kill outstanding nfsd threads */
>  	svc_set_num_threads(serv, NULL, 0);
>  	nfsd_last_thread(net);
> -	svc_put(serv);
>  	mutex_unlock(&nfsd_mutex);
>  }
>  
> @@ -667,10 +657,9 @@ int nfsd_create_serv(struct net *net)
>  	struct svc_serv *serv;
>  
>  	WARN_ON(!mutex_is_locked(&nfsd_mutex));
> -	if (nn->nfsd_serv) {
> -		svc_get(nn->nfsd_serv);
> +	if (nn->nfsd_serv)
>  		return 0;
> -	}
> +
>  	if (nfsd_max_blksize == 0)
>  		nfsd_max_blksize = nfsd_get_default_max_blksize();
>  	nfsd_reset_versions(nn);
> @@ -681,7 +670,7 @@ int nfsd_create_serv(struct net *net)
>  	serv->sv_maxconn = nn->max_connections;
>  	error = svc_bind(serv, net);
>  	if (error < 0) {
> -		svc_put(serv);
> +		svc_destroy(&serv);
>  		return error;
>  	}
>  	spin_lock(&nfsd_notifier_lock);
> @@ -764,7 +753,6 @@ int nfsd_set_nrthreads(int n, int *nthreads, struct net *net)
>  		nthreads[0] = 1;
>  
>  	/* apply the new numbers */
> -	svc_get(nn->nfsd_serv);
>  	for (i = 0; i < n; i++) {
>  		err = svc_set_num_threads(nn->nfsd_serv,
>  					  &nn->nfsd_serv->sv_pools[i],
> @@ -772,7 +760,6 @@ int nfsd_set_nrthreads(int n, int *nthreads, struct net *net)
>  		if (err)
>  			break;
>  	}
> -	svc_put(nn->nfsd_serv);
>  	return err;
>  }
>  
> @@ -814,13 +801,8 @@ nfsd_svc(int nrservs, struct net *net, const struct cred *cred)
>  		goto out_put;
>  	error = serv->sv_nrthreads;
>  out_put:
> -	/* Threads now hold service active */
> -	if (xchg(&nn->keep_active, 0))
> -		svc_put(serv);
> -
>  	if (serv->sv_nrthreads == 0)
>  		nfsd_last_thread(net);
> -	svc_put(serv);
>  out:
>  	mutex_unlock(&nfsd_mutex);
>  	return error;
> diff --git a/include/linux/sunrpc/svc.h b/include/linux/sunrpc/svc.h
> index 11acad6988a2..e50e12ed4d12 100644
> --- a/include/linux/sunrpc/svc.h
> +++ b/include/linux/sunrpc/svc.h
> @@ -69,7 +69,6 @@ struct svc_serv {
>  	struct svc_program *	sv_program;	/* RPC program */
>  	struct svc_stat *	sv_stats;	/* RPC statistics */
>  	spinlock_t		sv_lock;
> -	struct kref		sv_refcnt;
>  	unsigned int		sv_nrthreads;	/* # of server threads */
>  	unsigned int		sv_maxconn;	/* max connections allowed or
>  						 * '0' causing max to be based
> @@ -97,31 +96,7 @@ struct svc_serv {
>  #endif /* CONFIG_SUNRPC_BACKCHANNEL */
>  };
>  
> -/**
> - * svc_get() - increment reference count on a SUNRPC serv
> - * @serv:  the svc_serv to have count incremented
> - *
> - * Returns: the svc_serv that was passed in.
> - */
> -static inline struct svc_serv *svc_get(struct svc_serv *serv)
> -{
> -	kref_get(&serv->sv_refcnt);
> -	return serv;
> -}
> -
> -void svc_destroy(struct kref *);
> -
> -/**
> - * svc_put - decrement reference count on a SUNRPC serv
> - * @serv:  the svc_serv to have count decremented
> - *
> - * When the reference count reaches zero, svc_destroy()
> - * is called to clean up and free the serv.
> - */
> -static inline void svc_put(struct svc_serv *serv)
> -{
> -	kref_put(&serv->sv_refcnt, svc_destroy);
> -}
> +void svc_destroy(struct svc_serv **svcp);
>  
>  /*
>   * Maximum payload size supported by a kernel RPC server.
> diff --git a/net/sunrpc/svc.c b/net/sunrpc/svc.c
> index 3f2ea7a0496f..0b5889e058c5 100644
> --- a/net/sunrpc/svc.c
> +++ b/net/sunrpc/svc.c
> @@ -463,7 +463,6 @@ __svc_create(struct svc_program *prog, unsigned int bufsize, int npools,
>  		return NULL;
>  	serv->sv_name      = prog->pg_name;
>  	serv->sv_program   = prog;
> -	kref_init(&serv->sv_refcnt);
>  	serv->sv_stats     = prog->pg_stats;
>  	if (bufsize > RPCSVC_MAXPAYLOAD)
>  		bufsize = RPCSVC_MAXPAYLOAD;
> @@ -564,11 +563,13 @@ EXPORT_SYMBOL_GPL(svc_create_pooled);
>   * protect sv_permsocks and sv_tempsocks.
>   */
>  void
> -svc_destroy(struct kref *ref)
> +svc_destroy(struct svc_serv **servp)
>  {
> -	struct svc_serv *serv = container_of(ref, struct svc_serv, sv_refcnt);
> +	struct svc_serv *serv = *servp;
>  	unsigned int i;
>  
> +	*servp = NULL;
> +
>  	dprintk("svc: svc_destroy(%s)\n", serv->sv_program->pg_name);
>  	timer_shutdown_sync(&serv->sv_temptimer);
>  
> @@ -675,7 +676,6 @@ svc_prepare_thread(struct svc_serv *serv, struct svc_pool *pool, int node)
>  	if (!rqstp)
>  		return ERR_PTR(-ENOMEM);
>  
> -	svc_get(serv);
>  	spin_lock_bh(&serv->sv_lock);
>  	serv->sv_nrthreads += 1;
>  	spin_unlock_bh(&serv->sv_lock);
> @@ -935,11 +935,6 @@ svc_exit_thread(struct svc_rqst *rqstp)
>  
>  	svc_rqst_free(rqstp);
>  
> -	svc_put(serv);
> -	/* That svc_put() cannot be the last, because the thread
> -	 * waiting for SP_VICTIM_REMAINS to clear must hold
> -	 * a reference. So it is still safe to access pool.
> -	 */
>  	clear_and_wake_up_bit(SP_VICTIM_REMAINS, &pool->sp_flags);
>  }
>  EXPORT_SYMBOL_GPL(svc_exit_thread);

Ok, now I'm seeing the point of the change. This is a lot nicer than
dealing with a refcount.

Reviewed-by: Jeff Layton <jlayton@kernel.org>

* Re: [PATCH 5/5] nfsd: rename nfsd_last_thread() to nfsd_destroy_serv()
  2023-10-30  1:08 ` [PATCH 5/5] nfsd: rename nfsd_last_thread() to nfsd_destroy_serv() NeilBrown
@ 2023-10-30 13:15   ` Jeff Layton
  0 siblings, 0 replies; 21+ messages in thread
From: Jeff Layton @ 2023-10-30 13:15 UTC (permalink / raw)
  To: NeilBrown, Chuck Lever; +Cc: linux-nfs, Olga Kornievskaia, Dai Ngo, Tom Talpey

On Mon, 2023-10-30 at 12:08 +1100, NeilBrown wrote:
> As this function now destroys the svc_serv, this is a better name.
> 
> Signed-off-by: NeilBrown <neilb@suse.de>
> ---
>  fs/nfsd/nfsctl.c | 4 ++--
>  fs/nfsd/nfsd.h   | 2 +-
>  fs/nfsd/nfssvc.c | 8 ++++----
>  3 files changed, 7 insertions(+), 7 deletions(-)
> 
> diff --git a/fs/nfsd/nfsctl.c b/fs/nfsd/nfsctl.c
> index 86cab5281fd2..d603e672d568 100644
> --- a/fs/nfsd/nfsctl.c
> +++ b/fs/nfsd/nfsctl.c
> @@ -707,7 +707,7 @@ static ssize_t __write_ports_addfd(char *buf, struct net *net, const struct cred
>  
>  	if (!nn->nfsd_serv->sv_nrthreads &&
>  	    list_empty(&nn->nfsd_serv->sv_permsocks))
> -		nfsd_last_thread(net);
> +		nfsd_destroy_serv(net);
>  
>  	return err;
>  }
> @@ -754,7 +754,7 @@ static ssize_t __write_ports_addxprt(char *buf, struct net *net, const struct cr
>  out_err:
>  	if (!nn->nfsd_serv->sv_nrthreads &&
>  	    list_empty(&nn->nfsd_serv->sv_permsocks))
> -		nfsd_last_thread(net);
> +		nfsd_destroy_serv(net);
>  
>  	return err;
>  }
> diff --git a/fs/nfsd/nfsd.h b/fs/nfsd/nfsd.h
> index 9ed0e08d16c2..304e9728b929 100644
> --- a/fs/nfsd/nfsd.h
> +++ b/fs/nfsd/nfsd.h
> @@ -148,7 +148,7 @@ int nfsd_vers(struct nfsd_net *nn, int vers, enum vers_op change);
>  int nfsd_minorversion(struct nfsd_net *nn, u32 minorversion, enum vers_op change);
>  void nfsd_reset_versions(struct nfsd_net *nn);
>  int nfsd_create_serv(struct net *net);
> -void nfsd_last_thread(struct net *net);
> +void nfsd_destroy_serv(struct net *net);
>  
>  extern int nfsd_max_blksize;
>  
> diff --git a/fs/nfsd/nfssvc.c b/fs/nfsd/nfssvc.c
> index 61a1d966ca48..88c2e2c94829 100644
> --- a/fs/nfsd/nfssvc.c
> +++ b/fs/nfsd/nfssvc.c
> @@ -533,7 +533,7 @@ static struct notifier_block nfsd_inet6addr_notifier = {
>  /* Only used under nfsd_mutex, so this atomic may be overkill: */
>  static atomic_t nfsd_notifier_refcount = ATOMIC_INIT(0);
>  
> -void nfsd_last_thread(struct net *net)
> +void nfsd_destroy_serv(struct net *net)
>  {
>  	struct nfsd_net *nn = net_generic(net, nfsd_net_id);
>  	struct svc_serv *serv = nn->nfsd_serv;
> @@ -555,7 +555,7 @@ void nfsd_last_thread(struct net *net)
>  	/*
>  	 * write_ports can create the server without actually starting
>  	 * any threads--if we get shut down before any threads are
> -	 * started, then nfsd_last_thread will be run before any of this
> +	 * started, then nfsd_destroy_serv will be run before any of this
>  	 * other initialization has been done except the rpcb information.
>  	 */
>  	svc_rpcb_cleanup(serv, net);
> @@ -641,7 +641,7 @@ void nfsd_shutdown_threads(struct net *net)
>  
>  	/* Kill outstanding nfsd threads */
>  	svc_set_num_threads(serv, NULL, 0);
> -	nfsd_last_thread(net);
> +	nfsd_destroy_serv(net);
>  	mutex_unlock(&nfsd_mutex);
>  }
>  
> @@ -802,7 +802,7 @@ nfsd_svc(int nrservs, struct net *net, const struct cred *cred)
>  	error = serv->sv_nrthreads;
>  out_put:
>  	if (serv->sv_nrthreads == 0)
> -		nfsd_last_thread(net);
> +		nfsd_destroy_serv(net);
>  out:
>  	mutex_unlock(&nfsd_mutex);
>  	return error;

LGTM

Reviewed-by: Jeff Layton <jlayton@kernel.org>

* Re: [PATCH 3/5] nfsd: hold nfsd_mutex across entire netlink operation
  2023-10-30 13:03   ` Jeff Layton
@ 2023-10-30 13:27     ` Lorenzo Bianconi
  0 siblings, 0 replies; 21+ messages in thread
From: Lorenzo Bianconi @ 2023-10-30 13:27 UTC (permalink / raw)
  To: Jeff Layton
  Cc: NeilBrown, Chuck Lever, linux-nfs, Olga Kornievskaia, Dai Ngo,
	Tom Talpey

> On Mon, 2023-10-30 at 12:08 +1100, NeilBrown wrote:
> > Rather than using svc_get() and svc_put() to hold a stable reference to
> > the nfsd_svc for netlink lookups, simply hold the mutex for the entire
> > time.
> > 
> > The "entire" time isn't very long, and the mutex is not often contended.
> > 
> > This makes way for us to remove the refcount on svc, which is more
> > confusing than useful.
> > 
> > Signed-off-by: NeilBrown <neilb@suse.de>
> > ---
> >  fs/nfsd/nfsctl.c | 9 +++------
> >  1 file changed, 3 insertions(+), 6 deletions(-)
> > 
> > diff --git a/fs/nfsd/nfsctl.c b/fs/nfsd/nfsctl.c
> > index d78ae4452946..8f644f1d157c 100644
> > --- a/fs/nfsd/nfsctl.c
> > +++ b/fs/nfsd/nfsctl.c
> > @@ -1515,11 +1515,10 @@ int nfsd_nl_rpc_status_get_start(struct netlink_callback *cb)
> >  	int ret = -ENODEV;
> >  
> >  	mutex_lock(&nfsd_mutex);
> > -	if (nn->nfsd_serv) {
> > -		svc_get(nn->nfsd_serv);
> > +	if (nn->nfsd_serv)
> >  		ret = 0;
> > -	}
> > -	mutex_unlock(&nfsd_mutex);
> > +	else
> > +		mutex_unlock(&nfsd_mutex);
> >  
> >  	return ret;
> >  }
> > @@ -1691,8 +1690,6 @@ int nfsd_nl_rpc_status_get_dumpit(struct sk_buff *skb,
> >   */
> >  int nfsd_nl_rpc_status_get_done(struct netlink_callback *cb)
> >  {
> > -	mutex_lock(&nfsd_mutex);
> > -	nfsd_put(sock_net(cb->skb->sk));
> >  	mutex_unlock(&nfsd_mutex);
> >  
> >  	return 0;
> 
> (cc'ing Lorenzo since he wrote this)
> 
> I think Lorenzo did it this way originally, and I convinced him to take
> a reference instead. This should be fine though.

yep, the idea was to avoid grabbing the mutex and bump the refcount
instead, but I think it is fine.

Regards,
Lorenzo

> 
> Reviewed-by: Jeff Layton <jlayton@kernel.org>


* Re: [PATCH 1/5] nfsd: call nfsd_last_thread() before final nfsd_put()
  2023-10-30 13:15   ` Jeff Layton
@ 2023-10-30 15:52     ` Chuck Lever
  2023-10-30 16:41       ` Jeff Layton
  0 siblings, 1 reply; 21+ messages in thread
From: Chuck Lever @ 2023-10-30 15:52 UTC (permalink / raw)
  To: Jeff Layton; +Cc: NeilBrown, linux-nfs, Olga Kornievskaia, Dai Ngo, Tom Talpey

On Mon, Oct 30, 2023 at 09:15:17AM -0400, Jeff Layton wrote:
> On Mon, 2023-10-30 at 12:08 +1100, NeilBrown wrote:
> > If write_ports_addfd or write_ports_addxprt fail, they call nfsd_put()
> > without calling nfsd_last_thread().  This leaves nn->nfsd_serv pointing
> > to a structure that has been freed.
> > 
> > So export nfsd_last_thread() and call it when the nfsd_serv is about to
> > be destroyed.
> > 
> > Signed-off-by: NeilBrown <neilb@suse.de>
> > ---
> >  fs/nfsd/nfsctl.c | 9 +++++++--
> >  fs/nfsd/nfsd.h   | 1 +
> >  fs/nfsd/nfssvc.c | 2 +-
> >  3 files changed, 9 insertions(+), 3 deletions(-)
> > 
> > diff --git a/fs/nfsd/nfsctl.c b/fs/nfsd/nfsctl.c
> > index 739ed5bf71cd..79efb1075f38 100644
> > --- a/fs/nfsd/nfsctl.c
> > +++ b/fs/nfsd/nfsctl.c
> > @@ -705,8 +705,10 @@ static ssize_t __write_ports_addfd(char *buf, struct net *net, const struct cred
> >  
> >  	err = svc_addsock(nn->nfsd_serv, net, fd, buf, SIMPLE_TRANSACTION_LIMIT, cred);
> >  
> > -	if (err >= 0 &&
> > -	    !nn->nfsd_serv->sv_nrthreads && !xchg(&nn->keep_active, 1))
> > +	if (err < 0 && !nn->nfsd_serv->sv_nrthreads && !nn->keep_active)
> > +		nfsd_last_thread(net);
> > +	else if (err >= 0 &&
> > +		 !nn->nfsd_serv->sv_nrthreads && !xchg(&nn->keep_active, 1))
> >  		svc_get(nn->nfsd_serv);
> >  
> >  	nfsd_put(net);
> > @@ -757,6 +759,9 @@ static ssize_t __write_ports_addxprt(char *buf, struct net *net, const struct cr
> >  		svc_xprt_put(xprt);
> >  	}
> >  out_err:
> > +	if (!nn->nfsd_serv->sv_nrthreads && !nn->keep_active)
> > +		nfsd_last_thread(net);
> > +
> >  	nfsd_put(net);
> >  	return err;
> >  }
> > diff --git a/fs/nfsd/nfsd.h b/fs/nfsd/nfsd.h
> > index f5ff42f41ee7..3286ffacbc56 100644
> > --- a/fs/nfsd/nfsd.h
> > +++ b/fs/nfsd/nfsd.h
> > @@ -155,6 +155,7 @@ int nfsd_vers(struct nfsd_net *nn, int vers, enum vers_op change);
> >  int nfsd_minorversion(struct nfsd_net *nn, u32 minorversion, enum vers_op change);
> >  void nfsd_reset_versions(struct nfsd_net *nn);
> >  int nfsd_create_serv(struct net *net);
> > +void nfsd_last_thread(struct net *net);
> >  
> >  extern int nfsd_max_blksize;
> >  
> > diff --git a/fs/nfsd/nfssvc.c b/fs/nfsd/nfssvc.c
> > index d6122bb2d167..6c968c02cc29 100644
> > --- a/fs/nfsd/nfssvc.c
> > +++ b/fs/nfsd/nfssvc.c
> > @@ -542,7 +542,7 @@ static struct notifier_block nfsd_inet6addr_notifier = {
> >  /* Only used under nfsd_mutex, so this atomic may be overkill: */
> >  static atomic_t nfsd_notifier_refcount = ATOMIC_INIT(0);
> >  
> > -static void nfsd_last_thread(struct net *net)
> > +void nfsd_last_thread(struct net *net)
> >  {
> >  	struct nfsd_net *nn = net_generic(net, nfsd_net_id);
> >  	struct svc_serv *serv = nn->nfsd_serv;
> 
> This patch should fix the problem that I was seeing with write_ports,

Then let's add

Fixes: ec52361df99b ("SUNRPC: stop using ->sv_nrthreads as a refcount")

to this one, since it addresses a crasher seen in the wild.


> but it won't fix the hinky error cleanup in nfsd_svc. It looks like that
> does get fixed in patch #4 though, so I'm not too concerned.

Does that fix also need to be backported?


-- 
Chuck Lever

* Re: [PATCH 1/5] nfsd: call nfsd_last_thread() before final nfsd_put()
  2023-10-30 15:52     ` Chuck Lever
@ 2023-10-30 16:41       ` Jeff Layton
  0 siblings, 0 replies; 21+ messages in thread
From: Jeff Layton @ 2023-10-30 16:41 UTC (permalink / raw)
  To: Chuck Lever; +Cc: NeilBrown, linux-nfs, Olga Kornievskaia, Dai Ngo, Tom Talpey

On Mon, 2023-10-30 at 11:52 -0400, Chuck Lever wrote:
> On Mon, Oct 30, 2023 at 09:15:17AM -0400, Jeff Layton wrote:
> > On Mon, 2023-10-30 at 12:08 +1100, NeilBrown wrote:
> > > If write_ports_addfd or write_ports_addxprt fail, they call nfsd_put()
> > > without calling nfsd_last_thread().  This leaves nn->nfsd_serv pointing
> > > to a structure that has been freed.
> > > 
> > > So export nfsd_last_thread() and call it when the nfsd_serv is about to
> > > be destroyed.
> > > 
> > > Signed-off-by: NeilBrown <neilb@suse.de>
> > > ---
> > >  fs/nfsd/nfsctl.c | 9 +++++++--
> > >  fs/nfsd/nfsd.h   | 1 +
> > >  fs/nfsd/nfssvc.c | 2 +-
> > >  3 files changed, 9 insertions(+), 3 deletions(-)
> > > 
> > > diff --git a/fs/nfsd/nfsctl.c b/fs/nfsd/nfsctl.c
> > > index 739ed5bf71cd..79efb1075f38 100644
> > > --- a/fs/nfsd/nfsctl.c
> > > +++ b/fs/nfsd/nfsctl.c
> > > @@ -705,8 +705,10 @@ static ssize_t __write_ports_addfd(char *buf, struct net *net, const struct cred
> > >  
> > >  	err = svc_addsock(nn->nfsd_serv, net, fd, buf, SIMPLE_TRANSACTION_LIMIT, cred);
> > >  
> > > -	if (err >= 0 &&
> > > -	    !nn->nfsd_serv->sv_nrthreads && !xchg(&nn->keep_active, 1))
> > > +	if (err < 0 && !nn->nfsd_serv->sv_nrthreads && !nn->keep_active)
> > > +		nfsd_last_thread(net);
> > > +	else if (err >= 0 &&
> > > +		 !nn->nfsd_serv->sv_nrthreads && !xchg(&nn->keep_active, 1))
> > >  		svc_get(nn->nfsd_serv);
> > >  
> > >  	nfsd_put(net);
> > > @@ -757,6 +759,9 @@ static ssize_t __write_ports_addxprt(char *buf, struct net *net, const struct cr
> > >  		svc_xprt_put(xprt);
> > >  	}
> > >  out_err:
> > > +	if (!nn->nfsd_serv->sv_nrthreads && !nn->keep_active)
> > > +		nfsd_last_thread(net);
> > > +
> > >  	nfsd_put(net);
> > >  	return err;
> > >  }
> > > diff --git a/fs/nfsd/nfsd.h b/fs/nfsd/nfsd.h
> > > index f5ff42f41ee7..3286ffacbc56 100644
> > > --- a/fs/nfsd/nfsd.h
> > > +++ b/fs/nfsd/nfsd.h
> > > @@ -155,6 +155,7 @@ int nfsd_vers(struct nfsd_net *nn, int vers, enum vers_op change);
> > >  int nfsd_minorversion(struct nfsd_net *nn, u32 minorversion, enum vers_op change);
> > >  void nfsd_reset_versions(struct nfsd_net *nn);
> > >  int nfsd_create_serv(struct net *net);
> > > +void nfsd_last_thread(struct net *net);
> > >  
> > >  extern int nfsd_max_blksize;
> > >  
> > > diff --git a/fs/nfsd/nfssvc.c b/fs/nfsd/nfssvc.c
> > > index d6122bb2d167..6c968c02cc29 100644
> > > --- a/fs/nfsd/nfssvc.c
> > > +++ b/fs/nfsd/nfssvc.c
> > > @@ -542,7 +542,7 @@ static struct notifier_block nfsd_inet6addr_notifier = {
> > >  /* Only used under nfsd_mutex, so this atomic may be overkill: */
> > >  static atomic_t nfsd_notifier_refcount = ATOMIC_INIT(0);
> > >  
> > > -static void nfsd_last_thread(struct net *net)
> > > +void nfsd_last_thread(struct net *net)
> > >  {
> > >  	struct nfsd_net *nn = net_generic(net, nfsd_net_id);
> > >  	struct svc_serv *serv = nn->nfsd_serv;
> > 
> > This patch should fix the problem that I was seeing with write_ports,
> 
> Then let's add
> 
> Fixes: ec52361df99b ("SUNRPC: stop using ->sv_nrthreads as a refcount")
> 
> to this one, since it addresses a crasher seen in the wild.
> 
> 

Sounds good.

> > but it won't fix the hinky error cleanup in nfsd_svc. It looks like that
> > does get fixed in patch #4 though, so I'm not too concerned.
> 
> Does that fix also need to be backported?
> 

I think so, but we might want to split that out into a more targeted
patch and apply it ahead of the rest of the series. Our QA folks seem to
be able to hit the problem somehow, so it's likely to be triggered by
people in the field too.

-- 
Jeff Layton <jlayton@kernel.org>


* Re: [PATCH 2/5] svc: don't hold reference for poolstats, only mutex.
  2023-10-30 13:01   ` Jeff Layton
@ 2023-10-30 21:48     ` NeilBrown
  2023-10-31 13:08       ` Jeff Layton
  0 siblings, 1 reply; 21+ messages in thread
From: NeilBrown @ 2023-10-30 21:48 UTC (permalink / raw)
  To: Jeff Layton
  Cc: Chuck Lever, linux-nfs, Olga Kornievskaia, Dai Ngo, Tom Talpey

On Tue, 31 Oct 2023, Jeff Layton wrote:
> On Mon, 2023-10-30 at 12:08 +1100, NeilBrown wrote:
> > A future patch will remove refcounting on svc_serv as it is of little
> > use.
> > It is currently used to keep the svc around while the pool_stats file is
> > open.
> > Change this to get the pointer, protected by the mutex, only in
> > seq_start, and then release the mutex in seq_stop.
> > This means that if the nfsd server is stopped and restarted while the
> > pool_stats file is open, then some pool stats info could be from the
> > first instance and some from the second.  This might appear odd, but is
> > unlikely to be a problem in practice.
> > 
> > Signed-off-by: NeilBrown <neilb@suse.de>
> > ---
> >  fs/nfsd/nfsctl.c           |  2 +-
> >  fs/nfsd/nfssvc.c           | 30 ++++++++---------------
> >  include/linux/sunrpc/svc.h |  5 +++-
> >  net/sunrpc/svc_xprt.c      | 49 ++++++++++++++++++++++++++++++++------
> >  4 files changed, 57 insertions(+), 29 deletions(-)
> > 
> > diff --git a/fs/nfsd/nfsctl.c b/fs/nfsd/nfsctl.c
> > index 79efb1075f38..d78ae4452946 100644
> > --- a/fs/nfsd/nfsctl.c
> > +++ b/fs/nfsd/nfsctl.c
> > @@ -179,7 +179,7 @@ static const struct file_operations pool_stats_operations = {
> >  	.open		= nfsd_pool_stats_open,
> >  	.read		= seq_read,
> >  	.llseek		= seq_lseek,
> > -	.release	= nfsd_pool_stats_release,
> > +	.release	= svc_pool_stats_release,
> >  };
> >  
> >  DEFINE_SHOW_ATTRIBUTE(nfsd_reply_cache_stats);
> > diff --git a/fs/nfsd/nfssvc.c b/fs/nfsd/nfssvc.c
> > index 6c968c02cc29..203e1cfc1cad 100644
> > --- a/fs/nfsd/nfssvc.c
> > +++ b/fs/nfsd/nfssvc.c
> > @@ -1072,30 +1072,20 @@ bool nfssvc_encode_voidres(struct svc_rqst *rqstp, struct xdr_stream *xdr)
> >  	return true;
> >  }
> >  
> > -int nfsd_pool_stats_open(struct inode *inode, struct file *file)
> > +static struct svc_serv *nfsd_get_serv(struct seq_file *s, bool start)
> >  {
> > -	int ret;
> > -	struct nfsd_net *nn = net_generic(inode->i_sb->s_fs_info, nfsd_net_id);
> > -
> > -	mutex_lock(&nfsd_mutex);
> > -	if (nn->nfsd_serv == NULL) {
> > +	struct nfsd_net *nn = net_generic(file_inode(s->file)->i_sb->s_fs_info,
> > +					  nfsd_net_id);
> > +	if (start) {
> > +		mutex_lock(&nfsd_mutex);
> > +		return nn->nfsd_serv;
> > +	} else {
> >  		mutex_unlock(&nfsd_mutex);
> > -		return -ENODEV;
> > +		return NULL;
> >  	}
> > -	svc_get(nn->nfsd_serv);
> > -	ret = svc_pool_stats_open(nn->nfsd_serv, file);
> > -	mutex_unlock(&nfsd_mutex);
> > -	return ret;
> >  }
> >  
> > -int nfsd_pool_stats_release(struct inode *inode, struct file *file)
> > +int nfsd_pool_stats_open(struct inode *inode, struct file *file)
> >  {
> > -	struct seq_file *seq = file->private_data;
> > -	struct svc_serv *serv = seq->private;
> > -	int ret = seq_release(inode, file);
> > -
> > -	mutex_lock(&nfsd_mutex);
> > -	svc_put(serv);
> > -	mutex_unlock(&nfsd_mutex);
> > -	return ret;
> > +	return svc_pool_stats_open(nfsd_get_serv, file);
> >  }
> > diff --git a/include/linux/sunrpc/svc.h b/include/linux/sunrpc/svc.h
> > index b10f987509cc..11acad6988a2 100644
> > --- a/include/linux/sunrpc/svc.h
> > +++ b/include/linux/sunrpc/svc.h
> > @@ -433,7 +433,10 @@ void		   svc_exit_thread(struct svc_rqst *);
> >  struct svc_serv *  svc_create_pooled(struct svc_program *, unsigned int,
> >  				     int (*threadfn)(void *data));
> >  int		   svc_set_num_threads(struct svc_serv *, struct svc_pool *, int);
> > -int		   svc_pool_stats_open(struct svc_serv *serv, struct file *file);
> > +int		   svc_pool_stats_open(struct svc_serv *(*get_serv)(struct seq_file *, bool),
> > +				       struct file *file);
> > +int		   svc_pool_stats_release(struct inode *inode,
> > +					  struct file *file);
> >  void		   svc_process(struct svc_rqst *rqstp);
> >  void		   svc_process_bc(struct rpc_rqst *req, struct svc_rqst *rqstp);
> >  int		   svc_register(const struct svc_serv *, struct net *, const int,
> > diff --git a/net/sunrpc/svc_xprt.c b/net/sunrpc/svc_xprt.c
> > index fee83d1024bc..2f99f7475b7b 100644
> > --- a/net/sunrpc/svc_xprt.c
> > +++ b/net/sunrpc/svc_xprt.c
> > @@ -1366,26 +1366,38 @@ EXPORT_SYMBOL_GPL(svc_xprt_names);
> >  
> >  /*----------------------------------------------------------------------------*/
> >  
> > +struct pool_private {
> > +	struct svc_serv *(*get_serv)(struct seq_file *, bool);
> 
> This bool is pretty ugly. I think I'd rather see two operations here
> (get_serv/put_serv). Also, this could use a kerneldoc comment.

I agree that bool is ugly, but two function pointers as function args
seemed ugly, and stashing them in 'struct svc_serv' seemed ugly.
So I picked one.  I'd be keen to find an approach that didn't require a
function pointer.

Maybe sunrpc could declare

   struct svc_ref {
         struct mutex mutex;
         struct svc_serv *serv;
   };

and nfsd could use one of those instead of nfsd_mutex and nfsd_serv, and
pass a pointer to it to the open function.

But then the mutex would have to be in the per-net structure.  And maybe
that isn't a bad idea, but it is a change...

I guess I could pass pointers to nfsd_mutex and nn->nfsd_serv to the
open function....

Any other ideas?

Thanks,
NeilBrown

> 
> > +	struct svc_serv *serv;
> > +};
> > +
> >  static void *svc_pool_stats_start(struct seq_file *m, loff_t *pos)
> >  {
> >  	unsigned int pidx = (unsigned int)*pos;
> > -	struct svc_serv *serv = m->private;
> > +	struct pool_private *pp = m->private;
> >  
> >  	dprintk("svc_pool_stats_start, *pidx=%u\n", pidx);
> >  
> > +	pp->serv = pp->get_serv(m, true);
> > +
> >  	if (!pidx)
> >  		return SEQ_START_TOKEN;
> > -	return (pidx > serv->sv_nrpools ? NULL : &serv->sv_pools[pidx-1]);
> > +	if (!pp->serv)
> > +		return NULL;
> > +	return (pidx > pp->serv->sv_nrpools ? NULL : &pp->serv->sv_pools[pidx-1]);
> >  }
> >  
> >  static void *svc_pool_stats_next(struct seq_file *m, void *p, loff_t *pos)
> >  {
> >  	struct svc_pool *pool = p;
> > -	struct svc_serv *serv = m->private;
> > +	struct pool_private *pp = m->private;
> > +	struct svc_serv *serv = pp->serv;
> >  
> >  	dprintk("svc_pool_stats_next, *pos=%llu\n", *pos);
> >  
> > -	if (p == SEQ_START_TOKEN) {
> > +	if (!serv) {
> > +		pool = NULL;
> > +	} else if (p == SEQ_START_TOKEN) {
> >  		pool = &serv->sv_pools[0];
> >  	} else {
> >  		unsigned int pidx = (pool - &serv->sv_pools[0]);
> > @@ -1400,6 +1412,9 @@ static void *svc_pool_stats_next(struct seq_file *m, void *p, loff_t *pos)
> >  
> >  static void svc_pool_stats_stop(struct seq_file *m, void *p)
> >  {
> > +	struct pool_private *pp = m->private;
> > +
> > +	pp->get_serv(m, false);
> >  }
> >  
> >  static int svc_pool_stats_show(struct seq_file *m, void *p)
> > @@ -1427,15 +1442,35 @@ static const struct seq_operations svc_pool_stats_seq_ops = {
> >  	.show	= svc_pool_stats_show,
> >  };
> >  
> > -int svc_pool_stats_open(struct svc_serv *serv, struct file *file)
> > +int svc_pool_stats_open(struct svc_serv *(*get_serv)(struct seq_file *, bool),
> > +			struct file *file)
> >  {
> > +	struct pool_private *pp;
> >  	int err;
> >  
> > +	pp = kmalloc(sizeof(*pp), GFP_KERNEL);
> > +	if (!pp)
> > +		return -ENOMEM;
> > +
> >  	err = seq_open(file, &svc_pool_stats_seq_ops);
> > -	if (!err)
> > -		((struct seq_file *) file->private_data)->private = serv;
> > +	if (!err) {
> > +		pp->get_serv = get_serv;
> > +		((struct seq_file *) file->private_data)->private = pp;
> > +	} else
> > +		kfree(pp);
> > +
> >  	return err;
> >  }
> >  EXPORT_SYMBOL(svc_pool_stats_open);
> >  
> > +int svc_pool_stats_release(struct inode *inode, struct file *file)
> > +{
> > +	struct seq_file *seq = file->private_data;
> > +
> > +	kfree(seq->private);
> > +	seq->private = NULL;
> > +	return seq_release(inode, file);
> > +}
> > +EXPORT_SYMBOL(svc_pool_stats_release);
> > +
> >  /*----------------------------------------------------------------------------*/
> 
> -- 
> Jeff Layton <jlayton@kernel.org>
> 



* Re: [PATCH 2/5] svc: don't hold reference for poolstats, only mutex.
  2023-10-30 21:48     ` NeilBrown
@ 2023-10-31 13:08       ` Jeff Layton
  0 siblings, 0 replies; 21+ messages in thread
From: Jeff Layton @ 2023-10-31 13:08 UTC (permalink / raw)
  To: NeilBrown; +Cc: Chuck Lever, linux-nfs, Olga Kornievskaia, Dai Ngo, Tom Talpey

On Tue, 2023-10-31 at 08:48 +1100, NeilBrown wrote:
> On Tue, 31 Oct 2023, Jeff Layton wrote:
> > On Mon, 2023-10-30 at 12:08 +1100, NeilBrown wrote:
> > > A future patch will remove refcounting on svc_serv as it is of little
> > > use.
> > > It is currently used to keep the svc around while the pool_stats file is
> > > open.
> > > Change this to get the pointer, protected by the mutex, only in
> > > seq_start, and then release the mutex in seq_stop.
> > > This means that if the nfsd server is stopped and restarted while the
> > > pool_stats file is open, then some pool stats info could be from the
> > > first instance and some from the second.  This might appear odd, but is
> > > unlikely to be a problem in practice.
> > > 
> > > Signed-off-by: NeilBrown <neilb@suse.de>
> > > ---
> > >  fs/nfsd/nfsctl.c           |  2 +-
> > >  fs/nfsd/nfssvc.c           | 30 ++++++++---------------
> > >  include/linux/sunrpc/svc.h |  5 +++-
> > >  net/sunrpc/svc_xprt.c      | 49 ++++++++++++++++++++++++++++++++------
> > >  4 files changed, 57 insertions(+), 29 deletions(-)
> > > 
> > > diff --git a/fs/nfsd/nfsctl.c b/fs/nfsd/nfsctl.c
> > > index 79efb1075f38..d78ae4452946 100644
> > > --- a/fs/nfsd/nfsctl.c
> > > +++ b/fs/nfsd/nfsctl.c
> > > @@ -179,7 +179,7 @@ static const struct file_operations pool_stats_operations = {
> > >  	.open		= nfsd_pool_stats_open,
> > >  	.read		= seq_read,
> > >  	.llseek		= seq_lseek,
> > > -	.release	= nfsd_pool_stats_release,
> > > +	.release	= svc_pool_stats_release,
> > >  };
> > >  
> > >  DEFINE_SHOW_ATTRIBUTE(nfsd_reply_cache_stats);
> > > diff --git a/fs/nfsd/nfssvc.c b/fs/nfsd/nfssvc.c
> > > index 6c968c02cc29..203e1cfc1cad 100644
> > > --- a/fs/nfsd/nfssvc.c
> > > +++ b/fs/nfsd/nfssvc.c
> > > @@ -1072,30 +1072,20 @@ bool nfssvc_encode_voidres(struct svc_rqst *rqstp, struct xdr_stream *xdr)
> > >  	return true;
> > >  }
> > >  
> > > -int nfsd_pool_stats_open(struct inode *inode, struct file *file)
> > > +static struct svc_serv *nfsd_get_serv(struct seq_file *s, bool start)
> > >  {
> > > -	int ret;
> > > -	struct nfsd_net *nn = net_generic(inode->i_sb->s_fs_info, nfsd_net_id);
> > > -
> > > -	mutex_lock(&nfsd_mutex);
> > > -	if (nn->nfsd_serv == NULL) {
> > > +	struct nfsd_net *nn = net_generic(file_inode(s->file)->i_sb->s_fs_info,
> > > +					  nfsd_net_id);
> > > +	if (start) {
> > > +		mutex_lock(&nfsd_mutex);
> > > +		return nn->nfsd_serv;
> > > +	} else {
> > >  		mutex_unlock(&nfsd_mutex);
> > > -		return -ENODEV;
> > > +		return NULL;
> > >  	}
> > > -	svc_get(nn->nfsd_serv);
> > > -	ret = svc_pool_stats_open(nn->nfsd_serv, file);
> > > -	mutex_unlock(&nfsd_mutex);
> > > -	return ret;
> > >  }
> > >  
> > > -int nfsd_pool_stats_release(struct inode *inode, struct file *file)
> > > +int nfsd_pool_stats_open(struct inode *inode, struct file *file)
> > >  {
> > > -	struct seq_file *seq = file->private_data;
> > > -	struct svc_serv *serv = seq->private;
> > > -	int ret = seq_release(inode, file);
> > > -
> > > -	mutex_lock(&nfsd_mutex);
> > > -	svc_put(serv);
> > > -	mutex_unlock(&nfsd_mutex);
> > > -	return ret;
> > > +	return svc_pool_stats_open(nfsd_get_serv, file);
> > >  }
> > > diff --git a/include/linux/sunrpc/svc.h b/include/linux/sunrpc/svc.h
> > > index b10f987509cc..11acad6988a2 100644
> > > --- a/include/linux/sunrpc/svc.h
> > > +++ b/include/linux/sunrpc/svc.h
> > > @@ -433,7 +433,10 @@ void		   svc_exit_thread(struct svc_rqst *);
> > >  struct svc_serv *  svc_create_pooled(struct svc_program *, unsigned int,
> > >  				     int (*threadfn)(void *data));
> > >  int		   svc_set_num_threads(struct svc_serv *, struct svc_pool *, int);
> > > -int		   svc_pool_stats_open(struct svc_serv *serv, struct file *file);
> > > +int		   svc_pool_stats_open(struct svc_serv *(*get_serv)(struct seq_file *, bool),
> > > +				       struct file *file);
> > > +int		   svc_pool_stats_release(struct inode *inode,
> > > +					  struct file *file);
> > >  void		   svc_process(struct svc_rqst *rqstp);
> > >  void		   svc_process_bc(struct rpc_rqst *req, struct svc_rqst *rqstp);
> > >  int		   svc_register(const struct svc_serv *, struct net *, const int,
> > > diff --git a/net/sunrpc/svc_xprt.c b/net/sunrpc/svc_xprt.c
> > > index fee83d1024bc..2f99f7475b7b 100644
> > > --- a/net/sunrpc/svc_xprt.c
> > > +++ b/net/sunrpc/svc_xprt.c
> > > @@ -1366,26 +1366,38 @@ EXPORT_SYMBOL_GPL(svc_xprt_names);
> > >  
> > >  /*----------------------------------------------------------------------------*/
> > >  
> > > +struct pool_private {
> > > +	struct svc_serv *(*get_serv)(struct seq_file *, bool);
> > 
> > This bool is pretty ugly. I think I'd rather see two operations here
> > (get_serv/put_serv). Also, this could use a kerneldoc comment.
> 
> I agree that bool is ugly, but two function pointers as function args
> seemed ugly, and stashing them in 'struct svc_serv' seemed ugly.
> So I picked one.  I'd be keen to find an approach that didn't require a
> function pointer.
> 
> Maybe sunrpc could declare
> 
>    struct svc_ref {
>          struct mutex mutex;
>          struct svc_serv *serv;
>    };
> 
> and nfsd could use one of those instead of nfsd_mutex and nfsd_serv, and
> pass a pointer to it to the open function.
> 
> But then the mutex would have to be in the per-net structure.  And maybe
> that isn't a bad idea, but it is a change...
> 
> I guess I could pass pointers to nfsd_mutex and nn->nfsd_serv to the
> open function....
> 
> Any other ideas?
> 
> 

I think just passing two function pointers to svc_pool_stats_open, and
storing them both in the serv is the best solution (for now). Like you
said, there are no clean options here. That function only has one caller
though, so at least the nastiness will be confined to that.

Moving the mutex to be per-net does make a lot of sense, but I think
that's a separate project. If you decide to do that and it allows you to
make a simpler interface for handling the get/put_serv pointers, then
the interface can be reworked at that point.

> > 
> > > +	struct svc_serv *serv;
> > > +};
> > > +
> > >  static void *svc_pool_stats_start(struct seq_file *m, loff_t *pos)
> > >  {
> > >  	unsigned int pidx = (unsigned int)*pos;
> > > -	struct svc_serv *serv = m->private;
> > > +	struct pool_private *pp = m->private;
> > >  
> > >  	dprintk("svc_pool_stats_start, *pidx=%u\n", pidx);
> > >  
> > > +	pp->serv = pp->get_serv(m, true);
> > > +
> > >  	if (!pidx)
> > >  		return SEQ_START_TOKEN;
> > > -	return (pidx > serv->sv_nrpools ? NULL : &serv->sv_pools[pidx-1]);
> > > +	if (!pp->serv)
> > > +		return NULL;
> > > +	return (pidx > pp->serv->sv_nrpools ? NULL : &pp->serv->sv_pools[pidx-1]);
> > >  }
> > >  
> > >  static void *svc_pool_stats_next(struct seq_file *m, void *p, loff_t *pos)
> > >  {
> > >  	struct svc_pool *pool = p;
> > > -	struct svc_serv *serv = m->private;
> > > +	struct pool_private *pp = m->private;
> > > +	struct svc_serv *serv = pp->serv;
> > >  
> > >  	dprintk("svc_pool_stats_next, *pos=%llu\n", *pos);
> > >  
> > > -	if (p == SEQ_START_TOKEN) {
> > > +	if (!serv) {
> > > +		pool = NULL;
> > > +	} else if (p == SEQ_START_TOKEN) {
> > >  		pool = &serv->sv_pools[0];
> > >  	} else {
> > >  		unsigned int pidx = (pool - &serv->sv_pools[0]);
> > > @@ -1400,6 +1412,9 @@ static void *svc_pool_stats_next(struct seq_file *m, void *p, loff_t *pos)
> > >  
> > >  static void svc_pool_stats_stop(struct seq_file *m, void *p)
> > >  {
> > > +	struct pool_private *pp = m->private;
> > > +
> > > +	pp->get_serv(m, false);
> > >  }
> > >  
> > >  static int svc_pool_stats_show(struct seq_file *m, void *p)
> > > @@ -1427,15 +1442,35 @@ static const struct seq_operations svc_pool_stats_seq_ops = {
> > >  	.show	= svc_pool_stats_show,
> > >  };
> > >  
> > > -int svc_pool_stats_open(struct svc_serv *serv, struct file *file)
> > > +int svc_pool_stats_open(struct svc_serv *(*get_serv)(struct seq_file *, bool),
> > > +			struct file *file)
> > >  {
> > > +	struct pool_private *pp;
> > >  	int err;
> > >  
> > > +	pp = kmalloc(sizeof(*pp), GFP_KERNEL);
> > > +	if (!pp)
> > > +		return -ENOMEM;
> > > +
> > >  	err = seq_open(file, &svc_pool_stats_seq_ops);
> > > -	if (!err)
> > > -		((struct seq_file *) file->private_data)->private = serv;
> > > +	if (!err) {
> > > +		pp->get_serv = get_serv;
> > > +		((struct seq_file *) file->private_data)->private = pp;
> > > +	} else
> > > +		kfree(pp);
> > > +
> > >  	return err;
> > >  }
> > >  EXPORT_SYMBOL(svc_pool_stats_open);
> > >  
> > > +int svc_pool_stats_release(struct inode *inode, struct file *file)
> > > +{
> > > +	struct seq_file *seq = file->private_data;
> > > +
> > > +	kfree(seq->private);
> > > +	seq->private = NULL;
> > > +	return seq_release(inode, file);
> > > +}
> > > +EXPORT_SYMBOL(svc_pool_stats_release);
> > > +
> > >  /*----------------------------------------------------------------------------*/
> > 
> > -- 
> > Jeff Layton <jlayton@kernel.org>
> > 
> 

-- 
Jeff Layton <jlayton@kernel.org>


* Re: [PATCH 0/5] sunrpc: not refcounting svc_serv
  2023-10-30  1:08 [PATCH 0/5] sunrpc: not refcounting svc_serv NeilBrown
                   ` (4 preceding siblings ...)
  2023-10-30  1:08 ` [PATCH 5/5] nfsd: rename nfsd_last_thread() to nfsd_destroy_serv() NeilBrown
@ 2023-10-31 15:39 ` Chuck Lever III
  2023-10-31 20:40   ` NeilBrown
  5 siblings, 1 reply; 21+ messages in thread
From: Chuck Lever III @ 2023-10-31 15:39 UTC (permalink / raw)
  To: Neil Brown
  Cc: Jeff Layton, Linux NFS Mailing List, Olga Kornievskaia, Dai Ngo,
	Tom Talpey



> On Oct 29, 2023, at 6:08 PM, NeilBrown <neilb@suse.de> wrote:
> 
> This patch set continues earlier work of improving how threads and
> services are managed.  Specifically it drop the refcount.
> 
> The refcount is always changed under the mutex, and almost always is
> exactly equal to the number of threads.  Those few cases where it is
> more than the number of threads can usefully be handled other ways as
> see in the patches.
> 
> The first patches fixes a potential use-after-free when adding a socket
> fails.  This might be the UAF that Jeff mentioned recently.
> 
> The second patch which removes the use of a refcount in pool_stats
> handling is more complex than I would have liked, but I think it is
> worth if for the result seen in 4/5 of substantial simplification.

So I need a v2 of this series, then...

 - Move 4/5 to the beginning of the series (patch 1 or 2, doesn't matter)

 - Add R-b, Fixes, and Cc: stable tags to those two patches

 - Address Jeff's other comments

AFAICT the 0-day bot reports were for the admin-revoked state series,
not this one.


--
Chuck Lever




* Re: [PATCH 0/5] sunrpc: not refcounting svc_serv
  2023-10-31 15:39 ` [PATCH 0/5] sunrpc: not refcounting svc_serv Chuck Lever III
@ 2023-10-31 20:40   ` NeilBrown
  2023-10-31 20:42     ` Chuck Lever III
  0 siblings, 1 reply; 21+ messages in thread
From: NeilBrown @ 2023-10-31 20:40 UTC (permalink / raw)
  To: Chuck Lever III
  Cc: Jeff Layton, Linux NFS Mailing List, Olga Kornievskaia, Dai Ngo,
	Tom Talpey

On Wed, 01 Nov 2023, Chuck Lever III wrote:
> 
> > On Oct 29, 2023, at 6:08 PM, NeilBrown <neilb@suse.de> wrote:
> > 
> > This patch set continues earlier work of improving how threads and
> > services are managed.  Specifically it drop the refcount.
> > 
> > The refcount is always changed under the mutex, and almost always is
> > exactly equal to the number of threads.  Those few cases where it is
> > more than the number of threads can usefully be handled other ways as
> > see in the patches.
> > 
> > The first patches fixes a potential use-after-free when adding a socket
> > fails.  This might be the UAF that Jeff mentioned recently.
> > 
> > The second patch which removes the use of a refcount in pool_stats
> > handling is more complex than I would have liked, but I think it is
> > worth if for the result seen in 4/5 of substantial simplification.
> 
> So I need a v2 of this series, then...
> 
>  - Move 4/5 to the beginning of the series (patch 1 or 2, doesn't matter)

I don't understand....  2 and 3 are necessary prerequisites for 4.  They
remove places where the refcount is used.

> 
>  - Add R-b, Fixes, and Cc: stable tags to those two patches

Will do

> 
>  - Address Jeff's other comments

Yep.

> 
> AFAICT the 0-day bot reports were for the admin-revoked state series,
> not this one.

Agreed.

NeilBrown

> 
> 
> --
> Chuck Lever
> 
> 
> 



* Re: [PATCH 0/5] sunrpc: not refcounting svc_serv
  2023-10-31 20:40   ` NeilBrown
@ 2023-10-31 20:42     ` Chuck Lever III
  2023-10-31 20:46       ` NeilBrown
  0 siblings, 1 reply; 21+ messages in thread
From: Chuck Lever III @ 2023-10-31 20:42 UTC (permalink / raw)
  To: Neil Brown
  Cc: Jeff Layton, Linux NFS Mailing List, Olga Kornievskaia, Dai Ngo,
	Tom Talpey



> On Oct 31, 2023, at 1:40 PM, NeilBrown <neilb@suse.de> wrote:
> 
> On Wed, 01 Nov 2023, Chuck Lever III wrote:
>> 
>>> On Oct 29, 2023, at 6:08 PM, NeilBrown <neilb@suse.de> wrote:
>>> 
>>> This patch set continues earlier work of improving how threads and
>>> services are managed.  Specifically it drop the refcount.
>>> 
>>> The refcount is always changed under the mutex, and almost always is
>>> exactly equal to the number of threads.  Those few cases where it is
>>> more than the number of threads can usefully be handled other ways as
>>> see in the patches.
>>> 
>>> The first patches fixes a potential use-after-free when adding a socket
>>> fails.  This might be the UAF that Jeff mentioned recently.
>>> 
>>> The second patch which removes the use of a refcount in pool_stats
>>> handling is more complex than I would have liked, but I think it is
>>> worth if for the result seen in 4/5 of substantial simplification.
>> 
>> So I need a v2 of this series, then...
>> 
>> - Move 4/5 to the beginning of the series (patch 1 or 2, doesn't matter)
> 
> I don't understand....  2 and 3 are necessary prerequisites for 4.  The
> remove places where the refcount is used.

Hrm. that's what I was afraid of.

Isn't there a fix in 4/5 that we want applied to stable kernels,
or did I misunderstand the email exchange between you and Jeff?


--
Chuck Lever




* Re: [PATCH 0/5] sunrpc: not refcounting svc_serv
  2023-10-31 20:42     ` Chuck Lever III
@ 2023-10-31 20:46       ` NeilBrown
  2023-10-31 20:50         ` Chuck Lever III
  0 siblings, 1 reply; 21+ messages in thread
From: NeilBrown @ 2023-10-31 20:46 UTC (permalink / raw)
  To: Chuck Lever III
  Cc: Jeff Layton, Linux NFS Mailing List, Olga Kornievskaia, Dai Ngo,
	Tom Talpey

On Wed, 01 Nov 2023, Chuck Lever III wrote:
> 
> > On Oct 31, 2023, at 1:40 PM, NeilBrown <neilb@suse.de> wrote:
> > 
> > On Wed, 01 Nov 2023, Chuck Lever III wrote:
> >> 
> >>> On Oct 29, 2023, at 6:08 PM, NeilBrown <neilb@suse.de> wrote:
> >>> 
> >>> This patch set continues earlier work of improving how threads and
> >>> services are managed.  Specifically it drop the refcount.
> >>> 
> >>> The refcount is always changed under the mutex, and almost always is
> >>> exactly equal to the number of threads.  Those few cases where it is
> >>> more than the number of threads can usefully be handled other ways as
> >>> see in the patches.
> >>> 
> >>> The first patches fixes a potential use-after-free when adding a socket
> >>> fails.  This might be the UAF that Jeff mentioned recently.
> >>> 
> >>> The second patch which removes the use of a refcount in pool_stats
> >>> handling is more complex than I would have liked, but I think it is
> >>> worth if for the result seen in 4/5 of substantial simplification.
> >> 
> >> So I need a v2 of this series, then...
> >> 
> >> - Move 4/5 to the beginning of the series (patch 1 or 2, doesn't matter)
> > 
> > I don't understand....  2 and 3 are necessary prerequisites for 4.  The
> > remove places where the refcount is used.
> 
> Hrm. that's what I was afraid of.
> 
> Isn't there a fix in 4/5 that we want applied to stable kernels,
> or did I misunderstand the email exchange between you and Jeff?
> 
That is 
 Commit bf32075256e9 ("NFSD: simplify error paths in nfsd_svc()")
which is already in nfsd-next.

NeilBrown


> 
> --
> Chuck Lever
> 
> 
> 



* Re: [PATCH 0/5] sunrpc: not refcounting svc_serv
  2023-10-31 20:46       ` NeilBrown
@ 2023-10-31 20:50         ` Chuck Lever III
  0 siblings, 0 replies; 21+ messages in thread
From: Chuck Lever III @ 2023-10-31 20:50 UTC (permalink / raw)
  To: Neil Brown
  Cc: Jeff Layton, Linux NFS Mailing List, Olga Kornievskaia, Dai Ngo,
	Tom Talpey



> On Oct 31, 2023, at 1:46 PM, NeilBrown <neilb@suse.de> wrote:
> 
> On Wed, 01 Nov 2023, Chuck Lever III wrote:
>> 
>>> On Oct 31, 2023, at 1:40 PM, NeilBrown <neilb@suse.de> wrote:
>>> 
>>> On Wed, 01 Nov 2023, Chuck Lever III wrote:
>>>> 
>>>>> On Oct 29, 2023, at 6:08 PM, NeilBrown <neilb@suse.de> wrote:
>>>>> 
>>>>> This patch set continues earlier work of improving how threads and
>>>>> services are managed.  Specifically it drop the refcount.
>>>>> 
>>>>> The refcount is always changed under the mutex, and almost always is
>>>>> exactly equal to the number of threads.  Those few cases where it is
>>>>> more than the number of threads can usefully be handled other ways as
>>>>> see in the patches.
>>>>> 
>>>>> The first patches fixes a potential use-after-free when adding a socket
>>>>> fails.  This might be the UAF that Jeff mentioned recently.
>>>>> 
>>>>> The second patch which removes the use of a refcount in pool_stats
>>>>> handling is more complex than I would have liked, but I think it is
>>>>> worth if for the result seen in 4/5 of substantial simplification.
>>>> 
>>>> So I need a v2 of this series, then...
>>>> 
>>>> - Move 4/5 to the beginning of the series (patch 1 or 2, doesn't matter)
>>> 
>>> I don't understand....  2 and 3 are necessary prerequisites for 4.  The
>>> remove places where the refcount is used.
>> 
>> Hrm. that's what I was afraid of.
>> 
>> Isn't there a fix in 4/5 that we want applied to stable kernels,
>> or did I misunderstand the email exchange between you and Jeff?
>> 
> That is 
> Commit bf32075256e9 ("NFSD: simplify error paths in nfsd_svc()")
> which is already in nfsd-next.

OK. I'm still confused, but let's just keep the patch ordering
the way it was in v1, then. We'll work out any missing fixes
as they arise.

--
Chuck Lever




end of thread, other threads:[~2023-10-31 20:50 UTC | newest]

Thread overview: 21+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-10-30  1:08 [PATCH 0/5] sunrpc: not refcounting svc_serv NeilBrown
2023-10-30  1:08 ` [PATCH 1/5] nfsd: call nfsd_last_thread() before final nfsd_put() NeilBrown
2023-10-30 13:15   ` Jeff Layton
2023-10-30 15:52     ` Chuck Lever
2023-10-30 16:41       ` Jeff Layton
2023-10-30  1:08 ` [PATCH 2/5] svc: don't hold reference for poolstats, only mutex NeilBrown
2023-10-30 13:01   ` Jeff Layton
2023-10-30 21:48     ` NeilBrown
2023-10-31 13:08       ` Jeff Layton
2023-10-30  1:08 ` [PATCH 3/5] nfsd: hold nfsd_mutex across entire netlink operation NeilBrown
2023-10-30 13:03   ` Jeff Layton
2023-10-30 13:27     ` Lorenzo Bianconi
2023-10-30  1:08 ` [PATCH 4/5] SUNRPC: discard sv_refcnt, and svc_get/svc_put NeilBrown
2023-10-30 13:15   ` Jeff Layton
2023-10-30  1:08 ` [PATCH 5/5] nfsd: rename nfsd_last_thread() to nfsd_destroy_serv() NeilBrown
2023-10-30 13:15   ` Jeff Layton
2023-10-31 15:39 ` [PATCH 0/5] sunrpc: not refcounting svc_serv Chuck Lever III
2023-10-31 20:40   ` NeilBrown
2023-10-31 20:42     ` Chuck Lever III
2023-10-31 20:46       ` NeilBrown
2023-10-31 20:50         ` Chuck Lever III
