linux-nfs.vger.kernel.org archive mirror
* RFC: Fixing net/sunrpc/cache.c: cache_listeners_exist() function for rogue process reading a 'channel' file
@ 2019-07-25 16:48 Dave Wysochanski
  2019-07-25 18:54 ` Bruce Fields
  0 siblings, 1 reply; 11+ messages in thread
From: Dave Wysochanski @ 2019-07-25 16:48 UTC (permalink / raw)
  To: NeilBrown, Bruce Fields; +Cc: Linux NFS Mailing List

Neil, Bruce, and others,

I want to see if we can improve cache_listeners_exist() so it cannot be
fooled at all by a random process reading a 'channel' file.  Prior
attempts have been made, and Neil, your most recent commit mitigated the
effects but doesn't completely solve the problem:
9d69338c8c5f "sunrpc/cache: handle missing listeners better"

Here are a couple of approaches, based on my understanding of the
interface and what any legitimate "user of the channel files" (i.e. a
daemon or userspace program, most if not all of which live in nfs-utils)
does in practice:
1) rather than tracking opens for read, track opens for write on the
channel file (i.e. the 'readers' member in cache_detail)
2) in addition to or in place of #1, track calls to cache_poll()

Basically the above would create a 'cache_daemon_exists()' function in
place of the existing 'cache_listeners_exist()' and then add some
further logic to it.  Do you think that is a valid approach or do you
see problems with it?
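
For reference, the interaction a legitimate channel daemon has with one
of these files boils down to roughly the following (a simplified sketch
only, not actual nfs-utils code; the path, buffer size and reply
handling are placeholders):

#include <fcntl.h>
#include <poll.h>
#include <unistd.h>

/* Hypothetical sketch of a channel daemon loop.  The key point is that
 * legitimate daemons open the file O_RDWR (or O_WRONLY if they only
 * answer), poll() for upcalls, read each request, and write a reply
 * line back. */
static void channel_loop(const char *path)
{
	char buf[8192];
	int fd = open(path, O_RDWR);	/* e.g. /proc/net/rpc/auth.unix.gid/channel */

	if (fd < 0)
		return;
	for (;;) {
		struct pollfd pfd = { .fd = fd, .events = POLLIN };

		if (poll(&pfd, 1, -1) < 0)
			break;
		if (pfd.revents & POLLIN) {
			ssize_t n = read(fd, buf, sizeof(buf));	/* one upcall request */

			if (n <= 0)
				continue;
			/* ... parse the request, look up the answer ... */
			/* ... write(fd, reply, replylen) completes the upcall ... */
		}
	}
	close(fd);
}

Approach #1 amounts to counting processes that reach the open() step
with write access; approach #2 would additionally (or instead) require
the poll() step.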

Because this keeps coming up in one form or another and is hard to
troubleshoot when it occurs, I think we should fix this once and for
all, so I'm looking for feedback on approaches.  I thought of going down
the road of a more elaborate daemon / kernel registration, but that
would require carefully making sure we have backward compatibility
across the various combinations of nfs-utils and kernel that may be
installed.  I think it may be worth looking at less invasive approaches
first, such as the above, but I'm open to feedback.

Thanks.



* Re: RFC: Fixing net/sunrpc/cache.c: cache_listeners_exist() function for rogue process reading a 'channel' file
  2019-07-25 16:48 RFC: Fixing net/sunrpc/cache.c: cache_listeners_exist() function for rogue process reading a 'channel' file Dave Wysochanski
@ 2019-07-25 18:54 ` Bruce Fields
  2019-07-25 21:44   ` [RFC PATCH] SUNRPC: Harden the cache 'channel' interface to only allow legitimate daemons Dave Wysochanski
  2019-07-26 22:33   ` [RFC PATCH] SUNRPC: Track writers of the 'channel' file to improve cache_listeners_exist Dave Wysochanski
  0 siblings, 2 replies; 11+ messages in thread
From: Bruce Fields @ 2019-07-25 18:54 UTC (permalink / raw)
  To: Dave Wysochanski; +Cc: NeilBrown, Linux NFS Mailing List

On Thu, Jul 25, 2019 at 12:48:31PM -0400, Dave Wysochanski wrote:
> Neil, Bruce, and others,
> 
> I want to see if we can improve cache_listeners_exist() to not be
> fooled at all by a random process reading a 'channel' file.  Prior
> attempts have been made and Neil your most recent commit mitigated the
> effects however doesn't really solve it completely:
> 9d69338c8c5f "sunrpc/cache: handle missing listeners better"
> 
> Here are a couple approaches, based on my understanding of the
> interface and what any legitimate "user of the channel files" (aka
> daemons or userspace programs, most if not all live in nfs-utils) do in
> practice:
> 1) rather than tracking opens for read, track opens for write on the
> channel file (i.e. the 'readers' member in cache_detail)

Assuming we've checked that none of those random processes are opening
for write, that sounds reasonable to me.

> 2) in addition to or in place of #1, track calls to cache_poll()

I'm not sure how this would work.  What exactly would be the rule, and
how would we document the required behavior for somebody working on the
userland (rpc.mountd) side?

> Because this keeps coming up in one shape or form and is hard to
> troubleshoot when it occurs, I think we should fix this once and for
> all so I'm looking for feedback on approaches.  I thought of going down
> the road of a more elaborate daemon / kernel registration but that
> would require carefully making sure we have backward compatibility when
> variants of nfs-utils and kernel are installed.

It might be worth at least sketching out a design to get an idea how
complicated it would be.  Agreed that backwards compatibility would be
the annoying part.

--b.


* [RFC PATCH] SUNRPC: Harden the cache 'channel' interface to only allow legitimate daemons
  2019-07-25 18:54 ` Bruce Fields
@ 2019-07-25 21:44   ` Dave Wysochanski
  2019-07-25 21:50     ` J. Bruce Fields
  2019-07-26 22:33   ` [RFC PATCH] SUNRPC: Track writers of the 'channel' file to improve cache_listeners_exist Dave Wysochanski
  1 sibling, 1 reply; 11+ messages in thread
From: Dave Wysochanski @ 2019-07-25 21:44 UTC (permalink / raw)
  To: bfields; +Cc: neilb, linux-nfs

The 'channel' interface has a heuristic that is based on the number of
times any process opens it for reading.  This has been shown to be
problematic: any rogue userspace process that just happens to open the
'channel' file for reading may throw off the kernel logic and even steal
a message from the kernel.

Harden this interface by adding a small daemon state machine based on what
the current userspace daemons actually do.  Specifically, they open the file
either RW or W-only, then issue a poll().  Once these two operations have
been done by the same pid, we mark that pid as 'registered' as the daemon for
this cache.  We then disallow reads or writes to the 'channel' file from any
other pid, returning EIO on any such attempt.

Signed-off-by: Dave Wysochanski <dwysocha@redhat.com>
---
 fs/nfsd/nfs4idmap.c          |  4 ++--
 include/linux/sunrpc/cache.h | 34 +++++++++++++++++++++++++----
 net/sunrpc/cache.c           | 52 +++++++++++++++++++++++++++++---------------
 3 files changed, 66 insertions(+), 24 deletions(-)

diff --git a/fs/nfsd/nfs4idmap.c b/fs/nfsd/nfs4idmap.c
index d1f2852..a1f1f02 100644
--- a/fs/nfsd/nfs4idmap.c
+++ b/fs/nfsd/nfs4idmap.c
@@ -187,7 +187,7 @@ static struct ent *idtoname_update(struct cache_detail *, struct ent *,
 	.cache_request	= idtoname_request,
 	.cache_parse	= idtoname_parse,
 	.cache_show	= idtoname_show,
-	.warn_no_listener = warn_no_idmapd,
+	.warn_no_daemon = warn_no_idmapd,
 	.match		= idtoname_match,
 	.init		= ent_init,
 	.update		= ent_init,
@@ -350,7 +350,7 @@ static struct ent *nametoid_update(struct cache_detail *, struct ent *,
 	.cache_request	= nametoid_request,
 	.cache_parse	= nametoid_parse,
 	.cache_show	= nametoid_show,
-	.warn_no_listener = warn_no_idmapd,
+	.warn_no_daemon = warn_no_idmapd,
 	.match		= nametoid_match,
 	.init		= ent_init,
 	.update		= ent_init,
diff --git a/include/linux/sunrpc/cache.h b/include/linux/sunrpc/cache.h
index c7f38e8..7fa9300 100644
--- a/include/linux/sunrpc/cache.h
+++ b/include/linux/sunrpc/cache.h
@@ -61,6 +61,31 @@ struct cache_head {
 
 #define	CACHE_NEW_EXPIRY 120	/* keep new things pending confirmation for 120 seconds */
 
+/*
+ * State machine for the userspace daemon servicing a cache.
+ * Only one pid may be registered to the 'channel' file at any given time.
+ * A pid must transition to the final "POLL" state to finish registration.
+ * Any read or write to a 'channel' file by any pid other than the registered
+ * daemon pid will result in an EIO.
+ *
+ * Close
+ * Open -------------------------> Open (daemon_pid = current)
+ *         open(channel)
+ *
+ * Open -------------------------> Poll
+ *          poll(channel) &&
+ *          daemon_pid == current
+ *
+ * Poll -------------------------> Close (daemon_pid = -1)
+ *          close(channel) &&
+ *          daemon_pid == current
+ */
+enum cache_daemon_state {
+	CACHE_DAEMON_STATE_CLOSE = 1, /* Close: daemon unknown */
+	CACHE_DAEMON_STATE_OPEN = 2,  /* Open: daemon open() 'channel' */
+	CACHE_DAEMON_STATE_POLL = 3   /* Poll: daemon poll() 'channel' */
+};
+
 struct cache_detail {
 	struct module *		owner;
 	int			hash_size;
@@ -83,7 +108,7 @@ struct cache_detail {
 	int			(*cache_show)(struct seq_file *m,
 					      struct cache_detail *cd,
 					      struct cache_head *h);
-	void			(*warn_no_listener)(struct cache_detail *cd,
+	void			(*warn_no_daemon)(struct cache_detail *cd,
 					      int has_died);
 
 	struct cache_head *	(*alloc)(void);
@@ -107,15 +132,16 @@ struct cache_detail {
 	/* fields for communication over channel */
 	struct list_head	queue;
 
-	atomic_t		readers;		/* how many time is /chennel open */
-	time_t			last_close;		/* if no readers, when did last close */
-	time_t			last_warn;		/* when we last warned about no readers */
+	time_t			last_close;		/* when did last close */
+	time_t			last_warn;		/* when we last warned about no daemon */
 
 	union {
 		struct proc_dir_entry	*procfs;
 		struct dentry		*pipefs;
 	};
 	struct net		*net;
+	int			daemon_pid;
+	enum cache_daemon_state daemon_state;
 };
 
 
diff --git a/net/sunrpc/cache.c b/net/sunrpc/cache.c
index 6f1528f..5ea24c8 100644
--- a/net/sunrpc/cache.c
+++ b/net/sunrpc/cache.c
@@ -38,7 +38,7 @@
 
 static bool cache_defer_req(struct cache_req *req, struct cache_head *item);
 static void cache_revisit_request(struct cache_head *item);
-static bool cache_listeners_exist(struct cache_detail *detail);
+static bool cache_daemon_exists(struct cache_detail *detail);
 
 static void cache_init(struct cache_head *h, struct cache_detail *detail)
 {
@@ -305,7 +305,7 @@ int cache_check(struct cache_detail *detail,
 				cache_fresh_unlocked(h, detail);
 				break;
 			}
-		} else if (!cache_listeners_exist(detail))
+		} else if (!cache_daemon_exists(detail))
 			rv = try_to_negate_entry(detail, h);
 	}
 
@@ -373,9 +373,10 @@ void sunrpc_init_cache_detail(struct cache_detail *cd)
 	spin_lock(&cache_list_lock);
 	cd->nextcheck = 0;
 	cd->entries = 0;
-	atomic_set(&cd->readers, 0);
 	cd->last_close = 0;
 	cd->last_warn = -1;
+	cd->daemon_pid = -1;
+	cd->daemon_state = CACHE_DAEMON_STATE_CLOSE;
 	list_add(&cd->others, &cache_list);
 	spin_unlock(&cache_list_lock);
 
@@ -810,6 +811,10 @@ static ssize_t cache_read(struct file *filp, char __user *buf, size_t count,
 	if (count == 0)
 		return 0;
 
+	if (cd->daemon_pid != task_pid_nr(current) ||
+	    cd->daemon_state != CACHE_DAEMON_STATE_POLL)
+		return -EIO;
+
 	inode_lock(inode); /* protect against multiple concurrent
 			      * readers on this file */
  again:
@@ -948,6 +953,10 @@ static ssize_t cache_write(struct file *filp, const char __user *buf,
 	if (!cd->cache_parse)
 		goto out;
 
+	if (cd->daemon_pid != task_pid_nr(current) ||
+	    cd->daemon_state != CACHE_DAEMON_STATE_POLL)
+		return -EIO;
+
 	inode_lock(inode);
 	ret = cache_downcall(mapping, buf, count, cd);
 	inode_unlock(inode);
@@ -964,6 +973,9 @@ static __poll_t cache_poll(struct file *filp, poll_table *wait,
 	struct cache_reader *rp = filp->private_data;
 	struct cache_queue *cq;
 
+	if (cd->daemon_pid == task_pid_nr(current))
+		cd->daemon_state = CACHE_DAEMON_STATE_POLL;
+
 	poll_wait(filp, &queue_wait, wait);
 
 	/* alway allow write */
@@ -1029,11 +1041,14 @@ static int cache_open(struct inode *inode, struct file *filp,
 		}
 		rp->offset = 0;
 		rp->q.reader = 1;
-		atomic_inc(&cd->readers);
 		spin_lock(&queue_lock);
 		list_add(&rp->q.list, &cd->queue);
 		spin_unlock(&queue_lock);
 	}
+	if (filp->f_mode & FMODE_WRITE) {
+		cd->daemon_pid = task_pid_nr(current);
+		cd->daemon_state = CACHE_DAEMON_STATE_OPEN;
+	}
 	filp->private_data = rp;
 	return 0;
 }
@@ -1063,7 +1078,10 @@ static int cache_release(struct inode *inode, struct file *filp,
 		kfree(rp);
 
 		cd->last_close = seconds_since_boot();
-		atomic_dec(&cd->readers);
+	}
+	if (cd->daemon_pid == task_pid_nr(current)) {
+		cd->daemon_pid = -1;
+		cd->daemon_state = CACHE_DAEMON_STATE_CLOSE;
 	}
 	module_put(cd->owner);
 	return 0;
@@ -1160,30 +1178,28 @@ void qword_addhex(char **bpp, int *lp, char *buf, int blen)
 }
 EXPORT_SYMBOL_GPL(qword_addhex);
 
-static void warn_no_listener(struct cache_detail *detail)
+static void warn_no_daemon(struct cache_detail *detail)
 {
 	if (detail->last_warn != detail->last_close) {
 		detail->last_warn = detail->last_close;
-		if (detail->warn_no_listener)
-			detail->warn_no_listener(detail, detail->last_close != 0);
+		if (detail->warn_no_daemon)
+			detail->warn_no_daemon(detail, detail->last_close != 0);
 	}
 }
 
-static bool cache_listeners_exist(struct cache_detail *detail)
+static bool cache_daemon_exists(struct cache_detail *detail)
 {
-	if (atomic_read(&detail->readers))
+	if (detail->daemon_pid != -1 &&
+	    detail->daemon_state == CACHE_DAEMON_STATE_POLL)
 		return true;
-	if (detail->last_close == 0)
-		/* This cache was never opened */
-		return false;
-	if (detail->last_close < seconds_since_boot() - 30)
+	if (detail->last_close > seconds_since_boot() - 30)
 		/*
 		 * We allow for the possibility that someone might
 		 * restart a userspace daemon without restarting the
 		 * server; but after 30 seconds, we give up.
 		 */
-		 return false;
-	return true;
+		 return true;
+	return false;
 }
 
 /*
@@ -1202,8 +1218,8 @@ int sunrpc_cache_pipe_upcall(struct cache_detail *detail, struct cache_head *h)
 	if (!detail->cache_request)
 		return -EINVAL;
 
-	if (!cache_listeners_exist(detail)) {
-		warn_no_listener(detail);
+	if (!cache_daemon_exists(detail)) {
+		warn_no_daemon(detail);
 		return -EINVAL;
 	}
 	if (test_bit(CACHE_CLEANED, &h->flags))
-- 
1.8.3.1



* Re: [RFC PATCH] SUNRPC: Harden the cache 'channel' interface to only allow legitimate daemons
  2019-07-25 21:44   ` [RFC PATCH] SUNRPC: Harden the cache 'channel' interface to only allow legitimate daemons Dave Wysochanski
@ 2019-07-25 21:50     ` J. Bruce Fields
  2019-07-26 13:59       ` Dave Wysochanski
  0 siblings, 1 reply; 11+ messages in thread
From: J. Bruce Fields @ 2019-07-25 21:50 UTC (permalink / raw)
  To: Dave Wysochanski; +Cc: neilb, linux-nfs

On Thu, Jul 25, 2019 at 05:44:48PM -0400, Dave Wysochanski wrote:
> The 'channel' interface has a heuristic that is based on the number of
> times any process opens it for reading.  This has shown to be problematic
> and any rogue userspace process that just happens to open the 'channel'
> file for reading may throw off the kernel logic and even steal a message
> from the kernel.
> 
> Harden this interface by making a small daemon state machine that is based
> on what the current userspace daemons actually do.  Specifically they open
> the file either RW or W-only, then issue a poll().  Once these two operations
> have been done by the same pid, we mark it as 'registered' as the daemon for
> this cache.  We then disallow any other pid to read or write to the 'channel'
> file by EIO any attempt.

Is that part really necessary?  mountd has a --num-threads option.
Despite the name, I believe that creates actual processes with fork()
(hence different pids).

--b.

> 
> Signed-off-by: Dave Wysochanski <dwysocha@redhat.com>
> ---
>  fs/nfsd/nfs4idmap.c          |  4 ++--
>  include/linux/sunrpc/cache.h | 34 +++++++++++++++++++++++++----
>  net/sunrpc/cache.c           | 52 +++++++++++++++++++++++++++++---------------
>  3 files changed, 66 insertions(+), 24 deletions(-)
> 
> diff --git a/fs/nfsd/nfs4idmap.c b/fs/nfsd/nfs4idmap.c
> index d1f2852..a1f1f02 100644
> --- a/fs/nfsd/nfs4idmap.c
> +++ b/fs/nfsd/nfs4idmap.c
> @@ -187,7 +187,7 @@ static struct ent *idtoname_update(struct cache_detail *, struct ent *,
>  	.cache_request	= idtoname_request,
>  	.cache_parse	= idtoname_parse,
>  	.cache_show	= idtoname_show,
> -	.warn_no_listener = warn_no_idmapd,
> +	.warn_no_daemon = warn_no_idmapd,
>  	.match		= idtoname_match,
>  	.init		= ent_init,
>  	.update		= ent_init,
> @@ -350,7 +350,7 @@ static struct ent *nametoid_update(struct cache_detail *, struct ent *,
>  	.cache_request	= nametoid_request,
>  	.cache_parse	= nametoid_parse,
>  	.cache_show	= nametoid_show,
> -	.warn_no_listener = warn_no_idmapd,
> +	.warn_no_daemon = warn_no_idmapd,
>  	.match		= nametoid_match,
>  	.init		= ent_init,
>  	.update		= ent_init,
> diff --git a/include/linux/sunrpc/cache.h b/include/linux/sunrpc/cache.h
> index c7f38e8..7fa9300 100644
> --- a/include/linux/sunrpc/cache.h
> +++ b/include/linux/sunrpc/cache.h
> @@ -61,6 +61,31 @@ struct cache_head {
>  
>  #define	CACHE_NEW_EXPIRY 120	/* keep new things pending confirmation for 120 seconds */
>  
> +/*
> + * State machine for the userspace daemon servicing a cache.
> + * Only one pid may be registered to the 'channel' file at any given time.
> + * A pid must transition to the final "POLL" state to finish registration.
> + * Any read or write to a 'channel' file by any pid other than the registered
> + * daemon pid will result in an EIO.
> + *
> + * Close
> + * Open -------------------------> Open (daemon_pid = current)
> + *         open(channel)
> + *
> + * Open -------------------------> Poll
> + *          poll(channel) &&
> + *          daemon_pid == current
> + *
> + * Poll -------------------------> Close (daemon_pid = -1)
> + *          close(channel) &&
> + *          daemon_pid == current
> + */
> +enum cache_daemon_state {
> +	CACHE_DAEMON_STATE_CLOSE = 1, /* Close: daemon unknown */
> +	CACHE_DAEMON_STATE_OPEN = 2,  /* Open: daemon open() 'channel' */
> +	CACHE_DAEMON_STATE_POLL = 3   /* Poll: daemon poll() 'channel' */
> +};
> +
>  struct cache_detail {
>  	struct module *		owner;
>  	int			hash_size;
> @@ -83,7 +108,7 @@ struct cache_detail {
>  	int			(*cache_show)(struct seq_file *m,
>  					      struct cache_detail *cd,
>  					      struct cache_head *h);
> -	void			(*warn_no_listener)(struct cache_detail *cd,
> +	void			(*warn_no_daemon)(struct cache_detail *cd,
>  					      int has_died);
>  
>  	struct cache_head *	(*alloc)(void);
> @@ -107,15 +132,16 @@ struct cache_detail {
>  	/* fields for communication over channel */
>  	struct list_head	queue;
>  
> -	atomic_t		readers;		/* how many time is /chennel open */
> -	time_t			last_close;		/* if no readers, when did last close */
> -	time_t			last_warn;		/* when we last warned about no readers */
> +	time_t			last_close;		/* when did last close */
> +	time_t			last_warn;		/* when we last warned about no daemon */
>  
>  	union {
>  		struct proc_dir_entry	*procfs;
>  		struct dentry		*pipefs;
>  	};
>  	struct net		*net;
> +	int			daemon_pid;
> +	enum cache_daemon_state daemon_state;
>  };
>  
>  
> diff --git a/net/sunrpc/cache.c b/net/sunrpc/cache.c
> index 6f1528f..5ea24c8 100644
> --- a/net/sunrpc/cache.c
> +++ b/net/sunrpc/cache.c
> @@ -38,7 +38,7 @@
>  
>  static bool cache_defer_req(struct cache_req *req, struct cache_head *item);
>  static void cache_revisit_request(struct cache_head *item);
> -static bool cache_listeners_exist(struct cache_detail *detail);
> +static bool cache_daemon_exists(struct cache_detail *detail);
>  
>  static void cache_init(struct cache_head *h, struct cache_detail *detail)
>  {
> @@ -305,7 +305,7 @@ int cache_check(struct cache_detail *detail,
>  				cache_fresh_unlocked(h, detail);
>  				break;
>  			}
> -		} else if (!cache_listeners_exist(detail))
> +		} else if (!cache_daemon_exists(detail))
>  			rv = try_to_negate_entry(detail, h);
>  	}
>  
> @@ -373,9 +373,10 @@ void sunrpc_init_cache_detail(struct cache_detail *cd)
>  	spin_lock(&cache_list_lock);
>  	cd->nextcheck = 0;
>  	cd->entries = 0;
> -	atomic_set(&cd->readers, 0);
>  	cd->last_close = 0;
>  	cd->last_warn = -1;
> +	cd->daemon_pid = -1;
> +	cd->daemon_state = CACHE_DAEMON_STATE_CLOSE;
>  	list_add(&cd->others, &cache_list);
>  	spin_unlock(&cache_list_lock);
>  
> @@ -810,6 +811,10 @@ static ssize_t cache_read(struct file *filp, char __user *buf, size_t count,
>  	if (count == 0)
>  		return 0;
>  
> +	if (cd->daemon_pid != task_pid_nr(current) ||
> +	    cd->daemon_state != CACHE_DAEMON_STATE_POLL)
> +		return -EIO;
> +
>  	inode_lock(inode); /* protect against multiple concurrent
>  			      * readers on this file */
>   again:
> @@ -948,6 +953,10 @@ static ssize_t cache_write(struct file *filp, const char __user *buf,
>  	if (!cd->cache_parse)
>  		goto out;
>  
> +	if (cd->daemon_pid != task_pid_nr(current) ||
> +	    cd->daemon_state != CACHE_DAEMON_STATE_POLL)
> +		return -EIO;
> +
>  	inode_lock(inode);
>  	ret = cache_downcall(mapping, buf, count, cd);
>  	inode_unlock(inode);
> @@ -964,6 +973,9 @@ static __poll_t cache_poll(struct file *filp, poll_table *wait,
>  	struct cache_reader *rp = filp->private_data;
>  	struct cache_queue *cq;
>  
> +	if (cd->daemon_pid == task_pid_nr(current))
> +		cd->daemon_state = CACHE_DAEMON_STATE_POLL;
> +
>  	poll_wait(filp, &queue_wait, wait);
>  
>  	/* alway allow write */
> @@ -1029,11 +1041,14 @@ static int cache_open(struct inode *inode, struct file *filp,
>  		}
>  		rp->offset = 0;
>  		rp->q.reader = 1;
> -		atomic_inc(&cd->readers);
>  		spin_lock(&queue_lock);
>  		list_add(&rp->q.list, &cd->queue);
>  		spin_unlock(&queue_lock);
>  	}
> +	if (filp->f_mode & FMODE_WRITE) {
> +		cd->daemon_pid = task_pid_nr(current);
> +		cd->daemon_state = CACHE_DAEMON_STATE_OPEN;
> +	}
>  	filp->private_data = rp;
>  	return 0;
>  }
> @@ -1063,7 +1078,10 @@ static int cache_release(struct inode *inode, struct file *filp,
>  		kfree(rp);
>  
>  		cd->last_close = seconds_since_boot();
> -		atomic_dec(&cd->readers);
> +	}
> +	if (cd->daemon_pid == task_pid_nr(current)) {
> +		cd->daemon_pid = -1;
> +		cd->daemon_state = CACHE_DAEMON_STATE_CLOSE;
>  	}
>  	module_put(cd->owner);
>  	return 0;
> @@ -1160,30 +1178,28 @@ void qword_addhex(char **bpp, int *lp, char *buf, int blen)
>  }
>  EXPORT_SYMBOL_GPL(qword_addhex);
>  
> -static void warn_no_listener(struct cache_detail *detail)
> +static void warn_no_daemon(struct cache_detail *detail)
>  {
>  	if (detail->last_warn != detail->last_close) {
>  		detail->last_warn = detail->last_close;
> -		if (detail->warn_no_listener)
> -			detail->warn_no_listener(detail, detail->last_close != 0);
> +		if (detail->warn_no_daemon)
> +			detail->warn_no_daemon(detail, detail->last_close != 0);
>  	}
>  }
>  
> -static bool cache_listeners_exist(struct cache_detail *detail)
> +static bool cache_daemon_exists(struct cache_detail *detail)
>  {
> -	if (atomic_read(&detail->readers))
> +	if (detail->daemon_pid != -1 &&
> +	    detail->daemon_state == CACHE_DAEMON_STATE_POLL)
>  		return true;
> -	if (detail->last_close == 0)
> -		/* This cache was never opened */
> -		return false;
> -	if (detail->last_close < seconds_since_boot() - 30)
> +	if (detail->last_close > seconds_since_boot() - 30)
>  		/*
>  		 * We allow for the possibility that someone might
>  		 * restart a userspace daemon without restarting the
>  		 * server; but after 30 seconds, we give up.
>  		 */
> -		 return false;
> -	return true;
> +		 return true;
> +	return false;
>  }
>  
>  /*
> @@ -1202,8 +1218,8 @@ int sunrpc_cache_pipe_upcall(struct cache_detail *detail, struct cache_head *h)
>  	if (!detail->cache_request)
>  		return -EINVAL;
>  
> -	if (!cache_listeners_exist(detail)) {
> -		warn_no_listener(detail);
> +	if (!cache_daemon_exists(detail)) {
> +		warn_no_daemon(detail);
>  		return -EINVAL;
>  	}
>  	if (test_bit(CACHE_CLEANED, &h->flags))
> -- 
> 1.8.3.1


* Re: [RFC PATCH] SUNRPC: Harden the cache 'channel' interface to only allow legitimate daemons
  2019-07-25 21:50     ` J. Bruce Fields
@ 2019-07-26 13:59       ` Dave Wysochanski
  0 siblings, 0 replies; 11+ messages in thread
From: Dave Wysochanski @ 2019-07-26 13:59 UTC (permalink / raw)
  To: J. Bruce Fields; +Cc: neilb, linux-nfs

On Thu, 2019-07-25 at 17:50 -0400, J. Bruce Fields wrote:
> On Thu, Jul 25, 2019 at 05:44:48PM -0400, Dave Wysochanski wrote:
> > The 'channel' interface has a heuristic that is based on the number
> > of
> > times any process opens it for reading.  This has shown to be
> > problematic
> > and any rogue userspace process that just happens to open the
> > 'channel'
> > file for reading may throw off the kernel logic and even steal a
> > message
> > from the kernel.
> > 
> > Harden this interface by making a small daemon state machine that
> > is based
> > on what the current userspace daemons actually do.  Specifically
> > they open
> > the file either RW or W-only, then issue a poll().  Once these two
> > operations
> > have been done by the same pid, we mark it as 'registered' as the
> > daemon for
> > this cache.  We then disallow any other pid to read or write to the
> > 'channel'
> > file by EIO any attempt.
> 
> Is that part really necessary?  mountd has a --num-threads option.
> Despite the name, I believe that creates actual process with fork
> (hence
> different pids).
> 

This is so any random process reading a channel file is not able to
"steal" a message.

Yes, you're right about mountd, and it should be perfectly valid for it
to fork, so the pid check needs to be fixed or removed.  I'll see if I
can work this out some other way without requiring a lockstep change of
the daemons and the kernel.
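
For context, the pattern that breaks a strict per-pid check looks
roughly like this (a hypothetical sketch of a forking daemon in the
style of mountd --num-threads, not actual rpc.mountd code;
service_channel() and num_threads are made-up placeholders):

#include <fcntl.h>
#include <unistd.h>

extern void service_channel(int fd);	/* placeholder: the poll/read/write loop */

static void start_workers(int num_threads)
{
	int fd = open("/proc/net/rpc/auth.unix.gid/channel", O_RDWR);
	int i;

	if (fd < 0)
		return;
	/* The fd is inherited across fork(), so reads and writes arrive
	 * from pids other than the one that did the open(); a strict
	 * registered-pid scheme would reject those with EIO. */
	for (i = 1; i < num_threads; i++)
		if (fork() == 0)
			break;		/* child: fall through to the service loop */
	service_channel(fd);		/* parent and children all service the channel */
}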


> --b.
> 
> > 
> > Signed-off-by: Dave Wysochanski <dwysocha@redhat.com>
> > ---
> >  fs/nfsd/nfs4idmap.c          |  4 ++--
> >  include/linux/sunrpc/cache.h | 34 +++++++++++++++++++++++++----
> >  net/sunrpc/cache.c           | 52 +++++++++++++++++++++++++++++---
> > ------------
> >  3 files changed, 66 insertions(+), 24 deletions(-)
> > 
> > diff --git a/fs/nfsd/nfs4idmap.c b/fs/nfsd/nfs4idmap.c
> > index d1f2852..a1f1f02 100644
> > --- a/fs/nfsd/nfs4idmap.c
> > +++ b/fs/nfsd/nfs4idmap.c
> > @@ -187,7 +187,7 @@ static struct ent *idtoname_update(struct
> > cache_detail *, struct ent *,
> >  	.cache_request	= idtoname_request,
> >  	.cache_parse	= idtoname_parse,
> >  	.cache_show	= idtoname_show,
> > -	.warn_no_listener = warn_no_idmapd,
> > +	.warn_no_daemon = warn_no_idmapd,
> >  	.match		= idtoname_match,
> >  	.init		= ent_init,
> >  	.update		= ent_init,
> > @@ -350,7 +350,7 @@ static struct ent *nametoid_update(struct
> > cache_detail *, struct ent *,
> >  	.cache_request	= nametoid_request,
> >  	.cache_parse	= nametoid_parse,
> >  	.cache_show	= nametoid_show,
> > -	.warn_no_listener = warn_no_idmapd,
> > +	.warn_no_daemon = warn_no_idmapd,
> >  	.match		= nametoid_match,
> >  	.init		= ent_init,
> >  	.update		= ent_init,
> > diff --git a/include/linux/sunrpc/cache.h
> > b/include/linux/sunrpc/cache.h
> > index c7f38e8..7fa9300 100644
> > --- a/include/linux/sunrpc/cache.h
> > +++ b/include/linux/sunrpc/cache.h
> > @@ -61,6 +61,31 @@ struct cache_head {
> >  
> >  #define	CACHE_NEW_EXPIRY 120	/* keep new things pending
> > confirmation for 120 seconds */
> >  
> > +/*
> > + * State machine for the userspace daemon servicing a cache.
> > + * Only one pid may be registered to the 'channel' file at any
> > given time.
> > + * A pid must transition to the final "POLL" state to finish
> > registration.
> > + * Any read or write to a 'channel' file by any pid other than the
> > registered
> > + * daemon pid will result in an EIO.
> > + *
> > + * Close
> > + * Open -------------------------> Open (daemon_pid = current)
> > + *         open(channel)
> > + *
> > + * Open -------------------------> Poll
> > + *          poll(channel) &&
> > + *          daemon_pid == current
> > + *
> > + * Poll -------------------------> Close (daemon_pid = -1)
> > + *          close(channel) &&
> > + *          daemon_pid == current
> > + */
> > +enum cache_daemon_state {
> > +	CACHE_DAEMON_STATE_CLOSE = 1, /* Close: daemon unknown */
> > +	CACHE_DAEMON_STATE_OPEN = 2,  /* Open: daemon open() 'channel'
> > */
> > +	CACHE_DAEMON_STATE_POLL = 3   /* Poll: daemon poll() 'channel'
> > */
> > +};
> > +
> >  struct cache_detail {
> >  	struct module *		owner;
> >  	int			hash_size;
> > @@ -83,7 +108,7 @@ struct cache_detail {
> >  	int			(*cache_show)(struct seq_file *m,
> >  					      struct cache_detail *cd,
> >  					      struct cache_head *h);
> > -	void			(*warn_no_listener)(struct
> > cache_detail *cd,
> > +	void			(*warn_no_daemon)(struct
> > cache_detail *cd,
> >  					      int has_died);
> >  
> >  	struct cache_head *	(*alloc)(void);
> > @@ -107,15 +132,16 @@ struct cache_detail {
> >  	/* fields for communication over channel */
> >  	struct list_head	queue;
> >  
> > -	atomic_t		readers;		/* how many time is
> > /chennel open */
> > -	time_t			last_close;		/* if no
> > readers, when did last close */
> > -	time_t			last_warn;		/* when we
> > last warned about no readers */
> > +	time_t			last_close;		/* when did
> > last close */
> > +	time_t			last_warn;		/* when we
> > last warned about no daemon */
> >  
> >  	union {
> >  		struct proc_dir_entry	*procfs;
> >  		struct dentry		*pipefs;
> >  	};
> >  	struct net		*net;
> > +	int			daemon_pid;
> > +	enum cache_daemon_state daemon_state;
> >  };
> >  
> >  
> > diff --git a/net/sunrpc/cache.c b/net/sunrpc/cache.c
> > index 6f1528f..5ea24c8 100644
> > --- a/net/sunrpc/cache.c
> > +++ b/net/sunrpc/cache.c
> > @@ -38,7 +38,7 @@
> >  
> >  static bool cache_defer_req(struct cache_req *req, struct
> > cache_head *item);
> >  static void cache_revisit_request(struct cache_head *item);
> > -static bool cache_listeners_exist(struct cache_detail *detail);
> > +static bool cache_daemon_exists(struct cache_detail *detail);
> >  
> >  static void cache_init(struct cache_head *h, struct cache_detail
> > *detail)
> >  {
> > @@ -305,7 +305,7 @@ int cache_check(struct cache_detail *detail,
> >  				cache_fresh_unlocked(h, detail);
> >  				break;
> >  			}
> > -		} else if (!cache_listeners_exist(detail))
> > +		} else if (!cache_daemon_exists(detail))
> >  			rv = try_to_negate_entry(detail, h);
> >  	}
> >  
> > @@ -373,9 +373,10 @@ void sunrpc_init_cache_detail(struct
> > cache_detail *cd)
> >  	spin_lock(&cache_list_lock);
> >  	cd->nextcheck = 0;
> >  	cd->entries = 0;
> > -	atomic_set(&cd->readers, 0);
> >  	cd->last_close = 0;
> >  	cd->last_warn = -1;
> > +	cd->daemon_pid = -1;
> > +	cd->daemon_state = CACHE_DAEMON_STATE_CLOSE;
> >  	list_add(&cd->others, &cache_list);
> >  	spin_unlock(&cache_list_lock);
> >  
> > @@ -810,6 +811,10 @@ static ssize_t cache_read(struct file *filp,
> > char __user *buf, size_t count,
> >  	if (count == 0)
> >  		return 0;
> >  
> > +	if (cd->daemon_pid != task_pid_nr(current) ||
> > +	    cd->daemon_state != CACHE_DAEMON_STATE_POLL)
> > +		return -EIO;
> > +
> >  	inode_lock(inode); /* protect against multiple concurrent
> >  			      * readers on this file */
> >   again:
> > @@ -948,6 +953,10 @@ static ssize_t cache_write(struct file *filp,
> > const char __user *buf,
> >  	if (!cd->cache_parse)
> >  		goto out;
> >  
> > +	if (cd->daemon_pid != task_pid_nr(current) ||
> > +	    cd->daemon_state != CACHE_DAEMON_STATE_POLL)
> > +		return -EIO;
> > +
> >  	inode_lock(inode);
> >  	ret = cache_downcall(mapping, buf, count, cd);
> >  	inode_unlock(inode);
> > @@ -964,6 +973,9 @@ static __poll_t cache_poll(struct file *filp,
> > poll_table *wait,
> >  	struct cache_reader *rp = filp->private_data;
> >  	struct cache_queue *cq;
> >  
> > +	if (cd->daemon_pid == task_pid_nr(current))
> > +		cd->daemon_state = CACHE_DAEMON_STATE_POLL;
> > +
> >  	poll_wait(filp, &queue_wait, wait);
> >  
> >  	/* alway allow write */
> > @@ -1029,11 +1041,14 @@ static int cache_open(struct inode *inode,
> > struct file *filp,
> >  		}
> >  		rp->offset = 0;
> >  		rp->q.reader = 1;
> > -		atomic_inc(&cd->readers);
> >  		spin_lock(&queue_lock);
> >  		list_add(&rp->q.list, &cd->queue);
> >  		spin_unlock(&queue_lock);
> >  	}
> > +	if (filp->f_mode & FMODE_WRITE) {
> > +		cd->daemon_pid = task_pid_nr(current);
> > +		cd->daemon_state = CACHE_DAEMON_STATE_OPEN;
> > +	}
> >  	filp->private_data = rp;
> >  	return 0;
> >  }
> > @@ -1063,7 +1078,10 @@ static int cache_release(struct inode
> > *inode, struct file *filp,
> >  		kfree(rp);
> >  
> >  		cd->last_close = seconds_since_boot();
> > -		atomic_dec(&cd->readers);
> > +	}
> > +	if (cd->daemon_pid == task_pid_nr(current)) {
> > +		cd->daemon_pid = -1;
> > +		cd->daemon_state = CACHE_DAEMON_STATE_CLOSE;
> >  	}
> >  	module_put(cd->owner);
> >  	return 0;
> > @@ -1160,30 +1178,28 @@ void qword_addhex(char **bpp, int *lp, char
> > *buf, int blen)
> >  }
> >  EXPORT_SYMBOL_GPL(qword_addhex);
> >  
> > -static void warn_no_listener(struct cache_detail *detail)
> > +static void warn_no_daemon(struct cache_detail *detail)
> >  {
> >  	if (detail->last_warn != detail->last_close) {
> >  		detail->last_warn = detail->last_close;
> > -		if (detail->warn_no_listener)
> > -			detail->warn_no_listener(detail, detail-
> > >last_close != 0);
> > +		if (detail->warn_no_daemon)
> > +			detail->warn_no_daemon(detail, detail-
> > >last_close != 0);
> >  	}
> >  }
> >  
> > -static bool cache_listeners_exist(struct cache_detail *detail)
> > +static bool cache_daemon_exists(struct cache_detail *detail)
> >  {
> > -	if (atomic_read(&detail->readers))
> > +	if (detail->daemon_pid != -1 &&
> > +	    detail->daemon_state == CACHE_DAEMON_STATE_POLL)
> >  		return true;
> > -	if (detail->last_close == 0)
> > -		/* This cache was never opened */
> > -		return false;
> > -	if (detail->last_close < seconds_since_boot() - 30)
> > +	if (detail->last_close > seconds_since_boot() - 30)
> >  		/*
> >  		 * We allow for the possibility that someone might
> >  		 * restart a userspace daemon without restarting the
> >  		 * server; but after 30 seconds, we give up.
> >  		 */
> > -		 return false;
> > -	return true;
> > +		 return true;
> > +	return false;
> >  }
> >  
> >  /*
> > @@ -1202,8 +1218,8 @@ int sunrpc_cache_pipe_upcall(struct
> > cache_detail *detail, struct cache_head *h)
> >  	if (!detail->cache_request)
> >  		return -EINVAL;
> >  
> > -	if (!cache_listeners_exist(detail)) {
> > -		warn_no_listener(detail);
> > +	if (!cache_daemon_exists(detail)) {
> > +		warn_no_daemon(detail);
> >  		return -EINVAL;
> >  	}
> >  	if (test_bit(CACHE_CLEANED, &h->flags))
> > -- 
> > 1.8.3.1




* [RFC PATCH] SUNRPC: Track writers of the 'channel' file to improve cache_listeners_exist
  2019-07-25 18:54 ` Bruce Fields
  2019-07-25 21:44   ` [RFC PATCH] SUNRPC: Harden the cache 'channel' interface to only allow legitimate daemons Dave Wysochanski
@ 2019-07-26 22:33   ` Dave Wysochanski
  2019-07-29 21:51     ` J. Bruce Fields
  1 sibling, 1 reply; 11+ messages in thread
From: Dave Wysochanski @ 2019-07-26 22:33 UTC (permalink / raw)
  To: bfields; +Cc: neilb, linux-nfs

The sunrpc cache interface is susceptible to being fooled by a rogue
process just reading a 'channel' file.  If this happens the kernel
may think a valid daemon exists to service the cache when it does not.
For example, the following may fool the kernel:
cat /proc/net/rpc/auth.unix.gid/channel

Change the tracking from readers to writers when considering whether a
listener exists, since all valid daemon processes open a channel file
either O_RDWR or O_WRONLY.  While this does not prevent a rogue process
from "stealing" a message from the kernel, it does at least improve the
kernel's perception of whether a valid process servicing the cache
exists.

Signed-off-by: Dave Wysochanski <dwysocha@redhat.com>
---
 include/linux/sunrpc/cache.h |  6 +++---
 net/sunrpc/cache.c           | 12 ++++++++----
 2 files changed, 11 insertions(+), 7 deletions(-)

diff --git a/include/linux/sunrpc/cache.h b/include/linux/sunrpc/cache.h
index c7f38e8..f7d086b 100644
--- a/include/linux/sunrpc/cache.h
+++ b/include/linux/sunrpc/cache.h
@@ -107,9 +107,9 @@ struct cache_detail {
 	/* fields for communication over channel */
 	struct list_head	queue;
 
-	atomic_t		readers;		/* how many time is /chennel open */
-	time_t			last_close;		/* if no readers, when did last close */
-	time_t			last_warn;		/* when we last warned about no readers */
+	atomic_t		writers;		/* how many time is /channel open */
+	time_t			last_close;		/* if no writers, when did last close */
+	time_t			last_warn;		/* when we last warned about no writers */
 
 	union {
 		struct proc_dir_entry	*procfs;
diff --git a/net/sunrpc/cache.c b/net/sunrpc/cache.c
index 6f1528f..a6a6190 100644
--- a/net/sunrpc/cache.c
+++ b/net/sunrpc/cache.c
@@ -373,7 +373,7 @@ void sunrpc_init_cache_detail(struct cache_detail *cd)
 	spin_lock(&cache_list_lock);
 	cd->nextcheck = 0;
 	cd->entries = 0;
-	atomic_set(&cd->readers, 0);
+	atomic_set(&cd->writers, 0);
 	cd->last_close = 0;
 	cd->last_warn = -1;
 	list_add(&cd->others, &cache_list);
@@ -1029,11 +1029,13 @@ static int cache_open(struct inode *inode, struct file *filp,
 		}
 		rp->offset = 0;
 		rp->q.reader = 1;
-		atomic_inc(&cd->readers);
+
 		spin_lock(&queue_lock);
 		list_add(&rp->q.list, &cd->queue);
 		spin_unlock(&queue_lock);
 	}
+	if (filp->f_mode & FMODE_WRITE)
+		atomic_inc(&cd->writers);
 	filp->private_data = rp;
 	return 0;
 }
@@ -1062,8 +1064,10 @@ static int cache_release(struct inode *inode, struct file *filp,
 		filp->private_data = NULL;
 		kfree(rp);
 
+	}
+	if (filp->f_mode & FMODE_WRITE) {
+		atomic_dec(&cd->writers);
 		cd->last_close = seconds_since_boot();
-		atomic_dec(&cd->readers);
 	}
 	module_put(cd->owner);
 	return 0;
@@ -1171,7 +1175,7 @@ static void warn_no_listener(struct cache_detail *detail)
 
 static bool cache_listeners_exist(struct cache_detail *detail)
 {
-	if (atomic_read(&detail->readers))
+	if (atomic_read(&detail->writers))
 		return true;
 	if (detail->last_close == 0)
 		/* This cache was never opened */
-- 
1.8.3.1



* Re: [RFC PATCH] SUNRPC: Track writers of the 'channel' file to improve cache_listeners_exist
  2019-07-26 22:33   ` [RFC PATCH] SUNRPC: Track writers of the 'channel' file to improve cache_listeners_exist Dave Wysochanski
@ 2019-07-29 21:51     ` J. Bruce Fields
  2019-07-30  0:02       ` NeilBrown
  0 siblings, 1 reply; 11+ messages in thread
From: J. Bruce Fields @ 2019-07-29 21:51 UTC (permalink / raw)
  To: Dave Wysochanski; +Cc: neilb, linux-nfs

On Fri, Jul 26, 2019 at 06:33:01PM -0400, Dave Wysochanski wrote:
> The sunrpc cache interface is susceptible to being fooled by a rogue
> process just reading a 'channel' file.  If this happens the kernel
> may think a valid daemon exists to service the cache when it does not.
> For example, the following may fool the kernel:
> cat /proc/net/rpc/auth.unix.gid/channel
> 
> Change the tracking of readers to writers when considering whether a
> listener exists as all valid daemon processes either open a channel
> file O_RDWR or O_WRONLY.  While this does not prevent a rogue process
> from "stealing" a message from the kernel, it does at least improve
> the kernels perception of whether a valid process servicing the cache
> exists.
> 
> Signed-off-by: Dave Wysochanski <dwysocha@redhat.com>
> ---
>  include/linux/sunrpc/cache.h |  6 +++---
>  net/sunrpc/cache.c           | 12 ++++++++----
>  2 files changed, 11 insertions(+), 7 deletions(-)
> 
> diff --git a/include/linux/sunrpc/cache.h b/include/linux/sunrpc/cache.h
> index c7f38e8..f7d086b 100644
> --- a/include/linux/sunrpc/cache.h
> +++ b/include/linux/sunrpc/cache.h
> @@ -107,9 +107,9 @@ struct cache_detail {
>  	/* fields for communication over channel */
>  	struct list_head	queue;
>  
> -	atomic_t		readers;		/* how many time is /chennel open */
> -	time_t			last_close;		/* if no readers, when did last close */
> -	time_t			last_warn;		/* when we last warned about no readers */
> +	atomic_t		writers;		/* how many time is /channel open */
> +	time_t			last_close;		/* if no writers, when did last close */
> +	time_t			last_warn;		/* when we last warned about no writers */
>  
>  	union {
>  		struct proc_dir_entry	*procfs;
> diff --git a/net/sunrpc/cache.c b/net/sunrpc/cache.c
> index 6f1528f..a6a6190 100644
> --- a/net/sunrpc/cache.c
> +++ b/net/sunrpc/cache.c
> @@ -373,7 +373,7 @@ void sunrpc_init_cache_detail(struct cache_detail *cd)
>  	spin_lock(&cache_list_lock);
>  	cd->nextcheck = 0;
>  	cd->entries = 0;
> -	atomic_set(&cd->readers, 0);
> +	atomic_set(&cd->writers, 0);
>  	cd->last_close = 0;
>  	cd->last_warn = -1;
>  	list_add(&cd->others, &cache_list);
> @@ -1029,11 +1029,13 @@ static int cache_open(struct inode *inode, struct file *filp,
>  		}
>  		rp->offset = 0;
>  		rp->q.reader = 1;
> -		atomic_inc(&cd->readers);
> +
>  		spin_lock(&queue_lock);
>  		list_add(&rp->q.list, &cd->queue);
>  		spin_unlock(&queue_lock);
>  	}
> +	if (filp->f_mode & FMODE_WRITE)
> +		atomic_inc(&cd->writers);

This patch would be even simpler if we just modified the condition of
the preceding if clause:

-	if (filp->f_mode & FMODE_READ) {
+	if (filp->f_mode & FMODE_WRITE) {

and then we could drop the following chunk completely.

Is there any reason not to do that?

Or if the resulting behavior isn't right for write-only openers, we
could make the condition ((filp->f_mode & FMODE_READ) && (filp->f_mode &
FMODE_WRITE)).

--b.

>  	filp->private_data = rp;
>  	return 0;
>  }
> @@ -1062,8 +1064,10 @@ static int cache_release(struct inode *inode, struct file *filp,
>  		filp->private_data = NULL;
>  		kfree(rp);
>  
> +	}
> +	if (filp->f_mode & FMODE_WRITE) {
> +		atomic_dec(&cd->writers);
>  		cd->last_close = seconds_since_boot();
> -		atomic_dec(&cd->readers);
>  	}
>  	module_put(cd->owner);
>  	return 0;
> @@ -1171,7 +1175,7 @@ static void warn_no_listener(struct cache_detail *detail)
>  
>  static bool cache_listeners_exist(struct cache_detail *detail)
>  {
> -	if (atomic_read(&detail->readers))
> +	if (atomic_read(&detail->writers))
>  		return true;
>  	if (detail->last_close == 0)
>  		/* This cache was never opened */
> -- 
> 1.8.3.1


* Re: [RFC PATCH] SUNRPC: Track writers of the 'channel' file to improve cache_listeners_exist
  2019-07-29 21:51     ` J. Bruce Fields
@ 2019-07-30  0:02       ` NeilBrown
  2019-07-30  0:49         ` J. Bruce Fields
  0 siblings, 1 reply; 11+ messages in thread
From: NeilBrown @ 2019-07-30  0:02 UTC (permalink / raw)
  To: J. Bruce Fields, Dave Wysochanski; +Cc: Neil F Brown, linux-nfs


On Mon, Jul 29 2019,  J. Bruce Fields  wrote:

> On Fri, Jul 26, 2019 at 06:33:01PM -0400, Dave Wysochanski wrote:
>> The sunrpc cache interface is susceptible to being fooled by a rogue
>> process just reading a 'channel' file.  If this happens the kernel
>> may think a valid daemon exists to service the cache when it does not.
>> For example, the following may fool the kernel:
>> cat /proc/net/rpc/auth.unix.gid/channel
>> 
>> Change the tracking of readers to writers when considering whether a
>> listener exists as all valid daemon processes either open a channel
>> file O_RDWR or O_WRONLY.  While this does not prevent a rogue process
>> from "stealing" a message from the kernel, it does at least improve
>> the kernels perception of whether a valid process servicing the cache
>> exists.
>> 
>> Signed-off-by: Dave Wysochanski <dwysocha@redhat.com>
>> ---
>>  include/linux/sunrpc/cache.h |  6 +++---
>>  net/sunrpc/cache.c           | 12 ++++++++----
>>  2 files changed, 11 insertions(+), 7 deletions(-)
>> 
>> diff --git a/include/linux/sunrpc/cache.h b/include/linux/sunrpc/cache.h
>> index c7f38e8..f7d086b 100644
>> --- a/include/linux/sunrpc/cache.h
>> +++ b/include/linux/sunrpc/cache.h
>> @@ -107,9 +107,9 @@ struct cache_detail {
>>  	/* fields for communication over channel */
>>  	struct list_head	queue;
>>  
>> -	atomic_t		readers;		/* how many time is /chennel open */
>> -	time_t			last_close;		/* if no readers, when did last close */
>> -	time_t			last_warn;		/* when we last warned about no readers */
>> +	atomic_t		writers;		/* how many time is /channel open */
>> +	time_t			last_close;		/* if no writers, when did last close */
>> +	time_t			last_warn;		/* when we last warned about no writers */
>>  
>>  	union {
>>  		struct proc_dir_entry	*procfs;
>> diff --git a/net/sunrpc/cache.c b/net/sunrpc/cache.c
>> index 6f1528f..a6a6190 100644
>> --- a/net/sunrpc/cache.c
>> +++ b/net/sunrpc/cache.c
>> @@ -373,7 +373,7 @@ void sunrpc_init_cache_detail(struct cache_detail *cd)
>>  	spin_lock(&cache_list_lock);
>>  	cd->nextcheck = 0;
>>  	cd->entries = 0;
>> -	atomic_set(&cd->readers, 0);
>> +	atomic_set(&cd->writers, 0);
>>  	cd->last_close = 0;
>>  	cd->last_warn = -1;
>>  	list_add(&cd->others, &cache_list);
>> @@ -1029,11 +1029,13 @@ static int cache_open(struct inode *inode, struct file *filp,
>>  		}
>>  		rp->offset = 0;
>>  		rp->q.reader = 1;
>> -		atomic_inc(&cd->readers);
>> +
>>  		spin_lock(&queue_lock);
>>  		list_add(&rp->q.list, &cd->queue);
>>  		spin_unlock(&queue_lock);
>>  	}
>> +	if (filp->f_mode & FMODE_WRITE)
>> +		atomic_inc(&cd->writers);
>
> This patch would be even simpler if we just modified the condition of
> the preceding if clause:
>
> -	if (filp->f_mode & FMODE_READ) {
> +	if (filp->f_mode & FMODE_WRITE) {
>
> and then we could drop the following chunk completely.
>
> Is there any reason not to do that?

I can see how this would be tempting, but I think the reason not to do
that is it is ... wrong.

The bulk of the code is for setting up context to support reading, so it
really should be conditional on FMODE_READ.
We always want to set that up, because if a process opens for read, and
not write, we want to respond properly to read requests.  This is useful
for debugging.

I think this patch from Dave is good.  A process opening for read might
just be inquisitive.  A program opening for write is making more of a
commitment to being involved in managing the cache.
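
To illustrate the distinction with a minimal sketch (using the path from
Dave's example; this is just an illustration, not code from the patch):

#include <fcntl.h>

int main(void)
{
	/* Inquisitive/debugging open: read-only, no longer counted. */
	int dbg = open("/proc/net/rpc/auth.unix.gid/channel", O_RDONLY);

	/* Daemon-style open: write access; with this patch it bumps
	 * cd->writers, so cache_listeners_exist() sees only processes
	 * prepared to answer upcalls. */
	int svc = open("/proc/net/rpc/auth.unix.gid/channel", O_RDWR);

	return (dbg < 0 || svc < 0);
}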

 Reviewed-by: NeilBrown <neilb@suse.com>

Thanks,
NeilBrown


>
> Or if the resulting behavior isn't right for write-only openers, we
> could make the condition ((filp->f_mode & FMODE_READ) && (filp->f_mode &
> FMODE_WRITE)).
>
> --b.
>
>>  	filp->private_data = rp;
>>  	return 0;
>>  }
>> @@ -1062,8 +1064,10 @@ static int cache_release(struct inode *inode, struct file *filp,
>>  		filp->private_data = NULL;
>>  		kfree(rp);
>>  
>> +	}
>> +	if (filp->f_mode & FMODE_WRITE) {
>> +		atomic_dec(&cd->writers);
>>  		cd->last_close = seconds_since_boot();
>> -		atomic_dec(&cd->readers);
>>  	}
>>  	module_put(cd->owner);
>>  	return 0;
>> @@ -1171,7 +1175,7 @@ static void warn_no_listener(struct cache_detail *detail)
>>  
>>  static bool cache_listeners_exist(struct cache_detail *detail)
>>  {
>> -	if (atomic_read(&detail->readers))
>> +	if (atomic_read(&detail->writers))
>>  		return true;
>>  	if (detail->last_close == 0)
>>  		/* This cache was never opened */
>> -- 
>> 1.8.3.1



* Re: [RFC PATCH] SUNRPC: Track writers of the 'channel' file to improve cache_listeners_exist
  2019-07-30  0:02       ` NeilBrown
@ 2019-07-30  0:49         ` J. Bruce Fields
  2019-07-30  1:14           ` NeilBrown
  0 siblings, 1 reply; 11+ messages in thread
From: J. Bruce Fields @ 2019-07-30  0:49 UTC (permalink / raw)
  To: NeilBrown; +Cc: Dave Wysochanski, Neil F Brown, linux-nfs

On Tue, Jul 30, 2019 at 10:02:37AM +1000, NeilBrown wrote:
> On Mon, Jul 29 2019,  J. Bruce Fields  wrote:
> 
> > On Fri, Jul 26, 2019 at 06:33:01PM -0400, Dave Wysochanski wrote:
> >> The sunrpc cache interface is susceptible to being fooled by a rogue
> >> process just reading a 'channel' file.  If this happens the kernel
> >> may think a valid daemon exists to service the cache when it does not.
> >> For example, the following may fool the kernel:
> >> cat /proc/net/rpc/auth.unix.gid/channel
> >> 
> >> Change the tracking of readers to writers when considering whether a
> >> listener exists as all valid daemon processes either open a channel
> >> file O_RDWR or O_WRONLY.  While this does not prevent a rogue process
> >> from "stealing" a message from the kernel, it does at least improve
> >> the kernels perception of whether a valid process servicing the cache
> >> exists.
> >> 
> >> Signed-off-by: Dave Wysochanski <dwysocha@redhat.com>
> >> ---
> >>  include/linux/sunrpc/cache.h |  6 +++---
> >>  net/sunrpc/cache.c           | 12 ++++++++----
> >>  2 files changed, 11 insertions(+), 7 deletions(-)
> >> 
> >> diff --git a/include/linux/sunrpc/cache.h b/include/linux/sunrpc/cache.h
> >> index c7f38e8..f7d086b 100644
> >> --- a/include/linux/sunrpc/cache.h
> >> +++ b/include/linux/sunrpc/cache.h
> >> @@ -107,9 +107,9 @@ struct cache_detail {
> >>  	/* fields for communication over channel */
> >>  	struct list_head	queue;
> >>  
> >> -	atomic_t		readers;		/* how many time is /chennel open */
> >> -	time_t			last_close;		/* if no readers, when did last close */
> >> -	time_t			last_warn;		/* when we last warned about no readers */
> >> +	atomic_t		writers;		/* how many time is /channel open */
> >> +	time_t			last_close;		/* if no writers, when did last close */
> >> +	time_t			last_warn;		/* when we last warned about no writers */
> >>  
> >>  	union {
> >>  		struct proc_dir_entry	*procfs;
> >> diff --git a/net/sunrpc/cache.c b/net/sunrpc/cache.c
> >> index 6f1528f..a6a6190 100644
> >> --- a/net/sunrpc/cache.c
> >> +++ b/net/sunrpc/cache.c
> >> @@ -373,7 +373,7 @@ void sunrpc_init_cache_detail(struct cache_detail *cd)
> >>  	spin_lock(&cache_list_lock);
> >>  	cd->nextcheck = 0;
> >>  	cd->entries = 0;
> >> -	atomic_set(&cd->readers, 0);
> >> +	atomic_set(&cd->writers, 0);
> >>  	cd->last_close = 0;
> >>  	cd->last_warn = -1;
> >>  	list_add(&cd->others, &cache_list);
> >> @@ -1029,11 +1029,13 @@ static int cache_open(struct inode *inode, struct file *filp,
> >>  		}
> >>  		rp->offset = 0;
> >>  		rp->q.reader = 1;
> >> -		atomic_inc(&cd->readers);
> >> +
> >>  		spin_lock(&queue_lock);
> >>  		list_add(&rp->q.list, &cd->queue);
> >>  		spin_unlock(&queue_lock);
> >>  	}
> >> +	if (filp->f_mode & FMODE_WRITE)
> >> +		atomic_inc(&cd->writers);
> >
> > This patch would be even simpler if we just modified the condition of
> > the preceding if clause:
> >
> > -	if (filp->f_mode & FMODE_READ) {
> > +	if (filp->f_mode & FMODE_WRITE) {
> >
> > and then we could drop the following chunk completely.
> >
> > Is there any reason not to do that?
> 
> I can see how this would be tempting, but I think the reason not to do
> that is it is ... wrong.
> 
> The bulk of the code is for setting up context to support reading, so it
> really should be conditional on FMODE_READ.
> We always want to set that up, because if a process opens for read, and
> not write, we want to respond properly to read requests.  This is useful
> for debugging.

How is it useful for debugging?

--b.

> I think this patch from Dave is good.  A process opening for read might
> just be inquisitive.  A program opening for write is making more of a
> commitment to being involved in managing the cache.
> 
>  Reviewed-by: NeilBrown <neilb@suse.com>
> 
> Thanks,
> NeilBrown
> 
> 
> >
> > Or if the resulting behavior isn't right for write-only openers, we
> > could make the condition ((filp->f_mode & FMODE_READ) && (filp->f_mode &
> > FMODE_WRITE)).
> >
> > --b.
> >
> >>  	filp->private_data = rp;
> >>  	return 0;
> >>  }
> >> @@ -1062,8 +1064,10 @@ static int cache_release(struct inode *inode, struct file *filp,
> >>  		filp->private_data = NULL;
> >>  		kfree(rp);
> >>  
> >> +	}
> >> +	if (filp->f_mode & FMODE_WRITE) {
> >> +		atomic_dec(&cd->writers);
> >>  		cd->last_close = seconds_since_boot();
> >> -		atomic_dec(&cd->readers);
> >>  	}
> >>  	module_put(cd->owner);
> >>  	return 0;
> >> @@ -1171,7 +1175,7 @@ static void warn_no_listener(struct cache_detail *detail)
> >>  
> >>  static bool cache_listeners_exist(struct cache_detail *detail)
> >>  {
> >> -	if (atomic_read(&detail->readers))
> >> +	if (atomic_read(&detail->writers))
> >>  		return true;
> >>  	if (detail->last_close == 0)
> >>  		/* This cache was never opened */
> >> -- 
> >> 1.8.3.1




* Re: [RFC PATCH] SUNRPC: Track writers of the 'channel' file to improve cache_listeners_exist
  2019-07-30  0:49         ` J. Bruce Fields
@ 2019-07-30  1:14           ` NeilBrown
  2019-07-30 15:46             ` J. Bruce Fields
  0 siblings, 1 reply; 11+ messages in thread
From: NeilBrown @ 2019-07-30  1:14 UTC (permalink / raw)
  To: J. Bruce Fields; +Cc: Dave Wysochanski, Neil F Brown, linux-nfs


On Mon, Jul 29 2019,  J. Bruce Fields  wrote:

> On Tue, Jul 30, 2019 at 10:02:37AM +1000, NeilBrown wrote:
>> On Mon, Jul 29 2019,  J. Bruce Fields  wrote:
>> 
>> > On Fri, Jul 26, 2019 at 06:33:01PM -0400, Dave Wysochanski wrote:
>> >> The sunrpc cache interface is susceptible to being fooled by a rogue
>> >> process just reading a 'channel' file.  If this happens the kernel
>> >> may think a valid daemon exists to service the cache when it does not.
>> >> For example, the following may fool the kernel:
>> >> cat /proc/net/rpc/auth.unix.gid/channel
>> >> 
>> >> Change the tracking of readers to writers when considering whether a
>> >> listener exists as all valid daemon processes either open a channel
>> >> file O_RDWR or O_WRONLY.  While this does not prevent a rogue process
>> >> from "stealing" a message from the kernel, it does at least improve
>> >> the kernels perception of whether a valid process servicing the cache
>> >> exists.
>> >> 
>> >> Signed-off-by: Dave Wysochanski <dwysocha@redhat.com>
>> >> ---
>> >>  include/linux/sunrpc/cache.h |  6 +++---
>> >>  net/sunrpc/cache.c           | 12 ++++++++----
>> >>  2 files changed, 11 insertions(+), 7 deletions(-)
>> >> 
>> >> diff --git a/include/linux/sunrpc/cache.h b/include/linux/sunrpc/cache.h
>> >> index c7f38e8..f7d086b 100644
>> >> --- a/include/linux/sunrpc/cache.h
>> >> +++ b/include/linux/sunrpc/cache.h
>> >> @@ -107,9 +107,9 @@ struct cache_detail {
>> >>  	/* fields for communication over channel */
>> >>  	struct list_head	queue;
>> >>  
>> >> -	atomic_t		readers;		/* how many time is /chennel open */
>> >> -	time_t			last_close;		/* if no readers, when did last close */
>> >> -	time_t			last_warn;		/* when we last warned about no readers */
>> >> +	atomic_t		writers;		/* how many time is /channel open */
>> >> +	time_t			last_close;		/* if no writers, when did last close */
>> >> +	time_t			last_warn;		/* when we last warned about no writers */
>> >>  
>> >>  	union {
>> >>  		struct proc_dir_entry	*procfs;
>> >> diff --git a/net/sunrpc/cache.c b/net/sunrpc/cache.c
>> >> index 6f1528f..a6a6190 100644
>> >> --- a/net/sunrpc/cache.c
>> >> +++ b/net/sunrpc/cache.c
>> >> @@ -373,7 +373,7 @@ void sunrpc_init_cache_detail(struct cache_detail *cd)
>> >>  	spin_lock(&cache_list_lock);
>> >>  	cd->nextcheck = 0;
>> >>  	cd->entries = 0;
>> >> -	atomic_set(&cd->readers, 0);
>> >> +	atomic_set(&cd->writers, 0);
>> >>  	cd->last_close = 0;
>> >>  	cd->last_warn = -1;
>> >>  	list_add(&cd->others, &cache_list);
>> >> @@ -1029,11 +1029,13 @@ static int cache_open(struct inode *inode, struct file *filp,
>> >>  		}
>> >>  		rp->offset = 0;
>> >>  		rp->q.reader = 1;
>> >> -		atomic_inc(&cd->readers);
>> >> +
>> >>  		spin_lock(&queue_lock);
>> >>  		list_add(&rp->q.list, &cd->queue);
>> >>  		spin_unlock(&queue_lock);
>> >>  	}
>> >> +	if (filp->f_mode & FMODE_WRITE)
>> >> +		atomic_inc(&cd->writers);
>> >
>> > This patch would be even simpler if we just modified the condition of
>> > the preceding if clause:
>> >
>> > -	if (filp->f_mode & FMODE_READ) {
>> > +	if (filp->f_mode & FMODE_WRITE) {
>> >
>> > and then we could drop the following chunk completely.
>> >
>> > Is there any reason not to do that?
>> 
>> I can see how this would be tempting, but I think the reason not to do
>> that is it is ... wrong.
>> 
>> The bulk of the code is for setting up context to support reading, so it
>> really should be conditional on FMODE_READ.
>> We always want to set that up, because if a process opens for read, and
>> not write, we want to respond properly to read requests.  This is useful
>> for debugging.
>
> How is it useful for debugging?

I often ask for

   grep . /proc/net/rpc/*/*

If nothing is reported for "channel", then I know that the problem isn't
that mountd is dead or stuck or similar.

NeilBrown


>
> --b.
>
>> I think this patch from Dave is good.  A process opening for read might
>> just be inquisitive.  A program opening for write is making more of a
>> commitment to being involved in managing the cache.
>> 
>>  Reviewed-by: NeilBrown <neilb@suse.com>
>> 
>> Thanks,
>> NeilBrown
>> 
>> 
>> >
>> > Or if the resulting behavior isn't right for write-only openers, we
>> > could make the condition ((filp->f_mode & FMODE_READ) && (filp->f_mode &
>> > FMODE_WRITE)).
>> >
>> > --b.
>> >
>> >>  	filp->private_data = rp;
>> >>  	return 0;
>> >>  }
>> >> @@ -1062,8 +1064,10 @@ static int cache_release(struct inode *inode, struct file *filp,
>> >>  		filp->private_data = NULL;
>> >>  		kfree(rp);
>> >>  
>> >> +	}
>> >> +	if (filp->f_mode & FMODE_WRITE) {
>> >> +		atomic_dec(&cd->writers);
>> >>  		cd->last_close = seconds_since_boot();
>> >> -		atomic_dec(&cd->readers);
>> >>  	}
>> >>  	module_put(cd->owner);
>> >>  	return 0;
>> >> @@ -1171,7 +1175,7 @@ static void warn_no_listener(struct cache_detail *detail)
>> >>  
>> >>  static bool cache_listeners_exist(struct cache_detail *detail)
>> >>  {
>> >> -	if (atomic_read(&detail->readers))
>> >> +	if (atomic_read(&detail->writers))
>> >>  		return true;
>> >>  	if (detail->last_close == 0)
>> >>  		/* This cache was never opened */
>> >> -- 
>> >> 1.8.3.1



* Re: [RFC PATCH] SUNRPC: Track writers of the 'channel' file to improve cache_listeners_exist
  2019-07-30  1:14           ` NeilBrown
@ 2019-07-30 15:46             ` J. Bruce Fields
  0 siblings, 0 replies; 11+ messages in thread
From: J. Bruce Fields @ 2019-07-30 15:46 UTC (permalink / raw)
  To: NeilBrown; +Cc: Dave Wysochanski, Neil F Brown, linux-nfs

On Tue, Jul 30, 2019 at 11:14:55AM +1000, NeilBrown wrote:
> On Mon, Jul 29 2019,  J. Bruce Fields  wrote:
> 
> > On Tue, Jul 30, 2019 at 10:02:37AM +1000, NeilBrown wrote:
> >> On Mon, Jul 29 2019,  J. Bruce Fields  wrote:
> >> 
> >> > On Fri, Jul 26, 2019 at 06:33:01PM -0400, Dave Wysochanski wrote:
> >> >> The sunrpc cache interface is susceptible to being fooled by a rogue
> >> >> process just reading a 'channel' file.  If this happens the kernel
> >> >> may think a valid daemon exists to service the cache when it does not.
> >> >> For example, the following may fool the kernel:
> >> >> cat /proc/net/rpc/auth.unix.gid/channel
> >> >> 
> >> >> Change the tracking of readers to writers when considering whether a
> >> >> listener exists as all valid daemon processes either open a channel
> >> >> file O_RDWR or O_WRONLY.  While this does not prevent a rogue process
> >> >> from "stealing" a message from the kernel, it does at least improve
> >> >> the kernels perception of whether a valid process servicing the cache
> >> >> exists.
> >> >> 
> >> >> Signed-off-by: Dave Wysochanski <dwysocha@redhat.com>
> >> >> ---
> >> >>  include/linux/sunrpc/cache.h |  6 +++---
> >> >>  net/sunrpc/cache.c           | 12 ++++++++----
> >> >>  2 files changed, 11 insertions(+), 7 deletions(-)
> >> >> 
> >> >> diff --git a/include/linux/sunrpc/cache.h b/include/linux/sunrpc/cache.h
> >> >> index c7f38e8..f7d086b 100644
> >> >> --- a/include/linux/sunrpc/cache.h
> >> >> +++ b/include/linux/sunrpc/cache.h
> >> >> @@ -107,9 +107,9 @@ struct cache_detail {
> >> >>  	/* fields for communication over channel */
> >> >>  	struct list_head	queue;
> >> >>  
> >> >> -	atomic_t		readers;		/* how many time is /chennel open */
> >> >> -	time_t			last_close;		/* if no readers, when did last close */
> >> >> -	time_t			last_warn;		/* when we last warned about no readers */
> >> >> +	atomic_t		writers;		/* how many time is /channel open */
> >> >> +	time_t			last_close;		/* if no writers, when did last close */
> >> >> +	time_t			last_warn;		/* when we last warned about no writers */
> >> >>  
> >> >>  	union {
> >> >>  		struct proc_dir_entry	*procfs;
> >> >> diff --git a/net/sunrpc/cache.c b/net/sunrpc/cache.c
> >> >> index 6f1528f..a6a6190 100644
> >> >> --- a/net/sunrpc/cache.c
> >> >> +++ b/net/sunrpc/cache.c
> >> >> @@ -373,7 +373,7 @@ void sunrpc_init_cache_detail(struct cache_detail *cd)
> >> >>  	spin_lock(&cache_list_lock);
> >> >>  	cd->nextcheck = 0;
> >> >>  	cd->entries = 0;
> >> >> -	atomic_set(&cd->readers, 0);
> >> >> +	atomic_set(&cd->writers, 0);
> >> >>  	cd->last_close = 0;
> >> >>  	cd->last_warn = -1;
> >> >>  	list_add(&cd->others, &cache_list);
> >> >> @@ -1029,11 +1029,13 @@ static int cache_open(struct inode *inode, struct file *filp,
> >> >>  		}
> >> >>  		rp->offset = 0;
> >> >>  		rp->q.reader = 1;
> >> >> -		atomic_inc(&cd->readers);
> >> >> +
> >> >>  		spin_lock(&queue_lock);
> >> >>  		list_add(&rp->q.list, &cd->queue);
> >> >>  		spin_unlock(&queue_lock);
> >> >>  	}
> >> >> +	if (filp->f_mode & FMODE_WRITE)
> >> >> +		atomic_inc(&cd->writers);
> >> >
> >> > This patch would be even simpler if we just modified the condition of
> >> > the preceding if clause:
> >> >
> >> > -	if (filp->f_mode & FMODE_READ) {
> >> > +	if (filp->f_mode & FMODE_WRITE) {
> >> >
> >> > and then we could drop the following chunk completely.
> >> >
> >> > Is there any reason not to do that?
> >> 
> >> I can see how this would be tempting, but I think the reason not to do
> >> that is it is ... wrong.
> >> 
> >> The bulk of the code is for setting up context to support reading, so it
> >> really should be conditional on FMODE_READ.
> >> We always want to set that up, because if a process opens for read, and
> >> not write, we want to respond properly to read requests.  This is useful
> >> for debugging.
> >
> > How is it useful for debugging?
> 
> I often ask for
> 
>    grep . /proc/net/rpc/*/*
> 
> If nothing is reported for "channel", then I know that the problem isn't
> that mountd is dead or stuck or similar.

Eh, OK.  Anyway I've got no actual serious complaint.

Applying with the reviewed-by:.

--b.
