From: NeilBrown <neilb@suse.com>
To: "J. Bruce Fields" <bfields@fieldses.org>
Cc: Linux NFS Mailing List <linux-nfs@vger.kernel.org>
Subject: [PATCH] sunrpc/cache: handle missing listeners better.
Date: Fri, 22 Mar 2019 13:16:56 +1100
Message-ID: <87pnqj64br.fsf@notabene.neil.brown.name>



If no handler (such as rpc.mountd) has opened a cache 'channel',
the sunrpc cache responds to all lookup requests with -ENOENT.
This is particularly important for the auth.unix.gid cache,
which is optional.

If the channel was open briefly and an upcall was written to it,
that upcall remains pending even after the handler closes the
channel.  When an upcall is pending, the code currently doesn't
check whether there are still listeners; it only performs that
check before sending a new upcall.
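
For context, the relevant branch of cache_check() has roughly this
shape before the patch (a simplified paraphrase, not the exact
source; helper names may differ between kernel versions):

	if (rv == -EAGAIN) {
		if (!test_and_set_bit(CACHE_PENDING, &h->flags)) {
			/* No upcall pending yet - send one.  The only
			 * cache_listeners_exist() check happens inside
			 * this upcall path.
			 */
			switch (cache_make_upcall(detail, h)) {
			case -EINVAL:
				rv = try_to_negate_entry(detail, h);
				break;
			case -EAGAIN:
				cache_fresh_unlocked(h, detail);
				break;
			}
		}
		/* If CACHE_PENDING was already set, we fall through
		 * and wait, never re-checking for listeners.
		 */
	}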

As the cache treats a recently closed channel (closed less than
30 seconds ago) as "potentially still open", there is a
reasonably sized window during which a request can become pending
in a closed channel and thereby block lookups indefinitely.
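
The 30-second grace period lives in cache_listeners_exist(), which
looks roughly like this (paraphrased; field names may vary between
kernel versions):

	static bool cache_listeners_exist(struct cache_detail *detail)
	{
		if (atomic_read(&detail->readers))
			return true;
		if (detail->last_close == 0)
			/* This cache was never opened */
			return false;
		if (detail->last_close < seconds_since_boot() - 30)
			/* Allow for a daemon being restarted */
			return false;
		return true;
	}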

This can easily be demonstrated by running
  cat /proc/net/rpc/auth.unix.gid/channel

and then trying to mount an NFS filesystem from this host.  It
will block indefinitely (unless mountd is run with --manage-gids,
or krb5 is used).

When cache_check() finds that an upcall is pending, it should also
perform the cache_listeners_exist() test.  If no listeners exist,
the request should be negated.
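
"Negated" means the entry is marked CACHE_NEGATIVE so that lookups
return -ENOENT instead of waiting.  try_to_negate_entry() does
roughly the following (paraphrased; signatures may vary between
kernel versions):

	static int try_to_negate_entry(struct cache_detail *detail,
				       struct cache_head *h)
	{
		int rv;

		write_lock(&detail->hash_lock);
		rv = cache_is_valid(h);
		if (rv == -EAGAIN) {
			set_bit(CACHE_NEGATIVE, &h->flags);
			cache_fresh_locked(h, seconds_since_boot() +
					   CACHE_NEGATIVE_TIMEOUT, detail);
			rv = -ENOENT;
		}
		write_unlock(&detail->hash_lock);
		cache_fresh_unlocked(h, detail);
		return rv;
	}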

With this change in place, there can still be a 30-second wait on
mount until the cache gives up waiting for a handler to come back,
but this is much better than an indefinite wait.

Signed-off-by: NeilBrown <neilb@suse.com>
---
 net/sunrpc/cache.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/net/sunrpc/cache.c b/net/sunrpc/cache.c
index 12bb23b8e0c5..be9e29385adc 100644
--- a/net/sunrpc/cache.c
+++ b/net/sunrpc/cache.c
@@ -40,6 +40,7 @@
 
 static bool cache_defer_req(struct cache_req *req, struct cache_head *item);
 static void cache_revisit_request(struct cache_head *item);
+static bool cache_listeners_exist(struct cache_detail *detail);
 
 static void cache_init(struct cache_head *h, struct cache_detail *detail)
 {
@@ -303,7 +304,8 @@ int cache_check(struct cache_detail *detail,
 				cache_fresh_unlocked(h, detail);
 				break;
 			}
-		}
+		} else if (!cache_listeners_exist(detail))
+			rv = try_to_negate_entry(detail, h);
 	}
 
 	if (rv == -EAGAIN) {
-- 
2.14.0.rc0.dirty



