From: Bruce Fields <bfields@fieldses.org>
To: Frank van der Linden <fllinden@amazon.com>
Cc: Trond Myklebust <trond.myklebust@hammerspace.com>,
	Chuck Lever <chuck.lever@oracle.com>,
	linux-nfs@vger.kernel.org
Subject: Re: nfsd filecache issues with v4
Date: Thu, 25 Jun 2020 15:48:21 -0400
Message-ID: <20200625194821.GA6605@fieldses.org>
In-Reply-To: <20200625191205.GC29600@dev-dsk-fllinden-2c-c1893d73.us-west-2.amazon.com>

On Thu, Jun 25, 2020 at 07:12:05PM +0000, Frank van der Linden wrote:
> On Thu, Jun 25, 2020 at 01:10:21PM -0400, Bruce Fields wrote:
> > 
> > On Mon, Jun 08, 2020 at 07:21:22PM +0000, Frank van der Linden wrote:
> > > So here's what happens: for NFSv4, files that are associated with an
> > > open stateid can stick around for a long time, as long as there's no
> > > CLOSE done on them. That's what's happening here. Also, since those files
> > > have a refcount of >= 2 (one for the hash table, one for being pointed to
> > > by the state), they are never eligible for removal from the file cache.
> > > Worse, since the code calls nfsd_file_gc inline if the upper bound is crossed
> > > (8192), every single operation that calls nfsd_file_acquire will end up
> > > walking the entire LRU, trying to free files, and failing every time.
> > > Walking a list with millions of files every single time isn't great.
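> > >
> > > As a rough sketch of the effect (hypothetical names; the real code in
> > > fs/nfsd/filecache.c uses list_lru and differs in detail):
> > >
> > > 	#include <linux/list.h>
> > > 	#include <linux/atomic.h>
> > >
> > > 	struct cache_entry {
> > > 		struct list_head	lru;
> > > 		atomic_t		ref;	/* 1 for the hash table, +1 per open stateid */
> > > 	};
> > >
> > > 	static void cache_gc(struct list_head *lru_head)
> > > 	{
> > > 		struct cache_entry *e, *tmp;
> > >
> > > 		list_for_each_entry_safe(e, tmp, lru_head, lru) {
> > > 			if (atomic_read(&e->ref) > 1)
> > > 				continue;	/* pinned by v4 state: walked, never freed */
> > > 			list_del(&e->lru);
> > > 			/* unhash and free e here */
> > > 		}
> > > 	}
> > >
> > > With millions of pinned entries on the LRU, every pass visits all of
> > > them and frees none.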
> > 
> > Thanks for tracking this down.
> > 
> > >
> > > There are some ways to fix this behavior like:
> > >
> > > * Always allow cached v4 file structures to be purged from the cache.
> > >   They will stick around, since they still have a reference, but
> > >   at least they won't slow down cache handling to a crawl.
> > 
> > If they have to stick around anyway it seems too bad not to be able to
> > use them.
> > 
> > I mean, just because a file's opened first by a v4 user doesn't mean it
> > might not also have other users, right?
> > 
> > Would it be that hard to make nfsd_file_gc() a little smarter?
> > 
> > I don't know, maybe it's not worth it.
> > 
> > --b.
> 
> Basically, opening, and keeping open, a very large number of v4 files on
> a client blows up these data structures:
> 
> * nfs4state.c:file_hashtbl (FH -> nfs4_file)
> 
> ...and with the addition of the filecache:
> 
> * filecache.c:nfsd_file_hashtbl (ino -> nfsd_file)
> * filecache.c:nfsd_file_lru
> 
> nfsd_file_lru causes the most pain; see my description above. But the other ones
> aren't without pain either. I tried an experiment where v4 files don't
> get added to the filecache, and file_hashtbl started showing up in perf
> output in a serious way. Not surprising, really, if you hash millions
> of items in a hash table with 256 buckets.
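> 
> For reference, this is roughly how that table is sized (a sketch along
> the lines of nfs4state.c; the exact definitions may differ):
> 
> 	#include <linux/list.h>
> 
> 	#define FILE_HASH_BITS	8
> 	#define FILE_HASH_SIZE	(1 << FILE_HASH_BITS)	/* 256 buckets, fixed */
> 
> 	static struct hlist_head file_hashtbl[FILE_HASH_SIZE];
> 
> With 1,000,000 nfs4_file entries, that's ~3,900 entries per bucket
> (1,000,000 / 256), so every lookup becomes a ~3,900-node list walk.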
> 
> I guess there is an argument to be made that it's such an extreme use case
> that it's not worth it.
> 
> On the other hand, clients running the server out of resources and slowing
> down everything by a lot for all clients isn't great either.
> 
> Generally, the only way to enforce an upper bound on resource usage without
> returning permanent errors (to which the client might react badly) seems
> to be to start invalidating v4 state under pressure. Clients should be prepared
> for this, as they should be able to recover from a server reboot. On the
> other hand, it's something you probably only should be doing as a last resort.
> I'm not sure whether consistent behavior for e.g. locks could be guaranteed;
> I am not very familiar with the locking code.

I don't think that would work, for a bunch of reasons.

Off hand, I don't think I've actually seen reports in the wild of hitting
resource limits due to the number of opens.  Though I admit it bothers me
that we're not prepared for it.

--b.

> Some ideas to alleviate the pain short of doing the above:
> 
> * Count v4 references to nfsd_file (filecache) structures. If there
>   is a v4 reference, don't have the file on the LRU, as it's pointless.
>   Do include it in the hash table so that v2/v3 users can find it. This
>   avoids the worst offender (nfsd_file_lru), but does still blow up
>   nfsd_file_hashtbl (see the first sketch below).
> 
> * Use rhashtable for the hashtables, as it can automatically grow/shrink
>   the number of buckets. I don't know if the rhashtable code could handle
>   the load, but it might be worth a shot (see the second sketch below).
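> 
> A sketch of the first idea, with hypothetical names (the real nfsd_file
> structure and its locking are more involved):
> 
> 	#include <linux/list.h>
> 	#include <linux/atomic.h>
> 
> 	struct nfsd_file_sketch {
> 		struct list_head	nf_lru;
> 		atomic_t		nf_v4_ref;	/* hypothetical count of v4 state refs */
> 	};
> 
> 	static LIST_HEAD(sketch_lru);
> 
> 	static void sketch_lru_add(struct nfsd_file_sketch *nf)
> 	{
> 		/* v4-pinned entries can never be freed by the GC walk,
> 		 * so keep them off the LRU entirely */
> 		if (atomic_read(&nf->nf_v4_ref) > 0)
> 			return;
> 		list_add_tail(&nf->nf_lru, &sketch_lru);
> 	}
> 
> For the second idea, rhashtable grows and shrinks its bucket array on
> the fly; usage would look something like this (key and params are
> illustrative, not what filecache.c would necessarily use):
> 
> 	#include <linux/rhashtable.h>
> 
> 	struct nf_entry {
> 		struct rhash_head	node;
> 		u64			ino;	/* hypothetical key */
> 	};
> 
> 	static const struct rhashtable_params nf_rht_params = {
> 		.key_len	= sizeof(u64),
> 		.key_offset	= offsetof(struct nf_entry, ino),
> 		.head_offset	= offsetof(struct nf_entry, node),
> 		.automatic_shrinking = true,	/* shrink as entries go away */
> 	};
> 
> After rhashtable_init() with those params, rhashtable_lookup_fast() and
> rhashtable_insert_fast() work as usual, and the bucket count tracks the
> number of entries instead of being fixed.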
> 
> - Frank
