From: Daire Byrne <daire@dneg.com>
To: Jeff Layton <jlayton@kernel.org>
Cc: linux-nfs <linux-nfs@vger.kernel.org>,
	linux-cachefs <linux-cachefs@redhat.com>
Subject: Re: [Linux-cachefs] Adventures in NFS re-exporting
Date: Tue, 13 Oct 2020 10:59:26 +0100 (BST)
Message-ID: <1855231972.64370906.1602583166516.JavaMail.zimbra@dneg.com>
In-Reply-To: <1106572445.58581277.1601902473249.JavaMail.zimbra@dneg.com>


----- On 5 Oct, 2020, at 13:54, Daire Byrne daire@dneg.com wrote:
> ----- On 1 Oct, 2020, at 11:36, Jeff Layton jlayton@kernel.org wrote:
> 
>> On Thu, 2020-10-01 at 01:09 +0100, Daire Byrne wrote:
>>> ----- On 30 Sep, 2020, at 20:30, Jeff Layton jlayton@kernel.org wrote:
>>> 
>>> > On Tue, 2020-09-22 at 13:31 +0100, Daire Byrne wrote:
>>> > > Hi,
>>> > > 
>>> > > I just thought I'd flesh out the other two issues I have found with re-exporting
>>> > > that are ultimately responsible for the biggest performance bottlenecks. And
>>> > > both of them revolve around the caching of metadata file lookups in the NFS
>>> > > client.
>>> > > 
>>> > > Especially for the case where we are re-exporting a server many milliseconds
>>> > > away (i.e. on-premise -> cloud), we want to be able to control how much the
>>> > > client caches metadata and file data so that its many LAN clients all benefit
>>> > > from the re-export server only having to do the WAN lookups once (within a
>>> > > specified coherency time).
>>> > > 
>>> > > Keeping the file data in the vfs page cache or on disk using fscache/cachefiles
>>> > > is fairly straightforward, but keeping the metadata cached is particularly
>>> > > difficult. And without the cached metadata we introduce long delays before we
>>> > > can serve the already present and locally cached file data to many waiting
>>> > > clients.
>>> > > 
>>> > > ----- On 7 Sep, 2020, at 18:31, Daire Byrne daire@dneg.com wrote:
>>> > > > 2) If we cache metadata on the re-export server using actimeo=3600,nocto we can
>>> > > > cut the network packets back to the origin server to zero for repeated lookups.
>>> > > > However, if a client of the re-export server walks paths and memory maps those
>>> > > > files (i.e. loading an application), the re-export server starts issuing
>>> > > > unexpected calls back to the origin server again, ignoring/invalidating the
>>> > > > re-export server's NFS client cache. We worked around this by patching an
>>> > > > inode/iversion validity check in inode.c so that the NFS client cache on the
>>> > > > re-export server is used. I'm not sure about the correctness of this patch but
>>> > > > it works for our corner case.
>>> > > 
>>> > > If we use actimeo=3600,nocto (say) to mount a remote software volume on the
>>> > > re-export server, we can successfully cache the loading of applications and
>>> > > walking of paths directly on the re-export server such that after a couple of
>>> > > runs, there are practically zero packets back to the originating NFS server
>>> > > (great!). But, if we then do the same thing on a client which is mounting that
>>> > > re-export server, the re-export server now starts issuing lots of calls back to
>>> > > the originating server and invalidating its client cache (bad!).
>>> > > 
>>> > > I'm not exactly sure why, but the iversion of the inode gets changed locally
>>> > > (due to atime modification?) most likely via invocation of method
>>> > > inode_inc_iversion_raw. Each time it gets incremented the following call to
>>> > > validate attributes detects changes causing it to be reloaded from the
>>> > > originating server.
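
To illustrate the mechanism being described here: the client keeps the server's
change attribute as a raw copy in i_version, and anything that bumps the local
copy makes the next comparison mismatch, which forces the attributes to be
refetched from the originating server. Below is a tiny userspace model (purely
illustrative; the struct and helper names are made up and this is not the
kernel code):

/* Toy model of change-attribute driven revalidation (illustration only). */
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

struct toy_inode {
	uint64_t i_version;	/* raw copy of the server's change attribute */
};

/* Would the client consider its cached attributes stale? */
static bool needs_revalidate(const struct toy_inode *inode,
			     uint64_t server_change_attr)
{
	/* Mirrors an inode_eq_iversion_raw()-style raw comparison. */
	return inode->i_version != server_change_attr;
}

int main(void)
{
	struct toy_inode inode = { .i_version = 42 };
	uint64_t server_change_attr = 42;

	printf("revalidate? %d\n", needs_revalidate(&inode, server_change_attr)); /* 0 */

	/* Anything that bumps the local copy (an inode_inc_iversion_raw()-style
	 * increment) makes the next check mismatch, so the client refetches. */
	inode.i_version++;
	printf("revalidate? %d\n", needs_revalidate(&inode, server_change_attr)); /* 1 */
	return 0;
}
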
>>> > > 
>>> > 
>>> > I'd expect the change attribute to track what's in the actual inode on the
>>> > "home" server. The NFS client is supposed to (mostly) keep the raw
>>> > change attribute in its i_version field.
>>> > 
>>> > The only place we call inode_inc_iversion_raw is in
>>> > nfs_inode_add_request, which I don't think you'd be hitting unless you
>>> > were writing to the file while holding a write delegation.
>>> > 
>>> > What sort of server is hosting the actual data in your setup?
>>> 
>>> We mostly use RHEL7.6 NFS servers with XFS backed filesystems and a couple of
>>> (older) Netapps too. The re-export server is running the latest mainline
>>> kernel(s).
>>> 
>>> As far as I can make out, both these originating (home) server types exhibit a
>>> similar (but not exactly the same) effect on the Linux NFS client cache when it
>>> is being re-exported and accessed by other clients. I can replicate it when
>>> only using a read-only mount at every hop so I don't think that writes are
>>> related.
>>> 
>>> Our RHEL7 NFS servers actually mount XFS with noatime too so any atime updates
>>> that might be causing this client invalidation (which is what I initially
>>> thought) are ultimately a wasted effort.
>>> 
>> 
>> Ok. I suspect there is a bug here somewhere, but with such a complicated
>> setup it's not clear to me where that bug would be. You
>> might need to do some packet sniffing and look at what the servers are
>> sending for change attributes.
>> 
>> nfsd4_change_attribute does mix in the ctime, so your hunch about the
>> atime may be correct. atime updates imply a ctime update and that could
>> cause nfsd to continually send a new one, even on files that aren't
>> being changed.
>> 
>> It might be interesting to doctor nfsd4_change_attribute() to not mix in
>> the ctime and see whether that improves things. If it does, then we may
>> want to teach nfsd how to avoid doing that for certain types of
>> filesystems.
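
A rough userspace model of what Jeff describes may help (purely illustrative;
the real helper is nfsd4_change_attribute() on the nfsd side, and the exact bit
layout below is an assumption): once ctime is mixed into the change attribute,
any ctime movement changes the derived value even though i_version has not
moved, so the re-export server's client treats the file as changed. Returning
only the i_version part models the suggested experiment for filesystems that
maintain i_version.

/* Toy model of a ctime-mixed change attribute (illustration only). */
#include <stdio.h>
#include <stdint.h>

struct toy_stat {
	int64_t ctime_sec;
	long    ctime_nsec;
};

/* Hypothetical helper: mix ctime into the change attribute, roughly in the
 * spirit of nfsd4_change_attribute() (the exact layout is an assumption). */
static uint64_t change_attr_with_ctime(const struct toy_stat *st, uint64_t iversion)
{
	uint64_t c = (uint64_t)st->ctime_sec;

	c <<= 30;
	c += (uint64_t)st->ctime_nsec;
	return c + iversion;
}

/* The suggested experiment: trust i_version alone where the fs maintains it. */
static uint64_t change_attr_iversion_only(const struct toy_stat *st, uint64_t iversion)
{
	(void)st;
	return iversion;
}

int main(void)
{
	struct toy_stat st = { .ctime_sec = 1602583166, .ctime_nsec = 0 };
	uint64_t iversion = 7;	/* file version: does not change below */

	uint64_t before_mixed = change_attr_with_ctime(&st, iversion);
	uint64_t before_plain = change_attr_iversion_only(&st, iversion);

	st.ctime_sec += 1;	/* e.g. a ctime-only update, no data change */

	printf("mixed change attr moved: %d\n",
	       change_attr_with_ctime(&st, iversion) != before_mixed);		/* 1 */
	printf("iversion-only moved:     %d\n",
	       change_attr_iversion_only(&st, iversion) != before_plain);	/* 0 */
	return 0;
}
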
> 
> Okay, I started to run back through all my tests again with various combinations
> of server, client mount options, NFS version etc. with the intention of packet
> capturing as Jeff has suggested.
> 
> But I quickly realised that I had mixed up some previous results before I
> reported them here. The summary is that using an NFS RHEL76 server, a client
> mounting with a recent mainline kernel and re-exporting using NFSv4.x all the
> way through does NOT invalidate the re-export server's NFS client cache
> (great!), contrary to what I had assumed before. It does when we mount the originating RHEL7
> server using NFSv3 and re-export, but not with any version of NFSv4 on Linux.
> 
> But I think I know how I got confused - the Netapp NFSv4 case is different. When
> we mount our (old) 7-mode Netapp using NFSv4.0 and re-export that, the
> re-export server's client cache is invalidated often in the same way as for an
> NFSv3 server. On top of that, I think I mistook some of the NFSv4
> client's natural dropping of metadata from page cache as client invalidations
> caused by the re-export and client access (without vfs_cache_pressure=0 and see
> my #3 bullet point).
> 
> Both of these conspired to make me think that both NFSv3 AND NFSv4 re-exporting
> showed the same issue when in fact, it's just NFSv3 and the Netapp's v4.0 that
> require my "hack" to stop the client cache being invalidated. Sorry for any
> confusion (it is indeed a complicated setup!). Let me summarise then once and
> for all:
> 
> - rhel76 server (xfs noatime) -> re-export server (vers=4.x,nocto,actimeo=3600,ro;
>   vfs_cache_pressure=0) = good client cache metadata performance; my hacky
>   patch is not required.
> - rhel76 server (xfs noatime) -> re-export server (vers=3,nocto,actimeo=3600,ro;
>   vfs_cache_pressure=0) = bad performance (new lookups & getattrs); my hacky
>   patch is required for better performance.
> - netapp (7-mode) -> re-export server (vers=4.0,nocto,actimeo=3600,ro;
>   vfs_cache_pressure=0) = bad performance; my hacky patch is required for
>   better performance.
> 
> So for Jeff's original intention of proxying an NFSv3 server -> NFSv4 clients by
> re-exporting, the metadata lookup performance will degrade severely as more
> clients access the same files because the re-export server's client cache is
> not being used as effectively (re-exported) and lookups are happening for the
> same files many times within the re-export server's actimeo even with
> vfs_cache_pressure=0.
> 
> For our particular use case, we could live without NFSv3 (and my horrible hack)
> except for the fact that the Netapp shows similar behaviour with NFSv4.0 (but
> Linux servers do not). I don't know if turning off atime updates on the Netapp
> volume will change anything - I might try it. Of course, re-exporting NFSv3
> with good metadata cache performance is still a nice thing to have too.
> 
> I'll now see if I can decipher the network calls back to the Netapp (NFSv4.0) as
> suggested by Jeff to see why it is different.

I did a little more digging and the big jump in client ops on the re-export server back to the originating Netapp using NFSv4.0 seems to be mostly because it is issuing lots of READDIR calls. The same workload to a Linux NFS server does not issue a single READDIR/READDIRPLUS call (once cached). As to why these are not cached in the client for repeated lookups (without my hack), I have no idea.

However, I was eventually able to devise a workload that could also cause the NFSv4.2 client cache on the re-export server to unexpectedly "lose" entries such that it needed to reissue calls back to an originating Linux server. A large proportion of these were NFS4ERR_NOENT (but not all), so it may be something specific to the negative entry cache, but I'm not sure.

It is really hard to follow the packets from the re-export server's client, through the re-export server, and on to the originating server, but as far as I can make out, the re-export server was mostly issuing access/lookup/getattr calls for directories (which should already be cached) while its clients were issuing calls like readlink (for example, resolving a library directory with symlinks).

I have also noticed a couple of other new curiosities. If we run a typical small workload against a client mount such that it is all cached for repeat runs, and then re-export that same directory to a remote client and run the same workload, the reads that should already be cached are all fetched again from the originating server. Only then are they cached for repeat runs or for different clients. It's almost like the NFS client cache on the re-export server treats the locally accessed client mount as a different filesystem (and cache) from the knfsd re-exported one. A consequence of embedding the filehandles?

And while looking at the packet traces for this, I also noticed that when re-exported to a client, all the read calls back to the originating server are being chopped up into a maximum of 128k. It's as if I had mounted the originating server using rsize=131072 (it's definitely 1MB). So a client of the re-export server is receiving rsize=1MB reads, but the re-export server is pulling them from the originating server in 128k chunks. This was using NFSv4.2 all the way through.

Is this an expected side-effect of re-exporting? Is it some weird interaction with the NFS client's readahead? It has the effect of large reads requiring 8x more round-trips for re-export clients than if they had just gone direct to the originating server (and gotten 1MB reads).
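
For a rough sense of the cost, here is the arithmetic on the observed sizes (an illustration, not a measurement; the 64MiB file size is just an example):

/* Back-of-the-envelope READ RPC counts for the re-export chain (illustration). */
#include <stdio.h>

int main(void)
{
	const long file_size    = 64L << 20;	/* 64 MiB example file */
	const long client_rsize = 1L << 20;	/* client <-> re-export server: 1 MiB */
	const long origin_chunk = 128L << 10;	/* re-export server <-> origin: 128 KiB observed */

	long rpcs_from_client = file_size / client_rsize;	/* 64  */
	long rpcs_to_origin   = file_size / origin_chunk;	/* 512 */

	printf("READs client -> re-export server: %ld\n", rpcs_from_client);
	printf("READs re-export server -> origin: %ld\n", rpcs_to_origin);
	printf("extra round trips per 1MiB read:  %ldx\n",
	       rpcs_to_origin / rpcs_from_client);		/* 8x */
	return 0;
}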

Daire
