linux-nfs.vger.kernel.org archive mirror
From: Chuck Lever III <chuck.lever@oracle.com>
To: Dave Chinner <david@fromorbit.com>
Cc: Linux NFS Mailing List <linux-nfs@vger.kernel.org>,
	netdev <netdev@vger.kernel.org>, "tgraf@suug.ch" <tgraf@suug.ch>,
	Jeff Layton <jlayton@redhat.com>
Subject: Re: [PATCH RFC 29/30] NFSD: Convert the filecache to use rhashtable
Date: Thu, 23 Jun 2022 23:59:45 +0000
Message-ID: <EDD9404B-ACBA-4284-8AFC-8AB4536481A3@oracle.com>
In-Reply-To: <20220623223320.GG1098723@dread.disaster.area>



> On Jun 23, 2022, at 6:33 PM, Dave Chinner <david@fromorbit.com> wrote:
> 
> On Thu, Jun 23, 2022 at 05:27:20PM +0000, Chuck Lever III wrote:
>> Also I just found Neil's nice rhashtable explainer:
>> 
>>   https://lwn.net/Articles/751374/
>> 
>> Where he writes that:
>> 
>>> Sometimes you might want a hash table to potentially contain
>>> multiple objects for any given key. In that case you can use
>>> "rhltables" — rhashtables with lists of objects.
>> 
>> I believe that is the case for the filecache. The hash value is
>> computed based on the inode pointer, and therefore there can be more
>> than one nfsd_file object for a particular inode (depending on who
>> is opening and for what access). So I think filecache needs to use
>> rhltable, not rhashtable. Any thoughts from rhashtable experts?
> 
> Huh, I assumed the file cache was just hashing the whole key so that
> every object in the rht has its own unique key and hash and there's
> no need to handle multiple objects per key...
> 
> What are you trying to optimise by hashing only the inode *pointer*
> in the nfsd_file object keyspace?

Well, this design is inherited from the current filecache
implementation.

It assumes that all nfsd_file objects that refer to the same
inode will always get chained into the same bucket. That way:

static void
__nfsd_file_close_inode(struct inode *inode, unsigned int hashval,
                        struct list_head *dispose)
{
        struct nfsd_file        *nf;
        struct hlist_node       *tmp;

        spin_lock(&nfsd_file_hashtbl[hashval].nfb_lock);
        hlist_for_each_entry_safe(nf, tmp, &nfsd_file_hashtbl[hashval].nfb_head, nf_node) {
                if (inode == nf->nf_inode)
                        nfsd_file_unhash_and_release_locked(nf, dispose);
        }
        spin_unlock(&nfsd_file_hashtbl[hashval].nfb_lock);
}

__nfsd_file_close_inode() can lock one hash bucket and just
walk that hash chain to find all the nfsd_file objects
associated with a particular in-core inode.

Actually I don't think there's any other reason to keep that
hashing design, but Jeff can confirm that.

So I guess we could use rhltable and keep the nfsd_file items
for the same inode on the same hash list? I'm not sure it's
worth the trouble: this part of the filecache isn't really on
the hot path.
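
For illustration, here is a minimal sketch of how that walk
might look with rhltable. None of these names come from the
posted series: it assumes struct nfsd_file grows a
struct rhlist_head nf_rlist member, a file-scope
struct rhltable named nfsd_file_rhltable, and rhashtable_params
named nfsd_file_rhash_params keyed on the nf_inode pointer.

#include <linux/rhashtable.h>

/* Hypothetical sketch only; not code from this series. */
static const struct rhashtable_params nfsd_file_rhash_params = {
        .key_len                = sizeof_field(struct nfsd_file, nf_inode),
        .key_offset             = offsetof(struct nfsd_file, nf_inode),
        .head_offset            = offsetof(struct nfsd_file, nf_rlist),
        .automatic_shrinking    = true,
};

static struct rhltable nfsd_file_rhltable;

static void
__nfsd_file_close_inode(struct inode *inode, struct list_head *dispose)
{
        struct rhlist_head *list, *tmp;
        struct nfsd_file *nf;

        rcu_read_lock();
        list = rhltable_lookup(&nfsd_file_rhltable, &inode,
                               nfsd_file_rhash_params);
        rhl_for_each_entry_rcu(nf, tmp, list, nf_rlist) {
                /*
                 * Every entry on this list shares the same inode,
                 * so no per-entry key check is needed. The existing
                 * unhash/release helper would need adjusting because
                 * rhashtable manages its own bucket locks.
                 */
                nfsd_file_unhash_and_release_locked(nf, dispose);
        }
        rcu_read_unlock();
}

Since rhltable chains every entry that shares a key off a single
rhlist_head, the close-inode walk touches only the nfsd_file
items for that inode instead of scanning a whole bucket chain.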


--
Chuck Lever




Thread overview: 51+ messages
2022-06-22 14:12 [PATCH RFC 00/30] Overhaul NFSD filecache Chuck Lever
2022-06-22 14:12 ` [PATCH RFC 01/30] NFSD: Report filecache LRU size Chuck Lever
2022-06-22 14:12 ` [PATCH RFC 02/30] NFSD: Report count of calls to nfsd_file_acquire() Chuck Lever
2022-06-22 14:13 ` [PATCH RFC 03/30] NFSD: Report count of freed filecache items Chuck Lever
2022-06-22 14:13 ` [PATCH RFC 04/30] NFSD: Report average age of " Chuck Lever
2022-06-22 14:13 ` [PATCH RFC 05/30] NFSD: Add nfsd_file_lru_dispose_list() helper Chuck Lever
2022-06-22 14:13 ` [PATCH RFC 06/30] NFSD: Refactor nfsd_file_gc() Chuck Lever
2022-06-22 14:13 ` [PATCH RFC 07/30] NFSD: Refactor nfsd_file_lru_scan() Chuck Lever
2022-06-22 14:13 ` [PATCH RFC 08/30] NFSD: Report the number of items evicted by the LRU walk Chuck Lever
2022-06-22 14:13 ` [PATCH RFC 09/30] NFSD: Record number of flush calls Chuck Lever
2022-06-22 14:13 ` [PATCH RFC 10/30] NFSD: Report filecache item construction failures Chuck Lever
2022-06-22 14:13 ` [PATCH RFC 11/30] NFSD: Zero counters when the filecache is re-initialized Chuck Lever
2022-06-22 14:14 ` [PATCH RFC 12/30] NFSD: Hook up the filecache stat file Chuck Lever
2022-06-22 14:14 ` [PATCH RFC 13/30] NFSD: WARN when freeing an item still linked via nf_lru Chuck Lever
2022-06-22 14:14 ` [PATCH RFC 14/30] NFSD: Trace filecache LRU activity Chuck Lever
2022-06-22 14:14 ` [PATCH RFC 15/30] NFSD: Leave open files out of the filecache LRU Chuck Lever
2022-06-22 14:14 ` [PATCH RFC 16/30] NFSD: Fix the filecache LRU shrinker Chuck Lever
2022-06-22 14:14 ` [PATCH RFC 17/30] NFSD: Never call nfsd_file_gc() in foreground paths Chuck Lever
2022-06-22 14:14 ` [PATCH RFC 18/30] NFSD: No longer record nf_hashval in the trace log Chuck Lever
2022-06-22 14:14 ` [PATCH RFC 19/30] NFSD: Remove lockdep assertion from unhash_and_release_locked() Chuck Lever
2022-06-22 14:14 ` [PATCH RFC 20/30] NFSD: nfsd_file_unhash can compute hashval from nf->nf_inode Chuck Lever
2022-06-22 14:15 ` [PATCH RFC 21/30] NFSD: Refactor __nfsd_file_close_inode() Chuck Lever
2022-06-22 14:15 ` [PATCH RFC 22/30] NFSD: nfsd_file_hash_remove can compute hashval Chuck Lever
2022-06-22 14:15 ` [PATCH RFC 23/30] NFSD: Remove nfsd_file::nf_hashval Chuck Lever
2022-06-22 14:15 ` [PATCH RFC 24/30] NFSD: Remove stale comment from nfsd_file_acquire() Chuck Lever
2022-06-22 14:15 ` [PATCH RFC 25/30] NFSD: Clean up "open file" case in nfsd_file_acquire() Chuck Lever
2022-06-22 14:15 ` [PATCH RFC 26/30] NFSD: Document nfsd_file_cache_purge() API contract Chuck Lever
2022-06-22 14:15 ` [PATCH RFC 27/30] NFSD: Replace the "init once" mechanism Chuck Lever
2022-06-22 14:15 ` [PATCH RFC 28/30] NFSD: Set up an rhashtable for the filecache Chuck Lever
2022-06-23 22:56   ` Al Viro
2022-06-23 23:51     ` Chuck Lever III
2022-06-24  0:14       ` Chuck Lever III
2022-06-24  0:29         ` Al Viro
2022-06-22 14:15 ` [PATCH RFC 29/30] NFSD: Convert the filecache to use rhashtable Chuck Lever
2022-06-23  0:38   ` Dave Chinner
2022-06-23  0:58     ` Chuck Lever III
2022-06-23 17:27       ` Chuck Lever III
2022-06-23 22:33         ` Dave Chinner
2022-06-23 23:59           ` Chuck Lever III [this message]
2022-06-22 14:16 ` [PATCH RFC 30/30] NFSD: Clean up unusued code after rhashtable conversion Chuck Lever
2022-06-22 18:36 ` [PATCH RFC 00/30] Overhaul NFSD filecache Wang Yugui
2022-06-22 19:04   ` Chuck Lever III
2022-06-22 19:59     ` Chuck Lever III
2022-06-23  9:02       ` Wang Yugui
2022-06-23 16:44         ` Chuck Lever III
2022-06-23 17:51           ` Wang Yugui
2022-06-24 15:30             ` Chuck Lever III
2022-06-23  0:21     ` Dave Chinner
2022-06-23  1:01       ` Chuck Lever III
2022-06-23 20:27 ` Frank van der Linden
2022-06-28 17:57   ` Chuck Lever III
