From: Tom Talpey <tom@talpey.com>
To: NeilBrown <neilb@suse.de>, "bfields@fieldses.org" <bfields@fieldses.org>
Cc: Trond Myklebust <trondmy@hammerspace.com>,
	"fsorenso@redhat.com" <fsorenso@redhat.com>,
	"linux-nfs@vger.kernel.org" <linux-nfs@vger.kernel.org>,
	"aglo@umich.edu" <aglo@umich.edu>,
	"bcodding@redhat.com" <bcodding@redhat.com>,
	"jshivers@redhat.com" <jshivers@redhat.com>,
	"chuck.lever@oracle.com" <chuck.lever@oracle.com>
Subject: Re: unsharing tcp connections from different NFS mounts
Date: Tue, 4 May 2021 09:27:12 -0400
Message-ID: <b8c7ab25-fb8e-5c89-5b0b-7cf6fbb36a0b@talpey.com>
In-Reply-To: <162009412979.28954.17703105649506010394@noble.neil.brown.name>

On 5/3/2021 10:08 PM, NeilBrown wrote:
> On Tue, 04 May 2021, bfields@fieldses.org wrote:
>> On Wed, Jan 20, 2021 at 10:07:37AM -0500, bfields@fieldses.org wrote:
>>>
>>> So mainly:
>>>
>>>>>> Why is there a performance regression being seen by these setups
>>>>>> when they share the same connection? Is it really the connection,
>>>>>> or is it the fact that they all share the same fixed-slot session?
>>>
>>> I don't know.  Any pointers how we might go about finding the answer?
>>
>> I set this aside and then get bugged about it again.
>>
>> I apologize, I don't understand what you're asking for here, but it
>> seemed obvious to you and Tom, so I'm sure the problem is me.  Are you
>> free for a call sometime maybe?  Or do you have any suggestions for how
>> you'd go about investigating this?
> 
> I think a useful first step would be to understand what is getting in
> the way of the small requests.
>   - are they in the client waiting for slots which are all consumed by
>     large writes?
>   - are they in the TCP stream behind megabytes of writes that need to be
>     consumed before they can even be seen by the server?
>   - are they in a socket buffer on the server waiting to be served
>     while all the nfsd threads are busy handling writes?
> 
> I cannot see an easy way to measure which it is.

I completely agree. The most likely scenario is a slot shortage that
may be preventing the client from sending new RPCs. And with a
round-robin policy, the first connection to hit such a shortage will
stall them all.

How can we observe whether this is the case?

Tom.


> I guess monitoring how much of the time the client has no free
> slots might give hints about the first.  If there are always free slots,
> the first case cannot be the problem.
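
(One way to get at the first question from userspace - a sketch, not something
from the thread: poll the per-mount transport counters in /proc/self/mountstats.
The "xprt:" line carries send/receive and backlog counts at the RPC layer, and
tasks waiting for a session slot should also show up sleeping on the slot-table
waitqueue via the sunrpc rpc_task_sleep tracepoint, if memory serves. The exact
meaning of each "xprt:" field is an assumption here; the format comes from the
client transport code in net/sunrpc/xprtsock.c.)

/*
 * Rough sketch: dump the per-mount "xprt:" counters for NFS mounts.
 * Comparing the backlog column with the send count gives a crude
 * measure of client-side queueing (field layout is an assumption;
 * see net/sunrpc/xprtsock.c).
 */
#include <stdio.h>
#include <string.h>

int main(void)
{
	FILE *f = fopen("/proc/self/mountstats", "r");
	char line[1024];
	int in_nfs = 0;

	if (!f) {
		perror("/proc/self/mountstats");
		return 1;
	}
	while (fgets(line, sizeof(line), f)) {
		if (strncmp(line, "device ", 7) == 0)
			in_nfs = strstr(line, "fstype nfs") != NULL;
		else if (in_nfs && strstr(line, "xprt:"))
			fputs(line, stdout);
	}
	fclose(f);
	return 0;
}
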
> 
> With NFSv3, the slot management happened at the RPC layer and there were
> several queues (RPC_PRIORITY_LOW/NORMAL/HIGH/PRIVILEGED) where requests
> could wait for a free slot.  Since we gained dynamic slot allocation -
> up to 65536 by default - I wonder if that has much effect any more.
> 
> For NFSv4.1+ the slot management is at the NFS level.  The server sets a
> maximum; the Linux server's default (possibly also its hard limit) is 1024.
> So there are always free rpc slots.
> The Linux client only has a single queue for each slot table, and I
> think there is one slot table for the forward channel of a session.
> So it seems we no longer get any priority management (sync writes used
> to get priority over async writes).
> 
> Increasing the number of slots advertised by the server might be
> interesting.  It is unlikely to fix anything, but it might move the
> bottleneck.
> 
> Decreasing the maximum number of tcp slots might also be interesting
> (below the number of NFS slots at least).
> That would allow the RPC priority infrastructure to work, and if the
> large-file writes are async, they might get slowed down.
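
(For reference, the client-wide cap on tcp rpc slots referred to here should be
the sunrpc.tcp_max_slot_table_entries sysctl, i.e.
/proc/sys/sunrpc/tcp_max_slot_table_entries, with sunrpc.tcp_slot_table_entries
as the initial value, so that experiment shouldn't need a kernel rebuild.)
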
> 
> If the problem is in the TCP stream (which is possible if the relevant
> network buffers are bloated), then you'd really need multiple TCP streams
> (which can certainly improve throughput in some cases).  That is what
> nconnect gives you.  nconnect does minimal balancing.  In general it will
> round-robin, but if the number of requests (not bytes) queued on one
> socket is below average, that socket is likely to get the next request.
> So just adding more connections with nconnect is unlikely to help.  You
> would need to add a policy engine (struct rpc_xprt_iter_ops) which
> reserves some connections for small requests.  That should be fairly
> easy to write a proof-of-concept for.
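
To make the idea concrete, here is a toy user-space sketch of that selection
policy - not the kernel rpc_xprt_iter_ops interface itself: one transport is
held back for small requests and the rest are used round-robin for the large
writes. The 8 KiB cut-off and the structures are invented for the illustration;
in the real client the iterator ops only see the iterator (if I read
xprtmultipath.c correctly), so the request size would have to be plumbed down
somehow.

/*
 * Toy illustration of "reserve a connection for small requests".
 * xprts[0] is the reserved transport; everything larger than the
 * (invented) 8 KiB threshold round-robins over xprts[1..NXPRTS-1].
 */
#include <stdio.h>
#include <stddef.h>

#define NXPRTS 4

struct toy_xprt { int id; };

static struct toy_xprt xprts[NXPRTS];
static size_t rr_cursor;		/* cycles over xprts[1..NXPRTS-1] */

static struct toy_xprt *pick_xprt(size_t req_bytes)
{
	if (req_bytes <= 8192)		/* getattr, small read, ... */
		return &xprts[0];	/* reserved transport */
	rr_cursor = (rr_cursor % (NXPRTS - 1)) + 1;
	return &xprts[rr_cursor];
}

int main(void)
{
	size_t sizes[] = { 512, 1048576, 300, 1048576, 1048576 };
	size_t i;

	for (i = 0; i < NXPRTS; i++)
		xprts[i].id = (int)i;
	for (i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++)
		printf("request of %zu bytes -> xprt %d\n",
		       sizes[i], pick_xprt(sizes[i])->id);
	return 0;
}
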
> 
> NeilBrown
> 
> 
>>
>> Would it be worth experimenting with giving some sort of advantage to
>> readers?  (E.g., reserving a few slots for reads and getattrs and such?)
>>
>> --b.
>>
>>> It's easy to test the case of entirely separate state & tcp connections.
>>>
>>> If we want to test with a shared connection but separate slots I guess
>>> we'd need to create a separate session for each nfs4_server, and a lot
>>> of functions that currently take an nfs4_client would need to take an
>>> nfs4_server?
>>>
>>> --b.
>>
>>
> 

