From: Trond Myklebust <trondmy@hammerspace.com>
To: "olga.kornievskaia@gmail.com" <olga.kornievskaia@gmail.com>,
	"chuck.lever@oracle.com" <chuck.lever@oracle.com>
Cc: "linux-nfs@vger.kernel.org" <linux-nfs@vger.kernel.org>,
	"anna.schumaker@netapp.com" <anna.schumaker@netapp.com>
Subject: Re: [PATCH v2 2/3] NFSv4 introduce max_connect mount options
Date: Thu, 10 Jun 2021 15:30:02 +0000	[thread overview]
Message-ID: <ccd48bfd2ccf9b2978d578963609ff03bcce8bee.camel@hammerspace.com> (raw)
In-Reply-To: <CAN-5tyEFtOa97+vdCeCyHtdub8n5zHSP8sv7Zv2CCnd_duv5fg@mail.gmail.com>

On Thu, 2021-06-10 at 11:01 -0400, Olga Kornievskaia wrote:
> On Thu, Jun 10, 2021 at 10:51 AM Chuck Lever III <
> chuck.lever@oracle.com> wrote:
> > 
> > 
> > 
> > > On Jun 10, 2021, at 10:29 AM, Olga Kornievskaia <
> > > olga.kornievskaia@gmail.com> wrote:
> > > 
> > > On Thu, Jun 10, 2021 at 9:56 AM Chuck Lever III <
> > > chuck.lever@oracle.com> wrote:
> > > > 
> > > > 
> > > > 
> > > > > On Jun 10, 2021, at 9:34 AM, Trond Myklebust <
> > > > > trondmy@hammerspace.com> wrote:
> > > > > 
> > > > > On Thu, 2021-06-10 at 13:30 +0000, Chuck Lever III wrote:
> > > > > > 
> > > > > > 
> > > > > > > On Jun 9, 2021, at 5:53 PM, Olga Kornievskaia <
> > > > > > > olga.kornievskaia@gmail.com> wrote:
> > > > > > > 
> > > > > > > From: Olga Kornievskaia <kolga@netapp.com>
> > > > > > > 
> > > > > > > This option controls the maximum number of xprts the
> > > > > > > client can establish to the server. This patch parses the
> > > > > > > value and sets up the structures that keep track of
> > > > > > > max_connect.
> > > > > > > 
> > > > > > > Signed-off-by: Olga Kornievskaia <kolga@netapp.com>
> > > > > > > ---
> > > > > > > fs/nfs/client.c           |  1 +
> > > > > > > fs/nfs/fs_context.c       |  8 ++++++++
> > > > > > > fs/nfs/internal.h         |  2 ++
> > > > > > > fs/nfs/nfs4client.c       | 12 ++++++++++--
> > > > > > > fs/nfs/super.c            |  2 ++
> > > > > > > include/linux/nfs_fs_sb.h |  1 +
> > > > > > > 6 files changed, 24 insertions(+), 2 deletions(-)
> > > > > > > 
> > > > > > > diff --git a/fs/nfs/client.c b/fs/nfs/client.c
> > > > > > > index 330f65727c45..486dec59972b 100644
> > > > > > > --- a/fs/nfs/client.c
> > > > > > > +++ b/fs/nfs/client.c
> > > > > > > @@ -179,6 +179,7 @@ struct nfs_client *nfs_alloc_client(const struct nfs_client_initdata *cl_init)
> > > > > > > 
> > > > > > >        clp->cl_proto = cl_init->proto;
> > > > > > >        clp->cl_nconnect = cl_init->nconnect;
> > > > > > > +       clp->cl_max_connect = cl_init->max_connect ? cl_init->max_connect : 1;
> > > > > > 
> > > > > > So, 1 is the default setting, meaning the "add another
> > > > > > transport"
> > > > > > facility is disabled by default. Would it be less
> > > > > > surprising for
> > > > > > an admin to allow some extra connections by default?
> > > > > > 
> > > > > > 
> > > > > > >        clp->cl_net = get_net(cl_init->net);
> > > > > > > 
> > > > > > >        clp->cl_principal = "*";
> > > > > > > diff --git a/fs/nfs/fs_context.c b/fs/nfs/fs_context.c
> > > > > > > index d95c9a39bc70..cfbff7098f8e 100644
> > > > > > > --- a/fs/nfs/fs_context.c
> > > > > > > +++ b/fs/nfs/fs_context.c
> > > > > > > @@ -29,6 +29,7 @@
> > > > > > > #endif
> > > > > > > 
> > > > > > > #define NFS_MAX_CONNECTIONS 16
> > > > > > > +#define NFS_MAX_TRANSPORTS 128
> > > > > > 
> > > > > > This maximum seems excessive... again, there are diminishing
> > > > > > returns to adding more connections to the same server. What's
> > > > > > wrong with re-using NFS_MAX_CONNECTIONS for the maximum?
> > > > > > 
> > > > > > As always, I'm a little queasy about adding yet another mount
> > > > > > option. Are there real use cases where a whole-client setting
> > > > > > (like a sysfs attribute) would be inadequate? Is there a way
> > > > > > the client could figure out a reasonable maximum without human
> > > > > > intervention, say, by counting the number of NICs on the
> > > > > > system?
> > > > > 
> > > > > Oh, hell no! We're not tying anything to the number of
> > > > > NICs...
> > > > 
> > > > That's a bit of an over-reaction. :-) A little more explanation
> > > > would be welcome. I mean, don't you expect someone to ask "How
> > > > do I pick a good value?" and someone might reasonably answer
> > > > "Well, start with the number of NICs on your client times 3" or
> > > > something like that.
> > > 
> > > That's what I was thinking and thank you for at least considering
> > > that
> > > it's a reasonable answer.
> > > 
> > > > IMO we're about to add another admin setting without
> > > > understanding how it will be used, how to select a good maximum
> > > > value, or even whether this maximum needs to be adjustable. In a
> > > > previous e-mail Olga has already demonstrated that it will be
> > > > difficult to explain how to use this setting with nconnect=.
> > > 
> > > I agree that how it will be used is not yet well understood, but I
> > > think nconnect and max_connect represent different capabilities. I
> > > agree that adding nconnect transports leads to diminishing returns
> > > after a certain (relatively low) number. However, I don't believe
> > > the same holds when xprts are going over different NICs. Therefore
> > > I didn't think max_connect should be bound by the same limits as
> > > nconnect.
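
To make the distinction concrete: nconnect= multiplies connections to a
single server address, while max_connect is meant to cap the total number
of transports the client will set up when it discovers additional
(trunked) server addresses. A rough sketch of the kind of gate that
implies, using only the cl_max_connect field from the quoted hunk (the
helper name and its use are illustrative, not taken from the posted
patches):

#include <linux/nfs_fs_sb.h>

/*
 * Illustrative sketch only; not part of the posted patches.  Before
 * adding a transport for a newly discovered (trunked) server address,
 * compare the number of transports already in use against the
 * max_connect= cap stored in cl_max_connect.
 */
static bool nfs4_may_add_transport(const struct nfs_client *clp,
				   unsigned int nr_active_xprts)
{
	return nr_active_xprts < clp->cl_max_connect;
}
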
> > 
> > Thanks for reminding me, I had forgotten the distinction between
> > the two mount options.
> > 
> > I think there's more going on than just the NIC -- lock contention
> > on the client will also be a somewhat limiting factor, as will the
> > number of local CPUs and memory bandwidth. And as Trond points out,
> > the network topology between the client and server will also have
> > some impact.
> > 
> > And I'm trying to understand why an admin would want to turn off
> > the "add another xprt" mechanism -- ie, the lower bound. Why is
> > the default setting 1?
> 
> I think the reason for having the default as 1 was to address Trond's
> comment that some servers are struggling to support nconnect. So I'm
> trying not to force any current setup into changing its mount options
> to specifically say "max_connect=1". I want environments that can
> support trunking to allow for it explicitly by adding the new mount
> option to increase the limit.
> 
> If this is not a concern, then max_connect's default can just be
> whatever default value we pick for it.
> 

The default needs to preserve existing behaviour, so max_connect=1 is
correct.
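
For reference, the fallback in the quoted nfs_alloc_client() hunk is what
preserves that behaviour: leaving max_connect= off the mount command line
leaves the cap at 1, so the "add another transport" machinery stays off.
A minimal restatement of that logic (the helper is illustrative; only the
field names come from the quoted patch):

#include <linux/nfs_fs_sb.h>

/*
 * Illustrative restatement of the quoted hunk: an unset max_connect=
 * presumably arrives here as 0, and the ternary turns that into a cap
 * of 1, i.e. no additional transports beyond today's behaviour.
 */
static void nfs_set_max_connect(struct nfs_client *clp,
				unsigned int max_connect)
{
	clp->cl_max_connect = max_connect ? max_connect : 1;
}

Only mounts that explicitly pass a larger max_connect= value opt in to
the new behaviour.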

-- 
Trond Myklebust
Linux NFS client maintainer, Hammerspace
trond.myklebust@hammerspace.com




Thread overview: 28+ messages
2021-06-09 21:53 [PATCH v2 0/3] don't collapse transports for the trunkable Olga Kornievskaia
2021-06-09 21:53 ` [PATCH v2 1/3] SUNRPC query xprt switch for number of active transports Olga Kornievskaia
2021-06-10 13:34   ` Chuck Lever III
2021-06-10 14:50     ` Olga Kornievskaia
2021-06-10 14:55       ` Chuck Lever III
2021-06-09 21:53 ` [PATCH v2 2/3] NFSv4 introduce max_connect mount options Olga Kornievskaia
2021-06-10  1:49   ` Wang Yugui
2021-06-10  2:22     ` Wang Yugui
2021-06-10 13:30   ` Chuck Lever III
2021-06-10 13:34     ` Trond Myklebust
2021-06-10 13:56       ` Chuck Lever III
2021-06-10 14:13         ` Trond Myklebust
2021-06-10 14:31           ` Olga Kornievskaia
2021-06-10 14:55             ` Trond Myklebust
2021-06-10 16:14               ` Olga Kornievskaia
2021-06-10 16:36                 ` Trond Myklebust
2021-06-10 17:30                   ` Olga Kornievskaia
2021-06-10 22:17                     ` Olga Kornievskaia
2021-06-10 14:38           ` Chuck Lever III
2021-06-10 14:29         ` Olga Kornievskaia
2021-06-10 14:51           ` Chuck Lever III
2021-06-10 15:01             ` Olga Kornievskaia
2021-06-10 15:30               ` Trond Myklebust [this message]
2021-06-09 21:53 ` [PATCH v2 3/3] NFSv4.1+ add trunking when server trunking detected Olga Kornievskaia
2021-06-09 22:27 ` [PATCH v2 0/3] don't collapse transports for the trunkable Olga Kornievskaia
2021-06-10 13:32 ` Steve Dickson
2021-06-10 17:33   ` Olga Kornievskaia
2021-06-10 17:39     ` Olga Kornievskaia
