Linux-NFS Archive on lore.kernel.org
From: "J. Bruce Fields" <bfields@fieldses.org>
To: NeilBrown <neilb@suse.de>
Cc: "J. Bruce Fields" <bfields@redhat.com>, linux-nfs@vger.kernel.org
Subject: Re: [PATCH 1/2 - vers2] nfsd: handle drc over-allocation gracefully.
Date: Fri, 20 Sep 2019 12:28:21 -0400
Message-ID: <20190920162821.GG13260@fieldses.org> (raw)
In-Reply-To: <8736gra34j.fsf@notabene.neil.brown.name>

On Fri, Sep 20, 2019 at 04:33:16PM +1000, NeilBrown wrote:
> 
> Currently, if there are more clients than allowed for by the
> space allocation in set_max_drc(), we fail a CREATE_SESSION
> request with NFS4ERR_DELAY.
> This means that the client retries indefinitely, which isn't
> a user-friendly response.
> 
> The RFC requires NFS4ERR_NOSPC, but that would at best result in a
> clean failure on the client, which is not much more friendly.
> 
> The current space allocation is a best guess and doesn't provide any
> guarantees; we could still run out of space when trying to allocate
> drc space.
> 
> So fail more gracefully - always give out at least one slot.
> If all clients used all the space in all slots, we might start getting
> memory pressure, but that is possible anyway.
> 
> So ensure 'num' is always at least 1, and remove the test for it
> being zero.
> 
> Signed-off-by: NeilBrown <neilb@suse.de>
> ---
>  fs/nfsd/nfs4state.c | 19 +++++++++++++++----
>  1 file changed, 15 insertions(+), 4 deletions(-)
> 
> Sorry, but I was confused about clamp_t().
> If the 'max' is smaller than the 'min', then the 'max' wins, so
> 'avail' will never be more than total_avail/3.
> I might have believed the comment here instead of the code there.
> 
> So the current code cannot overflow, but I think we agree that
> failure is not good.  So this patch avoids failure.

Ah, got it, looks good.  Applying for 5.4.

--b.

> NeilBrown
> 
> 
> diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
> index 7857942c5ca6..084a30d77359 100644
> --- a/fs/nfsd/nfs4state.c
> +++ b/fs/nfsd/nfs4state.c
> @@ -1570,14 +1570,25 @@ static u32 nfsd4_get_drc_mem(struct nfsd4_channel_attrs *ca)
>  	unsigned long avail, total_avail;
>  
>  	spin_lock(&nfsd_drc_lock);
> -	total_avail = nfsd_drc_max_mem - nfsd_drc_mem_used;
> +	if (nfsd_drc_max_mem > nfsd_drc_mem_used)
> +		total_avail = nfsd_drc_max_mem - nfsd_drc_mem_used;
> +	else
> +		/* We have handed out more space than we chose in
> +		 * set_max_drc() to allow.  That isn't really a
> +		 * problem as long as that doesn't make us think we
> +		 * have lots more due to integer overflow.
> +		 */
> +		total_avail = 0;
>  	avail = min((unsigned long)NFSD_MAX_MEM_PER_SESSION, total_avail);
>  	/*
>  	 * Never use more than a third of the remaining memory,
> -	 * unless it's the only way to give this client a slot:
> +	 * unless it's the only way to give this client a slot.
> +	 * Give the client one slot even if that would require
> +	 * over-allocation; it is better than failure.
>  	 */
>  	avail = clamp_t(unsigned long, avail, slotsize, total_avail/3);
>  	num = min_t(int, num, avail / slotsize);
> +	num = max_t(int, num, 1);
>  	nfsd_drc_mem_used += num * slotsize;
>  	spin_unlock(&nfsd_drc_lock);
>  
> @@ -3169,10 +3180,10 @@ static __be32 check_forechannel_attrs(struct nfsd4_channel_attrs *ca, struct nfs
>  	 * performance.  When short on memory we therefore prefer to
>  	 * decrease number of slots instead of their size.  Clients that
>  	 * request larger slots than they need will get poor results:
> +	 * Note that we always allow at least one slot, because our
> +	 * accounting is soft and provides no guarantees either way.
>  	 */
>  	ca->maxreqs = nfsd4_get_drc_mem(ca);
> -	if (!ca->maxreqs)
> -		return nfserr_jukebox;
>  
>  	return nfs_ok;
>  }
> -- 
> 2.23.0
> 




Thread overview: 11+ messages
2017-09-25 13:21 [PATCH 0/2] tweak knfsd session slot table sizing J. Bruce Fields
2017-09-25 13:21 ` [PATCH 1/2] nfsd: increase DRC cache limit J. Bruce Fields
2017-09-25 13:21 ` [PATCH 2/2] nfsd: give out fewer session slots as limit approaches J. Bruce Fields
2019-09-19  1:08   ` NeilBrown
2019-09-19 16:22     ` J. Bruce Fields
2019-09-19 17:17       ` J. Bruce Fields
2019-09-19 18:41         ` bfields
2019-09-20  6:15           ` [PATCH 1/2] nfsd: handle drc over-allocation gracefully NeilBrown
2019-09-20  6:33             ` [PATCH 1/2 - vers2] " NeilBrown
2019-09-20  6:36               ` [PATCH - 2/2] nfsd: degraded slot-count more gracefully as allocation nears exhaustion NeilBrown
2019-09-20 16:28               ` J. Bruce Fields [this message]
