From: NeilBrown <neilb@suse.de>
To: Trond Myklebust <trondmy@hammerspace.com>,
	"linux-nfs@vger.kernel.org" <linux-nfs@vger.kernel.org>
Subject: Re: [PATCH/RFC] Does nfsiod need WQ_CPU_INTENSIVE?
Date: Fri, 06 Nov 2020 11:23:56 +1100
Message-ID: <87k0uzqqn7.fsf@notabene.neil.brown.name>
In-Reply-To: <37ec02047951a5d61cf9f9f5b4dc7151cd877772.camel@hammerspace.com>


On Thu, Nov 05 2020, Trond Myklebust wrote:

> On Thu, 2020-11-05 at 11:12 +1100, NeilBrown wrote:
>> 
>> Hi,
>>  I have a customer report of NFS getting stuck due to a workqueue
>>  lockup.
>> 
>>  This appears to be triggered by calling 'close' on a 5TB file.
>>  The rpc_release set up by nfs4_do_close() calls a final iput()
>>  on the inode which leads to nfs4_evict_inode() which calls
>>  truncate_inode_pages_final().  On a 5TB file, this can take a little
>>  while.
>> 
>>  Documentation for workqueue says
>>    Generally, work items are not expected to hog a CPU and consume
>>    many cycles.
>> 
>>  so maybe that isn't a good idea.
>>  truncate_inode_pages_final() does call cond_resched(), but workqueue
>>  doesn't take notice of that.  By default it only runs more threads
>>  on the same CPU if the first thread actually sleeps.  So the net
>>  result is that there are lots of rpc_async_schedule tasks queued up
>>  behind the iput, waiting for it to finish rather than running
>>  concurrently.
>> 
>>  I believe this can be fixed by setting WQ_CPU_INTENSIVE on the
>>  nfsiod workqueue.  This flag causes the workqueue core to schedule
>>  more threads as needed even if the existing threads never sleep.
>>  I don't know if this is a good idea as it might spawn lots of
>>  threads needlessly when rpc_release functions don't have lots of
>>  work to do.
>> 
>>  Another option might be to create a separate
>>  nfsiod_intensive_workqueue with this flag set, and hand all iput
>>  requests over to this workqueue.
>> 
>>  I've asked for the customer to test with this simple patch.
>> 
>>  Any thoughts or suggestions most welcome,
>> 
>
> Isn't this a general problem (i.e. one that is not specific to NFS)
> when you have multi-terabyte caches? Why wouldn't io_uring be
> vulnerable to the same issue, for instance?

Maybe it is.  I haven't followed io_uring in any detail, but if it calls
iput() from a workqueue which isn't marked WQ_CPU_INTENSIVE, then it
will suffer the same problem.

And maybe that means we should look for a more general solution; I'm
not against that (though my customer needs a fix *now*).
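
For reference, the simple patch amounts to something like this (a sketch
only, assuming nfsiod is still created with alloc_workqueue() in
nfsiod_start() in fs/nfs/inode.c, where nfsiod_workqueue is the existing
global):

/*
 * Sketch of the simple change described above, not the exact patch:
 * add WQ_CPU_INTENSIVE so the workqueue core will start further
 * workers on a CPU even while the current worker stays runnable
 * inside a long truncate.
 */
static int nfsiod_start(void)
{
	struct workqueue_struct *wq;

	wq = alloc_workqueue("nfsiod",
			     WQ_MEM_RECLAIM | WQ_CPU_INTENSIVE, 0);
	if (wq == NULL)
		return -ENOMEM;
	nfsiod_workqueue = wq;
	return 0;
}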

>
> The thing is that truncate_inode_pages() has plenty of latency-reducing
> cond_resched() calls that should ensure that other threads get scheduled,
> so this problem doesn't strictly meet the 'CPU intensive' criterion
> that I understand WQ_CPU_INTENSIVE to be designed for.
>
It is relevant to observe the end of should_reclaim_retry() in mm/page_alloc.c:
	/*
	 * Memory allocation/reclaim might be called from a WQ context and the
	 * current implementation of the WQ concurrency control doesn't
	 * recognize that a particular WQ is congested if the worker thread is
	 * looping without ever sleeping. Therefore we have to do a short sleep
	 * here rather than calling cond_resched().
	 */
	if (current->flags & PF_WQ_WORKER)
		schedule_timeout_uninterruptible(1);
	else
		cond_resched();

This suggests that cond_resched() isn't sufficient in the context of a
workqueue worker.  If cond_resched() were changed to include that test,
or if the cond_resched() in truncate_inode_pages_range() were changed to
use the above code fragment, either change would solve my current
problem.  So if you, as NFS maintainer, were happy to second one of
those, I'm happy to accept your support and carry the fix further into
the core.
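
For concreteness, either variant boils down to a yield like the one
below (the helper name is hypothetical, not an existing kernel API; it
merely packages the test quoted above from should_reclaim_retry()):

/*
 * Hypothetical helper: a yield that is also visible to the workqueue
 * concurrency logic.  The loop in truncate_inode_pages_range() would
 * call this in place of its current cond_resched().
 */
static inline void wq_aware_cond_resched(void)
{
	if (current->flags & PF_WQ_WORKER)
		/* a real (one jiffy) sleep lets the WQ core start another worker */
		schedule_timeout_uninterruptible(1);
	else
		cond_resched();
}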

w.r.t. WQ_CPU_INTENSIVE in general: the goal, as I understand it, is to
have multiple runnable workers on the same CPU, leaving it to the
scheduler to balance them.  Unless kernel preemption is enabled, that
means each worker *must* call cond_resched() periodically, otherwise
there is nothing the scheduler can do and a lockup detector will
eventually fire.
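
And for completeness, if spawning extra workers on the shared nfsiod
queue proved too aggressive, the separate-workqueue option from my
original mail would look roughly like this (names are purely
illustrative; the final iput queued by rpc_release would then go to
this queue instead of nfsiod):

/*
 * Illustrative only: a second workqueue dedicated to potentially
 * long-running work such as the final iput of a huge file, so that
 * ordinary nfsiod work items are never queued behind it.  Both the
 * variable and the function name are hypothetical.
 */
static struct workqueue_struct *nfsiod_intensive_workqueue;

static int nfsiod_intensive_start(void)
{
	nfsiod_intensive_workqueue =
		alloc_workqueue("nfsiod_intensive",
				WQ_MEM_RECLAIM | WQ_CPU_INTENSIVE, 0);
	return nfsiod_intensive_workqueue ? 0 : -ENOMEM;
}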

Probably the best "fix" would be to make WQ_CPU_INTENSIVE irrelevant by
getting the workqueue core to notice when a worker calls cond_resched(),
but I don't know how to do that.

NeilBrown


> -- 
> Trond Myklebust
> Linux NFS client maintainer, Hammerspace
> trond.myklebust@hammerspace.com


Thread overview: 5+ messages
2020-11-05  0:12 [PATCH/RFC] Does nfsiod need WQ_CPU_INTENSIVE? NeilBrown
2020-11-05 19:23 ` Trond Myklebust
2020-11-06  0:23   ` NeilBrown [this message]
2020-11-27  0:24     ` [PATCH] NFS: switch nfsiod to be an UNBOUND workqueue NeilBrown
2020-12-02 19:07       ` TJ
