From: Lyle Seaman <lyleseaman@gmail.com>
To: linux-nfs@vger.kernel.org
Subject: Re: Thread scheduling issues
Date: Thu, 14 Oct 2010 20:09:26 -0400
Message-ID: <AANLkTikiyuT=_3WU4ru__J+e0O-eZ5ksWUm1fqK9AwU0@mail.gmail.com>
In-Reply-To: <20101014171522.GA553@fieldses.org>

Sorry to add to the confusion.  It is a 2.6.32.23 kernel, allegedly:
Linux prodLMS03 2.6.32-23-server #37-Ubuntu SMP Fri Jun 11 09:11:11
UTC 2010 x86_64 GNU/Linux

Though I won't be 100% sure what's in it unless I build one myself from source.

>>  363        if (pool->sp_nwaking >= SVC_MAX_WAKING) {
>> //  == 5 -lws
>>  364                /* too many threads are runnable and trying to wake up */
>>  365                thread_avail = 0;
>>  366                pool->sp_stats.overloads_avoided++;
>>  367        }
>
> That strikes me as a more likely cause of your problem.  But it was
> reverted in v2.6.33, and you say you're using v2.6.35.22, so there's
> some confusion here?

I think that the problem is the combination of the above with the
half-second sleep, and my next step is to build a kernel and tweak
both of those.

> I think the code was trying to avoid reading an rpc request
> until it was reasonably sure it would have the resources to complete it,
> so we didn't end up with a thread stuck waiting on a half-received rpc.

Yes, that's what it looks like to me too.  I'd have to think hard
about how to balance the different factors and avoid deadlock,
particularly where the client and server are sharing the same VM pool.
 It seems to me that preventing stuck threads is most important if
threads are the scarce resource, but I suppose memory could be a
problem too.   One issue is that data is acknowledged when it is
consumed by NFS so you can't just decide "whoops, I'm out of memory",
throw it away, and let the client resend - at least, not if using
RPC/TCP. (We could do that in AFS because we had two different kinds of
acknowledgements, incidentally.)

The symptoms of the problem are:

workload is biased towards small writes, ~ 4k ave size.

ops are:
getattr 40%  setattr 5% lookup 10% access 19%
read 5% write 8% commit 8%

low CPU utilization - usr+sys < 5%, idle == 0, wait > 95%
high disk-wait %age ==> "disk bound", but that's a bit misleading, see below
low # of nfsd in D state, usually 3-8, very occasionally 20.
50 nfsd threads are configured, but those last 30 just never run.
small #s (3-ish) of simultaneous I/Os delivered to disk subsystem as
reported by 'sar -d'.
2 clients with lots of pent-up demand ( > 50 threads on each are
blocked waiting for NFS )

local processes on the NFS server are still getting great performance
out of the disk subsystem, so the driver/HBA isn't the bottleneck.
sar reports mean svc times <4ms.  I don't know how much to trust sar
though.

The disk subsystem is one LUN in a shared SAN with lots of capacity,
if only I could deliver more than a handful of ops to it at one time.
"disk-bound" just means that adding cpu won't help with the same
workload, but it doesn't mean that the disk subsystem is incapable of
handling additional load if I can get it there.

Watching /proc/fs/nfsd/pool_stats shows deltas from previous sample in
the range of
0 ~4x x ~2x x 0
( pool, packets, sockets_queued, threads_woken, overloads_avoided,
threads_timedout )

That is, at every sample, the value of sockets_queued is exactly equal
to overloads_avoided, and can be as much as half the number of total
calls handled (though usually it is much less).

5 threads which are blocked waiting for the underlying filesystem op
do not cause increments of the "overloads_avoided" counter.  That looks
like a, hmm, heuristic for systems with more disk capacity and less
CPU than I have.  I don't absolutely *know* that this particular
alloc_page is failing but it sure fits the data.   Next step for me is
instrumenting and counting that branch, but I thought I'd check to see
if this had been talked about before, since I can't find it in the
archives or bugzilla.

exporting the fs with -o async helps but I don't like doing it.

Thread overview: 5+ messages
2010-10-14 15:21 Thread scheduling issues Lyle Seaman
2010-10-14 17:15 ` J. Bruce Fields
2010-10-15  0:09   ` Lyle Seaman [this message]
2010-10-14 12:38 Lyle Seaman
2010-10-14 14:12   ` J. Bruce Fields
