From: Phil Auld <pauld@redhat.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: Dave Chinner <david@fromorbit.com>,
Ming Lei <ming.lei@redhat.com>,
linux-block@vger.kernel.org, linux-fsdevel@vger.kernel.org,
linux-xfs@vger.kernel.org, linux-kernel@vger.kernel.org,
Jeff Moyer <jmoyer@redhat.com>,
Dave Chinner <dchinner@redhat.com>,
Eric Sandeen <sandeen@redhat.com>, Christoph Hellwig <hch@lst.de>,
Jens Axboe <axboe@kernel.dk>, Ingo Molnar <mingo@redhat.com>,
Tejun Heo <tj@kernel.org>,
Vincent Guittot <vincent.guittot@linaro.org>
Subject: Re: single aio thread is migrated crazily by scheduler
Date: Wed, 20 Nov 2019 17:03:13 -0500
Message-ID: <20191120220313.GC18056@pauld.bos.csb>
In-Reply-To: <20191120191636.GI4097@hirez.programming.kicks-ass.net>

Hi Peter,

On Wed, Nov 20, 2019 at 08:16:36PM +0100 Peter Zijlstra wrote:
> On Tue, Nov 19, 2019 at 07:40:54AM +1100, Dave Chinner wrote:
> > On Mon, Nov 18, 2019 at 10:21:21AM +0100, Peter Zijlstra wrote:
>
> > > We typically only fall back to the active balancer when there is
> > > (persistent) imbalance and we fail to migrate anything else (of
> > > substance).
> > >
> > > The tuning mentioned has the effect of less frequent scheduling, IOW,
> > > leaving (short) tasks on the runqueue longer. This obviously means the
> > > load-balancer will have a bigger chance of seeing them.
> > >
> > > Now; it's been a while since I looked at the workqueue code but one
> > > possible explanation would be if the kworker that picks up the work item
> > > is pinned. That would make it runnable but not migratable, the exact
> > > situation in which we'll end up shooting the current task with active
> > > balance.
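
For reference, the affinity check that rules those pinned kworkers out
is roughly this (paraphrasing can_migrate_task() in kernel/sched/fair.c,
not a verbatim copy):

	/*
	 * A per-CPU kworker has exactly one allowed CPU, so dst_cpu is
	 * never in its mask and regular load balancing can never pick
	 * it; the balancer then falls back to active balance, which
	 * forcibly migrates the currently running task (fio) instead.
	 */
	if (!cpumask_test_cpu(env->dst_cpu, p->cpus_ptr))
		return 0;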
> >
> > Yes, that's precisely the problem - work is queued, by default, on a
> > specific CPU and it will wait for a kworker that is pinned to that
>
> I'm thinking the problem is that it doesn't wait. If it went and waited
> for it, active balance wouldn't be needed, that only works on active
> tasks.
Since this is AIO, I wonder if it should queue_work on a nearby CPU by
default instead of unbound. A rough sketch of what I mean is below.
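
pick_nearby_cpu() here is a hypothetical helper, and whether
cpu_coregroup_mask() is the right notion of "nearby" is exactly the
open question; queue_work_on(), cpumask_any_but() and system_wq are
the existing APIs:

	/*
	 * Hypothetical sketch only: queue the completion work on some
	 * other CPU sharing a cache with the submitter instead of on
	 * the submitting CPU itself.
	 */
	static int pick_nearby_cpu(void)
	{
		int this_cpu = raw_smp_processor_id();
		int cpu = cpumask_any_but(cpu_coregroup_mask(this_cpu),
					  this_cpu);

		/* Fall back to the local CPU if it's alone in the LLC. */
		return cpu < nr_cpu_ids ? cpu : this_cpu;
	}

	static void queue_completion(struct work_struct *work)
	{
		queue_work_on(pick_nearby_cpu(), system_wq, work);
	}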
>
> > specific CPU to dispatch it. We've already tested that queuing on a
> > different CPU (via queue_work_on()) makes the problem largely go
> > away as the work is no longer queued behind the long running fio
> > task.
> >
> > This, however, is not a viable solution to the problem. The pattern
> > of a long running process queuing small pieces of individual work
> > for processing in a separate context is pretty common...
>
> Right, but you're putting the scheduler in a bind. By overloading the
> CPU and only allowing the one task to migrate, it pretty much has no
> choice left.
>
> Anyway, I'm still going to have to try and reproduce -- I got side-tracked
> into a crashing bug, I'll hopefully get back to this tomorrow. Lastly,
> one other thing to try is -next. Vincent reworked the load-balancer
> quite a bit.
>
I've tried it with the lb patch series and get basically the same results.
With the high granularity settings on stock 5.4-rc7 I see 3700 migrations
for the 30 second run at 4k, of which about 3200 are active balance.
With the lb patches it's 3500 and 3000, a slight drop.
With the default granularity settings it's 50 total and 22 active for
stock versus 250 and 25 with the lb patches. So a few more total
migrations with the lb patches but about the same number of active ones.
On this system I'm getting 100k migrations using a 512 byte blocksize,
almost none of them active balance. I haven't looked into that closely
yet, but it's about 3000 per second, looking like this:
...
64.19641 386 386 kworker/15:1 sched_migrate_task fio/2784 cpu 15->19
64.19694 386 386 kworker/15:1 sched_migrate_task fio/2784 cpu 15->19
64.19746 386 386 kworker/15:1 sched_migrate_task fio/2784 cpu 15->19
64.19665 389 389 kworker/19:1 sched_migrate_task fio/2784 cpu 19->15
64.19718 389 389 kworker/19:1 sched_migrate_task fio/2784 cpu 19->15
64.19772 389 389 kworker/19:1 sched_migrate_task fio/2784 cpu 19->15
64.19800 386 386 kworker/15:1 sched_migrate_task fio/2784 cpu 15->19
64.19828 389 389 kworker/19:1 sched_migrate_task fio/2784 cpu 19->15
64.19856 386 386 kworker/15:1 sched_migrate_task fio/2784 cpu 15->19
64.19882 389 389 kworker/19:1 sched_migrate_task fio/2784 cpu 19->15
64.19909 386 386 kworker/15:1 sched_migrate_task fio/2784 cpu 15->19
64.19937 389 389 kworker/19:1 sched_migrate_task fio/2784 cpu 19->15
64.19967 386 386 kworker/15:1 sched_migrate_task fio/2784 cpu 15->19
64.19995 389 389 kworker/19:1 sched_migrate_task fio/2784 cpu 19->15
64.20023 386 386 kworker/15:1 sched_migrate_task fio/2784 cpu 15->19
64.20053 389 389 kworker/19:1 sched_migrate_task fio/2784 cpu 19->15
64.20079 386 386 kworker/15:1 sched_migrate_task fio/2784 cpu 15->19
64.20107 389 389 kworker/19:1 sched_migrate_task fio/2784 cpu 19->15
64.20135 386 386 kworker/15:1 sched_migrate_task fio/2784 cpu 15->19
64.20163 389 389 kworker/19:1 sched_migrate_task fio/2784 cpu 19->15
64.20192 386 386 kworker/15:1 sched_migrate_task fio/2784 cpu 15->19
64.20221 389 389 kworker/19:1 sched_migrate_task fio/2784 cpu 19->15
...
Which is roughly equal to the number of IOPS it's doing.
Cheers,
Phil
--