From: Dave Chinner <david@fromorbit.com>
To: Hillf Danton <hdanton@sina.com>
Cc: Ming Lei <ming.lei@redhat.com>,
	linux-block <linux-block@vger.kernel.org>,
	linux-fs <linux-fsdevel@vger.kernel.org>,
	linux-xfs <linux-xfs@vger.kernel.org>,
	linux-kernel <linux-kernel@vger.kernel.org>,
	Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>,
	Peter Zijlstra <peterz@infradead.org>,
	Vincent Guittot <vincent.guittot@linaro.org>,
	Rong Chen <rong.a.chen@intel.com>, Tejun Heo <tj@kernel.org>
Subject: Re: single aio thread is migrated crazily by scheduler
Date: Tue, 3 Dec 2019 10:06:18 +1100	[thread overview]
Message-ID: <20191202230618.GI2695@dread.disaster.area> (raw)
In-Reply-To: <20191202090158.15016-1-hdanton@sina.com>

On Mon, Dec 02, 2019 at 05:01:58PM +0800, Hillf Danton wrote:
> 
> On Mon, 2 Dec 2019 14:08:44 +1100 Dave Chinner wrote:
> > On Thu, Nov 28, 2019 at 05:40:03PM +0800, Hillf Danton wrote:
> > > On Sat, 16 Nov 2019 10:40:05 Dave Chinner wrote:
> > > > Yeah, the fio task averages 13.4ms on any given CPU before being
> > > > switched to another CPU. Mind you, the stddev is 12ms, so the range
> > > > of how long it spends on any one CPU is pretty wide (330us to
> > > > 330ms).
> > > > 
> > > Hey Dave
> > > 
> > > > IOWs, this doesn't look like a workqueue problem at all - this looks
> > > 
> > > Surprised to see you're so sure it has little to do with wq,
> > 
> > Because I understand how the workqueue is used here.
> > 
> > Essentially, the workqueue is not necessary for a -pure- overwrite
> > where no metadata updates or end-of-io filesystem work is required.
> > 
> > However, change the workload just slightly, such as allocating the
> > space, writing into preallocated space (unwritten extents), using
> > AIO writes to extend the file, using O_DSYNC, etc, and we *must*
> > use a workqueue as we have to take blocking locks and/or run
> > transactions.
> > 
> > These may still be very short (e.g. updating inode size) and in most
> > cases will not block, but if they do, then if we don't move the work
> > out of the block layer completion context (i.e. softirq running the
> > block bh) then we risk deadlocking the code.
> > 
> > Not to mention that none of the filesystem inode locks are irq safe.
> > 
> > IOWs, we can remove the workqueue for this -one specific instance-
> > but it does not remove the requirement for using a workqueue for all
> > the other types of write IO that pass through this code.
> > 
> So it's not true that it doesn't have anything to do with the workqueue.

You misunderstood what I was saying. I meant that this adverse
scheduler behaviour is not *unique to this specific workqueue
instance* or workload. There are another 5+ workqueues in XFS alone
that are based around the same "do all the deferred work on the same
CPU" queuing behaviour. Several of them are IO completion
processing workqueues, and it is designed this way to avoid running
completion work that access common structures across all the CPUs in
the system.

And, FWIW, we've had this "per-cpu delayed work" processing
mechanism in XFS since ~2002 when per-cpu work queues were
introduced in ~2.5.40. What we are doing with workqueues here is not
new or novel, and it's worked just fine for most of this time...

> > >  			INIT_WORK(&dio->aio.work, iomap_dio_complete_work);
> > > -			queue_work(inode->i_sb->s_dio_done_wq, &dio->aio.work);
> > > +			schedule_work(&dio->aio.work);
> > 
> > This does nothing but change the workqueue from a per-sb wq to the
> > system wq. The work is still bound to the same CPU it is queued on,
> > so nothing will change.
> > 
> The system wq is enough here to make some visible difference as CFS will
> be looking to make new lb decision in particular when submitter and
> completion are running on different CPUs.

That's noise caused by slightly different loading of the system
workqueue vs a private work queue. It's likely just enough to move
the scheduler out of the window where it makes incorrect decisions.
i.e. Add a bit more user load or load onto other CPUs, and the
problem will reappear.

As I said, this is *not* a fix for the problem - it just moves it
around so that you can't see it for this specific workload instance.

> It's claimed that "Maintaining CPU affinity across dispatch and completion
> work has been proven to be a significant performance win." If completion
> is running in the softirq context then it would take some time to sort
> out why irq (not CPU) affinity is making difference across CPUs.

We use irq steering to provide CPU affinity for the structures being
used by completion because they are the same ones used by
submission. If completion happens quickly enough, those structures
are still hot in the cache of the submission CPU, and so we don't
drag bio and filesystem structures out of the CPU cache they sit in
by steering the completion to the submission CPU.

Most modern high performance storage hardware has hardware
interrupt steering so the block layer doesn't have to do this. See
__blk_mq_complete_request() and __blk_complete_request(). If the
device has multiple hardware queues, they are already delivering CPU
affine completions. Otherwise __blk_complete_request() uses IPIs
to steer the completion to a CPU that shares a cache with the
submission CPU....

IOWs, we are trying to ensure that we run the data IO completion on
the CPU that has that data hot in cache. When we are running
millions of IOs every second, this matters -a lot-. IRQ steering is
just a mechanism used to ensure completion processing hits hot
caches.

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

