From: "chenxiang (M)" <chenxiang66@hisilicon.com>
To: Ming Lei <ming.lei@redhat.com>, Jens Axboe <axboe@kernel.dk>
Cc: <linux-block@vger.kernel.org>, John Garry <john.garry@huawei.com>,
"Bart Van Assche" <bvanassche@acm.org>,
Hannes Reinecke <hare@suse.com>, "Christoph Hellwig" <hch@lst.de>,
Thomas Gleixner <tglx@linutronix.de>,
Keith Busch <keith.busch@intel.com>
Subject: Re: [PATCH V4 0/5] blk-mq: improvement on handling IO during CPU hotplug
Date: Thu, 28 Nov 2019 09:09:13 +0800 [thread overview]
Message-ID: <b3d90798-484f-09f5-a22f-f3ed3701f0d4@hisilicon.com> (raw)
In-Reply-To: <20191014015043.25029-1-ming.lei@redhat.com>
Hi,
On 2019/10/14 9:50, Ming Lei wrote:
> Hi,
>
> Thomas mentioned:
> "
> That was the constraint of managed interrupts from the very beginning:
>
> The driver/subsystem has to quiesce the interrupt line and the associated
> queue _before_ it gets shutdown in CPU unplug and not fiddle with it
> until it's restarted by the core when the CPU is plugged in again.
> "
>
> But no drivers or blk-mq do that before one hctx becomes dead (all
> CPUs mapped to one hctx are offline), and worse still, blk-mq tries
> to run the hw queue even after the hctx is dead; see blk_mq_hctx_notify_dead().
>
> This patchset tries to address the issue by two stages:
>
> 1) add one new cpuhp state of CPUHP_AP_BLK_MQ_ONLINE
>
> - mark the hctx as internal stopped, and drain all in-flight requests
> if the hctx is going to be dead.
>
> 2) re-submit IO in the state of CPUHP_BLK_MQ_DEAD after the hctx becomes dead
>
> - steal bios from the request, and resubmit them via generic_make_request(),
> then these IO will be mapped to other live hctx for dispatch
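> The two stages above can be pictured with a small userspace sketch.
> This is NOT kernel code: hctx_offline()/hctx_dead() and the struct
> fields are illustrative stand-ins for the real cpuhp callbacks,
> BLK_MQ_S_INTERNAL_STOPPED, and bio resubmission via
> generic_make_request(); only the ordering (stop dispatch, drain
> in-flight, then move pending work to a live queue) is the point.
>
> ```c
> /* Hypothetical sketch of the two-stage teardown: all names here are
>  * illustrative, not kernel API. */
> #include <assert.h>
> #include <stdbool.h>
> #include <stdio.h>
>
> #define MAX_REQS 8
>
> struct hctx {
> 	int id;
> 	bool stopped;          /* analogous to BLK_MQ_S_INTERNAL_STOPPED */
> 	int inflight;          /* requests dispatched but not completed */
> 	int pending[MAX_REQS]; /* request ids still waiting for dispatch */
> 	int npending;
> };
>
> /* Stage 1: "offline" callback - stop new dispatch, drain in-flight. */
> static void hctx_offline(struct hctx *h)
> {
> 	h->stopped = true;
> 	while (h->inflight > 0)	/* the kernel would wait for completions */
> 		h->inflight--;	/* here we just pretend they arrive */
> }
>
> /* Stage 2: "dead" callback - move still-pending requests to a live
>  * hctx, like stealing bios and resubmitting them for dispatch. */
> static void hctx_dead(struct hctx *dead, struct hctx *live)
> {
> 	for (int i = 0; i < dead->npending; i++)
> 		live->pending[live->npending++] = dead->pending[i];
> 	dead->npending = 0;
> }
>
> int main(void)
> {
> 	struct hctx h0 = { .id = 0, .inflight = 2,
> 			   .pending = { 10, 11 }, .npending = 2 };
> 	struct hctx h1 = { .id = 1 };
>
> 	hctx_offline(&h0);	/* drain before the CPUs go away */
> 	hctx_dead(&h0, &h1);	/* re-route what was still queued */
>
> 	assert(h0.inflight == 0 && h0.npending == 0 && h1.npending == 2);
> 	printf("h0 inflight=%d pending=%d, h1 pending=%d\n",
> 	       h0.inflight, h0.npending, h1.npending);
> 	return 0;
> }
> ```
>
> The key property is that nothing is lost: in-flight IO completes
> before the hctx is torn down, and queued-but-undispatched IO ends up
> on a hctx whose CPUs are still online.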
>
> Please comment & review, thanks!
>
> John, I didn't add your tested-by tag since V3 has some changes,
> and I'd appreciate it if you could run your test on V3.
I tested this patchset with John's testcase. Apart from an occasional
dump_stack() in __blk_mq_run_hw_queue(), which does not affect
functionality, it solves the CPU hotplug issue, so please add my
tested-by tag for this patchset:
Tested-by: Xiang Chen <chenxiang66@hisilicon.com>
>
> V4:
> - resubmit IOs in dispatch list in case that this hctx is dead
>
> V3:
> - re-organize patch 2 & 3 a bit for addressing Hannes's comment
> - fix patch 4 for avoiding potential deadlock, as found by Hannes
>
> V2:
> - patch4 & patch 5 in V1 have been merged to block tree, so remove
> them
> - address comments from John Garry and Minwoo
>
>
>
> Ming Lei (5):
> blk-mq: add new state of BLK_MQ_S_INTERNAL_STOPPED
> blk-mq: prepare for draining IO when hctx's all CPUs are offline
> blk-mq: stop to handle IO and drain IO before hctx becomes dead
> blk-mq: re-submit IO in case that hctx is dead
> blk-mq: handle requests dispatched from IO scheduler in case that hctx
> is dead
>
> block/blk-mq-debugfs.c | 2 +
> block/blk-mq-tag.c | 2 +-
> block/blk-mq-tag.h | 2 +
> block/blk-mq.c | 137 ++++++++++++++++++++++++++++++++++---
> block/blk-mq.h | 3 +-
> drivers/block/loop.c | 2 +-
> drivers/md/dm-rq.c | 2 +-
> include/linux/blk-mq.h | 5 ++
> include/linux/cpuhotplug.h | 1 +
> 9 files changed, 141 insertions(+), 15 deletions(-)
>
> Cc: John Garry <john.garry@huawei.com>
> Cc: Bart Van Assche <bvanassche@acm.org>
> Cc: Hannes Reinecke <hare@suse.com>
> Cc: Christoph Hellwig <hch@lst.de>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Keith Busch <keith.busch@intel.com>
>
Thread overview: 32+ messages
2019-10-14 1:50 [PATCH V4 0/5] blk-mq: improvement on handling IO during CPU hotplug Ming Lei
2019-10-14 1:50 ` [PATCH V4 1/5] blk-mq: add new state of BLK_MQ_S_INTERNAL_STOPPED Ming Lei
2019-10-14 1:50 ` [PATCH V4 2/5] blk-mq: prepare for draining IO when hctx's all CPUs are offline Ming Lei
2019-10-14 1:50 ` [PATCH V4 3/5] blk-mq: stop to handle IO and drain IO before hctx becomes dead Ming Lei
2019-11-28 9:29 ` John Garry
2019-10-14 1:50 ` [PATCH V4 4/5] blk-mq: re-submit IO in case that hctx is dead Ming Lei
2019-10-14 1:50 ` [PATCH V4 5/5] blk-mq: handle requests dispatched from IO scheduler " Ming Lei
2019-10-16 8:58 ` [PATCH V4 0/5] blk-mq: improvement on handling IO during CPU hotplug John Garry
2019-10-16 12:07 ` Ming Lei
2019-10-16 16:19 ` John Garry
[not found] ` <55a84ea3-647d-0a76-596c-c6c6b2fc1b75@huawei.com>
2019-10-20 10:14 ` Ming Lei
2019-10-21 9:19 ` John Garry
2019-10-21 9:34 ` Ming Lei
2019-10-21 9:47 ` John Garry
2019-10-21 10:24 ` Ming Lei
2019-10-21 11:49 ` John Garry
2019-10-21 12:53 ` Ming Lei
2019-10-21 14:02 ` John Garry
2019-10-22 0:16 ` Ming Lei
2019-10-22 11:19 ` John Garry
2019-10-22 13:45 ` Ming Lei
2019-10-25 16:33 ` John Garry
2019-10-28 10:42 ` Ming Lei
2019-10-28 11:55 ` John Garry
2019-10-29 1:50 ` Ming Lei
2019-10-29 9:22 ` John Garry
2019-10-29 10:05 ` Ming Lei
2019-10-29 17:54 ` John Garry
2019-10-31 16:28 ` John Garry
2019-11-28 1:09 ` chenxiang (M) [this message]
2019-11-28 2:02 ` Ming Lei
2019-11-28 10:45 ` John Garry