linux-block.vger.kernel.org archive mirror
From: John Garry <john.garry@huawei.com>
To: Ming Lei <ming.lei@redhat.com>, Jens Axboe <axboe@kernel.dk>
Cc: <linux-block@vger.kernel.org>,
	Minwoo Im <minwoo.im.dev@gmail.com>,
	"Bart Van Assche" <bvanassche@acm.org>,
	Hannes Reinecke <hare@suse.com>, "Christoph Hellwig" <hch@lst.de>,
	Thomas Gleixner <tglx@linutronix.de>,
	Keith Busch <keith.busch@intel.com>,
	chenxiang <chenxiang66@hisilicon.com>
Subject: Re: [PATCH V2 0/5] blk-mq: improvement on handling IO during CPU hotplug
Date: Wed, 2 Oct 2019 10:56:46 +0100	[thread overview]
Message-ID: <9add1de0-9135-fd1f-8fd4-de17234d8883@huawei.com> (raw)
In-Reply-To: <a2f9e930-1b9c-dc95-78f8-70df9460669d@huawei.com>

On 22/08/2019 18:39, John Garry wrote:
> On 12/08/2019 14:43, Ming Lei wrote:
>> Hi,
>>
>> Thomas mentioned:
>>     "
>>      That was the constraint of managed interrupts from the very
>>      beginning:
>>
>>       The driver/subsystem has to quiesce the interrupt line and the
>>       associated queue _before_ it gets shutdown in CPU unplug and not
>>       fiddle with it until it's restarted by the core when the CPU is
>>       plugged in again.
>>     "
>>
>> But no driver, nor blk-mq itself, does that before one hctx becomes
>> dead (all CPUs for one hctx are offline), and it is even worse: blk-mq
>> still tries to run the hw queue after the hctx is dead, see
>> blk_mq_hctx_notify_dead().
>>
>> This patchset tries to address the issue by two stages:
>>
>> 1) add one new cpuhp state of CPUHP_AP_BLK_MQ_ONLINE
>>
>> - mark the hctx as internally stopped, and drain all in-flight requests
>> when the hctx is about to become dead.
>>
>> 2) re-submit IO in the state of CPUHP_BLK_MQ_DEAD after the hctx
>> becomes dead
>>
>> - steal bios from the requests and resubmit them via
>> generic_make_request(); these IOs will then be mapped to other live
>> hctxs for dispatch
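
To check my understanding of the two stages, here is a toy userspace
model (plain Python, obviously nothing like the kernel code; the hctx
class, the stopped flag and the resubmission step are simplified
stand-ins for BLK_MQ_S_INTERNAL_STOPPED and generic_make_request()):

```python
# Toy model of the two-stage hotplug handling described above.
# Stage 1 (online-state teardown): mark the hctx stopped and drain
# in-flight requests. Stage 2 (hctx dead): steal the remaining queued
# requests and resubmit them, so they land on a live hctx.

class Hctx:
    def __init__(self, name, cpus):
        self.name = name
        self.cpus = set(cpus)   # CPUs mapped to this hw queue
        self.stopped = False    # models BLK_MQ_S_INTERNAL_STOPPED
        self.in_flight = []     # requests already dispatched to hardware
        self.queued = []        # requests not yet dispatched

def cpu_offline(cpu, hctxs, live_hctx):
    for hctx in hctxs:
        hctx.cpus.discard(cpu)
        if hctx.cpus:
            continue            # other CPUs still serve this hctx
        # Stage 1: the last CPU of this hctx is going away,
        # so stop the queue and drain what the hardware already has.
        hctx.stopped = True
        while hctx.in_flight:   # simulated wait for hw completions
            hctx.in_flight.pop()
        # Stage 2: the hctx is now dead; hand the remaining requests
        # to a live hctx (the moral equivalent of resubmitting each
        # bio via generic_make_request()).
        while hctx.queued:
            live_hctx.queued.append(hctx.queued.pop(0))

hctx0 = Hctx("hctx0", cpus=[0])
hctx1 = Hctx("hctx1", cpus=[1])
hctx0.in_flight = ["rq0"]
hctx0.queued = ["rq1", "rq2"]
cpu_offline(0, [hctx0, hctx1], live_hctx=hctx1)
print(hctx0.stopped, hctx0.queued, hctx1.queued)
# -> True [] ['rq1', 'rq2']
```

The point is just the ordering: the hctx is stopped and drained before
its last CPU goes away, and only afterwards are the leftover requests
handed to a live hctx.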
>>
>> Please comment & review, thanks!
>>
>> V2:
>>     - patch 4 & patch 5 in V1 have been merged to the block tree, so
>>       remove them
>>     - address comments from John Garry and Minwoo
>>
>>
>> Ming Lei (5):
>>   blk-mq: add new state of BLK_MQ_S_INTERNAL_STOPPED
>>   blk-mq: add blk-mq flag of BLK_MQ_F_NO_MANAGED_IRQ
>>   blk-mq: stop to handle IO before hctx's all CPUs become offline
>>   blk-mq: re-submit IO in case that hctx is dead
>>   blk-mq: handle requests dispatched from IO scheduler in case that hctx
>>     is dead
>
> Hi Ming,
>
> This looks to fix the hotplug issue for me.
>
> Previously I could manufacture a scenario while running fio where I got
> IO timeouts, like this:
>
> root@(none)$ echo 0 > ./sys/devices/system/cpu/cpu0/online
> [  296.897627] process 891 (fio) no longer affine to cpu0
> [  296.898488] process 893 (fio) no longer affine to cpu0
> [  296.910270] process 890 (fio) no longer affine to cpu0
> [  296.927322] IRQ 775: no longer affine to CPU0
> [  296.932762] CPU0: shutdown
> [  296.935469] psci: CPU0 killed.
> root@(none)$ [  326.971962] sas: Enter sas_scsi_recover_host busy: 61
> failed: 61
> [  326.977978] sas: sas_scsi_find_task: aborting task 0x00000000e2cdc79b
> root@(none)$ [  333.047964] hisi_sas_v3_hw 0000:74:02.0: internal task
> abort: timeout and not done.
> [  333.055616] hisi_sas_v3_hw 0000:74:02.0: abort task: internal abort (-5)
> [  333.062306] sas: sas_scsi_find_task: querying task 0x00000000e2cdc79b
> [  333.068776] sas: sas_scsi_find_task: task 0x00000000e2cdc79b not at LU
> [  333.075295] sas: task 0x00000000e2cdc79b is not at LU: I_T recover
> [  333.081464] sas: I_T nexus reset for dev 5000c500a7b95a49
>
> Please notice the 30-second delay for the SCSI IO timeout.
>
> And now I don't see it; here's a sample for irq shutdown:
> root@(none)$ echo 0 > ./sys/devices/system/cpu/cpu0/online
> [  344.608148] process 849 (fio) no longer affine to cpu0
> [  344.608639] process 848 (fio) no longer affine to cpu0
> [  344.609454] process 850 (fio) no longer affine to cpu0
> [  344.643481] process 847 (fio) no longer affine to cpu0
> [  346.213842] IRQ 775: no longer affine to CPU0
> [  346.219712] CPU0: shutdown
> [  346.222425] psci: CPU0 killed.
>
> Please notice the ~1.5s pause, which would be the queue draining.
>
> So FWIW:
> Tested-by: John Garry <john.garry@huawei.com>
>
> JFYI, I tested on 5.3-rc5 and cherry-picked
> https://github.com/ming1/linux/commit/0d2cd3c99bb0fe81d2c0ca5d68e02bdc4521d4d6
> and "blk-mq: add callback of .cleanup_rq".
>
> Cheers,
> John

Hi Jens,

I don't mean to be pushy, but can we consider getting these patches 
from Ming merged?

As above, I tested on my SCSI driver and it works. I also tested on an 
NVMe disk, and it eliminates the condition that generates this message:
root@(none)$ echo 0 > /sys/devices/system/cpu/cpu2/online
[  465.635960] CPU2: shutdown
[  465.638662] psci: CPU2 killed.
[  111.381653] nvme nvme0: I/O 705 QID 18 timeout, completion polled

(that's on top of v5.4-rc1)

Thanks,
John



>
>>
>>  block/blk-mq-debugfs.c     |   2 +



Thread overview: 17+ messages
2019-08-12 13:43 [PATCH V2 0/5] blk-mq: improvement on handling IO during CPU hotplug Ming Lei
2019-08-12 13:43 ` [PATCH V2 1/5] blk-mq: add new state of BLK_MQ_S_INTERNAL_STOPPED Ming Lei
2019-08-12 13:43 ` [PATCH V2 2/5] blk-mq: add blk-mq flag of BLK_MQ_F_NO_MANAGED_IRQ Ming Lei
2019-08-12 13:43 ` [PATCH V2 3/5] blk-mq: stop to handle IO before hctx's all CPUs become offline Ming Lei
2019-08-12 14:24   ` Hannes Reinecke
2019-08-12 22:24     ` Ming Lei
2019-08-12 13:43 ` [PATCH V2 4/5] blk-mq: re-submit IO in case that hctx is dead Ming Lei
2019-08-12 14:26   ` Hannes Reinecke
2019-08-12 22:30     ` Ming Lei
2019-08-12 13:43 ` [PATCH V2 5/5] blk-mq: handle requests dispatched from IO scheduler " Ming Lei
2019-08-12 13:46 ` [PATCH V2 0/5] blk-mq: improvement on handling IO during CPU hotplug Ming Lei
2019-08-12 16:21   ` John Garry
2019-08-12 22:45     ` Ming Lei
2019-08-22 17:39 ` John Garry
2019-10-02  9:56   ` John Garry [this message]
2019-10-02 14:36     ` Jens Axboe
2019-10-06  2:47       ` Ming Lei
