From: Ming Lei <ming.lei@redhat.com>
To: Jens Axboe <axboe@kernel.dk>, Christoph Hellwig <hch@lst.de>,
    "Martin K. Petersen" <martin.petersen@oracle.com>,
    linux-block@vger.kernel.org, linux-nvme@lists.infradead.org,
    linux-scsi@vger.kernel.org
Cc: Sagi Grimberg <sagi@grimberg.me>, Daniel Wagner <dwagner@suse.de>,
    Wen Xiong <wenxiong@us.ibm.com>, John Garry <john.garry@huawei.com>,
    Hannes Reinecke <hare@suse.de>, Keith Busch <kbusch@kernel.org>,
    Damien Le Moal <damien.lemoal@wdc.com>, Ming Lei <ming.lei@redhat.com>
Subject: [PATCH V2 0/6] blk-mq: fix blk_mq_alloc_request_hctx
Date: Fri, 2 Jul 2021 23:05:49 +0800
Message-ID: <20210702150555.2401722-1-ming.lei@redhat.com> (raw)

Hi,

blk_mq_alloc_request_hctx() is used by NVMe fc/rdma/tcp/loop to connect
io queues, and the sw ctx is chosen as the first online CPU in
hctx->cpumask. However, all CPUs in hctx->cpumask may be offline. This
usage model isn't well supported by blk-mq, which assumes that
allocation is always done on an online CPU in hctx->cpumask. That
assumption is tied to managed irq, which also requires blk-mq to drain
inflight requests in a hctx when the last CPU in hctx->cpumask is going
offline.

However, NVMe fc/rdma/tcp/loop don't use managed irq, so we should
allow them to allocate requests even when the specified hctx is
inactive (all CPUs in hctx->cpumask are offline). Fix
blk_mq_alloc_request_hctx() by adding and passing the
BLK_MQ_F_MANAGED_IRQ flag. Meantime, optimize blk-mq CPU hotplug
handling for non-managed irq.
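The allocation policy described above can be modeled in a small
self-contained sketch (plain C, with the cpumask reduced to a bitmask;
`struct hctx_model` and `pick_alloc_cpu` are illustrative stand-ins,
not the kernel API):

```c
#include <assert.h>

#define BLK_MQ_F_MANAGED_IRQ (1u << 0)  /* illustrative flag value */

/* Minimal stand-in for a hardware queue: the set of CPUs mapped to
 * this hctx (bit i set => CPU i) and the queue flags. */
struct hctx_model {
	unsigned int cpumask;
	unsigned int flags;
};

/* Pick a submission CPU for the hctx, mirroring the cover letter's
 * policy: prefer the first online CPU in hctx->cpumask; if every CPU
 * in the mask is offline, a managed-irq queue is inactive and the
 * allocation must fail, while a non-managed-irq queue (NVMe
 * fc/rdma/tcp/loop) may fall back to any online CPU. */
static int pick_alloc_cpu(const struct hctx_model *hctx,
			  unsigned int online_mask, int ncpus)
{
	int cpu;

	/* First choice: lowest online CPU in hctx->cpumask. */
	for (cpu = 0; cpu < ncpus; cpu++)
		if ((hctx->cpumask & online_mask) & (1u << cpu))
			return cpu;

	/* All CPUs in the mask are offline. */
	if (hctx->flags & BLK_MQ_F_MANAGED_IRQ)
		return -1;	/* managed irq: hctx inactive, refuse */

	/* Non-managed irq: any online CPU will do, so queue-connect
	 * requests can still be allocated and issued. */
	for (cpu = 0; cpu < ncpus; cpu++)
		if (online_mask & (1u << cpu))
			return cpu;
	return -1;		/* no CPU online at all */
}
```

With `cpumask = 0x3` (CPUs 0-1) and only CPUs 2-3 online, the
non-managed queue falls back to CPU 2, while the same queue with
BLK_MQ_F_MANAGED_IRQ set returns -1, which is the behavioral split the
series introduces.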
V2:
	- use flag of BLK_MQ_F_MANAGED_IRQ
	- pass BLK_MQ_F_MANAGED_IRQ from driver explicitly
	- kill BLK_MQ_F_STACKING

Ming Lei (6):
  blk-mq: prepare for not deactivating hctx if managed irq isn't used
  nvme: pci: pass BLK_MQ_F_MANAGED_IRQ to blk-mq
  scsi: add flag of .use_managed_irq to 'struct Scsi_Host'
  scsi: set shost->use_managed_irq if driver uses managed irq
  virtio: add one field into virtio_device for recording if device uses
    managed irq
  blk-mq: don't deactivate hctx if managed irq isn't used

 block/blk-mq-debugfs.c                    |  2 +-
 block/blk-mq.c                            | 27 +++++++++++++----------
 drivers/block/loop.c                      |  2 +-
 drivers/block/virtio_blk.c                |  2 ++
 drivers/md/dm-rq.c                        |  2 +-
 drivers/nvme/host/pci.c                   |  3 ++-
 drivers/scsi/aacraid/linit.c              |  3 +++
 drivers/scsi/be2iscsi/be_main.c           |  3 +++
 drivers/scsi/csiostor/csio_init.c         |  3 +++
 drivers/scsi/hisi_sas/hisi_sas_v3_hw.c    |  1 +
 drivers/scsi/hpsa.c                       |  3 +++
 drivers/scsi/lpfc/lpfc.h                  |  1 +
 drivers/scsi/lpfc/lpfc_init.c             |  4 ++++
 drivers/scsi/megaraid/megaraid_sas_base.c |  3 +++
 drivers/scsi/mpt3sas/mpt3sas_scsih.c      |  3 +++
 drivers/scsi/qla2xxx/qla_isr.c            |  5 ++++-
 drivers/scsi/scsi_lib.c                   | 12 +++++-----
 drivers/scsi/smartpqi/smartpqi_init.c     |  3 +++
 drivers/scsi/virtio_scsi.c                |  1 +
 drivers/virtio/virtio_pci_common.c        |  1 +
 include/linux/blk-mq.h                    |  6 +----
 include/linux/virtio.h                    |  1 +
 include/scsi/scsi_host.h                  |  3 +++
 23 files changed, 67 insertions(+), 27 deletions(-)

-- 
2.31.1