linux-block.vger.kernel.org archive mirror
* [PATCH V4 0/3] blk-mq: fix blk_mq_alloc_request_hctx
@ 2021-07-15 12:08 Ming Lei
  2021-07-15 12:08 ` [PATCH V4 1/3] driver core: mark device as irq affinity managed if any irq is managed Ming Lei
                   ` (2 more replies)
  0 siblings, 3 replies; 22+ messages in thread
From: Ming Lei @ 2021-07-15 12:08 UTC (permalink / raw)
  To: Jens Axboe, Christoph Hellwig, linux-block, linux-nvme,
	Greg Kroah-Hartman, Bjorn Helgaas, linux-pci
  Cc: Thomas Gleixner, Sagi Grimberg, Daniel Wagner, Wen Xiong,
	John Garry, Hannes Reinecke, Keith Busch, Ming Lei

Hi,

blk_mq_alloc_request_hctx() is used by NVMe fc/rdma/tcp/loop to connect
io queues. The sw ctx is chosen as the first online CPU in hctx->cpumask;
however, all CPUs in hctx->cpumask may be offline.

This usage model isn't well supported by blk-mq, which assumes the
allocation is always done on an online CPU in hctx->cpumask. That
assumption is tied to managed irqs, which also require blk-mq to drain
in-flight requests in the hctx when the last CPU in hctx->cpumask goes
offline.
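
For context, that's why the last patch simply skips the hctx
deactivation when managed irqs aren't used. A simplified sketch of the
idea (not the exact hunk; the 'use_managed_irq' queue map field comes
from patch 2/3, and the rest of the handler is abridged):

static int blk_mq_hctx_notify_offline(unsigned int cpu, struct hlist_node *node)
{
	struct blk_mq_hw_ctx *hctx = hlist_entry_safe(node,
			struct blk_mq_hw_ctx, cpuhp_online);

	/* no managed irq -> no vector shutdown, nothing to drain */
	if (!hctx->queue->tag_set->map[HCTX_TYPE_DEFAULT].use_managed_irq)
		return 0;

	/* ... existing "last CPU in hctx->cpumask goes offline" drain logic ... */
	return 0;
}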

However, NVMe fc/rdma/tcp/loop don't use managed irqs, so they should be
allowed to allocate a request even when the specified hctx is inactive
(all CPUs in hctx->cpumask are offline). Fix blk_mq_alloc_request_hctx()
by allowing the allocation when all CPUs of the hctx are offline.
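
On the allocation side, the relaxed CPU selection in
blk_mq_alloc_request_hctx() can be sketched roughly as follows (again a
simplified illustration rather than the exact hunk):

	cpu = cpumask_first_and(data.hctx->cpumask, cpu_online_mask);
	if (cpu >= nr_cpu_ids) {
		/*
		 * All CPUs of this hctx are offline.  Without managed irqs
		 * nothing shuts down the completion path, so any CPU in
		 * hctx->cpumask can be used; otherwise fail the allocation.
		 */
		if (q->tag_set->map[HCTX_TYPE_DEFAULT].use_managed_irq)
			goto out_queue_exit;
		cpu = cpumask_first(data.hctx->cpumask);
	}
	data.ctx = __blk_mq_get_ctx(q, cpu);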


V4:
	- remove patches for cleaning up the queue map helpers
	- take Christoph's suggestion to add a field to 'struct device'
	  describing whether managed irqs are allocated for the device
	  (roughly sketched after the diffstat below)

V3:
	- clean up the map queues helpers, and remove the pci/virtio/rdma
	  queue helpers
	- store managed irq usage info in the qmap


V2:
	- use the BLK_MQ_F_MANAGED_IRQ flag
	- pass BLK_MQ_F_MANAGED_IRQ from the driver explicitly
	- kill BLK_MQ_F_STACKING


Ming Lei (3):
  driver core: mark device as irq affinity managed if any irq is managed
  blk-mq: mark if one queue map uses managed irq
  blk-mq: don't deactivate hctx if managed irq isn't used

 block/blk-mq-pci.c      |  1 +
 block/blk-mq-rdma.c     |  3 +++
 block/blk-mq-virtio.c   |  1 +
 block/blk-mq.c          | 27 +++++++++++++++++----------
 block/blk-mq.h          |  8 ++++++++
 drivers/base/platform.c |  7 +++++++
 drivers/pci/msi.c       |  3 +++
 include/linux/blk-mq.h  |  3 ++-
 include/linux/device.h  |  1 +
 9 files changed, 43 insertions(+), 11 deletions(-)
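
For reference, the plumbing of the first two patches amounts to roughly
the following (identifier names are illustrative and may not match the
hunks exactly):

	/* patch 1/3: irq core / PCI MSI marks the device once it has
	 * allocated managed-affinity vectors (new bit in 'struct device') */
	dev->irq_affinity_managed = true;

	/* patch 2/3: blk_mq_pci_map_queues() and friends propagate that
	 * into the queue map */
	qmap->use_managed_irq = dev->irq_affinity_managed;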

-- 
2.31.1



Thread overview: 22+ messages
2021-07-15 12:08 [PATCH V4 0/3] blk-mq: fix blk_mq_alloc_request_hctx Ming Lei
2021-07-15 12:08 ` [PATCH V4 1/3] driver core: mark device as irq affinity managed if any irq is managed Ming Lei
2021-07-15 12:40   ` Greg Kroah-Hartman
2021-07-16  2:17     ` Ming Lei
2021-07-16 20:01   ` Bjorn Helgaas
2021-07-17  9:30     ` Ming Lei
2021-07-21  0:30       ` Bjorn Helgaas
2021-07-19  7:51   ` John Garry
2021-07-19  9:44     ` Christoph Hellwig
2021-07-19 10:39       ` John Garry
2021-07-20  2:38         ` Ming Lei
2021-07-21  7:20       ` Thomas Gleixner
2021-07-21  7:24         ` Christoph Hellwig
2021-07-21  9:44           ` John Garry
2021-07-21 20:22             ` Thomas Gleixner
2021-07-22  7:48               ` John Garry
2021-07-21 20:14           ` Thomas Gleixner
2021-07-21 20:32             ` Christoph Hellwig
2021-07-21 22:38               ` Thomas Gleixner
2021-07-22  7:46                 ` Christoph Hellwig
2021-07-15 12:08 ` [PATCH V4 2/3] blk-mq: mark if one queue map uses managed irq Ming Lei
2021-07-15 12:08 ` [PATCH V4 3/3] blk-mq: don't deactivate hctx if managed irq isn't used Ming Lei
