linux-nvme.lists.infradead.org archive mirror
From: Keith Busch <kbusch@kernel.org>
To: Ming Lei <ming.lei@redhat.com>
Cc: Keith Busch <kbusch@meta.com>,
	linux-nvme@lists.infradead.org, hch@lst.de
Subject: Re: [PATCHv2] nvme-pci: allow unmanaged interrupts
Date: Fri, 10 May 2024 18:29:23 -0600	[thread overview]
Message-ID: <Zj6745UDnwX1BteO@kbusch-mbp.dhcp.thefacebook.com> (raw)
In-Reply-To: <Zj6yDtRzl68zspQY@fedora>

On Sat, May 11, 2024 at 07:47:26AM +0800, Ming Lei wrote:
> On Fri, May 10, 2024 at 10:46:45AM -0700, Keith Busch wrote:
> >  		map->queue_offset = qoff;
> > -		if (i != HCTX_TYPE_POLL && offset)
> > +		if (managed_irqs && i != HCTX_TYPE_POLL && offset)
> >  			blk_mq_pci_map_queues(map, to_pci_dev(dev->dev), offset);
> >  		else
> >  			blk_mq_map_queues(map);
> 
> Now the queue mapping is built with nothing from irq affinity which is
> setup from userspace, and performance could be pretty bad.

This just decouples the software queue mappings from the irq mappings.
Every CPU still has a blk-mq hctx; there's just no connection to the
completing CPU if you enable this.

Everyone expects nvme performance to suffer. IO latency and CPU
efficiency are not everyone's top priority, so allowing people to
optimize for something else seems like a reasonable request.
 
> Is there any benefit to use unmanaged irq in this way?

The immediate desire is more predictable scheduling on a subset of CPUs
by steering hardware interrupts somewhere else. It's the same reason
RDMA undid managed interrupts.

  231243c82793428 ("Revert "mlx5: move affinity hints assignments to generic code")

Yes, the kernel's managed interrupts are the best choice for optimizing
interaction with that device, but it's not free, and maybe you want to
exchange that optimization for something else.



Thread overview: 8+ messages
2024-05-10 17:46 [PATCHv2] nvme-pci: allow unmanaged interrupts Keith Busch
2024-05-10 23:47 ` Ming Lei
2024-05-11  0:29   ` Keith Busch [this message]
2024-05-11  0:44     ` Ming Lei
2024-05-12 14:16       ` Sagi Grimberg
2024-05-12 22:05         ` Keith Busch
2024-05-13  1:12         ` Ming Lei
2024-05-13  4:09           ` Keith Busch
