From: Ming Lei <ming.lei@redhat.com>
To: John Garry <john.garry@huawei.com>
Cc: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>,
linux-block@vger.kernel.org, linux-nvme@lists.infradead.org,
Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
Bjorn Helgaas <bhelgaas@google.com>,
linux-pci@vger.kernel.org, Thomas Gleixner <tglx@linutronix.de>,
Sagi Grimberg <sagi@grimberg.me>, Daniel Wagner <dwagner@suse.de>,
Wen Xiong <wenxiong@us.ibm.com>, Hannes Reinecke <hare@suse.de>,
Keith Busch <kbusch@kernel.org>
Subject: Re: [PATCH V4 1/3] driver core: mark device as irq affinity managed if any irq is managed
Date: Tue, 20 Jul 2021 10:38:08 +0800 [thread overview]
Message-ID: <YPY3EMngyf2JFZ3j@T590> (raw)
In-Reply-To: <5153406c-e3ed-d466-5603-14fd919304f4@huawei.com>
On Mon, Jul 19, 2021 at 11:39:53AM +0100, John Garry wrote:
> On 19/07/2021 10:44, Christoph Hellwig wrote:
> > On Mon, Jul 19, 2021 at 08:51:22AM +0100, John Garry wrote:
> > > > Address this issue by adding one field of .irq_affinity_managed into
> > > > 'struct device'.
> > > >
> > > > Suggested-by: Christoph Hellwig <hch@lst.de>
> > > > Signed-off-by: Ming Lei <ming.lei@redhat.com>
> > >
> > > Did you consider that for PCI device we effectively have this info already:
> > >
> > > bool dev_has_managed_msi_irq(struct device *dev)
> > > {
> > > 	struct msi_desc *desc;
> > >
> > > 	list_for_each_entry(desc, dev_to_msi_list(dev), list)
>
> I just noticed for_each_msi_entry(), which is the same
>
>
> > > 		if (desc->affinity && desc->affinity->is_managed)
> > > 			return true;
> > >
> > > 	return false;
> > > }
> >
> > Just walking the list seems fine to me given that this is not a
> > performance criticial path. But what are the locking implications?
>
> Since it would be used from sequential setup code, I didn't think any locking
> was required. But we would need to consider where that function lives and
> whether it's public.
Yeah, the allocated irq vectors should be live when running map queues.
>
> >
> > Also does the above imply this won't work for your platform MSI case?
> > .
> >
>
> Right. I think it may be possible to reach into the platform MSI
> descriptors to get this info, but I am not sure it's worth it. There is only
> one user there and there is no generic .map_queues function, so we could set
> the flag directly:
>
> int blk_mq_pci_map_queues(struct blk_mq_queue_map *qmap, struct pci_dev *pdev,
> 		for_each_cpu(cpu, mask)
> 			qmap->mq_map[cpu] = qmap->queue_offset + queue;
> 	}
> +	qmap->use_managed_irq = dev_has_managed_msi_irq(&pdev->dev);
> }
>
> --- a/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c
> +++ b/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c
> @@ -3563,6 +3563,8 @@ static int map_queues_v2_hw(struct Scsi_Host *shost)
> qmap->mq_map[cpu] = qmap->queue_offset + queue;
> }
>
> + qmap->use_managed_irq = 1;
> +
> return 0;
virtio can be populated via platform device too, but managed irq affinity
isn't used there, so it seems dev_has_managed_msi_irq() is fine.
Thanks,
Ming