From: "Sironi, Filippo" <sironi@amazon.de>
To: Keith Busch <kbusch@kernel.org>
Cc: "sagi@grimberg.me" <sagi@grimberg.me>,
	"bigeasy@linutronix.de" <bigeasy@linutronix.de>,
	"linux-nvme@lists.infradead.org" <linux-nvme@lists.infradead.org>,
	"ming.lei@redhat.com" <ming.lei@redhat.com>,
	"helgaas@kernel.org" <helgaas@kernel.org>,
	"tglx@linutronix.de" <tglx@linutronix.de>,
	"hch@lst.de" <hch@lst.de>
Subject: Re: [PATCHv2 2/2] nvme/pci: Mask device interrupts for threaded handlers
Date: Wed, 4 Dec 2019 10:10:05 +0000	[thread overview]
Message-ID: <1646A1C5-C2E3-46CD-9269-115601132C4B@amazon.de> (raw)
In-Reply-To: <20191202222058.2096-3-kbusch@kernel.org>



> On 2 Dec 2019, at 23:20, Keith Busch <kbusch@kernel.org> wrote:
> 
> Local interrupts are re-enabled when the nvme irq thread is woken, so
> subsequent MSI or level-triggered interrupts may restart the nvme irq
> check while the thread handler is running. This needlessly spends CPU
> cycles and can trigger spurious interrupt detection, disabling our
> nvme irq.
> 
> Prevent the controller from sending further messages while the thread
> runs. For legacy and MSI, use the nvme interrupt mask/clear registers.
> For MSI-X, set the vector control mask bit in the MSI-X table via
> __pci_msix_desc_mask_irq().
> 
> Signed-off-by: Keith Busch <kbusch@kernel.org>
> ---
> drivers/nvme/host/pci.c | 28 ++++++++++++++++++++++++----
> 1 file changed, 24 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
> index 0590640ba62c..4022a872d29c 100644
> --- a/drivers/nvme/host/pci.c
> +++ b/drivers/nvme/host/pci.c
> @@ -13,8 +13,10 @@
> #include <linux/init.h>
> #include <linux/interrupt.h>
> #include <linux/io.h>
> +#include <linux/irq.h>
> #include <linux/mm.h>
> #include <linux/module.h>
> +#include <linux/msi.h>
> #include <linux/mutex.h>
> #include <linux/once.h>
> #include <linux/pci.h>
> @@ -1036,12 +1038,29 @@ static irqreturn_t nvme_irq(int irq, void *data)
> 	return ret;
> }
> 
> +static irqreturn_t nvme_irq_thread(int irq, void *data)
> +{
> +	struct nvme_queue *nvmeq = data;
> +
> +	nvme_irq(irq, data);
> +	if (to_pci_dev(nvmeq->dev->dev)->msix_enabled)
> +		__pci_msix_desc_mask_irq(irq_get_msi_desc(irq), 0);
> +	else
> +		writel(1 << nvmeq->cq_vector, nvmeq->dev->bar + NVME_REG_INTMC);
> +	return IRQ_HANDLED;
> +}
> +
> static irqreturn_t nvme_irq_check(int irq, void *data)
> {
> 	struct nvme_queue *nvmeq = data;
> -	if (nvme_cqe_pending(nvmeq))
> -		return IRQ_WAKE_THREAD;
> -	return IRQ_NONE;
> +
> +	if (!nvme_cqe_pending(nvmeq))
> +		return IRQ_NONE;
> +	if (to_pci_dev(nvmeq->dev->dev)->msix_enabled)
> +		__pci_msix_desc_mask_irq(irq_get_msi_desc(irq), 1);

Have you considered that __pci_msix_desc_mask_irq will cause
a trap from guest to hypervisor mode when running virtualized?
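
The mask bit lives in the device's MSI-X table, so each mask and unmask
is an MMIO write that hypervisors typically trap and emulate: with this
patch, every interrupt costs an extra pair of VM exits. Simplified
sketch of the helper (trimmed from drivers/pci/msi.c for illustration):

u32 __pci_msix_desc_mask_irq(struct msi_desc *desc, u32 flag)
{
	u32 mask_bits = desc->masked;

	if (pci_msi_ignore_mask)
		return 0;

	mask_bits &= ~PCI_MSIX_ENTRY_CTRL_MASKBIT;
	if (flag)
		mask_bits |= PCI_MSIX_ENTRY_CTRL_MASKBIT;
	/* MMIO write into the MSI-X table: this is the access that traps */
	writel(mask_bits, pci_msix_desc_addr(desc) + PCI_MSIX_ENTRY_VECTOR_CTRL);

	return mask_bits;
}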

> +	else
> +		writel(1 << nvmeq->cq_vector, nvmeq->dev->bar + NVME_REG_INTMS);

What's stopping us from always using this method?
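
Untested sketch of what always using the controller registers would
look like, assuming the device honors INTMS/INTMC in MSI-X mode as
well (IIRC the spec says host software shall not access these
registers when MSI-X is in use, which may be the blocker):

static irqreturn_t nvme_irq_check(int irq, void *data)
{
	struct nvme_queue *nvmeq = data;

	if (!nvme_cqe_pending(nvmeq))
		return IRQ_NONE;
	/* hypothetical: mask via INTMS regardless of interrupt mode */
	writel(1 << nvmeq->cq_vector, nvmeq->dev->bar + NVME_REG_INTMS);
	return IRQ_WAKE_THREAD;
}

static irqreturn_t nvme_irq_thread(int irq, void *data)
{
	struct nvme_queue *nvmeq = data;

	nvme_irq(irq, data);
	/* hypothetical: unmask via INTMC, again regardless of mode */
	writel(1 << nvmeq->cq_vector, nvmeq->dev->bar + NVME_REG_INTMC);
	return IRQ_HANDLED;
}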

> +	return IRQ_WAKE_THREAD;
> }
> 
> /*
> @@ -1499,7 +1518,8 @@ static int queue_request_irq(struct nvme_queue *nvmeq)
> 
> 	if (use_threaded_interrupts) {
> 		return pci_request_irq(pdev, nvmeq->cq_vector, nvme_irq_check,
> -				nvme_irq, nvmeq, "nvme%dq%d", nr, nvmeq->qid);
> +				nvme_irq_thread, nvmeq, "nvme%dq%d", nr,
> +				nvmeq->qid);
> 	} else {
> 		return pci_request_irq(pdev, nvmeq->cq_vector, nvme_irq,
> 				NULL, nvmeq, "nvme%dq%d", nr, nvmeq->qid);
> -- 
> 2.21.0

