From: Keith Busch <kbusch@kernel.org>
To: linux-nvme@lists.infradead.org
Cc: Keith Busch <kbusch@kernel.org>, bigeasy@linutronix.de, ming.lei@redhat.com, hch@lst.de, sagi@grimberg.me
Subject: [RFC PATCH 1/3] nvme/pci: Poll the cq in the primary irq handler
Date: Tue, 3 Dec 2019 07:22:04 +0900
Message-ID: <20191202222206.2225-2-kbusch@kernel.org>
In-Reply-To: <20191202222206.2225-1-kbusch@kernel.org>

The nvme threaded interrupt handler reduces CPU time spent in hard irq
context, but waking it increases latency for low queue depth workloads.
Poll the completion queue once from the primary handler and wake the
thread only if completions remain afterward.

Since there is a window of time in which the threaded and primary
handlers may run simultaneously, add a new nvmeq flag so the two can
synchronize which of them owns processing the queue.

Signed-off-by: Keith Busch <kbusch@kernel.org>
---
 drivers/nvme/host/pci.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 4022a872d29c..c811c0984fe0 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -186,6 +186,7 @@ struct nvme_queue {
 #define NVMEQ_SQ_CMB		1
 #define NVMEQ_DELETE_ERROR	2
 #define NVMEQ_POLLED		3
+#define NVMEQ_THREAD		4
 	u32 *dbbuf_sq_db;
 	u32 *dbbuf_cq_db;
 	u32 *dbbuf_sq_ei;
@@ -1047,15 +1048,27 @@ static irqreturn_t nvme_irq_thread(int irq, void *data)
 		__pci_msix_desc_mask_irq(irq_get_msi_desc(irq), 0);
 	else
 		writel(1 << nvmeq->cq_vector, nvmeq->dev->bar + NVME_REG_INTMC);
+	clear_bit(NVMEQ_THREAD, &nvmeq->flags);
 	return IRQ_HANDLED;
 }
 
 static irqreturn_t nvme_irq_check(int irq, void *data)
 {
 	struct nvme_queue *nvmeq = data;
+	irqreturn_t ret;
 
 	if (!nvme_cqe_pending(nvmeq))
 		return IRQ_NONE;
+
+	if (test_and_set_bit(NVMEQ_THREAD, &nvmeq->flags))
+		return IRQ_NONE;
+
+	ret = nvme_irq(irq, data);
+	if (!nvme_cqe_pending(nvmeq)) {
+		clear_bit(NVMEQ_THREAD, &nvmeq->flags);
+		return ret;
+	}
+
 	if (to_pci_dev(nvmeq->dev->dev)->msix_enabled)
 		__pci_msix_desc_mask_irq(irq_get_msi_desc(irq), 1);
 	else
-- 
2.21.0
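To make the synchronization easier to follow outside the diff, below is a
minimal userspace C sketch of the ownership handoff the patch implements.
It is an illustration only, not the driver code: struct queue,
cqe_pending(), process_completions(), the handler names, and the return
codes are hypothetical stand-ins for the nvmeq flags word with its
NVMEQ_THREAD bit, nvme_cqe_pending(), nvme_irq(), and the
IRQ_NONE/IRQ_HANDLED/IRQ_WAKE_THREAD values; C11 atomic_flag stands in
for the kernel's test_and_set_bit()/clear_bit().

/* Userspace sketch of the NVMEQ_THREAD ownership handoff.
 * All names are hypothetical stand-ins, not the driver's API. */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

enum handler_ret { NONE, HANDLED, WAKE_THREAD };  /* like IRQ_NONE etc. */

struct queue {
	atomic_int  pending;	/* stand-in for unread CQEs */
	atomic_flag owned;	/* stand-in for the NVMEQ_THREAD bit */
};

static bool cqe_pending(struct queue *q)	/* like nvme_cqe_pending() */
{
	return atomic_load(&q->pending) > 0;
}

static void process_completions(struct queue *q)	/* like nvme_irq() */
{
	/* Drain at most two CQEs per pass so the wake path is reachable
	 * in this demo; the driver's nvme_irq() has no such limit. */
	int n = atomic_load(&q->pending);
	atomic_store(&q->pending, n > 2 ? n - 2 : 0);
}

/* Analogue of nvme_irq_check(): claim the queue with test-and-set,
 * poll once inline, and defer to the thread only if work remains. */
static enum handler_ret primary_handler(struct queue *q)
{
	if (!cqe_pending(q))
		return NONE;
	if (atomic_flag_test_and_set(&q->owned))
		return NONE;		/* thread still owns the queue */
	process_completions(q);
	if (!cqe_pending(q)) {
		atomic_flag_clear(&q->owned);
		return HANDLED;		/* fast path: no thread wakeup */
	}
	return WAKE_THREAD;		/* keep ownership; patch also masks the irq here */
}

/* Analogue of nvme_irq_thread(): drain, then release ownership. */
static enum handler_ret thread_handler(struct queue *q)
{
	process_completions(q);
	atomic_flag_clear(&q->owned);	/* hand the queue back */
	return HANDLED;
}

int main(void)
{
	struct queue q = { .pending = 3, .owned = ATOMIC_FLAG_INIT };

	/* Three CQEs queued: the primary pass drains two, sees one left,
	 * and asks for the thread; the thread drains the rest. */
	printf("primary: %d\n", primary_handler(&q));	/* WAKE_THREAD */
	printf("thread:  %d\n", thread_handler(&q));	/* HANDLED */
	printf("primary: %d\n", primary_handler(&q));	/* NONE, queue idle */
	return 0;
}

The point the sketch mirrors is the handoff discipline: the primary
handler must win the test-and-set before touching the CQ, so it never
processes completions concurrently with the thread, and whichever side
finds the queue drained is the one that releases ownership. In the patch
itself the thread additionally re-enables the interrupt before clearing
NVMEQ_THREAD, so a new interrupt arriving in that window backs off with
IRQ_NONE rather than racing with the thread.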
Thread overview: 11+ messages

2019-12-02 22:22 [RFC PATCH 0/3] nvme threaded interrupt improvements Keith Busch
2019-12-02 22:22 ` Keith Busch [this message]
2019-12-03  7:50 ` [RFC PATCH 1/3] nvme/pci: Poll the cq in the primary irq handler Christoph Hellwig
2019-12-03  9:39 ` Sebastian Andrzej Siewior
2019-12-03 11:50 ` Keith Busch
2019-12-03 10:09 ` Sebastian Andrzej Siewior
2019-12-03 11:16 ` Keith Busch
2019-12-04 10:21 ` Ming Lei
2019-12-02 22:22 ` [RFC PATCH 2/3] nvme/pci: Remove use_threaded_interrupts parameter Keith Busch
2019-12-02 22:22 ` [RFC PATCH 3/3] nvme/pci: Poll for new completions in irq thread Keith Busch
2019-12-03 10:17 ` Sebastian Andrzej Siewior