linux-nvme.lists.infradead.org archive mirror
From: Keith Busch <kbusch@kernel.org>
To: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: ming.lei@redhat.com, hch@lst.de, linux-nvme@lists.infradead.org,
	sagi@grimberg.me
Subject: Re: [RFC PATCH 1/3] nvme/pci: Poll the cq in the primary irq handler
Date: Tue, 3 Dec 2019 04:16:26 -0700	[thread overview]
Message-ID: <20191203111626.GA86476@C02WT3WMHTD6.lpcnextlight.net> (raw)
In-Reply-To: <20191203100930.r76fiu3s5hlbrlxu@linutronix.de>

On Tue, Dec 03, 2019 at 11:09:30AM +0100, Sebastian Andrzej Siewior wrote:
> On 2019-12-03 07:22:04 [+0900], Keith Busch wrote:
> > The nvme threaded interrupt handler reduces CPU time spent in hard irq
> > context, but waking it increases latency for low queue depth workloads.
> > 
> > Poll the completion queue once from the primary handler and wake the
> > thread only if more completions remain after. Since there is a window
> > of time where the threaded and primary handlers may run simultaneously,
> > add a new nvmeq flag so that the two can synchronize which owns processing
> > the queue.
> 
> It depends on what you mean by "run simultaneously" but it sounds like
> this does not happen.
> 
> The primary handler disables the interrupt source and returns
> IRQ_WAKE_THREAD. From now on, the primary handler won't fire (unless it
> is a shared handler and someone else gets an interrupt).

The driver won't share these interrupts, despite some weird pci
host bridges that force sharing among other devices (ex: see the
only user of handle_untracked_irq). That isn't what I was considering
though.

It's true the controller won't send new MSIs once masked, but my
concern is an MSI already on the wire that passes that MMIO mask
write. Such an MSI retriggers the primary handler after it previously
returned IRQ_WAKE_THREAD.

_______________________________________________
linux-nvme mailing list
linux-nvme@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-nvme

  reply	other threads:[~2019-12-03 11:16 UTC|newest]

Thread overview: 11+ messages
2019-12-02 22:22 [RFC PATCH 0/3] nvme threaded interrupt improvements Keith Busch
2019-12-02 22:22 ` [RFC PATCH 1/3] nvme/pci: Poll the cq in the primary irq handler Keith Busch
2019-12-03  7:50   ` Christoph Hellwig
2019-12-03  9:39     ` Sebastian Andrzej Siewior
2019-12-03 11:50     ` Keith Busch
2019-12-03 10:09   ` Sebastian Andrzej Siewior
2019-12-03 11:16     ` Keith Busch [this message]
2019-12-04 10:21       ` Ming Lei
2019-12-02 22:22 ` [RFC PATCH 2/3] nvme/pci: Remove use_threaded_interrupts parameter Keith Busch
2019-12-02 22:22 ` [RFC PATCH 3/3] nvme/pci: Poll for new completions in irq thread Keith Busch
2019-12-03 10:17   ` Sebastian Andrzej Siewior
