Linux-NVME Archive on lore.kernel.org
From: Christoph Hellwig <hch@lst.de>
To: Keith Busch <kbusch@kernel.org>
Cc: sagi@grimberg.me, bigeasy@linutronix.de,
	linux-nvme@lists.infradead.org, ming.lei@redhat.com,
	tglx@linutronix.de, hch@lst.de
Subject: Re: [RFC PATCH 1/3] nvme/pci: Poll the cq in the primary irq handler
Date: Tue, 3 Dec 2019 08:50:46 +0100
Message-ID: <20191203075046.GF23881@lst.de>
In-Reply-To: <20191202222206.2225-2-kbusch@kernel.org>

On Tue, Dec 03, 2019 at 07:22:04AM +0900, Keith Busch wrote:
> The nvme threaded interrupt handler reduces CPU time spent in hard irq
> context, but waking it increases latency for low queue depth workloads.
> 
> Poll the completion queue once from the primary handler and wake the
> thread only if more completions remain after.

How is this going to work with -rt, which wants to run all actual
interrupt work in the irq thread?

> Since there is a window of time where the threaded and primary handlers
> may run simultaneously, add a new nvmeq flag so that the two can
> synchronize on which of them owns processing the queue.

If the above scheme is something that the irq subsystem maintainers are
fine with, I think we need to lift that synchronization into the core
irq code instead of open-coding it in drivers.

But I'd love to understand what kind of performance this split scheme
gives us to start with.  Do you have any numbers?
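
For reference, the split being discussed has roughly the following shape.
This is only a sketch of the idea, not the actual RFC patch:
nvme_process_cq() and nvme_cqe_pending() loosely mirror helpers in
drivers/nvme/host/pci.c (the real signatures differ), and NVMEQ_THREAD_BUSY
is a placeholder name for the new nvmeq flag.

	static irqreturn_t nvme_irq(int irq, void *data)
	{
		struct nvme_queue *nvmeq = data;

		/* The irq thread still owns the queue; let it keep reaping. */
		if (test_bit(NVMEQ_THREAD_BUSY, &nvmeq->flags))
			return IRQ_HANDLED;

		/*
		 * One polling pass in hard-irq context covers the common
		 * low queue depth case without paying for a thread wakeup.
		 */
		nvme_process_cq(nvmeq);
		if (!nvme_cqe_pending(nvmeq))
			return IRQ_HANDLED;

		/* More completions arrived; hand ownership to the thread. */
		set_bit(NVMEQ_THREAD_BUSY, &nvmeq->flags);
		return IRQ_WAKE_THREAD;
	}

	static irqreturn_t nvme_irq_thread(int irq, void *data)
	{
		struct nvme_queue *nvmeq = data;

		while (nvme_cqe_pending(nvmeq))
			nvme_process_cq(nvmeq);

		clear_bit(NVMEQ_THREAD_BUSY, &nvmeq->flags);
		return IRQ_HANDLED;
	}

Even this sketch has a window: a completion that lands between the thread's
last pending check and the clear_bit() is not reaped until the next
interrupt, which is exactly the kind of race the flag in the real patch has
to close and which arguably belongs in the core irq code.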

_______________________________________________
linux-nvme mailing list
linux-nvme@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-nvme

Thread overview: 11+ messages
2019-12-02 22:22 [RFC PATCH 0/3] nvme threaded interrupt improvements Keith Busch
2019-12-02 22:22 ` [RFC PATCH 1/3] nvme/pci: Poll the cq in the primary irq handler Keith Busch
2019-12-03  7:50   ` Christoph Hellwig [this message]
2019-12-03  9:39     ` Sebastian Andrzej Siewior
2019-12-03 11:50     ` Keith Busch
2019-12-03 10:09   ` Sebastian Andrzej Siewior
2019-12-03 11:16     ` Keith Busch
2019-12-04 10:21       ` Ming Lei
2019-12-02 22:22 ` [RFC PATCH 2/3] nvme/pci: Remove use_threaded_interrupts parameter Keith Busch
2019-12-02 22:22 ` [RFC PATCH 3/3] nvme/pci: Poll for new completions in irq thread Keith Busch
2019-12-03 10:17   ` Sebastian Andrzej Siewior

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=20191203075046.GF23881@lst.de \
    --to=hch@lst.de \
    --cc=bigeasy@linutronix.de \
    --cc=kbusch@kernel.org \
    --cc=linux-nvme@lists.infradead.org \
    --cc=ming.lei@redhat.com \
    --cc=sagi@grimberg.me \
    --cc=tglx@linutronix.de \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link


Archives are clonable:
	git clone --mirror https://lore.kernel.org/linux-nvme/0 linux-nvme/git/0.git

	# If you have public-inbox 1.1+ installed, you may
	# initialize and index your mirror using the following commands:
	public-inbox-init -V2 linux-nvme linux-nvme/ https://lore.kernel.org/linux-nvme \
		linux-nvme@lists.infradead.org
	public-inbox-index linux-nvme

Example config snippet for mirrors
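
For a local mirror, the corresponding ~/.public-inbox/config entry looks
roughly like this (a sketch only; exact key names vary between public-inbox
versions, and the inboxdir path is whatever you gave public-inbox-init):

	[publicinbox "linux-nvme"]
		address = linux-nvme@lists.infradead.org
		url = https://lore.kernel.org/linux-nvme/
		inboxdir = /path/to/linux-nvme
		newsgroup = org.infradead.lists.linux-nvme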

Newsgroup available over NNTP:
	nntp://nntp.lore.kernel.org/org.infradead.lists.linux-nvme


AGPL code for this site: git clone https://public-inbox.org/public-inbox.git