From: Keith Busch <kbusch@kernel.org>
To: Christoph Hellwig <hch@lst.de>
Cc: bigeasy@linutronix.de, tglx@linutronix.de, sagi@grimberg.me,
	linux-nvme@lists.infradead.org, ming.lei@redhat.com
Subject: Re: [RFC PATCH 1/3] nvme/pci: Poll the cq in the primary irq handler
Date: Tue, 3 Dec 2019 04:50:06 -0700
Message-ID: <20191203115006.GB86476@C02WT3WMHTD6.lpcnextlight.net>
In-Reply-To: <20191203075046.GF23881@lst.de>

On Tue, Dec 03, 2019 at 08:50:46AM +0100, Christoph Hellwig wrote:
> On Tue, Dec 03, 2019 at 07:22:04AM +0900, Keith Busch wrote:
> > The nvme threaded interrupt handler reduces CPU time spent in hard irq
> > context, but waking it increases latency for low queue depth workloads.
> >
> > Poll the completion queue once from the primary handler and wake the
> > thread only if more completions remain afterward.
>
> How is this going to work with -rt, which wants to run all actual
> interrupt work in the irq thread?

On -rt, both the primary handler and the bottom half simply run as
threads. This patch just avoids a context switch for shallow workloads,
whether -rt is in use or not.

> > Since there is a window
> > of time where the threaded and primary handlers may run simultaneously,
> > add a new nvmeq flag so that the two can synchronize which owns
> > processing the queue.
>
> If the above scheme is something that the irq subsystem maintainers are
> fine with, I think we need to lift that synchronization into the core
> irq code instead of open coding it in drivers.

It's not generally applicable to all drivers, so it would have to be
opt-in. I think IRQF_ONESHOT pretty much captures the same idea, though.

> But I'd love to understand what kind of performance this split scheme
> gives us to start with. Do you have any numbers?

I do have a little bit of data. In a queue-depth 1 test, returning
IRQ_WAKE_THREAD adds just under 1 usec of latency compared to completing
in the primary handler. That's about 10% for some of the faster devices
out there.
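For illustration, a rough sketch of the split primary/threaded handler
pattern being discussed. Every name below (struct my_queue, my_poll_cq,
my_cq_has_work, MYQ_THREAD_OWNED) is a placeholder, not the actual
nvme-pci code; the real patch polls the NVMe completion queue directly:

#include <linux/bitops.h>
#include <linux/interrupt.h>

struct my_queue {
	unsigned long flags;
	/* ... device completion queue state ... */
};

#define MYQ_THREAD_OWNED	0	/* thread is draining the queue */

/* Placeholder: reap posted completions, return how many were reaped. */
static int my_poll_cq(struct my_queue *q);
/* Placeholder: check whether more completions are pending. */
static bool my_cq_has_work(struct my_queue *q);

static irqreturn_t my_irq(int irq, void *data)
{
	struct my_queue *q = data;

	/* If the thread owns the queue, it will reap this work. */
	if (test_bit(MYQ_THREAD_OWNED, &q->flags))
		return IRQ_HANDLED;

	/* Poll once from hard irq context: the common shallow case. */
	if (!my_poll_cq(q))
		return IRQ_NONE;

	/* Deeper queue: hand ownership to the thread and wake it. */
	if (my_cq_has_work(q)) {
		set_bit(MYQ_THREAD_OWNED, &q->flags);
		return IRQ_WAKE_THREAD;
	}

	return IRQ_HANDLED;
}

static irqreturn_t my_irq_thread(int irq, void *data)
{
	struct my_queue *q = data;

	/* Drain the queue, then release ownership to the primary. */
	do {
		my_poll_cq(q);
	} while (my_cq_has_work(q));

	clear_bit(MYQ_THREAD_OWNED, &q->flags);
	return IRQ_HANDLED;
}

The pair would be registered with request_threaded_irq(irq, my_irq,
my_irq_thread, 0, "my-queue", q). Without IRQF_ONESHOT, the primary can
fire again while the thread is still running, which is exactly the
window the ownership bit is meant to close.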
Thread overview: 11+ messages

2019-12-02 22:22 [RFC PATCH 0/3] nvme threaded interrupt improvements  Keith Busch
2019-12-02 22:22 ` [RFC PATCH 1/3] nvme/pci: Poll the cq in the primary irq handler  Keith Busch
2019-12-03  7:50 ` Christoph Hellwig
2019-12-03  9:39 ` Sebastian Andrzej Siewior
2019-12-03 11:50 ` Keith Busch  [this message]
2019-12-03 10:09 ` Sebastian Andrzej Siewior
2019-12-03 11:16 ` Keith Busch
2019-12-04 10:21 ` Ming Lei
2019-12-02 22:22 ` [RFC PATCH 2/3] nvme/pci: Remove use_threaded_interrupts parameter  Keith Busch
2019-12-02 22:22 ` [RFC PATCH 3/3] nvme/pci: Poll for new completions in irq thread  Keith Busch
2019-12-03 10:17 ` Sebastian Andrzej Siewior