From: Ming Lei <firstname.lastname@example.org>
To: Keith Busch <email@example.com>
Cc: Sebastian Andrzej Siewior <firstname.lastname@example.org>,
	email@example.com, firstname.lastname@example.org, email@example.com
Subject: Re: [RFC PATCH 1/3] nvme/pci: Poll the cq in the primary irq handler
Date: Wed, 4 Dec 2019 18:21:55 +0800
Message-ID: <20191204102155.GC5958@ming.t460p> (raw)
In-Reply-To: <20191203111626.GA86476@C02WT3WMHTD6.lpcnextlight.net>

On Tue, Dec 03, 2019 at 04:16:26AM -0700, Keith Busch wrote:
> On Tue, Dec 03, 2019 at 11:09:30AM +0100, Sebastian Andrzej Siewior wrote:
> > On 2019-12-03 07:22:04 [+0900], Keith Busch wrote:
> > > The nvme threaded interrupt handler reduces CPU time spent in hard irq
> > > context, but waking it increases latency for low queue depth workloads.
> > >
> > > Poll the completion queue once from the primary handler and wake the
> > > thread only if more completions remain after. Since there is a window
> > > of time where the threaded and primary handlers may run simultaneously,
> > > add a new nvmeq flag so that the two can synchronize which owns processing
> > > the queue.
> >
> > It depends on what you mean by "run simultaneously", but it sounds like
> > this does not happen.
> >
> > The primary handler disables the interrupt source and returns
> > IRQ_WAKE_THREAD. From now on, the primary handler won't fire (unless it
> > is a shared handler and someone else gets an interrupt).
>
> The driver won't share these interrupts, despite some weird pci
> host bridges that force sharing among other devices (ex: see the
> only user of handle_untracked_irq). That isn't what I was considering
> though.
>
> It's true the controller won't send new MSIs once masked, but my
> concern is for MSIs on the wire that pass that MMIO mask write.

Could you explain a bit what the 'MSIs on the wire' are, and where they come from?
Thanks,
Ming
next prev parent reply	other threads:[~2019-12-04 10:22 UTC|newest]

Thread overview: 11+ messages (top)
2019-12-02 22:22 [RFC PATCH 0/3] nvme threaded interrupt improvements Keith Busch
2019-12-02 22:22 ` [RFC PATCH 1/3] nvme/pci: Poll the cq in the primary irq handler Keith Busch
2019-12-03  7:50   ` Christoph Hellwig
2019-12-03  9:39     ` Sebastian Andrzej Siewior
2019-12-03 11:50       ` Keith Busch
2019-12-03 10:09   ` Sebastian Andrzej Siewior
2019-12-03 11:16     ` Keith Busch
2019-12-04 10:21       ` Ming Lei [this message]
2019-12-02 22:22 ` [RFC PATCH 2/3] nvme/pci: Remove use_threaded_interrupts parameter Keith Busch
2019-12-02 22:22 ` [RFC PATCH 3/3] nvme/pci: Poll for new completions in irq thread Keith Busch
2019-12-03 10:17   ` Sebastian Andrzej Siewior