linux-pci.vger.kernel.org archive mirror
From: Keith Busch <kbusch@kernel.org>
To: Chris Friesen <chris.friesen@windriver.com>
Cc: Thomas Gleixner <tglx@linutronix.de>,
	Bjorn Helgaas <helgaas@kernel.org>,
	linux-pci@vger.kernel.org, Christoph Hellwig <hch@lst.de>,
	Nitesh Narayan Lal <nitesh@redhat.com>
Subject: Re: PCI, isolcpus, and irq affinity
Date: Thu, 15 Oct 2020 12:02:55 -0700	[thread overview]
Message-ID: <20201015190255.GA1424788@dhcp-10-100-145-180.wdl.wdc.com>
In-Reply-To: <c1747780-cfac-abe1-eb58-5de532c28fba@windriver.com>

On Thu, Oct 15, 2020 at 12:47:23PM -0600, Chris Friesen wrote:
> On 10/12/2020 1:07 PM, Keith Busch wrote:
> > On Mon, Oct 12, 2020 at 12:58:41PM -0600, Chris Friesen wrote:
> > > On 10/12/2020 11:50 AM, Thomas Gleixner wrote:
> > > > On Mon, Oct 12 2020 at 11:58, Bjorn Helgaas wrote:
> > > > > On Mon, Oct 12, 2020 at 09:49:37AM -0600, Chris Friesen wrote:
> > > > > > I've got a linux system running the RT kernel with threaded irqs.  On
> > > > > > startup we affine the various irq threads to the housekeeping CPUs, but I
> > > > > > recently hit a scenario where after some days of uptime we ended up with a
> > > > > > number of NVME irq threads affined to application cores instead (not good
> > > > > > when we're trying to run low-latency applications).
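
For what it's worth, drift like that is easy to spot by comparing each
nvme irq thread's allowed CPUs with the intended housekeeping set.  A
minimal sketch in Python; the housekeeping set {0, 1, 2} and the 'nvme'
name filter are illustrative assumptions, not taken from the report
above:

    #!/usr/bin/env python3
    # Sketch: flag nvme irq threads whose affinity leaked outside the
    # housekeeping CPUs.  HOUSEKEEPING below is an assumed example value.
    import glob, re

    HOUSEKEEPING = {0, 1, 2}   # assumption: replace with your housekeeping set

    def cpulist(spec):
        """Parse a cpulist string like '0-2,6' into a set of CPU numbers."""
        cpus = set()
        for part in spec.strip().split(','):
            if not part:
                continue
            lo, _, hi = part.partition('-')
            cpus.update(range(int(lo), int(hi or lo) + 1))
        return cpus

    for status in glob.glob('/proc/[0-9]*/status'):
        try:
            text = open(status).read()
        except OSError:
            continue                      # task exited while we were scanning
        name = re.search(r'^Name:\s+(\S+)', text, re.M).group(1)
        if not name.startswith('irq/') or 'nvme' not in name:
            continue
        m = re.search(r'^Cpus_allowed_list:\s+(\S+)', text, re.M)
        allowed = cpulist(m.group(1))
        leaked = allowed - HOUSEKEEPING
        if leaked:
            print(f'{name}: allowed on {sorted(allowed)}, '
                  f'outside housekeeping: {sorted(leaked)}')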
> > > > 
> > > > These threads and the associated interrupt vectors are completely
> > > > harmless and fully idle as long as there is nothing on those isolated
> > > > CPUs which does disk I/O.
> > > 
> > > Some of the irq threads are affined (by the kernel presumably) to multiple
> > > CPUs (nvme1q2 and nvme0q2 were both affined 0x38000038, a couple of other
> > > queues were affined 0x1c00001c0).
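
Assuming those values are the usual comma-separated hex bitmaps from
/proc/irq/<n>/smp_affinity, they decode to plain CPU lists; a tiny
sketch:

    # Sketch: decode an irq affinity bitmap, as shown by
    # /proc/irq/<n>/smp_affinity (comma-separated 32-bit hex words),
    # into the CPU numbers it covers.
    def mask_to_cpus(mask_hex):
        mask = int(mask_hex.replace(',', ''), 16)
        return [cpu for cpu in range(mask.bit_length()) if mask >> cpu & 1]

    print(mask_to_cpus('0x38000038'))     # [3, 4, 5, 27, 28, 29]
    print(mask_to_cpus('1c00001c0'))      # [6, 7, 8, 30, 31, 32]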
> > 
> > That means you have more CPUs than your controller has queues. When that
> > happens, some sharing of the queue resources among CPUs is required.
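
That sharing is roughly an even spread of the CPUs across the available
queue masks.  A simplified sketch of the idea, not the kernel's actual
algorithm (irq_create_affinity_masks also takes NUMA nodes and sibling
threads into account):

    # Sketch of the kind of sharing that happens when there are more
    # CPUs than hardware queues: a simplified round-robin grouping.
    def spread(n_cpus, n_queues):
        groups = [[] for _ in range(n_queues)]
        for cpu in range(n_cpus):
            groups[cpu * n_queues // n_cpus].append(cpu)
        return groups

    # e.g. 32 CPUs shared across 8 queues -> 4 CPUs in each queue's mask
    for q, cpus in enumerate(spread(32, 8)):
        print(f'queue {q}: cpus {cpus}')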
> 
> Is it required that every CPU is part of the mask for at least one queue?
>
> If we can preferentially route interrupts to the housekeeping CPUs (for
> queues with multiple CPUs in the mask), how is that different than just
> affining all the queues to the housekeeping CPUs and leaving the isolated
> CPUs out of the mask entirely?

The same mask is used for submission affinity. Any CPU can dispatch I/O,
so every CPU has to be affined to a queue.
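
That submission-side mapping is visible in sysfs, so it is easy to
confirm that every online CPU lands in some queue's mask.  A small
sketch, with 'nvme0n1' as an example device name:

    # Sketch: check that every online CPU appears in some hardware
    # queue's submission mask, as exported by blk-mq in sysfs.
    # 'nvme0n1' is an example device name.
    import glob

    def cpulist(spec):
        cpus = set()
        for part in spec.strip().split(','):
            if not part.strip():
                continue
            lo, _, hi = part.partition('-')
            cpus.update(range(int(lo), int(hi or lo) + 1))
        return cpus

    online = cpulist(open('/sys/devices/system/cpu/online').read())
    mapped = set()
    for path in glob.glob('/sys/block/nvme0n1/mq/*/cpu_list'):
        mapped |= cpulist(open(path).read())

    print('CPUs without a queue:', sorted(online - mapped) or 'none')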

Thread overview: 14+ messages
2020-10-12 15:49 Chris Friesen
2020-10-12 16:58 ` Bjorn Helgaas
2020-10-12 17:39   ` Sean V Kelley
2020-10-12 19:18     ` Chris Friesen
2020-10-12 17:42   ` Nitesh Narayan Lal
2020-10-12 17:50   ` Thomas Gleixner
2020-10-12 18:58     ` Chris Friesen
2020-10-12 19:07       ` Keith Busch
2020-10-12 19:44         ` Thomas Gleixner
2020-10-15 18:47         ` Chris Friesen
2020-10-15 19:02           ` Keith Busch [this message]
2020-10-12 19:31       ` Thomas Gleixner
2020-10-12 20:24         ` David Woodhouse
2020-10-12 22:25           ` Thomas Gleixner
