From: "Derrick, Jonathan" <jonathan.derrick@intel.com>
To: "kbusch@kernel.org" <kbusch@kernel.org>
Cc: "lorenzo.pieralisi@arm.com" <lorenzo.pieralisi@arm.com>,
	"linux-pci@vger.kernel.org" <linux-pci@vger.kernel.org>,
	"helgaas@kernel.org" <helgaas@kernel.org>
Subject: Re: [PATCH 3/3] PCI: vmd: Use managed irq affinities
Date: Wed, 6 Nov 2019 20:14:41 +0000
Message-ID: <0a4a4151b56567f3c8ca71a29a2e39add6e3bf77.camel@intel.com>
In-Reply-To: <20191106181032.GD29853@redsun51.ssa.fujisawa.hgst.com>

On Thu, 2019-11-07 at 03:10 +0900, Keith Busch wrote:
> On Wed, Nov 06, 2019 at 04:40:08AM -0700, Jon Derrick wrote:
> > Using managed IRQ affinities sets up the VMD affinities identically to
> > the child devices when those devices' vector counts are limited by VMD.
> > This promotes better affinity handling as interrupts won't necessarily
> > need to pass context between non-local CPUs. One pre-vector is reserved
> > for the slow interrupt and not considered in the affinity algorithm.
> 
> This only works if all devices have exactly the same number of interrupts
> as the parent VMD host bridge. If a child device has fewer, the device
> will stop working if you offline a CPU: the child device may have a
> resource affined to other online CPUs, but the VMD device affinity is to
> that single offline CPU.

Yes, that problem exists today, and this set limits the exposure: a child
NVMe device with fewer than 32 vectors is a rare case.
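
For reference, a minimal sketch (assuming the standard kernel APIs, not
the exact vmd patch) of the allocation pattern described above: managed
MSI-X affinity with one pre-vector reserved for the slow interrupt so it
is excluded from the spreading. The helper name is hypothetical.

#include <linux/pci.h>
#include <linux/interrupt.h>

/*
 * Request managed MSI-X vectors. The irq core spreads vectors
 * pre_vectors..nvec-1 across the online CPUs; vector 0 keeps the
 * default affinity and serves the slow (non-I/O) interrupt.
 */
static int example_alloc_managed_vectors(struct pci_dev *pdev,
					 unsigned int max_vecs)
{
	struct irq_affinity affd = {
		.pre_vectors = 1,	/* slow interrupt, not affinity-managed */
	};

	return pci_alloc_irq_vectors_affinity(pdev, 1, max_vecs,
					      PCI_IRQ_MSIX | PCI_IRQ_AFFINITY,
					      &affd);
}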

Thread overview: 22+ messages
2019-11-06 11:40 [PATCH 0/3] PCI: vmd: Reducing tail latency by affining to the storage stack Jon Derrick
2019-11-06 11:40 ` [PATCH 1/3] PCI: vmd: Reduce VMD vectors using NVMe calculation Jon Derrick
2019-11-06 18:02   ` Keith Busch
2019-11-06 19:51     ` Derrick, Jonathan
2019-11-06 11:40 ` [PATCH 2/3] PCI: vmd: Align IRQ lists with child device vectors Jon Derrick
2019-11-06 18:06   ` Keith Busch
2019-11-06 20:14     ` Derrick, Jonathan
2019-11-06 11:40 ` [PATCH 3/3] PCI: vmd: Use managed irq affinities Jon Derrick
2019-11-06 18:10   ` Keith Busch
2019-11-06 20:14     ` Derrick, Jonathan [this message]
2019-11-06 20:27       ` Keith Busch
2019-11-06 20:33         ` Derrick, Jonathan
2019-11-18 10:49           ` Lorenzo Pieralisi
2019-11-18 16:43             ` Derrick, Jonathan
2019-11-07  9:39 ` [PATCH 0/3] PCI: vmd: Reducing tail latency by affining to the storage stack Christoph Hellwig
2019-11-07 14:12   ` Derrick, Jonathan
2019-11-07 15:37     ` hch
2019-11-07 15:40       ` Derrick, Jonathan
2019-11-07 15:42         ` hch
2019-11-07 15:47           ` Derrick, Jonathan
2019-11-11 17:03             ` hch
2022-12-23  2:33 ` Kai-Heng Feng
