From: Jon Derrick <jonathan.derrick@intel.com>
To: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: Keith Busch <kbusch@kernel.org>,
	Bjorn Helgaas <helgaas@kernel.org>, <linux-pci@vger.kernel.org>,
	Jon Derrick <jonathan.derrick@intel.com>
Subject: [PATCH 2/3] PCI: vmd: Align IRQ lists with child device vectors
Date: Wed,  6 Nov 2019 04:40:07 -0700	[thread overview]
Message-ID: <1573040408-3831-3-git-send-email-jonathan.derrick@intel.com> (raw)
In-Reply-To: <1573040408-3831-1-git-send-email-jonathan.derrick@intel.com>

In order to provide better affinity alignment along the entire storage
stack, VMD IRQ lists can be assigned so that the underlying VMD
interrupt is affinitized the same way as the corresponding vector of
the child (NVMe) device.

This patch changes the assignment of child device vectors to VMD IRQ
lists from a round-robin strategy to a matching-entry strategy: child
MSI-X entry N is demuxed through VMD IRQ list N. NVMe affinities are
deterministic within a VMD domain when the child devices have the same
vector count, as limited by the VMD MSI domain or the CPU count. When
one or more child devices are attached to a VMD domain, this aligns the
NVMe submission-side affinity with the VMD completion-side affinity as
the interrupt completes through the VMD IRQ list.
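
As a standalone illustration (not part of the patch; the function name
vmd_list_for_entry and the vector counts below are hypothetical), the
matching-entry strategy reduces to the following model:

#include <stdio.h>

/*
 * Model of the matching-entry strategy: child MSI-X entry N is demuxed
 * through VMD IRQ list N. Entries from non-NVMe devices, or entries
 * beyond the VMD domain's vector count, share the "slow" list 0.
 */
static int vmd_list_for_entry(int entry_nr, int vmd_msix_count,
			      int is_nvme)
{
	if (vmd_msix_count == 1 || !is_nvme ||
	    entry_nr >= vmd_msix_count)
		return 0;	/* shared "slow" interrupt vector */
	return entry_nr;	/* 1:1 entry-to-list alignment */
}

int main(void)
{
	int vmd_msix_count = 33;	/* hypothetical VMD vector count */
	int entry;

	for (entry = 0; entry < 35; entry++)
		printf("NVMe entry %2d -> VMD IRQ list %d\n", entry,
		       vmd_list_for_entry(entry, vmd_msix_count, 1));
	return 0;
}

With equal vector counts this yields a 1:1 entry-to-list mapping, so
the affinity of NVMe vector N and VMD IRQ list N can be made to match;
the previous round-robin assignment spread entries across lists by load
and lost that alignment.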

Signed-off-by: Jon Derrick <jonathan.derrick@intel.com>
---
 drivers/pci/controller/vmd.c | 57 ++++++++++++++++----------------------------
 1 file changed, 21 insertions(+), 36 deletions(-)

diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c
index ebe7ff6..7aca925 100644
--- a/drivers/pci/controller/vmd.c
+++ b/drivers/pci/controller/vmd.c
@@ -75,13 +75,10 @@ struct vmd_irq {
  * struct vmd_irq_list - list of driver requested IRQs mapping to a VMD vector
  * @irq_list:	the list of irq's the VMD one demuxes to.
  * @srcu:	SRCU struct for local synchronization.
- * @count:	number of child IRQs assigned to this vector; used to track
- *		sharing.
  */
 struct vmd_irq_list {
 	struct list_head	irq_list;
 	struct srcu_struct	srcu;
-	unsigned int		count;
 	unsigned int		index;
 };
 
@@ -184,37 +181,32 @@ static irq_hw_number_t vmd_get_hwirq(struct msi_domain_info *info,
 	return 0;
 }
 
-/*
- * XXX: We can be even smarter selecting the best IRQ once we solve the
- * affinity problem.
- */
 static struct vmd_irq_list *vmd_next_irq(struct vmd_dev *vmd, struct msi_desc *desc)
 {
-	int i, best = 1;
-	unsigned long flags;
-
-	if (vmd->msix_count == 1)
-		return vmd->irqs[0];
-
-	/*
-	 * White list for fast-interrupt handlers. All others will share the
-	 * "slow" interrupt vector.
-	 */
-	switch (msi_desc_to_pci_dev(desc)->class) {
-	case PCI_CLASS_STORAGE_EXPRESS:
-		break;
-	default:
-		return vmd->irqs[0];
+	int entry_nr = desc->msi_attrib.entry_nr;
+
+	if (vmd->msix_count == 1) {
+		entry_nr = 0;
+	} else {
+
+		/*
+		 * White list for fast-interrupt handlers. All others will
+		 * share the "slow" interrupt vector.
+		 */
+		switch (msi_desc_to_pci_dev(desc)->class) {
+		case PCI_CLASS_STORAGE_EXPRESS:
+			break;
+		default:
+			entry_nr = 0;
+		}
 	}
 
-	raw_spin_lock_irqsave(&list_lock, flags);
-	for (i = 1; i < vmd->msix_count; i++)
-		if (vmd->irqs[i]->count < vmd->irqs[best]->count)
-			best = i;
-	vmd->irqs[best]->count++;
-	raw_spin_unlock_irqrestore(&list_lock, flags);
+	if (entry_nr >= vmd->msix_count)
+		entry_nr = 0;
 
-	return vmd->irqs[best];
+	dev_dbg(desc->dev, "Entry %d using VMD IRQ list %d/%d\n",
+		desc->msi_attrib.entry_nr, entry_nr, vmd->msix_count - 1);
+	return vmd->irqs[entry_nr];
 }
 
 static int vmd_msi_init(struct irq_domain *domain, struct msi_domain_info *info,
@@ -243,15 +235,8 @@ static void vmd_msi_free(struct irq_domain *domain,
 			struct msi_domain_info *info, unsigned int virq)
 {
 	struct vmd_irq *vmdirq = irq_get_chip_data(virq);
-	unsigned long flags;
 
 	synchronize_srcu(&vmdirq->irq->srcu);
-
-	/* XXX: Potential optimization to rebalance */
-	raw_spin_lock_irqsave(&list_lock, flags);
-	vmdirq->irq->count--;
-	raw_spin_unlock_irqrestore(&list_lock, flags);
-
 	kfree(vmdirq);
 }
 
-- 
1.8.3.1
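
For context (not part of this patch): the reason vmd_msi_free() above
still needs synchronize_srcu() is that the VMD demux handler walks each
IRQ list under SRCU. The handler in drivers/pci/controller/vmd.c of
that era looked roughly like this:

static irqreturn_t vmd_irq(int irq, void *data)
{
	struct vmd_irq_list *irqs = data;
	struct vmd_irq *vmdirq;
	int idx;

	idx = srcu_read_lock(&irqs->srcu);
	list_for_each_entry_rcu(vmdirq, &irqs->irq_list, node)
		if (vmdirq->enabled)
			generic_handle_irq(vmdirq->virq);
	srcu_read_unlock(&irqs->srcu, idx);

	return IRQ_HANDLED;
}

Waiting out the SRCU grace period before kfree() guarantees that no CPU
is still iterating over the just-removed vmd_irq when it is freed.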


Thread overview: 22+ messages
2019-11-06 11:40 [PATCH 0/3] PCI: vmd: Reducing tail latency by affining to the storage stack Jon Derrick
2019-11-06 11:40 ` [PATCH 1/3] PCI: vmd: Reduce VMD vectors using NVMe calculation Jon Derrick
2019-11-06 18:02   ` Keith Busch
2019-11-06 19:51     ` Derrick, Jonathan
2019-11-06 11:40 ` Jon Derrick [this message]
2019-11-06 18:06   ` [PATCH 2/3] PCI: vmd: Align IRQ lists with child device vectors Keith Busch
2019-11-06 20:14     ` Derrick, Jonathan
2019-11-06 11:40 ` [PATCH 3/3] PCI: vmd: Use managed irq affinities Jon Derrick
2019-11-06 18:10   ` Keith Busch
2019-11-06 20:14     ` Derrick, Jonathan
2019-11-06 20:27       ` Keith Busch
2019-11-06 20:33         ` Derrick, Jonathan
2019-11-18 10:49           ` Lorenzo Pieralisi
2019-11-18 16:43             ` Derrick, Jonathan
2019-11-07  9:39 ` [PATCH 0/3] PCI: vmd: Reducing tail latency by affining to the storage stack Christoph Hellwig
2019-11-07 14:12   ` Derrick, Jonathan
2019-11-07 15:37     ` hch
2019-11-07 15:40       ` Derrick, Jonathan
2019-11-07 15:42         ` hch
2019-11-07 15:47           ` Derrick, Jonathan
2019-11-11 17:03             ` hch
2022-12-23  2:33 ` Kai-Heng Feng
