* [PATCH] PCI/LINK: Account for BW notification in vector calculation
@ 2019-04-22 22:43 Alex Williamson
  2019-04-23  0:05 ` Alex G
                   ` (2 more replies)
  0 siblings, 3 replies; 15+ messages in thread
From: Alex Williamson @ 2019-04-22 22:43 UTC (permalink / raw)
  To: bhelgaas, helgaas, mr.nuke.me, linux-pci
  Cc: austin_bolen, alex_gagniuc, keith.busch, Shyam_Iyer, lukas,
	okaya, torvalds, linux-kernel

On systems that don't support any PCIe services other than bandwidth
notification, pcie_message_numbers() can return zero vectors, causing
the vector reallocation in pcie_port_enable_irq_vec() to retry with
zero, which fails, resulting in fallback to INTx (which might be
broken) for the bandwidth notification service.  Accounting for the
bandwidth notification service in the vector calculation avoids that
fallback and can resolve spurious interrupt faults due to this service
on some systems.
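
For context, this is the path that fails; a condensed sketch of the
MSI/MSI-X setup in pcie_port_enable_irq_vec() (paraphrased from
drivers/pci/pcie/portdrv_core.c with error handling and unrelated
details elided, so not verbatim kernel code):

	/* Allocate the maximum possible number of vectors first */
	nr_entries = pci_alloc_irq_vectors(dev, 1, PCIE_PORT_MAX_MSI_ENTRIES,
					   PCI_IRQ_MSIX | PCI_IRQ_MSI);

	/* Ask which Interrupt Message Numbers the enabled services use */
	nvec = pcie_message_numbers(dev, mask, &pme, &aer, &dpc);

	if (nvec != nr_entries) {
		pci_free_irq_vectors(dev);
		/*
		 * With only the bandwidth notification service enabled,
		 * nvec is 0 before this patch, so the reallocation below
		 * fails and the caller falls back to INTx.
		 */
		nr_entries = pci_alloc_irq_vectors(dev, nvec, nvec,
						   PCI_IRQ_MSIX | PCI_IRQ_MSI);
	}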

Fixes: e8303bb7a75c ("PCI/LINK: Report degraded links via link bandwidth notification")
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
---

However, the system is still susceptible to random spew in dmesg
depending on how the root port handles downstream-device-managed link
speed changes.  For example, GPUs like to scale their link speed for
power management when idle.  A GPU assigned to a VM through vfio-pci
can generate a link bandwidth notification every time the link is
scaled down, e.g.:

[  329.725607] vfio-pci 0000:07:00.0: 32.000 Gb/s available PCIe bandwidth, limited by 2.5 GT/s x16 link at 0000:00:02.0 (capable of 64.000 Gb/s with 5 GT/s x16 link)
[  708.151488] vfio-pci 0000:07:00.0: 32.000 Gb/s available PCIe bandwidth, limited by 2.5 GT/s x16 link at 0000:00:02.0 (capable of 64.000 Gb/s with 5 GT/s x16 link)
[  718.262959] vfio-pci 0000:07:00.0: 32.000 Gb/s available PCIe bandwidth, limited by 2.5 GT/s x16 link at 0000:00:02.0 (capable of 64.000 Gb/s with 5 GT/s x16 link)
[ 1138.124932] vfio-pci 0000:07:00.0: 32.000 Gb/s available PCIe bandwidth, limited by 2.5 GT/s x16 link at 0000:00:02.0 (capable of 64.000 Gb/s with 5 GT/s x16 link)

What is the value of this nagging?
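
For reference, each of those lines is the bandwidth notification
service reacting to a Link Bandwidth Management event on the root port.
Roughly, paraphrasing drivers/pci/pcie/bw_notification.c from memory
(names and details approximate, not verbatim):

	pcie_capability_read_word(port, PCI_EXP_LNKSTA, &link_status);
	if (link_status & PCI_EXP_LNKSTA_LBMS) {
		pcie_capability_write_word(port, PCI_EXP_LNKSTA,
					   PCI_EXP_LNKSTA_LBMS);
		pcie_update_link_speed(port, link_status);
		/* One dmesg line per downstream device, every time */
		list_for_each_entry(dev, &port->subordinate->devices, bus_list)
			__pcie_print_link_status(dev, false);
	}

so every downstream-initiated link speed change produces a fresh
message.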

 drivers/pci/pcie/portdrv_core.c |    3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/pci/pcie/portdrv_core.c b/drivers/pci/pcie/portdrv_core.c
index 7d04f9d087a6..1b330129089f 100644
--- a/drivers/pci/pcie/portdrv_core.c
+++ b/drivers/pci/pcie/portdrv_core.c
@@ -55,7 +55,8 @@ static int pcie_message_numbers(struct pci_dev *dev, int mask,
 	 * 7.8.2, 7.10.10, 7.31.2.
 	 */
 
-	if (mask & (PCIE_PORT_SERVICE_PME | PCIE_PORT_SERVICE_HP)) {
+	if (mask & (PCIE_PORT_SERVICE_PME | PCIE_PORT_SERVICE_HP |
+		    PCIE_PORT_SERVICE_BWNOTIF)) {
 		pcie_capability_read_word(dev, PCI_EXP_FLAGS, &reg16);
 		*pme = (reg16 & PCI_EXP_FLAGS_IRQ) >> 9;
 		nvec = *pme + 1;
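
For reference, PCI_EXP_FLAGS_IRQ is the Interrupt Message Number field
(bits 13:9) of the PCI Express Capabilities register, hence the shift
by 9; PME, hotplug, and (with this patch) bandwidth notification all
report their vector through that field.  A hypothetical example value:

	/* reg16 == 0x0202: (0x0202 & 0x3e00) >> 9 == 1, i.e. these
	 * services share MSI/MSI-X vector 1.
	 */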



Thread overview: 15+ messages
2019-04-22 22:43 [PATCH] PCI/LINK: Account for BW notification in vector calculation Alex Williamson
2019-04-23  0:05 ` Alex G
2019-04-23  0:33   ` Alex Williamson
2019-04-23 14:33     ` Alex G
2019-04-23 15:34       ` Alex Williamson
2019-04-23 15:49         ` Lukas Wunner
2019-04-23 16:03         ` Alex G
2019-04-23 16:22           ` Alex Williamson
2019-04-23 16:27             ` Alex G
2019-04-23 16:37               ` Alex Williamson
2019-04-23 17:10       ` Bjorn Helgaas
2019-04-23 17:53         ` Alex G
2019-04-23 18:38           ` Alex Williamson
2019-04-23 17:59 ` Alex G
2019-05-01 20:30 ` Bjorn Helgaas
