linux-pci.vger.kernel.org archive mirror
* [PATCH 0/5] Legacy direct-assign mode
@ 2020-11-20 22:51 Jon Derrick
  2020-11-20 22:51 ` [PATCH 1/5] PCI: vmd: Reset the VMD subdevice domain on probe Jon Derrick
                   ` (5 more replies)
  0 siblings, 6 replies; 13+ messages in thread
From: Jon Derrick @ 2020-11-20 22:51 UTC (permalink / raw)
  To: Lorenzo Pieralisi
  Cc: linux-pci, Bjorn Helgaas, Nirmal Patel, Sushma Kalakota, Jon Derrick

This set adds a legacy direct-assign mode. Newer enterprise hardware has
physical addressing hints to assist device passthrough to guests, which need
to program bridge windows with the correct physical addresses. Some customers
are using a legacy method that relies on the VMD subdevice domain's root port
windows being written with the physical addresses. This method also allows
hypervisors other than QEMU/KVM to perform guest passthrough.

This patchset adds a host mode and a guest mode: the host writes the physical
address information into the root port registers, the guest reads it back, and
in both cases the registers are restored on module unload.
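
As a rough sketch of the guest-side mechanism (based on the helpers added
later in this series; the function below is illustrative only, not the
literal implementation), the guest reads the host-programmed window
registers of the first root port and converts the physical bases into
bridge window offsets:

static void vmd_guest_offsets_sketch(struct vmd_dev *vmd,
				     resource_size_t *offset1,
				     resource_size_t *offset2)
{
	char __iomem *rp = vmd->cfgbar;		/* root port 00.0 */
	u64 membase, pref_membase;

	/* Host physical bases published in the root port window registers */
	membase       = ((u64)readw(rp + PCI_MEMORY_BASE) & 0xfff0) << 16;
	pref_membase  = ((u64)readl(rp + PCI_PREF_BASE_UPPER32)) << 32;
	pref_membase |= ((u64)readw(rp + PCI_PREF_MEMORY_BASE) & 0xfff0) << 16;

	/* Bridge window offset = MEMBAR resource start - host physical base */
	*offset1 = pci_resource_start(vmd->dev, VMD_MEMBAR1) - membase;
	*offset2 = pci_resource_start(vmd->dev, VMD_MEMBAR2) - pref_membase;
}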

This patchset also folds in the VMD subdevice domain secondary bus reset
patchset [1] to clear the domain prior to guest passthrough.

[1] https://patchwork.kernel.org/project/linux-pci/cover/20200928010557.5324-1-jonathan.derrick@intel.com/

Jon Derrick (5):
  PCI: vmd: Reset the VMD subdevice domain on probe
  PCI: Add a reset quirk for VMD
  PCI: vmd: Add offset translation helper
  PCI: vmd: Pass features to vmd_get_phys_offsets()
  PCI: vmd: Add legacy guest passthrough mode

 drivers/pci/controller/vmd.c | 200 ++++++++++++++++++++++++++++++++++++++-----
 drivers/pci/quirks.c         |  48 +++++++++++
 2 files changed, 227 insertions(+), 21 deletions(-)

-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 13+ messages in thread

* [PATCH 1/5] PCI: vmd: Reset the VMD subdevice domain on probe
  2020-11-20 22:51 [PATCH 0/5] Legacy direct-assign mode Jon Derrick
@ 2020-11-20 22:51 ` Jon Derrick
  2020-11-20 22:51 ` [PATCH 2/5] PCI: Add a reset quirk for VMD Jon Derrick
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 13+ messages in thread
From: Jon Derrick @ 2020-11-20 22:51 UTC (permalink / raw)
  To: Lorenzo Pieralisi
  Cc: linux-pci, Bjorn Helgaas, Nirmal Patel, Sushma Kalakota, Jon Derrick

The VMD subdevice domain resource requirements may have changed between
module loads. Generic PCI resource assignment code may rely on the
existing resource configuration rather than re-examining the domain, as
VMD prefers. Add a Secondary Bus Reset to the VMD subdevice domain
during driver attachment to clear the PCI config space of the
subdevices.

Signed-off-by: Jon Derrick <jonathan.derrick@intel.com>
---
 drivers/pci/controller/vmd.c | 32 ++++++++++++++++++++++++++++++++
 1 file changed, 32 insertions(+)

diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c
index c31e4d5..c7b5614 100644
--- a/drivers/pci/controller/vmd.c
+++ b/drivers/pci/controller/vmd.c
@@ -14,6 +14,7 @@
 #include <linux/srcu.h>
 #include <linux/rculist.h>
 #include <linux/rcupdate.h>
+#include <linux/delay.h>
 
 #include <asm/irqdomain.h>
 #include <asm/device.h>
@@ -424,6 +425,36 @@ static int vmd_pci_write(struct pci_bus *bus, unsigned int devfn, int reg,
 	.write		= vmd_pci_write,
 };
 
+static void vmd_domain_reset_sbr(struct vmd_dev *vmd)
+{
+	char __iomem *base;
+	int rp;
+	u16 ctl;
+
+	/*
+	 * Subdevice config space is mapped linearly using 4k config space
+	 * increments. Use increments of 0x8000 to locate root port devices.
+	 */
+	for (rp = 0; rp < 4; rp++) {
+		base = vmd->cfgbar + rp * 0x8000;
+		if (readl(base + PCI_COMMAND) == 0xFFFFFFFF)
+			continue;
+
+		/* pci_reset_secondary_bus() */
+		ctl = readw(base + PCI_BRIDGE_CONTROL);
+		ctl |= PCI_BRIDGE_CTL_BUS_RESET;
+		writew(ctl, base + PCI_BRIDGE_CONTROL);
+		readw(base + PCI_BRIDGE_CONTROL);
+		msleep(2);
+
+		ctl &= ~PCI_BRIDGE_CTL_BUS_RESET;
+		writew(ctl, base + PCI_BRIDGE_CONTROL);
+		readw(base + PCI_BRIDGE_CONTROL);
+	}
+
+	ssleep(1);
+}
+
 static void vmd_attach_resources(struct vmd_dev *vmd)
 {
 	vmd->dev->resource[VMD_MEMBAR1].child = &vmd->resources[1];
@@ -707,6 +738,7 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
 	if (vmd->irq_domain)
 		dev_set_msi_domain(&vmd->bus->dev, vmd->irq_domain);
 
+	vmd_domain_reset_sbr(vmd);
 	pci_scan_child_bus(vmd->bus);
 	pci_assign_unassigned_bus_resources(vmd->bus);
 
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH 2/5] PCI: Add a reset quirk for VMD
  2020-11-20 22:51 [PATCH 0/5] Legacy direct-assign mode Jon Derrick
  2020-11-20 22:51 ` [PATCH 1/5] PCI: vmd: Reset the VMD subdevice domain on probe Jon Derrick
@ 2020-11-20 22:51 ` Jon Derrick
  2020-11-24 21:40   ` Bjorn Helgaas
  2020-11-20 22:51 ` [PATCH 3/5] PCI: vmd: Add offset translation helper Jon Derrick
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 13+ messages in thread
From: Jon Derrick @ 2020-11-20 22:51 UTC (permalink / raw)
  To: Lorenzo Pieralisi
  Cc: linux-pci, Bjorn Helgaas, Nirmal Patel, Sushma Kalakota, Jon Derrick

VMD domains should be reset between special attachments, such as VFIO
users. VMD itself does not offer a reset; however, the subdevice domain
can be reset starting at the root bus. Add a Secondary Bus Reset on each
of the individual root port devices immediately downstream of the VMD
root bus.
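
For context, the quirk added below is reached through the PCI core's
device-specific reset path, roughly (sketch of the existing caller, not
part of this patch):

	rc = pci_dev_specific_reset(dev, probe);	/* walks pci_dev_reset_methods[] */
	if (rc != -ENOTTY)
		return rc;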

Signed-off-by: Jon Derrick <jonathan.derrick@intel.com>
---
 drivers/pci/quirks.c | 48 ++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 48 insertions(+)

diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
index f70692a..ee58b51 100644
--- a/drivers/pci/quirks.c
+++ b/drivers/pci/quirks.c
@@ -3744,6 +3744,49 @@ static int reset_ivb_igd(struct pci_dev *dev, int probe)
 	return 0;
 }
 
+/* Issues SBR to VMD domain to clear PCI configuration */
+static int reset_vmd_sbr(struct pci_dev *dev, int probe)
+{
+	char __iomem *cfgbar, *base;
+	int rp;
+	u16 ctl;
+
+	if (probe)
+		return 0;
+
+	if (dev->dev.driver)
+		return 0;
+
+	cfgbar = pci_iomap(dev, 0, 0);
+	if (!cfgbar)
+		return -ENOMEM;
+
+	/*
+	 * Subdevice config space is mapped linearly using 4k config space
+	 * increments. Use increments of 0x8000 to locate root port devices.
+	 */
+	for (rp = 0; rp < 4; rp++) {
+		base = cfgbar + rp * 0x8000;
+		if (readl(base + PCI_COMMAND) == 0xFFFFFFFF)
+			continue;
+
+		/* pci_reset_secondary_bus() */
+		ctl = readw(base + PCI_BRIDGE_CONTROL);
+		ctl |= PCI_BRIDGE_CTL_BUS_RESET;
+		writew(ctl, base + PCI_BRIDGE_CONTROL);
+		readw(base + PCI_BRIDGE_CONTROL);
+		msleep(2);
+
+		ctl &= ~PCI_BRIDGE_CTL_BUS_RESET;
+		writew(ctl, base + PCI_BRIDGE_CONTROL);
+		readw(base + PCI_BRIDGE_CONTROL);
+	}
+
+	ssleep(1);
+	pci_iounmap(dev, cfgbar);
+	return 0;
+}
+
 /* Device-specific reset method for Chelsio T4-based adapters */
 static int reset_chelsio_generic_dev(struct pci_dev *dev, int probe)
 {
@@ -3919,6 +3962,11 @@ static int delay_250ms_after_flr(struct pci_dev *dev, int probe)
 		reset_ivb_igd },
 	{ PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_IVB_M2_VGA,
 		reset_ivb_igd },
+	{ PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_VMD_201D, reset_vmd_sbr },
+	{ PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_VMD_28C0, reset_vmd_sbr },
+	{ PCI_VENDOR_ID_INTEL, 0x467f, reset_vmd_sbr },
+	{ PCI_VENDOR_ID_INTEL, 0x4c3d, reset_vmd_sbr },
+	{ PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_VMD_9A0B, reset_vmd_sbr },
 	{ PCI_VENDOR_ID_SAMSUNG, 0xa804, nvme_disable_and_flr },
 	{ PCI_VENDOR_ID_INTEL, 0x0953, delay_250ms_after_flr },
 	{ PCI_VENDOR_ID_CHELSIO, PCI_ANY_ID,
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH 3/5] PCI: vmd: Add offset translation helper
  2020-11-20 22:51 [PATCH 0/5] Legacy direct-assign mode Jon Derrick
  2020-11-20 22:51 ` [PATCH 1/5] PCI: vmd: Reset the VMD subdevice domain on probe Jon Derrick
  2020-11-20 22:51 ` [PATCH 2/5] PCI: Add a reset quirk for VMD Jon Derrick
@ 2020-11-20 22:51 ` Jon Derrick
  2020-11-20 22:51 ` [PATCH 4/5] PCI: vmd: Pass features to vmd_get_phys_offsets() Jon Derrick
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 13+ messages in thread
From: Jon Derrick @ 2020-11-20 22:51 UTC (permalink / raw)
  To: Lorenzo Pieralisi
  Cc: linux-pci, Bjorn Helgaas, Nirmal Patel, Sushma Kalakota, Jon Derrick

Add a helper for translating physical addresses to bridge window
offsets. No functional changes.

Signed-off-by: Jon Derrick <jonathan.derrick@intel.com>
---
 drivers/pci/controller/vmd.c | 23 ++++++++++++++---------
 1 file changed, 14 insertions(+), 9 deletions(-)

diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c
index c7b5614..b0504ee 100644
--- a/drivers/pci/controller/vmd.c
+++ b/drivers/pci/controller/vmd.c
@@ -483,6 +483,16 @@ static int vmd_find_free_domain(void)
 	return domain + 1;
 }
 
+static void vmd_phys_to_offset(struct vmd_dev *vmd, u64 phys1, u64 phys2,
+				 resource_size_t *offset1,
+				 resource_size_t *offset2)
+{
+	*offset1 = vmd->dev->resource[VMD_MEMBAR1].start -
+			(phys1 & PCI_BASE_ADDRESS_MEM_MASK);
+	*offset2 = vmd->dev->resource[VMD_MEMBAR2].start -
+			(phys2 & PCI_BASE_ADDRESS_MEM_MASK);
+}
+
 static int vmd_get_phys_offsets(struct vmd_dev *vmd, bool native_hint,
 				resource_size_t *offset1,
 				resource_size_t *offset2)
@@ -507,8 +517,8 @@ static int vmd_get_phys_offsets(struct vmd_dev *vmd, bool native_hint,
 			phys1 = readq(membar2 + MB2_SHADOW_OFFSET);
 			phys2 = readq(membar2 + MB2_SHADOW_OFFSET + 8);
 			pci_iounmap(dev, membar2);
-		} else
-			return 0;
+			vmd_phys_to_offset(vmd, phys1, phys2, offset1, offset2);
+		}
 	} else {
 		/* Hypervisor-Emulated Vendor-Specific Capability */
 		int pos = pci_find_capability(dev, PCI_CAP_ID_VNDR);
@@ -525,15 +535,10 @@ static int vmd_get_phys_offsets(struct vmd_dev *vmd, bool native_hint,
 			pci_read_config_dword(dev, pos + 16, &reg);
 			pci_read_config_dword(dev, pos + 20, &regu);
 			phys2 = (u64) regu << 32 | reg;
-		} else
-			return 0;
+			vmd_phys_to_offset(vmd, phys1, phys2, offset1, offset2);
+		}
 	}
 
-	*offset1 = dev->resource[VMD_MEMBAR1].start -
-			(phys1 & PCI_BASE_ADDRESS_MEM_MASK);
-	*offset2 = dev->resource[VMD_MEMBAR2].start -
-			(phys2 & PCI_BASE_ADDRESS_MEM_MASK);
-
 	return 0;
 }
 
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH 4/5] PCI: vmd: Pass features to vmd_get_phys_offsets()
  2020-11-20 22:51 [PATCH 0/5] Legacy direct-assign mode Jon Derrick
                   ` (2 preceding siblings ...)
  2020-11-20 22:51 ` [PATCH 3/5] PCI: vmd: Add offset translation helper Jon Derrick
@ 2020-11-20 22:51 ` Jon Derrick
  2020-11-20 22:51 ` [PATCH 5/5] PCI: vmd: Add legacy guest passthrough mode Jon Derrick
  2021-03-22 12:28 ` [PATCH 0/5] Legacy direct-assign mode Lorenzo Pieralisi
  5 siblings, 0 replies; 13+ messages in thread
From: Jon Derrick @ 2020-11-20 22:51 UTC (permalink / raw)
  To: Lorenzo Pieralisi
  Cc: linux-pci, Bjorn Helgaas, Nirmal Patel, Sushma Kalakota, Jon Derrick

Modify the physical address offset parser to use the device-specific
features member. No functional changes.

Signed-off-by: Jon Derrick <jonathan.derrick@intel.com>
---
 drivers/pci/controller/vmd.c | 20 ++++++++------------
 1 file changed, 8 insertions(+), 12 deletions(-)

diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c
index b0504ee..71aa002 100644
--- a/drivers/pci/controller/vmd.c
+++ b/drivers/pci/controller/vmd.c
@@ -493,14 +493,14 @@ static void vmd_phys_to_offset(struct vmd_dev *vmd, u64 phys1, u64 phys2,
 			(phys2 & PCI_BASE_ADDRESS_MEM_MASK);
 }
 
-static int vmd_get_phys_offsets(struct vmd_dev *vmd, bool native_hint,
+static int vmd_get_phys_offsets(struct vmd_dev *vmd, unsigned long features,
 				resource_size_t *offset1,
 				resource_size_t *offset2)
 {
 	struct pci_dev *dev = vmd->dev;
 	u64 phys1, phys2;
 
-	if (native_hint) {
+	if (features & VMD_FEAT_HAS_MEMBAR_SHADOW) {
 		u32 vmlock;
 		int ret;
 
@@ -519,7 +519,7 @@ static int vmd_get_phys_offsets(struct vmd_dev *vmd, bool native_hint,
 			pci_iounmap(dev, membar2);
 			vmd_phys_to_offset(vmd, phys1, phys2, offset1, offset2);
 		}
-	} else {
+	} else if (features & VMD_FEAT_HAS_MEMBAR_SHADOW_VSCAP) {
 		/* Hypervisor-Emulated Vendor-Specific Capability */
 		int pos = pci_find_capability(dev, PCI_CAP_ID_VNDR);
 		u32 reg, regu;
@@ -638,16 +638,12 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
 	 * and child devices. These registers will either return the host value
 	 * or 0, depending on an enable bit in the VMD device.
 	 */
-	if (features & VMD_FEAT_HAS_MEMBAR_SHADOW) {
+	if (features & VMD_FEAT_HAS_MEMBAR_SHADOW)
 		membar2_offset = MB2_SHADOW_OFFSET + MB2_SHADOW_SIZE;
-		ret = vmd_get_phys_offsets(vmd, true, &offset[0], &offset[1]);
-		if (ret)
-			return ret;
-	} else if (features & VMD_FEAT_HAS_MEMBAR_SHADOW_VSCAP) {
-		ret = vmd_get_phys_offsets(vmd, false, &offset[0], &offset[1]);
-		if (ret)
-			return ret;
-	}
+
+	ret = vmd_get_phys_offsets(vmd, features, &offset[0], &offset[1]);
+	if (ret)
+		return ret;
 
 	/*
 	 * Certain VMD devices may have a root port configuration option which
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH 5/5] PCI: vmd: Add legacy guest passthrough mode
  2020-11-20 22:51 [PATCH 0/5] Legacy direct-assign mode Jon Derrick
                   ` (3 preceding siblings ...)
  2020-11-20 22:51 ` [PATCH 4/5] PCI: vmd: Pass features to vmd_get_phys_offsets() Jon Derrick
@ 2020-11-20 22:51 ` Jon Derrick
  2021-03-22 12:28 ` [PATCH 0/5] Legacy direct-assign mode Lorenzo Pieralisi
  5 siblings, 0 replies; 13+ messages in thread
From: Jon Derrick @ 2020-11-20 22:51 UTC (permalink / raw)
  To: Lorenzo Pieralisi
  Cc: linux-pci, Bjorn Helgaas, Nirmal Patel, Sushma Kalakota, Jon Derrick

Some hypervisors allow passthrough of VMD to guests but don't supply
the emulated vendor-specific capability. Users currently passing VMD
through rely on preconfiguring the VMD Root Ports to inform the guest
of the physical addresses needed to offset-map the bridge windows.

Add a non-visible module parameter to activate host or guest
passthrough mode. In host mode, the driver writes the VMD MEMBAR
information into the root ports on module unload. In guest mode, the
driver uses these direct-assign hints, saving the host-supplied root
port information on module load and restoring it on exit, and uses this
information in the offset calculation for the bridge windows.

This is enabled by a non-visible module parameter because it is a
non-standard use case needed by certain users for legacy behavior.
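
For example, assuming the driver is built as a module named "vmd" (for a
built-in driver the equivalent kernel command-line parameter would be
vmd.legacy_da_mode=), the host would load it with
"modprobe vmd legacy_da_mode=host" and the guest with
"modprobe vmd legacy_da_mode=guest"; leaving the parameter unset
preserves the existing behavior.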

Link: https://lore.kernel.org/linux-pci/20200706091625.GA26377@e121166-lin.cambridge.arm.com/
Signed-off-by: Sushma Kalakota <sushmax.kalakota@intel.com>
Signed-off-by: Jon Derrick <jonathan.derrick@intel.com>
---
 drivers/pci/controller/vmd.c | 127 ++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 126 insertions(+), 1 deletion(-)

diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c
index 71aa002..711bbee 100644
--- a/drivers/pci/controller/vmd.c
+++ b/drivers/pci/controller/vmd.c
@@ -35,6 +35,19 @@
 #define MB2_SHADOW_OFFSET	0x2000
 #define MB2_SHADOW_SIZE		16
 
+enum legacy_da_mode {
+	VMD_DA_NONE,
+	VMD_DA_HOST,
+	VMD_DA_GUEST,
+};
+
+static int legacy_da_mode;
+static char legacy_da_mode_str[sizeof("guest")];
+module_param_string(legacy_da_mode, legacy_da_mode_str,
+		    sizeof(legacy_da_mode_str), 0);
+MODULE_PARM_DESC(legacy_da_mode,
+	"use legacy host-provided addressing hints in Root Ports to assist guest passthrough (off, host, guest)");
+
 enum vmd_features {
 	/*
 	 * Device may contain registers which hint the physical location of the
@@ -97,6 +110,12 @@ struct vmd_irq_list {
 	unsigned int		count;
 };
 
+struct root_port_addr {
+	int port;
+	u64 membase;
+	u64 pref_membase;
+};
+
 struct vmd_dev {
 	struct pci_dev		*dev;
 
@@ -112,6 +131,7 @@ struct vmd_dev {
 	struct pci_bus		*bus;
 	u8			busn_start;
 	u8			first_vec;
+	struct root_port_addr	rp_addr;
 };
 
 static inline struct vmd_dev *vmd_from_bus(struct pci_bus *bus)
@@ -483,6 +503,97 @@ static int vmd_find_free_domain(void)
 	return domain + 1;
 }
 
+#define VMD_RP_BASE(vmd, port) ((vmd)->cfgbar + (port) * 8 * 4096)
+static void vmd_save_root_port_info(struct vmd_dev *vmd)
+{
+	resource_size_t physical = 0;
+	char __iomem *addr;
+	int port;
+
+	if (upper_32_bits(pci_resource_start(vmd->dev, VMD_MEMBAR1)))
+		return;
+
+	for (port = 0; port < 4; port++) {
+		u32 membase;
+
+		addr = VMD_RP_BASE(vmd, port) + PCI_MEMORY_BASE;
+		membase = readl(addr);
+
+		/* Break on first found root port */
+		if ((membase != 0xffffffff) && (membase != 0) &&
+		    (membase != 0x0000fff0))
+			break;
+	}
+
+	if (port >= 4)
+		return;
+
+	vmd->rp_addr.port = port;
+
+	/* Only save the first root port index in host mode */
+	if (legacy_da_mode == VMD_DA_HOST)
+		return;
+
+	addr = VMD_RP_BASE(vmd, port) + PCI_MEMORY_BASE;
+	physical = ((u64)readw(addr) & 0xfff0) << 16;
+	vmd->rp_addr.membase = physical;
+
+	addr = VMD_RP_BASE(vmd, port) + PCI_PREF_BASE_UPPER32;
+	physical = ((u64)readl(addr)) << 32;
+	vmd->rp_addr.pref_membase = physical;
+
+	addr = VMD_RP_BASE(vmd, port) + PCI_PREF_MEMORY_BASE;
+	physical |= ((u64)readw(addr) & 0xfff0) << 16;
+	vmd->rp_addr.pref_membase |= physical;
+
+	writel(0, VMD_RP_BASE(vmd, port) + PCI_MEMORY_BASE);
+	writel(0, VMD_RP_BASE(vmd, port) + PCI_PREF_BASE_UPPER32);
+	writel(0, VMD_RP_BASE(vmd, port) + PCI_PREF_MEMORY_BASE);
+	writel(0, VMD_RP_BASE(vmd, port) + PCI_PREF_MEMORY_LIMIT);
+}
+
+static void vmd_restore_root_port_info(struct vmd_dev *vmd)
+{
+	resource_size_t	phyaddr;
+	char __iomem *addr;
+	u32 val;
+	int port;
+
+	port = vmd->rp_addr.port;
+	if (legacy_da_mode == VMD_DA_HOST) {
+		/* Write the MEMBAR information to prepare the guest */
+		phyaddr = pci_resource_start(vmd->dev, VMD_MEMBAR1);
+		if (upper_32_bits(phyaddr))
+			return;
+
+		addr = VMD_RP_BASE(vmd, port) + PCI_MEMORY_BASE;
+		val = (phyaddr >> 16) & 0xfff0;
+		writew(val, addr);
+
+		phyaddr = pci_resource_start(vmd->dev, VMD_MEMBAR2);
+		addr = VMD_RP_BASE(vmd, port) + PCI_PREF_BASE_UPPER32;
+		val = phyaddr >> 32;
+		writel(val, addr);
+
+		addr = VMD_RP_BASE(vmd, port) + PCI_PREF_MEMORY_BASE;
+		val = (phyaddr >> 16) & 0xfff0;
+		writew(val, addr);
+	} else if (legacy_da_mode == VMD_DA_GUEST) {
+		/* Restore information provided by Host */
+		addr = VMD_RP_BASE(vmd, port) + PCI_MEMORY_BASE;
+		val = (vmd->rp_addr.membase >> 16) & 0xfff0;
+		writew(val, addr);
+
+		addr = VMD_RP_BASE(vmd, port) + PCI_PREF_BASE_UPPER32;
+		val = vmd->rp_addr.pref_membase >> 32;
+		writel(val, addr);
+
+		addr = VMD_RP_BASE(vmd, port) + PCI_PREF_MEMORY_BASE;
+		val = (vmd->rp_addr.pref_membase >> 16) & 0xfff0;
+		writew(val, addr);
+	}
+}
+
 static void vmd_phys_to_offset(struct vmd_dev *vmd, u64 phys1, u64 phys2,
 				 resource_size_t *offset1,
 				 resource_size_t *offset2)
@@ -500,7 +611,19 @@ static int vmd_get_phys_offsets(struct vmd_dev *vmd, unsigned long features,
 	struct pci_dev *dev = vmd->dev;
 	u64 phys1, phys2;
 
-	if (features & VMD_FEAT_HAS_MEMBAR_SHADOW) {
+	if (!strncmp(legacy_da_mode_str, "host", 4))
+		legacy_da_mode = VMD_DA_HOST;
+	else if (!strncmp(legacy_da_mode_str, "guest", 5))
+		legacy_da_mode = VMD_DA_GUEST;
+
+	if (legacy_da_mode != VMD_DA_NONE) {
+		vmd_save_root_port_info(vmd);
+		if (legacy_da_mode == VMD_DA_GUEST) {
+			vmd_phys_to_offset(vmd, vmd->rp_addr.membase,
+					   vmd->rp_addr.pref_membase,
+					   offset1, offset2);
+		}
+	} else if (features & VMD_FEAT_HAS_MEMBAR_SHADOW) {
 		u32 vmlock;
 		int ret;
 
@@ -732,6 +855,7 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
 	if (!vmd->bus) {
 		pci_free_resource_list(&resources);
 		vmd_remove_irq_domain(vmd);
+		vmd_restore_root_port_info(vmd);
 		return -ENODEV;
 	}
 
@@ -821,6 +945,7 @@ static void vmd_remove(struct pci_dev *dev)
 	vmd_cleanup_srcu(vmd);
 	vmd_detach_resources(vmd);
 	vmd_remove_irq_domain(vmd);
+	vmd_restore_root_port_info(vmd);
 }
 
 #ifdef CONFIG_PM_SLEEP
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* Re: [PATCH 2/5] PCI: Add a reset quirk for VMD
  2020-11-20 22:51 ` [PATCH 2/5] PCI: Add a reset quirk for VMD Jon Derrick
@ 2020-11-24 21:40   ` Bjorn Helgaas
  2020-11-25 17:22     ` Derrick, Jonathan
  0 siblings, 1 reply; 13+ messages in thread
From: Bjorn Helgaas @ 2020-11-24 21:40 UTC (permalink / raw)
  To: Jon Derrick
  Cc: Lorenzo Pieralisi, linux-pci, Nirmal Patel, Sushma Kalakota,
	Alex Williamson

[+cc Alex]

On Fri, Nov 20, 2020 at 03:51:41PM -0700, Jon Derrick wrote:
> VMD domains should be reset in-between special attachment such as VFIO
> users. VMD does not offer a reset, however the subdevice domain itself
> can be reset starting at the Root Bus. Add a Secondary Bus Reset on each
> of the individual root port devices immediately downstream of the VMD
> root bus.
> 
> Signed-off-by: Jon Derrick <jonathan.derrick@intel.com>
> ---
>  drivers/pci/quirks.c | 48 ++++++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 48 insertions(+)
> 
> diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
> index f70692a..ee58b51 100644
> --- a/drivers/pci/quirks.c
> +++ b/drivers/pci/quirks.c
> @@ -3744,6 +3744,49 @@ static int reset_ivb_igd(struct pci_dev *dev, int probe)
>  	return 0;
>  }
>  
> +/* Issues SBR to VMD domain to clear PCI configuration */
> +static int reset_vmd_sbr(struct pci_dev *dev, int probe)
> +{
> +	char __iomem *cfgbar, *base;
> +	int rp;
> +	u16 ctl;
> +
> +	if (probe)
> +		return 0;
> +
> +	if (dev->dev.driver)
> +		return 0;

I guess "dev" here is the VMD endpoint?  And if the vmd.c driver is
bound to it, you return success without doing anything?

If there's no driver for the VMD device, who is trying to reset it?

I guess I don't quite understand how VMD works.  I would have thought
that if vmd.c isn't bound to the VMD device, the devices behind the
VMD would be inaccessible and there'd be no point in a reset.

> +	cfgbar = pci_iomap(dev, 0, 0);
> +	if (!cfgbar)
> +		return -ENOMEM;
> +
> +	/*
> +	 * Subdevice config space is mapped linearly using 4k config space
> +	 * increments. Use increments of 0x8000 to locate root port devices.
> +	 */
> +	for (rp = 0; rp < 4; rp++) {
> +		base = cfgbar + rp * 0x8000;

I really don't like this part -- iomapping BAR 0 (apparently
VMD_CFGBAR), and making up the ECAM-ish addresses and basically
open-coding ECAM accesses below.  I guess this assumes Root Ports are
only on functions .0, .2, .4, .6?

Is it all open-coded here because this reset path is only of interest
when vmd.c is NOT bound to the VMD device, so you can't use
vmd->cfgbar, etc?

What about the case when vmd.c IS bound?  We don't do anything here,
so does that mean we instead use the usual case of asserting SBR on
the Root Ports behind the VMD?

> +		if (readl(base + PCI_COMMAND) == 0xFFFFFFFF)
> +			continue;
> +
> +		/* pci_reset_secondary_bus() */
> +		ctl = readw(base + PCI_BRIDGE_CONTROL);
> +		ctl |= PCI_BRIDGE_CTL_BUS_RESET;
> +		writew(ctl, base + PCI_BRIDGE_CONTROL);
> +		readw(base + PCI_BRIDGE_CONTROL);
> +		msleep(2);
> +
> +		ctl &= ~PCI_BRIDGE_CTL_BUS_RESET;
> +		writew(ctl, base + PCI_BRIDGE_CONTROL);
> +		readw(base + PCI_BRIDGE_CONTROL);
> +	}
> +
> +	ssleep(1);
> +	pci_iounmap(dev, cfgbar);
> +	return 0;
> +}
> +
>  /* Device-specific reset method for Chelsio T4-based adapters */
>  static int reset_chelsio_generic_dev(struct pci_dev *dev, int probe)
>  {
> @@ -3919,6 +3962,11 @@ static int delay_250ms_after_flr(struct pci_dev *dev, int probe)
>  		reset_ivb_igd },
>  	{ PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_IVB_M2_VGA,
>  		reset_ivb_igd },
> +	{ PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_VMD_201D, reset_vmd_sbr },
> +	{ PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_VMD_28C0, reset_vmd_sbr },
> +	{ PCI_VENDOR_ID_INTEL, 0x467f, reset_vmd_sbr },
> +	{ PCI_VENDOR_ID_INTEL, 0x4c3d, reset_vmd_sbr },
> +	{ PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_VMD_9A0B, reset_vmd_sbr },
>  	{ PCI_VENDOR_ID_SAMSUNG, 0xa804, nvme_disable_and_flr },
>  	{ PCI_VENDOR_ID_INTEL, 0x0953, delay_250ms_after_flr },
>  	{ PCI_VENDOR_ID_CHELSIO, PCI_ANY_ID,
> -- 
> 1.8.3.1
> 

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH 2/5] PCI: Add a reset quirk for VMD
  2020-11-24 21:40   ` Bjorn Helgaas
@ 2020-11-25 17:22     ` Derrick, Jonathan
  2020-11-25 17:34       ` Alex Williamson
  0 siblings, 1 reply; 13+ messages in thread
From: Derrick, Jonathan @ 2020-11-25 17:22 UTC (permalink / raw)
  To: helgaas
  Cc: Kalakota, SushmaX, lorenzo.pieralisi, linux-pci, Patel, Nirmal,
	alex.williamson

Hi Bjorn,

On Tue, 2020-11-24 at 15:40 -0600, Bjorn Helgaas wrote:
> [+cc Alex]
> 
> On Fri, Nov 20, 2020 at 03:51:41PM -0700, Jon Derrick wrote:
> > VMD domains should be reset in-between special attachment such as VFIO
> > users. VMD does not offer a reset, however the subdevice domain itself
> > can be reset starting at the Root Bus. Add a Secondary Bus Reset on each
> > of the individual root port devices immediately downstream of the VMD
> > root bus.
> > 
> > Signed-off-by: Jon Derrick <jonathan.derrick@intel.com>
> > ---
> >  drivers/pci/quirks.c | 48 ++++++++++++++++++++++++++++++++++++++++++++++++
> >  1 file changed, 48 insertions(+)
> > 
> > diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
> > index f70692a..ee58b51 100644
> > --- a/drivers/pci/quirks.c
> > +++ b/drivers/pci/quirks.c
> > @@ -3744,6 +3744,49 @@ static int reset_ivb_igd(struct pci_dev *dev, int probe)
> >  	return 0;
> >  }
> >  
> > +/* Issues SBR to VMD domain to clear PCI configuration */
> > +static int reset_vmd_sbr(struct pci_dev *dev, int probe)
> > +{
> > +	char __iomem *cfgbar, *base;
> > +	int rp;
> > +	u16 ctl;
> > +
> > +	if (probe)
> > +		return 0;
> > +
> > +	if (dev->dev.driver)
> > +		return 0;
> 
> I guess "dev" here is the VMD endpoint?  And if the vmd.c driver is
> bound to it, you return success without doing anything?
> 
> If there's no driver for the VMD device, who is trying to reset it?
> 
> I guess I don't quite understand how VMD works.  I would have thought
> that if vmd.c isn't bound to the VMD device, the devices behind the
> VMD would be inaccessible and there'd be no point in a reset.

This is basically the idea behind this reset: allow the user to reset
VMD if there is no driver bound to it, but prevent the reset from
de-enumerating the domain if there is a driver.

If this is an unusual/unexpected use case, we can drop it.


> 
> > +	cfgbar = pci_iomap(dev, 0, 0);
> > +	if (!cfgbar)
> > +		return -ENOMEM;
> > +
> > +	/*
> > +	 * Subdevice config space is mapped linearly using 4k config space
> > +	 * increments. Use increments of 0x8000 to locate root port devices.
> > +	 */
> > +	for (rp = 0; rp < 4; rp++) {
> > +		base = cfgbar + rp * 0x8000;
> 
> I really don't like this part -- iomapping BAR 0 (apparently
> VMD_CFGBAR), and making up the ECAM-ish addresses and basically
> open-coding ECAM accesses below.  I guess this assumes Root Ports are
> only on functions .0, .2, .4, .6?

The Root Ports are Devices xx:00.0, xx:01.0, xx:02.0, and xx:03.0
(corresponding to PCIE_EXT_SLOT_SHIFT = 15)
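
In other words, with 4 KiB of config space per function and 8 functions
per device, root port N sits at N * 8 * 4096 = N * 0x8000 into CFGBAR.
Purely as an illustration (not code from the patch):

	base = cfgbar + PCI_DEVFN(rp, 0) * 4096;	/* == cfgbar + rp * 0x8000 */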


> 
> Is it all open-coded here because this reset path is only of interest
> when vmd.c is NOT bound to the the VMD device, so you can't use
> vmd->cfgbar, etc?

That's correct, but as mentioned above it might be an unusual code path,
so it is not as important as the reset within the driver in patch 1/5.

> 
> What about the case when vmd.c IS bound?  We don't do anything here,
> so does that mean we instead use the usual case of asserting SBR on
> the Root Ports behind the VMD?

It uses the standard Linux reset code paths for Root Port devices.

> 
> > +		if (readl(base + PCI_COMMAND) == 0xFFFFFFFF)
> > +			continue;
> > +
> > +		/* pci_reset_secondary_bus() */
> > +		ctl = readw(base + PCI_BRIDGE_CONTROL);
> > +		ctl |= PCI_BRIDGE_CTL_BUS_RESET;
> > +		writew(ctl, base + PCI_BRIDGE_CONTROL);
> > +		readw(base + PCI_BRIDGE_CONTROL);
> > +		msleep(2);
> > +
> > +		ctl &= ~PCI_BRIDGE_CTL_BUS_RESET;
> > +		writew(ctl, base + PCI_BRIDGE_CONTROL);
> > +		readw(base + PCI_BRIDGE_CONTROL);
> > +	}
> > +
> > +	ssleep(1);
> > +	pci_iounmap(dev, cfgbar);
> > +	return 0;
> > +}
> > +
> >  /* Device-specific reset method for Chelsio T4-based adapters */
> >  static int reset_chelsio_generic_dev(struct pci_dev *dev, int probe)
> >  {
> > @@ -3919,6 +3962,11 @@ static int delay_250ms_after_flr(struct pci_dev *dev, int probe)
> >  		reset_ivb_igd },
> >  	{ PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_IVB_M2_VGA,
> >  		reset_ivb_igd },
> > +	{ PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_VMD_201D, reset_vmd_sbr },
> > +	{ PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_VMD_28C0, reset_vmd_sbr },
> > +	{ PCI_VENDOR_ID_INTEL, 0x467f, reset_vmd_sbr },
> > +	{ PCI_VENDOR_ID_INTEL, 0x4c3d, reset_vmd_sbr },
> > +	{ PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_VMD_9A0B, reset_vmd_sbr },
> >  	{ PCI_VENDOR_ID_SAMSUNG, 0xa804, nvme_disable_and_flr },
> >  	{ PCI_VENDOR_ID_INTEL, 0x0953, delay_250ms_after_flr },
> >  	{ PCI_VENDOR_ID_CHELSIO, PCI_ANY_ID,
> > -- 
> > 1.8.3.1
> > 

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH 2/5] PCI: Add a reset quirk for VMD
  2020-11-25 17:22     ` Derrick, Jonathan
@ 2020-11-25 17:34       ` Alex Williamson
  0 siblings, 0 replies; 13+ messages in thread
From: Alex Williamson @ 2020-11-25 17:34 UTC (permalink / raw)
  To: Derrick, Jonathan
  Cc: helgaas, Kalakota, SushmaX, lorenzo.pieralisi, linux-pci, Patel, Nirmal

On Wed, 25 Nov 2020 17:22:05 +0000
"Derrick, Jonathan" <jonathan.derrick@intel.com> wrote:

> Hi Bjorn,
> 
> On Tue, 2020-11-24 at 15:40 -0600, Bjorn Helgaas wrote:
> > [+cc Alex]
> > 
> > On Fri, Nov 20, 2020 at 03:51:41PM -0700, Jon Derrick wrote:  
> > > VMD domains should be reset in-between special attachment such as VFIO
> > > users. VMD does not offer a reset, however the subdevice domain itself
> > > can be reset starting at the Root Bus. Add a Secondary Bus Reset on each
> > > of the individual root port devices immediately downstream of the VMD
> > > root bus.
> > > 
> > > Signed-off-by: Jon Derrick <jonathan.derrick@intel.com>
> > > ---
> > >  drivers/pci/quirks.c | 48 ++++++++++++++++++++++++++++++++++++++++++++++++
> > >  1 file changed, 48 insertions(+)
> > > 
> > > diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
> > > index f70692a..ee58b51 100644
> > > --- a/drivers/pci/quirks.c
> > > +++ b/drivers/pci/quirks.c
> > > @@ -3744,6 +3744,49 @@ static int reset_ivb_igd(struct pci_dev *dev, int probe)
> > >  	return 0;
> > >  }
> > >  
> > > +/* Issues SBR to VMD domain to clear PCI configuration */
> > > +static int reset_vmd_sbr(struct pci_dev *dev, int probe)
> > > +{
> > > +	char __iomem *cfgbar, *base;
> > > +	int rp;
> > > +	u16 ctl;
> > > +
> > > +	if (probe)
> > > +		return 0;
> > > +
> > > +	if (dev->dev.driver)
> > > +		return 0;  
> > 
> > I guess "dev" here is the VMD endpoint?  And if the vmd.c driver is
> > bound to it, you return success without doing anything?
> > 
> > If there's no driver for the VMD device, who is trying to reset it?
> > 
> > I guess I don't quite understand how VMD works.  I would have thought
> > that if vmd.c isn't bound to the VMD device, the devices behind the
> > VMD would be inaccessible and there'd be no point in a reset.  
> 
> This is basically the idea behind this reset - allow the user to reset
> VMD if there is no driver bound to it, but prevent the reset from
> deenumerating the domain if there is a driver.
> 
> If this is an unusual/unexpected use case, we can drop it.

I don't understand how this improves the vfio use case as claimed in
the commit log; are you expecting the device to be unbound from all
drivers and reset via pci-sysfs between uses?  With this behavior, vfio
would not be able to perform the reset itself, including between resets
of a VM or between separate users, without external manual unbinding
and reset.

   
> > > +	cfgbar = pci_iomap(dev, 0, 0);
> > > +	if (!cfgbar)
> > > +		return -ENOMEM;
> > > +
> > > +	/*
> > > +	 * Subdevice config space is mapped linearly using 4k config space
> > > +	 * increments. Use increments of 0x8000 to locate root port devices.
> > > +	 */
> > > +	for (rp = 0; rp < 4; rp++) {
> > > +		base = cfgbar + rp * 0x8000;  
> > 
> > I really don't like this part -- iomapping BAR 0 (apparently
> > VMD_CFGBAR), and making up the ECAM-ish addresses and basically
> > open-coding ECAM accesses below.  I guess this assumes Root Ports are
> > only on functions .0, .2, .4, .6?  
> 
> The Root Ports are Devices xx:00.0, xx:01.0, xx:02.0, and xx:03.0
> (corresponding to PCIE_EXT_SLOT_SHIFT = 15)
> 
> 
> > 
> > Is it all open-coded here because this reset path is only of interest
> > when vmd.c is NOT bound to the the VMD device, so you can't use
> > vmd->cfgbar, etc?  
> 
> That's correct, but as mentioned above it might be an unusual code path
> so is not as important as the reset within the driver in patch 1/5.
> 
> > 
> > What about the case when vmd.c IS bound?  We don't do anything here,
> > so does that mean we instead use the usual case of asserting SBR on
> > the Root Ports behind the VMD?  
> 
> It uses the standard Linux reset code paths for Root Port devices
> 
> >   
> > > +		if (readl(base + PCI_COMMAND) == 0xFFFFFFFF)
> > > +			continue;
> > > +
> > > +		/* pci_reset_secondary_bus() */
> > > +		ctl = readw(base + PCI_BRIDGE_CONTROL);
> > > +		ctl |= PCI_BRIDGE_CTL_BUS_RESET;
> > > +		writew(ctl, base + PCI_BRIDGE_CONTROL);
> > > +		readw(base + PCI_BRIDGE_CONTROL);
> > > +		msleep(2);
> > > +
> > > +		ctl &= ~PCI_BRIDGE_CTL_BUS_RESET;
> > > +		writew(ctl, base + PCI_BRIDGE_CONTROL);
> > > +		readw(base + PCI_BRIDGE_CONTROL);

We're performing an SBR of the internal root ports here; is the config
space of the affected endpoints handled via save+restore by the code
that calls this?  I'm a little rusty on VMD again.  Thanks,

Alex


> > > +	}
> > > +
> > > +	ssleep(1);
> > > +	pci_iounmap(dev, cfgbar);
> > > +	return 0;
> > > +}
> > > +
> > >  /* Device-specific reset method for Chelsio T4-based adapters */
> > >  static int reset_chelsio_generic_dev(struct pci_dev *dev, int probe)
> > >  {
> > > @@ -3919,6 +3962,11 @@ static int delay_250ms_after_flr(struct pci_dev *dev, int probe)
> > >  		reset_ivb_igd },
> > >  	{ PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_IVB_M2_VGA,
> > >  		reset_ivb_igd },
> > > +	{ PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_VMD_201D, reset_vmd_sbr },
> > > +	{ PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_VMD_28C0, reset_vmd_sbr },
> > > +	{ PCI_VENDOR_ID_INTEL, 0x467f, reset_vmd_sbr },
> > > +	{ PCI_VENDOR_ID_INTEL, 0x4c3d, reset_vmd_sbr },
> > > +	{ PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_VMD_9A0B, reset_vmd_sbr },
> > >  	{ PCI_VENDOR_ID_SAMSUNG, 0xa804, nvme_disable_and_flr },
> > >  	{ PCI_VENDOR_ID_INTEL, 0x0953, delay_250ms_after_flr },
> > >  	{ PCI_VENDOR_ID_CHELSIO, PCI_ANY_ID,
> > > -- 
> > > 1.8.3.1
> > >   


^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH 0/5] Legacy direct-assign mode
  2020-11-20 22:51 [PATCH 0/5] Legacy direct-assign mode Jon Derrick
                   ` (4 preceding siblings ...)
  2020-11-20 22:51 ` [PATCH 5/5] PCI: vmd: Add legacy guest passthrough mode Jon Derrick
@ 2021-03-22 12:28 ` Lorenzo Pieralisi
  2021-03-22 15:25   ` Derrick, Jonathan
  2021-03-22 19:48   ` Christoph Hellwig
  5 siblings, 2 replies; 13+ messages in thread
From: Lorenzo Pieralisi @ 2021-03-22 12:28 UTC (permalink / raw)
  To: Jon Derrick; +Cc: linux-pci, Bjorn Helgaas, Nirmal Patel, Sushma Kalakota

On Fri, Nov 20, 2020 at 03:51:39PM -0700, Jon Derrick wrote:
> This set adds a legacy direct-assign mode. Newer enterprise hardware has
> physical addressing hints to assist device passthrough to guests that needs to
> correctly program bridge windows with physical addresses. Some customers are
> using a legacy method that relies on the VMD subdevice domain's root port
> windows to be written with the physical addresses. This method also allows
> other hypervisors besides QEMU/KVM to perform guest passthrough.
> 
> This patchset adds a host and guest mode to write the physical address
> information to the root port registers in the host and read them in the guest,
> and restore them in both cases on module unload.
> 
> This patchset also folds in the VMD subdevice domain secondary bus reset
> patchset [1] to clear the domain prior to guest passthrough.
> 
> [1] https://patchwork.kernel.org/project/linux-pci/cover/20200928010557.5324-1-jonathan.derrick@intel.com/
> 
> Jon Derrick (5):
>   PCI: vmd: Reset the VMD subdevice domain on probe
>   PCI: Add a reset quirk for VMD
>   PCI: vmd: Add offset translation helper
>   PCI: vmd: Pass features to vmd_get_phys_offsets()
>   PCI: vmd: Add legacy guest passthrough mode
> 
>  drivers/pci/controller/vmd.c | 200 ++++++++++++++++++++++++++++++++++++++-----
>  drivers/pci/quirks.c         |  48 +++++++++++
>  2 files changed, 227 insertions(+), 21 deletions(-)

Hi Jon,

It is unclear to me where we are with this series; please let me know.

Thanks,
Lorenzo

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH 0/5] Legacy direct-assign mode
  2021-03-22 12:28 ` [PATCH 0/5] Legacy direct-assign mode Lorenzo Pieralisi
@ 2021-03-22 15:25   ` Derrick, Jonathan
  2021-03-22 19:48   ` Christoph Hellwig
  1 sibling, 0 replies; 13+ messages in thread
From: Derrick, Jonathan @ 2021-03-22 15:25 UTC (permalink / raw)
  To: lorenzo.pieralisi; +Cc: linux-pci, Patel, Nirmal, helgaas, Kalakota, Sushma

On Mon, 2021-03-22 at 12:28 +0000, Lorenzo Pieralisi wrote:
> On Fri, Nov 20, 2020 at 03:51:39PM -0700, Jon Derrick wrote:
> > This set adds a legacy direct-assign mode. Newer enterprise hardware has
> > physical addressing hints to assist device passthrough to guests that needs to
> > correctly program bridge windows with physical addresses. Some customers are
> > using a legacy method that relies on the VMD subdevice domain's root port
> > windows to be written with the physical addresses. This method also allows
> > other hypervisors besides QEMU/KVM to perform guest passthrough.
> > 
> > This patchset adds a host and guest mode to write the physical address
> > information to the root port registers in the host and read them in the guest,
> > and restore them in both cases on module unload.
> > 
> > This patchset also folds in the VMD subdevice domain secondary bus reset
> > patchset [1] to clear the domain prior to guest passthrough.
> > 
> > [1] https://patchwork.kernel.org/project/linux-pci/cover/20200928010557.5324-1-jonathan.derrick@intel.com/
> > 
> > Jon Derrick (5):
> >   PCI: vmd: Reset the VMD subdevice domain on probe
> >   PCI: Add a reset quirk for VMD
> >   PCI: vmd: Add offset translation helper
> >   PCI: vmd: Pass features to vmd_get_phys_offsets()
> >   PCI: vmd: Add legacy guest passthrough mode
> > 
> >  drivers/pci/controller/vmd.c | 200 ++++++++++++++++++++++++++++++++++++++-----
> >  drivers/pci/quirks.c         |  48 +++++++++++
> >  2 files changed, 227 insertions(+), 21 deletions(-)
> 
> Hi Jon,
> 
> it is unclear to me where we are with this series, please let me know.
> 
> Thanks,
> Lorenzo

Hi Lorenzo,

We can drop this series for now. We may revive it in the future with
modifications to the reset sequence.

Best,
Jon

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH 0/5] Legacy direct-assign mode
  2021-03-22 12:28 ` [PATCH 0/5] Legacy direct-assign mode Lorenzo Pieralisi
  2021-03-22 15:25   ` Derrick, Jonathan
@ 2021-03-22 19:48   ` Christoph Hellwig
  2021-03-22 22:55     ` Derrick, Jonathan
  1 sibling, 1 reply; 13+ messages in thread
From: Christoph Hellwig @ 2021-03-22 19:48 UTC (permalink / raw)
  To: Lorenzo Pieralisi
  Cc: Jon Derrick, linux-pci, Bjorn Helgaas, Nirmal Patel, Sushma Kalakota

On Mon, Mar 22, 2021 at 12:28:37PM +0000, Lorenzo Pieralisi wrote:
> > correctly program bridge windows with physical addresses. Some customers are
> > using a legacy method that relies on the VMD subdevice domain's root port
> > windows to be written with the physical addresses. This method also allows
> > other hypervisors besides QEMU/KVM to perform guest passthrough.

This seems like a bad idea.  What are these other hypervisors?  AFAIK
there are no purely userspace hypervisors, so in other words what you
propose here is only for unsupported external modules.

I don't think we should merge something like this.

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH 0/5] Legacy direct-assign mode
  2021-03-22 19:48   ` Christoph Hellwig
@ 2021-03-22 22:55     ` Derrick, Jonathan
  0 siblings, 0 replies; 13+ messages in thread
From: Derrick, Jonathan @ 2021-03-22 22:55 UTC (permalink / raw)
  To: lorenzo.pieralisi, hch
  Cc: Kalakota, SushmaX, linux-pci, Patel, Nirmal, helgaas

On Mon, 2021-03-22 at 19:48 +0000, Christoph Hellwig wrote:
> On Mon, Mar 22, 2021 at 12:28:37PM +0000, Lorenzo Pieralisi wrote:
> > > correctly program bridge windows with physical addresses. Some customers are
> > > using a legacy method that relies on the VMD subdevice domain's root port
> > > windows to be written with the physical addresses. This method also allows
> > > other hypervisors besides QEMU/KVM to perform guest passthrough.
> 
> This seems like a bad idea.  What are these other hypervisors?  AFAIK
> there are no purely userspace hypervisors, so in other words what you
> propose here is only for unsupported external modules.
Any of the type 1 hypervisors here:
https://en.wikipedia.org/wiki/Hypervisor

> 
> I don't think we shoud merge something like this.

^ permalink raw reply	[flat|nested] 13+ messages in thread

end of thread

Thread overview: 13+ messages
2020-11-20 22:51 [PATCH 0/5] Legacy direct-assign mode Jon Derrick
2020-11-20 22:51 ` [PATCH 1/5] PCI: vmd: Reset the VMD subdevice domain on probe Jon Derrick
2020-11-20 22:51 ` [PATCH 2/5] PCI: Add a reset quirk for VMD Jon Derrick
2020-11-24 21:40   ` Bjorn Helgaas
2020-11-25 17:22     ` Derrick, Jonathan
2020-11-25 17:34       ` Alex Williamson
2020-11-20 22:51 ` [PATCH 3/5] PCI: vmd: Add offset translation helper Jon Derrick
2020-11-20 22:51 ` [PATCH 4/5] PCI: vmd: Pass features to vmd_get_phys_offsets() Jon Derrick
2020-11-20 22:51 ` [PATCH 5/5] PCI: vmd: Add legacy guest passthrough mode Jon Derrick
2021-03-22 12:28 ` [PATCH 0/5] Legacy direct-assign mode Lorenzo Pieralisi
2021-03-22 15:25   ` Derrick, Jonathan
2021-03-22 19:48   ` Christoph Hellwig
2021-03-22 22:55     ` Derrick, Jonathan
