* [Patch Part3 V6 0/8] Enable support of Intel DMAR device hotplug
@ 2014-09-19  5:18 ` Jiang Liu
  0 siblings, 0 replies; 35+ messages in thread
From: Jiang Liu @ 2014-09-19  5:18 UTC (permalink / raw)
  To: Joerg Roedel, David Woodhouse, Yinghai Lu, Bjorn Helgaas,
	Dan Williams, Vinod Koul, Rafael J . Wysocki
  Cc: Jiang Liu, Ashok Raj, Yijing Wang, Tony Luck, iommu, linux-pci,
	linux-hotplug, linux-kernel, dmaengine

When hot-plugging a discrete IOH or a physical processor with an embedded
IIO, we need to handle the DMAR (or IOMMU) unit in the PCIe host bridge if
DMAR is in use. This patch set enhances the current DMAR/IOMMU/IR
drivers to support hotplug and is based on the latest Linus master branch.

All prerequisite patches to support DMAR device hotplug have been merged
into the mainline kernel, and this is the last patch set needed to enable
DMAR device hotplug.

You may access the patch set at:
https://github.com/jiangliu/linux.git iommu/hotplug_v6

This patch set has been tested on an Intel development machine.
Any comments and testing are appreciated.

V5->V6:
1) Fix a race condition found during review
2) Rename dmar_walk_resources() to dmar_walk_remapping_entries()

Patches 1-4 enhance the DMAR framework to support hotplug
Patch 5 enhances the Intel interrupt remapping driver to support hotplug
Patch 6 enhances error handling in the Intel IR driver
Patch 7 enhances the Intel IOMMU driver to support hotplug
Patch 8 enhances the ACPI pci_root driver to handle DMAR units

Jiang Liu (8):
  iommu/vt-d: Introduce helper function dmar_walk_resources()
  iommu/vt-d: Dynamically allocate and free seq_id for DMAR units
  iommu/vt-d: Implement DMAR unit hotplug framework
  iommu/vt-d: Search for ACPI _DSM method for DMAR hotplug
  iommu/vt-d: Enhance intel_irq_remapping driver to support DMAR unit
    hotplug
  iommu/vt-d: Enhance error recovery in function
    intel_enable_irq_remapping()
  iommu/vt-d: Enhance intel-iommu driver to support DMAR unit hotplug
  pci, ACPI, iommu: Enhance pci_root to support DMAR device hotplug

 drivers/acpi/pci_root.c             |   16 +-
 drivers/iommu/dmar.c                |  532 ++++++++++++++++++++++++++++-------
 drivers/iommu/intel-iommu.c         |  297 ++++++++++++++-----
 drivers/iommu/intel_irq_remapping.c |  237 ++++++++++++----
 include/linux/dmar.h                |   50 +++-
 5 files changed, 890 insertions(+), 242 deletions(-)

-- 
1.7.10.4


^ permalink raw reply	[flat|nested] 35+ messages in thread


* [Patch Part3 V6 1/8] iommu/vt-d: Introduce helper function dmar_walk_resources()
@ 2014-09-19  5:18   ` Jiang Liu
  0 siblings, 0 replies; 35+ messages in thread
From: Jiang Liu @ 2014-09-19  5:18 UTC (permalink / raw)
  To: Joerg Roedel, David Woodhouse, Yinghai Lu, Bjorn Helgaas,
	Dan Williams, Vinod Koul, Rafael J . Wysocki
  Cc: Jiang Liu, Ashok Raj, Yijing Wang, Tony Luck, iommu, linux-pci,
	linux-hotplug, linux-kernel, dmaengine

Introduce the helper function dmar_walk_resources() to walk the resource
entries in the DMAR table and in the ACPI buffer objects returned by the
ACPI _DSM method for IOMMU hotplug.

Signed-off-by: Jiang Liu <jiang.liu@linux.intel.com>
---
 drivers/iommu/dmar.c        |  209 +++++++++++++++++++++++--------------------
 drivers/iommu/intel-iommu.c |    4 +-
 include/linux/dmar.h        |   19 ++--
 3 files changed, 122 insertions(+), 110 deletions(-)

diff --git a/drivers/iommu/dmar.c b/drivers/iommu/dmar.c
index 06d268abe951..a05cf3634efe 100644
--- a/drivers/iommu/dmar.c
+++ b/drivers/iommu/dmar.c
@@ -44,6 +44,14 @@
 
 #include "irq_remapping.h"
 
+typedef int (*dmar_res_handler_t)(struct acpi_dmar_header *, void *);
+struct dmar_res_callback {
+	dmar_res_handler_t	cb[ACPI_DMAR_TYPE_RESERVED];
+	void			*arg[ACPI_DMAR_TYPE_RESERVED];
+	bool			ignore_unhandled;
+	bool			print_entry;
+};
+
 /*
  * Assumptions:
  * 1) The hotplug framework guarentees that DMAR unit will be hot-added
@@ -333,7 +341,7 @@ static struct notifier_block dmar_pci_bus_nb = {
  * present in the platform
  */
 static int __init
-dmar_parse_one_drhd(struct acpi_dmar_header *header)
+dmar_parse_one_drhd(struct acpi_dmar_header *header, void *arg)
 {
 	struct acpi_dmar_hardware_unit *drhd;
 	struct dmar_drhd_unit *dmaru;
@@ -364,6 +372,10 @@ dmar_parse_one_drhd(struct acpi_dmar_header *header)
 		return ret;
 	}
 	dmar_register_drhd_unit(dmaru);
+
+	if (arg)
+		(*(int *)arg)++;
+
 	return 0;
 }
 
@@ -376,7 +388,8 @@ static void dmar_free_drhd(struct dmar_drhd_unit *dmaru)
 	kfree(dmaru);
 }
 
-static int __init dmar_parse_one_andd(struct acpi_dmar_header *header)
+static int __init dmar_parse_one_andd(struct acpi_dmar_header *header,
+				      void *arg)
 {
 	struct acpi_dmar_andd *andd = (void *)header;
 
@@ -398,7 +411,7 @@ static int __init dmar_parse_one_andd(struct acpi_dmar_header *header)
 
 #ifdef CONFIG_ACPI_NUMA
 static int __init
-dmar_parse_one_rhsa(struct acpi_dmar_header *header)
+dmar_parse_one_rhsa(struct acpi_dmar_header *header, void *arg)
 {
 	struct acpi_dmar_rhsa *rhsa;
 	struct dmar_drhd_unit *drhd;
@@ -425,6 +438,8 @@ dmar_parse_one_rhsa(struct acpi_dmar_header *header)
 
 	return 0;
 }
+#else
+#define	dmar_parse_one_rhsa		dmar_res_noop
 #endif
 
 static void __init
@@ -486,6 +501,52 @@ static int __init dmar_table_detect(void)
 	return (ACPI_SUCCESS(status) ? 1 : 0);
 }
 
+static int dmar_walk_remapping_entries(struct acpi_dmar_header *start,
+				       size_t len, struct dmar_res_callback *cb)
+{
+	int ret = 0;
+	struct acpi_dmar_header *iter, *next;
+	struct acpi_dmar_header *end = ((void *)start) + len;
+
+	for (iter = start; iter < end && ret == 0; iter = next) {
+		next = (void *)iter + iter->length;
+		if (iter->length == 0) {
+			/* Avoid looping forever on bad ACPI tables */
+			pr_debug(FW_BUG "Invalid 0-length structure\n");
+			break;
+		} else if (next > end) {
+			/* Avoid passing table end */
+			pr_warn(FW_BUG "record passes table end\n");
+			ret = -EINVAL;
+			break;
+		}
+
+		if (cb->print_entry)
+			dmar_table_print_dmar_entry(iter);
+
+		if (iter->type >= ACPI_DMAR_TYPE_RESERVED) {
+			/* continue for forward compatibility */
+			pr_debug("Unknown DMAR structure type %d\n",
+				 iter->type);
+		} else if (cb->cb[iter->type]) {
+			ret = cb->cb[iter->type](iter, cb->arg[iter->type]);
+		} else if (!cb->ignore_unhandled) {
+			pr_warn("No handler for DMAR structure type %d\n",
+				iter->type);
+			ret = -EINVAL;
+		}
+	}
+
+	return ret;
+}
+
+static inline int dmar_walk_dmar_table(struct acpi_table_dmar *dmar,
+				       struct dmar_res_callback *cb)
+{
+	return dmar_walk_remapping_entries((void *)(dmar + 1),
+			dmar->header.length - sizeof(*dmar), cb);
+}
+
 /**
  * parse_dmar_table - parses the DMA reporting table
  */
@@ -493,9 +554,18 @@ static int __init
 parse_dmar_table(void)
 {
 	struct acpi_table_dmar *dmar;
-	struct acpi_dmar_header *entry_header;
 	int ret = 0;
 	int drhd_count = 0;
+	struct dmar_res_callback cb = {
+		.print_entry = true,
+		.ignore_unhandled = true,
+		.arg[ACPI_DMAR_TYPE_HARDWARE_UNIT] = &drhd_count,
+		.cb[ACPI_DMAR_TYPE_HARDWARE_UNIT] = &dmar_parse_one_drhd,
+		.cb[ACPI_DMAR_TYPE_RESERVED_MEMORY] = &dmar_parse_one_rmrr,
+		.cb[ACPI_DMAR_TYPE_ROOT_ATS] = &dmar_parse_one_atsr,
+		.cb[ACPI_DMAR_TYPE_HARDWARE_AFFINITY] = &dmar_parse_one_rhsa,
+		.cb[ACPI_DMAR_TYPE_NAMESPACE] = &dmar_parse_one_andd,
+	};
 
 	/*
 	 * Do it again, earlier dmar_tbl mapping could be mapped with
@@ -519,51 +589,10 @@ parse_dmar_table(void)
 	}
 
 	pr_info("Host address width %d\n", dmar->width + 1);
-
-	entry_header = (struct acpi_dmar_header *)(dmar + 1);
-	while (((unsigned long)entry_header) <
-			(((unsigned long)dmar) + dmar_tbl->length)) {
-		/* Avoid looping forever on bad ACPI tables */
-		if (entry_header->length == 0) {
-			pr_warn("Invalid 0-length structure\n");
-			ret = -EINVAL;
-			break;
-		}
-
-		dmar_table_print_dmar_entry(entry_header);
-
-		switch (entry_header->type) {
-		case ACPI_DMAR_TYPE_HARDWARE_UNIT:
-			drhd_count++;
-			ret = dmar_parse_one_drhd(entry_header);
-			break;
-		case ACPI_DMAR_TYPE_RESERVED_MEMORY:
-			ret = dmar_parse_one_rmrr(entry_header);
-			break;
-		case ACPI_DMAR_TYPE_ROOT_ATS:
-			ret = dmar_parse_one_atsr(entry_header);
-			break;
-		case ACPI_DMAR_TYPE_HARDWARE_AFFINITY:
-#ifdef CONFIG_ACPI_NUMA
-			ret = dmar_parse_one_rhsa(entry_header);
-#endif
-			break;
-		case ACPI_DMAR_TYPE_NAMESPACE:
-			ret = dmar_parse_one_andd(entry_header);
-			break;
-		default:
-			pr_warn("Unknown DMAR structure type %d\n",
-				entry_header->type);
-			ret = 0; /* for forward compatibility */
-			break;
-		}
-		if (ret)
-			break;
-
-		entry_header = ((void *)entry_header + entry_header->length);
-	}
-	if (drhd_count == 0)
+	ret = dmar_walk_dmar_table(dmar, &cb);
+	if (ret == 0 && drhd_count == 0)
 		pr_warn(FW_BUG "No DRHD structure found in DMAR table\n");
+
 	return ret;
 }
 
@@ -761,76 +790,60 @@ static void warn_invalid_dmar(u64 addr, const char *message)
 		dmi_get_system_info(DMI_PRODUCT_VERSION));
 }
 
-static int __init check_zero_address(void)
+static int __ref
+dmar_validate_one_drhd(struct acpi_dmar_header *entry, void *arg)
 {
-	struct acpi_table_dmar *dmar;
-	struct acpi_dmar_header *entry_header;
 	struct acpi_dmar_hardware_unit *drhd;
+	void __iomem *addr;
+	u64 cap, ecap;
 
-	dmar = (struct acpi_table_dmar *)dmar_tbl;
-	entry_header = (struct acpi_dmar_header *)(dmar + 1);
-
-	while (((unsigned long)entry_header) <
-			(((unsigned long)dmar) + dmar_tbl->length)) {
-		/* Avoid looping forever on bad ACPI tables */
-		if (entry_header->length == 0) {
-			pr_warn("Invalid 0-length structure\n");
-			return 0;
-		}
-
-		if (entry_header->type == ACPI_DMAR_TYPE_HARDWARE_UNIT) {
-			void __iomem *addr;
-			u64 cap, ecap;
-
-			drhd = (void *)entry_header;
-			if (!drhd->address) {
-				warn_invalid_dmar(0, "");
-				goto failed;
-			}
+	drhd = (void *)entry;
+	if (!drhd->address) {
+		warn_invalid_dmar(0, "");
+		return -EINVAL;
+	}
 
-			addr = early_ioremap(drhd->address, VTD_PAGE_SIZE);
-			if (!addr ) {
-				printk("IOMMU: can't validate: %llx\n", drhd->address);
-				goto failed;
-			}
-			cap = dmar_readq(addr + DMAR_CAP_REG);
-			ecap = dmar_readq(addr + DMAR_ECAP_REG);
-			early_iounmap(addr, VTD_PAGE_SIZE);
-			if (cap == (uint64_t)-1 && ecap == (uint64_t)-1) {
-				warn_invalid_dmar(drhd->address,
-						  " returns all ones");
-				goto failed;
-			}
-		}
+	addr = early_ioremap(drhd->address, VTD_PAGE_SIZE);
+	if (!addr) {
+		pr_warn("IOMMU: can't validate: %llx\n", drhd->address);
+		return -EINVAL;
+	}
+	cap = dmar_readq(addr + DMAR_CAP_REG);
+	ecap = dmar_readq(addr + DMAR_ECAP_REG);
+	early_iounmap(addr, VTD_PAGE_SIZE);
 
-		entry_header = ((void *)entry_header + entry_header->length);
+	if (cap == (uint64_t)-1 && ecap == (uint64_t)-1) {
+		warn_invalid_dmar(drhd->address, " returns all ones");
+		return -EINVAL;
 	}
-	return 1;
 
-failed:
 	return 0;
 }
 
 int __init detect_intel_iommu(void)
 {
 	int ret;
+	struct dmar_res_callback validate_drhd_cb = {
+		.cb[ACPI_DMAR_TYPE_HARDWARE_UNIT] = &dmar_validate_one_drhd,
+		.ignore_unhandled = true,
+	};
 
 	down_write(&dmar_global_lock);
 	ret = dmar_table_detect();
 	if (ret)
-		ret = check_zero_address();
-	{
-		if (ret && !no_iommu && !iommu_detected && !dmar_disabled) {
-			iommu_detected = 1;
-			/* Make sure ACS will be enabled */
-			pci_request_acs();
-		}
+		ret = !dmar_walk_dmar_table((struct acpi_table_dmar *)dmar_tbl,
+					    &validate_drhd_cb);
+	if (ret && !no_iommu && !iommu_detected && !dmar_disabled) {
+		iommu_detected = 1;
+		/* Make sure ACS will be enabled */
+		pci_request_acs();
+	}
 
 #ifdef CONFIG_X86
-		if (ret)
-			x86_init.iommu.iommu_init = intel_iommu_init;
+	if (ret)
+		x86_init.iommu.iommu_init = intel_iommu_init;
 #endif
-	}
+
 	early_acpi_os_unmap_memory((void __iomem *)dmar_tbl, dmar_tbl_size);
 	dmar_tbl = NULL;
 	up_write(&dmar_global_lock);
diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index 5619f264862d..4af2206e41bc 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -3682,7 +3682,7 @@ static inline void init_iommu_pm_ops(void) {}
 #endif	/* CONFIG_PM */
 
 
-int __init dmar_parse_one_rmrr(struct acpi_dmar_header *header)
+int __init dmar_parse_one_rmrr(struct acpi_dmar_header *header, void *arg)
 {
 	struct acpi_dmar_reserved_memory *rmrr;
 	struct dmar_rmrr_unit *rmrru;
@@ -3708,7 +3708,7 @@ int __init dmar_parse_one_rmrr(struct acpi_dmar_header *header)
 	return 0;
 }
 
-int __init dmar_parse_one_atsr(struct acpi_dmar_header *hdr)
+int __init dmar_parse_one_atsr(struct acpi_dmar_header *hdr, void *arg)
 {
 	struct acpi_dmar_atsr *atsr;
 	struct dmar_atsr_unit *atsru;
diff --git a/include/linux/dmar.h b/include/linux/dmar.h
index 1deece46a0ca..fac8ca34f9a8 100644
--- a/include/linux/dmar.h
+++ b/include/linux/dmar.h
@@ -115,22 +115,21 @@ extern int dmar_remove_dev_scope(struct dmar_pci_notify_info *info,
 extern int detect_intel_iommu(void);
 extern int enable_drhd_fault_handling(void);
 
+static inline int dmar_res_noop(struct acpi_dmar_header *hdr, void *arg)
+{
+	return 0;
+}
+
 #ifdef CONFIG_INTEL_IOMMU
 extern int iommu_detected, no_iommu;
 extern int intel_iommu_init(void);
-extern int dmar_parse_one_rmrr(struct acpi_dmar_header *header);
-extern int dmar_parse_one_atsr(struct acpi_dmar_header *header);
+extern int dmar_parse_one_rmrr(struct acpi_dmar_header *header, void *arg);
+extern int dmar_parse_one_atsr(struct acpi_dmar_header *header, void *arg);
 extern int dmar_iommu_notify_scope_dev(struct dmar_pci_notify_info *info);
 #else /* !CONFIG_INTEL_IOMMU: */
 static inline int intel_iommu_init(void) { return -ENODEV; }
-static inline int dmar_parse_one_rmrr(struct acpi_dmar_header *header)
-{
-	return 0;
-}
-static inline int dmar_parse_one_atsr(struct acpi_dmar_header *header)
-{
-	return 0;
-}
+#define	dmar_parse_one_rmrr		dmar_res_noop
+#define	dmar_parse_one_atsr		dmar_res_noop
 static inline int dmar_iommu_notify_scope_dev(struct dmar_pci_notify_info *info)
 {
 	return 0;
-- 
1.7.10.4


^ permalink raw reply related	[flat|nested] 35+ messages in thread


* [Patch Part3 V6 1/8] iommu/vt-d: Introduce helper function dmar_walk_resources()
@ 2014-09-19  5:18   ` Jiang Liu
  0 siblings, 0 replies; 35+ messages in thread
From: Jiang Liu @ 2014-09-19  5:18 UTC (permalink / raw)
  To: Joerg Roedel, David Woodhouse, Yinghai Lu, Bjorn Helgaas,
	Dan Williams, Vinod Koul, Rafael J . Wysocki
  Cc: Jiang Liu, Ashok Raj, Yijing Wang, Tony Luck, iommu, linux-pci,
	linux-hotplug, linux-kernel, dmaengine

Introduce helper function dmar_walk_resources to walk resource entries
in DMAR table and ACPI buffer object returned by ACPI _DSM method
for IOMMU hot-plug.

Signed-off-by: Jiang Liu <jiang.liu@linux.intel.com>
---
 drivers/iommu/dmar.c        |  209 +++++++++++++++++++++++--------------------
 drivers/iommu/intel-iommu.c |    4 +-
 include/linux/dmar.h        |   19 ++--
 3 files changed, 122 insertions(+), 110 deletions(-)

diff --git a/drivers/iommu/dmar.c b/drivers/iommu/dmar.c
index 06d268abe951..a05cf3634efe 100644
--- a/drivers/iommu/dmar.c
+++ b/drivers/iommu/dmar.c
@@ -44,6 +44,14 @@
 
 #include "irq_remapping.h"
 
+typedef int (*dmar_res_handler_t)(struct acpi_dmar_header *, void *);
+struct dmar_res_callback {
+	dmar_res_handler_t	cb[ACPI_DMAR_TYPE_RESERVED];
+	void			*arg[ACPI_DMAR_TYPE_RESERVED];
+	bool			ignore_unhandled;
+	bool			print_entry;
+};
+
 /*
  * Assumptions:
  * 1) The hotplug framework guarentees that DMAR unit will be hot-added
@@ -333,7 +341,7 @@ static struct notifier_block dmar_pci_bus_nb = {
  * present in the platform
  */
 static int __init
-dmar_parse_one_drhd(struct acpi_dmar_header *header)
+dmar_parse_one_drhd(struct acpi_dmar_header *header, void *arg)
 {
 	struct acpi_dmar_hardware_unit *drhd;
 	struct dmar_drhd_unit *dmaru;
@@ -364,6 +372,10 @@ dmar_parse_one_drhd(struct acpi_dmar_header *header)
 		return ret;
 	}
 	dmar_register_drhd_unit(dmaru);
+
+	if (arg)
+		(*(int *)arg)++;
+
 	return 0;
 }
 
@@ -376,7 +388,8 @@ static void dmar_free_drhd(struct dmar_drhd_unit *dmaru)
 	kfree(dmaru);
 }
 
-static int __init dmar_parse_one_andd(struct acpi_dmar_header *header)
+static int __init dmar_parse_one_andd(struct acpi_dmar_header *header,
+				      void *arg)
 {
 	struct acpi_dmar_andd *andd = (void *)header;
 
@@ -398,7 +411,7 @@ static int __init dmar_parse_one_andd(struct acpi_dmar_header *header)
 
 #ifdef CONFIG_ACPI_NUMA
 static int __init
-dmar_parse_one_rhsa(struct acpi_dmar_header *header)
+dmar_parse_one_rhsa(struct acpi_dmar_header *header, void *arg)
 {
 	struct acpi_dmar_rhsa *rhsa;
 	struct dmar_drhd_unit *drhd;
@@ -425,6 +438,8 @@ dmar_parse_one_rhsa(struct acpi_dmar_header *header)
 
 	return 0;
 }
+#else
+#define	dmar_parse_one_rhsa		dmar_res_noop
 #endif
 
 static void __init
@@ -486,6 +501,52 @@ static int __init dmar_table_detect(void)
 	return (ACPI_SUCCESS(status) ? 1 : 0);
 }
 
+static int dmar_walk_remapping_entries(struct acpi_dmar_header *start,
+				       size_t len, struct dmar_res_callback *cb)
+{
+	int ret = 0;
+	struct acpi_dmar_header *iter, *next;
+	struct acpi_dmar_header *end = ((void *)start) + len;
+
+	for (iter = start; iter < end && ret = 0; iter = next) {
+		next = (void *)iter + iter->length;
+		if (iter->length = 0) {
+			/* Avoid looping forever on bad ACPI tables */
+			pr_debug(FW_BUG "Invalid 0-length structure\n");
+			break;
+		} else if (next > end) {
+			/* Avoid passing table end */
+			pr_warn(FW_BUG "record passes table end\n");
+			ret = -EINVAL;
+			break;
+		}
+
+		if (cb->print_entry)
+			dmar_table_print_dmar_entry(iter);
+
+		if (iter->type >= ACPI_DMAR_TYPE_RESERVED) {
+			/* continue for forward compatibility */
+			pr_debug("Unknown DMAR structure type %d\n",
+				 iter->type);
+		} else if (cb->cb[iter->type]) {
+			ret = cb->cb[iter->type](iter, cb->arg[iter->type]);
+		} else if (!cb->ignore_unhandled) {
+			pr_warn("No handler for DMAR structure type %d\n",
+				iter->type);
+			ret = -EINVAL;
+		}
+	}
+
+	return ret;
+}
+
+static inline int dmar_walk_dmar_table(struct acpi_table_dmar *dmar,
+				       struct dmar_res_callback *cb)
+{
+	return dmar_walk_remapping_entries((void *)(dmar + 1),
+			dmar->header.length - sizeof(*dmar), cb);
+}
+
 /**
  * parse_dmar_table - parses the DMA reporting table
  */
@@ -493,9 +554,18 @@ static int __init
 parse_dmar_table(void)
 {
 	struct acpi_table_dmar *dmar;
-	struct acpi_dmar_header *entry_header;
 	int ret = 0;
 	int drhd_count = 0;
+	struct dmar_res_callback cb = {
+		.print_entry = true,
+		.ignore_unhandled = true,
+		.arg[ACPI_DMAR_TYPE_HARDWARE_UNIT] = &drhd_count,
+		.cb[ACPI_DMAR_TYPE_HARDWARE_UNIT] = &dmar_parse_one_drhd,
+		.cb[ACPI_DMAR_TYPE_RESERVED_MEMORY] = &dmar_parse_one_rmrr,
+		.cb[ACPI_DMAR_TYPE_ROOT_ATS] = &dmar_parse_one_atsr,
+		.cb[ACPI_DMAR_TYPE_HARDWARE_AFFINITY] = &dmar_parse_one_rhsa,
+		.cb[ACPI_DMAR_TYPE_NAMESPACE] = &dmar_parse_one_andd,
+	};
 
 	/*
 	 * Do it again, earlier dmar_tbl mapping could be mapped with
@@ -519,51 +589,10 @@ parse_dmar_table(void)
 	}
 
 	pr_info("Host address width %d\n", dmar->width + 1);
-
-	entry_header = (struct acpi_dmar_header *)(dmar + 1);
-	while (((unsigned long)entry_header) <
-			(((unsigned long)dmar) + dmar_tbl->length)) {
-		/* Avoid looping forever on bad ACPI tables */
-		if (entry_header->length == 0) {
-			pr_warn("Invalid 0-length structure\n");
-			ret = -EINVAL;
-			break;
-		}
-
-		dmar_table_print_dmar_entry(entry_header);
-
-		switch (entry_header->type) {
-		case ACPI_DMAR_TYPE_HARDWARE_UNIT:
-			drhd_count++;
-			ret = dmar_parse_one_drhd(entry_header);
-			break;
-		case ACPI_DMAR_TYPE_RESERVED_MEMORY:
-			ret = dmar_parse_one_rmrr(entry_header);
-			break;
-		case ACPI_DMAR_TYPE_ROOT_ATS:
-			ret = dmar_parse_one_atsr(entry_header);
-			break;
-		case ACPI_DMAR_TYPE_HARDWARE_AFFINITY:
-#ifdef CONFIG_ACPI_NUMA
-			ret = dmar_parse_one_rhsa(entry_header);
-#endif
-			break;
-		case ACPI_DMAR_TYPE_NAMESPACE:
-			ret = dmar_parse_one_andd(entry_header);
-			break;
-		default:
-			pr_warn("Unknown DMAR structure type %d\n",
-				entry_header->type);
-			ret = 0; /* for forward compatibility */
-			break;
-		}
-		if (ret)
-			break;
-
-		entry_header = ((void *)entry_header + entry_header->length);
-	}
-	if (drhd_count == 0)
+	ret = dmar_walk_dmar_table(dmar, &cb);
+	if (ret == 0 && drhd_count == 0)
 		pr_warn(FW_BUG "No DRHD structure found in DMAR table\n");
+
 	return ret;
 }
 
@@ -761,76 +790,60 @@ static void warn_invalid_dmar(u64 addr, const char *message)
 		dmi_get_system_info(DMI_PRODUCT_VERSION));
 }
 
-static int __init check_zero_address(void)
+static int __ref
+dmar_validate_one_drhd(struct acpi_dmar_header *entry, void *arg)
 {
-	struct acpi_table_dmar *dmar;
-	struct acpi_dmar_header *entry_header;
 	struct acpi_dmar_hardware_unit *drhd;
+	void __iomem *addr;
+	u64 cap, ecap;
 
-	dmar = (struct acpi_table_dmar *)dmar_tbl;
-	entry_header = (struct acpi_dmar_header *)(dmar + 1);
-
-	while (((unsigned long)entry_header) <
-			(((unsigned long)dmar) + dmar_tbl->length)) {
-		/* Avoid looping forever on bad ACPI tables */
-		if (entry_header->length == 0) {
-			pr_warn("Invalid 0-length structure\n");
-			return 0;
-		}
-
-		if (entry_header->type == ACPI_DMAR_TYPE_HARDWARE_UNIT) {
-			void __iomem *addr;
-			u64 cap, ecap;
-
-			drhd = (void *)entry_header;
-			if (!drhd->address) {
-				warn_invalid_dmar(0, "");
-				goto failed;
-			}
+	drhd = (void *)entry;
+	if (!drhd->address) {
+		warn_invalid_dmar(0, "");
+		return -EINVAL;
+	}
 
-			addr = early_ioremap(drhd->address, VTD_PAGE_SIZE);
-			if (!addr ) {
-				printk("IOMMU: can't validate: %llx\n", drhd->address);
-				goto failed;
-			}
-			cap = dmar_readq(addr + DMAR_CAP_REG);
-			ecap = dmar_readq(addr + DMAR_ECAP_REG);
-			early_iounmap(addr, VTD_PAGE_SIZE);
-			if (cap == (uint64_t)-1 && ecap == (uint64_t)-1) {
-				warn_invalid_dmar(drhd->address,
-						  " returns all ones");
-				goto failed;
-			}
-		}
+	addr = early_ioremap(drhd->address, VTD_PAGE_SIZE);
+	if (!addr) {
+		pr_warn("IOMMU: can't validate: %llx\n", drhd->address);
+		return -EINVAL;
+	}
+	cap = dmar_readq(addr + DMAR_CAP_REG);
+	ecap = dmar_readq(addr + DMAR_ECAP_REG);
+	early_iounmap(addr, VTD_PAGE_SIZE);
 
-		entry_header = ((void *)entry_header + entry_header->length);
+	if (cap == (uint64_t)-1 && ecap == (uint64_t)-1) {
+		warn_invalid_dmar(drhd->address, " returns all ones");
+		return -EINVAL;
 	}
-	return 1;
 
-failed:
 	return 0;
 }
 
 int __init detect_intel_iommu(void)
 {
 	int ret;
+	struct dmar_res_callback validate_drhd_cb = {
+		.cb[ACPI_DMAR_TYPE_HARDWARE_UNIT] = &dmar_validate_one_drhd,
+		.ignore_unhandled = true,
+	};
 
 	down_write(&dmar_global_lock);
 	ret = dmar_table_detect();
 	if (ret)
-		ret = check_zero_address();
-	{
-		if (ret && !no_iommu && !iommu_detected && !dmar_disabled) {
-			iommu_detected = 1;
-			/* Make sure ACS will be enabled */
-			pci_request_acs();
-		}
+		ret = !dmar_walk_dmar_table((struct acpi_table_dmar *)dmar_tbl,
+					    &validate_drhd_cb);
+	if (ret && !no_iommu && !iommu_detected && !dmar_disabled) {
+		iommu_detected = 1;
+		/* Make sure ACS will be enabled */
+		pci_request_acs();
+	}
 
 #ifdef CONFIG_X86
-		if (ret)
-			x86_init.iommu.iommu_init = intel_iommu_init;
+	if (ret)
+		x86_init.iommu.iommu_init = intel_iommu_init;
 #endif
-	}
+
 	early_acpi_os_unmap_memory((void __iomem *)dmar_tbl, dmar_tbl_size);
 	dmar_tbl = NULL;
 	up_write(&dmar_global_lock);
diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index 5619f264862d..4af2206e41bc 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -3682,7 +3682,7 @@ static inline void init_iommu_pm_ops(void) {}
 #endif	/* CONFIG_PM */
 
 
-int __init dmar_parse_one_rmrr(struct acpi_dmar_header *header)
+int __init dmar_parse_one_rmrr(struct acpi_dmar_header *header, void *arg)
 {
 	struct acpi_dmar_reserved_memory *rmrr;
 	struct dmar_rmrr_unit *rmrru;
@@ -3708,7 +3708,7 @@ int __init dmar_parse_one_rmrr(struct acpi_dmar_header *header)
 	return 0;
 }
 
-int __init dmar_parse_one_atsr(struct acpi_dmar_header *hdr)
+int __init dmar_parse_one_atsr(struct acpi_dmar_header *hdr, void *arg)
 {
 	struct acpi_dmar_atsr *atsr;
 	struct dmar_atsr_unit *atsru;
diff --git a/include/linux/dmar.h b/include/linux/dmar.h
index 1deece46a0ca..fac8ca34f9a8 100644
--- a/include/linux/dmar.h
+++ b/include/linux/dmar.h
@@ -115,22 +115,21 @@ extern int dmar_remove_dev_scope(struct dmar_pci_notify_info *info,
 extern int detect_intel_iommu(void);
 extern int enable_drhd_fault_handling(void);
 
+static inline int dmar_res_noop(struct acpi_dmar_header *hdr, void *arg)
+{
+	return 0;
+}
+
 #ifdef CONFIG_INTEL_IOMMU
 extern int iommu_detected, no_iommu;
 extern int intel_iommu_init(void);
-extern int dmar_parse_one_rmrr(struct acpi_dmar_header *header);
-extern int dmar_parse_one_atsr(struct acpi_dmar_header *header);
+extern int dmar_parse_one_rmrr(struct acpi_dmar_header *header, void *arg);
+extern int dmar_parse_one_atsr(struct acpi_dmar_header *header, void *arg);
 extern int dmar_iommu_notify_scope_dev(struct dmar_pci_notify_info *info);
 #else /* !CONFIG_INTEL_IOMMU: */
 static inline int intel_iommu_init(void) { return -ENODEV; }
-static inline int dmar_parse_one_rmrr(struct acpi_dmar_header *header)
-{
-	return 0;
-}
-static inline int dmar_parse_one_atsr(struct acpi_dmar_header *header)
-{
-	return 0;
-}
+#define	dmar_parse_one_rmrr		dmar_res_noop
+#define	dmar_parse_one_atsr		dmar_res_noop
 static inline int dmar_iommu_notify_scope_dev(struct dmar_pci_notify_info *info)
 {
 	return 0;
-- 
1.7.10.4


^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [Patch Part3 V6 2/8] iommu/vt-d: Dynamically allocate and free seq_id for DMAR units
@ 2014-09-19  5:18   ` Jiang Liu
  0 siblings, 0 replies; 35+ messages in thread
From: Jiang Liu @ 2014-09-19  5:18 UTC (permalink / raw)
  To: Joerg Roedel, David Woodhouse, Yinghai Lu, Bjorn Helgaas,
	Dan Williams, Vinod Koul, Rafael J . Wysocki
  Cc: Jiang Liu, Ashok Raj, Yijing Wang, Tony Luck, iommu, linux-pci,
	linux-hotplug, linux-kernel, dmaengine

Introduce functions to support dynamic IOMMU seq_id allocating and
releasing, which will be used to support DMAR hotplug.

Also rename IOMMU_UNITS_SUPPORTED as DMAR_UNITS_SUPPORTED.
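For illustration, the bitmap-based id scheme can be sketched as a stand-alone
user-space program. All names below are illustrative stand-ins for the patch's
dmar_alloc_seq_id()/dmar_free_seq_id(), with a single 64-bit word playing the
role of the dmar_seq_ids bitmap and find_first_zero_bit()/set_bit():

```c
#include <assert.h>

#define UNITS_SUPPORTED 64

static unsigned long long seq_ids;	/* one bit per allocated id */

static int alloc_seq_id(void)
{
	int id;

	/* Scan for the lowest clear bit, like find_first_zero_bit(). */
	for (id = 0; id < UNITS_SUPPORTED; id++) {
		if (!(seq_ids & (1ULL << id))) {
			seq_ids |= 1ULL << id;	/* like set_bit() */
			return id;
		}
	}
	return -1;	/* pool exhausted, like -ENOSPC in the patch */
}

static void free_seq_id(int id)
{
	if (id >= 0 && id < UNITS_SUPPORTED)
		seq_ids &= ~(1ULL << id);	/* like clear_bit() */
}
```

The point of replacing the monotonically increasing iommu_allocated counter is
visible here: a freed id becomes reusable immediately, so hot-removing and
re-adding a DMAR unit no longer leaks sequence ids.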

Signed-off-by: Jiang Liu <jiang.liu@linux.intel.com>
Reviewed-by: Yijing Wang <wangyijing@huawei.com>
---
 drivers/iommu/dmar.c        |   40 ++++++++++++++++++++++++++++++++++------
 drivers/iommu/intel-iommu.c |   13 +++----------
 include/linux/dmar.h        |    6 ++++++
 3 files changed, 43 insertions(+), 16 deletions(-)

diff --git a/drivers/iommu/dmar.c b/drivers/iommu/dmar.c
index a05cf3634efe..ac4f8ee2871f 100644
--- a/drivers/iommu/dmar.c
+++ b/drivers/iommu/dmar.c
@@ -70,6 +70,7 @@ LIST_HEAD(dmar_drhd_units);
 struct acpi_table_header * __initdata dmar_tbl;
 static acpi_size dmar_tbl_size;
 static int dmar_dev_scope_status = 1;
+static unsigned long dmar_seq_ids[BITS_TO_LONGS(DMAR_UNITS_SUPPORTED)];
 
 static int alloc_iommu(struct dmar_drhd_unit *drhd);
 static void free_iommu(struct intel_iommu *iommu);
@@ -927,11 +928,32 @@ out:
 	return err;
 }
 
+static int dmar_alloc_seq_id(struct intel_iommu *iommu)
+{
+	iommu->seq_id = find_first_zero_bit(dmar_seq_ids,
+					    DMAR_UNITS_SUPPORTED);
+	if (iommu->seq_id >= DMAR_UNITS_SUPPORTED) {
+		iommu->seq_id = -1;
+	} else {
+		set_bit(iommu->seq_id, dmar_seq_ids);
+		sprintf(iommu->name, "dmar%d", iommu->seq_id);
+	}
+
+	return iommu->seq_id;
+}
+
+static void dmar_free_seq_id(struct intel_iommu *iommu)
+{
+	if (iommu->seq_id >= 0) {
+		clear_bit(iommu->seq_id, dmar_seq_ids);
+		iommu->seq_id = -1;
+	}
+}
+
 static int alloc_iommu(struct dmar_drhd_unit *drhd)
 {
 	struct intel_iommu *iommu;
 	u32 ver, sts;
-	static int iommu_allocated = 0;
 	int agaw = 0;
 	int msagaw = 0;
 	int err;
@@ -945,13 +967,16 @@ static int alloc_iommu(struct dmar_drhd_unit *drhd)
 	if (!iommu)
 		return -ENOMEM;
 
-	iommu->seq_id = iommu_allocated++;
-	sprintf (iommu->name, "dmar%d", iommu->seq_id);
+	if (dmar_alloc_seq_id(iommu) < 0) {
+		pr_err("IOMMU: failed to allocate seq_id\n");
+		err = -ENOSPC;
+		goto error;
+	}
 
 	err = map_iommu(iommu, drhd->reg_base_addr);
 	if (err) {
 		pr_err("IOMMU: failed to map %s\n", iommu->name);
-		goto error;
+		goto error_free_seq_id;
 	}
 
 	err = -EINVAL;
@@ -1001,9 +1026,11 @@ static int alloc_iommu(struct dmar_drhd_unit *drhd)
 
 	return 0;
 
- err_unmap:
+err_unmap:
 	unmap_iommu(iommu);
- error:
+error_free_seq_id:
+	dmar_free_seq_id(iommu);
+error:
 	kfree(iommu);
 	return err;
 }
@@ -1027,6 +1054,7 @@ static void free_iommu(struct intel_iommu *iommu)
 	if (iommu->reg)
 		unmap_iommu(iommu);
 
+	dmar_free_seq_id(iommu);
 	kfree(iommu);
 }
 
diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index 4af2206e41bc..7daa74ed46d0 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -328,17 +328,10 @@ static int hw_pass_through = 1;
 /* si_domain contains mulitple devices */
 #define DOMAIN_FLAG_STATIC_IDENTITY	(1 << 1)
 
-/* define the limit of IOMMUs supported in each domain */
-#ifdef	CONFIG_X86
-# define	IOMMU_UNITS_SUPPORTED	MAX_IO_APICS
-#else
-# define	IOMMU_UNITS_SUPPORTED	64
-#endif
-
 struct dmar_domain {
 	int	id;			/* domain id */
 	int	nid;			/* node id */
-	DECLARE_BITMAP(iommu_bmp, IOMMU_UNITS_SUPPORTED);
+	DECLARE_BITMAP(iommu_bmp, DMAR_UNITS_SUPPORTED);
 					/* bitmap of iommus this domain uses*/
 
 	struct list_head devices; 	/* all devices' list */
@@ -2728,12 +2721,12 @@ static int __init init_dmars(void)
 		 * threaded kernel __init code path all other access are read
 		 * only
 		 */
-		if (g_num_of_iommus < IOMMU_UNITS_SUPPORTED) {
+		if (g_num_of_iommus < DMAR_UNITS_SUPPORTED) {
 			g_num_of_iommus++;
 			continue;
 		}
 		printk_once(KERN_ERR "intel-iommu: exceeded %d IOMMUs\n",
-			  IOMMU_UNITS_SUPPORTED);
+			  DMAR_UNITS_SUPPORTED);
 	}
 
 	g_iommus = kcalloc(g_num_of_iommus, sizeof(struct intel_iommu *),
diff --git a/include/linux/dmar.h b/include/linux/dmar.h
index fac8ca34f9a8..c8a576bc3a98 100644
--- a/include/linux/dmar.h
+++ b/include/linux/dmar.h
@@ -30,6 +30,12 @@
 
 struct acpi_dmar_header;
 
+#ifdef	CONFIG_X86
+# define	DMAR_UNITS_SUPPORTED	MAX_IO_APICS
+#else
+# define	DMAR_UNITS_SUPPORTED	64
+#endif
+
 /* DMAR Flags */
 #define DMAR_INTR_REMAP		0x1
 #define DMAR_X2APIC_OPT_OUT	0x2
-- 
1.7.10.4


^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [Patch Part3 V6 3/8] iommu/vt-d: Implement DMAR unit hotplug framework
@ 2014-09-19  5:18   ` Jiang Liu
  0 siblings, 0 replies; 35+ messages in thread
From: Jiang Liu @ 2014-09-19  5:18 UTC (permalink / raw)
  To: Joerg Roedel, David Woodhouse, Yinghai Lu, Bjorn Helgaas,
	Dan Williams, Vinod Koul, Rafael J . Wysocki
  Cc: Jiang Liu, Ashok Raj, Yijing Wang, Tony Luck, iommu, linux-pci,
	linux-hotplug, linux-kernel, dmaengine

On Intel platforms, an IO Hub (PCI/PCIe host bridge) may contain DMAR
units, so we need to support DMAR hotplug when supporting PCI host
bridge hotplug on Intel platforms.

According to Section 8.8 "Remapping Hardware Unit Hot Plug" in "Intel
Virtualization Technology for Directed IO Architecture Specification
Rev 2.2", ACPI BIOS should implement ACPI _DSM method under the ACPI
object for the PCI host bridge to support DMAR hotplug.

This patch introduces interfaces to parse ACPI _DSM method for
DMAR unit hotplug. It also implements state machines for DMAR unit
hot-addition and hot-removal.

The PCI host bridge hotplug driver should call dmar_device_add() before
scanning the connected PCI devices for hot-addition, and dmar_device_remove()
after destroying all PCI devices for hot-removal.
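
The hot-add path is essentially a multi-step transaction with reverse-order
rollback. A compressed, hypothetical sketch of the control flow implemented by
dmar_hotplug_insert() below — step() and undo() are stand-ins for the real
validate/parse/enable and release helpers, and fail_at injects an error so the
unwind path can be exercised:

```c
#include <assert.h>

static int rollbacks;			/* counts undo steps that ran */

static int step(int n, int fail_at)
{
	return n == fail_at ? -1 : 0;	/* -1 models -EINVAL/-ENODEV */
}

static void undo(void)
{
	rollbacks++;
}

/* Mirrors the goto-based unwind in dmar_hotplug_insert(). */
static int hotplug_insert(int fail_at)
{
	int ret;

	ret = step(1, fail_at);		/* validate DRHD structures */
	if (ret)
		goto out;
	ret = step(2, fail_at);		/* parse DRHD structures */
	if (ret)
		goto release_drhd;
	ret = step(3, fail_at);		/* parse ATSR structures */
	if (ret)
		goto release_atsr;
	ret = step(4, fail_at);		/* enable the new units */
	if (ret == 0)
		return 0;
	/* enable failed: fall through and unwind in reverse order */

release_atsr:
	undo();				/* release ATSR structures */
release_drhd:
	undo();				/* release DRHD structures */
out:
	return ret;
}
```

A failure during validation needs no rollback, while a failure after parsing
releases everything parsed so far in reverse order — the same shape as the
release_atsr/release_drhd labels in the real function.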

Signed-off-by: Jiang Liu <jiang.liu@linux.intel.com>
Reviewed-by: Yijing Wang <wangyijing@huawei.com>
---
 drivers/iommu/dmar.c                |  268 +++++++++++++++++++++++++++++++++--
 drivers/iommu/intel-iommu.c         |   78 +++++++++-
 drivers/iommu/intel_irq_remapping.c |    5 +
 include/linux/dmar.h                |   33 +++++
 4 files changed, 370 insertions(+), 14 deletions(-)

diff --git a/drivers/iommu/dmar.c b/drivers/iommu/dmar.c
index ac4f8ee2871f..ab504cf0f34a 100644
--- a/drivers/iommu/dmar.c
+++ b/drivers/iommu/dmar.c
@@ -75,7 +75,7 @@ static unsigned long dmar_seq_ids[BITS_TO_LONGS(DMAR_UNITS_SUPPORTED)];
 static int alloc_iommu(struct dmar_drhd_unit *drhd);
 static void free_iommu(struct intel_iommu *iommu);
 
-static void __init dmar_register_drhd_unit(struct dmar_drhd_unit *drhd)
+static void dmar_register_drhd_unit(struct dmar_drhd_unit *drhd)
 {
 	/*
 	 * add INCLUDE_ALL at the tail, so scan the list will find it at
@@ -336,24 +336,45 @@ static struct notifier_block dmar_pci_bus_nb = {
 	.priority = INT_MIN,
 };
 
+static struct dmar_drhd_unit *
+dmar_find_dmaru(struct acpi_dmar_hardware_unit *drhd)
+{
+	struct dmar_drhd_unit *dmaru;
+
+	list_for_each_entry_rcu(dmaru, &dmar_drhd_units, list)
+		if (dmaru->segment == drhd->segment &&
+		    dmaru->reg_base_addr == drhd->address)
+			return dmaru;
+
+	return NULL;
+}
+
 /**
  * dmar_parse_one_drhd - parses exactly one DMA remapping hardware definition
  * structure which uniquely represent one DMA remapping hardware unit
  * present in the platform
  */
-static int __init
-dmar_parse_one_drhd(struct acpi_dmar_header *header, void *arg)
+static int dmar_parse_one_drhd(struct acpi_dmar_header *header, void *arg)
 {
 	struct acpi_dmar_hardware_unit *drhd;
 	struct dmar_drhd_unit *dmaru;
 	int ret = 0;
 
 	drhd = (struct acpi_dmar_hardware_unit *)header;
-	dmaru = kzalloc(sizeof(*dmaru), GFP_KERNEL);
+	dmaru = dmar_find_dmaru(drhd);
+	if (dmaru)
+		goto out;
+
+	dmaru = kzalloc(sizeof(*dmaru) + header->length, GFP_KERNEL);
 	if (!dmaru)
 		return -ENOMEM;
 
-	dmaru->hdr = header;
+	/*
+	 * If header is allocated from slab by ACPI _DSM method, we need to
+	 * copy the content because the memory buffer will be freed on return.
+	 */
+	dmaru->hdr = (void *)(dmaru + 1);
+	memcpy(dmaru->hdr, header, header->length);
 	dmaru->reg_base_addr = drhd->address;
 	dmaru->segment = drhd->segment;
 	dmaru->include_all = drhd->flags & 0x1; /* BIT0: INCLUDE_ALL */
@@ -374,6 +395,7 @@ dmar_parse_one_drhd(struct acpi_dmar_header *header, void *arg)
 	}
 	dmar_register_drhd_unit(dmaru);
 
+out:
 	if (arg)
 		(*(int *)arg)++;
 
@@ -411,8 +433,7 @@ static int __init dmar_parse_one_andd(struct acpi_dmar_header *header,
 }
 
 #ifdef CONFIG_ACPI_NUMA
-static int __init
-dmar_parse_one_rhsa(struct acpi_dmar_header *header, void *arg)
+static int dmar_parse_one_rhsa(struct acpi_dmar_header *header, void *arg)
 {
 	struct acpi_dmar_rhsa *rhsa;
 	struct dmar_drhd_unit *drhd;
@@ -804,14 +825,22 @@ dmar_validate_one_drhd(struct acpi_dmar_header *entry, void *arg)
 		return -EINVAL;
 	}
 
-	addr = early_ioremap(drhd->address, VTD_PAGE_SIZE);
+	if (arg)
+		addr = ioremap(drhd->address, VTD_PAGE_SIZE);
+	else
+		addr = early_ioremap(drhd->address, VTD_PAGE_SIZE);
 	if (!addr) {
 		pr_warn("IOMMU: can't validate: %llx\n", drhd->address);
 		return -EINVAL;
 	}
+
 	cap = dmar_readq(addr + DMAR_CAP_REG);
 	ecap = dmar_readq(addr + DMAR_ECAP_REG);
-	early_iounmap(addr, VTD_PAGE_SIZE);
+
+	if (arg)
+		iounmap(addr);
+	else
+		early_iounmap(addr, VTD_PAGE_SIZE);
 
 	if (cap == (uint64_t)-1 && ecap == (uint64_t)-1) {
 		warn_invalid_dmar(drhd->address, " returns all ones");
@@ -1685,12 +1714,17 @@ int __init dmar_ir_support(void)
 	return dmar->flags & 0x1;
 }
 
+/* Check whether DMAR units are in use */
+static inline bool dmar_in_use(void)
+{
+	return irq_remapping_enabled || intel_iommu_enabled;
+}
+
 static int __init dmar_free_unused_resources(void)
 {
 	struct dmar_drhd_unit *dmaru, *dmaru_n;
 
-	/* DMAR units are in use */
-	if (irq_remapping_enabled || intel_iommu_enabled)
+	if (dmar_in_use())
 		return 0;
 
 	if (dmar_dev_scope_status != 1 && !list_empty(&dmar_drhd_units))
@@ -1708,3 +1742,215 @@ static int __init dmar_free_unused_resources(void)
 
 late_initcall(dmar_free_unused_resources);
 IOMMU_INIT_POST(detect_intel_iommu);
+
+/*
+ * DMAR Hotplug Support
+ * For more details, please refer to Intel(R) Virtualization Technology
+ * for Directed-IO Architecture Specification, Rev 2.2, Section 8.8
+ * "Remapping Hardware Unit Hot Plug".
+ */
+static u8 dmar_hp_uuid[] = {
+	/* 0000 */    0xA6, 0xA3, 0xC1, 0xD8, 0x9B, 0xBE, 0x9B, 0x4C,
+	/* 0008 */    0x91, 0xBF, 0xC3, 0xCB, 0x81, 0xFC, 0x5D, 0xAF
+};
+
+/*
+ * Currently there's only one revision and BIOS will not check the revision id,
+ * so use 0 for safety.
+ */
+#define	DMAR_DSM_REV_ID			0
+#define	DMAR_DSM_FUNC_DRHD		1
+#define	DMAR_DSM_FUNC_ATSR		2
+#define	DMAR_DSM_FUNC_RHSA		3
+
+static inline bool dmar_detect_dsm(acpi_handle handle, int func)
+{
+	return acpi_check_dsm(handle, dmar_hp_uuid, DMAR_DSM_REV_ID, 1 << func);
+}
+
+static int dmar_walk_dsm_resource(acpi_handle handle, int func,
+				  dmar_res_handler_t handler, void *arg)
+{
+	int ret = -ENODEV;
+	union acpi_object *obj;
+	struct acpi_dmar_header *start;
+	struct dmar_res_callback callback;
+	static int res_type[] = {
+		[DMAR_DSM_FUNC_DRHD] = ACPI_DMAR_TYPE_HARDWARE_UNIT,
+		[DMAR_DSM_FUNC_ATSR] = ACPI_DMAR_TYPE_ROOT_ATS,
+		[DMAR_DSM_FUNC_RHSA] = ACPI_DMAR_TYPE_HARDWARE_AFFINITY,
+	};
+
+	if (!dmar_detect_dsm(handle, func))
+		return 0;
+
+	obj = acpi_evaluate_dsm_typed(handle, dmar_hp_uuid, DMAR_DSM_REV_ID,
+				      func, NULL, ACPI_TYPE_BUFFER);
+	if (!obj)
+		return -ENODEV;
+
+	memset(&callback, 0, sizeof(callback));
+	callback.cb[res_type[func]] = handler;
+	callback.arg[res_type[func]] = arg;
+	start = (struct acpi_dmar_header *)obj->buffer.pointer;
+	ret = dmar_walk_remapping_entries(start, obj->buffer.length, &callback);
+
+	ACPI_FREE(obj);
+
+	return ret;
+}
+
+static int dmar_hp_add_drhd(struct acpi_dmar_header *header, void *arg)
+{
+	int ret;
+	struct dmar_drhd_unit *dmaru;
+
+	dmaru = dmar_find_dmaru((struct acpi_dmar_hardware_unit *)header);
+	if (!dmaru)
+		return -ENODEV;
+
+	ret = dmar_ir_hotplug(dmaru, true);
+	if (ret == 0)
+		ret = dmar_iommu_hotplug(dmaru, true);
+
+	return ret;
+}
+
+static int dmar_hp_remove_drhd(struct acpi_dmar_header *header, void *arg)
+{
+	int i, ret;
+	struct device *dev;
+	struct dmar_drhd_unit *dmaru;
+
+	dmaru = dmar_find_dmaru((struct acpi_dmar_hardware_unit *)header);
+	if (!dmaru)
+		return 0;
+
+	/*
+	 * All PCI devices managed by this unit should have been destroyed.
+	 */
+	if (!dmaru->include_all && dmaru->devices && dmaru->devices_cnt)
+		for_each_active_dev_scope(dmaru->devices,
+					  dmaru->devices_cnt, i, dev)
+			return -EBUSY;
+
+	ret = dmar_ir_hotplug(dmaru, false);
+	if (ret == 0)
+		ret = dmar_iommu_hotplug(dmaru, false);
+
+	return ret;
+}
+
+static int dmar_hp_release_drhd(struct acpi_dmar_header *header, void *arg)
+{
+	struct dmar_drhd_unit *dmaru;
+
+	dmaru = dmar_find_dmaru((struct acpi_dmar_hardware_unit *)header);
+	if (dmaru) {
+		list_del_rcu(&dmaru->list);
+		synchronize_rcu();
+		dmar_free_drhd(dmaru);
+	}
+
+	return 0;
+}
+
+static int dmar_hotplug_insert(acpi_handle handle)
+{
+	int ret;
+	int drhd_count = 0;
+
+	ret = dmar_walk_dsm_resource(handle, DMAR_DSM_FUNC_DRHD,
+				     &dmar_validate_one_drhd, (void *)1);
+	if (ret)
+		goto out;
+
+	ret = dmar_walk_dsm_resource(handle, DMAR_DSM_FUNC_DRHD,
+				     &dmar_parse_one_drhd, (void *)&drhd_count);
+	if (ret == 0 && drhd_count == 0) {
+		pr_warn(FW_BUG "No DRHD structures in buffer returned by _DSM method\n");
+		goto out;
+	} else if (ret) {
+		goto release_drhd;
+	}
+
+	ret = dmar_walk_dsm_resource(handle, DMAR_DSM_FUNC_RHSA,
+				     &dmar_parse_one_rhsa, NULL);
+	if (ret)
+		goto release_drhd;
+
+	ret = dmar_walk_dsm_resource(handle, DMAR_DSM_FUNC_ATSR,
+				     &dmar_parse_one_atsr, NULL);
+	if (ret)
+		goto release_atsr;
+
+	ret = dmar_walk_dsm_resource(handle, DMAR_DSM_FUNC_DRHD,
+				     &dmar_hp_add_drhd, NULL);
+	if (!ret)
+		return 0;
+
+	dmar_walk_dsm_resource(handle, DMAR_DSM_FUNC_DRHD,
+			       &dmar_hp_remove_drhd, NULL);
+release_atsr:
+	dmar_walk_dsm_resource(handle, DMAR_DSM_FUNC_ATSR,
+			       &dmar_release_one_atsr, NULL);
+release_drhd:
+	dmar_walk_dsm_resource(handle, DMAR_DSM_FUNC_DRHD,
+			       &dmar_hp_release_drhd, NULL);
+out:
+	return ret;
+}
+
+static int dmar_hotplug_remove(acpi_handle handle)
+{
+	int ret;
+
+	ret = dmar_walk_dsm_resource(handle, DMAR_DSM_FUNC_ATSR,
+				     &dmar_check_one_atsr, NULL);
+	if (ret)
+		return ret;
+
+	ret = dmar_walk_dsm_resource(handle, DMAR_DSM_FUNC_DRHD,
+				     &dmar_hp_remove_drhd, NULL);
+	if (ret == 0) {
+		WARN_ON(dmar_walk_dsm_resource(handle, DMAR_DSM_FUNC_ATSR,
+					       &dmar_release_one_atsr, NULL));
+		WARN_ON(dmar_walk_dsm_resource(handle, DMAR_DSM_FUNC_DRHD,
+					       &dmar_hp_release_drhd, NULL));
+	} else {
+		dmar_walk_dsm_resource(handle, DMAR_DSM_FUNC_DRHD,
+				       &dmar_hp_add_drhd, NULL);
+	}
+
+	return ret;
+}
+
+static int dmar_device_hotplug(acpi_handle handle, bool insert)
+{
+	int ret;
+
+	if (!dmar_in_use())
+		return 0;
+
+	if (!dmar_detect_dsm(handle, DMAR_DSM_FUNC_DRHD))
+		return 0;
+
+	down_write(&dmar_global_lock);
+	if (insert)
+		ret = dmar_hotplug_insert(handle);
+	else
+		ret = dmar_hotplug_remove(handle);
+	up_write(&dmar_global_lock);
+
+	return ret;
+}
+
+int dmar_device_add(acpi_handle handle)
+{
+	return dmar_device_hotplug(handle, true);
+}
+
+int dmar_device_remove(acpi_handle handle)
+{
+	return dmar_device_hotplug(handle, false);
+}
diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index 7daa74ed46d0..70d9d47eaeda 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -3701,17 +3701,48 @@ int __init dmar_parse_one_rmrr(struct acpi_dmar_header *header, void *arg)
 	return 0;
 }
 
-int __init dmar_parse_one_atsr(struct acpi_dmar_header *hdr, void *arg)
+static struct dmar_atsr_unit *dmar_find_atsr(struct acpi_dmar_atsr *atsr)
+{
+	struct dmar_atsr_unit *atsru;
+	struct acpi_dmar_atsr *tmp;
+
+	list_for_each_entry_rcu(atsru, &dmar_atsr_units, list) {
+		tmp = (struct acpi_dmar_atsr *)atsru->hdr;
+		if (atsr->segment != tmp->segment)
+			continue;
+		if (atsr->header.length != tmp->header.length)
+			continue;
+		if (memcmp(atsr, tmp, atsr->header.length) == 0)
+			return atsru;
+	}
+
+	return NULL;
+}
+
+int dmar_parse_one_atsr(struct acpi_dmar_header *hdr, void *arg)
 {
 	struct acpi_dmar_atsr *atsr;
 	struct dmar_atsr_unit *atsru;
 
+	if (system_state != SYSTEM_BOOTING && !intel_iommu_enabled)
+		return 0;
+
 	atsr = container_of(hdr, struct acpi_dmar_atsr, header);
-	atsru = kzalloc(sizeof(*atsru), GFP_KERNEL);
+	atsru = dmar_find_atsr(atsr);
+	if (atsru)
+		return 0;
+
+	atsru = kzalloc(sizeof(*atsru) + hdr->length, GFP_KERNEL);
 	if (!atsru)
 		return -ENOMEM;
 
-	atsru->hdr = hdr;
+	/*
+	 * If memory is allocated from slab by ACPI _DSM method, we need to
+	 * copy the memory content because the memory buffer will be freed
+	 * on return.
+	 */
+	atsru->hdr = (void *)(atsru + 1);
+	memcpy(atsru->hdr, hdr, hdr->length);
 	atsru->include_all = atsr->flags & 0x1;
 	if (!atsru->include_all) {
 		atsru->devices = dmar_alloc_dev_scope((void *)(atsr + 1),
@@ -3734,6 +3765,47 @@ static void intel_iommu_free_atsr(struct dmar_atsr_unit *atsru)
 	kfree(atsru);
 }
 
+int dmar_release_one_atsr(struct acpi_dmar_header *hdr, void *arg)
+{
+	struct acpi_dmar_atsr *atsr;
+	struct dmar_atsr_unit *atsru;
+
+	atsr = container_of(hdr, struct acpi_dmar_atsr, header);
+	atsru = dmar_find_atsr(atsr);
+	if (atsru) {
+		list_del_rcu(&atsru->list);
+		synchronize_rcu();
+		intel_iommu_free_atsr(atsru);
+	}
+
+	return 0;
+}
+
+int dmar_check_one_atsr(struct acpi_dmar_header *hdr, void *arg)
+{
+	int i;
+	struct device *dev;
+	struct acpi_dmar_atsr *atsr;
+	struct dmar_atsr_unit *atsru;
+
+	atsr = container_of(hdr, struct acpi_dmar_atsr, header);
+	atsru = dmar_find_atsr(atsr);
+	if (!atsru)
+		return 0;
+
+	if (!atsru->include_all && atsru->devices && atsru->devices_cnt)
+		for_each_active_dev_scope(atsru->devices, atsru->devices_cnt,
+					  i, dev)
+			return -EBUSY;
+
+	return 0;
+}
+
+int dmar_iommu_hotplug(struct dmar_drhd_unit *dmaru, bool insert)
+{
+	return intel_iommu_enabled ? -ENOSYS : 0;
+}
+
 static void intel_iommu_free_dmars(void)
 {
 	struct dmar_rmrr_unit *rmrru, *rmrr_n;
diff --git a/drivers/iommu/intel_irq_remapping.c b/drivers/iommu/intel_irq_remapping.c
index 0df41f6264f5..9b140ed854ec 100644
--- a/drivers/iommu/intel_irq_remapping.c
+++ b/drivers/iommu/intel_irq_remapping.c
@@ -1172,3 +1172,8 @@ struct irq_remap_ops intel_irq_remap_ops = {
 	.msi_setup_irq		= intel_msi_setup_irq,
 	.setup_hpet_msi		= intel_setup_hpet_msi,
 };
+
+int dmar_ir_hotplug(struct dmar_drhd_unit *dmaru, bool insert)
+{
+	return irq_remapping_enabled ? -ENOSYS : 0;
+}
diff --git a/include/linux/dmar.h b/include/linux/dmar.h
index c8a576bc3a98..594d4ac79e75 100644
--- a/include/linux/dmar.h
+++ b/include/linux/dmar.h
@@ -120,6 +120,8 @@ extern int dmar_remove_dev_scope(struct dmar_pci_notify_info *info,
 /* Intel IOMMU detection */
 extern int detect_intel_iommu(void);
 extern int enable_drhd_fault_handling(void);
+extern int dmar_device_add(acpi_handle handle);
+extern int dmar_device_remove(acpi_handle handle);
 
 static inline int dmar_res_noop(struct acpi_dmar_header *hdr, void *arg)
 {
@@ -131,17 +133,48 @@ extern int iommu_detected, no_iommu;
 extern int intel_iommu_init(void);
 extern int dmar_parse_one_rmrr(struct acpi_dmar_header *header, void *arg);
 extern int dmar_parse_one_atsr(struct acpi_dmar_header *header, void *arg);
+extern int dmar_check_one_atsr(struct acpi_dmar_header *hdr, void *arg);
+extern int dmar_release_one_atsr(struct acpi_dmar_header *hdr, void *arg);
+extern int dmar_iommu_hotplug(struct dmar_drhd_unit *dmaru, bool insert);
 extern int dmar_iommu_notify_scope_dev(struct dmar_pci_notify_info *info);
 #else /* !CONFIG_INTEL_IOMMU: */
 static inline int intel_iommu_init(void) { return -ENODEV; }
+
 #define	dmar_parse_one_rmrr		dmar_res_noop
 #define	dmar_parse_one_atsr		dmar_res_noop
+#define	dmar_check_one_atsr		dmar_res_noop
+#define	dmar_release_one_atsr		dmar_res_noop
+
 static inline int dmar_iommu_notify_scope_dev(struct dmar_pci_notify_info *info)
 {
 	return 0;
 }
+
+static inline int dmar_iommu_hotplug(struct dmar_drhd_unit *dmaru, bool insert)
+{
+	return 0;
+}
 #endif /* CONFIG_INTEL_IOMMU */
 
+#ifdef CONFIG_IRQ_REMAP
+extern int dmar_ir_hotplug(struct dmar_drhd_unit *dmaru, bool insert);
+#else  /* CONFIG_IRQ_REMAP */
+static inline int dmar_ir_hotplug(struct dmar_drhd_unit *dmaru, bool insert)
+{ return 0; }
+#endif /* CONFIG_IRQ_REMAP */
+
+#else /* CONFIG_DMAR_TABLE */
+
+static inline int dmar_device_add(void *handle)
+{
+	return 0;
+}
+
+static inline int dmar_device_remove(void *handle)
+{
+	return 0;
+}
+
 #endif /* CONFIG_DMAR_TABLE */
 
 struct irte {
-- 
1.7.10.4


^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [Patch Part3 V6 3/8] iommu/vt-d: Implement DMAR unit hotplug framework
@ 2014-09-19  5:18   ` Jiang Liu
  0 siblings, 0 replies; 35+ messages in thread
From: Jiang Liu @ 2014-09-19  5:18 UTC (permalink / raw)
  To: Joerg Roedel, David Woodhouse, Yinghai Lu, Bjorn Helgaas,
	Dan Williams, Vinod Koul, Rafael J . Wysocki
  Cc: Tony Luck, linux-pci, linux-hotplug, linux-kernel, dmaengine,
	iommu, Jiang Liu

On Intel platforms, an IO Hub (PCI/PCIe host bridge) may contain DMAR
units, so we need to support DMAR hotplug when supporting PCI host
bridge hotplug on Intel platforms.

According to Section 8.8 "Remapping Hardware Unit Hot Plug" in the "Intel
Virtualization Technology for Directed IO Architecture Specification
Rev 2.2", the ACPI BIOS should implement an ACPI _DSM method under the
ACPI object for the PCI host bridge to support DMAR hotplug.

This patch introduces interfaces to parse ACPI _DSM method for
DMAR unit hotplug. It also implements state machines for DMAR unit
hot-addition and hot-removal.

The PCI host bridge hotplug driver should call dmar_device_add() before
scanning newly connected PCI devices for hot-addition, and
dmar_device_remove() after destroying all affected PCI devices for
hot-removal.
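The expected call ordering from the host bridge hotplug path can be sketched
as a userspace mock (the dmar_* and pci_* functions below are illustrative
stubs standing in for the kernel APIs, not the real implementations):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Event log so the ordering contract can be checked. */
static char event_log[256];

static void log_event(const char *ev)
{
	strcat(event_log, ev);
	strcat(event_log, ";");
}

/* Stubs standing in for the APIs this patch set exports. */
static int dmar_device_add(void *handle)    { (void)handle; log_event("dmar_add");    return 0; }
static int dmar_device_remove(void *handle) { (void)handle; log_event("dmar_remove"); return 0; }
static void pci_scan_devices(void)          { log_event("pci_scan"); }
static void pci_destroy_devices(void)       { log_event("pci_destroy"); }

/* Hot-add: bring up the DMAR unit before scanning devices behind it. */
static int host_bridge_hot_add(void *handle)
{
	int ret = dmar_device_add(handle);

	if (ret)
		return ret;
	pci_scan_devices();
	return 0;
}

/* Hot-remove: tear down all devices first, then release the DMAR unit. */
static int host_bridge_hot_remove(void *handle)
{
	pci_destroy_devices();
	return dmar_device_remove(handle);
}
```

The ordering matters: a DMAR unit must be operational before any device it
translates for is probed, and must outlive every such device on removal.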

Signed-off-by: Jiang Liu <jiang.liu@linux.intel.com>
Reviewed-by: Yijing Wang <wangyijing@huawei.com>
---
 drivers/iommu/dmar.c                |  268 +++++++++++++++++++++++++++++++++--
 drivers/iommu/intel-iommu.c         |   78 +++++++++-
 drivers/iommu/intel_irq_remapping.c |    5 +
 include/linux/dmar.h                |   33 +++++
 4 files changed, 370 insertions(+), 14 deletions(-)

diff --git a/drivers/iommu/dmar.c b/drivers/iommu/dmar.c
index ac4f8ee2871f..ab504cf0f34a 100644
--- a/drivers/iommu/dmar.c
+++ b/drivers/iommu/dmar.c
@@ -75,7 +75,7 @@ static unsigned long dmar_seq_ids[BITS_TO_LONGS(DMAR_UNITS_SUPPORTED)];
 static int alloc_iommu(struct dmar_drhd_unit *drhd);
 static void free_iommu(struct intel_iommu *iommu);
 
-static void __init dmar_register_drhd_unit(struct dmar_drhd_unit *drhd)
+static void dmar_register_drhd_unit(struct dmar_drhd_unit *drhd)
 {
 	/*
 	 * add INCLUDE_ALL at the tail, so scan the list will find it at
@@ -336,24 +336,45 @@ static struct notifier_block dmar_pci_bus_nb = {
 	.priority = INT_MIN,
 };
 
+static struct dmar_drhd_unit *
+dmar_find_dmaru(struct acpi_dmar_hardware_unit *drhd)
+{
+	struct dmar_drhd_unit *dmaru;
+
+	list_for_each_entry_rcu(dmaru, &dmar_drhd_units, list)
+		if (dmaru->segment == drhd->segment &&
+		    dmaru->reg_base_addr == drhd->address)
+			return dmaru;
+
+	return NULL;
+}
+
 /**
  * dmar_parse_one_drhd - parses exactly one DMA remapping hardware definition
  * structure which uniquely represent one DMA remapping hardware unit
  * present in the platform
  */
-static int __init
-dmar_parse_one_drhd(struct acpi_dmar_header *header, void *arg)
+static int dmar_parse_one_drhd(struct acpi_dmar_header *header, void *arg)
 {
 	struct acpi_dmar_hardware_unit *drhd;
 	struct dmar_drhd_unit *dmaru;
 	int ret = 0;
 
 	drhd = (struct acpi_dmar_hardware_unit *)header;
-	dmaru = kzalloc(sizeof(*dmaru), GFP_KERNEL);
+	dmaru = dmar_find_dmaru(drhd);
+	if (dmaru)
+		goto out;
+
+	dmaru = kzalloc(sizeof(*dmaru) + header->length, GFP_KERNEL);
 	if (!dmaru)
 		return -ENOMEM;
 
-	dmaru->hdr = header;
+	/*
+	 * If header is allocated from slab by ACPI _DSM method, we need to
+	 * copy the content because the memory buffer will be freed on return.
+	 */
+	dmaru->hdr = (void *)(dmaru + 1);
+	memcpy(dmaru->hdr, header, header->length);
 	dmaru->reg_base_addr = drhd->address;
 	dmaru->segment = drhd->segment;
 	dmaru->include_all = drhd->flags & 0x1; /* BIT0: INCLUDE_ALL */
@@ -374,6 +395,7 @@ dmar_parse_one_drhd(struct acpi_dmar_header *header, void *arg)
 	}
 	dmar_register_drhd_unit(dmaru);
 
+out:
 	if (arg)
 		(*(int *)arg)++;
 
@@ -411,8 +433,7 @@ static int __init dmar_parse_one_andd(struct acpi_dmar_header *header,
 }
 
 #ifdef CONFIG_ACPI_NUMA
-static int __init
-dmar_parse_one_rhsa(struct acpi_dmar_header *header, void *arg)
+static int dmar_parse_one_rhsa(struct acpi_dmar_header *header, void *arg)
 {
 	struct acpi_dmar_rhsa *rhsa;
 	struct dmar_drhd_unit *drhd;
@@ -804,14 +825,22 @@ dmar_validate_one_drhd(struct acpi_dmar_header *entry, void *arg)
 		return -EINVAL;
 	}
 
-	addr = early_ioremap(drhd->address, VTD_PAGE_SIZE);
+	if (arg)
+		addr = ioremap(drhd->address, VTD_PAGE_SIZE);
+	else
+		addr = early_ioremap(drhd->address, VTD_PAGE_SIZE);
 	if (!addr) {
 		pr_warn("IOMMU: can't validate: %llx\n", drhd->address);
 		return -EINVAL;
 	}
+
 	cap = dmar_readq(addr + DMAR_CAP_REG);
 	ecap = dmar_readq(addr + DMAR_ECAP_REG);
-	early_iounmap(addr, VTD_PAGE_SIZE);
+
+	if (arg)
+		iounmap(addr);
+	else
+		early_iounmap(addr, VTD_PAGE_SIZE);
 
 	if (cap == (uint64_t)-1 && ecap == (uint64_t)-1) {
 		warn_invalid_dmar(drhd->address, " returns all ones");
@@ -1685,12 +1714,17 @@ int __init dmar_ir_support(void)
 	return dmar->flags & 0x1;
 }
 
+/* Check whether DMAR units are in use */
+static inline bool dmar_in_use(void)
+{
+	return irq_remapping_enabled || intel_iommu_enabled;
+}
+
 static int __init dmar_free_unused_resources(void)
 {
 	struct dmar_drhd_unit *dmaru, *dmaru_n;
 
-	/* DMAR units are in use */
-	if (irq_remapping_enabled || intel_iommu_enabled)
+	if (dmar_in_use())
 		return 0;
 
 	if (dmar_dev_scope_status != 1 && !list_empty(&dmar_drhd_units))
@@ -1708,3 +1742,215 @@ static int __init dmar_free_unused_resources(void)
 
 late_initcall(dmar_free_unused_resources);
 IOMMU_INIT_POST(detect_intel_iommu);
+
+/*
+ * DMAR Hotplug Support
+ * For more details, please refer to Intel(R) Virtualization Technology
+ * for Directed-IO Architecture Specification, Rev 2.2, Section 8.8
+ * "Remapping Hardware Unit Hot Plug".
+ */
+static u8 dmar_hp_uuid[] = {
+	/* 0000 */    0xA6, 0xA3, 0xC1, 0xD8, 0x9B, 0xBE, 0x9B, 0x4C,
+	/* 0008 */    0x91, 0xBF, 0xC3, 0xCB, 0x81, 0xFC, 0x5D, 0xAF
+};
+
+/*
+ * Currently there's only one revision and BIOS will not check the revision id,
+ * so use 0 for safety.
+ */
+#define	DMAR_DSM_REV_ID			0
+#define	DMAR_DSM_FUNC_DRHD		1
+#define	DMAR_DSM_FUNC_ATSR		2
+#define	DMAR_DSM_FUNC_RHSA		3
+
+static inline bool dmar_detect_dsm(acpi_handle handle, int func)
+{
+	return acpi_check_dsm(handle, dmar_hp_uuid, DMAR_DSM_REV_ID, 1 << func);
+}
+
+static int dmar_walk_dsm_resource(acpi_handle handle, int func,
+				  dmar_res_handler_t handler, void *arg)
+{
+	int ret = -ENODEV;
+	union acpi_object *obj;
+	struct acpi_dmar_header *start;
+	struct dmar_res_callback callback;
+	static int res_type[] = {
+		[DMAR_DSM_FUNC_DRHD] = ACPI_DMAR_TYPE_HARDWARE_UNIT,
+		[DMAR_DSM_FUNC_ATSR] = ACPI_DMAR_TYPE_ROOT_ATS,
+		[DMAR_DSM_FUNC_RHSA] = ACPI_DMAR_TYPE_HARDWARE_AFFINITY,
+	};
+
+	if (!dmar_detect_dsm(handle, func))
+		return 0;
+
+	obj = acpi_evaluate_dsm_typed(handle, dmar_hp_uuid, DMAR_DSM_REV_ID,
+				      func, NULL, ACPI_TYPE_BUFFER);
+	if (!obj)
+		return -ENODEV;
+
+	memset(&callback, 0, sizeof(callback));
+	callback.cb[res_type[func]] = handler;
+	callback.arg[res_type[func]] = arg;
+	start = (struct acpi_dmar_header *)obj->buffer.pointer;
+	ret = dmar_walk_remapping_entries(start, obj->buffer.length, &callback);
+
+	ACPI_FREE(obj);
+
+	return ret;
+}
+
+static int dmar_hp_add_drhd(struct acpi_dmar_header *header, void *arg)
+{
+	int ret;
+	struct dmar_drhd_unit *dmaru;
+
+	dmaru = dmar_find_dmaru((struct acpi_dmar_hardware_unit *)header);
+	if (!dmaru)
+		return -ENODEV;
+
+	ret = dmar_ir_hotplug(dmaru, true);
+	if (ret == 0)
+		ret = dmar_iommu_hotplug(dmaru, true);
+
+	return ret;
+}
+
+static int dmar_hp_remove_drhd(struct acpi_dmar_header *header, void *arg)
+{
+	int i, ret;
+	struct device *dev;
+	struct dmar_drhd_unit *dmaru;
+
+	dmaru = dmar_find_dmaru((struct acpi_dmar_hardware_unit *)header);
+	if (!dmaru)
+		return 0;
+
+	/*
+	 * All PCI devices managed by this unit should have been destroyed.
+	 */
+	if (!dmaru->include_all && dmaru->devices && dmaru->devices_cnt)
+		for_each_active_dev_scope(dmaru->devices,
+					  dmaru->devices_cnt, i, dev)
+			return -EBUSY;
+
+	ret = dmar_ir_hotplug(dmaru, false);
+	if (ret == 0)
+		ret = dmar_iommu_hotplug(dmaru, false);
+
+	return ret;
+}
+
+static int dmar_hp_release_drhd(struct acpi_dmar_header *header, void *arg)
+{
+	struct dmar_drhd_unit *dmaru;
+
+	dmaru = dmar_find_dmaru((struct acpi_dmar_hardware_unit *)header);
+	if (dmaru) {
+		list_del_rcu(&dmaru->list);
+		synchronize_rcu();
+		dmar_free_drhd(dmaru);
+	}
+
+	return 0;
+}
+
+static int dmar_hotplug_insert(acpi_handle handle)
+{
+	int ret;
+	int drhd_count = 0;
+
+	ret = dmar_walk_dsm_resource(handle, DMAR_DSM_FUNC_DRHD,
+				     &dmar_validate_one_drhd, (void *)1);
+	if (ret)
+		goto out;
+
+	ret = dmar_walk_dsm_resource(handle, DMAR_DSM_FUNC_DRHD,
+				     &dmar_parse_one_drhd, (void *)&drhd_count);
+	if (ret == 0 && drhd_count == 0) {
+		pr_warn(FW_BUG "No DRHD structures in buffer returned by _DSM method\n");
+		goto out;
+	} else if (ret) {
+		goto release_drhd;
+	}
+
+	ret = dmar_walk_dsm_resource(handle, DMAR_DSM_FUNC_RHSA,
+				     &dmar_parse_one_rhsa, NULL);
+	if (ret)
+		goto release_drhd;
+
+	ret = dmar_walk_dsm_resource(handle, DMAR_DSM_FUNC_ATSR,
+				     &dmar_parse_one_atsr, NULL);
+	if (ret)
+		goto release_atsr;
+
+	ret = dmar_walk_dsm_resource(handle, DMAR_DSM_FUNC_DRHD,
+				     &dmar_hp_add_drhd, NULL);
+	if (!ret)
+		return 0;
+
+	dmar_walk_dsm_resource(handle, DMAR_DSM_FUNC_DRHD,
+			       &dmar_hp_remove_drhd, NULL);
+release_atsr:
+	dmar_walk_dsm_resource(handle, DMAR_DSM_FUNC_ATSR,
+			       &dmar_release_one_atsr, NULL);
+release_drhd:
+	dmar_walk_dsm_resource(handle, DMAR_DSM_FUNC_DRHD,
+			       &dmar_hp_release_drhd, NULL);
+out:
+	return ret;
+}
+
+static int dmar_hotplug_remove(acpi_handle handle)
+{
+	int ret;
+
+	ret = dmar_walk_dsm_resource(handle, DMAR_DSM_FUNC_ATSR,
+				     &dmar_check_one_atsr, NULL);
+	if (ret)
+		return ret;
+
+	ret = dmar_walk_dsm_resource(handle, DMAR_DSM_FUNC_DRHD,
+				     &dmar_hp_remove_drhd, NULL);
+	if (ret == 0) {
+		WARN_ON(dmar_walk_dsm_resource(handle, DMAR_DSM_FUNC_ATSR,
+					       &dmar_release_one_atsr, NULL));
+		WARN_ON(dmar_walk_dsm_resource(handle, DMAR_DSM_FUNC_DRHD,
+					       &dmar_hp_release_drhd, NULL));
+	} else {
+		dmar_walk_dsm_resource(handle, DMAR_DSM_FUNC_DRHD,
+				       &dmar_hp_add_drhd, NULL);
+	}
+
+	return ret;
+}
+
+static int dmar_device_hotplug(acpi_handle handle, bool insert)
+{
+	int ret;
+
+	if (!dmar_in_use())
+		return 0;
+
+	if (!dmar_detect_dsm(handle, DMAR_DSM_FUNC_DRHD))
+		return 0;
+
+	down_write(&dmar_global_lock);
+	if (insert)
+		ret = dmar_hotplug_insert(handle);
+	else
+		ret = dmar_hotplug_remove(handle);
+	up_write(&dmar_global_lock);
+
+	return ret;
+}
+
+int dmar_device_add(acpi_handle handle)
+{
+	return dmar_device_hotplug(handle, true);
+}
+
+int dmar_device_remove(acpi_handle handle)
+{
+	return dmar_device_hotplug(handle, false);
+}
diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index 7daa74ed46d0..70d9d47eaeda 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -3701,17 +3701,48 @@ int __init dmar_parse_one_rmrr(struct acpi_dmar_header *header, void *arg)
 	return 0;
 }
 
-int __init dmar_parse_one_atsr(struct acpi_dmar_header *hdr, void *arg)
+static struct dmar_atsr_unit *dmar_find_atsr(struct acpi_dmar_atsr *atsr)
+{
+	struct dmar_atsr_unit *atsru;
+	struct acpi_dmar_atsr *tmp;
+
+	list_for_each_entry_rcu(atsru, &dmar_atsr_units, list) {
+		tmp = (struct acpi_dmar_atsr *)atsru->hdr;
+		if (atsr->segment != tmp->segment)
+			continue;
+		if (atsr->header.length != tmp->header.length)
+			continue;
+		if (memcmp(atsr, tmp, atsr->header.length) == 0)
+			return atsru;
+	}
+
+	return NULL;
+}
+
+int dmar_parse_one_atsr(struct acpi_dmar_header *hdr, void *arg)
 {
 	struct acpi_dmar_atsr *atsr;
 	struct dmar_atsr_unit *atsru;
 
+	if (system_state != SYSTEM_BOOTING && !intel_iommu_enabled)
+		return 0;
+
 	atsr = container_of(hdr, struct acpi_dmar_atsr, header);
-	atsru = kzalloc(sizeof(*atsru), GFP_KERNEL);
+	atsru = dmar_find_atsr(atsr);
+	if (atsru)
+		return 0;
+
+	atsru = kzalloc(sizeof(*atsru) + hdr->length, GFP_KERNEL);
 	if (!atsru)
 		return -ENOMEM;
 
-	atsru->hdr = hdr;
+	/*
+	 * If memory is allocated from slab by ACPI _DSM method, we need to
+	 * copy the memory content because the memory buffer will be freed
+	 * on return.
+	 */
+	atsru->hdr = (void *)(atsru + 1);
+	memcpy(atsru->hdr, hdr, hdr->length);
 	atsru->include_all = atsr->flags & 0x1;
 	if (!atsru->include_all) {
 		atsru->devices = dmar_alloc_dev_scope((void *)(atsr + 1),
@@ -3734,6 +3765,47 @@ static void intel_iommu_free_atsr(struct dmar_atsr_unit *atsru)
 	kfree(atsru);
 }
 
+int dmar_release_one_atsr(struct acpi_dmar_header *hdr, void *arg)
+{
+	struct acpi_dmar_atsr *atsr;
+	struct dmar_atsr_unit *atsru;
+
+	atsr = container_of(hdr, struct acpi_dmar_atsr, header);
+	atsru = dmar_find_atsr(atsr);
+	if (atsru) {
+		list_del_rcu(&atsru->list);
+		synchronize_rcu();
+		intel_iommu_free_atsr(atsru);
+	}
+
+	return 0;
+}
+
+int dmar_check_one_atsr(struct acpi_dmar_header *hdr, void *arg)
+{
+	int i;
+	struct device *dev;
+	struct acpi_dmar_atsr *atsr;
+	struct dmar_atsr_unit *atsru;
+
+	atsr = container_of(hdr, struct acpi_dmar_atsr, header);
+	atsru = dmar_find_atsr(atsr);
+	if (!atsru)
+		return 0;
+
+	if (!atsru->include_all && atsru->devices && atsru->devices_cnt)
+		for_each_active_dev_scope(atsru->devices, atsru->devices_cnt,
+					  i, dev)
+			return -EBUSY;
+
+	return 0;
+}
+
+int dmar_iommu_hotplug(struct dmar_drhd_unit *dmaru, bool insert)
+{
+	return intel_iommu_enabled ? -ENOSYS : 0;
+}
+
 static void intel_iommu_free_dmars(void)
 {
 	struct dmar_rmrr_unit *rmrru, *rmrr_n;
diff --git a/drivers/iommu/intel_irq_remapping.c b/drivers/iommu/intel_irq_remapping.c
index 0df41f6264f5..9b140ed854ec 100644
--- a/drivers/iommu/intel_irq_remapping.c
+++ b/drivers/iommu/intel_irq_remapping.c
@@ -1172,3 +1172,8 @@ struct irq_remap_ops intel_irq_remap_ops = {
 	.msi_setup_irq		= intel_msi_setup_irq,
 	.setup_hpet_msi		= intel_setup_hpet_msi,
 };
+
+int dmar_ir_hotplug(struct dmar_drhd_unit *dmaru, bool insert)
+{
+	return irq_remapping_enabled ? -ENOSYS : 0;
+}
diff --git a/include/linux/dmar.h b/include/linux/dmar.h
index c8a576bc3a98..594d4ac79e75 100644
--- a/include/linux/dmar.h
+++ b/include/linux/dmar.h
@@ -120,6 +120,8 @@ extern int dmar_remove_dev_scope(struct dmar_pci_notify_info *info,
 /* Intel IOMMU detection */
 extern int detect_intel_iommu(void);
 extern int enable_drhd_fault_handling(void);
+extern int dmar_device_add(acpi_handle handle);
+extern int dmar_device_remove(acpi_handle handle);
 
 static inline int dmar_res_noop(struct acpi_dmar_header *hdr, void *arg)
 {
@@ -131,17 +133,48 @@ extern int iommu_detected, no_iommu;
 extern int intel_iommu_init(void);
 extern int dmar_parse_one_rmrr(struct acpi_dmar_header *header, void *arg);
 extern int dmar_parse_one_atsr(struct acpi_dmar_header *header, void *arg);
+extern int dmar_check_one_atsr(struct acpi_dmar_header *hdr, void *arg);
+extern int dmar_release_one_atsr(struct acpi_dmar_header *hdr, void *arg);
+extern int dmar_iommu_hotplug(struct dmar_drhd_unit *dmaru, bool insert);
 extern int dmar_iommu_notify_scope_dev(struct dmar_pci_notify_info *info);
 #else /* !CONFIG_INTEL_IOMMU: */
 static inline int intel_iommu_init(void) { return -ENODEV; }
+
 #define	dmar_parse_one_rmrr		dmar_res_noop
 #define	dmar_parse_one_atsr		dmar_res_noop
+#define	dmar_check_one_atsr		dmar_res_noop
+#define	dmar_release_one_atsr		dmar_res_noop
+
 static inline int dmar_iommu_notify_scope_dev(struct dmar_pci_notify_info *info)
 {
 	return 0;
 }
+
+static inline int dmar_iommu_hotplug(struct dmar_drhd_unit *dmaru, bool insert)
+{
+	return 0;
+}
 #endif /* CONFIG_INTEL_IOMMU */
 
+#ifdef CONFIG_IRQ_REMAP
+extern int dmar_ir_hotplug(struct dmar_drhd_unit *dmaru, bool insert);
+#else  /* CONFIG_IRQ_REMAP */
+static inline int dmar_ir_hotplug(struct dmar_drhd_unit *dmaru, bool insert)
+{ return 0; }
+#endif /* CONFIG_IRQ_REMAP */
+
+#else /* CONFIG_DMAR_TABLE */
+
+static inline int dmar_device_add(void *handle)
+{
+	return 0;
+}
+
+static inline int dmar_device_remove(void *handle)
+{
+	return 0;
+}
+
 #endif /* CONFIG_DMAR_TABLE */
 
 struct irte {
-- 
1.7.10.4

^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [Patch Part3 V6 3/8] iommu/vt-d: Implement DMAR unit hotplug framework
@ 2014-09-19  5:18   ` Jiang Liu
  0 siblings, 0 replies; 35+ messages in thread
From: Jiang Liu @ 2014-09-19  5:18 UTC (permalink / raw)
  To: Joerg Roedel, David Woodhouse, Yinghai Lu, Bjorn Helgaas,
	Dan Williams, Vinod Koul, Rafael J . Wysocki
  Cc: Jiang Liu, Ashok Raj, Yijing Wang, Tony Luck, iommu, linux-pci,
	linux-hotplug, linux-kernel, dmaengine

On Intel platforms, an IO Hub (PCI/PCIe host bridge) may contain DMAR
units, so we need to support DMAR hotplug when supporting PCI host
bridge hotplug on Intel platforms.

According to Section 8.8 "Remapping Hardware Unit Hot Plug" in the "Intel
Virtualization Technology for Directed IO Architecture Specification
Rev 2.2", the ACPI BIOS should implement an ACPI _DSM method under the
ACPI object for the PCI host bridge to support DMAR hotplug.

This patch introduces interfaces to parse ACPI _DSM method for
DMAR unit hotplug. It also implements state machines for DMAR unit
hot-addition and hot-removal.

The PCI host bridge hotplug driver should call dmar_device_add() before
scanning newly connected PCI devices for hot-addition, and
dmar_device_remove() after destroying all affected PCI devices for
hot-removal.
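The hot-add path in this patch (dmar_hotplug_insert()) is a staged state
machine with goto-based rollback: each stage that succeeded before a failure
is released in reverse order. The shape of that error handling can be
sketched as a self-contained mock (the two stages and their names are
illustrative, not the kernel functions):

```c
#include <assert.h>

/* Bitmask values identifying the two mock stages. */
enum { ST_DRHD = 1, ST_ATSR = 2 };

static int released;   /* bitmask of stages that were rolled back */
static int fail_stage; /* which stage (if any) to force a failure in */

static int parse_drhd(void) { return fail_stage == ST_DRHD ? -1 : 0; }
static int parse_atsr(void) { return fail_stage == ST_ATSR ? -1 : 0; }
static void release_drhd(void) { released |= ST_DRHD; }
static void release_atsr(void) { released |= ST_ATSR; }

/*
 * Mock of the unwind pattern in dmar_hotplug_insert(): on failure,
 * jump to the label that releases every stage completed so far.
 */
static int hotplug_insert(void)
{
	int ret;

	ret = parse_drhd();
	if (ret)
		goto out;	/* nothing to undo yet */

	ret = parse_atsr();
	if (ret)
		goto rollback_drhd;

	return 0;		/* all stages activated */

rollback_drhd:
	release_drhd();
out:
	return ret;
}
```

A failure in a later stage unwinds only the earlier, already-completed
stages; release_atsr() would only run from a label past the ATSR stage,
mirroring the release_atsr/release_drhd labels in the real function.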

Signed-off-by: Jiang Liu <jiang.liu@linux.intel.com>
Reviewed-by: Yijing Wang <wangyijing@huawei.com>
---
 drivers/iommu/dmar.c                |  268 +++++++++++++++++++++++++++++++++--
 drivers/iommu/intel-iommu.c         |   78 +++++++++-
 drivers/iommu/intel_irq_remapping.c |    5 +
 include/linux/dmar.h                |   33 +++++
 4 files changed, 370 insertions(+), 14 deletions(-)

diff --git a/drivers/iommu/dmar.c b/drivers/iommu/dmar.c
index ac4f8ee2871f..ab504cf0f34a 100644
--- a/drivers/iommu/dmar.c
+++ b/drivers/iommu/dmar.c
@@ -75,7 +75,7 @@ static unsigned long dmar_seq_ids[BITS_TO_LONGS(DMAR_UNITS_SUPPORTED)];
 static int alloc_iommu(struct dmar_drhd_unit *drhd);
 static void free_iommu(struct intel_iommu *iommu);
 
-static void __init dmar_register_drhd_unit(struct dmar_drhd_unit *drhd)
+static void dmar_register_drhd_unit(struct dmar_drhd_unit *drhd)
 {
 	/*
 	 * add INCLUDE_ALL at the tail, so scan the list will find it at
@@ -336,24 +336,45 @@ static struct notifier_block dmar_pci_bus_nb = {
 	.priority = INT_MIN,
 };
 
+static struct dmar_drhd_unit *
+dmar_find_dmaru(struct acpi_dmar_hardware_unit *drhd)
+{
+	struct dmar_drhd_unit *dmaru;
+
+	list_for_each_entry_rcu(dmaru, &dmar_drhd_units, list)
+		if (dmaru->segment == drhd->segment &&
+		    dmaru->reg_base_addr == drhd->address)
+			return dmaru;
+
+	return NULL;
+}
+
 /**
  * dmar_parse_one_drhd - parses exactly one DMA remapping hardware definition
  * structure which uniquely represent one DMA remapping hardware unit
  * present in the platform
  */
-static int __init
-dmar_parse_one_drhd(struct acpi_dmar_header *header, void *arg)
+static int dmar_parse_one_drhd(struct acpi_dmar_header *header, void *arg)
 {
 	struct acpi_dmar_hardware_unit *drhd;
 	struct dmar_drhd_unit *dmaru;
 	int ret = 0;
 
 	drhd = (struct acpi_dmar_hardware_unit *)header;
-	dmaru = kzalloc(sizeof(*dmaru), GFP_KERNEL);
+	dmaru = dmar_find_dmaru(drhd);
+	if (dmaru)
+		goto out;
+
+	dmaru = kzalloc(sizeof(*dmaru) + header->length, GFP_KERNEL);
 	if (!dmaru)
 		return -ENOMEM;
 
-	dmaru->hdr = header;
+	/*
+	 * If header is allocated from slab by ACPI _DSM method, we need to
+	 * copy the content because the memory buffer will be freed on return.
+	 */
+	dmaru->hdr = (void *)(dmaru + 1);
+	memcpy(dmaru->hdr, header, header->length);
 	dmaru->reg_base_addr = drhd->address;
 	dmaru->segment = drhd->segment;
 	dmaru->include_all = drhd->flags & 0x1; /* BIT0: INCLUDE_ALL */
@@ -374,6 +395,7 @@ dmar_parse_one_drhd(struct acpi_dmar_header *header, void *arg)
 	}
 	dmar_register_drhd_unit(dmaru);
 
+out:
 	if (arg)
 		(*(int *)arg)++;
 
@@ -411,8 +433,7 @@ static int __init dmar_parse_one_andd(struct acpi_dmar_header *header,
 }
 
 #ifdef CONFIG_ACPI_NUMA
-static int __init
-dmar_parse_one_rhsa(struct acpi_dmar_header *header, void *arg)
+static int dmar_parse_one_rhsa(struct acpi_dmar_header *header, void *arg)
 {
 	struct acpi_dmar_rhsa *rhsa;
 	struct dmar_drhd_unit *drhd;
@@ -804,14 +825,22 @@ dmar_validate_one_drhd(struct acpi_dmar_header *entry, void *arg)
 		return -EINVAL;
 	}
 
-	addr = early_ioremap(drhd->address, VTD_PAGE_SIZE);
+	if (arg)
+		addr = ioremap(drhd->address, VTD_PAGE_SIZE);
+	else
+		addr = early_ioremap(drhd->address, VTD_PAGE_SIZE);
 	if (!addr) {
 		pr_warn("IOMMU: can't validate: %llx\n", drhd->address);
 		return -EINVAL;
 	}
+
 	cap = dmar_readq(addr + DMAR_CAP_REG);
 	ecap = dmar_readq(addr + DMAR_ECAP_REG);
-	early_iounmap(addr, VTD_PAGE_SIZE);
+
+	if (arg)
+		iounmap(addr);
+	else
+		early_iounmap(addr, VTD_PAGE_SIZE);
 
 	if (cap == (uint64_t)-1 && ecap == (uint64_t)-1) {
 		warn_invalid_dmar(drhd->address, " returns all ones");
@@ -1685,12 +1714,17 @@ int __init dmar_ir_support(void)
 	return dmar->flags & 0x1;
 }
 
+/* Check whether DMAR units are in use */
+static inline bool dmar_in_use(void)
+{
+	return irq_remapping_enabled || intel_iommu_enabled;
+}
+
 static int __init dmar_free_unused_resources(void)
 {
 	struct dmar_drhd_unit *dmaru, *dmaru_n;
 
-	/* DMAR units are in use */
-	if (irq_remapping_enabled || intel_iommu_enabled)
+	if (dmar_in_use())
 		return 0;
 
 	if (dmar_dev_scope_status != 1 && !list_empty(&dmar_drhd_units))
@@ -1708,3 +1742,215 @@ static int __init dmar_free_unused_resources(void)
 
 late_initcall(dmar_free_unused_resources);
 IOMMU_INIT_POST(detect_intel_iommu);
+
+/*
+ * DMAR Hotplug Support
+ * For more details, please refer to Intel(R) Virtualization Technology
+ * for Directed-IO Architecture Specification, Rev 2.2, Section 8.8
+ * "Remapping Hardware Unit Hot Plug".
+ */
+static u8 dmar_hp_uuid[] = {
+	/* 0000 */    0xA6, 0xA3, 0xC1, 0xD8, 0x9B, 0xBE, 0x9B, 0x4C,
+	/* 0008 */    0x91, 0xBF, 0xC3, 0xCB, 0x81, 0xFC, 0x5D, 0xAF
+};
+
+/*
+ * Currently there's only one revision and BIOS will not check the revision id,
+ * so use 0 for safety.
+ */
+#define	DMAR_DSM_REV_ID			0
+#define	DMAR_DSM_FUNC_DRHD		1
+#define	DMAR_DSM_FUNC_ATSR		2
+#define	DMAR_DSM_FUNC_RHSA		3
+
+static inline bool dmar_detect_dsm(acpi_handle handle, int func)
+{
+	return acpi_check_dsm(handle, dmar_hp_uuid, DMAR_DSM_REV_ID, 1 << func);
+}
+
+static int dmar_walk_dsm_resource(acpi_handle handle, int func,
+				  dmar_res_handler_t handler, void *arg)
+{
+	int ret = -ENODEV;
+	union acpi_object *obj;
+	struct acpi_dmar_header *start;
+	struct dmar_res_callback callback;
+	static int res_type[] = {
+		[DMAR_DSM_FUNC_DRHD] = ACPI_DMAR_TYPE_HARDWARE_UNIT,
+		[DMAR_DSM_FUNC_ATSR] = ACPI_DMAR_TYPE_ROOT_ATS,
+		[DMAR_DSM_FUNC_RHSA] = ACPI_DMAR_TYPE_HARDWARE_AFFINITY,
+	};
+
+	if (!dmar_detect_dsm(handle, func))
+		return 0;
+
+	obj = acpi_evaluate_dsm_typed(handle, dmar_hp_uuid, DMAR_DSM_REV_ID,
+				      func, NULL, ACPI_TYPE_BUFFER);
+	if (!obj)
+		return -ENODEV;
+
+	memset(&callback, 0, sizeof(callback));
+	callback.cb[res_type[func]] = handler;
+	callback.arg[res_type[func]] = arg;
+	start = (struct acpi_dmar_header *)obj->buffer.pointer;
+	ret = dmar_walk_remapping_entries(start, obj->buffer.length, &callback);
+
+	ACPI_FREE(obj);
+
+	return ret;
+}
+
+static int dmar_hp_add_drhd(struct acpi_dmar_header *header, void *arg)
+{
+	int ret;
+	struct dmar_drhd_unit *dmaru;
+
+	dmaru = dmar_find_dmaru((struct acpi_dmar_hardware_unit *)header);
+	if (!dmaru)
+		return -ENODEV;
+
+	ret = dmar_ir_hotplug(dmaru, true);
+	if (ret == 0)
+		ret = dmar_iommu_hotplug(dmaru, true);
+
+	return ret;
+}
+
+static int dmar_hp_remove_drhd(struct acpi_dmar_header *header, void *arg)
+{
+	int i, ret;
+	struct device *dev;
+	struct dmar_drhd_unit *dmaru;
+
+	dmaru = dmar_find_dmaru((struct acpi_dmar_hardware_unit *)header);
+	if (!dmaru)
+		return 0;
+
+	/*
+	 * All PCI devices managed by this unit should have been destroyed.
+	 */
+	if (!dmaru->include_all && dmaru->devices && dmaru->devices_cnt)
+		for_each_active_dev_scope(dmaru->devices,
+					  dmaru->devices_cnt, i, dev)
+			return -EBUSY;
+
+	ret = dmar_ir_hotplug(dmaru, false);
+	if (ret == 0)
+		ret = dmar_iommu_hotplug(dmaru, false);
+
+	return ret;
+}
+
+static int dmar_hp_release_drhd(struct acpi_dmar_header *header, void *arg)
+{
+	struct dmar_drhd_unit *dmaru;
+
+	dmaru = dmar_find_dmaru((struct acpi_dmar_hardware_unit *)header);
+	if (dmaru) {
+		list_del_rcu(&dmaru->list);
+		synchronize_rcu();
+		dmar_free_drhd(dmaru);
+	}
+
+	return 0;
+}
+
+static int dmar_hotplug_insert(acpi_handle handle)
+{
+	int ret;
+	int drhd_count = 0;
+
+	ret = dmar_walk_dsm_resource(handle, DMAR_DSM_FUNC_DRHD,
+				     &dmar_validate_one_drhd, (void *)1);
+	if (ret)
+		goto out;
+
+	ret = dmar_walk_dsm_resource(handle, DMAR_DSM_FUNC_DRHD,
+				     &dmar_parse_one_drhd, (void *)&drhd_count);
+	if (ret == 0 && drhd_count == 0) {
+		pr_warn(FW_BUG "No DRHD structures in buffer returned by _DSM method\n");
+		goto out;
+	} else if (ret) {
+		goto release_drhd;
+	}
+
+	ret = dmar_walk_dsm_resource(handle, DMAR_DSM_FUNC_RHSA,
+				     &dmar_parse_one_rhsa, NULL);
+	if (ret)
+		goto release_drhd;
+
+	ret = dmar_walk_dsm_resource(handle, DMAR_DSM_FUNC_ATSR,
+				     &dmar_parse_one_atsr, NULL);
+	if (ret)
+		goto release_atsr;
+
+	ret = dmar_walk_dsm_resource(handle, DMAR_DSM_FUNC_DRHD,
+				     &dmar_hp_add_drhd, NULL);
+	if (!ret)
+		return 0;
+
+	dmar_walk_dsm_resource(handle, DMAR_DSM_FUNC_DRHD,
+			       &dmar_hp_remove_drhd, NULL);
+release_atsr:
+	dmar_walk_dsm_resource(handle, DMAR_DSM_FUNC_ATSR,
+			       &dmar_release_one_atsr, NULL);
+release_drhd:
+	dmar_walk_dsm_resource(handle, DMAR_DSM_FUNC_DRHD,
+			       &dmar_hp_release_drhd, NULL);
+out:
+	return ret;
+}
+
+static int dmar_hotplug_remove(acpi_handle handle)
+{
+	int ret;
+
+	ret = dmar_walk_dsm_resource(handle, DMAR_DSM_FUNC_ATSR,
+				     &dmar_check_one_atsr, NULL);
+	if (ret)
+		return ret;
+
+	ret = dmar_walk_dsm_resource(handle, DMAR_DSM_FUNC_DRHD,
+				     &dmar_hp_remove_drhd, NULL);
+	if (ret == 0) {
+		WARN_ON(dmar_walk_dsm_resource(handle, DMAR_DSM_FUNC_ATSR,
+					       &dmar_release_one_atsr, NULL));
+		WARN_ON(dmar_walk_dsm_resource(handle, DMAR_DSM_FUNC_DRHD,
+					       &dmar_hp_release_drhd, NULL));
+	} else {
+		dmar_walk_dsm_resource(handle, DMAR_DSM_FUNC_DRHD,
+				       &dmar_hp_add_drhd, NULL);
+	}
+
+	return ret;
+}
+
+static int dmar_device_hotplug(acpi_handle handle, bool insert)
+{
+	int ret;
+
+	if (!dmar_in_use())
+		return 0;
+
+	if (!dmar_detect_dsm(handle, DMAR_DSM_FUNC_DRHD))
+		return 0;
+
+	down_write(&dmar_global_lock);
+	if (insert)
+		ret = dmar_hotplug_insert(handle);
+	else
+		ret = dmar_hotplug_remove(handle);
+	up_write(&dmar_global_lock);
+
+	return ret;
+}
+
+int dmar_device_add(acpi_handle handle)
+{
+	return dmar_device_hotplug(handle, true);
+}
+
+int dmar_device_remove(acpi_handle handle)
+{
+	return dmar_device_hotplug(handle, false);
+}
diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index 7daa74ed46d0..70d9d47eaeda 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -3701,17 +3701,48 @@ int __init dmar_parse_one_rmrr(struct acpi_dmar_header *header, void *arg)
 	return 0;
 }
 
-int __init dmar_parse_one_atsr(struct acpi_dmar_header *hdr, void *arg)
+static struct dmar_atsr_unit *dmar_find_atsr(struct acpi_dmar_atsr *atsr)
+{
+	struct dmar_atsr_unit *atsru;
+	struct acpi_dmar_atsr *tmp;
+
+	list_for_each_entry_rcu(atsru, &dmar_atsr_units, list) {
+		tmp = (struct acpi_dmar_atsr *)atsru->hdr;
+		if (atsr->segment != tmp->segment)
+			continue;
+		if (atsr->header.length != tmp->header.length)
+			continue;
+		if (memcmp(atsr, tmp, atsr->header.length) == 0)
+			return atsru;
+	}
+
+	return NULL;
+}
+
+int dmar_parse_one_atsr(struct acpi_dmar_header *hdr, void *arg)
 {
 	struct acpi_dmar_atsr *atsr;
 	struct dmar_atsr_unit *atsru;
 
+	if (system_state != SYSTEM_BOOTING && !intel_iommu_enabled)
+		return 0;
+
 	atsr = container_of(hdr, struct acpi_dmar_atsr, header);
-	atsru = kzalloc(sizeof(*atsru), GFP_KERNEL);
+	atsru = dmar_find_atsr(atsr);
+	if (atsru)
+		return 0;
+
+	atsru = kzalloc(sizeof(*atsru) + hdr->length, GFP_KERNEL);
 	if (!atsru)
 		return -ENOMEM;
 
-	atsru->hdr = hdr;
+	/*
+	 * If the memory is allocated from the slab by the ACPI _DSM method,
+	 * we need to copy the content because the buffer will be freed on
+	 * return.
+	 */
+	atsru->hdr = (void *)(atsru + 1);
+	memcpy(atsru->hdr, hdr, hdr->length);
 	atsru->include_all = atsr->flags & 0x1;
 	if (!atsru->include_all) {
 		atsru->devices = dmar_alloc_dev_scope((void *)(atsr + 1),
@@ -3734,6 +3765,47 @@ static void intel_iommu_free_atsr(struct dmar_atsr_unit *atsru)
 	kfree(atsru);
 }
 
+int dmar_release_one_atsr(struct acpi_dmar_header *hdr, void *arg)
+{
+	struct acpi_dmar_atsr *atsr;
+	struct dmar_atsr_unit *atsru;
+
+	atsr = container_of(hdr, struct acpi_dmar_atsr, header);
+	atsru = dmar_find_atsr(atsr);
+	if (atsru) {
+		list_del_rcu(&atsru->list);
+		synchronize_rcu();
+		intel_iommu_free_atsr(atsru);
+	}
+
+	return 0;
+}
+
+int dmar_check_one_atsr(struct acpi_dmar_header *hdr, void *arg)
+{
+	int i;
+	struct device *dev;
+	struct acpi_dmar_atsr *atsr;
+	struct dmar_atsr_unit *atsru;
+
+	atsr = container_of(hdr, struct acpi_dmar_atsr, header);
+	atsru = dmar_find_atsr(atsr);
+	if (!atsru)
+		return 0;
+
+	if (!atsru->include_all && atsru->devices && atsru->devices_cnt)
+		for_each_active_dev_scope(atsru->devices, atsru->devices_cnt,
+					  i, dev)
+			return -EBUSY;
+
+	return 0;
+}
+
+int dmar_iommu_hotplug(struct dmar_drhd_unit *dmaru, bool insert)
+{
+	return intel_iommu_enabled ? -ENOSYS : 0;
+}
+
 static void intel_iommu_free_dmars(void)
 {
 	struct dmar_rmrr_unit *rmrru, *rmrr_n;
diff --git a/drivers/iommu/intel_irq_remapping.c b/drivers/iommu/intel_irq_remapping.c
index 0df41f6264f5..9b140ed854ec 100644
--- a/drivers/iommu/intel_irq_remapping.c
+++ b/drivers/iommu/intel_irq_remapping.c
@@ -1172,3 +1172,8 @@ struct irq_remap_ops intel_irq_remap_ops = {
 	.msi_setup_irq		= intel_msi_setup_irq,
 	.setup_hpet_msi		= intel_setup_hpet_msi,
 };
+
+int dmar_ir_hotplug(struct dmar_drhd_unit *dmaru, bool insert)
+{
+	return irq_remapping_enabled ? -ENOSYS : 0;
+}
diff --git a/include/linux/dmar.h b/include/linux/dmar.h
index c8a576bc3a98..594d4ac79e75 100644
--- a/include/linux/dmar.h
+++ b/include/linux/dmar.h
@@ -120,6 +120,8 @@ extern int dmar_remove_dev_scope(struct dmar_pci_notify_info *info,
 /* Intel IOMMU detection */
 extern int detect_intel_iommu(void);
 extern int enable_drhd_fault_handling(void);
+extern int dmar_device_add(acpi_handle handle);
+extern int dmar_device_remove(acpi_handle handle);
 
 static inline int dmar_res_noop(struct acpi_dmar_header *hdr, void *arg)
 {
@@ -131,17 +133,48 @@ extern int iommu_detected, no_iommu;
 extern int intel_iommu_init(void);
 extern int dmar_parse_one_rmrr(struct acpi_dmar_header *header, void *arg);
 extern int dmar_parse_one_atsr(struct acpi_dmar_header *header, void *arg);
+extern int dmar_check_one_atsr(struct acpi_dmar_header *hdr, void *arg);
+extern int dmar_release_one_atsr(struct acpi_dmar_header *hdr, void *arg);
+extern int dmar_iommu_hotplug(struct dmar_drhd_unit *dmaru, bool insert);
 extern int dmar_iommu_notify_scope_dev(struct dmar_pci_notify_info *info);
 #else /* !CONFIG_INTEL_IOMMU: */
 static inline int intel_iommu_init(void) { return -ENODEV; }
+
 #define	dmar_parse_one_rmrr		dmar_res_noop
 #define	dmar_parse_one_atsr		dmar_res_noop
+#define	dmar_check_one_atsr		dmar_res_noop
+#define	dmar_release_one_atsr		dmar_res_noop
+
 static inline int dmar_iommu_notify_scope_dev(struct dmar_pci_notify_info *info)
 {
 	return 0;
 }
+
+static inline int dmar_iommu_hotplug(struct dmar_drhd_unit *dmaru, bool insert)
+{
+	return 0;
+}
 #endif /* CONFIG_INTEL_IOMMU */
 
+#ifdef CONFIG_IRQ_REMAP
+extern int dmar_ir_hotplug(struct dmar_drhd_unit *dmaru, bool insert);
+#else  /* CONFIG_IRQ_REMAP */
+static inline int dmar_ir_hotplug(struct dmar_drhd_unit *dmaru, bool insert)
+{ return 0; }
+#endif /* CONFIG_IRQ_REMAP */
+
+#else /* CONFIG_DMAR_TABLE */
+
+static inline int dmar_device_add(void *handle)
+{
+	return 0;
+}
+
+static inline int dmar_device_remove(void *handle)
+{
+	return 0;
+}
+
 #endif /* CONFIG_DMAR_TABLE */
 
 struct irte {
-- 
1.7.10.4


^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [Patch Part3 V6 4/8] iommu/vt-d: Search for ACPI _DSM method for DMAR hotplug
@ 2014-09-19  5:18   ` Jiang Liu
  0 siblings, 0 replies; 35+ messages in thread
From: Jiang Liu @ 2014-09-19  5:18 UTC (permalink / raw)
  To: Joerg Roedel, David Woodhouse, Yinghai Lu, Bjorn Helgaas,
	Dan Williams, Vinod Koul, Rafael J . Wysocki
  Cc: Jiang Liu, Ashok Raj, Yijing Wang, Tony Luck, iommu, linux-pci,
	linux-hotplug, linux-kernel, dmaengine

According to the Intel VT-d specification, the _DSM method supporting
DMAR hotplug should exist directly under the ACPI object representing
the PCI host bridge. But some BIOSes don't conform to this, so search
for the _DSM method in the subtree rooted at the ACPI object
representing the PCI host bridge.
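The fallback this patch describes can be modeled outside the kernel as a
plain depth-first walk that stops at the first matching object. The node
layout and function names below are an illustrative userspace sketch,
not kernel or ACPICA API:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Toy stand-in for an ACPI namespace node; not a kernel structure. */
struct ns_node {
	bool has_drhd_dsm;       /* stands in for dmar_detect_dsm(handle) */
	struct ns_node *child;   /* first child in the subtree */
	struct ns_node *sibling; /* next sibling at the same depth */
};

/*
 * Depth-first search over the subtree: check the starting object itself
 * first, then walk everything below it and stop at the first object
 * carrying the _DSM -- the role AE_CTRL_TERMINATE plays for
 * acpi_walk_namespace() in dmar_get_dsm_handle().
 */
static struct ns_node *find_dsm_handle(struct ns_node *root)
{
	struct ns_node *c, *found;

	if (root == NULL)
		return NULL;
	if (root->has_drhd_dsm)
		return root;
	for (c = root->child; c != NULL; c = c->sibling) {
		found = find_dsm_handle(c);
		if (found != NULL)
			return found;
	}
	return NULL;
}
```

Terminating on the first match is what makes the fallback cheap: a
conforming BIOS is detected at the start handle without descending, and
a non-conforming one costs only the walk down to the first object that
answers the _DSM probe.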

Signed-off-by: Jiang Liu <jiang.liu@linux.intel.com>
Reviewed-by: Yijing Wang <wangyijing@huawei.com>
---
 drivers/iommu/dmar.c |   35 +++++++++++++++++++++++++++++++----
 1 file changed, 31 insertions(+), 4 deletions(-)

diff --git a/drivers/iommu/dmar.c b/drivers/iommu/dmar.c
index ab504cf0f34a..9249836d0a6a 100644
--- a/drivers/iommu/dmar.c
+++ b/drivers/iommu/dmar.c
@@ -1925,21 +1925,48 @@ static int dmar_hotplug_remove(acpi_handle handle)
 	return ret;
 }
 
-static int dmar_device_hotplug(acpi_handle handle, bool insert)
+static acpi_status dmar_get_dsm_handle(acpi_handle handle, u32 lvl,
+				       void *context, void **retval)
+{
+	acpi_handle *phdl = retval;
+
+	if (dmar_detect_dsm(handle, DMAR_DSM_FUNC_DRHD)) {
+		*phdl = handle;
+		return AE_CTRL_TERMINATE;
+	}
+
+	return AE_OK;
+}
+
+int dmar_device_hotplug(acpi_handle handle, bool insert)
 {
 	int ret;
+	acpi_handle tmp = NULL;
+	acpi_status status;
 
 	if (!dmar_in_use())
 		return 0;
 
-	if (!dmar_detect_dsm(handle, DMAR_DSM_FUNC_DRHD))
+	if (dmar_detect_dsm(handle, DMAR_DSM_FUNC_DRHD)) {
+		tmp = handle;
+	} else {
+		status = acpi_walk_namespace(ACPI_TYPE_DEVICE, handle,
+					     ACPI_UINT32_MAX,
+					     dmar_get_dsm_handle,
+					     NULL, NULL, &tmp);
+		if (ACPI_FAILURE(status)) {
+			pr_warn("Failed to locate _DSM method.\n");
+			return -ENXIO;
+		}
+	}
+	if (tmp == NULL)
 		return 0;
 
 	down_write(&dmar_global_lock);
 	if (insert)
-		ret = dmar_hotplug_insert(handle);
+		ret = dmar_hotplug_insert(tmp);
 	else
-		ret = dmar_hotplug_remove(handle);
+		ret = dmar_hotplug_remove(tmp);
 	up_write(&dmar_global_lock);
 
 	return ret;
-- 
1.7.10.4


^ permalink raw reply related	[flat|nested] 35+ messages in thread


* [Patch Part3 V6 5/8] iommu/vt-d: Enhance intel_irq_remapping driver to support DMAR unit hotplug
@ 2014-09-19  5:18   ` Jiang Liu
  0 siblings, 0 replies; 35+ messages in thread
From: Jiang Liu @ 2014-09-19  5:18 UTC (permalink / raw)
  To: Joerg Roedel, David Woodhouse, Yinghai Lu, Bjorn Helgaas,
	Dan Williams, Vinod Koul, Rafael J . Wysocki
  Cc: Jiang Liu, Ashok Raj, Yijing Wang, Tony Luck, iommu, linux-pci,
	linux-hotplug, linux-kernel, dmaengine

Implement the callback functions required by the intel_irq_remapping
driver to support DMAR unit hotplug.
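The core of this patch replaces the append-only ir_hpet_num and
ir_ioapic_num counters with a scan that reuses an existing entry or
claims the first free slot, so entries released on hot-removal can be
reused. A minimal userspace sketch of that slot-reuse pattern (the types
and names here are illustrative, not the kernel's):

```c
#include <assert.h>
#include <stddef.h>

#define MAX_SLOTS 8

/* Toy per-slot state; in the patch this is ir_hpet[] / ir_ioapic[]. */
struct scope_slot {
	const void *iommu;   /* NULL marks a free slot */
	int id;
};

/*
 * Register (iommu, id): return the existing slot index if the pair is
 * already present, otherwise claim the first free slot; -1 if the table
 * is full. A bare counter cannot do this once removal creates holes.
 */
static int scope_register(struct scope_slot *tab, const void *iommu, int id)
{
	int i, free = -1;

	for (i = 0; i < MAX_SLOTS; i++) {
		if (tab[i].iommu == iommu && tab[i].id == id)
			return i;               /* already registered */
		if (tab[i].iommu == NULL && free == -1)
			free = i;               /* remember first hole */
	}
	if (free == -1)
		return -1;                      /* table full */
	tab[free].iommu = iommu;
	tab[free].id = id;
	return free;
}
```

This is also why the lookup sides (map_hpet_to_ir(), set_ioapic_sid(),
and friends) gain `&& ir_hpet[i].iommu` / `ir_ioapic[i].iommu &&`
checks: with holes allowed, a NULL iommu pointer, not the array index,
is what distinguishes a live entry from a freed one.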

Signed-off-by: Jiang Liu <jiang.liu@linux.intel.com>
---
 drivers/iommu/intel_irq_remapping.c |  226 ++++++++++++++++++++++++++---------
 1 file changed, 171 insertions(+), 55 deletions(-)

diff --git a/drivers/iommu/intel_irq_remapping.c b/drivers/iommu/intel_irq_remapping.c
index 9b140ed854ec..1edbbed8c6bc 100644
--- a/drivers/iommu/intel_irq_remapping.c
+++ b/drivers/iommu/intel_irq_remapping.c
@@ -36,7 +36,6 @@ struct hpet_scope {
 
 static struct ioapic_scope ir_ioapic[MAX_IO_APICS];
 static struct hpet_scope ir_hpet[MAX_HPET_TBS];
-static int ir_ioapic_num, ir_hpet_num;
 
 /*
  * Lock ordering:
@@ -206,7 +205,7 @@ static struct intel_iommu *map_hpet_to_ir(u8 hpet_id)
 	int i;
 
 	for (i = 0; i < MAX_HPET_TBS; i++)
-		if (ir_hpet[i].id == hpet_id)
+		if (ir_hpet[i].id == hpet_id && ir_hpet[i].iommu)
 			return ir_hpet[i].iommu;
 	return NULL;
 }
@@ -216,7 +215,7 @@ static struct intel_iommu *map_ioapic_to_ir(int apic)
 	int i;
 
 	for (i = 0; i < MAX_IO_APICS; i++)
-		if (ir_ioapic[i].id == apic)
+		if (ir_ioapic[i].id == apic && ir_ioapic[i].iommu)
 			return ir_ioapic[i].iommu;
 	return NULL;
 }
@@ -325,7 +324,7 @@ static int set_ioapic_sid(struct irte *irte, int apic)
 
 	down_read(&dmar_global_lock);
 	for (i = 0; i < MAX_IO_APICS; i++) {
-		if (ir_ioapic[i].id == apic) {
+		if (ir_ioapic[i].iommu && ir_ioapic[i].id == apic) {
 			sid = (ir_ioapic[i].bus << 8) | ir_ioapic[i].devfn;
 			break;
 		}
@@ -352,7 +351,7 @@ static int set_hpet_sid(struct irte *irte, u8 id)
 
 	down_read(&dmar_global_lock);
 	for (i = 0; i < MAX_HPET_TBS; i++) {
-		if (ir_hpet[i].id == id) {
+		if (ir_hpet[i].iommu && ir_hpet[i].id == id) {
 			sid = (ir_hpet[i].bus << 8) | ir_hpet[i].devfn;
 			break;
 		}
@@ -474,17 +473,17 @@ static void iommu_set_irq_remapping(struct intel_iommu *iommu, int mode)
 	raw_spin_unlock_irqrestore(&iommu->register_lock, flags);
 }
 
-
-static int intel_setup_irq_remapping(struct intel_iommu *iommu, int mode)
+static int intel_setup_irq_remapping(struct intel_iommu *iommu)
 {
 	struct ir_table *ir_table;
 	struct page *pages;
 	unsigned long *bitmap;
 
-	ir_table = iommu->ir_table = kzalloc(sizeof(struct ir_table),
-					     GFP_ATOMIC);
+	if (iommu->ir_table)
+		return 0;
 
-	if (!iommu->ir_table)
+	ir_table = kzalloc(sizeof(struct ir_table), GFP_ATOMIC);
+	if (!ir_table)
 		return -ENOMEM;
 
 	pages = alloc_pages_node(iommu->node, GFP_ATOMIC | __GFP_ZERO,
@@ -493,7 +492,7 @@ static int intel_setup_irq_remapping(struct intel_iommu *iommu, int mode)
 	if (!pages) {
 		pr_err("IR%d: failed to allocate pages of order %d\n",
 		       iommu->seq_id, INTR_REMAP_PAGE_ORDER);
-		kfree(iommu->ir_table);
+		kfree(ir_table);
 		return -ENOMEM;
 	}
 
@@ -508,11 +507,22 @@ static int intel_setup_irq_remapping(struct intel_iommu *iommu, int mode)
 
 	ir_table->base = page_address(pages);
 	ir_table->bitmap = bitmap;
+	iommu->ir_table = ir_table;
 
-	iommu_set_irq_remapping(iommu, mode);
 	return 0;
 }
 
+static void intel_teardown_irq_remapping(struct intel_iommu *iommu)
+{
+	if (iommu && iommu->ir_table) {
+		free_pages((unsigned long)iommu->ir_table->base,
+			   INTR_REMAP_PAGE_ORDER);
+		kfree(iommu->ir_table->bitmap);
+		kfree(iommu->ir_table);
+		iommu->ir_table = NULL;
+	}
+}
+
 /*
  * Disable Interrupt Remapping.
  */
@@ -667,9 +677,10 @@ static int __init intel_enable_irq_remapping(void)
 		if (!ecap_ir_support(iommu->ecap))
 			continue;
 
-		if (intel_setup_irq_remapping(iommu, eim))
+		if (intel_setup_irq_remapping(iommu))
 			goto error;
 
+		iommu_set_irq_remapping(iommu, eim);
 		setup = 1;
 	}
 
@@ -700,12 +711,13 @@ error:
 	return -1;
 }
 
-static void ir_parse_one_hpet_scope(struct acpi_dmar_device_scope *scope,
-				      struct intel_iommu *iommu)
+static int ir_parse_one_hpet_scope(struct acpi_dmar_device_scope *scope,
+				   struct intel_iommu *iommu,
+				   struct acpi_dmar_hardware_unit *drhd)
 {
 	struct acpi_dmar_pci_path *path;
 	u8 bus;
-	int count;
+	int count, free = -1;
 
 	bus = scope->bus;
 	path = (struct acpi_dmar_pci_path *)(scope + 1);
@@ -721,19 +733,36 @@ static void ir_parse_one_hpet_scope(struct acpi_dmar_device_scope *scope,
 					   PCI_SECONDARY_BUS);
 		path++;
 	}
-	ir_hpet[ir_hpet_num].bus   = bus;
-	ir_hpet[ir_hpet_num].devfn = PCI_DEVFN(path->device, path->function);
-	ir_hpet[ir_hpet_num].iommu = iommu;
-	ir_hpet[ir_hpet_num].id    = scope->enumeration_id;
-	ir_hpet_num++;
+
+	for (count = 0; count < MAX_HPET_TBS; count++) {
+		if (ir_hpet[count].iommu == iommu &&
+		    ir_hpet[count].id == scope->enumeration_id)
+			return 0;
+		else if (ir_hpet[count].iommu == NULL && free == -1)
+			free = count;
+	}
+	if (free == -1) {
+		pr_warn("Exceeded Max HPET blocks\n");
+		return -ENOSPC;
+	}
+
+	ir_hpet[free].iommu = iommu;
+	ir_hpet[free].id    = scope->enumeration_id;
+	ir_hpet[free].bus   = bus;
+	ir_hpet[free].devfn = PCI_DEVFN(path->device, path->function);
+	pr_info("HPET id %d under DRHD base 0x%Lx\n",
+		scope->enumeration_id, drhd->address);
+
+	return 0;
 }
 
-static void ir_parse_one_ioapic_scope(struct acpi_dmar_device_scope *scope,
-				      struct intel_iommu *iommu)
+static int ir_parse_one_ioapic_scope(struct acpi_dmar_device_scope *scope,
+				     struct intel_iommu *iommu,
+				     struct acpi_dmar_hardware_unit *drhd)
 {
 	struct acpi_dmar_pci_path *path;
 	u8 bus;
-	int count;
+	int count, free = -1;
 
 	bus = scope->bus;
 	path = (struct acpi_dmar_pci_path *)(scope + 1);
@@ -750,54 +779,63 @@ static void ir_parse_one_ioapic_scope(struct acpi_dmar_device_scope *scope,
 		path++;
 	}
 
-	ir_ioapic[ir_ioapic_num].bus   = bus;
-	ir_ioapic[ir_ioapic_num].devfn = PCI_DEVFN(path->device, path->function);
-	ir_ioapic[ir_ioapic_num].iommu = iommu;
-	ir_ioapic[ir_ioapic_num].id    = scope->enumeration_id;
-	ir_ioapic_num++;
+	for (count = 0; count < MAX_IO_APICS; count++) {
+		if (ir_ioapic[count].iommu == iommu &&
+		    ir_ioapic[count].id == scope->enumeration_id)
+			return 0;
+		else if (ir_ioapic[count].iommu == NULL && free == -1)
+			free = count;
+	}
+	if (free == -1) {
+		pr_warn("Exceeded Max IO APICS\n");
+		return -ENOSPC;
+	}
+
+	ir_ioapic[free].bus   = bus;
+	ir_ioapic[free].devfn = PCI_DEVFN(path->device, path->function);
+	ir_ioapic[free].iommu = iommu;
+	ir_ioapic[free].id    = scope->enumeration_id;
+	pr_info("IOAPIC id %d under DRHD base  0x%Lx IOMMU %d\n",
+		scope->enumeration_id, drhd->address, iommu->seq_id);
+
+	return 0;
 }
 
 static int ir_parse_ioapic_hpet_scope(struct acpi_dmar_header *header,
 				      struct intel_iommu *iommu)
 {
+	int ret = 0;
 	struct acpi_dmar_hardware_unit *drhd;
 	struct acpi_dmar_device_scope *scope;
 	void *start, *end;
 
 	drhd = (struct acpi_dmar_hardware_unit *)header;
-
 	start = (void *)(drhd + 1);
 	end = ((void *)drhd) + header->length;
 
-	while (start < end) {
+	while (start < end && ret == 0) {
 		scope = start;
-		if (scope->entry_type == ACPI_DMAR_SCOPE_TYPE_IOAPIC) {
-			if (ir_ioapic_num == MAX_IO_APICS) {
-				printk(KERN_WARNING "Exceeded Max IO APICS\n");
-				return -1;
-			}
-
-			printk(KERN_INFO "IOAPIC id %d under DRHD base "
-			       " 0x%Lx IOMMU %d\n", scope->enumeration_id,
-			       drhd->address, iommu->seq_id);
+		if (scope->entry_type == ACPI_DMAR_SCOPE_TYPE_IOAPIC)
+			ret = ir_parse_one_ioapic_scope(scope, iommu, drhd);
+		else if (scope->entry_type == ACPI_DMAR_SCOPE_TYPE_HPET)
+			ret = ir_parse_one_hpet_scope(scope, iommu, drhd);
+		start += scope->length;
+	}
 
-			ir_parse_one_ioapic_scope(scope, iommu);
-		} else if (scope->entry_type == ACPI_DMAR_SCOPE_TYPE_HPET) {
-			if (ir_hpet_num == MAX_HPET_TBS) {
-				printk(KERN_WARNING "Exceeded Max HPET blocks\n");
-				return -1;
-			}
+	return ret;
+}
 
-			printk(KERN_INFO "HPET id %d under DRHD base"
-			       " 0x%Lx\n", scope->enumeration_id,
-			       drhd->address);
+static void ir_remove_ioapic_hpet_scope(struct intel_iommu *iommu)
+{
+	int i;
 
-			ir_parse_one_hpet_scope(scope, iommu);
-		}
-		start += scope->length;
-	}
+	for (i = 0; i < MAX_HPET_TBS; i++)
+		if (ir_hpet[i].iommu == iommu)
+			ir_hpet[i].iommu = NULL;
 
-	return 0;
+	for (i = 0; i < MAX_IO_APICS; i++)
+		if (ir_ioapic[i].iommu == iommu)
+			ir_ioapic[i].iommu = NULL;
 }
 
 /*
@@ -1173,7 +1211,85 @@ struct irq_remap_ops intel_irq_remap_ops = {
 	.setup_hpet_msi		= intel_setup_hpet_msi,
 };
 
+/*
+ * Support of Interrupt Remapping Unit Hotplug
+ */
+static int dmar_ir_add(struct dmar_drhd_unit *dmaru, struct intel_iommu *iommu)
+{
+	int ret;
+	int eim = x2apic_enabled();
+
+	if (eim && !ecap_eim_support(iommu->ecap)) {
+		pr_info("DRHD %Lx: EIM not supported by DRHD, ecap %Lx\n",
+			iommu->reg_phys, iommu->ecap);
+		return -ENODEV;
+	}
+
+	if (ir_parse_ioapic_hpet_scope(dmaru->hdr, iommu)) {
+		pr_warn("DRHD %Lx: failed to parse managed IOAPIC/HPET\n",
+			iommu->reg_phys);
+		return -ENODEV;
+	}
+
+	/* TODO: check all IOAPICs are covered by IOMMU */
+
+	/* Setup Interrupt-remapping now. */
+	ret = intel_setup_irq_remapping(iommu);
+	if (ret) {
+		pr_err("DRHD %Lx: failed to allocate resource\n",
+		       iommu->reg_phys);
+		ir_remove_ioapic_hpet_scope(iommu);
+		return ret;
+	}
+
+	if (!iommu->qi) {
+		/* Clear previous faults. */
+		dmar_fault(-1, iommu);
+		iommu_disable_irq_remapping(iommu);
+		dmar_disable_qi(iommu);
+	}
+
+	/* Enable queued invalidation */
+	ret = dmar_enable_qi(iommu);
+	if (!ret) {
+		iommu_set_irq_remapping(iommu, eim);
+	} else {
+		pr_err("DRHD %Lx: failed to enable queued invalidation, ecap %Lx, ret %d\n",
+		       iommu->reg_phys, iommu->ecap, ret);
+		intel_teardown_irq_remapping(iommu);
+		ir_remove_ioapic_hpet_scope(iommu);
+	}
+
+	return ret;
+}
+
 int dmar_ir_hotplug(struct dmar_drhd_unit *dmaru, bool insert)
 {
-	return irq_remapping_enabled ? -ENOSYS : 0;
+	int ret = 0;
+	struct intel_iommu *iommu = dmaru->iommu;
+
+	if (!irq_remapping_enabled)
+		return 0;
+	if (iommu == NULL)
+		return -EINVAL;
+	if (!ecap_ir_support(iommu->ecap))
+		return 0;
+
+	if (insert) {
+		if (!iommu->ir_table)
+			ret = dmar_ir_add(dmaru, iommu);
+	} else {
+		if (iommu->ir_table) {
+			if (!bitmap_empty(iommu->ir_table->bitmap,
+					  INTR_REMAP_TABLE_ENTRIES)) {
+				ret = -EBUSY;
+			} else {
+				iommu_disable_irq_remapping(iommu);
+				intel_teardown_irq_remapping(iommu);
+				ir_remove_ioapic_hpet_scope(iommu);
+			}
+		}
+	}
+
+	return ret;
 }
-- 
1.7.10.4


^ permalink raw reply related	[flat|nested] 35+ messages in thread

+	while (start < end && ret == 0) {
 		scope = start;
-		if (scope->entry_type == ACPI_DMAR_SCOPE_TYPE_IOAPIC) {
-			if (ir_ioapic_num == MAX_IO_APICS) {
-				printk(KERN_WARNING "Exceeded Max IO APICS\n");
-				return -1;
-			}
-
-			printk(KERN_INFO "IOAPIC id %d under DRHD base "
-			       " 0x%Lx IOMMU %d\n", scope->enumeration_id,
-			       drhd->address, iommu->seq_id);
+		if (scope->entry_type == ACPI_DMAR_SCOPE_TYPE_IOAPIC)
+			ret = ir_parse_one_ioapic_scope(scope, iommu, drhd);
+		else if (scope->entry_type == ACPI_DMAR_SCOPE_TYPE_HPET)
+			ret = ir_parse_one_hpet_scope(scope, iommu, drhd);
+		start += scope->length;
+	}
 
-			ir_parse_one_ioapic_scope(scope, iommu);
-		} else if (scope->entry_type == ACPI_DMAR_SCOPE_TYPE_HPET) {
-			if (ir_hpet_num == MAX_HPET_TBS) {
-				printk(KERN_WARNING "Exceeded Max HPET blocks\n");
-				return -1;
-			}
+	return ret;
+}
 
-			printk(KERN_INFO "HPET id %d under DRHD base"
-			       " 0x%Lx\n", scope->enumeration_id,
-			       drhd->address);
+static void ir_remove_ioapic_hpet_scope(struct intel_iommu *iommu)
+{
+	int i;
 
-			ir_parse_one_hpet_scope(scope, iommu);
-		}
-		start += scope->length;
-	}
+	for (i = 0; i < MAX_HPET_TBS; i++)
+		if (ir_hpet[i].iommu == iommu)
+			ir_hpet[i].iommu = NULL;
 
-	return 0;
+	for (i = 0; i < MAX_IO_APICS; i++)
+		if (ir_ioapic[i].iommu == iommu)
+			ir_ioapic[i].iommu = NULL;
 }
 
 /*
@@ -1173,7 +1211,85 @@ struct irq_remap_ops intel_irq_remap_ops = {
 	.setup_hpet_msi		= intel_setup_hpet_msi,
 };
 
+/*
+ * Support of Interrupt Remapping Unit Hotplug
+ */
+static int dmar_ir_add(struct dmar_drhd_unit *dmaru, struct intel_iommu *iommu)
+{
+	int ret;
+	int eim = x2apic_enabled();
+
+	if (eim && !ecap_eim_support(iommu->ecap)) {
+		pr_info("DRHD %Lx: EIM not supported by DRHD, ecap %Lx\n",
+			iommu->reg_phys, iommu->ecap);
+		return -ENODEV;
+	}
+
+	if (ir_parse_ioapic_hpet_scope(dmaru->hdr, iommu)) {
+		pr_warn("DRHD %Lx: failed to parse managed IOAPIC/HPET\n",
+			iommu->reg_phys);
+		return -ENODEV;
+	}
+
+	/* TODO: check all IOAPICs are covered by IOMMU */
+
+	/* Setup Interrupt-remapping now. */
+	ret = intel_setup_irq_remapping(iommu);
+	if (ret) {
+		pr_err("DRHD %Lx: failed to allocate resource\n",
+		       iommu->reg_phys);
+		ir_remove_ioapic_hpet_scope(iommu);
+		return ret;
+	}
+
+	if (!iommu->qi) {
+		/* Clear previous faults. */
+		dmar_fault(-1, iommu);
+		iommu_disable_irq_remapping(iommu);
+		dmar_disable_qi(iommu);
+	}
+
+	/* Enable queued invalidation */
+	ret = dmar_enable_qi(iommu);
+	if (!ret) {
+		iommu_set_irq_remapping(iommu, eim);
+	} else {
+		pr_err("DRHD %Lx: failed to enable queued invalidation, ecap %Lx, ret %d\n",
+		       iommu->reg_phys, iommu->ecap, ret);
+		intel_teardown_irq_remapping(iommu);
+		ir_remove_ioapic_hpet_scope(iommu);
+	}
+
+	return ret;
+}
+
 int dmar_ir_hotplug(struct dmar_drhd_unit *dmaru, bool insert)
 {
-	return irq_remapping_enabled ? -ENOSYS : 0;
+	int ret = 0;
+	struct intel_iommu *iommu = dmaru->iommu;
+
+	if (!irq_remapping_enabled)
+		return 0;
+	if (iommu == NULL)
+		return -EINVAL;
+	if (!ecap_ir_support(iommu->ecap))
+		return 0;
+
+	if (insert) {
+		if (!iommu->ir_table)
+			ret = dmar_ir_add(dmaru, iommu);
+	} else {
+		if (iommu->ir_table) {
+			if (!bitmap_empty(iommu->ir_table->bitmap,
+					  INTR_REMAP_TABLE_ENTRIES)) {
+				ret = -EBUSY;
+			} else {
+				iommu_disable_irq_remapping(iommu);
+				intel_teardown_irq_remapping(iommu);
+				ir_remove_ioapic_hpet_scope(iommu);
+			}
+		}
+	}
+
+	return ret;
 }
-- 
1.7.10.4

^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [Patch Part3 V6 5/8] iommu/vt-d: Enhance intel_irq_remapping driver to support DMAR unit hotplug
@ 2014-09-19  5:18   ` Jiang Liu
  0 siblings, 0 replies; 35+ messages in thread
From: Jiang Liu @ 2014-09-19  5:18 UTC (permalink / raw)
  To: Joerg Roedel, David Woodhouse, Yinghai Lu, Bjorn Helgaas,
	Dan Williams, Vinod Koul, Rafael J . Wysocki
  Cc: Jiang Liu, Ashok Raj, Yijing Wang, Tony Luck, iommu, linux-pci,
	linux-hotplug, linux-kernel, dmaengine

Implement required callback functions for intel_irq_remapping driver
to support DMAR unit hotplug.

Signed-off-by: Jiang Liu <jiang.liu@linux.intel.com>
---
 drivers/iommu/intel_irq_remapping.c |  226 ++++++++++++++++++++++++++---------
 1 file changed, 171 insertions(+), 55 deletions(-)

diff --git a/drivers/iommu/intel_irq_remapping.c b/drivers/iommu/intel_irq_remapping.c
index 9b140ed854ec..1edbbed8c6bc 100644
--- a/drivers/iommu/intel_irq_remapping.c
+++ b/drivers/iommu/intel_irq_remapping.c
@@ -36,7 +36,6 @@ struct hpet_scope {
 
 static struct ioapic_scope ir_ioapic[MAX_IO_APICS];
 static struct hpet_scope ir_hpet[MAX_HPET_TBS];
-static int ir_ioapic_num, ir_hpet_num;
 
 /*
  * Lock ordering:
@@ -206,7 +205,7 @@ static struct intel_iommu *map_hpet_to_ir(u8 hpet_id)
 	int i;
 
 	for (i = 0; i < MAX_HPET_TBS; i++)
-		if (ir_hpet[i].id == hpet_id)
+		if (ir_hpet[i].id == hpet_id && ir_hpet[i].iommu)
 			return ir_hpet[i].iommu;
 	return NULL;
 }
@@ -216,7 +215,7 @@ static struct intel_iommu *map_ioapic_to_ir(int apic)
 	int i;
 
 	for (i = 0; i < MAX_IO_APICS; i++)
-		if (ir_ioapic[i].id == apic)
+		if (ir_ioapic[i].id == apic && ir_ioapic[i].iommu)
 			return ir_ioapic[i].iommu;
 	return NULL;
 }
@@ -325,7 +324,7 @@ static int set_ioapic_sid(struct irte *irte, int apic)
 
 	down_read(&dmar_global_lock);
 	for (i = 0; i < MAX_IO_APICS; i++) {
-		if (ir_ioapic[i].id == apic) {
+		if (ir_ioapic[i].iommu && ir_ioapic[i].id == apic) {
 			sid = (ir_ioapic[i].bus << 8) | ir_ioapic[i].devfn;
 			break;
 		}
@@ -352,7 +351,7 @@ static int set_hpet_sid(struct irte *irte, u8 id)
 
 	down_read(&dmar_global_lock);
 	for (i = 0; i < MAX_HPET_TBS; i++) {
-		if (ir_hpet[i].id == id) {
+		if (ir_hpet[i].iommu && ir_hpet[i].id == id) {
 			sid = (ir_hpet[i].bus << 8) | ir_hpet[i].devfn;
 			break;
 		}
@@ -474,17 +473,17 @@ static void iommu_set_irq_remapping(struct intel_iommu *iommu, int mode)
 	raw_spin_unlock_irqrestore(&iommu->register_lock, flags);
 }
 
-
-static int intel_setup_irq_remapping(struct intel_iommu *iommu, int mode)
+static int intel_setup_irq_remapping(struct intel_iommu *iommu)
 {
 	struct ir_table *ir_table;
 	struct page *pages;
 	unsigned long *bitmap;
 
-	ir_table = iommu->ir_table = kzalloc(sizeof(struct ir_table),
-					     GFP_ATOMIC);
+	if (iommu->ir_table)
+		return 0;
 
-	if (!iommu->ir_table)
+	ir_table = kzalloc(sizeof(struct ir_table), GFP_ATOMIC);
+	if (!ir_table)
 		return -ENOMEM;
 
 	pages = alloc_pages_node(iommu->node, GFP_ATOMIC | __GFP_ZERO,
@@ -493,7 +492,7 @@ static int intel_setup_irq_remapping(struct intel_iommu *iommu, int mode)
 	if (!pages) {
 		pr_err("IR%d: failed to allocate pages of order %d\n",
 		       iommu->seq_id, INTR_REMAP_PAGE_ORDER);
-		kfree(iommu->ir_table);
+		kfree(ir_table);
 		return -ENOMEM;
 	}
 
@@ -508,11 +507,22 @@ static int intel_setup_irq_remapping(struct intel_iommu *iommu, int mode)
 
 	ir_table->base = page_address(pages);
 	ir_table->bitmap = bitmap;
+	iommu->ir_table = ir_table;
 
-	iommu_set_irq_remapping(iommu, mode);
 	return 0;
 }
 
+static void intel_teardown_irq_remapping(struct intel_iommu *iommu)
+{
+	if (iommu && iommu->ir_table) {
+		free_pages((unsigned long)iommu->ir_table->base,
+			   INTR_REMAP_PAGE_ORDER);
+		kfree(iommu->ir_table->bitmap);
+		kfree(iommu->ir_table);
+		iommu->ir_table = NULL;
+	}
+}
+
 /*
  * Disable Interrupt Remapping.
  */
@@ -667,9 +677,10 @@ static int __init intel_enable_irq_remapping(void)
 		if (!ecap_ir_support(iommu->ecap))
 			continue;
 
-		if (intel_setup_irq_remapping(iommu, eim))
+		if (intel_setup_irq_remapping(iommu))
 			goto error;
 
+		iommu_set_irq_remapping(iommu, eim);
 		setup = 1;
 	}
 
@@ -700,12 +711,13 @@ error:
 	return -1;
 }
 
-static void ir_parse_one_hpet_scope(struct acpi_dmar_device_scope *scope,
-				      struct intel_iommu *iommu)
+static int ir_parse_one_hpet_scope(struct acpi_dmar_device_scope *scope,
+				   struct intel_iommu *iommu,
+				   struct acpi_dmar_hardware_unit *drhd)
 {
 	struct acpi_dmar_pci_path *path;
 	u8 bus;
-	int count;
+	int count, free = -1;
 
 	bus = scope->bus;
 	path = (struct acpi_dmar_pci_path *)(scope + 1);
@@ -721,19 +733,36 @@ static void ir_parse_one_hpet_scope(struct acpi_dmar_device_scope *scope,
 					   PCI_SECONDARY_BUS);
 		path++;
 	}
-	ir_hpet[ir_hpet_num].bus   = bus;
-	ir_hpet[ir_hpet_num].devfn = PCI_DEVFN(path->device, path->function);
-	ir_hpet[ir_hpet_num].iommu = iommu;
-	ir_hpet[ir_hpet_num].id    = scope->enumeration_id;
-	ir_hpet_num++;
+
+	for (count = 0; count < MAX_HPET_TBS; count++) {
+		if (ir_hpet[count].iommu == iommu &&
+		    ir_hpet[count].id == scope->enumeration_id)
+			return 0;
+		else if (ir_hpet[count].iommu == NULL && free == -1)
+			free = count;
+	}
+	if (free == -1) {
+		pr_warn("Exceeded Max HPET blocks\n");
+		return -ENOSPC;
+	}
+
+	ir_hpet[free].iommu = iommu;
+	ir_hpet[free].id    = scope->enumeration_id;
+	ir_hpet[free].bus   = bus;
+	ir_hpet[free].devfn = PCI_DEVFN(path->device, path->function);
+	pr_info("HPET id %d under DRHD base 0x%Lx\n",
+		scope->enumeration_id, drhd->address);
+
+	return 0;
 }
 
-static void ir_parse_one_ioapic_scope(struct acpi_dmar_device_scope *scope,
-				      struct intel_iommu *iommu)
+static int ir_parse_one_ioapic_scope(struct acpi_dmar_device_scope *scope,
+				     struct intel_iommu *iommu,
+				     struct acpi_dmar_hardware_unit *drhd)
 {
 	struct acpi_dmar_pci_path *path;
 	u8 bus;
-	int count;
+	int count, free = -1;
 
 	bus = scope->bus;
 	path = (struct acpi_dmar_pci_path *)(scope + 1);
@@ -750,54 +779,63 @@ static void ir_parse_one_ioapic_scope(struct acpi_dmar_device_scope *scope,
 		path++;
 	}
 
-	ir_ioapic[ir_ioapic_num].bus   = bus;
-	ir_ioapic[ir_ioapic_num].devfn = PCI_DEVFN(path->device, path->function);
-	ir_ioapic[ir_ioapic_num].iommu = iommu;
-	ir_ioapic[ir_ioapic_num].id    = scope->enumeration_id;
-	ir_ioapic_num++;
+	for (count = 0; count < MAX_IO_APICS; count++) {
+		if (ir_ioapic[count].iommu == iommu &&
+		    ir_ioapic[count].id == scope->enumeration_id)
+			return 0;
+		else if (ir_ioapic[count].iommu == NULL && free == -1)
+			free = count;
+	}
+	if (free == -1) {
+		pr_warn("Exceeded Max IO APICS\n");
+		return -ENOSPC;
+	}
+
+	ir_ioapic[free].bus   = bus;
+	ir_ioapic[free].devfn = PCI_DEVFN(path->device, path->function);
+	ir_ioapic[free].iommu = iommu;
+	ir_ioapic[free].id    = scope->enumeration_id;
+	pr_info("IOAPIC id %d under DRHD base  0x%Lx IOMMU %d\n",
+		scope->enumeration_id, drhd->address, iommu->seq_id);
+
+	return 0;
 }
 
 static int ir_parse_ioapic_hpet_scope(struct acpi_dmar_header *header,
 				      struct intel_iommu *iommu)
 {
+	int ret = 0;
 	struct acpi_dmar_hardware_unit *drhd;
 	struct acpi_dmar_device_scope *scope;
 	void *start, *end;
 
 	drhd = (struct acpi_dmar_hardware_unit *)header;
-
 	start = (void *)(drhd + 1);
 	end = ((void *)drhd) + header->length;
 
-	while (start < end) {
+	while (start < end && ret == 0) {
 		scope = start;
-		if (scope->entry_type == ACPI_DMAR_SCOPE_TYPE_IOAPIC) {
-			if (ir_ioapic_num == MAX_IO_APICS) {
-				printk(KERN_WARNING "Exceeded Max IO APICS\n");
-				return -1;
-			}
-
-			printk(KERN_INFO "IOAPIC id %d under DRHD base "
-			       " 0x%Lx IOMMU %d\n", scope->enumeration_id,
-			       drhd->address, iommu->seq_id);
+		if (scope->entry_type == ACPI_DMAR_SCOPE_TYPE_IOAPIC)
+			ret = ir_parse_one_ioapic_scope(scope, iommu, drhd);
+		else if (scope->entry_type == ACPI_DMAR_SCOPE_TYPE_HPET)
+			ret = ir_parse_one_hpet_scope(scope, iommu, drhd);
+		start += scope->length;
+	}
 
-			ir_parse_one_ioapic_scope(scope, iommu);
-		} else if (scope->entry_type == ACPI_DMAR_SCOPE_TYPE_HPET) {
-			if (ir_hpet_num == MAX_HPET_TBS) {
-				printk(KERN_WARNING "Exceeded Max HPET blocks\n");
-				return -1;
-			}
+	return ret;
+}
 
-			printk(KERN_INFO "HPET id %d under DRHD base"
-			       " 0x%Lx\n", scope->enumeration_id,
-			       drhd->address);
+static void ir_remove_ioapic_hpet_scope(struct intel_iommu *iommu)
+{
+	int i;
 
-			ir_parse_one_hpet_scope(scope, iommu);
-		}
-		start += scope->length;
-	}
+	for (i = 0; i < MAX_HPET_TBS; i++)
+		if (ir_hpet[i].iommu == iommu)
+			ir_hpet[i].iommu = NULL;
 
-	return 0;
+	for (i = 0; i < MAX_IO_APICS; i++)
+		if (ir_ioapic[i].iommu == iommu)
+			ir_ioapic[i].iommu = NULL;
 }
 
 /*
@@ -1173,7 +1211,85 @@ struct irq_remap_ops intel_irq_remap_ops = {
 	.setup_hpet_msi		= intel_setup_hpet_msi,
 };
 
+/*
+ * Support of Interrupt Remapping Unit Hotplug
+ */
+static int dmar_ir_add(struct dmar_drhd_unit *dmaru, struct intel_iommu *iommu)
+{
+	int ret;
+	int eim = x2apic_enabled();
+
+	if (eim && !ecap_eim_support(iommu->ecap)) {
+		pr_info("DRHD %Lx: EIM not supported by DRHD, ecap %Lx\n",
+			iommu->reg_phys, iommu->ecap);
+		return -ENODEV;
+	}
+
+	if (ir_parse_ioapic_hpet_scope(dmaru->hdr, iommu)) {
+		pr_warn("DRHD %Lx: failed to parse managed IOAPIC/HPET\n",
+			iommu->reg_phys);
+		return -ENODEV;
+	}
+
+	/* TODO: check all IOAPICs are covered by IOMMU */
+
+	/* Setup Interrupt-remapping now. */
+	ret = intel_setup_irq_remapping(iommu);
+	if (ret) {
+		pr_err("DRHD %Lx: failed to allocate resource\n",
+		       iommu->reg_phys);
+		ir_remove_ioapic_hpet_scope(iommu);
+		return ret;
+	}
+
+	if (!iommu->qi) {
+		/* Clear previous faults. */
+		dmar_fault(-1, iommu);
+		iommu_disable_irq_remapping(iommu);
+		dmar_disable_qi(iommu);
+	}
+
+	/* Enable queued invalidation */
+	ret = dmar_enable_qi(iommu);
+	if (!ret) {
+		iommu_set_irq_remapping(iommu, eim);
+	} else {
+		pr_err("DRHD %Lx: failed to enable queued invalidation, ecap %Lx, ret %d\n",
+		       iommu->reg_phys, iommu->ecap, ret);
+		intel_teardown_irq_remapping(iommu);
+		ir_remove_ioapic_hpet_scope(iommu);
+	}
+
+	return ret;
+}
+
 int dmar_ir_hotplug(struct dmar_drhd_unit *dmaru, bool insert)
 {
-	return irq_remapping_enabled ? -ENOSYS : 0;
+	int ret = 0;
+	struct intel_iommu *iommu = dmaru->iommu;
+
+	if (!irq_remapping_enabled)
+		return 0;
+	if (iommu == NULL)
+		return -EINVAL;
+	if (!ecap_ir_support(iommu->ecap))
+		return 0;
+
+	if (insert) {
+		if (!iommu->ir_table)
+			ret = dmar_ir_add(dmaru, iommu);
+	} else {
+		if (iommu->ir_table) {
+			if (!bitmap_empty(iommu->ir_table->bitmap,
+					  INTR_REMAP_TABLE_ENTRIES)) {
+				ret = -EBUSY;
+			} else {
+				iommu_disable_irq_remapping(iommu);
+				intel_teardown_irq_remapping(iommu);
+				ir_remove_ioapic_hpet_scope(iommu);
+			}
+		}
+	}
+
+	return ret;
 }
-- 
1.7.10.4


^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [Patch Part3 V6 6/8] iommu/vt-d: Enhance error recovery in function intel_enable_irq_remapping()
@ 2014-09-19  5:18   ` Jiang Liu
  0 siblings, 0 replies; 35+ messages in thread
From: Jiang Liu @ 2014-09-19  5:18 UTC (permalink / raw)
  To: Joerg Roedel, David Woodhouse, Yinghai Lu, Bjorn Helgaas,
	Dan Williams, Vinod Koul, Rafael J . Wysocki
  Cc: Jiang Liu, Ashok Raj, Yijing Wang, Tony Luck, iommu, linux-pci,
	linux-hotplug, linux-kernel, dmaengine

Enhance error recovery in function intel_enable_irq_remapping()
by tearing down all created data structures.

Signed-off-by: Jiang Liu <jiang.liu@linux.intel.com>
Reviewed-by: Yijing Wang <wangyijing@huawei.com>
---
 drivers/iommu/intel_irq_remapping.c |    8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/drivers/iommu/intel_irq_remapping.c b/drivers/iommu/intel_irq_remapping.c
index 1edbbed8c6bc..ddf98a8782a2 100644
--- a/drivers/iommu/intel_irq_remapping.c
+++ b/drivers/iommu/intel_irq_remapping.c
@@ -701,9 +701,11 @@ static int __init intel_enable_irq_remapping(void)
 	return eim ? IRQ_REMAP_X2APIC_MODE : IRQ_REMAP_XAPIC_MODE;
 
 error:
-	/*
-	 * handle error condition gracefully here!
-	 */
+	for_each_iommu(iommu, drhd)
+		if (ecap_ir_support(iommu->ecap)) {
+			iommu_disable_irq_remapping(iommu);
+			intel_teardown_irq_remapping(iommu);
+		}
 
 	if (x2apic_present)
 		pr_warn("Failed to enable irq remapping.  You are vulnerable to irq-injection attacks.\n");
-- 
1.7.10.4


^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [Patch Part3 V6 7/8] iommu/vt-d: Enhance intel-iommu driver to support DMAR unit hotplug
@ 2014-09-19  5:18   ` Jiang Liu
  0 siblings, 0 replies; 35+ messages in thread
From: Jiang Liu @ 2014-09-19  5:18 UTC (permalink / raw)
  To: Joerg Roedel, David Woodhouse, Yinghai Lu, Bjorn Helgaas,
	Dan Williams, Vinod Koul, Rafael J . Wysocki
  Cc: Jiang Liu, Ashok Raj, Yijing Wang, Tony Luck, iommu, linux-pci,
	linux-hotplug, linux-kernel, dmaengine

Implement required callback functions for intel-iommu driver
to support DMAR unit hotplug.

Signed-off-by: Jiang Liu <jiang.liu@linux.intel.com>
Reviewed-by: Yijing Wang <wangyijing@huawei.com>
---
 drivers/iommu/intel-iommu.c |  206 +++++++++++++++++++++++++++++++------------
 1 file changed, 151 insertions(+), 55 deletions(-)

diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index 70d9d47eaeda..c2d369524960 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -1125,8 +1125,11 @@ static int iommu_alloc_root_entry(struct intel_iommu *iommu)
 	unsigned long flags;
 
 	root = (struct root_entry *)alloc_pgtable_page(iommu->node);
-	if (!root)
+	if (!root) {
+		pr_err("IOMMU: allocating root entry for %s failed\n",
+			iommu->name);
 		return -ENOMEM;
+	}
 
 	__iommu_flush_cache(iommu, root, ROOT_SIZE);
 
@@ -1466,7 +1469,7 @@ static int iommu_init_domains(struct intel_iommu *iommu)
 	return 0;
 }
 
-static void free_dmar_iommu(struct intel_iommu *iommu)
+static void disable_dmar_iommu(struct intel_iommu *iommu)
 {
 	struct dmar_domain *domain;
 	int i;
@@ -1490,11 +1493,16 @@ static void free_dmar_iommu(struct intel_iommu *iommu)
 
 	if (iommu->gcmd & DMA_GCMD_TE)
 		iommu_disable_translation(iommu);
+}
 
-	kfree(iommu->domains);
-	kfree(iommu->domain_ids);
-	iommu->domains = NULL;
-	iommu->domain_ids = NULL;
+static void free_dmar_iommu(struct intel_iommu *iommu)
+{
+	if ((iommu->domains) && (iommu->domain_ids)) {
+		kfree(iommu->domains);
+		kfree(iommu->domain_ids);
+		iommu->domains = NULL;
+		iommu->domain_ids = NULL;
+	}
 
 	g_iommus[iommu->seq_id] = NULL;
 
@@ -2701,6 +2709,41 @@ static int __init iommu_prepare_static_identity_mapping(int hw)
 	return 0;
 }
 
+static void intel_iommu_init_qi(struct intel_iommu *iommu)
+{
+	/*
+	 * Start from the sane iommu hardware state.
+	 * If the queued invalidation is already initialized by us
+	 * (for example, while enabling interrupt-remapping) then
+	 * we got the things already rolling from a sane state.
+	 */
+	if (!iommu->qi) {
+		/*
+		 * Clear any previous faults.
+		 */
+		dmar_fault(-1, iommu);
+		/*
+		 * Disable queued invalidation if supported and already enabled
+		 * before OS handover.
+		 */
+		dmar_disable_qi(iommu);
+	}
+
+	if (dmar_enable_qi(iommu)) {
+		/*
+		 * Queued Invalidate not enabled, use Register Based Invalidate
+		 */
+		iommu->flush.flush_context = __iommu_flush_context;
+		iommu->flush.flush_iotlb = __iommu_flush_iotlb;
+		pr_info("IOMMU: %s using Register based invalidation\n",
+			iommu->name);
+	} else {
+		iommu->flush.flush_context = qi_flush_context;
+		iommu->flush.flush_iotlb = qi_flush_iotlb;
+		pr_info("IOMMU: %s using Queued invalidation\n", iommu->name);
+	}
+}
+
 static int __init init_dmars(void)
 {
 	struct dmar_drhd_unit *drhd;
@@ -2729,6 +2772,10 @@ static int __init init_dmars(void)
 			  DMAR_UNITS_SUPPORTED);
 	}
 
+	/* Preallocate enough resources for IOMMU hot-addition */
+	if (g_num_of_iommus < DMAR_UNITS_SUPPORTED)
+		g_num_of_iommus = DMAR_UNITS_SUPPORTED;
+
 	g_iommus = kcalloc(g_num_of_iommus, sizeof(struct intel_iommu *),
 			GFP_KERNEL);
 	if (!g_iommus) {
@@ -2757,58 +2804,14 @@ static int __init init_dmars(void)
 		 * among all IOMMU's. Need to Split it later.
 		 */
 		ret = iommu_alloc_root_entry(iommu);
-		if (ret) {
-			printk(KERN_ERR "IOMMU: allocate root entry failed\n");
+		if (ret)
 			goto free_iommu;
-		}
 		if (!ecap_pass_through(iommu->ecap))
 			hw_pass_through = 0;
 	}
 
-	/*
-	 * Start from the sane iommu hardware state.
-	 */
-	for_each_active_iommu(iommu, drhd) {
-		/*
-		 * If the queued invalidation is already initialized by us
-		 * (for example, while enabling interrupt-remapping) then
-		 * we got the things already rolling from a sane state.
-		 */
-		if (iommu->qi)
-			continue;
-
-		/*
-		 * Clear any previous faults.
-		 */
-		dmar_fault(-1, iommu);
-		/*
-		 * Disable queued invalidation if supported and already enabled
-		 * before OS handover.
-		 */
-		dmar_disable_qi(iommu);
-	}
-
-	for_each_active_iommu(iommu, drhd) {
-		if (dmar_enable_qi(iommu)) {
-			/*
-			 * Queued Invalidate not enabled, use Register Based
-			 * Invalidate
-			 */
-			iommu->flush.flush_context = __iommu_flush_context;
-			iommu->flush.flush_iotlb = __iommu_flush_iotlb;
-			printk(KERN_INFO "IOMMU %d 0x%Lx: using Register based "
-			       "invalidation\n",
-				iommu->seq_id,
-			       (unsigned long long)drhd->reg_base_addr);
-		} else {
-			iommu->flush.flush_context = qi_flush_context;
-			iommu->flush.flush_iotlb = qi_flush_iotlb;
-			printk(KERN_INFO "IOMMU %d 0x%Lx: using Queued "
-			       "invalidation\n",
-				iommu->seq_id,
-			       (unsigned long long)drhd->reg_base_addr);
-		}
-	}
+	for_each_active_iommu(iommu, drhd)
+		intel_iommu_init_qi(iommu);
 
 	if (iommu_pass_through)
 		iommu_identity_mapping |= IDENTMAP_ALL;
@@ -2894,8 +2897,10 @@ static int __init init_dmars(void)
 	return 0;
 
 free_iommu:
-	for_each_active_iommu(iommu, drhd)
+	for_each_active_iommu(iommu, drhd) {
+		disable_dmar_iommu(iommu);
 		free_dmar_iommu(iommu);
+	}
 	kfree(deferred_flush);
 free_g_iommus:
 	kfree(g_iommus);
@@ -3801,9 +3806,100 @@ int dmar_check_one_atsr(struct acpi_dmar_header *hdr, void *arg)
 	return 0;
 }
 
+static int intel_iommu_add(struct dmar_drhd_unit *dmaru)
+{
+	int sp, ret = 0;
+	struct intel_iommu *iommu = dmaru->iommu;
+
+	if (g_iommus[iommu->seq_id])
+		return 0;
+
+	if (hw_pass_through && !ecap_pass_through(iommu->ecap)) {
+		pr_warn("IOMMU: %s doesn't support hardware pass through.\n",
+			iommu->name);
+		return -ENXIO;
+	}
+	if (!ecap_sc_support(iommu->ecap) &&
+	    domain_update_iommu_snooping(iommu)) {
+		pr_warn("IOMMU: %s doesn't support snooping.\n",
+			iommu->name);
+		return -ENXIO;
+	}
+	sp = domain_update_iommu_superpage(iommu) - 1;
+	if (sp >= 0 && !(cap_super_page_val(iommu->cap) & (1 << sp))) {
+		pr_warn("IOMMU: %s doesn't support large page.\n",
+			iommu->name);
+		return -ENXIO;
+	}
+
+	/*
+	 * Disable translation if already enabled prior to OS handover.
+	 */
+	if (iommu->gcmd & DMA_GCMD_TE)
+		iommu_disable_translation(iommu);
+
+	g_iommus[iommu->seq_id] = iommu;
+	ret = iommu_init_domains(iommu);
+	if (ret == 0)
+		ret = iommu_alloc_root_entry(iommu);
+	if (ret)
+		goto out;
+
+	if (dmaru->ignored) {
+		/*
+		 * we always have to disable PMRs or DMA may fail on this device
+		 */
+		if (force_on)
+			iommu_disable_protect_mem_regions(iommu);
+		return 0;
+	}
+
+	intel_iommu_init_qi(iommu);
+	iommu_flush_write_buffer(iommu);
+	ret = dmar_set_interrupt(iommu);
+	if (ret)
+		goto disable_iommu;
+
+	iommu_set_root_entry(iommu);
+	iommu->flush.flush_context(iommu, 0, 0, 0, DMA_CCMD_GLOBAL_INVL);
+	iommu->flush.flush_iotlb(iommu, 0, 0, 0, DMA_TLB_GLOBAL_FLUSH);
+	iommu_enable_translation(iommu);
+
+	if (si_domain) {
+		ret = iommu_attach_domain(si_domain, iommu);
+		if (ret < 0 || si_domain->id != ret)
+			goto disable_iommu;
+		domain_attach_iommu(si_domain, iommu);
+	}
+
+	iommu_disable_protect_mem_regions(iommu);
+	return 0;
+
+disable_iommu:
+	disable_dmar_iommu(iommu);
+out:
+	free_dmar_iommu(iommu);
+	return ret;
+}
+
 int dmar_iommu_hotplug(struct dmar_drhd_unit *dmaru, bool insert)
 {
-	return intel_iommu_enabled ? -ENOSYS : 0;
+	int ret = 0;
+	struct intel_iommu *iommu = dmaru->iommu;
+
+	if (!intel_iommu_enabled)
+		return 0;
+	if (iommu == NULL)
+		return -EINVAL;
+
+	if (insert) {
+		ret = intel_iommu_add(dmaru);
+	} else {
+		disable_dmar_iommu(iommu);
+		free_dmar_iommu(iommu);
+	}
+
+	return ret;
 }
 
 static void intel_iommu_free_dmars(void)
-- 
1.7.10.4


^ permalink raw reply related	[flat|nested] 35+ messages in thread

-		 * before OS handover.
-		 */
-		dmar_disable_qi(iommu);
-	}
-
-	for_each_active_iommu(iommu, drhd) {
-		if (dmar_enable_qi(iommu)) {
-			/*
-			 * Queued Invalidate not enabled, use Register Based
-			 * Invalidate
-			 */
-			iommu->flush.flush_context = __iommu_flush_context;
-			iommu->flush.flush_iotlb = __iommu_flush_iotlb;
-			printk(KERN_INFO "IOMMU %d 0x%Lx: using Register based "
-			       "invalidation\n",
-				iommu->seq_id,
-			       (unsigned long long)drhd->reg_base_addr);
-		} else {
-			iommu->flush.flush_context = qi_flush_context;
-			iommu->flush.flush_iotlb = qi_flush_iotlb;
-			printk(KERN_INFO "IOMMU %d 0x%Lx: using Queued "
-			       "invalidation\n",
-				iommu->seq_id,
-			       (unsigned long long)drhd->reg_base_addr);
-		}
-	}
+	for_each_active_iommu(iommu, drhd)
+		intel_iommu_init_qi(iommu);
 
 	if (iommu_pass_through)
 		iommu_identity_mapping |= IDENTMAP_ALL;
@@ -2894,8 +2897,10 @@ static int __init init_dmars(void)
 	return 0;
 
 free_iommu:
-	for_each_active_iommu(iommu, drhd)
+	for_each_active_iommu(iommu, drhd) {
+		disable_dmar_iommu(iommu);
 		free_dmar_iommu(iommu);
+	}
 	kfree(deferred_flush);
 free_g_iommus:
 	kfree(g_iommus);
@@ -3801,9 +3806,100 @@ int dmar_check_one_atsr(struct acpi_dmar_header *hdr, void *arg)
 	return 0;
 }
 
+static int intel_iommu_add(struct dmar_drhd_unit *dmaru)
+{
+	int sp, ret = 0;
+	struct intel_iommu *iommu = dmaru->iommu;
+
+	if (g_iommus[iommu->seq_id])
+		return 0;
+
+	if (hw_pass_through && !ecap_pass_through(iommu->ecap)) {
+		pr_warn("IOMMU: %s doesn't support hardware pass through.\n",
+			iommu->name);
+		return -ENXIO;
+	}
+	if (!ecap_sc_support(iommu->ecap) &&
+	    domain_update_iommu_snooping(iommu)) {
+		pr_warn("IOMMU: %s doesn't support snooping.\n",
+			iommu->name);
+		return -ENXIO;
+	}
+	sp = domain_update_iommu_superpage(iommu) - 1;
+	if (sp >= 0 && !(cap_super_page_val(iommu->cap) & (1 << sp))) {
+		pr_warn("IOMMU: %s doesn't support large page.\n",
+			iommu->name);
+		return -ENXIO;
+	}
+
+	/*
+	 * Disable translation if already enabled prior to OS handover.
+	 */
+	if (iommu->gcmd & DMA_GCMD_TE)
+		iommu_disable_translation(iommu);
+
+	g_iommus[iommu->seq_id] = iommu;
+	ret = iommu_init_domains(iommu);
+	if (ret == 0)
+		ret = iommu_alloc_root_entry(iommu);
+	if (ret)
+		goto out;
+
+	if (dmaru->ignored) {
+		/*
+		 * we always have to disable PMRs or DMA may fail on this device
+		 */
+		if (force_on)
+			iommu_disable_protect_mem_regions(iommu);
+		return 0;
+	}
+
+	intel_iommu_init_qi(iommu);
+	iommu_flush_write_buffer(iommu);
+	ret = dmar_set_interrupt(iommu);
+	if (ret)
+		goto disable_iommu;
+
+	iommu_set_root_entry(iommu);
+	iommu->flush.flush_context(iommu, 0, 0, 0, DMA_CCMD_GLOBAL_INVL);
+	iommu->flush.flush_iotlb(iommu, 0, 0, 0, DMA_TLB_GLOBAL_FLUSH);
+	iommu_enable_translation(iommu);
+
+	if (si_domain) {
+		ret = iommu_attach_domain(si_domain, iommu);
+		if (ret < 0 || si_domain->id != ret)
+			goto disable_iommu;
+		domain_attach_iommu(si_domain, iommu);
+	}
+
+	iommu_disable_protect_mem_regions(iommu);
+	return 0;
+
+disable_iommu:
+	disable_dmar_iommu(iommu);
+out:
+	free_dmar_iommu(iommu);
+	return ret;
+}
+
 int dmar_iommu_hotplug(struct dmar_drhd_unit *dmaru, bool insert)
 {
-	return intel_iommu_enabled ? -ENOSYS : 0;
+	int ret = 0;
+	struct intel_iommu *iommu = dmaru->iommu;
+
+	if (!intel_iommu_enabled)
+		return 0;
+	if (iommu == NULL)
+		return -EINVAL;
+
+	if (insert) {
+		ret = intel_iommu_add(dmaru);
+	} else {
+		disable_dmar_iommu(iommu);
+		free_dmar_iommu(iommu);
+	}
+
+	return ret;
 }
 
 static void intel_iommu_free_dmars(void)
-- 
1.7.10.4

^ permalink raw reply related	[flat|nested] 35+ messages in thread
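The hot-add path added above in intel_iommu_add() acquires resources step by step (domains, root entry, queued invalidation, interrupt) and unwinds on failure through the `disable_iommu` and `out` labels, mirroring how the `free_iommu` path in init_dmars() now calls disable_dmar_iommu() before free_dmar_iommu(). A minimal standalone sketch of that goto-unwind idiom, using hypothetical stand-in functions rather than the driver's real ones:

```c
#include <assert.h>

/* Hypothetical stand-ins for the driver's real steps
 * (iommu_init_domains(), dmar_set_interrupt(), ...). */
static int domains_ready, irq_ready, translating;

static int init_domains(void)      { domains_ready = 1; return 0; }
static void free_unit(void)        { domains_ready = 0; }
static int set_interrupt(int fail) { if (fail) return -1; irq_ready = 1; return 0; }
static void enable_unit(void)      { translating = 1; }
static void disable_unit_hw(void)  { translating = 0; irq_ready = 0; }

/*
 * Mirrors the shape of intel_iommu_add(): acquire resources in order;
 * any failure jumps to a label that tears down everything acquired so
 * far, in reverse order.  disable_unit_hw() is written to be safe even
 * when the unit was never fully enabled, just as disable_dmar_iommu()
 * is in the patch.
 */
static int unit_add(int fail_irq)
{
	int ret;

	ret = init_domains();
	if (ret)
		goto out;

	ret = set_interrupt(fail_irq);
	if (ret)
		goto err_disable;

	enable_unit();
	return 0;

err_disable:
	disable_unit_hw();
out:
	free_unit();
	return ret;
}
```

Splitting free_dmar_iommu() into disable + free, as the patch does, is what makes this unwind order expressible: hardware teardown and memory teardown get separate labels.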

* [Patch Part3 V6 8/8] pci, ACPI, iommu: Enhance pci_root to support DMAR device hotplug
@ 2014-09-19  5:18   ` Jiang Liu
  0 siblings, 0 replies; 35+ messages in thread
From: Jiang Liu @ 2014-09-19  5:18 UTC (permalink / raw)
  To: Joerg Roedel, David Woodhouse, Yinghai Lu, Bjorn Helgaas,
	Dan Williams, Vinod Koul, Rafael J . Wysocki
  Cc: Jiang Liu, Ashok Raj, Yijing Wang, Tony Luck, iommu, linux-pci,
	linux-hotplug, linux-kernel, dmaengine

Finally, enhance the pci_root driver to support DMAR device hotplug
when hot-plugging PCI host bridges.

Signed-off-by: Jiang Liu <jiang.liu@linux.intel.com>
Reviewed-by: Yijing Wang <wangyijing@huawei.com>
---
 drivers/acpi/pci_root.c |   16 ++++++++++++++--
 1 file changed, 14 insertions(+), 2 deletions(-)

diff --git a/drivers/acpi/pci_root.c b/drivers/acpi/pci_root.c
index e6ae603ed1a1..4e177daa18e3 100644
--- a/drivers/acpi/pci_root.c
+++ b/drivers/acpi/pci_root.c
@@ -33,6 +33,7 @@
 #include <linux/pci.h>
 #include <linux/pci-acpi.h>
 #include <linux/pci-aspm.h>
+#include <linux/dmar.h>
 #include <linux/acpi.h>
 #include <linux/slab.h>
 #include <acpi/apei.h>	/* for acpi_hest_init() */
@@ -511,6 +512,7 @@ static int acpi_pci_root_add(struct acpi_device *device,
 	struct acpi_pci_root *root;
 	acpi_handle handle = device->handle;
 	int no_aspm = 0, clear_aspm = 0;
+	bool hotadd = system_state != SYSTEM_BOOTING;
 
 	root = kzalloc(sizeof(struct acpi_pci_root), GFP_KERNEL);
 	if (!root)
@@ -557,6 +559,11 @@ static int acpi_pci_root_add(struct acpi_device *device,
 	strcpy(acpi_device_class(device), ACPI_PCI_ROOT_CLASS);
 	device->driver_data = root;
 
+	if (hotadd && dmar_device_add(handle)) {
+		result = -ENXIO;
+		goto end;
+	}
+
 	pr_info(PREFIX "%s [%s] (domain %04x %pR)\n",
 	       acpi_device_name(device), acpi_device_bid(device),
 	       root->segment, &root->secondary);
@@ -583,7 +590,7 @@ static int acpi_pci_root_add(struct acpi_device *device,
 			root->segment, (unsigned int)root->secondary.start);
 		device->driver_data = NULL;
 		result = -ENODEV;
-		goto end;
+		goto remove_dmar;
 	}
 
 	if (clear_aspm) {
@@ -597,7 +604,7 @@ static int acpi_pci_root_add(struct acpi_device *device,
 	if (device->wakeup.flags.run_wake)
 		device_set_run_wake(root->bus->bridge, true);
 
-	if (system_state != SYSTEM_BOOTING) {
+	if (hotadd) {
 		pcibios_resource_survey_bus(root->bus);
 		pci_assign_unassigned_root_bus_resources(root->bus);
 	}
@@ -607,6 +614,9 @@ static int acpi_pci_root_add(struct acpi_device *device,
 	pci_unlock_rescan_remove();
 	return 1;
 
+remove_dmar:
+	if (hotadd)
+		dmar_device_remove(handle);
 end:
 	kfree(root);
 	return result;
@@ -625,6 +635,8 @@ static void acpi_pci_root_remove(struct acpi_device *device)
 
 	pci_remove_root_bus(root->bus);
 
+	dmar_device_remove(device->handle);
+
 	pci_unlock_rescan_remove();
 
 	kfree(root);
-- 
1.7.10.4


^ permalink raw reply related	[flat|nested] 35+ messages in thread
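The pci_root change above pairs dmar_device_add() on the hot-add path with dmar_device_remove() both at the new `remove_dmar` error label and in acpi_pci_root_remove(), so a DMAR unit registration never outlives a failed bridge hot-add. A standalone sketch of that pairing, with hypothetical names in place of the real ACPI/DMAR calls:

```c
#include <assert.h>
#include <stdbool.h>

static int dmar_registered;   /* stands in for the DMAR-unit state */

static int fake_dmar_add(void)     { dmar_registered++; return 0; }
static void fake_dmar_remove(void) { dmar_registered--; }

/*
 * Mirrors acpi_pci_root_add(): on hot-add, register the DMAR unit
 * before scanning the bridge; if enumeration fails afterwards, unwind
 * through the remove_dmar label so no stale registration survives.
 * At boot (hotadd == false) the DMAR table path handles registration,
 * so neither add nor remove is called here.
 */
static int root_add(bool hotadd, bool scan_fails)
{
	int result = 0;

	if (hotadd && fake_dmar_add()) {
		result = -1;
		goto end;
	}

	if (scan_fails) {          /* e.g. the PCI bus scan returned no bus */
		result = -2;
		goto remove_dmar;
	}

	return 1;                  /* bridge added */

remove_dmar:
	if (hotadd)
		fake_dmar_remove();
end:
	return result;
}
```

Caching `hotadd` once, as the patch does with `system_state != SYSTEM_BOOTING`, keeps the add path and its rollback guaranteed to agree on whether registration happened.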


* Re: [Patch Part3 V6 1/8] iommu/vt-d: Introduce helper function dmar_walk_resources()
@ 2014-09-19  6:49     ` Yijing Wang
  0 siblings, 0 replies; 35+ messages in thread
From: Yijing Wang @ 2014-09-19  6:49 UTC (permalink / raw)
  To: Jiang Liu, Joerg Roedel, David Woodhouse, Yinghai Lu,
	Bjorn Helgaas, Dan Williams, Vinod Koul, Rafael J . Wysocki
  Cc: Ashok Raj, Tony Luck, iommu, linux-pci, linux-hotplug,
	linux-kernel, dmaengine

On 2014/9/19 13:18, Jiang Liu wrote:
> Introduce helper function dmar_walk_resources to walk resource entries
> in DMAR table and ACPI buffer object returned by ACPI _DSM method
> for IOMMU hot-plug.
> 
> Signed-off-by: Jiang Liu <jiang.liu@linux.intel.com>

Reviewed-by: Yijing Wang <wangyijing@huawei.com>

> ---
>  drivers/iommu/dmar.c        |  209 +++++++++++++++++++++++--------------------
>  drivers/iommu/intel-iommu.c |    4 +-
>  include/linux/dmar.h        |   19 ++--
>  3 files changed, 122 insertions(+), 110 deletions(-)
> 
> diff --git a/drivers/iommu/dmar.c b/drivers/iommu/dmar.c
> index 06d268abe951..a05cf3634efe 100644
> --- a/drivers/iommu/dmar.c
> +++ b/drivers/iommu/dmar.c
> @@ -44,6 +44,14 @@
>  
>  #include "irq_remapping.h"
>  
> +typedef int (*dmar_res_handler_t)(struct acpi_dmar_header *, void *);
> +struct dmar_res_callback {
> +	dmar_res_handler_t	cb[ACPI_DMAR_TYPE_RESERVED];
> +	void			*arg[ACPI_DMAR_TYPE_RESERVED];
> +	bool			ignore_unhandled;
> +	bool			print_entry;
> +};
> +
>  /*
>   * Assumptions:
>   * 1) The hotplug framework guarentees that DMAR unit will be hot-added
> @@ -333,7 +341,7 @@ static struct notifier_block dmar_pci_bus_nb = {
>   * present in the platform
>   */
>  static int __init
> -dmar_parse_one_drhd(struct acpi_dmar_header *header)
> +dmar_parse_one_drhd(struct acpi_dmar_header *header, void *arg)
>  {
>  	struct acpi_dmar_hardware_unit *drhd;
>  	struct dmar_drhd_unit *dmaru;
> @@ -364,6 +372,10 @@ dmar_parse_one_drhd(struct acpi_dmar_header *header)
>  		return ret;
>  	}
>  	dmar_register_drhd_unit(dmaru);
> +
> +	if (arg)
> +		(*(int *)arg)++;
> +
>  	return 0;
>  }
>  
> @@ -376,7 +388,8 @@ static void dmar_free_drhd(struct dmar_drhd_unit *dmaru)
>  	kfree(dmaru);
>  }
>  
> -static int __init dmar_parse_one_andd(struct acpi_dmar_header *header)
> +static int __init dmar_parse_one_andd(struct acpi_dmar_header *header,
> +				      void *arg)
>  {
>  	struct acpi_dmar_andd *andd = (void *)header;
>  
> @@ -398,7 +411,7 @@ static int __init dmar_parse_one_andd(struct acpi_dmar_header *header)
>  
>  #ifdef CONFIG_ACPI_NUMA
>  static int __init
> -dmar_parse_one_rhsa(struct acpi_dmar_header *header)
> +dmar_parse_one_rhsa(struct acpi_dmar_header *header, void *arg)
>  {
>  	struct acpi_dmar_rhsa *rhsa;
>  	struct dmar_drhd_unit *drhd;
> @@ -425,6 +438,8 @@ dmar_parse_one_rhsa(struct acpi_dmar_header *header)
>  
>  	return 0;
>  }
> +#else
> +#define	dmar_parse_one_rhsa		dmar_res_noop
>  #endif
>  
>  static void __init
> @@ -486,6 +501,52 @@ static int __init dmar_table_detect(void)
>  	return (ACPI_SUCCESS(status) ? 1 : 0);
>  }
>  
> +static int dmar_walk_remapping_entries(struct acpi_dmar_header *start,
> +				       size_t len, struct dmar_res_callback *cb)
> +{
> +	int ret = 0;
> +	struct acpi_dmar_header *iter, *next;
> +	struct acpi_dmar_header *end = ((void *)start) + len;
> +
> +	for (iter = start; iter < end && ret == 0; iter = next) {
> +		next = (void *)iter + iter->length;
> +		if (iter->length == 0) {
> +			/* Avoid looping forever on bad ACPI tables */
> +			pr_debug(FW_BUG "Invalid 0-length structure\n");
> +			break;
> +		} else if (next > end) {
> +			/* Avoid passing table end */
> +			pr_warn(FW_BUG "record passes table end\n");
> +			ret = -EINVAL;
> +			break;
> +		}
> +
> +		if (cb->print_entry)
> +			dmar_table_print_dmar_entry(iter);
> +
> +		if (iter->type >= ACPI_DMAR_TYPE_RESERVED) {
> +			/* continue for forward compatibility */
> +			pr_debug("Unknown DMAR structure type %d\n",
> +				 iter->type);
> +		} else if (cb->cb[iter->type]) {
> +			ret = cb->cb[iter->type](iter, cb->arg[iter->type]);
> +		} else if (!cb->ignore_unhandled) {
> +			pr_warn("No handler for DMAR structure type %d\n",
> +				iter->type);
> +			ret = -EINVAL;
> +		}
> +	}
> +
> +	return ret;
> +}
> +
> +static inline int dmar_walk_dmar_table(struct acpi_table_dmar *dmar,
> +				       struct dmar_res_callback *cb)
> +{
> +	return dmar_walk_remapping_entries((void *)(dmar + 1),
> +			dmar->header.length - sizeof(*dmar), cb);
> +}
> +
>  /**
>   * parse_dmar_table - parses the DMA reporting table
>   */
> @@ -493,9 +554,18 @@ static int __init
>  parse_dmar_table(void)
>  {
>  	struct acpi_table_dmar *dmar;
> -	struct acpi_dmar_header *entry_header;
>  	int ret = 0;
>  	int drhd_count = 0;
> +	struct dmar_res_callback cb = {
> +		.print_entry = true,
> +		.ignore_unhandled = true,
> +		.arg[ACPI_DMAR_TYPE_HARDWARE_UNIT] = &drhd_count,
> +		.cb[ACPI_DMAR_TYPE_HARDWARE_UNIT] = &dmar_parse_one_drhd,
> +		.cb[ACPI_DMAR_TYPE_RESERVED_MEMORY] = &dmar_parse_one_rmrr,
> +		.cb[ACPI_DMAR_TYPE_ROOT_ATS] = &dmar_parse_one_atsr,
> +		.cb[ACPI_DMAR_TYPE_HARDWARE_AFFINITY] = &dmar_parse_one_rhsa,
> +		.cb[ACPI_DMAR_TYPE_NAMESPACE] = &dmar_parse_one_andd,
> +	};
>  
>  	/*
>  	 * Do it again, earlier dmar_tbl mapping could be mapped with
> @@ -519,51 +589,10 @@ parse_dmar_table(void)
>  	}
>  
>  	pr_info("Host address width %d\n", dmar->width + 1);
> -
> -	entry_header = (struct acpi_dmar_header *)(dmar + 1);
> -	while (((unsigned long)entry_header) <
> -			(((unsigned long)dmar) + dmar_tbl->length)) {
> -		/* Avoid looping forever on bad ACPI tables */
> -		if (entry_header->length == 0) {
> -			pr_warn("Invalid 0-length structure\n");
> -			ret = -EINVAL;
> -			break;
> -		}
> -
> -		dmar_table_print_dmar_entry(entry_header);
> -
> -		switch (entry_header->type) {
> -		case ACPI_DMAR_TYPE_HARDWARE_UNIT:
> -			drhd_count++;
> -			ret = dmar_parse_one_drhd(entry_header);
> -			break;
> -		case ACPI_DMAR_TYPE_RESERVED_MEMORY:
> -			ret = dmar_parse_one_rmrr(entry_header);
> -			break;
> -		case ACPI_DMAR_TYPE_ROOT_ATS:
> -			ret = dmar_parse_one_atsr(entry_header);
> -			break;
> -		case ACPI_DMAR_TYPE_HARDWARE_AFFINITY:
> -#ifdef CONFIG_ACPI_NUMA
> -			ret = dmar_parse_one_rhsa(entry_header);
> -#endif
> -			break;
> -		case ACPI_DMAR_TYPE_NAMESPACE:
> -			ret = dmar_parse_one_andd(entry_header);
> -			break;
> -		default:
> -			pr_warn("Unknown DMAR structure type %d\n",
> -				entry_header->type);
> -			ret = 0; /* for forward compatibility */
> -			break;
> -		}
> -		if (ret)
> -			break;
> -
> -		entry_header = ((void *)entry_header + entry_header->length);
> -	}
> -	if (drhd_count == 0)
> +	ret = dmar_walk_dmar_table(dmar, &cb);
> +	if (ret == 0 && drhd_count == 0)
>  		pr_warn(FW_BUG "No DRHD structure found in DMAR table\n");
> +
>  	return ret;
>  }
>  
> @@ -761,76 +790,60 @@ static void warn_invalid_dmar(u64 addr, const char *message)
>  		dmi_get_system_info(DMI_PRODUCT_VERSION));
>  }
>  
> -static int __init check_zero_address(void)
> +static int __ref
> +dmar_validate_one_drhd(struct acpi_dmar_header *entry, void *arg)
>  {
> -	struct acpi_table_dmar *dmar;
> -	struct acpi_dmar_header *entry_header;
>  	struct acpi_dmar_hardware_unit *drhd;
> +	void __iomem *addr;
> +	u64 cap, ecap;
>  
> -	dmar = (struct acpi_table_dmar *)dmar_tbl;
> -	entry_header = (struct acpi_dmar_header *)(dmar + 1);
> -
> -	while (((unsigned long)entry_header) <
> -			(((unsigned long)dmar) + dmar_tbl->length)) {
> -		/* Avoid looping forever on bad ACPI tables */
> -		if (entry_header->length == 0) {
> -			pr_warn("Invalid 0-length structure\n");
> -			return 0;
> -		}
> -
> -		if (entry_header->type == ACPI_DMAR_TYPE_HARDWARE_UNIT) {
> -			void __iomem *addr;
> -			u64 cap, ecap;
> -
> -			drhd = (void *)entry_header;
> -			if (!drhd->address) {
> -				warn_invalid_dmar(0, "");
> -				goto failed;
> -			}
> +	drhd = (void *)entry;
> +	if (!drhd->address) {
> +		warn_invalid_dmar(0, "");
> +		return -EINVAL;
> +	}
>  
> -			addr = early_ioremap(drhd->address, VTD_PAGE_SIZE);
> -			if (!addr ) {
> -				printk("IOMMU: can't validate: %llx\n", drhd->address);
> -				goto failed;
> -			}
> -			cap = dmar_readq(addr + DMAR_CAP_REG);
> -			ecap = dmar_readq(addr + DMAR_ECAP_REG);
> -			early_iounmap(addr, VTD_PAGE_SIZE);
> -			if (cap == (uint64_t)-1 && ecap == (uint64_t)-1) {
> -				warn_invalid_dmar(drhd->address,
> -						  " returns all ones");
> -				goto failed;
> -			}
> -		}
> +	addr = early_ioremap(drhd->address, VTD_PAGE_SIZE);
> +	if (!addr) {
> +		pr_warn("IOMMU: can't validate: %llx\n", drhd->address);
> +		return -EINVAL;
> +	}
> +	cap = dmar_readq(addr + DMAR_CAP_REG);
> +	ecap = dmar_readq(addr + DMAR_ECAP_REG);
> +	early_iounmap(addr, VTD_PAGE_SIZE);
>  
> -		entry_header = ((void *)entry_header + entry_header->length);
> +	if (cap == (uint64_t)-1 && ecap == (uint64_t)-1) {
> +		warn_invalid_dmar(drhd->address, " returns all ones");
> +		return -EINVAL;
>  	}
> -	return 1;
>  
> -failed:
>  	return 0;
>  }
>  
>  int __init detect_intel_iommu(void)
>  {
>  	int ret;
> +	struct dmar_res_callback validate_drhd_cb = {
> +		.cb[ACPI_DMAR_TYPE_HARDWARE_UNIT] = &dmar_validate_one_drhd,
> +		.ignore_unhandled = true,
> +	};
>  
>  	down_write(&dmar_global_lock);
>  	ret = dmar_table_detect();
>  	if (ret)
> -		ret = check_zero_address();
> -	{
> -		if (ret && !no_iommu && !iommu_detected && !dmar_disabled) {
> -			iommu_detected = 1;
> -			/* Make sure ACS will be enabled */
> -			pci_request_acs();
> -		}
> +		ret = !dmar_walk_dmar_table((struct acpi_table_dmar *)dmar_tbl,
> +					    &validate_drhd_cb);
> +	if (ret && !no_iommu && !iommu_detected && !dmar_disabled) {
> +		iommu_detected = 1;
> +		/* Make sure ACS will be enabled */
> +		pci_request_acs();
> +	}
>  
>  #ifdef CONFIG_X86
> -		if (ret)
> -			x86_init.iommu.iommu_init = intel_iommu_init;
> +	if (ret)
> +		x86_init.iommu.iommu_init = intel_iommu_init;
>  #endif
> -	}
> +
>  	early_acpi_os_unmap_memory((void __iomem *)dmar_tbl, dmar_tbl_size);
>  	dmar_tbl = NULL;
>  	up_write(&dmar_global_lock);
> diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
> index 5619f264862d..4af2206e41bc 100644
> --- a/drivers/iommu/intel-iommu.c
> +++ b/drivers/iommu/intel-iommu.c
> @@ -3682,7 +3682,7 @@ static inline void init_iommu_pm_ops(void) {}
>  #endif	/* CONFIG_PM */
>  
>  
> -int __init dmar_parse_one_rmrr(struct acpi_dmar_header *header)
> +int __init dmar_parse_one_rmrr(struct acpi_dmar_header *header, void *arg)
>  {
>  	struct acpi_dmar_reserved_memory *rmrr;
>  	struct dmar_rmrr_unit *rmrru;
> @@ -3708,7 +3708,7 @@ int __init dmar_parse_one_rmrr(struct acpi_dmar_header *header)
>  	return 0;
>  }
>  
> -int __init dmar_parse_one_atsr(struct acpi_dmar_header *hdr)
> +int __init dmar_parse_one_atsr(struct acpi_dmar_header *hdr, void *arg)
>  {
>  	struct acpi_dmar_atsr *atsr;
>  	struct dmar_atsr_unit *atsru;
> diff --git a/include/linux/dmar.h b/include/linux/dmar.h
> index 1deece46a0ca..fac8ca34f9a8 100644
> --- a/include/linux/dmar.h
> +++ b/include/linux/dmar.h
> @@ -115,22 +115,21 @@ extern int dmar_remove_dev_scope(struct dmar_pci_notify_info *info,
>  extern int detect_intel_iommu(void);
>  extern int enable_drhd_fault_handling(void);
>  
> +static inline int dmar_res_noop(struct acpi_dmar_header *hdr, void *arg)
> +{
> +	return 0;
> +}
> +
>  #ifdef CONFIG_INTEL_IOMMU
>  extern int iommu_detected, no_iommu;
>  extern int intel_iommu_init(void);
> -extern int dmar_parse_one_rmrr(struct acpi_dmar_header *header);
> -extern int dmar_parse_one_atsr(struct acpi_dmar_header *header);
> +extern int dmar_parse_one_rmrr(struct acpi_dmar_header *header, void *arg);
> +extern int dmar_parse_one_atsr(struct acpi_dmar_header *header, void *arg);
>  extern int dmar_iommu_notify_scope_dev(struct dmar_pci_notify_info *info);
>  #else /* !CONFIG_INTEL_IOMMU: */
>  static inline int intel_iommu_init(void) { return -ENODEV; }
> -static inline int dmar_parse_one_rmrr(struct acpi_dmar_header *header)
> -{
> -	return 0;
> -}
> -static inline int dmar_parse_one_atsr(struct acpi_dmar_header *header)
> -{
> -	return 0;
> -}
> +#define	dmar_parse_one_rmrr		dmar_res_noop
> +#define	dmar_parse_one_atsr		dmar_res_noop
>  static inline int dmar_iommu_notify_scope_dev(struct dmar_pci_notify_info *info)
>  {
>  	return 0;
> 


-- 
Thanks!
Yijing


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [Patch Part3 V6 1/8] iommu/vt-d: Introduce helper function dmar_walk_resources()
@ 2014-09-19  6:49     ` Yijing Wang
  0 siblings, 0 replies; 35+ messages in thread
From: Yijing Wang @ 2014-09-19  6:49 UTC (permalink / raw)
  To: Jiang Liu, Joerg Roedel, David Woodhouse, Yinghai Lu,
	Bjorn Helgaas, Dan Williams, Vinod Koul, Rafael J . Wysocki
  Cc: Tony Luck, linux-pci-u79uwXL29TY76Z2rM5mHXA,
	linux-hotplug-u79uwXL29TY76Z2rM5mHXA,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	dmaengine-u79uwXL29TY76Z2rM5mHXA

On 2014/9/19 13:18, Jiang Liu wrote:
> Introduce the helper function dmar_walk_resources() to walk resource
> entries in the DMAR table and in ACPI buffer objects returned by the
> ACPI _DSM method for IOMMU hot-plug.
> 
> Signed-off-by: Jiang Liu <jiang.liu-VuQAYsv1563Yd54FQh9/CA@public.gmane.org>

Reviewed-by: Yijing Wang <wangyijing-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>

> ---
>  drivers/iommu/dmar.c        |  209 +++++++++++++++++++++++--------------------
>  drivers/iommu/intel-iommu.c |    4 +-
>  include/linux/dmar.h        |   19 ++--
>  3 files changed, 122 insertions(+), 110 deletions(-)
> 
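Before the diff, an aside: the core of this patch is a generic walk over the variable-length sub-tables of the DMAR table, dispatching each record through a per-type callback array. The standalone sketch below models that pattern in plain C; the names (`hdr`, `res_callback`, `walk_entries`, `TYPE_MAX`, `count_entry`) are invented for illustration, this is not the kernel code itself.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative stand-in for struct acpi_dmar_header: every record starts
 * with a type and a total length (header included). */
struct hdr {
	uint16_t type;
	uint16_t length;
};

#define TYPE_MAX 3	/* stand-in for ACPI_DMAR_TYPE_RESERVED */

typedef int (*res_handler_t)(struct hdr *, void *);

/* Stand-in for struct dmar_res_callback: one handler and one opaque
 * argument per record type. */
struct res_callback {
	res_handler_t cb[TYPE_MAX];
	void *arg[TYPE_MAX];
	int ignore_unhandled;
};

/* Walk [start, start + len): a zero-length record ends the walk (it would
 * otherwise loop forever), a record running past the table end is
 * rejected, and unknown types are skipped for forward compatibility. */
static int walk_entries(void *start, size_t len, struct res_callback *cb)
{
	void *end = (char *)start + len;
	struct hdr *iter, *next;
	int ret = 0;

	for (iter = start; (void *)iter < end && ret == 0; iter = next) {
		next = (struct hdr *)((char *)iter + iter->length);
		if (iter->length == 0)
			break;		/* bad table: stop walking */
		if ((void *)next > end)
			return -1;	/* record passes table end */

		if (iter->type >= TYPE_MAX)
			continue;	/* unknown type: keep going */
		if (cb->cb[iter->type])
			ret = cb->cb[iter->type](iter, cb->arg[iter->type]);
		else if (!cb->ignore_unhandled)
			ret = -1;
	}

	return ret;
}

/* Example handler: count records of its type, the way dmar_parse_one_drhd()
 * counts DRHDs through its void *arg in this patch. */
static int count_entry(struct hdr *h, void *arg)
{
	(void)h;
	if (arg)
		(*(int *)arg)++;
	return 0;
}
```

Note the two safety checks mirrored from the patch: a zero-length record terminates the walk (otherwise `next` never advances), and a record extending past the table end is rejected rather than read.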
> diff --git a/drivers/iommu/dmar.c b/drivers/iommu/dmar.c
> index 06d268abe951..a05cf3634efe 100644
> --- a/drivers/iommu/dmar.c
> +++ b/drivers/iommu/dmar.c
> @@ -44,6 +44,14 @@
>  
>  #include "irq_remapping.h"
>  
> +typedef int (*dmar_res_handler_t)(struct acpi_dmar_header *, void *);
> +struct dmar_res_callback {
> +	dmar_res_handler_t	cb[ACPI_DMAR_TYPE_RESERVED];
> +	void			*arg[ACPI_DMAR_TYPE_RESERVED];
> +	bool			ignore_unhandled;
> +	bool			print_entry;
> +};
> +
>  /*
>   * Assumptions:
>   * 1) The hotplug framework guarantees that DMAR unit will be hot-added
> @@ -333,7 +341,7 @@ static struct notifier_block dmar_pci_bus_nb = {
>   * present in the platform
>   */
>  static int __init
> -dmar_parse_one_drhd(struct acpi_dmar_header *header)
> +dmar_parse_one_drhd(struct acpi_dmar_header *header, void *arg)
>  {
>  	struct acpi_dmar_hardware_unit *drhd;
>  	struct dmar_drhd_unit *dmaru;
> @@ -364,6 +372,10 @@ dmar_parse_one_drhd(struct acpi_dmar_header *header)
>  		return ret;
>  	}
>  	dmar_register_drhd_unit(dmaru);
> +
> +	if (arg)
> +		(*(int *)arg)++;
> +
>  	return 0;
>  }
>  
> @@ -376,7 +388,8 @@ static void dmar_free_drhd(struct dmar_drhd_unit *dmaru)
>  	kfree(dmaru);
>  }
>  
> -static int __init dmar_parse_one_andd(struct acpi_dmar_header *header)
> +static int __init dmar_parse_one_andd(struct acpi_dmar_header *header,
> +				      void *arg)
>  {
>  	struct acpi_dmar_andd *andd = (void *)header;
>  
> @@ -398,7 +411,7 @@ static int __init dmar_parse_one_andd(struct acpi_dmar_header *header)
>  
>  #ifdef CONFIG_ACPI_NUMA
>  static int __init
> -dmar_parse_one_rhsa(struct acpi_dmar_header *header)
> +dmar_parse_one_rhsa(struct acpi_dmar_header *header, void *arg)
>  {
>  	struct acpi_dmar_rhsa *rhsa;
>  	struct dmar_drhd_unit *drhd;
> @@ -425,6 +438,8 @@ dmar_parse_one_rhsa(struct acpi_dmar_header *header)
>  
>  	return 0;
>  }
> +#else
> +#define	dmar_parse_one_rhsa		dmar_res_noop
>  #endif
>  
>  static void __init
> @@ -486,6 +501,52 @@ static int __init dmar_table_detect(void)
>  	return (ACPI_SUCCESS(status) ? 1 : 0);
>  }
>  
> +static int dmar_walk_remapping_entries(struct acpi_dmar_header *start,
> +				       size_t len, struct dmar_res_callback *cb)
> +{
> +	int ret = 0;
> +	struct acpi_dmar_header *iter, *next;
> +	struct acpi_dmar_header *end = ((void *)start) + len;
> +
> +	for (iter = start; iter < end && ret == 0; iter = next) {
> +		next = (void *)iter + iter->length;
> +		if (iter->length == 0) {
> +			/* Avoid looping forever on bad ACPI tables */
> +			pr_debug(FW_BUG "Invalid 0-length structure\n");
> +			break;
> +		} else if (next > end) {
> +			/* Avoid passing table end */
> +			pr_warn(FW_BUG "record passes table end\n");
> +			ret = -EINVAL;
> +			break;
> +		}
> +
> +		if (cb->print_entry)
> +			dmar_table_print_dmar_entry(iter);
> +
> +		if (iter->type >= ACPI_DMAR_TYPE_RESERVED) {
> +			/* continue for forward compatibility */
> +			pr_debug("Unknown DMAR structure type %d\n",
> +				 iter->type);
> +		} else if (cb->cb[iter->type]) {
> +			ret = cb->cb[iter->type](iter, cb->arg[iter->type]);
> +		} else if (!cb->ignore_unhandled) {
> +			pr_warn("No handler for DMAR structure type %d\n",
> +				iter->type);
> +			ret = -EINVAL;
> +		}
> +	}
> +
> +	return ret;
> +}
> +
> +static inline int dmar_walk_dmar_table(struct acpi_table_dmar *dmar,
> +				       struct dmar_res_callback *cb)
> +{
> +	return dmar_walk_remapping_entries((void *)(dmar + 1),
> +			dmar->header.length - sizeof(*dmar), cb);
> +}
> +
>  /**
>   * parse_dmar_table - parses the DMA reporting table
>   */
> @@ -493,9 +554,18 @@ static int __init
>  parse_dmar_table(void)
>  {
>  	struct acpi_table_dmar *dmar;
> -	struct acpi_dmar_header *entry_header;
>  	int ret = 0;
>  	int drhd_count = 0;
> +	struct dmar_res_callback cb = {
> +		.print_entry = true,
> +		.ignore_unhandled = true,
> +		.arg[ACPI_DMAR_TYPE_HARDWARE_UNIT] = &drhd_count,
> +		.cb[ACPI_DMAR_TYPE_HARDWARE_UNIT] = &dmar_parse_one_drhd,
> +		.cb[ACPI_DMAR_TYPE_RESERVED_MEMORY] = &dmar_parse_one_rmrr,
> +		.cb[ACPI_DMAR_TYPE_ROOT_ATS] = &dmar_parse_one_atsr,
> +		.cb[ACPI_DMAR_TYPE_HARDWARE_AFFINITY] = &dmar_parse_one_rhsa,
> +		.cb[ACPI_DMAR_TYPE_NAMESPACE] = &dmar_parse_one_andd,
> +	};
>  
>  	/*
>  	 * Do it again, earlier dmar_tbl mapping could be mapped with
> @@ -519,51 +589,10 @@ parse_dmar_table(void)
>  	}
>  
>  	pr_info("Host address width %d\n", dmar->width + 1);
> -
> -	entry_header = (struct acpi_dmar_header *)(dmar + 1);
> -	while (((unsigned long)entry_header) <
> -			(((unsigned long)dmar) + dmar_tbl->length)) {
> -		/* Avoid looping forever on bad ACPI tables */
> -		if (entry_header->length == 0) {
> -			pr_warn("Invalid 0-length structure\n");
> -			ret = -EINVAL;
> -			break;
> -		}
> -
> -		dmar_table_print_dmar_entry(entry_header);
> -
> -		switch (entry_header->type) {
> -		case ACPI_DMAR_TYPE_HARDWARE_UNIT:
> -			drhd_count++;
> -			ret = dmar_parse_one_drhd(entry_header);
> -			break;
> -		case ACPI_DMAR_TYPE_RESERVED_MEMORY:
> -			ret = dmar_parse_one_rmrr(entry_header);
> -			break;
> -		case ACPI_DMAR_TYPE_ROOT_ATS:
> -			ret = dmar_parse_one_atsr(entry_header);
> -			break;
> -		case ACPI_DMAR_TYPE_HARDWARE_AFFINITY:
> -#ifdef CONFIG_ACPI_NUMA
> -			ret = dmar_parse_one_rhsa(entry_header);
> -#endif
> -			break;
> -		case ACPI_DMAR_TYPE_NAMESPACE:
> -			ret = dmar_parse_one_andd(entry_header);
> -			break;
> -		default:
> -			pr_warn("Unknown DMAR structure type %d\n",
> -				entry_header->type);
> -			ret = 0; /* for forward compatibility */
> -			break;
> -		}
> -		if (ret)
> -			break;
> -
> -		entry_header = ((void *)entry_header + entry_header->length);
> -	}
> -	if (drhd_count == 0)
> +	ret = dmar_walk_dmar_table(dmar, &cb);
> +	if (ret == 0 && drhd_count == 0)
>  		pr_warn(FW_BUG "No DRHD structure found in DMAR table\n");
> +
>  	return ret;
>  }
>  
> @@ -761,76 +790,60 @@ static void warn_invalid_dmar(u64 addr, const char *message)
>  		dmi_get_system_info(DMI_PRODUCT_VERSION));
>  }
>  
> -static int __init check_zero_address(void)
> +static int __ref
> +dmar_validate_one_drhd(struct acpi_dmar_header *entry, void *arg)
>  {
> -	struct acpi_table_dmar *dmar;
> -	struct acpi_dmar_header *entry_header;
>  	struct acpi_dmar_hardware_unit *drhd;
> +	void __iomem *addr;
> +	u64 cap, ecap;
>  
> -	dmar = (struct acpi_table_dmar *)dmar_tbl;
> -	entry_header = (struct acpi_dmar_header *)(dmar + 1);
> -
> -	while (((unsigned long)entry_header) <
> -			(((unsigned long)dmar) + dmar_tbl->length)) {
> -		/* Avoid looping forever on bad ACPI tables */
> -		if (entry_header->length == 0) {
> -			pr_warn("Invalid 0-length structure\n");
> -			return 0;
> -		}
> -
> -		if (entry_header->type == ACPI_DMAR_TYPE_HARDWARE_UNIT) {
> -			void __iomem *addr;
> -			u64 cap, ecap;
> -
> -			drhd = (void *)entry_header;
> -			if (!drhd->address) {
> -				warn_invalid_dmar(0, "");
> -				goto failed;
> -			}
> +	drhd = (void *)entry;
> +	if (!drhd->address) {
> +		warn_invalid_dmar(0, "");
> +		return -EINVAL;
> +	}
>  
> -			addr = early_ioremap(drhd->address, VTD_PAGE_SIZE);
> -			if (!addr ) {
> -				printk("IOMMU: can't validate: %llx\n", drhd->address);
> -				goto failed;
> -			}
> -			cap = dmar_readq(addr + DMAR_CAP_REG);
> -			ecap = dmar_readq(addr + DMAR_ECAP_REG);
> -			early_iounmap(addr, VTD_PAGE_SIZE);
> -			if (cap == (uint64_t)-1 && ecap == (uint64_t)-1) {
> -				warn_invalid_dmar(drhd->address,
> -						  " returns all ones");
> -				goto failed;
> -			}
> -		}
> +	addr = early_ioremap(drhd->address, VTD_PAGE_SIZE);
> +	if (!addr) {
> +		pr_warn("IOMMU: can't validate: %llx\n", drhd->address);
> +		return -EINVAL;
> +	}
> +	cap = dmar_readq(addr + DMAR_CAP_REG);
> +	ecap = dmar_readq(addr + DMAR_ECAP_REG);
> +	early_iounmap(addr, VTD_PAGE_SIZE);
>  
> -		entry_header = ((void *)entry_header + entry_header->length);
> +	if (cap == (uint64_t)-1 && ecap == (uint64_t)-1) {
> +		warn_invalid_dmar(drhd->address, " returns all ones");
> +		return -EINVAL;
>  	}
> -	return 1;
>  
> -failed:
>  	return 0;
>  }
>  
>  int __init detect_intel_iommu(void)
>  {
>  	int ret;
> +	struct dmar_res_callback validate_drhd_cb = {
> +		.cb[ACPI_DMAR_TYPE_HARDWARE_UNIT] = &dmar_validate_one_drhd,
> +		.ignore_unhandled = true,
> +	};
>  
>  	down_write(&dmar_global_lock);
>  	ret = dmar_table_detect();
>  	if (ret)
> -		ret = check_zero_address();
> -	{
> -		if (ret && !no_iommu && !iommu_detected && !dmar_disabled) {
> -			iommu_detected = 1;
> -			/* Make sure ACS will be enabled */
> -			pci_request_acs();
> -		}
> +		ret = !dmar_walk_dmar_table((struct acpi_table_dmar *)dmar_tbl,
> +					    &validate_drhd_cb);
> +	if (ret && !no_iommu && !iommu_detected && !dmar_disabled) {
> +		iommu_detected = 1;
> +		/* Make sure ACS will be enabled */
> +		pci_request_acs();
> +	}
>  
>  #ifdef CONFIG_X86
> -		if (ret)
> -			x86_init.iommu.iommu_init = intel_iommu_init;
> +	if (ret)
> +		x86_init.iommu.iommu_init = intel_iommu_init;
>  #endif
> -	}
> +
>  	early_acpi_os_unmap_memory((void __iomem *)dmar_tbl, dmar_tbl_size);
>  	dmar_tbl = NULL;
>  	up_write(&dmar_global_lock);
> diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
> index 5619f264862d..4af2206e41bc 100644
> --- a/drivers/iommu/intel-iommu.c
> +++ b/drivers/iommu/intel-iommu.c
> @@ -3682,7 +3682,7 @@ static inline void init_iommu_pm_ops(void) {}
>  #endif	/* CONFIG_PM */
>  
>  
> -int __init dmar_parse_one_rmrr(struct acpi_dmar_header *header)
> +int __init dmar_parse_one_rmrr(struct acpi_dmar_header *header, void *arg)
>  {
>  	struct acpi_dmar_reserved_memory *rmrr;
>  	struct dmar_rmrr_unit *rmrru;
> @@ -3708,7 +3708,7 @@ int __init dmar_parse_one_rmrr(struct acpi_dmar_header *header)
>  	return 0;
>  }
>  
> -int __init dmar_parse_one_atsr(struct acpi_dmar_header *hdr)
> +int __init dmar_parse_one_atsr(struct acpi_dmar_header *hdr, void *arg)
>  {
>  	struct acpi_dmar_atsr *atsr;
>  	struct dmar_atsr_unit *atsru;
> diff --git a/include/linux/dmar.h b/include/linux/dmar.h
> index 1deece46a0ca..fac8ca34f9a8 100644
> --- a/include/linux/dmar.h
> +++ b/include/linux/dmar.h
> @@ -115,22 +115,21 @@ extern int dmar_remove_dev_scope(struct dmar_pci_notify_info *info,
>  extern int detect_intel_iommu(void);
>  extern int enable_drhd_fault_handling(void);
>  
> +static inline int dmar_res_noop(struct acpi_dmar_header *hdr, void *arg)
> +{
> +	return 0;
> +}
> +
>  #ifdef CONFIG_INTEL_IOMMU
>  extern int iommu_detected, no_iommu;
>  extern int intel_iommu_init(void);
> -extern int dmar_parse_one_rmrr(struct acpi_dmar_header *header);
> -extern int dmar_parse_one_atsr(struct acpi_dmar_header *header);
> +extern int dmar_parse_one_rmrr(struct acpi_dmar_header *header, void *arg);
> +extern int dmar_parse_one_atsr(struct acpi_dmar_header *header, void *arg);
>  extern int dmar_iommu_notify_scope_dev(struct dmar_pci_notify_info *info);
>  #else /* !CONFIG_INTEL_IOMMU: */
>  static inline int intel_iommu_init(void) { return -ENODEV; }
> -static inline int dmar_parse_one_rmrr(struct acpi_dmar_header *header)
> -{
> -	return 0;
> -}
> -static inline int dmar_parse_one_atsr(struct acpi_dmar_header *header)
> -{
> -	return 0;
> -}
> +#define	dmar_parse_one_rmrr		dmar_res_noop
> +#define	dmar_parse_one_atsr		dmar_res_noop
>  static inline int dmar_iommu_notify_scope_dev(struct dmar_pci_notify_info *info)
>  {
>  	return 0;
> 


-- 
Thanks!
Yijing

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [Patch Part3 V6 5/8] iommu/vt-d: Enhance intel_irq_remapping driver to support DMAR unit hotplug
@ 2014-09-19  6:49     ` Yijing Wang
  0 siblings, 0 replies; 35+ messages in thread
From: Yijing Wang @ 2014-09-19  6:49 UTC (permalink / raw)
  To: Jiang Liu, Joerg Roedel, David Woodhouse, Yinghai Lu,
	Bjorn Helgaas, Dan Williams, Vinod Koul, Rafael J . Wysocki
  Cc: Ashok Raj, Tony Luck, iommu, linux-pci, linux-hotplug,
	linux-kernel, dmaengine

On 2014/9/19 13:18, Jiang Liu wrote:
> Implement required callback functions for intel_irq_remapping driver
> to support DMAR unit hotplug.
> 
> Signed-off-by: Jiang Liu <jiang.liu@linux.intel.com>

Reviewed-by: Yijing Wang <wangyijing@huawei.com>

> ---
>  drivers/iommu/intel_irq_remapping.c |  226 ++++++++++++++++++++++++++---------
>  1 file changed, 171 insertions(+), 55 deletions(-)
> 
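Before the diff: a large part of this patch replaces the `ir_hpet_num`/`ir_ioapic_num` counters with per-slot ownership, so a scope can be registered idempotently and a slot can be reclaimed on hot-remove. The find-or-claim lookup it uses can be modeled with the small standalone sketch below; the names (`slot`, `find_or_claim`, `MAX_SLOTS`) are invented for illustration.

```c
#include <assert.h>
#include <stddef.h>

#define MAX_SLOTS 4	/* stand-in for MAX_HPET_TBS */

/* A slot is free while owner is NULL, mirroring how the patch treats
 * ir_hpet[i].iommu == NULL as "unused". */
struct slot {
	void *owner;
	int id;
};

/* Return the index already registered for (owner, id), else claim the
 * first free slot; -1 models the kernel's -ENOSPC when the table is full. */
static int find_or_claim(struct slot *tbl, void *owner, int id)
{
	int i, free = -1;

	for (i = 0; i < MAX_SLOTS; i++) {
		if (tbl[i].owner == owner && tbl[i].id == id)
			return i;	/* already registered: no-op */
		else if (tbl[i].owner == NULL && free == -1)
			free = i;	/* remember the first free slot */
	}

	if (free == -1)
		return -1;		/* exceeded max slots */

	tbl[free].owner = owner;
	tbl[free].id = id;
	return free;
}
```

Matching on the `(owner, id)` pair first makes re-registration a no-op, which is what lets the same DMAR unit's scopes be parsed again safely during hotplug.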
> diff --git a/drivers/iommu/intel_irq_remapping.c b/drivers/iommu/intel_irq_remapping.c
> index 9b140ed854ec..1edbbed8c6bc 100644
> --- a/drivers/iommu/intel_irq_remapping.c
> +++ b/drivers/iommu/intel_irq_remapping.c
> @@ -36,7 +36,6 @@ struct hpet_scope {
>  
>  static struct ioapic_scope ir_ioapic[MAX_IO_APICS];
>  static struct hpet_scope ir_hpet[MAX_HPET_TBS];
> -static int ir_ioapic_num, ir_hpet_num;
>  
>  /*
>   * Lock ordering:
> @@ -206,7 +205,7 @@ static struct intel_iommu *map_hpet_to_ir(u8 hpet_id)
>  	int i;
>  
>  	for (i = 0; i < MAX_HPET_TBS; i++)
> -		if (ir_hpet[i].id == hpet_id)
> +		if (ir_hpet[i].id == hpet_id && ir_hpet[i].iommu)
>  			return ir_hpet[i].iommu;
>  	return NULL;
>  }
> @@ -216,7 +215,7 @@ static struct intel_iommu *map_ioapic_to_ir(int apic)
>  	int i;
>  
>  	for (i = 0; i < MAX_IO_APICS; i++)
> -		if (ir_ioapic[i].id == apic)
> +		if (ir_ioapic[i].id == apic && ir_ioapic[i].iommu)
>  			return ir_ioapic[i].iommu;
>  	return NULL;
>  }
> @@ -325,7 +324,7 @@ static int set_ioapic_sid(struct irte *irte, int apic)
>  
>  	down_read(&dmar_global_lock);
>  	for (i = 0; i < MAX_IO_APICS; i++) {
> -		if (ir_ioapic[i].id == apic) {
> +		if (ir_ioapic[i].iommu && ir_ioapic[i].id == apic) {
>  			sid = (ir_ioapic[i].bus << 8) | ir_ioapic[i].devfn;
>  			break;
>  		}
> @@ -352,7 +351,7 @@ static int set_hpet_sid(struct irte *irte, u8 id)
>  
>  	down_read(&dmar_global_lock);
>  	for (i = 0; i < MAX_HPET_TBS; i++) {
> -		if (ir_hpet[i].id == id) {
> +		if (ir_hpet[i].iommu && ir_hpet[i].id == id) {
>  			sid = (ir_hpet[i].bus << 8) | ir_hpet[i].devfn;
>  			break;
>  		}
> @@ -474,17 +473,17 @@ static void iommu_set_irq_remapping(struct intel_iommu *iommu, int mode)
>  	raw_spin_unlock_irqrestore(&iommu->register_lock, flags);
>  }
>  
> -
> -static int intel_setup_irq_remapping(struct intel_iommu *iommu, int mode)
> +static int intel_setup_irq_remapping(struct intel_iommu *iommu)
>  {
>  	struct ir_table *ir_table;
>  	struct page *pages;
>  	unsigned long *bitmap;
>  
> -	ir_table = iommu->ir_table = kzalloc(sizeof(struct ir_table),
> -					     GFP_ATOMIC);
> +	if (iommu->ir_table)
> +		return 0;
>  
> -	if (!iommu->ir_table)
> +	ir_table = kzalloc(sizeof(struct ir_table), GFP_ATOMIC);
> +	if (!ir_table)
>  		return -ENOMEM;
>  
>  	pages = alloc_pages_node(iommu->node, GFP_ATOMIC | __GFP_ZERO,
> @@ -493,7 +492,7 @@ static int intel_setup_irq_remapping(struct intel_iommu *iommu, int mode)
>  	if (!pages) {
>  		pr_err("IR%d: failed to allocate pages of order %d\n",
>  		       iommu->seq_id, INTR_REMAP_PAGE_ORDER);
> -		kfree(iommu->ir_table);
> +		kfree(ir_table);
>  		return -ENOMEM;
>  	}
>  
> @@ -508,11 +507,22 @@ static int intel_setup_irq_remapping(struct intel_iommu *iommu, int mode)
>  
>  	ir_table->base = page_address(pages);
>  	ir_table->bitmap = bitmap;
> +	iommu->ir_table = ir_table;
>  
> -	iommu_set_irq_remapping(iommu, mode);
>  	return 0;
>  }
>  
> +static void intel_teardown_irq_remapping(struct intel_iommu *iommu)
> +{
> +	if (iommu && iommu->ir_table) {
> +		free_pages((unsigned long)iommu->ir_table->base,
> +			   INTR_REMAP_PAGE_ORDER);
> +		kfree(iommu->ir_table->bitmap);
> +		kfree(iommu->ir_table);
> +		iommu->ir_table = NULL;
> +	}
> +}
> +
>  /*
>   * Disable Interrupt Remapping.
>   */
> @@ -667,9 +677,10 @@ static int __init intel_enable_irq_remapping(void)
>  		if (!ecap_ir_support(iommu->ecap))
>  			continue;
>  
> -		if (intel_setup_irq_remapping(iommu, eim))
> +		if (intel_setup_irq_remapping(iommu))
>  			goto error;
>  
> +		iommu_set_irq_remapping(iommu, eim);
>  		setup = 1;
>  	}
>  
> @@ -700,12 +711,13 @@ error:
>  	return -1;
>  }
>  
> -static void ir_parse_one_hpet_scope(struct acpi_dmar_device_scope *scope,
> -				      struct intel_iommu *iommu)
> +static int ir_parse_one_hpet_scope(struct acpi_dmar_device_scope *scope,
> +				   struct intel_iommu *iommu,
> +				   struct acpi_dmar_hardware_unit *drhd)
>  {
>  	struct acpi_dmar_pci_path *path;
>  	u8 bus;
> -	int count;
> +	int count, free = -1;
>  
>  	bus = scope->bus;
>  	path = (struct acpi_dmar_pci_path *)(scope + 1);
> @@ -721,19 +733,36 @@ static void ir_parse_one_hpet_scope(struct acpi_dmar_device_scope *scope,
>  					   PCI_SECONDARY_BUS);
>  		path++;
>  	}
> -	ir_hpet[ir_hpet_num].bus   = bus;
> -	ir_hpet[ir_hpet_num].devfn = PCI_DEVFN(path->device, path->function);
> -	ir_hpet[ir_hpet_num].iommu = iommu;
> -	ir_hpet[ir_hpet_num].id    = scope->enumeration_id;
> -	ir_hpet_num++;
> +
> +	for (count = 0; count < MAX_HPET_TBS; count++) {
> +		if (ir_hpet[count].iommu == iommu &&
> +		    ir_hpet[count].id == scope->enumeration_id)
> +			return 0;
> +		else if (ir_hpet[count].iommu == NULL && free == -1)
> +			free = count;
> +	}
> +	if (free == -1) {
> +		pr_warn("Exceeded Max HPET blocks\n");
> +		return -ENOSPC;
> +	}
> +
> +	ir_hpet[free].iommu = iommu;
> +	ir_hpet[free].id    = scope->enumeration_id;
> +	ir_hpet[free].bus   = bus;
> +	ir_hpet[free].devfn = PCI_DEVFN(path->device, path->function);
> +	pr_info("HPET id %d under DRHD base 0x%Lx\n",
> +		scope->enumeration_id, drhd->address);
> +
> +	return 0;
>  }
>  
> -static void ir_parse_one_ioapic_scope(struct acpi_dmar_device_scope *scope,
> -				      struct intel_iommu *iommu)
> +static int ir_parse_one_ioapic_scope(struct acpi_dmar_device_scope *scope,
> +				     struct intel_iommu *iommu,
> +				     struct acpi_dmar_hardware_unit *drhd)
>  {
>  	struct acpi_dmar_pci_path *path;
>  	u8 bus;
> -	int count;
> +	int count, free = -1;
>  
>  	bus = scope->bus;
>  	path = (struct acpi_dmar_pci_path *)(scope + 1);
> @@ -750,54 +779,63 @@ static void ir_parse_one_ioapic_scope(struct acpi_dmar_device_scope *scope,
>  		path++;
>  	}
>  
> -	ir_ioapic[ir_ioapic_num].bus   = bus;
> -	ir_ioapic[ir_ioapic_num].devfn = PCI_DEVFN(path->device, path->function);
> -	ir_ioapic[ir_ioapic_num].iommu = iommu;
> -	ir_ioapic[ir_ioapic_num].id    = scope->enumeration_id;
> -	ir_ioapic_num++;
> +	for (count = 0; count < MAX_IO_APICS; count++) {
> +		if (ir_ioapic[count].iommu == iommu &&
> +		    ir_ioapic[count].id == scope->enumeration_id)
> +			return 0;
> +		else if (ir_ioapic[count].iommu == NULL && free == -1)
> +			free = count;
> +	}
> +	if (free == -1) {
> +		pr_warn("Exceeded Max IO APICS\n");
> +		return -ENOSPC;
> +	}
> +
> +	ir_ioapic[free].bus   = bus;
> +	ir_ioapic[free].devfn = PCI_DEVFN(path->device, path->function);
> +	ir_ioapic[free].iommu = iommu;
> +	ir_ioapic[free].id    = scope->enumeration_id;
> +	pr_info("IOAPIC id %d under DRHD base  0x%Lx IOMMU %d\n",
> +		scope->enumeration_id, drhd->address, iommu->seq_id);
> +
> +	return 0;
>  }
>  
>  static int ir_parse_ioapic_hpet_scope(struct acpi_dmar_header *header,
>  				      struct intel_iommu *iommu)
>  {
> +	int ret = 0;
>  	struct acpi_dmar_hardware_unit *drhd;
>  	struct acpi_dmar_device_scope *scope;
>  	void *start, *end;
>  
>  	drhd = (struct acpi_dmar_hardware_unit *)header;
> -
>  	start = (void *)(drhd + 1);
>  	end = ((void *)drhd) + header->length;
>  
> -	while (start < end) {
> +	while (start < end && ret == 0) {
>  		scope = start;
> -		if (scope->entry_type == ACPI_DMAR_SCOPE_TYPE_IOAPIC) {
> -			if (ir_ioapic_num == MAX_IO_APICS) {
> -				printk(KERN_WARNING "Exceeded Max IO APICS\n");
> -				return -1;
> -			}
> -
> -			printk(KERN_INFO "IOAPIC id %d under DRHD base "
> -			       " 0x%Lx IOMMU %d\n", scope->enumeration_id,
> -			       drhd->address, iommu->seq_id);
> +		if (scope->entry_type == ACPI_DMAR_SCOPE_TYPE_IOAPIC)
> +			ret = ir_parse_one_ioapic_scope(scope, iommu, drhd);
> +		else if (scope->entry_type == ACPI_DMAR_SCOPE_TYPE_HPET)
> +			ret = ir_parse_one_hpet_scope(scope, iommu, drhd);
> +		start += scope->length;
> +	}
>  
> -			ir_parse_one_ioapic_scope(scope, iommu);
> -		} else if (scope->entry_type == ACPI_DMAR_SCOPE_TYPE_HPET) {
> -			if (ir_hpet_num == MAX_HPET_TBS) {
> -				printk(KERN_WARNING "Exceeded Max HPET blocks\n");
> -				return -1;
> -			}
> +	return ret;
> +}
>  
> -			printk(KERN_INFO "HPET id %d under DRHD base"
> -			       " 0x%Lx\n", scope->enumeration_id,
> -			       drhd->address);
> +static void ir_remove_ioapic_hpet_scope(struct intel_iommu *iommu)
> +{
> +	int i;
>  
> -			ir_parse_one_hpet_scope(scope, iommu);
> -		}
> -		start += scope->length;
> -	}
> +	for (i = 0; i < MAX_HPET_TBS; i++)
> +		if (ir_hpet[i].iommu == iommu)
> +			ir_hpet[i].iommu = NULL;
>  
> -	return 0;
> +	for (i = 0; i < MAX_IO_APICS; i++)
> +		if (ir_ioapic[i].iommu == iommu)
> +			ir_ioapic[i].iommu = NULL;
>  }
>  
>  /*
> @@ -1173,7 +1211,85 @@ struct irq_remap_ops intel_irq_remap_ops = {
>  	.setup_hpet_msi		= intel_setup_hpet_msi,
>  };
>  
> +/*
> + * Support of Interrupt Remapping Unit Hotplug
> + */
> +static int dmar_ir_add(struct dmar_drhd_unit *dmaru, struct intel_iommu *iommu)
> +{
> +	int ret;
> +	int eim = x2apic_enabled();
> +
> +	if (eim && !ecap_eim_support(iommu->ecap)) {
> +		pr_info("DRHD %Lx: EIM not supported by DRHD, ecap %Lx\n",
> +			iommu->reg_phys, iommu->ecap);
> +		return -ENODEV;
> +	}
> +
> +	if (ir_parse_ioapic_hpet_scope(dmaru->hdr, iommu)) {
> +		pr_warn("DRHD %Lx: failed to parse managed IOAPIC/HPET\n",
> +			iommu->reg_phys);
> +		return -ENODEV;
> +	}
> +
> +	/* TODO: check all IOAPICs are covered by IOMMU */
> +
> +	/* Setup Interrupt-remapping now. */
> +	ret = intel_setup_irq_remapping(iommu);
> +	if (ret) {
> +		pr_err("DRHD %Lx: failed to allocate resource\n",
> +		       iommu->reg_phys);
> +		ir_remove_ioapic_hpet_scope(iommu);
> +		return ret;
> +	}
> +
> +	if (!iommu->qi) {
> +		/* Clear previous faults. */
> +		dmar_fault(-1, iommu);
> +		iommu_disable_irq_remapping(iommu);
> +		dmar_disable_qi(iommu);
> +	}
> +
> +	/* Enable queued invalidation */
> +	ret = dmar_enable_qi(iommu);
> +	if (!ret) {
> +		iommu_set_irq_remapping(iommu, eim);
> +	} else {
> +		pr_err("DRHD %Lx: failed to enable queued invalidation, ecap %Lx, ret %d\n",
> +		       iommu->reg_phys, iommu->ecap, ret);
> +		intel_teardown_irq_remapping(iommu);
> +		ir_remove_ioapic_hpet_scope(iommu);
> +	}
> +
> +	return ret;
> +}
> +
>  int dmar_ir_hotplug(struct dmar_drhd_unit *dmaru, bool insert)
>  {
> -	return irq_remapping_enabled ? -ENOSYS : 0;
> +	int ret = 0;
> +	struct intel_iommu *iommu = dmaru->iommu;
> +
> +	if (!irq_remapping_enabled)
> +		return 0;
> +	if (iommu == NULL)
> +		return -EINVAL;
> +	if (!ecap_ir_support(iommu->ecap))
> +		return 0;
> +
> +	if (insert) {
> +		if (!iommu->ir_table)
> +			ret = dmar_ir_add(dmaru, iommu);
> +	} else {
> +		if (iommu->ir_table) {
> +			if (!bitmap_empty(iommu->ir_table->bitmap,
> +					  INTR_REMAP_TABLE_ENTRIES)) {
> +				ret = -EBUSY;
> +			} else {
> +				iommu_disable_irq_remapping(iommu);
> +				intel_teardown_irq_remapping(iommu);
> +				ir_remove_ioapic_hpet_scope(iommu);
> +			}
> +		}
> +	}
> +
> +	return ret;
>  }
> 


-- 
Thanks!
Yijing


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [Patch Part3 V6 5/8] iommu/vt-d: Enhance intel_irq_remapping driver to support DMAR unit hotplug
@ 2014-09-19  6:49     ` Yijing Wang
  0 siblings, 0 replies; 35+ messages in thread
From: Yijing Wang @ 2014-09-19  6:49 UTC (permalink / raw)
  To: Jiang Liu, Joerg Roedel, David Woodhouse, Yinghai Lu,
	Bjorn Helgaas, Dan Williams, Vinod Koul, Rafael J . Wysocki
  Cc: Tony Luck, linux-pci-u79uwXL29TY76Z2rM5mHXA,
	linux-hotplug-u79uwXL29TY76Z2rM5mHXA,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	dmaengine-u79uwXL29TY76Z2rM5mHXA

On 2014/9/19 13:18, Jiang Liu wrote:
> Implement required callback functions for intel_irq_remapping driver
> to support DMAR unit hotplug.
> 
> Signed-off-by: Jiang Liu <jiang.liu-VuQAYsv1563Yd54FQh9/CA@public.gmane.org>

Reviewed-by: Yijing Wang <wangyijing-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>

> ---
>  drivers/iommu/intel_irq_remapping.c |  226 ++++++++++++++++++++++++++---------
>  1 file changed, 171 insertions(+), 55 deletions(-)
> 
> diff --git a/drivers/iommu/intel_irq_remapping.c b/drivers/iommu/intel_irq_remapping.c
> index 9b140ed854ec..1edbbed8c6bc 100644
> --- a/drivers/iommu/intel_irq_remapping.c
> +++ b/drivers/iommu/intel_irq_remapping.c
> @@ -36,7 +36,6 @@ struct hpet_scope {
>  
>  static struct ioapic_scope ir_ioapic[MAX_IO_APICS];
>  static struct hpet_scope ir_hpet[MAX_HPET_TBS];
> -static int ir_ioapic_num, ir_hpet_num;
>  
>  /*
>   * Lock ordering:
> @@ -206,7 +205,7 @@ static struct intel_iommu *map_hpet_to_ir(u8 hpet_id)
>  	int i;
>  
>  	for (i = 0; i < MAX_HPET_TBS; i++)
> -		if (ir_hpet[i].id == hpet_id)
> +		if (ir_hpet[i].id == hpet_id && ir_hpet[i].iommu)
>  			return ir_hpet[i].iommu;
>  	return NULL;
>  }
> @@ -216,7 +215,7 @@ static struct intel_iommu *map_ioapic_to_ir(int apic)
>  	int i;
>  
>  	for (i = 0; i < MAX_IO_APICS; i++)
> -		if (ir_ioapic[i].id == apic)
> +		if (ir_ioapic[i].id == apic && ir_ioapic[i].iommu)
>  			return ir_ioapic[i].iommu;
>  	return NULL;
>  }
> @@ -325,7 +324,7 @@ static int set_ioapic_sid(struct irte *irte, int apic)
>  
>  	down_read(&dmar_global_lock);
>  	for (i = 0; i < MAX_IO_APICS; i++) {
> -		if (ir_ioapic[i].id == apic) {
> +		if (ir_ioapic[i].iommu && ir_ioapic[i].id == apic) {
>  			sid = (ir_ioapic[i].bus << 8) | ir_ioapic[i].devfn;
>  			break;
>  		}
> @@ -352,7 +351,7 @@ static int set_hpet_sid(struct irte *irte, u8 id)
>  
>  	down_read(&dmar_global_lock);
>  	for (i = 0; i < MAX_HPET_TBS; i++) {
> -		if (ir_hpet[i].id == id) {
> +		if (ir_hpet[i].iommu && ir_hpet[i].id == id) {
>  			sid = (ir_hpet[i].bus << 8) | ir_hpet[i].devfn;
>  			break;
>  		}
> @@ -474,17 +473,17 @@ static void iommu_set_irq_remapping(struct intel_iommu *iommu, int mode)
>  	raw_spin_unlock_irqrestore(&iommu->register_lock, flags);
>  }
>  
> -
> -static int intel_setup_irq_remapping(struct intel_iommu *iommu, int mode)
> +static int intel_setup_irq_remapping(struct intel_iommu *iommu)
>  {
>  	struct ir_table *ir_table;
>  	struct page *pages;
>  	unsigned long *bitmap;
>  
> -	ir_table = iommu->ir_table = kzalloc(sizeof(struct ir_table),
> -					     GFP_ATOMIC);
> +	if (iommu->ir_table)
> +		return 0;
>  
> -	if (!iommu->ir_table)
> +	ir_table = kzalloc(sizeof(struct ir_table), GFP_ATOMIC);
> +	if (!ir_table)
>  		return -ENOMEM;
>  
>  	pages = alloc_pages_node(iommu->node, GFP_ATOMIC | __GFP_ZERO,
> @@ -493,7 +492,7 @@ static int intel_setup_irq_remapping(struct intel_iommu *iommu, int mode)
>  	if (!pages) {
>  		pr_err("IR%d: failed to allocate pages of order %d\n",
>  		       iommu->seq_id, INTR_REMAP_PAGE_ORDER);
> -		kfree(iommu->ir_table);
> +		kfree(ir_table);
>  		return -ENOMEM;
>  	}
>  
> @@ -508,11 +507,22 @@ static int intel_setup_irq_remapping(struct intel_iommu *iommu, int mode)
>  
>  	ir_table->base = page_address(pages);
>  	ir_table->bitmap = bitmap;
> +	iommu->ir_table = ir_table;
>  
> -	iommu_set_irq_remapping(iommu, mode);
>  	return 0;
>  }
>  
> +static void intel_teardown_irq_remapping(struct intel_iommu *iommu)
> +{
> +	if (iommu && iommu->ir_table) {
> +		free_pages((unsigned long)iommu->ir_table->base,
> +			   INTR_REMAP_PAGE_ORDER);
> +		kfree(iommu->ir_table->bitmap);
> +		kfree(iommu->ir_table);
> +		iommu->ir_table = NULL;
> +	}
> +}
> +
>  /*
>   * Disable Interrupt Remapping.
>   */
> @@ -667,9 +677,10 @@ static int __init intel_enable_irq_remapping(void)
>  		if (!ecap_ir_support(iommu->ecap))
>  			continue;
>  
> -		if (intel_setup_irq_remapping(iommu, eim))
> +		if (intel_setup_irq_remapping(iommu))
>  			goto error;
>  
> +		iommu_set_irq_remapping(iommu, eim);
>  		setup = 1;
>  	}
>  
> @@ -700,12 +711,13 @@ error:
>  	return -1;
>  }
>  
> -static void ir_parse_one_hpet_scope(struct acpi_dmar_device_scope *scope,
> -				      struct intel_iommu *iommu)
> +static int ir_parse_one_hpet_scope(struct acpi_dmar_device_scope *scope,
> +				   struct intel_iommu *iommu,
> +				   struct acpi_dmar_hardware_unit *drhd)
>  {
>  	struct acpi_dmar_pci_path *path;
>  	u8 bus;
> -	int count;
> +	int count, free = -1;
>  
>  	bus = scope->bus;
>  	path = (struct acpi_dmar_pci_path *)(scope + 1);
> @@ -721,19 +733,36 @@ static void ir_parse_one_hpet_scope(struct acpi_dmar_device_scope *scope,
>  					   PCI_SECONDARY_BUS);
>  		path++;
>  	}
> -	ir_hpet[ir_hpet_num].bus   = bus;
> -	ir_hpet[ir_hpet_num].devfn = PCI_DEVFN(path->device, path->function);
> -	ir_hpet[ir_hpet_num].iommu = iommu;
> -	ir_hpet[ir_hpet_num].id    = scope->enumeration_id;
> -	ir_hpet_num++;
> +
> +	for (count = 0; count < MAX_HPET_TBS; count++) {
> +		if (ir_hpet[count].iommu == iommu &&
> +		    ir_hpet[count].id == scope->enumeration_id)
> +			return 0;
> +		else if (ir_hpet[count].iommu == NULL && free == -1)
> +			free = count;
> +	}
> +	if (free == -1) {
> +		pr_warn("Exceeded Max HPET blocks\n");
> +		return -ENOSPC;
> +	}
> +
> +	ir_hpet[free].iommu = iommu;
> +	ir_hpet[free].id    = scope->enumeration_id;
> +	ir_hpet[free].bus   = bus;
> +	ir_hpet[free].devfn = PCI_DEVFN(path->device, path->function);
> +	pr_info("HPET id %d under DRHD base 0x%Lx\n",
> +		scope->enumeration_id, drhd->address);
> +
> +	return 0;
>  }
>  
> -static void ir_parse_one_ioapic_scope(struct acpi_dmar_device_scope *scope,
> -				      struct intel_iommu *iommu)
> +static int ir_parse_one_ioapic_scope(struct acpi_dmar_device_scope *scope,
> +				     struct intel_iommu *iommu,
> +				     struct acpi_dmar_hardware_unit *drhd)
>  {
>  	struct acpi_dmar_pci_path *path;
>  	u8 bus;
> -	int count;
> +	int count, free = -1;
>  
>  	bus = scope->bus;
>  	path = (struct acpi_dmar_pci_path *)(scope + 1);
> @@ -750,54 +779,63 @@ static void ir_parse_one_ioapic_scope(struct acpi_dmar_device_scope *scope,
>  		path++;
>  	}
>  
> -	ir_ioapic[ir_ioapic_num].bus   = bus;
> -	ir_ioapic[ir_ioapic_num].devfn = PCI_DEVFN(path->device, path->function);
> -	ir_ioapic[ir_ioapic_num].iommu = iommu;
> -	ir_ioapic[ir_ioapic_num].id    = scope->enumeration_id;
> -	ir_ioapic_num++;
> +	for (count = 0; count < MAX_IO_APICS; count++) {
> +		if (ir_ioapic[count].iommu == iommu &&
> +		    ir_ioapic[count].id == scope->enumeration_id)
> +			return 0;
> +		else if (ir_ioapic[count].iommu == NULL && free == -1)
> +			free = count;
> +	}
> +	if (free == -1) {
> +		pr_warn("Exceeded Max IO APICS\n");
> +		return -ENOSPC;
> +	}
> +
> +	ir_ioapic[free].bus   = bus;
> +	ir_ioapic[free].devfn = PCI_DEVFN(path->device, path->function);
> +	ir_ioapic[free].iommu = iommu;
> +	ir_ioapic[free].id    = scope->enumeration_id;
> +	pr_info("IOAPIC id %d under DRHD base  0x%Lx IOMMU %d\n",
> +		scope->enumeration_id, drhd->address, iommu->seq_id);
> +
> +	return 0;
>  }
>  
>  static int ir_parse_ioapic_hpet_scope(struct acpi_dmar_header *header,
>  				      struct intel_iommu *iommu)
>  {
> +	int ret = 0;
>  	struct acpi_dmar_hardware_unit *drhd;
>  	struct acpi_dmar_device_scope *scope;
>  	void *start, *end;
>  
>  	drhd = (struct acpi_dmar_hardware_unit *)header;
> -
>  	start = (void *)(drhd + 1);
>  	end = ((void *)drhd) + header->length;
>  
> -	while (start < end) {
> +	while (start < end && ret == 0) {
>  		scope = start;
> -		if (scope->entry_type == ACPI_DMAR_SCOPE_TYPE_IOAPIC) {
> -			if (ir_ioapic_num == MAX_IO_APICS) {
> -				printk(KERN_WARNING "Exceeded Max IO APICS\n");
> -				return -1;
> -			}
> -
> -			printk(KERN_INFO "IOAPIC id %d under DRHD base "
> -			       " 0x%Lx IOMMU %d\n", scope->enumeration_id,
> -			       drhd->address, iommu->seq_id);
> +		if (scope->entry_type == ACPI_DMAR_SCOPE_TYPE_IOAPIC)
> +			ret = ir_parse_one_ioapic_scope(scope, iommu, drhd);
> +		else if (scope->entry_type == ACPI_DMAR_SCOPE_TYPE_HPET)
> +			ret = ir_parse_one_hpet_scope(scope, iommu, drhd);
> +		start += scope->length;
> +	}
>  
> -			ir_parse_one_ioapic_scope(scope, iommu);
> -		} else if (scope->entry_type == ACPI_DMAR_SCOPE_TYPE_HPET) {
> -			if (ir_hpet_num == MAX_HPET_TBS) {
> -				printk(KERN_WARNING "Exceeded Max HPET blocks\n");
> -				return -1;
> -			}
> +	return ret;
> +}
>  
> -			printk(KERN_INFO "HPET id %d under DRHD base"
> -			       " 0x%Lx\n", scope->enumeration_id,
> -			       drhd->address);
> +static void ir_remove_ioapic_hpet_scope(struct intel_iommu *iommu)
> +{
> +	int i;
>  
> -			ir_parse_one_hpet_scope(scope, iommu);
> -		}
> -		start += scope->length;
> -	}
> +	for (i = 0; i < MAX_HPET_TBS; i++)
> +		if (ir_hpet[i].iommu == iommu)
> +			ir_hpet[i].iommu = NULL;
>  
> -	return 0;
> +	for (i = 0; i < MAX_IO_APICS; i++)
> +		if (ir_ioapic[i].iommu == iommu)
> +			ir_ioapic[i].iommu = NULL;
>  }
>  
>  /*
> @@ -1173,7 +1211,85 @@ struct irq_remap_ops intel_irq_remap_ops = {
>  	.setup_hpet_msi		= intel_setup_hpet_msi,
>  };
>  
> +/*
> + * Support of Interrupt Remapping Unit Hotplug
> + */
> +static int dmar_ir_add(struct dmar_drhd_unit *dmaru, struct intel_iommu *iommu)
> +{
> +	int ret;
> +	int eim = x2apic_enabled();
> +
> +	if (eim && !ecap_eim_support(iommu->ecap)) {
> +		pr_info("DRHD %Lx: EIM not supported by DRHD, ecap %Lx\n",
> +			iommu->reg_phys, iommu->ecap);
> +		return -ENODEV;
> +	}
> +
> +	if (ir_parse_ioapic_hpet_scope(dmaru->hdr, iommu)) {
> +		pr_warn("DRHD %Lx: failed to parse managed IOAPIC/HPET\n",
> +			iommu->reg_phys);
> +		return -ENODEV;
> +	}
> +
> +	/* TODO: check all IOAPICs are covered by IOMMU */
> +
> +	/* Setup Interrupt-remapping now. */
> +	ret = intel_setup_irq_remapping(iommu);
> +	if (ret) {
> +		pr_err("DRHD %Lx: failed to allocate resource\n",
> +		       iommu->reg_phys);
> +		ir_remove_ioapic_hpet_scope(iommu);
> +		return ret;
> +	}
> +
> +	if (!iommu->qi) {
> +		/* Clear previous faults. */
> +		dmar_fault(-1, iommu);
> +		iommu_disable_irq_remapping(iommu);
> +		dmar_disable_qi(iommu);
> +	}
> +
> +	/* Enable queued invalidation */
> +	ret = dmar_enable_qi(iommu);
> +	if (!ret) {
> +		iommu_set_irq_remapping(iommu, eim);
> +	} else {
> +		pr_err("DRHD %Lx: failed to enable queued invalidation, ecap %Lx, ret %d\n",
> +		       iommu->reg_phys, iommu->ecap, ret);
> +		intel_teardown_irq_remapping(iommu);
> +		ir_remove_ioapic_hpet_scope(iommu);
> +	}
> +
> +	return ret;
> +}
> +
>  int dmar_ir_hotplug(struct dmar_drhd_unit *dmaru, bool insert)
>  {
> -	return irq_remapping_enabled ? -ENOSYS : 0;
> +	int ret = 0;
> +	struct intel_iommu *iommu = dmaru->iommu;
> +
> +	if (!irq_remapping_enabled)
> +		return 0;
> +	if (iommu == NULL)
> +		return -EINVAL;
> +	if (!ecap_ir_support(iommu->ecap))
> +		return 0;
> +
> +	if (insert) {
> +		if (!iommu->ir_table)
> +			ret = dmar_ir_add(dmaru, iommu);
> +	} else {
> +		if (iommu->ir_table) {
> +			if (!bitmap_empty(iommu->ir_table->bitmap,
> +					  INTR_REMAP_TABLE_ENTRIES)) {
> +				ret = -EBUSY;
> +			} else {
> +				iommu_disable_irq_remapping(iommu);
> +				intel_teardown_irq_remapping(iommu);
> +				ir_remove_ioapic_hpet_scope(iommu);
> +			}
> +		}
> +	}
> +
> +	return ret;
>  }
> 


-- 
Thanks!
Yijing

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [Patch Part3 V6 5/8] iommu/vt-d: Enhance intel_irq_remapping driver to support DMAR unit hotplug
@ 2014-09-19  6:49     ` Yijing Wang
  0 siblings, 0 replies; 35+ messages in thread
From: Yijing Wang @ 2014-09-19  6:49 UTC (permalink / raw)
  To: Jiang Liu, Joerg Roedel, David Woodhouse, Yinghai Lu,
	Bjorn Helgaas, Dan Williams, Vinod Koul, Rafael J . Wysocki
  Cc: Ashok Raj, Tony Luck, iommu, linux-pci, linux-hotplug,
	linux-kernel, dmaengine

On 2014/9/19 13:18, Jiang Liu wrote:
> Implement required callback functions for intel_irq_remapping driver
> to support DMAR unit hotplug.
> 
> Signed-off-by: Jiang Liu <jiang.liu@linux.intel.com>

Reviewed-by: Yijing Wang <wangyijing@huawei.com>

> ---
>  drivers/iommu/intel_irq_remapping.c |  226 ++++++++++++++++++++++++++---------
>  1 file changed, 171 insertions(+), 55 deletions(-)
> 
> diff --git a/drivers/iommu/intel_irq_remapping.c b/drivers/iommu/intel_irq_remapping.c
> index 9b140ed854ec..1edbbed8c6bc 100644
> --- a/drivers/iommu/intel_irq_remapping.c
> +++ b/drivers/iommu/intel_irq_remapping.c
> @@ -36,7 +36,6 @@ struct hpet_scope {
>  
>  static struct ioapic_scope ir_ioapic[MAX_IO_APICS];
>  static struct hpet_scope ir_hpet[MAX_HPET_TBS];
> -static int ir_ioapic_num, ir_hpet_num;
>  
>  /*
>   * Lock ordering:
> @@ -206,7 +205,7 @@ static struct intel_iommu *map_hpet_to_ir(u8 hpet_id)
>  	int i;
>  
>  	for (i = 0; i < MAX_HPET_TBS; i++)
> -		if (ir_hpet[i].id == hpet_id)
> +		if (ir_hpet[i].id == hpet_id && ir_hpet[i].iommu)
>  			return ir_hpet[i].iommu;
>  	return NULL;
>  }
> @@ -216,7 +215,7 @@ static struct intel_iommu *map_ioapic_to_ir(int apic)
>  	int i;
>  
>  	for (i = 0; i < MAX_IO_APICS; i++)
> -		if (ir_ioapic[i].id == apic)
> +		if (ir_ioapic[i].id == apic && ir_ioapic[i].iommu)
>  			return ir_ioapic[i].iommu;
>  	return NULL;
>  }
> @@ -325,7 +324,7 @@ static int set_ioapic_sid(struct irte *irte, int apic)
>  
>  	down_read(&dmar_global_lock);
>  	for (i = 0; i < MAX_IO_APICS; i++) {
> -		if (ir_ioapic[i].id == apic) {
> +		if (ir_ioapic[i].iommu && ir_ioapic[i].id == apic) {
>  			sid = (ir_ioapic[i].bus << 8) | ir_ioapic[i].devfn;
>  			break;
>  		}
> @@ -352,7 +351,7 @@ static int set_hpet_sid(struct irte *irte, u8 id)
>  
>  	down_read(&dmar_global_lock);
>  	for (i = 0; i < MAX_HPET_TBS; i++) {
> -		if (ir_hpet[i].id == id) {
> +		if (ir_hpet[i].iommu && ir_hpet[i].id == id) {
>  			sid = (ir_hpet[i].bus << 8) | ir_hpet[i].devfn;
>  			break;
>  		}
> @@ -474,17 +473,17 @@ static void iommu_set_irq_remapping(struct intel_iommu *iommu, int mode)
>  	raw_spin_unlock_irqrestore(&iommu->register_lock, flags);
>  }
>  
> -
> -static int intel_setup_irq_remapping(struct intel_iommu *iommu, int mode)
> +static int intel_setup_irq_remapping(struct intel_iommu *iommu)
>  {
>  	struct ir_table *ir_table;
>  	struct page *pages;
>  	unsigned long *bitmap;
>  
> -	ir_table = iommu->ir_table = kzalloc(sizeof(struct ir_table),
> -					     GFP_ATOMIC);
> +	if (iommu->ir_table)
> +		return 0;
>  
> -	if (!iommu->ir_table)
> +	ir_table = kzalloc(sizeof(struct ir_table), GFP_ATOMIC);
> +	if (!ir_table)
>  		return -ENOMEM;
>  
>  	pages = alloc_pages_node(iommu->node, GFP_ATOMIC | __GFP_ZERO,
> @@ -493,7 +492,7 @@ static int intel_setup_irq_remapping(struct intel_iommu *iommu, int mode)
>  	if (!pages) {
>  		pr_err("IR%d: failed to allocate pages of order %d\n",
>  		       iommu->seq_id, INTR_REMAP_PAGE_ORDER);
> -		kfree(iommu->ir_table);
> +		kfree(ir_table);
>  		return -ENOMEM;
>  	}
>  
> @@ -508,11 +507,22 @@ static int intel_setup_irq_remapping(struct intel_iommu *iommu, int mode)
>  
>  	ir_table->base = page_address(pages);
>  	ir_table->bitmap = bitmap;
> +	iommu->ir_table = ir_table;
>  
> -	iommu_set_irq_remapping(iommu, mode);
>  	return 0;
>  }
>  
> +static void intel_teardown_irq_remapping(struct intel_iommu *iommu)
> +{
> +	if (iommu && iommu->ir_table) {
> +		free_pages((unsigned long)iommu->ir_table->base,
> +			   INTR_REMAP_PAGE_ORDER);
> +		kfree(iommu->ir_table->bitmap);
> +		kfree(iommu->ir_table);
> +		iommu->ir_table = NULL;
> +	}
> +}
> +
>  /*
>   * Disable Interrupt Remapping.
>   */
> @@ -667,9 +677,10 @@ static int __init intel_enable_irq_remapping(void)
>  		if (!ecap_ir_support(iommu->ecap))
>  			continue;
>  
> -		if (intel_setup_irq_remapping(iommu, eim))
> +		if (intel_setup_irq_remapping(iommu))
>  			goto error;
>  
> +		iommu_set_irq_remapping(iommu, eim);
>  		setup = 1;
>  	}
>  
> @@ -700,12 +711,13 @@ error:
>  	return -1;
>  }
>  
> -static void ir_parse_one_hpet_scope(struct acpi_dmar_device_scope *scope,
> -				      struct intel_iommu *iommu)
> +static int ir_parse_one_hpet_scope(struct acpi_dmar_device_scope *scope,
> +				   struct intel_iommu *iommu,
> +				   struct acpi_dmar_hardware_unit *drhd)
>  {
>  	struct acpi_dmar_pci_path *path;
>  	u8 bus;
> -	int count;
> +	int count, free = -1;
>  
>  	bus = scope->bus;
>  	path = (struct acpi_dmar_pci_path *)(scope + 1);
> @@ -721,19 +733,36 @@ static void ir_parse_one_hpet_scope(struct acpi_dmar_device_scope *scope,
>  					   PCI_SECONDARY_BUS);
>  		path++;
>  	}
> -	ir_hpet[ir_hpet_num].bus   = bus;
> -	ir_hpet[ir_hpet_num].devfn = PCI_DEVFN(path->device, path->function);
> -	ir_hpet[ir_hpet_num].iommu = iommu;
> -	ir_hpet[ir_hpet_num].id    = scope->enumeration_id;
> -	ir_hpet_num++;
> +
> +	for (count = 0; count < MAX_HPET_TBS; count++) {
> +		if (ir_hpet[count].iommu == iommu &&
> +		    ir_hpet[count].id == scope->enumeration_id)
> +			return 0;
> +		else if (ir_hpet[count].iommu == NULL && free == -1)
> +			free = count;
> +	}
> +	if (free == -1) {
> +		pr_warn("Exceeded Max HPET blocks\n");
> +		return -ENOSPC;
> +	}
> +
> +	ir_hpet[free].iommu = iommu;
> +	ir_hpet[free].id    = scope->enumeration_id;
> +	ir_hpet[free].bus   = bus;
> +	ir_hpet[free].devfn = PCI_DEVFN(path->device, path->function);
> +	pr_info("HPET id %d under DRHD base 0x%Lx\n",
> +		scope->enumeration_id, drhd->address);
> +
> +	return 0;
>  }
>  
> -static void ir_parse_one_ioapic_scope(struct acpi_dmar_device_scope *scope,
> -				      struct intel_iommu *iommu)
> +static int ir_parse_one_ioapic_scope(struct acpi_dmar_device_scope *scope,
> +				     struct intel_iommu *iommu,
> +				     struct acpi_dmar_hardware_unit *drhd)
>  {
>  	struct acpi_dmar_pci_path *path;
>  	u8 bus;
> -	int count;
> +	int count, free = -1;
>  
>  	bus = scope->bus;
>  	path = (struct acpi_dmar_pci_path *)(scope + 1);
> @@ -750,54 +779,63 @@ static void ir_parse_one_ioapic_scope(struct acpi_dmar_device_scope *scope,
>  		path++;
>  	}
>  
> -	ir_ioapic[ir_ioapic_num].bus   = bus;
> -	ir_ioapic[ir_ioapic_num].devfn = PCI_DEVFN(path->device, path->function);
> -	ir_ioapic[ir_ioapic_num].iommu = iommu;
> -	ir_ioapic[ir_ioapic_num].id    = scope->enumeration_id;
> -	ir_ioapic_num++;
> +	for (count = 0; count < MAX_IO_APICS; count++) {
> +		if (ir_ioapic[count].iommu == iommu &&
> +		    ir_ioapic[count].id == scope->enumeration_id)
> +			return 0;
> +		else if (ir_ioapic[count].iommu == NULL && free == -1)
> +			free = count;
> +	}
> +	if (free == -1) {
> +		pr_warn("Exceeded Max IO APICS\n");
> +		return -ENOSPC;
> +	}
> +
> +	ir_ioapic[free].bus   = bus;
> +	ir_ioapic[free].devfn = PCI_DEVFN(path->device, path->function);
> +	ir_ioapic[free].iommu = iommu;
> +	ir_ioapic[free].id    = scope->enumeration_id;
> +	pr_info("IOAPIC id %d under DRHD base  0x%Lx IOMMU %d\n",
> +		scope->enumeration_id, drhd->address, iommu->seq_id);
> +
> +	return 0;
>  }
>  
>  static int ir_parse_ioapic_hpet_scope(struct acpi_dmar_header *header,
>  				      struct intel_iommu *iommu)
>  {
> +	int ret = 0;
>  	struct acpi_dmar_hardware_unit *drhd;
>  	struct acpi_dmar_device_scope *scope;
>  	void *start, *end;
>  
>  	drhd = (struct acpi_dmar_hardware_unit *)header;
> -
>  	start = (void *)(drhd + 1);
>  	end = ((void *)drhd) + header->length;
>  
> -	while (start < end) {
> +	while (start < end && ret == 0) {
>  		scope = start;
> -		if (scope->entry_type == ACPI_DMAR_SCOPE_TYPE_IOAPIC) {
> -			if (ir_ioapic_num == MAX_IO_APICS) {
> -				printk(KERN_WARNING "Exceeded Max IO APICS\n");
> -				return -1;
> -			}
> -
> -			printk(KERN_INFO "IOAPIC id %d under DRHD base "
> -			       " 0x%Lx IOMMU %d\n", scope->enumeration_id,
> -			       drhd->address, iommu->seq_id);
> +		if (scope->entry_type == ACPI_DMAR_SCOPE_TYPE_IOAPIC)
> +			ret = ir_parse_one_ioapic_scope(scope, iommu, drhd);
> +		else if (scope->entry_type == ACPI_DMAR_SCOPE_TYPE_HPET)
> +			ret = ir_parse_one_hpet_scope(scope, iommu, drhd);
> +		start += scope->length;
> +	}
>  
> -			ir_parse_one_ioapic_scope(scope, iommu);
> -		} else if (scope->entry_type == ACPI_DMAR_SCOPE_TYPE_HPET) {
> -			if (ir_hpet_num == MAX_HPET_TBS) {
> -				printk(KERN_WARNING "Exceeded Max HPET blocks\n");
> -				return -1;
> -			}
> +	return ret;
> +}
>  
> -			printk(KERN_INFO "HPET id %d under DRHD base"
> -			       " 0x%Lx\n", scope->enumeration_id,
> -			       drhd->address);
> +static void ir_remove_ioapic_hpet_scope(struct intel_iommu *iommu)
> +{
> +	int i;
>  
> -			ir_parse_one_hpet_scope(scope, iommu);
> -		}
> -		start += scope->length;
> -	}
> +	for (i = 0; i < MAX_HPET_TBS; i++)
> +		if (ir_hpet[i].iommu == iommu)
> +			ir_hpet[i].iommu = NULL;
>  
> -	return 0;
> +	for (i = 0; i < MAX_IO_APICS; i++)
> +		if (ir_ioapic[i].iommu == iommu)
> +			ir_ioapic[i].iommu = NULL;
>  }
>  
>  /*
> @@ -1173,7 +1211,85 @@ struct irq_remap_ops intel_irq_remap_ops = {
>  	.setup_hpet_msi		= intel_setup_hpet_msi,
>  };
>  
> +/*
> + * Support of Interrupt Remapping Unit Hotplug
> + */
> +static int dmar_ir_add(struct dmar_drhd_unit *dmaru, struct intel_iommu *iommu)
> +{
> +	int ret;
> +	int eim = x2apic_enabled();
> +
> +	if (eim && !ecap_eim_support(iommu->ecap)) {
> +		pr_info("DRHD %Lx: EIM not supported by DRHD, ecap %Lx\n",
> +			iommu->reg_phys, iommu->ecap);
> +		return -ENODEV;
> +	}
> +
> +	if (ir_parse_ioapic_hpet_scope(dmaru->hdr, iommu)) {
> +		pr_warn("DRHD %Lx: failed to parse managed IOAPIC/HPET\n",
> +			iommu->reg_phys);
> +		return -ENODEV;
> +	}
> +
> +	/* TODO: check all IOAPICs are covered by IOMMU */
> +
> +	/* Setup Interrupt-remapping now. */
> +	ret = intel_setup_irq_remapping(iommu);
> +	if (ret) {
> +		pr_err("DRHD %Lx: failed to allocate resource\n",
> +		       iommu->reg_phys);
> +		ir_remove_ioapic_hpet_scope(iommu);
> +		return ret;
> +	}
> +
> +	if (!iommu->qi) {
> +		/* Clear previous faults. */
> +		dmar_fault(-1, iommu);
> +		iommu_disable_irq_remapping(iommu);
> +		dmar_disable_qi(iommu);
> +	}
> +
> +	/* Enable queued invalidation */
> +	ret = dmar_enable_qi(iommu);
> +	if (!ret) {
> +		iommu_set_irq_remapping(iommu, eim);
> +	} else {
> +		pr_err("DRHD %Lx: failed to enable queued invalidation, ecap %Lx, ret %d\n",
> +		       iommu->reg_phys, iommu->ecap, ret);
> +		intel_teardown_irq_remapping(iommu);
> +		ir_remove_ioapic_hpet_scope(iommu);
> +	}
> +
> +	return ret;
> +}
> +
>  int dmar_ir_hotplug(struct dmar_drhd_unit *dmaru, bool insert)
>  {
> -	return irq_remapping_enabled ? -ENOSYS : 0;
> +	int ret = 0;
> +	struct intel_iommu *iommu = dmaru->iommu;
> +
> +	if (!irq_remapping_enabled)
> +		return 0;
> +	if (iommu == NULL)
> +		return -EINVAL;
> +	if (!ecap_ir_support(iommu->ecap))
> +		return 0;
> +
> +	if (insert) {
> +		if (!iommu->ir_table)
> +			ret = dmar_ir_add(dmaru, iommu);
> +	} else {
> +		if (iommu->ir_table) {
> +			if (!bitmap_empty(iommu->ir_table->bitmap,
> +					  INTR_REMAP_TABLE_ENTRIES)) {
> +				ret = -EBUSY;
> +			} else {
> +				iommu_disable_irq_remapping(iommu);
> +				intel_teardown_irq_remapping(iommu);
> +				ir_remove_ioapic_hpet_scope(iommu);
> +			}
> +		}
> +	}
> +
> +	return ret;
>  }
> 


-- 
Thanks!
Yijing


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [Patch Part3 V6 8/8] pci, ACPI, iommu: Enhance pci_root to support DMAR device hotplug
@ 2014-09-24 18:37     ` Bjorn Helgaas
  0 siblings, 0 replies; 35+ messages in thread
From: Bjorn Helgaas @ 2014-09-24 18:37 UTC (permalink / raw)
  To: Jiang Liu
  Cc: Joerg Roedel, David Woodhouse, Yinghai Lu, Dan Williams,
	Vinod Koul, Rafael J . Wysocki, Ashok Raj, Yijing Wang,
	Tony Luck, iommu, linux-pci, linux-hotplug, linux-kernel,
	dmaengine

On Fri, Sep 19, 2014 at 01:18:55PM +0800, Jiang Liu wrote:
> Finally enhance pci_root driver to support DMAR device hotplug when
> hot-plugging PCI host bridges.
> 
> Signed-off-by: Jiang Liu <jiang.liu@linux.intel.com>
> Reviewed-by: Yijing Wang <wangyijing@huawei.com>

I assume this will be merged via a non-PCI tree, so:

Acked-by: Bjorn Helgaas <bhelgaas@google.com>

Looks OK to me, but I expect you'll want Rafael's ack as well.

> ---
>  drivers/acpi/pci_root.c |   16 ++++++++++++++--
>  1 file changed, 14 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/acpi/pci_root.c b/drivers/acpi/pci_root.c
> index e6ae603ed1a1..4e177daa18e3 100644
> --- a/drivers/acpi/pci_root.c
> +++ b/drivers/acpi/pci_root.c
> @@ -33,6 +33,7 @@
>  #include <linux/pci.h>
>  #include <linux/pci-acpi.h>
>  #include <linux/pci-aspm.h>
> +#include <linux/dmar.h>
>  #include <linux/acpi.h>
>  #include <linux/slab.h>
>  #include <acpi/apei.h>	/* for acpi_hest_init() */
> @@ -511,6 +512,7 @@ static int acpi_pci_root_add(struct acpi_device *device,
>  	struct acpi_pci_root *root;
>  	acpi_handle handle = device->handle;
>  	int no_aspm = 0, clear_aspm = 0;
> +	bool hotadd = system_state != SYSTEM_BOOTING;
>  
>  	root = kzalloc(sizeof(struct acpi_pci_root), GFP_KERNEL);
>  	if (!root)
> @@ -557,6 +559,11 @@ static int acpi_pci_root_add(struct acpi_device *device,
>  	strcpy(acpi_device_class(device), ACPI_PCI_ROOT_CLASS);
>  	device->driver_data = root;
>  
> +	if (hotadd && dmar_device_add(handle)) {
> +		result = -ENXIO;
> +		goto end;
> +	}
> +
>  	pr_info(PREFIX "%s [%s] (domain %04x %pR)\n",
>  	       acpi_device_name(device), acpi_device_bid(device),
>  	       root->segment, &root->secondary);
> @@ -583,7 +590,7 @@ static int acpi_pci_root_add(struct acpi_device *device,
>  			root->segment, (unsigned int)root->secondary.start);
>  		device->driver_data = NULL;
>  		result = -ENODEV;
> -		goto end;
> +		goto remove_dmar;
>  	}
>  
>  	if (clear_aspm) {
> @@ -597,7 +604,7 @@ static int acpi_pci_root_add(struct acpi_device *device,
>  	if (device->wakeup.flags.run_wake)
>  		device_set_run_wake(root->bus->bridge, true);
>  
> -	if (system_state != SYSTEM_BOOTING) {
> +	if (hotadd) {
>  		pcibios_resource_survey_bus(root->bus);
>  		pci_assign_unassigned_root_bus_resources(root->bus);
>  	}
> @@ -607,6 +614,9 @@ static int acpi_pci_root_add(struct acpi_device *device,
>  	pci_unlock_rescan_remove();
>  	return 1;
>  
> +remove_dmar:
> +	if (hotadd)
> +		dmar_device_remove(handle);
>  end:
>  	kfree(root);
>  	return result;
> @@ -625,6 +635,8 @@ static void acpi_pci_root_remove(struct acpi_device *device)
>  
>  	pci_remove_root_bus(root->bus);
>  
> +	dmar_device_remove(device->handle);
> +
>  	pci_unlock_rescan_remove();
>  
>  	kfree(root);
> -- 
> 1.7.10.4
> 

^ permalink raw reply	[flat|nested] 35+ messages in thread
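[Editor's note: the patch above registers the DMAR unit early in acpi_pci_root_add() for hot-added bridges and unwinds that registration via the new remove_dmar label if a later step fails. The sketch below models just that goto-based unwind; root_add_sketch(), fake_dmar_add(), and fake_dmar_remove() are hypothetical stand-ins for the real acpi_pci_root_add()/dmar_device_add()/dmar_device_remove(), with error codes written as plain integers.]

```c
#include <stdbool.h>
#include <stdlib.h>

/* Stand-ins for dmar_device_add()/dmar_device_remove(); the real
 * functions take an acpi_handle. 0 means success. */
static int fake_dmar_add(bool fail) { return fail ? -1 : 0; }
static int removed_calls;
static void fake_dmar_remove(void) { removed_calls++; }

/* Sketch of the patch's unwind pattern: register the DMAR unit early
 * when hot-adding, and jump to remove_dmar on any later failure so the
 * registration is undone before the root structure is freed. */
static int root_add_sketch(bool hotadd, bool scan_fails, bool dmar_fails)
{
	void *root = malloc(16); /* stands in for the acpi_pci_root allocation */
	int result;

	if (!root)
		return -12; /* -ENOMEM */

	if (hotadd && fake_dmar_add(dmar_fails)) {
		result = -6; /* -ENXIO, as in the patch */
		goto end;
	}

	if (scan_fails) {
		result = -19; /* -ENODEV: bus scan failed */
		goto remove_dmar;
	}

	free(root);
	return 1; /* success, matching the driver's return value */

remove_dmar:
	if (hotadd)
		fake_dmar_remove();
end:
	free(root);
	return result;
}
```

The key property, as in the patch, is that dmar_device_remove() runs exactly once per failed hot-add after a successful dmar_device_add(), and never on the boot path.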


end of thread, other threads:[~2014-09-24 18:37 UTC | newest]

Thread overview: 35+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2014-09-19  5:18 [Patch Part3 V6 0/8] Enable support of Intel DMAR device hotplug Jiang Liu
2014-09-19  5:18 ` [Patch Part3 V6 1/8] iommu/vt-d: Introduce helper function dmar_walk_resources() Jiang Liu
2014-09-19  6:49   ` Yijing Wang
2014-09-19  5:18 ` [Patch Part3 V6 2/8] iommu/vt-d: Dynamically allocate and free seq_id for DMAR units Jiang Liu
2014-09-19  5:18 ` [Patch Part3 V6 3/8] iommu/vt-d: Implement DMAR unit hotplug framework Jiang Liu
2014-09-19  5:18 ` [Patch Part3 V6 4/8] iommu/vt-d: Search for ACPI _DSM method for DMAR hotplug Jiang Liu
2014-09-19  5:18 ` [Patch Part3 V6 5/8] iommu/vt-d: Enhance intel_irq_remapping driver to support DMAR unit hotplug Jiang Liu
2014-09-19  6:49   ` Yijing Wang
2014-09-19  5:18 ` [Patch Part3 V6 6/8] iommu/vt-d: Enhance error recovery in function intel_enable_irq_remapping() Jiang Liu
2014-09-19  5:18 ` [Patch Part3 V6 7/8] iommu/vt-d: Enhance intel-iommu driver to support DMAR unit hotplug Jiang Liu
2014-09-19  5:18 ` [Patch Part3 V6 8/8] pci, ACPI, iommu: Enhance pci_root to support DMAR device hotplug Jiang Liu
2014-09-24 18:37   ` Bjorn Helgaas
