linux-kernel.vger.kernel.org archive mirror
* [Intel-IOMMU 00/10] Intel IOMMU Support
@ 2007-06-06 18:56 anil.s.keshavamurthy
  2007-06-06 18:56 ` [Intel-IOMMU 01/10] DMAR detection and parsing logic anil.s.keshavamurthy
                   ` (9 more replies)
  0 siblings, 10 replies; 64+ messages in thread
From: anil.s.keshavamurthy @ 2007-06-06 18:56 UTC (permalink / raw)
  To: linux-kernel
  Cc: akpm, ak, gregkh, muli, asit.k.mallick, suresh.b.siddha,
	anil.s.keshavamurthy, arjan, ashok.raj, shaohua.li, davem

Sorry for the resend as my previous posting did not make
it to several people.

Hi,
	We are pleased to announce the revised version of
the Intel IOMMU driver. This driver incorporates feedback
received from Andi Kleen, David Miller and
several others.

The most notable changes from the previous posting (apart from
general code cleanup) are:

1) Replaced the linear linked list with an RB tree to manage IOVAs.

2) IOVAs are now allocated starting from the card's maximum DMA address capability or
the 32-bit DMA limit, whichever is lower (see the sketch after this list). This removed
the need to preserve certain address ranges when multiple cards with different DMA
address capabilities share the same domain.

3) Implements generic pre-allocated pools (a.k.a. resource pools) to allocate
memory for IOVAs and for VT-d page tables. These pools grow
automatically in the background (work queued to keventd) based
on demand.

4) Tuned the locking for IOVA allocation and freeing.

5) Changed the command line options for the ISA and GFX workarounds to CONFIG options,
so that once all the components adhere to the PCI-DMA APIs we can
easily remove these workarounds.
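
As an illustration of (2), the upper limit for a device's IOVA allocation
is conceptually the lower of the card's DMA mask and the 32-bit DMA limit.
A rough sketch of the idea (the function name below is illustrative, not
the exact driver code):

	/* Illustrative only: allocate IOVAs top-down from the lowest of
	 * the device's DMA capability and the 32-bit DMA limit.
	 */
	static unsigned long device_iova_limit_pfn(struct pci_dev *pdev)
	{
		u64 limit = min_t(u64, pdev->dma_mask, DMA_32BIT_MASK);

		return limit >> PAGE_SHIFT_4K;	/* highest pfn alloc_iova() may return */
	}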

With all the above changes, performance improved greatly; the results
showed that performance with the IOMMU enabled was comparable to a
configuration without the IOMMU.


Once again, thanks for providing valuable feedback. Please
apply this set of patches to -mm if you have no further objections.


Cheers,
-Anil Keshavamurthy
e-mail: anil.s.keshavamurthy@intel.com
Open Source Technology Center
Intel Corp.

-- 

^ permalink raw reply	[flat|nested] 64+ messages in thread

* [Intel-IOMMU 01/10] DMAR detection and parsing logic
  2007-06-06 18:56 [Intel-IOMMU 00/10] Intel IOMMU Support anil.s.keshavamurthy
@ 2007-06-06 18:56 ` anil.s.keshavamurthy
  2007-06-06 18:57 ` [Intel-IOMMU 02/10] Library routine for pre-allocat pool handling anil.s.keshavamurthy
                   ` (8 subsequent siblings)
  9 siblings, 0 replies; 64+ messages in thread
From: anil.s.keshavamurthy @ 2007-06-06 18:56 UTC (permalink / raw)
  To: linux-kernel
  Cc: akpm, ak, gregkh, muli, asit.k.mallick, suresh.b.siddha,
	anil.s.keshavamurthy, arjan, ashok.raj, shaohua.li, davem

[-- Attachment #1: dmar_detection.patch --]
[-- Type: text/plain, Size: 14490 bytes --]

This patch adds support for early detection and parsing of DMARs
(DMA remapping hardware units) reported to the OS via ACPI tables.
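
For context, a minimal sketch of how these entry points might be
exercised by a later caller (the function below is illustrative and not
part of this patch):

	#include <linux/kernel.h>
	#include <linux/list.h>
	#include <linux/dmar.h>

	/* Illustrative only: detect the ACPI DMAR table early, parse it,
	 * then walk the DRHD units registered on dmar_drhd_units.
	 */
	static int __init example_dmar_probe(void)
	{
		struct dmar_drhd_unit *drhd;

		if (!early_dmar_detect())	/* no ACPI DMAR table present */
			return -ENODEV;

		if (dmar_table_init())		/* parse DRHD/RMRR structures */
			return -ENODEV;

		list_for_each_entry(drhd, &dmar_drhd_units, list)
			printk(KERN_INFO "DRHD base 0x%Lx include_all=%d\n",
				(unsigned long long)drhd->reg_base_addr,
				drhd->include_all);
		return 0;
	}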

Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
---
 arch/x86_64/Kconfig   |   11 +
 drivers/pci/Makefile  |    3 
 drivers/pci/dmar.c    |  318 ++++++++++++++++++++++++++++++++++++++++++++++++++
 include/acpi/actbl1.h |   27 +++-
 include/linux/dmar.h  |   52 ++++++++
 5 files changed, 404 insertions(+), 7 deletions(-)

Index: linux-2.6.22-rc3/arch/x86_64/Kconfig
===================================================================
--- linux-2.6.22-rc3.orig/arch/x86_64/Kconfig	2007-06-04 12:28:13.000000000 -0700
+++ linux-2.6.22-rc3/arch/x86_64/Kconfig	2007-06-04 12:33:15.000000000 -0700
@@ -716,6 +716,17 @@
 	bool "Support mmconfig PCI config space access"
 	depends on PCI && ACPI
 
+config DMAR
+	bool "Support for DMA Remapping Devices (EXPERIMENTAL)"
+	depends on PCI_MSI && ACPI && EXPERIMENTAL
+	default y
+	help
+	  DMA remapping (DMAR) device support enables independent address
+	  translations for Direct Memory Access (DMA) from devices.
+	  These DMA remapping devices are reported via ACPI tables
+	  and include the PCI device scope covered by each DMA
+	  remapping device.
+
 source "drivers/pci/pcie/Kconfig"
 
 source "drivers/pci/Kconfig"
Index: linux-2.6.22-rc3/drivers/pci/Makefile
===================================================================
--- linux-2.6.22-rc3.orig/drivers/pci/Makefile	2007-06-04 12:28:13.000000000 -0700
+++ linux-2.6.22-rc3/drivers/pci/Makefile	2007-06-04 12:33:15.000000000 -0700
@@ -20,6 +20,9 @@
 # Build the Hypertransport interrupt support
 obj-$(CONFIG_HT_IRQ) += htirq.o
 
+# Build Intel IOMMU support
+obj-$(CONFIG_DMAR) += dmar.o
+
 #
 # Some architectures use the generic PCI setup functions
 #
Index: linux-2.6.22-rc3/drivers/pci/dmar.c
===================================================================
--- /dev/null	1970-01-01 00:00:00.000000000 +0000
+++ linux-2.6.22-rc3/drivers/pci/dmar.c	2007-06-04 12:33:15.000000000 -0700
@@ -0,0 +1,318 @@
+/*
+ * Copyright (c) 2006, Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
+ * Place - Suite 330, Boston, MA 02111-1307 USA.
+ *
+ * 	Copyright (C) Ashok Raj <ashok.raj@intel.com>
+ *	Copyright (C) Shaohua Li <shaohua.li@intel.com>
+ *
+ * 	This file implements early detection/parsing of DMA Remapping Devices
+ * reported to OS through BIOS via DMA remapping reporting (DMAR) ACPI
+ * tables.
+ */
+
+#include <linux/pci.h>
+#include <linux/dmar.h>
+
+#undef PREFIX
+#define PREFIX "DMAR:"
+
+/* No locks are needed as DMA remapping hardware unit
+ * list is constructed at boot time and hotplug of
+ * these units is not supported by the architecture.
+ */
+LIST_HEAD(dmar_drhd_units);
+LIST_HEAD(dmar_rmrr_units);
+
+static struct acpi_table_header * __initdata dmar_tbl;
+
+static void __init dmar_register_drhd_unit(struct dmar_drhd_unit *drhd)
+{
+	/*
+	 * add INCLUDE_ALL at the tail, so scan the list will find it at
+	 * the very end.
+	 */
+	if (drhd->include_all)
+		list_add_tail(&drhd->list, &dmar_drhd_units);
+	else
+		list_add(&drhd->list, &dmar_drhd_units);
+}
+
+static void __init dmar_register_rmrr_unit(struct dmar_rmrr_unit *rmrr)
+{
+	list_add(&rmrr->list, &dmar_rmrr_units);
+}
+
+static int __init dmar_parse_one_dev_scope(struct acpi_dmar_device_scope *scope,
+					   struct pci_dev **dev, u16 segment)
+{
+	struct pci_bus *bus;
+	struct pci_dev *pdev = NULL;
+	struct acpi_dmar_pci_path *path;
+	int count;
+
+	bus = pci_find_bus(segment, scope->bus);
+	path = (struct acpi_dmar_pci_path *)(scope + 1);
+	count = (scope->length - sizeof(struct acpi_dmar_device_scope))
+		/sizeof(struct acpi_dmar_pci_path);
+
+	while (count) {
+		if (pdev)
+			pci_dev_put(pdev);
+		/*
+		 * Some BIOSes list nonexistent devices in the DMAR table;
+		 * just ignore them.
+		 */
+		if (!bus) {
+			printk(KERN_WARNING
+			PREFIX "Device scope bus [%d] not found\n",
+			scope->bus);
+			break;
+		}
+		pdev = pci_get_slot(bus, PCI_DEVFN(path->dev, path->fn));
+		if (!pdev) {
+			printk(KERN_WARNING PREFIX
+			"Device scope device [%04x:%02x:%02x.%02x] not found\n",
+				segment, bus->number, path->dev, path->fn);
+			break;
+		}
+		path ++;
+		count --;
+		bus = pdev->subordinate;
+	}
+	if (!pdev) {
+		printk(KERN_WARNING PREFIX
+		"Device scope device [%04x:%02x:%02x.%02x] not found\n",
+		segment, scope->bus, path->dev, path->fn);
+		*dev = NULL;
+		return 0;
+	}
+	if ((scope->entry_type == ACPI_DMAR_SCOPE_TYPE_ENDPOINT && pdev->subordinate)
+	   || (scope->entry_type == ACPI_DMAR_SCOPE_TYPE_BRIDGE && !pdev->subordinate)) {
+		printk(KERN_WARNING PREFIX "Device scope type does not match for %s\n", pci_name(pdev));
+		pci_dev_put(pdev);
+		return -EINVAL;
+	}
+	*dev = pdev;
+	return 0;
+}
+
+static int __init dmar_parse_dev_scope(void *start, void *end, int *cnt,
+				       struct pci_dev ***devices, u16 segment)
+{
+	struct acpi_dmar_device_scope *scope;
+	void * tmp = start;
+	int index;
+	int ret;
+
+	*cnt = 0;
+	while (start < end) {
+		scope = start;
+		if (scope->entry_type == ACPI_DMAR_SCOPE_TYPE_ENDPOINT ||
+		    scope->entry_type == ACPI_DMAR_SCOPE_TYPE_BRIDGE)
+			(*cnt)++;
+		else
+			printk(KERN_WARNING PREFIX "Unsupported device scope\n");
+		start += scope->length;
+	}
+	if (*cnt == 0)
+		return 0;
+
+	*devices = kcalloc(*cnt, sizeof(struct pci_dev *), GFP_KERNEL);
+	if (!*devices)
+		return -ENOMEM;
+
+	start = tmp;
+	index = 0;
+	while (start < end) {
+		scope = start;
+		if (scope->entry_type == ACPI_DMAR_SCOPE_TYPE_ENDPOINT ||
+		    scope->entry_type == ACPI_DMAR_SCOPE_TYPE_BRIDGE) {
+			ret = dmar_parse_one_dev_scope(scope,
+				&(*devices)[index], segment);
+			if (ret) {
+				kfree(*devices);
+				return ret;
+			}
+			index ++;
+		}
+		start += scope->length;
+	}
+
+	return 0;
+}
+
+/**
+ * dmar_parse_one_drhd - parses exactly one DMA remapping hardware definition
+ * structure which uniquely represents one DMA remapping hardware unit
+ * present in the platform
+ */
+static int __init
+dmar_parse_one_drhd(struct acpi_dmar_header *header)
+{
+	struct acpi_dmar_hardware_unit * drhd = (struct acpi_dmar_hardware_unit *)header;
+	struct dmar_drhd_unit *dmaru;
+	int ret = 0;
+	static int include_all;
+
+	dmaru = kzalloc(sizeof(*dmaru), GFP_KERNEL);
+	if (!dmaru)
+		return -ENOMEM;
+
+	dmaru->reg_base_addr = drhd->address;
+	dmaru->include_all = drhd->flags & 0x1; /* BIT0: INCLUDE_ALL */
+
+	if (!dmaru->include_all)
+		ret = dmar_parse_dev_scope((void *)(drhd + 1),
+				((void *)drhd) + header->length,
+				&dmaru->devices_cnt, &dmaru->devices,
+				drhd->segment);
+	else {
+		/* Only allow one INCLUDE_ALL */
+		if (include_all) {
+			printk(KERN_WARNING PREFIX "Only one INCLUDE_ALL "
+				"device scope is allowed\n");
+			ret = -EINVAL;
+		}
+		include_all = 1;
+	}
+
+	if (ret || (dmaru->devices_cnt == 0 && !dmaru->include_all))
+		kfree(dmaru);
+	else
+		dmar_register_drhd_unit(dmaru);
+	return ret;
+}
+
+static int __init
+dmar_parse_one_rmrr(struct acpi_dmar_header *header)
+{
+	struct acpi_dmar_reserved_memory *rmrr = (struct acpi_dmar_reserved_memory *)header;
+	struct dmar_rmrr_unit *rmrru;
+	int ret = 0;
+
+	rmrru = kzalloc(sizeof(*rmrru), GFP_KERNEL);
+	if (!rmrru)
+		return -ENOMEM;
+
+	rmrru->base_address = rmrr->base_address;
+	rmrru->end_address = rmrr->end_address;
+	ret = dmar_parse_dev_scope((void *)(rmrr + 1),
+		((void*)rmrr) + header->length,
+		&rmrru->devices_cnt, &rmrru->devices, rmrr->segment);
+
+	if (ret || (rmrru->devices_cnt == 0))
+		kfree(rmrru);
+	else
+		dmar_register_rmrr_unit(rmrru);
+	return ret;
+}
+
+static void __init
+dmar_table_print_dmar_entry(struct acpi_dmar_header *header)
+{
+	struct acpi_dmar_hardware_unit *drhd;
+	struct acpi_dmar_reserved_memory *rmrr;
+
+	switch (header->type) {
+	case ACPI_DMAR_TYPE_HARDWARE_UNIT:
+		drhd = (struct acpi_dmar_hardware_unit *)header;
+		printk (KERN_INFO PREFIX
+			"DRHD (flags: 0x%08x)base: 0x%016Lx\n",
+			drhd->flags, drhd->address);
+		break;
+	case ACPI_DMAR_TYPE_RESERVED_MEMORY:
+		rmrr = (struct acpi_dmar_reserved_memory *)header;
+
+		printk (KERN_INFO PREFIX
+			"RMRR base: 0x%016Lx end: 0x%016Lx\n",
+			rmrr->base_address, rmrr->end_address);
+		break;
+	}
+}
+
+/**
+ * parse_dmar_table - parses the DMA reporting table
+ */
+static int __init
+parse_dmar_table(void)
+{
+	struct acpi_table_dmar *dmar;
+	struct acpi_dmar_header *entry_header;
+	int ret = 0;
+
+	dmar = (struct acpi_table_dmar *)dmar_tbl;
+
+	if (!dmar->width) {
+		printk (KERN_WARNING PREFIX "Zero: Invalid DMAR haw\n");
+		return -EINVAL;
+	}
+
+	printk (KERN_INFO PREFIX "Host address width %d\n",
+		dmar->width + 1);
+
+	entry_header = (struct acpi_dmar_header *)(dmar + 1);
+	while (((unsigned long)entry_header) < (((unsigned long)dmar) + dmar_tbl->length)) {
+		dmar_table_print_dmar_entry(entry_header);
+
+		switch (entry_header->type) {
+		case ACPI_DMAR_TYPE_HARDWARE_UNIT:
+			ret = dmar_parse_one_drhd(entry_header);
+			break;
+		case ACPI_DMAR_TYPE_RESERVED_MEMORY:
+			ret = dmar_parse_one_rmrr(entry_header);
+			break;
+		default:
+			printk(KERN_WARNING PREFIX "Unknown DMAR structure type\n");
+			ret = 0; /* for forward compatibility */
+			break;
+		}
+		if (ret)
+			break;
+
+		entry_header = ((void *)entry_header + entry_header->length);
+	}
+	return ret;
+}
+
+
+int __init dmar_table_init(void)
+{
+
+	parse_dmar_table();
+	if (list_empty(&dmar_drhd_units)) {
+		printk(KERN_ERR PREFIX "No DMAR devices found\n");
+		return -ENODEV;
+	}
+	return 0;
+}
+
+/**
+ * early_dmar_detect - checks to see if the platform supports DMAR devices
+ */
+int __init early_dmar_detect(void)
+{
+	acpi_status status = AE_OK;
+
+	/* if we could find DMAR table, then there are DMAR devices */
+	status = acpi_get_table(ACPI_SIG_DMAR, 0,
+				(struct acpi_table_header **)&dmar_tbl);
+
+	if (ACPI_SUCCESS(status) && !dmar_tbl) {
+		printk (KERN_WARNING PREFIX "Unable to map DMAR\n");
+		status = AE_NOT_FOUND;
+	}
+
+	return (ACPI_SUCCESS(status) ? 1 : 0);
+}
Index: linux-2.6.22-rc3/include/acpi/actbl1.h
===================================================================
--- linux-2.6.22-rc3.orig/include/acpi/actbl1.h	2007-06-04 12:28:13.000000000 -0700
+++ linux-2.6.22-rc3/include/acpi/actbl1.h	2007-06-04 12:33:15.000000000 -0700
@@ -257,7 +257,8 @@
 struct acpi_table_dmar {
 	struct acpi_table_header header;	/* Common ACPI table header */
 	u8 width;		/* Host Address Width */
-	u8 reserved[11];
+	u8 flags;
+	u8 reserved[10];
 };
 
 /* DMAR subtable header */
@@ -265,8 +266,6 @@
 struct acpi_dmar_header {
 	u16 type;
 	u16 length;
-	u8 flags;
-	u8 reserved[3];
 };
 
 /* Values for subtable type in struct acpi_dmar_header */
@@ -274,13 +273,15 @@
 enum acpi_dmar_type {
 	ACPI_DMAR_TYPE_HARDWARE_UNIT = 0,
 	ACPI_DMAR_TYPE_RESERVED_MEMORY = 1,
-	ACPI_DMAR_TYPE_RESERVED = 2	/* 2 and greater are reserved */
+	ACPI_DMAR_TYPE_ATSR = 2,
+	ACPI_DMAR_TYPE_RESERVED = 3	/* 3 and greater are reserved */
 };
 
 struct acpi_dmar_device_scope {
 	u8 entry_type;
 	u8 length;
-	u8 segment;
+	u16 reserved;
+	u8 enumeration_id;
 	u8 bus;
 };
 
@@ -290,7 +291,14 @@
 	ACPI_DMAR_SCOPE_TYPE_NOT_USED = 0,
 	ACPI_DMAR_SCOPE_TYPE_ENDPOINT = 1,
 	ACPI_DMAR_SCOPE_TYPE_BRIDGE = 2,
-	ACPI_DMAR_SCOPE_TYPE_RESERVED = 3	/* 3 and greater are reserved */
+	ACPI_DMAR_SCOPE_TYPE_IOAPIC = 3,
+	ACPI_DMAR_SCOPE_TYPE_HPET = 4,
+	ACPI_DMAR_SCOPE_TYPE_RESERVED = 5	/* 5 and greater are reserved */
+};
+
+struct acpi_dmar_pci_path {
+	u8 dev;
+	u8 fn;
 };
 
 /*
@@ -301,6 +309,9 @@
 
 struct acpi_dmar_hardware_unit {
 	struct acpi_dmar_header header;
+	u8 flags;
+	u8 reserved;
+	u16 segment;
 	u64 address;		/* Register Base Address */
 };
 
@@ -312,7 +323,9 @@
 
 struct acpi_dmar_reserved_memory {
 	struct acpi_dmar_header header;
-	u64 address;		/* 4_k aligned base address */
+	u16 reserved;
+	u16 segment;
+	u64 base_address;		/* 4_k aligned base address */
 	u64 end_address;	/* 4_k aligned limit address */
 };
 
Index: linux-2.6.22-rc3/include/linux/dmar.h
===================================================================
--- /dev/null	1970-01-01 00:00:00.000000000 +0000
+++ linux-2.6.22-rc3/include/linux/dmar.h	2007-06-04 12:33:15.000000000 -0700
@@ -0,0 +1,52 @@
+/*
+ * Copyright (c) 2006, Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
+ * Place - Suite 330, Boston, MA 02111-1307 USA.
+ *
+ * Copyright (C) Ashok Raj <ashok.raj@intel.com>
+ * Copyright (C) Shaohua Li <shaohua.li@intel.com>
+ */
+
+#ifndef __DMAR_H__
+#define __DMAR_H__
+
+#include <linux/acpi.h>
+#include <linux/types.h>
+
+
+extern int dmar_table_init(void);
+extern int early_dmar_detect(void);
+
+extern struct list_head dmar_drhd_units;
+extern struct list_head dmar_rmrr_units;
+
+struct dmar_drhd_unit {
+	struct list_head list;		/* list of drhd units	*/
+	u64	reg_base_addr;		/* register base address*/
+	struct	pci_dev **devices; 	/* target device array	*/
+	int	devices_cnt;		/* target device count	*/
+	u8	ignored:1; 		/* ignore drhd		*/
+	u8	include_all:1;
+	struct intel_iommu *iommu;
+};
+
+struct dmar_rmrr_unit {
+	struct list_head list;		/* list of rmrr units	*/
+	u64	base_address;		/* reserved base address*/
+	u64	end_address;		/* reserved end address */
+	struct pci_dev **devices;	/* target devices */
+	int	devices_cnt;		/* target device count */
+};
+
+#endif /* __DMAR_H__ */

-- 

^ permalink raw reply	[flat|nested] 64+ messages in thread

* [Intel-IOMMU 02/10] Library routine for pre-allocat pool handling
  2007-06-06 18:56 [Intel-IOMMU 00/10] Intel IOMMU Support anil.s.keshavamurthy
  2007-06-06 18:56 ` [Intel-IOMMU 01/10] DMAR detection and parsing logic anil.s.keshavamurthy
@ 2007-06-06 18:57 ` anil.s.keshavamurthy
  2007-06-07 23:27   ` Andrew Morton
  2007-06-06 18:57 ` [Intel-IOMMU 03/10] PCI generic helper function anil.s.keshavamurthy
                   ` (7 subsequent siblings)
  9 siblings, 1 reply; 64+ messages in thread
From: anil.s.keshavamurthy @ 2007-06-06 18:57 UTC (permalink / raw)
  To: linux-kernel
  Cc: akpm, ak, gregkh, muli, asit.k.mallick, suresh.b.siddha,
	anil.s.keshavamurthy, arjan, ashok.raj, shaohua.li, davem

[-- Attachment #1: resource_pool.patch --]
[-- Type: text/plain, Size: 8705 bytes --]
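
(This patch carries no changelog text; for reference, a minimal usage
sketch of the pool API it introduces. The kmalloc/kfree wrappers, the
object size and the counts below are illustrative, not part of the
patch. Note that free objects are chained through their own memory, so
alloc_size must be at least sizeof(struct list_head).)

	#include <linux/respool.h>

	/* Illustrative allocator/free callbacks for the pool. */
	static void *example_alloc(unsigned int size, gfp_t flags)
	{
		return kmalloc(size, flags);
	}

	static void example_free(void *vaddr, unsigned int size)
	{
		kfree(vaddr);
	}

	static struct resource_pool example_pool;

	static int __init example_pool_setup(void)
	{
		void *obj;

		/* Pre-allocate 10 objects of 64 bytes; grow/shrink by 4. */
		if (init_resource_pool(&example_pool, 10, 64, 4,
					example_alloc, example_free))
			return -ENOMEM;	/* pool could not be populated */

		obj = get_resource_pool_obj(&example_pool);	/* take an object */
		if (obj)
			put_resource_pool_obj(obj, &example_pool);	/* give it back */
		return 0;
	}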

Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
---
 include/linux/respool.h |   43 +++++++++
 lib/Makefile            |    1 
 lib/respool.c           |  222 ++++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 266 insertions(+)

Index: linux-2.6.22-rc3/include/linux/respool.h
===================================================================
--- /dev/null	1970-01-01 00:00:00.000000000 +0000
+++ linux-2.6.22-rc3/include/linux/respool.h	2007-06-06 11:33:24.000000000 -0700
@@ -0,0 +1,43 @@
+/*
+ * respool.c - library routines for handling generic pre-allocated pool of objects
+ *
+ * Copyright (c) 2006, Intel Corporation.
+ *
+ * This file is released under the GPLv2.
+ *
+ * Copyright (C) 2006 Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
+ *
+ */
+
+#ifndef _RESPOOL_H_
+#define _RESPOOL_H_
+
+#include <linux/types.h>
+#include <linux/kernel.h>
+#include <linux/slab.h>
+#include <linux/workqueue.h>
+
+typedef void *(*rpool_alloc_t)(unsigned int, gfp_t);
+typedef void (*rpool_free_t)(void *, unsigned int);
+
+struct resource_pool {
+	struct work_struct work;
+	spinlock_t	pool_lock;	/* pool lock to walk the pool_head */
+	struct list_head pool_head;	/* pool objects list head	*/
+	unsigned int	min_count;	/* min count to maintain	*/
+	unsigned int	grow_count;	/* grow by count when time to grow */
+	unsigned int	curr_count;	/* count of current free objects */
+	unsigned int	alloc_size;	/* objects size			*/
+	rpool_alloc_t 	alloc_mem;	/* pool mem alloc function pointer */
+	rpool_free_t 	free_mem;	/* pool mem free function pointer */
+};
+
+void *get_resource_pool_obj(struct resource_pool *ppool);
+void put_resource_pool_obj(void * vaddr, struct resource_pool *ppool);
+void destroy_resource_pool(struct resource_pool *ppool);
+int init_resource_pool(struct resource_pool *res,
+	unsigned int min_count, unsigned int alloc_size,
+	unsigned int grow_count, rpool_alloc_t alloc_fn,
+	rpool_free_t free_fn);
+
+#endif
Index: linux-2.6.22-rc3/lib/Makefile
===================================================================
--- linux-2.6.22-rc3.orig/lib/Makefile	2007-06-06 11:33:21.000000000 -0700
+++ linux-2.6.22-rc3/lib/Makefile	2007-06-06 11:33:24.000000000 -0700
@@ -58,6 +58,7 @@
 obj-$(CONFIG_AUDIT_GENERIC) += audit.o
 
 obj-$(CONFIG_SWIOTLB) += swiotlb.o
+obj-$(CONFIG_DMAR) += respool.o
 obj-$(CONFIG_FAULT_INJECTION) += fault-inject.o
 
 lib-$(CONFIG_GENERIC_BUG) += bug.o
Index: linux-2.6.22-rc3/lib/respool.c
===================================================================
--- /dev/null	1970-01-01 00:00:00.000000000 +0000
+++ linux-2.6.22-rc3/lib/respool.c	2007-06-06 11:34:46.000000000 -0700
@@ -0,0 +1,222 @@
+/*
+ * respool.c - library routines for handling generic pre-allocated pool of objects
+ *
+ * Copyright (c) 2006, Intel Corporation.
+ *
+ * This file is released under the GPLv2.
+ *
+ * Copyright (C) 2006 Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
+ */
+
+#include <linux/respool.h>
+
+/**
+ * get_resource_pool_obj - gets an object from the pool
+ * @ppool - resource pool in question
+ * This function gets an object from the pool and
+ * if the pool count drops below min_count, this
+ * function schedules work to grow the pool. If
+ * no elements are found in the pool then this function
+ * tries to get memory from the kernel.
+ */
+void * get_resource_pool_obj(struct resource_pool *ppool)
+{
+	unsigned long	flags;
+	struct list_head *plist;
+	bool queue_work = 0;
+
+	spin_lock_irqsave(&ppool->pool_lock, flags);
+	if (!list_empty(&ppool->pool_head)) {
+		plist = ppool->pool_head.next;
+		list_del(plist);
+		ppool->curr_count--;
+	} else {
+		/*Making sure that curr_count is 0 when list is empty */
+		plist = NULL;
+		BUG_ON(ppool->curr_count != 0);
+	}
+
+	/* Check if pool needs to grow */
+	if (ppool->curr_count <= ppool->min_count)
+		queue_work = 1;
+	spin_unlock_irqrestore(&ppool->pool_lock, flags);
+
+	if (queue_work)
+		schedule_work(&ppool->work); /* queue work to grow the pool */
+
+
+	if (plist) {
+		memset(plist, 0, ppool->alloc_size); /* Zero out memory */
+		return plist;
+	}
+
+	/* Out of luck, try to get memory from kernel */
+	plist = (struct list_head *)ppool->alloc_mem(ppool->alloc_size,
+			GFP_ATOMIC);
+
+	return plist;
+}
+
+/**
+ * put_resource_pool_obj - puts an object back to the pool
+ * @vaddr - object's address
+ * @ppool - resource pool in question.
+ * This function puts an object back to the pool.
+ */
+void put_resource_pool_obj(void * vaddr, struct resource_pool *ppool)
+{
+	unsigned long	flags;
+	struct list_head *plist = (struct list_head *)vaddr;
+	bool queue_work = 0;
+
+	BUG_ON(!vaddr);
+	BUG_ON(!ppool);
+
+	spin_lock_irqsave(&ppool->pool_lock, flags);
+	list_add(plist, &ppool->pool_head);
+	ppool->curr_count++;
+	if (ppool->curr_count > (ppool->min_count +
+		ppool->grow_count * 2))
+		queue_work = 1;
+	spin_unlock_irqrestore(&ppool->pool_lock, flags);
+
+	if (queue_work)
+		schedule_work(&ppool->work); /* queue work to shrink the pool */
+}
+
+void
+__grow_resource_pool(struct resource_pool *ppool,
+	unsigned int grow_count)
+{
+	unsigned long	flags;
+	struct list_head *plist;
+
+	while(grow_count) {
+		plist = (struct list_head *)ppool->alloc_mem(ppool->alloc_size,
+			GFP_KERNEL);
+
+		if (!plist)
+			break;
+
+		/* Add the element to the list */
+		spin_lock_irqsave(&ppool->pool_lock, flags);
+		list_add(plist, &ppool->pool_head);
+		ppool->curr_count++;
+		spin_unlock_irqrestore(&ppool->pool_lock, flags);
+		grow_count--;
+	}
+}
+
+void
+__shrink_resource_pool(struct resource_pool *ppool,
+	unsigned int shrink_count)
+{
+	unsigned long	flags;
+	struct list_head *plist;
+
+	while (shrink_count) {
+		/* remove an object from the pool */
+		spin_lock_irqsave(&ppool->pool_lock, flags);
+		if (list_empty(&ppool->pool_head)) {
+			spin_unlock_irqrestore(&ppool->pool_lock, flags);
+			break;
+		}
+		plist = ppool->pool_head.next;
+		list_del(plist);
+		ppool->curr_count--;
+		spin_unlock_irqrestore(&ppool->pool_lock, flags);
+		ppool->free_mem(plist, ppool->alloc_size);
+		shrink_count--;
+	}
+}
+
+
+/**
+ * resize_resource_pool - resize the given resource pool
+ * @work - work struct
+ * This function gets the resource pool pointer from the
+ * work struct and grows or shrinks the pool as needed.
+ */
+static void
+resize_resource_pool(struct work_struct * work)
+{
+	struct resource_pool *ppool;
+	unsigned int min_count, grow_count = 0;
+	unsigned int shrink_count = 0;
+	unsigned long	flags;
+
+	ppool = container_of(work, struct resource_pool, work);
+
+	/* compute the minimum count to grow */
+	spin_lock_irqsave(&ppool->pool_lock, flags);
+	min_count = ppool->min_count + ppool->grow_count;
+	if (ppool->curr_count < min_count)
+		grow_count = min_count - ppool->curr_count;
+	else if (ppool->curr_count > min_count + ppool->grow_count)
+		shrink_count = ppool->curr_count - min_count;
+	spin_unlock_irqrestore(&ppool->pool_lock, flags);
+
+	if (grow_count)
+		__grow_resource_pool(ppool, grow_count);
+	else if (shrink_count)
+		__shrink_resource_pool(ppool, shrink_count);
+}
+
+/**
+ * destroy_resource_pool - destroys the given resource pool
+ * @ppool - resource pool in question.
+ * This function walks through its list and frees up the
+ * preallocated objects.
+ */
+void
+destroy_resource_pool(struct resource_pool *ppool)
+{
+	unsigned long	flags;
+	struct list_head *plist;
+
+	spin_lock_irqsave(&ppool->pool_lock, flags);
+	while (!list_empty(&ppool->pool_head)) {
+		plist = ppool->pool_head.next;
+		list_del(plist);
+
+		ppool->free_mem(plist, ppool->alloc_size);
+
+	}
+	ppool->curr_count = 0;
+	spin_unlock_irqrestore(&ppool->pool_lock, flags);
+}
+
+/**
+ * init_resource_pool - initializes the resource pool
+ * @res: resource pool in question.
+ * @min_count: count of objects to pre-allocate
+ * @alloc_size: size of each object
+ * @grow_count: count of objects to grow when required
+ * @alloc_fn: function which allocates memory
+ * @free_fn: function which frees memory
+ *
+ * This function initializes the given resource pool and
+ * populates the min_count of objects to begin with.
+ */
+int
+init_resource_pool(struct resource_pool *res,
+	unsigned int min_count, unsigned int alloc_size,
+	unsigned int grow_count, rpool_alloc_t alloc_fn,
+	rpool_free_t free_fn)
+{
+	res->min_count = min_count;
+	res->alloc_size = alloc_size;
+	res->grow_count = grow_count;
+	res->curr_count = 0;
+	res->alloc_mem = alloc_fn;
+	res->free_mem = free_fn;
+	spin_lock_init(&res->pool_lock);
+	INIT_LIST_HEAD(&res->pool_head);
+	INIT_WORK(&res->work, resize_resource_pool);
+
+	/* grow the pool */
+	resize_resource_pool(&res->work);
+
+	return (res->curr_count == 0);
+}
+

-- 

^ permalink raw reply	[flat|nested] 64+ messages in thread

* [Intel-IOMMU 03/10] PCI generic helper function
  2007-06-06 18:56 [Intel-IOMMU 00/10] Intel IOMMU Support anil.s.keshavamurthy
  2007-06-06 18:56 ` [Intel-IOMMU 01/10] DMAR detection and parsing logic anil.s.keshavamurthy
  2007-06-06 18:57 ` [Intel-IOMMU 02/10] Library routine for pre-allocat pool handling anil.s.keshavamurthy
@ 2007-06-06 18:57 ` anil.s.keshavamurthy
  2007-06-06 18:57 ` [Intel-IOMMU 04/10] clflush_cache_range now takes size param anil.s.keshavamurthy
                   ` (6 subsequent siblings)
  9 siblings, 0 replies; 64+ messages in thread
From: anil.s.keshavamurthy @ 2007-06-06 18:57 UTC (permalink / raw)
  To: linux-kernel
  Cc: akpm, ak, gregkh, muli, asit.k.mallick, suresh.b.siddha,
	anil.s.keshavamurthy, arjan, ashok.raj, shaohua.li, davem

[-- Attachment #1: pcie_port_type.patch --]
[-- Type: text/plain, Size: 4090 bytes --]

When devices are under a p2p bridge, the requester ID of upstream
transactions is replaced by the device ID of the bridge, since the bridge
owns the PCIe transaction. Hence it is necessary to set up translations on
behalf of the bridge as well. Due to this limitation, all devices under a
p2p bridge share the same domain in a DMAR.

For now we just cache whether a device is a native PCIe device or not,
for later use.
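
A small sketch (an illustrative caller, not part of the patch) of how
the helper and the cached flag might be consumed by IOMMU code:

	/* Illustrative only: find which device ID actually appears on the
	 * fabric for DMA from 'pdev' -- the device itself for native PCIe,
	 * or its upstream PCIe-to-PCI bridge for devices behind a p2p
	 * bridge.
	 */
	static struct pci_dev *example_dma_alias(struct pci_dev *pdev)
	{
		struct pci_dev *bridge;

		if (pdev->is_pcie)
			return pdev;	/* transactions carry its own ID */

		bridge = pci_find_upstream_pcie_bridge(pdev);
		return bridge ? bridge : pdev;
	}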

Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
---
 drivers/pci/pci.h    |    1 +
 drivers/pci/probe.c  |   14 ++++++++++++++
 drivers/pci/search.c |   30 ++++++++++++++++++++++++++++++
 include/linux/pci.h  |    2 ++
 4 files changed, 47 insertions(+)

Index: linux-2.6.22-rc3/drivers/pci/pci.h
===================================================================
--- linux-2.6.22-rc3.orig/drivers/pci/pci.h	2007-06-04 12:27:34.000000000 -0700
+++ linux-2.6.22-rc3/drivers/pci/pci.h	2007-06-04 12:36:31.000000000 -0700
@@ -92,3 +92,4 @@
 	return NULL;
 }
 
+struct pci_dev *pci_find_upstream_pcie_bridge(struct pci_dev *pdev);
Index: linux-2.6.22-rc3/drivers/pci/probe.c
===================================================================
--- linux-2.6.22-rc3.orig/drivers/pci/probe.c	2007-06-04 12:27:34.000000000 -0700
+++ linux-2.6.22-rc3/drivers/pci/probe.c	2007-06-04 12:36:31.000000000 -0700
@@ -803,6 +803,19 @@
 	kfree(pci_dev);
 }
 
+static void set_pcie_port_type(struct pci_dev *pdev)
+{
+	int pos;
+	u16 reg16;
+
+	pos = pci_find_capability(pdev, PCI_CAP_ID_EXP);
+	if (!pos)
+		return;
+	pdev->is_pcie = 1;
+	pci_read_config_word(pdev, pos + PCI_EXP_FLAGS, &reg16);
+	pdev->pcie_type = (reg16 & PCI_EXP_FLAGS_TYPE) >> 4;
+}
+
 /**
  * pci_cfg_space_size - get the configuration space size of the PCI device.
  * @dev: PCI device
@@ -917,6 +930,7 @@
 	dev->device = (l >> 16) & 0xffff;
 	dev->cfg_size = pci_cfg_space_size(dev);
 	dev->error_state = pci_channel_io_normal;
+	set_pcie_port_type(dev);
 
 	/* Assume 32-bit PCI; let 64-bit PCI cards (which are far rarer)
 	   set this higher, assuming the system even supports it.  */
Index: linux-2.6.22-rc3/drivers/pci/search.c
===================================================================
--- linux-2.6.22-rc3.orig/drivers/pci/search.c	2007-06-04 12:27:34.000000000 -0700
+++ linux-2.6.22-rc3/drivers/pci/search.c	2007-06-04 12:36:31.000000000 -0700
@@ -14,6 +14,36 @@
 #include "pci.h"
 
 DECLARE_RWSEM(pci_bus_sem);
+/*
+ * find the upstream PCIE-to-PCI bridge of a PCI device
+ * if the device is PCIE, return NULL
+ * if the device isn't connected to a PCIE bridge (that is its parent is a
+ * legacy PCI bridge and the bridge is directly connected to bus 0), return its
+ * parent
+ */
+struct pci_dev *
+pci_find_upstream_pcie_bridge(struct pci_dev *pdev)
+{
+	struct pci_dev *tmp = NULL;
+
+	if (pdev->is_pcie)
+		return NULL;
+	while (1) {
+		if (!pdev->bus->self)
+			break;
+		pdev = pdev->bus->self;
+		/* a p2p bridge */
+		if (!pdev->is_pcie) {
+			tmp = pdev;
+			continue;
+		}
+		/* PCI device should connect to a PCIE bridge */
+		BUG_ON(pdev->pcie_type != PCI_EXP_TYPE_PCI_BRIDGE);
+		return pdev;
+	}
+
+	return tmp;
+}
 
 static struct pci_bus *pci_do_find_bus(struct pci_bus *bus, unsigned char busnr)
 {
Index: linux-2.6.22-rc3/include/linux/pci.h
===================================================================
--- linux-2.6.22-rc3.orig/include/linux/pci.h	2007-06-04 12:27:34.000000000 -0700
+++ linux-2.6.22-rc3/include/linux/pci.h	2007-06-04 12:36:31.000000000 -0700
@@ -139,6 +139,7 @@
 	unsigned short	subsystem_device;
 	unsigned int	class;		/* 3 bytes: (base,sub,prog-if) */
 	u8		hdr_type;	/* PCI header type (`multi' flag masked out) */
+	u8		pcie_type;	/* PCI-E device/port type */
 	u8		rom_base_reg;	/* which config register controls the ROM */
 	u8		pin;  		/* which interrupt pin this device uses */
 
@@ -181,6 +182,7 @@
 	unsigned int 	msi_enabled:1;
 	unsigned int	msix_enabled:1;
 	unsigned int	is_managed:1;
+	unsigned int	is_pcie:1;
 	atomic_t	enable_cnt;	/* pci_enable_device has been called */
 
 	u32		saved_config_space[16]; /* config space saved at suspend time */

-- 

^ permalink raw reply	[flat|nested] 64+ messages in thread

* [Intel-IOMMU 04/10] clflush_cache_range now takes size param
  2007-06-06 18:56 [Intel-IOMMU 00/10] Intel IOMMU Support anil.s.keshavamurthy
                   ` (2 preceding siblings ...)
  2007-06-06 18:57 ` [Intel-IOMMU 03/10] PCI generic helper function anil.s.keshavamurthy
@ 2007-06-06 18:57 ` anil.s.keshavamurthy
  2007-06-06 18:57 ` [Intel-IOMMU 05/10] IOVA allocation and management routines anil.s.keshavamurthy
                   ` (5 subsequent siblings)
  9 siblings, 0 replies; 64+ messages in thread
From: anil.s.keshavamurthy @ 2007-06-06 18:57 UTC (permalink / raw)
  To: linux-kernel
  Cc: akpm, ak, gregkh, muli, asit.k.mallick, suresh.b.siddha,
	anil.s.keshavamurthy, arjan, ashok.raj, shaohua.li, davem

[-- Attachment #1: clflush_cache_range.patch --]
[-- Type: text/plain, Size: 1703 bytes --]

	Introduce a size parameter for clflush_cache_range(), so callers can
flush an arbitrary range rather than only a whole page.
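
A brief usage sketch (the flushed structure here is illustrative):

	#include <asm/cacheflush.h>

	/* Illustrative only: flush just the bytes of a table entry that the
	 * CPU modified, instead of a whole page as cache_flush_page() did.
	 */
	static void example_flush_entry(void *entry, int entry_size)
	{
		if (cpu_has_clflush)
			clflush_cache_range(entry, entry_size);
	}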

Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
---
 arch/x86_64/mm/pageattr.c       |    6 +++---
 include/asm-x86_64/cacheflush.h |    1 +
 2 files changed, 4 insertions(+), 3 deletions(-)

Index: linux-2.6.22-rc3/arch/x86_64/mm/pageattr.c
===================================================================
--- linux-2.6.22-rc3.orig/arch/x86_64/mm/pageattr.c	2007-06-04 12:27:33.000000000 -0700
+++ linux-2.6.22-rc3/arch/x86_64/mm/pageattr.c	2007-06-04 12:37:30.000000000 -0700
@@ -61,10 +61,10 @@
 	return base;
 } 
 
-static void cache_flush_page(void *adr)
+void clflush_cache_range(void *adr, int size)
 {
 	int i;
-	for (i = 0; i < PAGE_SIZE; i += boot_cpu_data.x86_clflush_size)
+	for (i = 0; i < size; i += boot_cpu_data.x86_clflush_size)
 		asm volatile("clflush (%0)" :: "r" (adr + i));
 }
 
@@ -80,7 +80,7 @@
 	list_for_each_entry(pg, l, lru) {
 		void *adr = page_address(pg);
 		if (cpu_has_clflush)
-			cache_flush_page(adr);
+			clflush_cache_range(adr, PAGE_SIZE);
 	}
 	__flush_tlb_all();
 }
Index: linux-2.6.22-rc3/include/asm-x86_64/cacheflush.h
===================================================================
--- linux-2.6.22-rc3.orig/include/asm-x86_64/cacheflush.h	2007-06-04 12:27:33.000000000 -0700
+++ linux-2.6.22-rc3/include/asm-x86_64/cacheflush.h	2007-06-04 12:37:30.000000000 -0700
@@ -27,6 +27,7 @@
 void global_flush_tlb(void); 
 int change_page_attr(struct page *page, int numpages, pgprot_t prot);
 int change_page_attr_addr(unsigned long addr, int numpages, pgprot_t prot);
+void clflush_cache_range(void *addr, int size);
 
 #ifdef CONFIG_DEBUG_RODATA
 void mark_rodata_ro(void);

-- 

^ permalink raw reply	[flat|nested] 64+ messages in thread

* [Intel-IOMMU 05/10] IOVA allocation and management routines
  2007-06-06 18:56 [Intel-IOMMU 00/10] Intel IOMMU Support anil.s.keshavamurthy
                   ` (3 preceding siblings ...)
  2007-06-06 18:57 ` [Intel-IOMMU 04/10] clflush_cache_range now takes size param anil.s.keshavamurthy
@ 2007-06-06 18:57 ` anil.s.keshavamurthy
  2007-06-07 23:34   ` Andrew Morton
  2007-06-06 18:57 ` [Intel-IOMMU 06/10] Intel IOMMU driver anil.s.keshavamurthy
                   ` (4 subsequent siblings)
  9 siblings, 1 reply; 64+ messages in thread
From: anil.s.keshavamurthy @ 2007-06-06 18:57 UTC (permalink / raw)
  To: linux-kernel
  Cc: akpm, ak, gregkh, muli, asit.k.mallick, suresh.b.siddha,
	anil.s.keshavamurthy, arjan, ashok.raj, shaohua.li, davem

[-- Attachment #1: generic_iova.patch --]
[-- Type: text/plain, Size: 12849 bytes --]

	This code implements generic IOVA allocation and
management. As per Dave's suggestion we now allocate
IO virtual addresses from the higher DMA limit address rather
than from the lower end, which eliminated the need to preserve
IO virtual address ranges for multiple devices sharing the same
domain virtual address space.

This code also uses red-black trees to store the allocated and
reserved IOVA nodes, which showed a good performance improvement
over the previous linear linked list.
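
A minimal usage sketch of the interface (the domain, size and limit
below are illustrative; alloc_iova_mem()/free_iova_mem() are declared
here but must be supplied by the user of this code, as the IOMMU driver
patch later in this series does):

	#include "iova.h"

	static struct iova_domain example_domain;

	static void example_iova_usage(void)
	{
		struct iova *iova;

		init_iova_domain(&example_domain);

		/* Allocate 16 page frames of IOVA space, searching down
		 * from the 32-bit DMA limit. */
		iova = alloc_iova(&example_domain, 16, DMA_32BIT_PFN);
		if (!iova)
			return;

		/* iova->pfn_lo .. iova->pfn_hi is the allocated range. */
		free_iova(&example_domain, iova->pfn_lo);

		put_iova_domain(&example_domain);
	}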

Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
---
 drivers/pci/iova.c |  344 +++++++++++++++++++++++++++++++++++++++++++++++++++++
 drivers/pci/iova.h |   57 ++++++++
 2 files changed, 401 insertions(+)

Index: linux-2.6.22-rc3/drivers/pci/iova.c
===================================================================
--- /dev/null	1970-01-01 00:00:00.000000000 +0000
+++ linux-2.6.22-rc3/drivers/pci/iova.c	2007-06-04 12:40:20.000000000 -0700
@@ -0,0 +1,344 @@
+/*
+ * Copyright (c) 2006, Intel Corporation.
+ *
+ * This file is released under the GPLv2.
+ *
+ * Copyright (C) 2006 Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
+ */
+
+#include "iova.h"
+
+void
+init_iova_domain(struct iova_domain *iovad)
+{
+	spin_lock_init(&iovad->iova_alloc_lock);
+	spin_lock_init(&iovad->iova_rbtree_lock);
+	iovad->rbroot = RB_ROOT;
+	iovad->cached32_node = NULL;
+
+}
+
+static struct rb_node *
+__get_cached_rbnode(struct iova_domain *iovad, unsigned long *limit_pfn)
+{
+	if ((*limit_pfn != DMA_32BIT_PFN) ||
+		(iovad->cached32_node == NULL))
+		return rb_last(&iovad->rbroot);
+	else {
+		struct rb_node *prev_node = rb_prev(iovad->cached32_node);
+		struct iova *curr_iova =
+			container_of(iovad->cached32_node, struct iova, node);
+		*limit_pfn = curr_iova->pfn_lo - 1;
+		return prev_node;
+	}
+}
+
+static inline void
+__cached_rbnode_insert_update(struct iova_domain *iovad,
+	unsigned long limit_pfn, struct iova *new)
+{
+	if (limit_pfn != DMA_32BIT_PFN)
+		return;
+	iovad->cached32_node = &new->node;
+}
+
+static inline void
+__cached_rbnode_delete_update(struct iova_domain *iovad, struct iova *free)
+{
+	struct iova *cached_iova;
+	struct rb_node *curr;
+
+	if (!iovad->cached32_node)
+		return;
+	curr = iovad->cached32_node;
+	cached_iova = container_of(curr, struct iova, node);
+
+	if (free->pfn_lo >= cached_iova->pfn_lo)
+		iovad->cached32_node = rb_next(&free->node);
+}
+
+static inline int __alloc_iova_range(struct iova_domain *iovad,
+	unsigned long size, unsigned long limit_pfn, struct iova *new)
+{
+	struct rb_node *curr = NULL;
+	unsigned long flags;
+	unsigned long saved_pfn;
+
+	/* Walk the tree backwards */
+	spin_lock_irqsave(&iovad->iova_rbtree_lock, flags);
+	saved_pfn = limit_pfn;
+	curr = __get_cached_rbnode(iovad, &limit_pfn);
+	while (curr) {
+		struct iova *curr_iova = container_of(curr, struct iova, node);
+		if (limit_pfn < curr_iova->pfn_lo)
+			goto move_left;
+		if (limit_pfn < curr_iova->pfn_hi)
+			goto adjust_limit_pfn;
+		if ((curr_iova->pfn_hi + size) <= limit_pfn)
+			break;	/* found a free slot */
+adjust_limit_pfn:
+		limit_pfn = curr_iova->pfn_lo - 1;
+move_left:
+		curr = rb_prev(curr);
+	}
+
+	if ((!curr) && !(IOVA_START_PFN + size <= limit_pfn)) {
+		spin_unlock_irqrestore(&iovad->iova_rbtree_lock, flags);
+		return -ENOMEM;
+	}
+	new->pfn_hi = limit_pfn;
+	new->pfn_lo = limit_pfn - size + 1;
+
+	spin_unlock_irqrestore(&iovad->iova_rbtree_lock, flags);
+	return 0;
+}
+
+static void
+iova_insert_rbtree(struct rb_root *root, struct iova *iova)
+{
+	struct rb_node **new = &(root->rb_node), *parent = NULL;
+	/* Figure out where to put new node */
+	while (*new) {
+		struct iova *this = container_of(*new, struct iova, node);
+		parent = *new;
+
+		if (iova->pfn_lo < this->pfn_lo)
+			new = &((*new)->rb_left);
+		else if (iova->pfn_lo > this->pfn_lo)
+			new = &((*new)->rb_right);
+		else
+			BUG(); /* this should not happen */
+	}
+	/* Add new node and rebalance tree. */
+	rb_link_node(&iova->node, parent, new);
+	rb_insert_color(&iova->node, root);
+}
+
+/**
+ * alloc_iova - allocates an iova
+ * @iovad - iova domain in question
+ * @size - size of page frames to allocate
+ * @limit_pfn - max limit address
+ * This function allocates an iova in the range IOVA_START_PFN to limit_pfn,
+ * searching down from limit_pfn rather than up from IOVA_START_PFN.
+ */
+
+struct iova *
+alloc_iova(struct iova_domain *iovad, unsigned long size, unsigned long limit_pfn)
+{
+	unsigned long flags,flags1;
+	struct iova *new_iova;
+	int ret;
+
+	new_iova = alloc_iova_mem();
+	if (!new_iova)
+		return NULL;
+
+	spin_lock_irqsave(&iovad->iova_alloc_lock, flags1);
+	ret = __alloc_iova_range(iovad, size, limit_pfn, new_iova);
+
+	if (ret) {
+		spin_unlock_irqrestore(&iovad->iova_alloc_lock, flags1);
+		free_iova_mem(new_iova);
+		return NULL;
+	}
+
+	/* Insert the new_iova into domain rbtree by holding writer lock */
+	spin_lock_irqsave(&iovad->iova_rbtree_lock, flags);
+	iova_insert_rbtree(&iovad->rbroot, new_iova);
+	__cached_rbnode_insert_update(iovad, limit_pfn, new_iova);
+	spin_unlock_irqrestore(&iovad->iova_rbtree_lock, flags);
+
+	spin_unlock_irqrestore(&iovad->iova_alloc_lock, flags1);
+
+	return new_iova;
+}
+
+/**
+ * find_iova - finds an iova for a given pfn
+ * @iovad - iova domain in question.
+ * @pfn - page frame number
+ * This function finds and returns an iova belonging to the
+ * given domain which matches the given pfn.
+ */
+struct iova *find_iova(struct iova_domain *iovad, unsigned long pfn)
+{
+	unsigned long flags;
+	struct rb_node *node;
+
+	spin_lock_irqsave(&iovad->iova_rbtree_lock, flags);
+	node = iovad->rbroot.rb_node;
+	while (node) {
+		struct iova *iova = container_of(node, struct iova, node);
+
+		/* If pfn falls within iova's range, return iova */
+		if ((pfn >= iova->pfn_lo) && (pfn <= iova->pfn_hi)) {
+			spin_unlock_irqrestore(&iovad->iova_rbtree_lock, flags);
+			return iova;
+		}
+
+		if (pfn < iova->pfn_lo)
+			node = node->rb_left;
+		else if (pfn > iova->pfn_lo)
+			node = node->rb_right;
+	}
+
+	spin_unlock_irqrestore(&iovad->iova_rbtree_lock, flags);
+	return NULL;
+}
+
+/**
+ * __free_iova - frees the given iova
+ * @iovad: iova domain in question.
+ * @iova: iova in question.
+ * Frees the given iova belonging to the giving domain
+ */
+void
+__free_iova(struct iova_domain *iovad, struct iova *iova)
+{
+	unsigned long flags;
+
+	if (iova) {
+		spin_lock_irqsave(&iovad->iova_rbtree_lock, flags);
+		__cached_rbnode_delete_update(iovad, iova);
+		rb_erase(&iova->node, &iovad->rbroot);
+		spin_unlock_irqrestore(&iovad->iova_rbtree_lock, flags);
+		free_iova_mem(iova);
+	}
+}
+/**
+ * free_iova - finds and frees the iova for a given pfn
+ * @iovad: - iova domain in question.
+ * @pfn: - pfn that is allocated previously
+ * This functions finds an iova for a given pfn and then
+ * frees the iova from that domain.
+ */
+
+void
+free_iova(struct iova_domain *iovad, unsigned long pfn)
+{
+	struct iova *iova = find_iova(iovad, pfn);
+	__free_iova(iovad, iova);
+
+}
+
+/**
+ * put_iova_domain - destroys the iova domain
+ * @iovad: - iova domain in question.
+ * All the iova's in that domain are destroyed.
+ */
+void put_iova_domain(struct iova_domain *iovad)
+{
+	struct rb_node *node;
+	unsigned long flags;
+
+	spin_lock_irqsave(&iovad->iova_rbtree_lock, flags);
+	while ((node = rb_first(&iovad->rbroot))) {
+		struct iova *iova = container_of(node, struct iova, node);
+		rb_erase(node, &iovad->rbroot);
+		free_iova_mem(iova);
+	}
+	spin_unlock_irqrestore(&iovad->iova_rbtree_lock, flags);
+}
+
+static inline int
+__is_range_overlap(struct rb_node *node, unsigned long pfn_lo, unsigned long pfn_hi)
+{
+	struct iova * iova = container_of(node, struct iova, node);
+
+	if ((pfn_lo <= iova->pfn_hi) && (pfn_hi >= iova->pfn_lo))
+		return 1;
+	return 0;
+}
+
+static inline struct iova *
+__insert_new_range(struct iova_domain *iovad, unsigned long pfn_lo, unsigned long pfn_hi)
+{
+	struct iova *iova;
+
+	iova = alloc_iova_mem();
+	if (!iova)
+		return iova;
+
+	iova->pfn_hi = pfn_hi;
+	iova->pfn_lo = pfn_lo;
+	iova_insert_rbtree(&iovad->rbroot,iova);
+	return iova;
+}
+
+static inline void
+__adjust_overlap_range(struct iova *iova, unsigned long *pfn_lo, unsigned long *pfn_hi)
+{
+	if (*pfn_lo < iova->pfn_lo)
+		iova->pfn_lo = *pfn_lo;
+	if (*pfn_hi > iova->pfn_hi)
+		*pfn_lo = iova->pfn_hi + 1;
+}
+
+/**
+ * reserve_iova - reserves an iova in the given range
+ * @iovad: - iova domain pointer
+ * @pfn_lo: - lower page frame address
+ * @pfn_hi: - higher pfn address
+ * This function reserves the address range from pfn_lo to pfn_hi so
+ * that this range is not handed out as part of alloc_iova.
+ */
+struct iova *
+reserve_iova(struct iova_domain *iovad, unsigned long pfn_lo, unsigned long pfn_hi)
+{
+	struct rb_node *node;
+	unsigned long flags, flags1;
+	struct iova *iova;
+	unsigned int overlap = 0;
+
+	spin_lock_irqsave(&iovad->iova_alloc_lock, flags);
+	spin_lock_irqsave(&iovad->iova_rbtree_lock, flags1);
+	for (node = rb_first(&iovad->rbroot); node; node = rb_next(node)) {
+		if (__is_range_overlap(node, pfn_lo, pfn_hi)) {
+			iova = container_of(node, struct iova, node);
+			__adjust_overlap_range(iova, &pfn_lo, &pfn_hi);
+			if ((pfn_lo >= iova->pfn_lo) &&
+				(pfn_hi <= iova->pfn_hi))
+				goto finish;
+			overlap = 1;
+
+		} else if (overlap)
+				break;
+	}
+
+	/* We are here either because this is the first reserved node
+	 * or we need to insert the remaining non-overlapping addr range
+	 */
+	iova = __insert_new_range(iovad, pfn_lo, pfn_hi);
+finish:
+
+	spin_unlock_irqrestore(&iovad->iova_rbtree_lock, flags1);
+	spin_unlock_irqrestore(&iovad->iova_alloc_lock, flags);
+	return iova;
+}
+
+/**
+ * copy_reserved_iova - copies the reserved iovas between domains
+ * @from: - source domain to copy from
+ * @to: - destination domain to copy to
+ * This function copies reserved iovas from one domain to
+ * another.
+ */
+void
+copy_reserved_iova(struct iova_domain *from, struct iova_domain *to)
+{
+	unsigned long flags, flags1;
+	struct rb_node *node;
+	spin_lock_irqsave(&from->iova_alloc_lock, flags);
+	spin_lock_irqsave(&from->iova_rbtree_lock, flags1);
+	for (node = rb_first(&from->rbroot); node; node = rb_next(node)) {
+		struct iova *iova = container_of(node, struct iova, node);
+		struct iova *new_iova;
+		new_iova = reserve_iova(to, iova->pfn_lo, iova->pfn_hi);
+		if (!new_iova)
+			printk(KERN_ERR "Reserve iova range %lx-%lx failed\n",
+				iova->pfn_lo, iova->pfn_hi);
+	}
+	spin_unlock_irqrestore(&from->iova_rbtree_lock, flags1);
+	spin_unlock_irqrestore(&from->iova_alloc_lock, flags);
+}
Index: linux-2.6.22-rc3/drivers/pci/iova.h
===================================================================
--- /dev/null	1970-01-01 00:00:00.000000000 +0000
+++ linux-2.6.22-rc3/drivers/pci/iova.h	2007-06-04 12:40:20.000000000 -0700
@@ -0,0 +1,57 @@
+/*
+ * Copyright (c) 2006, Intel Corporation.
+ *
+ * This file is released under the GPLv2.
+ *
+ * Copyright (C) 2006 Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
+ *
+ */
+
+#ifndef _IOVA_H_
+#define _IOVA_H_
+
+#include <linux/types.h>
+#include <linux/kernel.h>
+#include <linux/rbtree.h>
+#include <linux/dma-mapping.h>
+
+
+#define PAGE_SHIFT_4K		(12)
+#define PAGE_SIZE_4K		(1UL << PAGE_SHIFT_4K)
+#define PAGE_MASK_4K		(((u64)-1) << PAGE_SHIFT_4K)
+#define PAGE_ALIGN_4K(addr)	(((addr) + PAGE_SIZE_4K - 1) & PAGE_MASK_4K)
+
+#define IOVA_START_ADDR		(0x1000)
+#define IOVA_START_PFN		(IOVA_START_ADDR >> PAGE_SHIFT_4K)
+
+#define IOVA_PFN(addr)		((addr) >> PAGE_SHIFT_4K)
+#define DMA_32BIT_PFN	IOVA_PFN(DMA_32BIT_MASK)
+#define DMA_64BIT_PFN	IOVA_PFN(DMA_64BIT_MASK)
+
+/* iova structure */
+struct iova {
+	struct rb_node	node;
+	unsigned long	pfn_hi; /* IOMMU dish out addr hi */
+	unsigned long	pfn_lo; /* IOMMU dish out addr lo */
+};
+
+/* holds all the iova translations for a domain */
+struct iova_domain {
+	spinlock_t	iova_alloc_lock;/* Lock to protect iova  allocation */
+	spinlock_t	iova_rbtree_lock; /* Lock to protect update of rbtree */
+	struct rb_root	rbroot;		/* iova domain rbtree root */
+	struct rb_node	*cached32_node; /* Save last alloced node to optimize alloc */
+};
+
+struct iova *alloc_iova_mem(void);
+void free_iova_mem(struct iova *iova);
+void free_iova(struct iova_domain *iovad, unsigned long pfn);
+void __free_iova(struct iova_domain *iovad, struct iova *iova);
+struct iova * alloc_iova(struct iova_domain *iovad, unsigned long size, unsigned long limit_pfn);
+struct iova * reserve_iova(struct iova_domain *iovad, unsigned long pfn_lo, unsigned long pfn_hi);
+void copy_reserved_iova(struct iova_domain *from, struct iova_domain *to);
+void init_iova_domain(struct iova_domain *iovad);
+struct iova * find_iova(struct iova_domain *iovad, unsigned long pfn);
+void put_iova_domain(struct iova_domain *iovad);
+
+#endif

-- 

^ permalink raw reply	[flat|nested] 64+ messages in thread

* [Intel-IOMMU 06/10] Intel IOMMU driver
  2007-06-06 18:56 [Intel-IOMMU 00/10] Intel IOMMU Support anil.s.keshavamurthy
                   ` (4 preceding siblings ...)
  2007-06-06 18:57 ` [Intel-IOMMU 05/10] IOVA allocation and management routines anil.s.keshavamurthy
@ 2007-06-06 18:57 ` anil.s.keshavamurthy
  2007-06-07 23:57   ` Andrew Morton
  2007-06-06 18:57 ` [Intel-IOMMU 07/10] Intel iommu cmdline option - forcedac anil.s.keshavamurthy
                   ` (3 subsequent siblings)
  9 siblings, 1 reply; 64+ messages in thread
From: anil.s.keshavamurthy @ 2007-06-06 18:57 UTC (permalink / raw)
  To: linux-kernel
  Cc: akpm, ak, gregkh, muli, asit.k.mallick, suresh.b.siddha,
	anil.s.keshavamurthy, arjan, ashok.raj, shaohua.li, davem

[-- Attachment #1: intel_iommu.patch --]
[-- Type: text/plain, Size: 68642 bytes --]

	The actual Intel IOMMU driver. The hardware spec can be found at:
http://www.intel.com/technology/virtualization

This driver sets the x86_64 'dma_ops', so it hooks into the standard DMA APIs.
In this way, PCI drivers get virtual DMA addresses; the change is transparent
to PCI drivers.
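
In other words, an ordinary PCI driver keeps using the existing DMA API
unchanged; a hedged sketch (the device, buffer and error handling are
illustrative):

	#include <linux/pci.h>

	/* Illustrative only: an unmodified driver path. With CONFIG_DMAR and
	 * this driver active, the returned dma_addr_t is an IOVA translated
	 * by the remapping hardware rather than a physical address.
	 */
	static int example_do_dma(struct pci_dev *pdev, void *buf, size_t len)
	{
		dma_addr_t dma = pci_map_single(pdev, buf, len, PCI_DMA_TODEVICE);

		if (pci_dma_mapping_error(dma))
			return -EIO;

		/* ... program the device with 'dma' and run the transfer ... */

		pci_unmap_single(pdev, dma, len, PCI_DMA_TODEVICE);
		return 0;
	}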

Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
---
 Documentation/Intel-IOMMU.txt       |   93 +
 Documentation/kernel-parameters.txt |   10 
 arch/x86_64/kernel/pci-dma.c        |    9 
 drivers/pci/Makefile                |    2 
 drivers/pci/intel-iommu.c           | 1918 ++++++++++++++++++++++++++++++++++++
 drivers/pci/intel-iommu.h           |  296 +++++
 include/linux/dmar.h                |   23 
 7 files changed, 2350 insertions(+), 1 deletion(-)

Index: linux-2.6.22-rc3/Documentation/Intel-IOMMU.txt
===================================================================
--- /dev/null	1970-01-01 00:00:00.000000000 +0000
+++ linux-2.6.22-rc3/Documentation/Intel-IOMMU.txt	2007-06-06 11:35:01.000000000 -0700
@@ -0,0 +1,93 @@
+Linux IOMMU Support
+===================
+
+The architecture spec can be obtained from the below location.
+
+http://www.intel.com/technology/virtualization/
+
+This guide gives a quick cheat sheet for some basic understanding.
+
+Some Keywords
+
+DMAR - DMA remapping
+DRHD - DMA Remapping Hardware unit Definition structure
+RMRR - Reserved memory Region Reporting Structure
+ZLR  - Zero length reads from PCI devices
+IOVA - IO Virtual address.
+
+Basic stuff
+-----------
+
+ACPI enumerates and lists the different DMA engines in the platform, and
+device scope relationships between PCI devices and which DMA engine  controls
+them.
+
+What is RMRR?
+-------------
+
+There are some devices the BIOS controls, e.g. USB devices used to perform
+PS/2 emulation. The regions of memory used for these devices are marked
+reserved in the e820 map. When we turn on DMA translation, DMA to those
+regions will fail. Hence the BIOS uses RMRR to specify these regions along
+with the devices that need to access them. The OS is expected to set up
+unity mappings for these regions so that these devices can access them.
+
+How is IOVA generated?
+---------------------
+
+Well behaved drivers call pci_map_*() calls before sending command to device
+that needs to perform DMA. Once DMA is completed and mapping is no longer
+required, device performs a pci_unmap_*() calls to unmap the region.
+
+The Intel IOMMU driver allocates a virtual address per domain. Each PCIE
+device has its own domain (hence protection). Devices under p2p bridges
+share the virtual address with all devices under the p2p bridge due to
+transaction id aliasing for p2p bridges.
+
+IOVA generation is pretty generic. We used the same technique as vmalloc()
+but these are not global address spaces, but separate for each domain.
+Different DMA engines may support different number of domains.
+
+We also allocate guard pages with each mapping, so we can attempt to catch
+any overflow that might happen.
+
+
+Graphics Problems?
+------------------
+If you encounter issues with graphics devices, you can try adding
+option intel_iommu=igfx_off to turn off the integrated graphics engine.
+
+Some exceptions to IOVA
+-----------------------
+Interrupt ranges are not address translated, (0xfee00000 - 0xfeefffff).
+The same is true for peer to peer transactions. Hence we reserve the
+address from PCI MMIO ranges so they are not allocated for IOVA addresses.
+
+Boot Message Sample
+-------------------
+
+Something like this gets printed indicating presence of DMAR tables
+in ACPI.
+
+ACPI: DMAR (v001 A M I  OEMDMAR  0x00000001 MSFT 0x00000097) @ 0x000000007f5b5ef0
+
+When DMAR is being processed and initialized by ACPI, prints DMAR locations
+and any RMRR's processed.
+
+ACPI DMAR:Host address width 36
+ACPI DMAR:DRHD (flags: 0x00000000)base: 0x00000000fed90000
+ACPI DMAR:DRHD (flags: 0x00000000)base: 0x00000000fed91000
+ACPI DMAR:DRHD (flags: 0x00000001)base: 0x00000000fed93000
+ACPI DMAR:RMRR base: 0x00000000000ed000 end: 0x00000000000effff
+ACPI DMAR:RMRR base: 0x000000007f600000 end: 0x000000007fffffff
+
+When DMAR is enabled for use, you will notice..
+
+PCI-DMA: Using DMAR IOMMU
+
+TBD
+----
+
+- For compatibility testing, could use unity map domain for all devices, just
+  provide a 1-1 for all useful memory under a single domain for all devices.
+- API for paravirt ops for abstracting functionality for VMM folks.
Index: linux-2.6.22-rc3/Documentation/kernel-parameters.txt
===================================================================
--- linux-2.6.22-rc3.orig/Documentation/kernel-parameters.txt	2007-06-06 11:33:21.000000000 -0700
+++ linux-2.6.22-rc3/Documentation/kernel-parameters.txt	2007-06-06 11:35:01.000000000 -0700
@@ -776,6 +776,16 @@
 
 	inttest=	[IA64]
 
+	intel_iommu=	[DMAR] Intel IOMMU driver (DMAR) option
+		off
+			Disable intel iommu driver.
+		igfx_off [Default Off]
+			By default, gfx is mapped as normal device. If a gfx
+			device has a dedicated DMAR unit, the DMAR unit is
+			bypassed by not enabling DMAR with this option. In
+			this case, gfx device will use physical address for
+			DMA.
+
 	io7=		[HW] IO7 for Marvel based alpha systems
 			See comment before marvel_specify_io7 in
 			arch/alpha/kernel/core_marvel.c.
Index: linux-2.6.22-rc3/arch/x86_64/kernel/pci-dma.c
===================================================================
--- linux-2.6.22-rc3.orig/arch/x86_64/kernel/pci-dma.c	2007-06-06 11:33:21.000000000 -0700
+++ linux-2.6.22-rc3/arch/x86_64/kernel/pci-dma.c	2007-06-06 11:35:01.000000000 -0700
@@ -7,6 +7,7 @@
 #include <linux/string.h>
 #include <linux/pci.h>
 #include <linux/module.h>
+#include <linux/dmar.h>
 #include <asm/io.h>
 #include <asm/proto.h>
 #include <asm/calgary.h>
@@ -303,6 +304,10 @@
 	detect_calgary();
 #endif
 
+#ifdef CONFIG_DMAR
+	detect_intel_iommu();
+#endif
+
 #ifdef CONFIG_SWIOTLB
 	pci_swiotlb_init();
 #endif
@@ -314,6 +319,10 @@
 	calgary_iommu_init();
 #endif
 
+#ifdef CONFIG_DMAR
+	intel_iommu_init();
+#endif
+
 #ifdef CONFIG_IOMMU
 	gart_iommu_init();
 #endif
Index: linux-2.6.22-rc3/drivers/pci/Makefile
===================================================================
--- linux-2.6.22-rc3.orig/drivers/pci/Makefile	2007-06-06 11:33:23.000000000 -0700
+++ linux-2.6.22-rc3/drivers/pci/Makefile	2007-06-06 11:35:01.000000000 -0700
@@ -21,7 +21,7 @@
 obj-$(CONFIG_HT_IRQ) += htirq.o
 
 # Build Intel IOMMU support
-obj-$(CONFIG_DMAR) += dmar.o
+obj-$(CONFIG_DMAR) += dmar.o iova.o intel-iommu.o
 
 #
 # Some architectures use the generic PCI setup functions
Index: linux-2.6.22-rc3/drivers/pci/intel-iommu.c
===================================================================
--- /dev/null	1970-01-01 00:00:00.000000000 +0000
+++ linux-2.6.22-rc3/drivers/pci/intel-iommu.c	2007-06-06 11:35:36.000000000 -0700
@@ -0,0 +1,1918 @@
+/*
+ * Copyright (c) 2006, Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
+ * Place - Suite 330, Boston, MA 02111-1307 USA.
+ *
+ * Copyright (C) Ashok Raj <ashok.raj@intel.com>
+ * Copyright (C) Shaohua Li <shaohua.li@intel.com>
+ */
+
+#include <linux/init.h>
+#include <linux/bitmap.h>
+#include <linux/slab.h>
+#include <linux/irq.h>
+#include <linux/interrupt.h>
+#include <linux/sysdev.h>
+#include <linux/spinlock.h>
+#include <linux/pci.h>
+#include <linux/dmar.h>
+#include <linux/dma-mapping.h>
+#include <linux/mempool.h>
+#include <linux/respool.h>
+#include "iova.h"
+#include "intel-iommu.h"
+#include <asm/proto.h> /* force_iommu in this header in x86-64*/
+#include <asm/cacheflush.h>
+#include "pci.h"
+
+#define IS_GFX_DEVICE(pdev) ((pdev->class >> 16) == PCI_BASE_CLASS_DISPLAY)
+#define IS_ISA_DEVICE(pdev) ((pdev->class >> 8) == PCI_CLASS_BRIDGE_ISA)
+
+#define IOAPIC_RANGE_START	(0xfee00000)
+#define IOAPIC_RANGE_END	(0xfeefffff)
+#define IOVA_START_ADDR		(0x1000)
+
+#define DEFAULT_DOMAIN_ADDRESS_WIDTH 48
+
+#define DMAR_OPERATION_TIMEOUT (HZ*60) /* 1m */
+
+#define DOMAIN_MAX_ADDR(gaw) ((((u64)1) << gaw) - 1)
+
+static void domain_remove_dev_info(struct domain *domain);
+
+static int dmar_disabled;
+static int __initdata dmar_map_gfx = 1;
+
+#define DUMMY_DEVICE_DOMAIN_INFO ((struct device_domain_info *)(-1))
+static DEFINE_SPINLOCK(device_domain_lock);
+static LIST_HEAD(device_domain_list);
+
+static int __init intel_iommu_setup(char *str)
+{
+	if (!str)
+		return -EINVAL;
+	while (*str) {
+		if (!strncmp(str, "off", 3)) {
+			dmar_disabled = 1;
+			printk(KERN_INFO"Intel-IOMMU: disabled\n");
+		} else if (!strncmp(str, "igfx_off", 8)) {
+			dmar_map_gfx = 0;
+			printk(KERN_INFO"Intel-IOMMU: disable GFX device mapping\n");
+		}
+
+		str += strcspn(str, ",");
+		while (*str == ',')
+			str++;
+	}
+	return 0;
+}
+__setup("intel_iommu=", intel_iommu_setup);
+
+#define MIN_PGTABLE_PAGES	(10)
+#define GROW_PGTABLE_PAGES	(6)
+
+#define MIN_DOMAIN_REQ		(10)
+#define GROW_DOMAIN_REQ		(4)
+
+#define MIN_DEVINFO_REQ		(10)
+#define GROW_DEVINFO_REQ	(4)
+
+#define MIN_IOVA_REQ		(1024)
+#define GROW_IOVA_REQ		(256)
+
+static struct resource_pool iommu_pgtable_pool;
+static struct resource_pool iommu_domain_pool;
+static struct resource_pool iommu_devinfo_pool;
+static struct resource_pool iommu_iova_pool;
+
+static inline void *alloc_pgtable_page(void)
+{
+	return get_resource_pool_obj(&iommu_pgtable_pool);
+}
+
+static inline void free_pgtable_page(void *vaddr)
+{
+	return put_resource_pool_obj(vaddr, &iommu_pgtable_pool);
+}
+
+static inline void *alloc_domain_mem(void)
+{
+	return get_resource_pool_obj(&iommu_domain_pool);
+}
+
+static inline void free_domain_mem(void *vaddr)
+{
+	return put_resource_pool_obj(vaddr, &iommu_domain_pool);
+}
+
+static inline void * alloc_devinfo_mem(void)
+{
+	return get_resource_pool_obj(&iommu_devinfo_pool);
+}
+
+static inline void free_devinfo_mem(void *vaddr)
+{
+	return put_resource_pool_obj(vaddr, &iommu_devinfo_pool);
+}
+
+struct iova *alloc_iova_mem(void)
+{
+	return get_resource_pool_obj(&iommu_iova_pool);
+}
+
+void free_iova_mem(struct iova *iova)
+{
+	put_resource_pool_obj(iova, &iommu_iova_pool);
+}
+
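
Every piece of driver metadata (page-table pages, domains, device_domain_info
and iova nodes) is allocated through the thin wrappers above rather than
straight from kmalloc(), so the DMA map/unmap paths draw from the pre-allocated
resource pools of linux/respool.h introduced earlier in this series. A minimal
sketch of how the rest of this file pairs them up:

	void *page = alloc_pgtable_page();	/* 4K page from iommu_pgtable_pool */
	if (page) {
		/* ... use it as one level of a domain's page table ... */
		free_pgtable_page(page);	/* back to the pool, not the page allocator */
	}
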
+static inline void __iommu_flush_cache(struct intel_iommu *iommu, void *addr, int size)
+{
+	if (!ecap_coherent(iommu->ecap))
+		clflush_cache_range(addr, size);
+}
+
+/* context entry handling */
+static struct context_entry * device_to_context_entry(struct intel_iommu *iommu,
+		u8 bus, u8 devfn)
+{
+	struct root_entry *root;
+	struct context_entry *context;
+	unsigned long phy_addr;
+	unsigned long flags;
+
+	spin_lock_irqsave(&iommu->lock, flags);
+	root = &iommu->root_entry[bus];
+	if (!(context = get_context_addr_from_root(*root))) {
+		context = (struct context_entry *)alloc_pgtable_page();
+		if (!context) {
+			spin_unlock_irqrestore(&iommu->lock, flags);
+			return NULL;
+		}
+		__iommu_flush_cache(iommu, (void *)context, PAGE_SIZE_4K);
+		phy_addr = virt_to_phys((void *)context);
+		set_root_value(*root, phy_addr);
+		set_root_present(*root);
+		__iommu_flush_cache(iommu, root, sizeof(*root));
+	}
+	spin_unlock_irqrestore(&iommu->lock, flags);
+	return &context[devfn];
+}
+
+static int device_context_mapped(struct intel_iommu *iommu, u8 bus, u8 devfn)
+{
+	struct root_entry *root;
+	struct context_entry *context;
+	int ret;
+	unsigned long flags;
+
+	spin_lock_irqsave(&iommu->lock, flags);
+	root = &iommu->root_entry[bus];
+	if (!(context = get_context_addr_from_root(*root))) {
+		ret = 0;
+		goto out;
+	}
+	ret = context_present(context[devfn]);
+out:
+	spin_unlock_irqrestore(&iommu->lock, flags);
+	return ret;
+}
+
+static void clear_context_table(struct intel_iommu *iommu, u8 bus, u8 devfn)
+{
+	struct root_entry *root;
+	struct context_entry *context;
+	unsigned long flags;
+
+	spin_lock_irqsave(&iommu->lock, flags);
+	root = &iommu->root_entry[bus];
+	if ((context = get_context_addr_from_root(*root))) {
+		context_clear_entry(context[devfn]);
+		__iommu_flush_cache(iommu, &context[devfn], \
+			sizeof(*context));
+	}
+	spin_unlock_irqrestore(&iommu->lock, flags);
+}
+
+static void free_context_table(struct intel_iommu *iommu)
+{
+	struct root_entry *root;
+	int i;
+	unsigned long flags;
+	struct context_entry *context;
+
+	spin_lock_irqsave(&iommu->lock, flags);
+	if (!iommu->root_entry) {
+		goto out;
+	}
+	for (i = 0; i < ROOT_ENTRY_NR; i++) {
+		root = &iommu->root_entry[i];
+		if ((context = get_context_addr_from_root(*root)))
+			free_pgtable_page(context);
+	}
+	free_pgtable_page(iommu->root_entry);
+	iommu->root_entry = NULL;
+out:
+	spin_unlock_irqrestore(&iommu->lock, flags);
+}
+
+/* page table handling */
+#define LEVEL_STRIDE		(9)
+#define LEVEL_MASK		(((u64)1 << LEVEL_STRIDE) - 1)
+#define agaw_to_level(val) ((val) + 2)
+#define agaw_to_width(val) (30 + val * LEVEL_STRIDE)
+#define width_to_agaw(w)  ((w - 30)/LEVEL_STRIDE)
+#define level_to_offset_bits(l) (12 + (l - 1) * LEVEL_STRIDE)
+#define address_level_offset(addr, level) \
+	((addr >> level_to_offset_bits(level)) & LEVEL_MASK)
+#define level_mask(l) (((u64)(-1)) << level_to_offset_bits(l))
+#define level_size(l) ((u64)1 << level_to_offset_bits(l))
+#define align_to_level(addr, l) ((addr + level_size(l) - 1) & level_mask(l))
+static struct dma_pte * addr_to_dma_pte(struct domain *domain, u64 addr)
+{
+	int addr_width = agaw_to_width(domain->agaw);
+	struct dma_pte *parent, *pte = NULL;
+	int level = agaw_to_level(domain->agaw);
+	int offset;
+	unsigned long flags;
+
+	BUG_ON(!domain->pgd);
+
+	addr &= (((u64)1) << addr_width) - 1;
+	parent = domain->pgd;
+
+	spin_lock_irqsave(&domain->mapping_lock, flags);
+	while (level > 0) {
+		void *tmp_page;
+
+		offset = address_level_offset(addr, level);
+		pte = &parent[offset];
+		if (level == 1)
+			break;
+
+		if (!dma_pte_present(*pte)) {
+			tmp_page = alloc_pgtable_page();
+
+			if (!tmp_page) {
+				spin_unlock_irqrestore(&domain->mapping_lock, flags);
+				return NULL;
+			}
+			__iommu_flush_cache(domain->iommu, tmp_page, PAGE_SIZE_4K);
+			dma_set_pte_addr(*pte, virt_to_phys(tmp_page));
+			/*
+			 * high level table always sets r/w, last level page
+			 * table control read/write
+			 */
+			dma_set_pte_readable(*pte);
+			dma_set_pte_writable(*pte);
+			__iommu_flush_cache(domain->iommu, pte, sizeof(*pte));
+		}
+		parent = phys_to_virt(dma_pte_addr(*pte));
+		level--;
+	}
+
+	spin_unlock_irqrestore(&domain->mapping_lock, flags);
+	return pte;
+}
+
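
For reference, here is how the width/level arithmetic above works out for the
48-bit DEFAULT_DOMAIN_ADDRESS_WIDTH used elsewhere in this file (a worked
example only):

	/*
	 * width_to_agaw(48)   == 2
	 * agaw_to_level(2)    == 4   -> four page-table levels
	 * level_to_offset_bits(1..4) == 12, 21, 30, 39
	 * so each level indexes LEVEL_STRIDE == 9 bits (512 eight-byte
	 * entries per 4K page), and addr_to_dma_pte() walks from the
	 * level-4 table down to the 4K leaf entry.
	 */
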
+/* return address's pte at specific level */
+static struct dma_pte *dma_addr_level_pte(struct domain *domain, u64 addr,
+		int level)
+{
+	struct dma_pte *parent, *pte = NULL;
+	int total = agaw_to_level(domain->agaw);
+	int offset;
+
+	parent = domain->pgd;
+	while (level <= total) {
+		offset = address_level_offset(addr, total);
+		pte = &parent[offset];
+		if (level == total)
+			return pte;
+
+		if (!dma_pte_present(*pte))
+			break;
+		parent = phys_to_virt(dma_pte_addr(*pte));
+		total--;
+	}
+	return NULL;
+}
+
+/* clear one page's page table */
+static void dma_pte_clear_one(struct domain *domain, u64 addr)
+{
+	struct dma_pte *pte = NULL;
+
+	/* get last level pte */
+	pte = dma_addr_level_pte(domain, addr, 1);
+
+	if (pte) {
+		dma_clear_pte(*pte);
+		__iommu_flush_cache(domain->iommu, pte, sizeof(*pte));
+	}
+}
+
+/* clear last level pte, a tlb flush should be followed */
+static void dma_pte_clear_range(struct domain *domain, u64 start, u64 end)
+{
+	int addr_width = agaw_to_width(domain->agaw);
+
+	start &= (((u64)1) << addr_width) - 1;
+	end &= (((u64)1) << addr_width) - 1;
+	/* in case it's partial page */
+	start = PAGE_ALIGN_4K(start);
+	end &= PAGE_MASK_4K;
+
+	/* we don't need lock here, nobody else touches the iova range */
+	while (start < end) {
+		dma_pte_clear_one(domain, start);
+		start += PAGE_SIZE_4K;
+	}
+}
+
+/* free page table pages. last level pte should already be cleared */
+static void dma_pte_free_pagetable(struct domain *domain, u64 start, u64 end)
+{
+	int addr_width = agaw_to_width(domain->agaw);
+	struct dma_pte *pte;
+	int total = agaw_to_level(domain->agaw);
+	int level;
+	u64 tmp;
+
+	start &= (((u64)1) << addr_width) - 1;
+	end &= (((u64)1) << addr_width) - 1;
+
+	/* we don't need lock here, nobody else touches the iova range */
+	level = 2;
+	while (level <= total) {
+		tmp = align_to_level(start, level);
+		if (tmp >= end || (tmp + level_size(level) > end))
+			return;
+
+		while (tmp < end) {
+			pte = dma_addr_level_pte(domain, tmp, level);
+			if (pte) {
+				free_pgtable_page(
+					phys_to_virt(dma_pte_addr(*pte)));
+				dma_clear_pte(*pte);
+				__iommu_flush_cache(domain->iommu, pte, sizeof(*pte));
+			}
+			tmp += level_size(level);
+		}
+		level++;
+	}
+	/* free pgd */
+	if (start == 0 && end >= ((((u64)1) << addr_width) - 1)) {
+		free_pgtable_page(domain->pgd);
+		domain->pgd = NULL;
+	}
+}
+
+/* iommu handling */
+static int iommu_alloc_root_entry(struct intel_iommu *iommu)
+{
+	struct root_entry *root;
+	unsigned long flags;
+
+	root = (struct root_entry *)alloc_pgtable_page();
+	if (!root)
+		return -ENOMEM;
+
+	__iommu_flush_cache(iommu, root, PAGE_SIZE_4K);
+
+	spin_lock_irqsave(&iommu->lock, flags);
+	iommu->root_entry = root;
+	spin_unlock_irqrestore(&iommu->lock, flags);
+
+	return 0;
+}
+
+#define IOMMU_WAIT_OP(iommu, offset, op, cond, sts) \
+{\
+	unsigned long start_time = jiffies;\
+	while (1) {\
+		sts = op (iommu->reg, offset);\
+		if (cond)\
+			break;\
+		if (time_after(jiffies, start_time + DMAR_OPERATION_TIMEOUT))\
+			panic("DMAR hardware is malfunctioning, please disable the IOMMU\n");\
+		cpu_relax();\
+	}\
+}
+
+static void iommu_set_root_entry(struct intel_iommu *iommu)
+{
+	void *addr;
+	u32 cmd, sts;
+	unsigned long flag;
+
+	addr = iommu->root_entry;
+
+	spin_lock_irqsave(&iommu->register_lock, flag);
+	dmar_writeq(iommu->reg, DMAR_RTADDR_REG, virt_to_phys(addr));
+
+	cmd = iommu->gcmd | DMA_GCMD_SRTP;
+	dmar_writel(iommu->reg, DMAR_GCMD_REG, cmd);
+
+	/* Make sure hardware complete it */
+	IOMMU_WAIT_OP(iommu, DMAR_GSTS_REG, dmar_readl, (sts & DMA_GSTS_RTPS), sts);
+
+	spin_unlock_irqrestore(&iommu->register_lock, flag);
+}
+
+static void iommu_flush_write_buffer(struct intel_iommu *iommu)
+{
+	u32 val;
+	unsigned long flag;
+
+	if (!cap_rwbf(iommu->cap))
+		return;
+	val = iommu->gcmd | DMA_GCMD_WBF;
+
+	spin_lock_irqsave(&iommu->register_lock, flag);
+	dmar_writel(iommu->reg, DMAR_GCMD_REG, val);
+
+	/* Make sure hardware complete it */
+	IOMMU_WAIT_OP(iommu, DMAR_GSTS_REG, dmar_readl, (!(val & DMA_GSTS_WBFS)), val);
+
+	spin_unlock_irqrestore(&iommu->register_lock, flag);
+}
+
+/* return value determine if we need a write buffer flush */
+static int __iommu_flush_context(struct intel_iommu *iommu,
+	u16 did, u16 source_id, u8 function_mask, u64 type,
+	int non_present_entry_flush)
+{
+	u64 val = 0;
+	unsigned long flag;
+
+	/*
+	 * In the non-present entry flush case, if the hardware does not cache
+	 * non-present entries we do nothing; if it does cache them, we flush
+	 * the entries of domain 0 (that domain id is used to tag cached
+	 * non-present entries)
+	 */
+	if (non_present_entry_flush) {
+		if (!cap_caching_mode(iommu->cap))
+			return 1;
+		else
+			did = 0;
+	}
+
+	switch (type)
+	{
+	case DMA_CCMD_GLOBAL_INVL:
+		val = DMA_CCMD_GLOBAL_INVL;
+		break;
+	case DMA_CCMD_DOMAIN_INVL:
+		val = DMA_CCMD_DOMAIN_INVL|DMA_CCMD_DID(did);
+		break;
+	case DMA_CCMD_DEVICE_INVL:
+		val = DMA_CCMD_DEVICE_INVL|DMA_CCMD_DID(did)
+			|DMA_CCMD_SID(source_id)|DMA_CCMD_FM(function_mask);
+		break;
+	default:
+		BUG();
+	}
+	val |= DMA_CCMD_ICC;
+
+	spin_lock_irqsave(&iommu->register_lock, flag);
+	dmar_writeq(iommu->reg, DMAR_CCMD_REG, val);
+
+	/* Make sure hardware complete it */
+	IOMMU_WAIT_OP(iommu, DMAR_CCMD_REG, dmar_readq, (!(val & DMA_CCMD_ICC)), val);
+
+	spin_unlock_irqrestore(&iommu->register_lock, flag);
+
+	/* flushing the context entry implicitly flushes the write buffer */
+	return 0;
+}
+
+static int inline iommu_flush_context_global(struct intel_iommu *iommu,
+	int non_present_entry_flush)
+{
+	return __iommu_flush_context(iommu, 0, 0, 0, DMA_CCMD_GLOBAL_INVL,
+		non_present_entry_flush);
+}
+
+static int inline iommu_flush_context_domain(struct intel_iommu *iommu, u16 did,
+	int non_present_entry_flush)
+{
+	return __iommu_flush_context(iommu, did, 0, 0, DMA_CCMD_DOMAIN_INVL,
+		non_present_entry_flush);
+}
+
+static int inline iommu_flush_context_device(struct intel_iommu *iommu,
+	u16 did, u16 source_id, u8 function_mask, int non_present_entry_flush)
+{
+	return __iommu_flush_context(iommu, did, source_id, function_mask,
+		DMA_CCMD_DEVICE_INVL, non_present_entry_flush);
+}
+
+/* return value determine if we need a write buffer flush */
+static int __iommu_flush_iotlb(struct intel_iommu *iommu, u16 did,
+	u64 addr, unsigned int size_order, u64 type,
+	int non_present_entry_flush)
+{
+	int tlb_offset = ecap_iotlb_offset(iommu->ecap);
+	u64 val = 0, val_iva = 0;
+	unsigned long flag;
+
+	/*
+	 * In the non-present entry flush case, if the hardware does not cache
+	 * non-present entries we do nothing; if it does cache them, we flush
+	 * the entries of domain 0 (that domain id is used to tag cached
+	 * non-present entries)
+	 */
+	if (non_present_entry_flush) {
+		if (!cap_caching_mode(iommu->cap))
+			return 1;
+		else
+			did = 0;
+	}
+
+	switch (type) {
+	case DMA_TLB_GLOBAL_FLUSH:
+		/* global flush doesn't need set IVA_REG */
+		val = DMA_TLB_GLOBAL_FLUSH|DMA_TLB_IVT;
+		break;
+	case DMA_TLB_DSI_FLUSH:
+		val = DMA_TLB_DSI_FLUSH|DMA_TLB_IVT|DMA_TLB_DID(did);
+		break;
+	case DMA_TLB_PSI_FLUSH:
+		val = DMA_TLB_PSI_FLUSH|DMA_TLB_IVT|DMA_TLB_DID(did);
+		/* Note: always flush non-leaf currently */
+		val_iva = size_order | addr;
+		break;
+	default:
+		BUG();
+	}
+	/* Note: set drain read/write */
+#if 0
+	/*
+	 * This is probably only needed to be extra safe; it looks like we
+	 * can ignore it without any impact.
+	 */
+	if (cap_read_drain(iommu->cap))
+		val |= DMA_TLB_READ_DRAIN;
+#endif
+	if (cap_write_drain(iommu->cap))
+		val |= DMA_TLB_WRITE_DRAIN;
+
+	spin_lock_irqsave(&iommu->register_lock, flag);
+	/* Note: Only uses first TLB reg currently */
+	if (val_iva)
+		dmar_writeq(iommu->reg, tlb_offset, val_iva);
+	dmar_writeq(iommu->reg, tlb_offset + 8, val);
+
+	/* Make sure hardware complete it */
+	IOMMU_WAIT_OP(iommu, tlb_offset + 8, dmar_readq, (!(val & DMA_TLB_IVT)), val);
+
+	spin_unlock_irqrestore(&iommu->register_lock, flag);
+
+	/* check IOTLB invalidation granularity */
+	if (DMA_TLB_IAIG(val) == 0)
+		printk(KERN_ERR"IOMMU: flush IOTLB failed\n");
+	if (DMA_TLB_IAIG(val) != DMA_TLB_IIRG(type))
+		pr_debug("IOMMU: tlb flush request %Lx, actual %Lx\n",
+			DMA_TLB_IIRG(type), DMA_TLB_IAIG(val));
+	/* an IOTLB flush implicitly flushes the write buffer */
+	return 0;
+}
+
+static int inline iommu_flush_iotlb_global(struct intel_iommu *iommu,
+	int non_present_entry_flush)
+{
+	return __iommu_flush_iotlb(iommu, 0, 0, 0, DMA_TLB_GLOBAL_FLUSH,
+		non_present_entry_flush);
+}
+
+static int inline iommu_flush_iotlb_dsi(struct intel_iommu *iommu, u16 did,
+	int non_present_entry_flush)
+{
+	return __iommu_flush_iotlb(iommu, did, 0, 0, DMA_TLB_DSI_FLUSH,
+		non_present_entry_flush);
+}
+
+static int inline get_alignment(u64 base, unsigned int size)
+{
+	int t = 0;
+	u64 end;
+
+	end = base + size - 1;
+	while (base != end) {
+		t++;
+		base >>= 1;
+		end >>= 1;
+	}
+	return t;
+}
+
+static int inline iommu_flush_iotlb_psi(struct intel_iommu *iommu, u16 did,
+	u64 addr, unsigned int pages, int non_present_entry_flush)
+{
+	unsigned int align;
+
+	BUG_ON(addr & (~PAGE_MASK_4K));
+	BUG_ON(pages == 0);
+
+	/* Fallback to domain selective flush if no PSI support */
+	if (!cap_pgsel_inv(iommu->cap))
+		return iommu_flush_iotlb_dsi(iommu, did,
+			non_present_entry_flush);
+
+	/*
+	 * PSI requires page size is 2 ^ x, and the base address is naturally
+	 * aligned to the size
+	 */
+	align = get_alignment(addr >> PAGE_SHIFT_4K, pages);
+	/* Fallback to domain selective flush if size is too big */
+	if (align > cap_max_amask_val(iommu->cap))
+		return iommu_flush_iotlb_dsi(iommu, did,
+			non_present_entry_flush);
+
+	addr >>= PAGE_SHIFT_4K + align;
+	addr <<= PAGE_SHIFT_4K + align;
+
+	return __iommu_flush_iotlb(iommu, did, addr, align,
+		DMA_TLB_PSI_FLUSH, non_present_entry_flush);
+}
+
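
The rounding in iommu_flush_iotlb_psi() is easiest to see with concrete
numbers (illustrative only):

	/*
	 * Flushing 2 pages at address 0x101000: get_alignment(0x101, 2)
	 * returns 2, so the request becomes an aligned 2^2 = 4 page (16K)
	 * invalidation with the address masked down to 0x100000 -- PSI only
	 * takes power-of-two, naturally aligned ranges.  If the resulting
	 * mask exceeds cap_max_amask_val(), the code falls back to a
	 * domain-selective flush instead.
	 */
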
+static int iommu_enable_translation(struct intel_iommu *iommu)
+{
+	u32 sts;
+	unsigned long flag;
+
+	spin_lock_irqsave(&iommu->register_lock, flag);
+	dmar_writel(iommu->reg, DMAR_GCMD_REG, iommu->gcmd|DMA_GCMD_TE);
+
+	/* Make sure hardware complete it */
+	IOMMU_WAIT_OP(iommu, DMAR_GSTS_REG, dmar_readl, (sts & DMA_GSTS_TES), sts);
+
+	iommu->gcmd |= DMA_GCMD_TE;
+	spin_unlock_irqrestore(&iommu->register_lock, flag);
+	return 0;
+}
+
+static int iommu_disable_translation(struct intel_iommu *iommu)
+{
+	u32 sts;
+	unsigned long flag;
+
+	spin_lock_irqsave(&iommu->register_lock, flag);
+	iommu->gcmd &= ~ DMA_GCMD_TE;
+	dmar_writel(iommu->reg, DMAR_GCMD_REG, iommu->gcmd);
+
+	/* Make sure hardware complete it */
+	IOMMU_WAIT_OP(iommu, DMAR_GSTS_REG, dmar_readl, (!(sts & DMA_GSTS_TES)), sts);
+
+	spin_unlock_irqrestore(&iommu->register_lock, flag);
+	return 0;
+}
+
+static int iommu_init_domains(struct intel_iommu *iommu)
+{
+	unsigned long ndomains;
+	unsigned long nlongs;
+
+	ndomains = cap_ndoms(iommu->cap);
+	pr_debug("Number of domains supported <%ld>\n", ndomains);
+	nlongs = BITS_TO_LONGS(ndomains);
+
+	/* TBD: there might be 64K domains; consider a different allocation scheme for future chips */
+	iommu->domain_ids = kcalloc(nlongs, sizeof(unsigned long), GFP_KERNEL);
+	if (!iommu->domain_ids) {
+		printk(KERN_ERR "Allocating domain id array failed\n");
+		return -ENOMEM;
+	}
+	iommu->domains = kcalloc(ndomains, sizeof(struct domain *), GFP_KERNEL);
+	if (!iommu->domains) {
+		printk(KERN_ERR "Allocating domain array failed\n");
+		kfree(iommu->domain_ids);
+		return -ENOMEM;
+	}
+
+	/*
+	 * if Caching mode is set, then invalid translations are tagged
+	 * with domainid 0. Hence we need to pre-allocate it.
+	 */
+	if (cap_caching_mode(iommu->cap))
+		set_bit(0, iommu->domain_ids);
+	return 0;
+}
+
+static struct intel_iommu *alloc_iommu(struct dmar_drhd_unit *drhd)
+{
+	struct intel_iommu *iommu;
+	int ret;
+	int map_size;
+	u32 ver;
+
+	iommu = kzalloc(sizeof(*iommu), GFP_KERNEL);
+	if (!iommu)
+		return NULL;
+	iommu->reg = ioremap(drhd->reg_base_addr, PAGE_SIZE_4K);
+	if (!iommu->reg) {
+		printk(KERN_ERR "IOMMU: can't map the region\n");
+		goto error;
+	}
+	iommu->cap = dmar_readq(iommu->reg, DMAR_CAP_REG);
+	iommu->ecap = dmar_readq(iommu->reg, DMAR_ECAP_REG);
+
+	/* the registers might be more than one page */
+	map_size = max_t(int, ecap_max_iotlb_offset(iommu->ecap),
+		cap_max_fault_reg_offset(iommu->cap));
+	map_size = PAGE_ALIGN_4K(map_size);
+	if (map_size > PAGE_SIZE_4K) {
+		iounmap(iommu->reg);
+		iommu->reg = ioremap(drhd->reg_base_addr, map_size);
+		if (!iommu->reg) {
+			printk(KERN_ERR "IOMMU: can't map the region\n");
+			goto error;
+		}
+	}
+
+	ver = dmar_readl(iommu->reg, DMAR_VER_REG);
+	pr_debug("IOMMU %llx: ver %d:%d cap %llx ecap %llx\n",
+		drhd->reg_base_addr, VER_MAJOR(ver), VER_MINOR(ver),
+		iommu->cap, iommu->ecap);
+	ret = iommu_init_domains(iommu);
+	if (ret)
+		goto error_unmap;
+	spin_lock_init(&iommu->lock);
+	spin_lock_init(&iommu->register_lock);
+
+	drhd->iommu = iommu;
+	return iommu;
+error_unmap:
+	iounmap(iommu->reg);
+	iommu->reg = 0;
+error:
+	kfree(iommu);
+	return NULL;
+}
+
+#define iommu_for_each_domain_id(iommu, i) \
+for (i = find_first_bit(iommu->domain_ids, cap_ndoms(iommu->cap)); \
+	i < cap_ndoms(iommu->cap); \
+	i = find_next_bit(iommu->domain_ids, cap_ndoms(iommu->cap), i+1))
+static void domain_exit(struct domain *domain);
+static void free_iommu(struct intel_iommu *iommu)
+{
+	struct domain *domain;
+	int i;
+
+	if (!iommu)
+		return;
+
+	iommu_for_each_domain_id(iommu, i) {
+		domain = iommu->domains[i];
+		clear_bit(i, iommu->domain_ids);
+		domain_exit(domain);
+	}
+
+	if (iommu->gcmd & DMA_GCMD_TE)
+		iommu_disable_translation(iommu);
+
+	if (iommu->irq) {
+		set_irq_data(iommu->irq, NULL);
+		/* This will mask the irq */
+		free_irq(iommu->irq, iommu);
+		destroy_irq(iommu->irq);
+	}
+
+	kfree(iommu->domains);
+	kfree(iommu->domain_ids);
+
+	/* free context mapping */
+	free_context_table(iommu);
+
+	if (iommu->reg)
+		iounmap(iommu->reg);
+	kfree(iommu);
+}
+
+static struct domain * iommu_alloc_domain(struct intel_iommu *iommu)
+{
+	unsigned long num;
+	unsigned long ndomains;
+	struct domain *domain;
+	unsigned long flags;
+
+	domain = alloc_domain_mem();
+	if (!domain)
+		return NULL;
+
+	ndomains = cap_ndoms(iommu->cap);
+
+	spin_lock_irqsave(&iommu->lock, flags);
+	num = find_first_zero_bit(iommu->domain_ids, ndomains);
+	if (num >= ndomains) {
+		spin_unlock_irqrestore(&iommu->lock, flags);
+		free_domain_mem(domain);
+		printk(KERN_ERR "IOMMU: no free domain ids\n");
+		return NULL;
+	}
+
+	set_bit(num, iommu->domain_ids);
+	domain->id = num;
+	domain->iommu = iommu;
+	iommu->domains[num] = domain;
+	spin_unlock_irqrestore(&iommu->lock, flags);
+
+	return domain;
+}
+
+static void iommu_free_domain(struct domain *domain)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&domain->iommu->lock, flags);
+	clear_bit(domain->id, domain->iommu->domain_ids);
+	spin_unlock_irqrestore(&domain->iommu->lock, flags);
+}
+
+static struct iova_domain reserved_iova_list;
+#ifdef DEBUG
+static void print_iova_list(struct iova_domain *head)
+{
+	struct rb_node *node = rb_first(&head->rbroot);
+	struct iova *iova;
+
+	while (node) {
+		iova = container_of(node, struct iova, node);
+
+		pr_debug("Start %lx, end %lx\n",
+			iova->pfn_lo, iova->pfn_hi);
+		node = rb_next(node);
+	}
+}
+#endif
+
+static void dmar_init_reserved_ranges(void)
+{
+	struct pci_dev *pdev = NULL;
+	struct iova *iova;
+	int i;
+	u64 addr, size;
+
+	init_iova_domain(&reserved_iova_list);
+
+	/* IOAPIC ranges shouldn't be accessed by DMA */
+	iova = reserve_iova(&reserved_iova_list, IOVA_PFN(IOAPIC_RANGE_START),
+		IOVA_PFN(IOAPIC_RANGE_END));
+	if (!iova)
+		printk(KERN_ERR "Reserve IOAPIC range failed\n");
+
+	/* Reserve all PCI MMIO to avoid peer-to-peer access */
+	for_each_pci_dev(pdev) {
+		struct resource *r;
+
+		for (i = 0; i < PCI_NUM_RESOURCES; i++) {
+			r = &pdev->resource[i];
+			if (!r->flags || !(r->flags & IORESOURCE_MEM))
+				continue;
+			addr = r->start;
+			addr &= PAGE_MASK_4K;
+			size = r->end - addr;
+			size = PAGE_ALIGN_4K(size);
+			iova = reserve_iova(&reserved_iova_list, IOVA_PFN(addr),
+				IOVA_PFN(size + addr) - 1);
+			if (!iova)
+				printk(KERN_ERR "Reserve iova failed\n");
+		}
+	}
+
+#ifdef DEBUG
+	pr_debug("System reserved iova ranges:\n");
+	print_iova_list(&reserved_iova_list);
+#endif
+}
+
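
Assuming IOVA_PFN() is the usual addr >> PAGE_SHIFT_4K from iova.h, the effect
of the reservations above in numbers:

	/*
	 * IOVA_PFN(0xfee00000)..IOVA_PFN(0xfeefffff) == 0xfee00..0xfeeff,
	 * i.e. the 0xFEExxxxx interrupt/MSI message window, plus every PCI
	 * MMIO BAR, are marked reserved and copied into each domain's iova
	 * allocator, so they can never be handed out as DMA addresses.
	 */
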
+static void domain_reserve_special_ranges(struct domain *domain)
+{
+	copy_reserved_iova(&reserved_iova_list, &domain->iovad);
+}
+
+static inline int guestwidth_to_adjustwidth(int gaw)
+{
+	int agaw;
+	int r = (gaw - 12) % 9;
+
+	if (r == 0)
+		agaw = gaw;
+	else
+		agaw = gaw + 9 - r;
+	if (agaw > 64)
+		agaw = 64;
+	return agaw;
+}
+
+static int domain_init(struct domain *domain, int guest_width)
+{
+	struct intel_iommu *iommu;
+	int adjust_width, agaw;
+	unsigned long sagaw;
+
+	init_iova_domain(&domain->iovad);
+	spin_lock_init(&domain->mapping_lock);
+
+	domain_reserve_special_ranges(domain);
+
+	/* calculate AGAW */
+	iommu = domain->iommu;
+	if (guest_width > cap_mgaw(iommu->cap))
+		guest_width = cap_mgaw(iommu->cap);
+	domain->gaw = guest_width;
+	adjust_width = guestwidth_to_adjustwidth(guest_width);
+	agaw = width_to_agaw(adjust_width);
+	sagaw = cap_sagaw(iommu->cap);
+	if (!test_bit(agaw, &sagaw)) {
+		/* hardware doesn't support it, choose a bigger one */
+		pr_debug("IOMMU: hardware doesn't support agaw %d\n", agaw);
+		agaw = find_next_bit(&sagaw, 5, agaw);
+		if (agaw >= 5)
+			return -ENODEV;
+	}
+	domain->agaw = agaw;
+	INIT_LIST_HEAD(&domain->devices);
+
+	/* always allocate the top pgd */
+	domain->pgd = (struct dma_pte *)alloc_pgtable_page();
+	if (!domain->pgd)
+		return -ENOMEM;
+	__iommu_flush_cache(iommu, domain->pgd, PAGE_SIZE_4K);
+	return 0;
+}
+
+
+static void domain_exit(struct domain *domain)
+{
+	u64 end;
+
+	/* Domain 0 is reserved, so don't process it */
+	if (!domain)
+		return;
+
+	domain_remove_dev_info(domain);
+	/* destroy iovas */
+	put_iova_domain(&domain->iovad);
+	end = DOMAIN_MAX_ADDR(domain->gaw);
+	end = end & (~PAGE_MASK_4K);
+
+	/* clear ptes */
+	dma_pte_clear_range(domain, 0, end);
+
+	/* free page tables */
+	dma_pte_free_pagetable(domain, 0, end);
+
+	iommu_free_domain(domain);
+	free_domain_mem(domain);
+}
+
+static int domain_context_mapping_one(struct domain *domain, u8 bus, u8 devfn)
+{
+	struct context_entry *context;
+	struct intel_iommu *iommu = domain->iommu;
+	unsigned long flags;
+
+	pr_debug("Set context mapping for %02x:%02x.%d\n", bus, PCI_SLOT(devfn), PCI_FUNC(devfn));
+	BUG_ON(!domain->pgd);
+	context = device_to_context_entry(iommu, bus, devfn);
+	if (!context)
+		return -ENOMEM;
+	spin_lock_irqsave(&iommu->lock, flags);
+	if (context_present(*context)) {
+		spin_unlock_irqrestore(&iommu->lock, flags);
+		return 0;
+	}
+
+	context_set_domain_id(*context, domain->id);
+	context_set_address_width(*context, domain->agaw);
+	context_set_address_root(*context, virt_to_phys(domain->pgd));
+	context_set_translation_type(*context, CONTEXT_TT_MULTI_LEVEL);
+	context_set_fault_enable(*context);
+	context_set_present(*context);
+	__iommu_flush_cache(iommu, context, sizeof(*context));
+
+	/* it's a non-present to present mapping */
+	if (iommu_flush_context_device(iommu, domain->id,
+			(((u16)bus) << 8) | devfn, DMA_CCMD_MASK_NOBIT, 1))
+		iommu_flush_write_buffer(iommu);
+	else
+		iommu_flush_iotlb_dsi(iommu, 0, 0);
+	spin_unlock_irqrestore(&iommu->lock, flags);
+	return 0;
+}
+
+static int
+domain_context_mapping(struct domain *domain, struct pci_dev *pdev)
+{
+	int ret;
+	struct pci_dev *tmp, *parent;
+
+	ret = domain_context_mapping_one(domain, pdev->bus->number,
+		pdev->devfn);
+	if (ret)
+		return ret;
+
+	/* dependent device mapping */
+	tmp = pci_find_upstream_pcie_bridge(pdev);
+	if (!tmp)
+		return 0;
+	/* Secondary interface's bus number and devfn 0 */
+	parent = pdev->bus->self;
+	while (parent != tmp) {
+		ret = domain_context_mapping_one(domain, parent->bus->number,
+			parent->devfn);
+		if (ret)
+			return ret;
+		parent = parent->bus->self;
+	}
+	if (tmp->is_pcie) /* this is a PCIE-to-PCI bridge */
+		return domain_context_mapping_one(domain,
+			tmp->subordinate->number, 0);
+	else /* this is a legacy PCI bridge */
+		return domain_context_mapping_one(domain,
+			tmp->bus->number, tmp->devfn);
+}
+
+static int domain_context_mapped(struct domain *domain, struct pci_dev *pdev)
+{
+	int ret;
+	struct pci_dev *tmp, *parent;
+
+	ret = device_context_mapped(domain->iommu, pdev->bus->number, pdev->devfn);
+	if (!ret)
+		return ret;
+	/* dependent device mapping */
+	tmp = pci_find_upstream_pcie_bridge(pdev);
+	if (!tmp)
+		return ret;
+	/* Secondary interface's bus number and devfn 0 */
+	parent = pdev->bus->self;
+	while (parent != tmp) {
+		ret = device_context_mapped(domain->iommu, parent->bus->number,
+			parent->devfn);
+		if (!ret)
+			return ret;
+		parent = parent->bus->self;
+	}
+	if (tmp->is_pcie)
+		return device_context_mapped(domain->iommu,
+			tmp->subordinate->number, 0);
+	else
+		return device_context_mapped(domain->iommu,
+			tmp->bus->number, tmp->devfn);
+}
+
+static int
+domain_page_mapping(struct domain *domain, dma_addr_t iova,
+			u64 hpa, size_t size, int prot)
+{
+	u64 start_pfn, end_pfn;
+	struct dma_pte *pte;
+	int index;
+
+	if ((prot & (DMA_PTE_READ|DMA_PTE_WRITE)) == 0)
+		return -EINVAL;
+	iova &= PAGE_MASK_4K;
+	start_pfn = ((u64)hpa) >> PAGE_SHIFT_4K;
+	end_pfn = (PAGE_ALIGN_4K(((u64)hpa) + size)) >> PAGE_SHIFT_4K;
+	index = 0;
+	while (start_pfn < end_pfn) {
+		pte = addr_to_dma_pte(domain, iova + PAGE_SIZE_4K * index);
+		if (!pte)
+			return -ENOMEM;
+		/* we don't need lock here, nobody else touches the iova range */
+		BUG_ON(dma_pte_addr(*pte));
+		dma_set_pte_addr(*pte, start_pfn << PAGE_SHIFT_4K);
+		dma_set_pte_prot(*pte, prot);
+		__iommu_flush_cache(domain->iommu, pte, sizeof(*pte));
+		start_pfn++;
+		index++;
+	}
+	return 0;
+}
+
+
+static void detach_domain_for_dev(struct domain *domain, u8 bus, u8 devfn)
+{
+	clear_context_table(domain->iommu, bus, devfn);
+	iommu_flush_context_global(domain->iommu, 0);
+	iommu_flush_iotlb_global(domain->iommu, 0);
+}
+
+static void domain_remove_dev_info(struct domain *domain)
+{
+	struct device_domain_info *info;
+	unsigned long flags;
+
+	spin_lock_irqsave(&device_domain_lock, flags);
+	while (!list_empty(&domain->devices)) {
+		info = list_entry(domain->devices.next,
+			struct device_domain_info, link);
+		list_del(&info->link);
+		list_del(&info->global);
+		if (info->dev)
+			info->dev->sysdata = NULL;
+		spin_unlock_irqrestore(&device_domain_lock, flags);
+
+		detach_domain_for_dev(info->domain, info->bus, info->devfn);
+		free_devinfo_mem(info);
+
+		spin_lock_irqsave(&device_domain_lock, flags);
+	}
+	spin_unlock_irqrestore(&device_domain_lock, flags);
+}
+
+/*
+ * find_domain
+ * Note: we use struct pci_dev->sysdata to store this info
+ */
+struct domain *
+find_domain(struct pci_dev *pdev)
+{
+	struct device_domain_info *info;
+
+	/* No lock here, assumes no domain exit in normal case */
+	info = (struct device_domain_info *)pdev->sysdata;
+	if (info)
+		return info->domain;
+	return NULL;
+}
+
+static int dmar_pci_device_match(struct pci_dev *devices[], int cnt,
+			     struct pci_dev *dev)
+{
+	int index;
+
+	while (dev) {
+		for (index = 0; index < cnt; index ++)
+			if (dev == devices[index])
+				return 1;
+
+		/* Check our parent */
+		dev = dev->bus->self;
+	}
+
+	return 0;
+}
+
+static struct dmar_drhd_unit *
+dmar_find_matched_drhd_unit(struct pci_dev *dev)
+{
+	struct dmar_drhd_unit *drhd = NULL;
+
+	list_for_each_entry(drhd, &dmar_drhd_units, list) {
+		if (drhd->include_all || dmar_pci_device_match(drhd->devices,
+						drhd->devices_cnt, dev))
+			return drhd;
+	}
+
+	return NULL;
+}
+
+
+/* domain is initialized */
+static struct domain *get_domain_for_dev(struct pci_dev *pdev, int gaw)
+{
+	struct domain *domain, *found = NULL;
+	struct intel_iommu *iommu;
+	struct dmar_drhd_unit *drhd;
+	struct device_domain_info *info, *tmp;
+	struct pci_dev *dev_tmp;
+	unsigned long flags;
+	int bus = 0, devfn = 0;
+
+	domain = find_domain(pdev);
+	if (domain)
+		return domain;
+
+	dev_tmp = pci_find_upstream_pcie_bridge(pdev);
+	if (dev_tmp) {
+		if (dev_tmp->is_pcie) {
+			bus = dev_tmp->subordinate->number;
+			devfn = 0;
+		} else {
+			bus = dev_tmp->bus->number;
+			devfn = dev_tmp->devfn;
+		}
+		spin_lock_irqsave(&device_domain_lock, flags);
+		list_for_each_entry(info, &device_domain_list, global) {
+			if (info->bus == bus && info->devfn == devfn) {
+				found = info->domain;
+				break;
+			}
+		}
+		spin_unlock_irqrestore(&device_domain_lock, flags);
+		/* the pcie-to-pci bridge already has a domain, use it */
+		if (found) {
+			domain = found;
+			goto found_domain;
+		}
+	}
+
+	/* Allocate new domain for the device */
+	drhd = dmar_find_matched_drhd_unit(pdev);
+	if (!drhd) {
+		printk(KERN_ERR "IOMMU: can't find DMAR for device %s\n",
+			pci_name(pdev));
+		return NULL;
+	}
+	iommu = drhd->iommu;
+
+	domain = iommu_alloc_domain(iommu);
+	if (!domain)
+		goto error;
+
+	if (domain_init(domain, gaw)) {
+		domain_exit(domain);
+		goto error;
+	}
+
+	/* register pcie-to-pci device */
+	if (dev_tmp) {
+		info = alloc_devinfo_mem();
+		if (!info) {
+			domain_exit(domain);
+			goto error;
+		}
+		info->bus = bus;
+		info->devfn = devfn;
+		info->dev = NULL;
+		info->domain = domain;
+		/* This domain is shared by devices under p2p bridge */
+		domain->flags |= DOMAIN_FLAG_MULTIPLE_DEVICES;
+
+		/* the pcie-to-pci bridge already has a domain, use it */
+		found = NULL;
+		spin_lock_irqsave(&device_domain_lock, flags);
+		list_for_each_entry(tmp, &device_domain_list, global) {
+			if (tmp->bus == bus && tmp->devfn == devfn) {
+				found = tmp->domain;
+				break;
+			}
+		}
+		if (found) {
+			free_devinfo_mem(info);
+			domain_exit(domain);
+			domain = found;
+		} else {
+			list_add(&info->link, &domain->devices);
+			list_add(&info->global, &device_domain_list);
+		}
+		spin_unlock_irqrestore(&device_domain_lock, flags);
+	}
+
+found_domain:
+	info = alloc_devinfo_mem();
+	if (!info)
+		goto error;
+	info->bus = pdev->bus->number;
+	info->devfn = pdev->devfn;
+	info->dev = pdev;
+	info->domain = domain;
+	spin_lock_irqsave(&device_domain_lock, flags);
+	/* somebody is fast */
+	if ((found = find_domain(pdev)) != NULL) {
+		spin_unlock_irqrestore(&device_domain_lock, flags);
+		if (found != domain) {
+			domain_exit(domain);
+			domain = found;
+		}
+		free_devinfo_mem(info);
+		return domain;
+	}
+	list_add(&info->link, &domain->devices);
+	list_add(&info->global, &device_domain_list);
+	pdev->sysdata = info;
+	spin_unlock_irqrestore(&device_domain_lock, flags);
+	return domain;
+error:
+	/* recheck it here, maybe others set it */
+	return find_domain(pdev);
+}
+
+static int iommu_prepare_identity_map(struct pci_dev *pdev, u64 start, u64 end)
+{
+	struct domain *domain;
+	unsigned long size;
+	u64 base;
+	int ret;
+
+	printk(KERN_INFO
+		"IOMMU: Setting identity map for device %s [0x%Lx - 0x%Lx]\n",
+		pci_name(pdev), start, end);
+	/* page table init */
+	domain = get_domain_for_dev(pdev, DEFAULT_DOMAIN_ADDRESS_WIDTH);
+	if (!domain)
+		return -ENOMEM;
+
+	/* The address might not be aligned */
+	base = start & PAGE_MASK_4K;
+	size = end - base;
+	size = PAGE_ALIGN_4K(size);
+	if (!reserve_iova(&domain->iovad, IOVA_PFN(base),
+			IOVA_PFN(base + size) - 1)) {
+		printk(KERN_ERR "IOMMU: reserve iova failed\n");
+		ret = -ENOMEM;
+		goto error;
+	}
+
+	pr_debug("Mapping reserved region %lx@%llx for %s\n",
+		size, base, pci_name(pdev));
+	/*
+	 * RMRR range might have overlap with physical memory range,
+	 * clear it first
+	 */
+	dma_pte_clear_range(domain, base, base + size);
+
+	ret = domain_page_mapping(domain, base, base, size,
+		DMA_PTE_READ|DMA_PTE_WRITE);
+	if (ret)
+		goto error;
+
+	/* context entry init */
+	ret = domain_context_mapping(domain, pdev);
+	if (!ret)
+		return 0;
+error:
+	domain_exit(domain);
+	return ret;
+
+}
+
+static inline int iommu_prepare_rmrr_dev(struct dmar_rmrr_unit *rmrr,
+	struct pci_dev *pdev)
+{
+	if (pdev->sysdata == DUMMY_DEVICE_DOMAIN_INFO)
+		return 0;
+	return iommu_prepare_identity_map(pdev, rmrr->base_address,
+		rmrr->end_address + 1);
+}
+
+int __init init_dmars(void)
+{
+	struct dmar_drhd_unit *drhd;
+	struct dmar_rmrr_unit *rmrr;
+	struct pci_dev *pdev;
+	struct intel_iommu *iommu;
+	int ret, unit = 0;
+
+	/*
+	 * for each drhd
+	 *    allocate root
+	 *    initialize and program root entry to not present
+	 * endfor
+	 */
+	for_each_drhd_unit(drhd) {
+		if (drhd->ignored)
+			continue;
+		iommu = alloc_iommu(drhd);
+		if (!iommu) {
+			ret = -ENOMEM;
+			goto error;
+		}
+
+		/*
+		 * TBD:
+		 * we could share the same root & context tables
+		 * among all IOMMUs. This needs to be split out later.
+		 */
+		ret = iommu_alloc_root_entry(iommu);
+		if (ret) {
+			printk(KERN_ERR "IOMMU: allocate root entry failed\n");
+			goto error;
+		}
+	}
+
+	/*
+	 * For each rmrr
+	 *   for each dev attached to rmrr
+	 *   do
+	 *     locate drhd for dev, alloc domain for dev
+	 *     allocate free domain
+	 *     allocate page table entries for rmrr
+	 *     if context not allocated for bus
+	 *           allocate and init context
+	 *           set present in root table for this bus
+	 *     init context with domain, translation etc
+	 *    endfor
+	 * endfor
+	 */
+	begin_for_each_rmrr_device(rmrr, pdev)
+		ret = iommu_prepare_rmrr_dev(rmrr, pdev);
+		if (ret)
+			printk(KERN_ERR "IOMMU: mapping reserved region failed\n");
+	end_for_each_rmrr_device(rmrr, pdev)
+
+	/*
+	 * for each drhd
+	 *   enable fault log
+	 *   global invalidate context cache
+	 *   global invalidate iotlb
+	 *   enable translation
+	 */
+	for_each_drhd_unit(drhd) {
+		if (drhd->ignored)
+			continue;
+		iommu = drhd->iommu;
+		sprintf (iommu->name, "dmar%d", unit++);
+
+		iommu_flush_write_buffer(iommu);
+
+		iommu_set_root_entry(iommu);
+
+		iommu_flush_context_global(iommu, 0);
+		iommu_flush_iotlb_global(iommu, 0);
+
+		ret = iommu_enable_translation(iommu);
+		if (ret)
+			goto error;
+	}
+
+	return 0;
+error:
+	for_each_drhd_unit(drhd) {
+		if (drhd->ignored)
+			continue;
+		iommu = drhd->iommu;
+		free_iommu(iommu);
+	}
+	return ret;
+}
+
+#define aligned_size(host_addr, size) \
+	PAGE_ALIGN_4K((host_addr & (~PAGE_MASK_4K)) + size)
+struct iova *
+iommu_alloc_iova(struct domain *domain, void *host_addr, size_t size,
+		u64 start, u64 end)
+{
+	u64 start_addr;
+	struct iova *piova;
+
+	/* Make sure it's in range */
+	if ((start > DOMAIN_MAX_ADDR(domain->gaw)) || end < start)
+		return NULL;
+
+	end = min_t(u64, DOMAIN_MAX_ADDR(domain->gaw), end);
+	start_addr = PAGE_ALIGN_4K(start);
+	size = aligned_size((u64)host_addr, size);
+	if (!size || (start_addr + size > end))
+		return NULL;
+
+	piova = alloc_iova(&domain->iovad, size >> PAGE_SHIFT_4K, IOVA_PFN(end));
+
+	return piova;
+}
+
+
+/* iotlb */
+static dma_addr_t __intel_map_single(struct device *dev, void *addr,
+	size_t size, int dir, u64 *flush_addr, unsigned int *flush_size)
+{
+	struct domain *domain;
+	struct pci_dev *pdev = to_pci_dev(dev);
+	int ret;
+	int prot = 0;
+	struct iova *iova = NULL;
+	u64 start_addr;
+
+	addr = (void *)virt_to_phys(addr);
+
+	domain = get_domain_for_dev(pdev,
+			DEFAULT_DOMAIN_ADDRESS_WIDTH);
+	if (!domain) {
+		printk(KERN_ERR"Allocating domain for %s failed", pci_name(pdev));
+		return 0;
+	}
+
+	start_addr = IOVA_START_ADDR;
+
+	if (pdev->dma_mask <= DMA_32BIT_MASK) {
+		iova = iommu_alloc_iova(domain, addr, size, start_addr,
+			pdev->dma_mask);
+	} else  {
+		/*
+		 * First try to allocate an io virtual address in
+		 * DMA_32BIT_MASK and if that fails then try allocating
+		 * from the higher range
+		 */
+		iova = iommu_alloc_iova(domain, addr, size, start_addr,
+			DMA_32BIT_MASK);
+		if (!iova)
+			iova = iommu_alloc_iova(domain, addr, size, start_addr,
+			pdev->dma_mask);
+	}
+
+	if (!iova) {
+		printk(KERN_ERR"Allocating iova for %s failed", pci_name(pdev));
+		return 0;
+	}
+
+	/* make sure context mapping is ok */
+	if (unlikely(!domain_context_mapped(domain, pdev))) {
+		ret = domain_context_mapping(domain, pdev);
+		if (ret)
+			goto error;
+	}
+
+	/*
+	 * Check if the DMAR supports zero-length reads on write-only
+	 * mappings.
+	 */
+	if (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL || \
+			!cap_zlr(domain->iommu->cap))
+		prot |= DMA_PTE_READ;
+	if (dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL)
+		prot |= DMA_PTE_WRITE;
+	/*
+	 * addr through (addr + size) might span a partial page, so map the
+	 * whole page.  Note: if two parts of one page are mapped separately,
+	 * we might end up with two guest addresses mapping to the same host
+	 * address, but this is not a big problem
+	 */
+	ret = domain_page_mapping(domain, iova->pfn_lo << PAGE_SHIFT_4K,
+		((u64)addr) & PAGE_MASK_4K,
+		(iova->pfn_hi - iova->pfn_lo + 1) << PAGE_SHIFT_4K, prot);
+	if (ret)
+		goto error;
+
+	pr_debug("Device %s request: %lx@%llx mapping: %lx@%llx, dir %d\n",
+		pci_name(pdev), size, (u64)addr,
+		(iova->pfn_hi - iova->pfn_lo + 1) << PAGE_SHIFT_4K,
+		(u64)(iova->pfn_lo << PAGE_SHIFT_4K), dir);
+
+	*flush_addr = iova->pfn_lo << PAGE_SHIFT_4K;
+	*flush_size = (iova->pfn_hi - iova->pfn_lo + 1) << PAGE_SHIFT_4K;
+	return (iova->pfn_lo << PAGE_SHIFT_4K) + ((u64)addr & (~PAGE_MASK_4K));
+error:
+	__free_iova(&domain->iovad, iova);
+	printk(KERN_ERR"Device %s request: %lx@%llx dir %d --- failed\n",
+		pci_name(pdev), size, (u64)addr, dir);
+	return 0;
+}
+
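
To make the partial-page handling above concrete (example numbers only):

	/*
	 * Mapping 0x100 bytes whose physical address ends in offset 0x678:
	 * aligned_size(addr, 0x100) == PAGE_ALIGN_4K(0x678 + 0x100) == 0x1000,
	 * so one 4K IOVA page is allocated, the whole page is mapped, and the
	 * returned bus address is (iova->pfn_lo << PAGE_SHIFT_4K) + 0x678,
	 * preserving the caller's sub-page offset.
	 */
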
+static dma_addr_t intel_map_single(struct device *hwdev, void *addr,
+	size_t size, int dir)
+{
+	struct pci_dev *pdev = to_pci_dev(hwdev);
+	dma_addr_t ret;
+	struct domain *domain;
+	u64 flush_addr;
+	unsigned int flush_size;
+
+	BUG_ON(dir == DMA_NONE);
+	if (pdev->sysdata == DUMMY_DEVICE_DOMAIN_INFO)
+		return virt_to_bus(addr);
+
+	ret = __intel_map_single(hwdev, addr, size, dir, &flush_addr, &flush_size);
+	if (ret) {
+		domain = find_domain(pdev);
+		/* it's a non-present to present mapping */
+		if (iommu_flush_iotlb_psi(domain->iommu, domain->id,
+				flush_addr, flush_size >> PAGE_SHIFT_4K, 1))
+			iommu_flush_write_buffer(domain->iommu);
+	}
+	return ret;
+}
+
+static void __intel_unmap_single(struct device *dev, dma_addr_t dev_addr,
+	size_t size, int dir, u64 *flush_addr, unsigned int *flush_size)
+{
+	struct domain *domain;
+	struct pci_dev *pdev = to_pci_dev(dev);
+	struct iova *iova;
+
+	domain = find_domain(pdev);
+	BUG_ON(!domain);
+
+	iova = find_iova(&domain->iovad, IOVA_PFN(dev_addr));
+	if (!iova) {
+		*flush_size = 0;
+		return;
+	}
+	pr_debug("Device %s unmapping: %lx@%llx\n",
+		pci_name(pdev), (iova->pfn_hi - iova->pfn_lo + 1) << PAGE_SHIFT_4K,
+		(u64)(iova->pfn_lo << PAGE_SHIFT_4K));
+
+	*flush_addr = iova->pfn_lo << PAGE_SHIFT_4K;
+	*flush_size = (iova->pfn_hi - iova->pfn_lo + 1) << PAGE_SHIFT_4K;
+	/*  clear the whole page, not just dev_addr - (dev_addr + size) */
+	dma_pte_clear_range(domain, *flush_addr, *flush_addr + *flush_size);
+	/* free page tables */
+	dma_pte_free_pagetable(domain, *flush_addr, *flush_addr + *flush_size);
+	/* free iova */
+	__free_iova(&domain->iovad, iova);
+}
+
+static void intel_unmap_single(struct device *dev, dma_addr_t dev_addr,
+	size_t size, int dir)
+{
+	struct pci_dev *pdev = to_pci_dev(dev);
+	struct domain *domain;
+	u64 flush_addr;
+	unsigned int flush_size;
+
+	if (pdev->sysdata == DUMMY_DEVICE_DOMAIN_INFO)
+		return;
+
+	domain = find_domain(pdev);
+	__intel_unmap_single(dev, dev_addr, size, dir, &flush_addr, &flush_size);
+	if (flush_size == 0)
+		return;
+	if (iommu_flush_iotlb_psi(domain->iommu, domain->id, flush_addr,
+			flush_size >> PAGE_SHIFT_4K, 0))
+		iommu_flush_write_buffer(domain->iommu);
+}
+
+static void * intel_alloc_coherent(struct device *hwdev, size_t size,
+		       dma_addr_t *dma_handle, gfp_t flags)
+{
+	void *vaddr;
+	int order;
+
+	size = PAGE_ALIGN_4K(size);
+	order = get_order(size);
+	flags &= ~(GFP_DMA | GFP_DMA32);
+
+	vaddr = (void *)__get_free_pages(flags, order);
+	if (!vaddr)
+		return NULL;
+	memset(vaddr, 0, size);
+
+	*dma_handle = intel_map_single(hwdev, vaddr, size, DMA_BIDIRECTIONAL);
+	if (*dma_handle)
+		return vaddr;
+	free_pages((unsigned long)vaddr, order);
+	return NULL;
+}
+
+static void intel_free_coherent(struct device *hwdev, size_t size,
+	void *vaddr, dma_addr_t dma_handle)
+{
+	int order;
+
+	size = PAGE_ALIGN_4K(size);
+	order = get_order(size);
+
+	intel_unmap_single(hwdev, dma_handle, size, DMA_BIDIRECTIONAL);
+	free_pages((unsigned long)vaddr, order);
+}
+
+static void intel_unmap_sg(struct device *hwdev, struct scatterlist *sg,
+	int nelems, int dir)
+{
+	int i;
+	struct pci_dev *pdev = to_pci_dev(hwdev);
+	struct domain *domain;
+	u64 flush_addr;
+	unsigned int flush_size;
+
+	if (pdev->sysdata == DUMMY_DEVICE_DOMAIN_INFO)
+		return;
+
+	domain = find_domain(pdev);
+	for (i = 0; i < nelems; i++, sg++)
+		__intel_unmap_single(hwdev, sg->dma_address,
+			sg->dma_length, dir, &flush_addr, &flush_size);
+
+	if (iommu_flush_iotlb_dsi(domain->iommu, domain->id, 0))
+		iommu_flush_write_buffer(domain->iommu);
+}
+
+#define SG_ENT_VIRT_ADDRESS(sg)	(page_address((sg)->page) + (sg)->offset)
+static int intel_nontranslate_map_sg(struct device *hwdev,
+	struct scatterlist *sg, int nelems, int dir)
+{
+	int i;
+
+	for (i = 0; i < nelems; i++) {
+		struct scatterlist *s = &sg[i];
+		BUG_ON(!s->page);
+		s->dma_address = virt_to_bus(SG_ENT_VIRT_ADDRESS(s));
+		s->dma_length = s->length;
+	}
+	return nelems;
+}
+
+static int intel_map_sg(struct device *hwdev, struct scatterlist *sg,
+	int nelems, int dir)
+{
+	void *addr;
+	int i;
+	dma_addr_t dma_handle;
+	struct pci_dev *pdev = to_pci_dev(hwdev);
+	struct domain *domain;
+	u64 flush_addr;
+	unsigned int flush_size;
+
+	BUG_ON(dir == DMA_NONE);
+	if (pdev->sysdata == DUMMY_DEVICE_DOMAIN_INFO)
+		return intel_nontranslate_map_sg(hwdev, sg, nelems, dir);
+
+	for (i = 0; i < nelems; i++, sg++) {
+		addr = SG_ENT_VIRT_ADDRESS(sg);
+		dma_handle = __intel_map_single(hwdev, addr,
+				sg->length, dir, &flush_addr, &flush_size);
+		if (!dma_handle) {
+			intel_unmap_sg(hwdev, sg - i, i, dir);
+			sg[0].dma_length = 0;
+			return 0;
+		}
+		sg->dma_address = dma_handle;
+		sg->dma_length = sg->length;
+	}
+
+	domain = find_domain(pdev);
+
+	/* it's a non-present to present mapping */
+	if (iommu_flush_iotlb_dsi(domain->iommu, domain->id, 1))
+		iommu_flush_write_buffer(domain->iommu);
+	return nelems;
+}
+
+struct dma_mapping_ops intel_dma_ops = {
+	.alloc_coherent = intel_alloc_coherent,
+	.free_coherent = intel_free_coherent,
+	.map_single = intel_map_single,
+	.unmap_single = intel_unmap_single,
+	.map_sg = intel_map_sg,
+	.unmap_sg = intel_unmap_sg,
+};
+
+void *iommu_rpool_alloc(unsigned int size, gfp_t flag)
+{
+	if (size == PAGE_SIZE_4K)
+		return(void *)get_zeroed_page(flag);
+	else
+		return kzalloc(size, flag);
+}
+
+void iommu_rpool_free(void *pobj, unsigned int size)
+{
+	if (size == PAGE_SIZE_4K)
+		free_page((unsigned long)pobj);
+	else
+		kfree(pobj);
+}
+
+static inline int
+iommu_pgtable_pool_init(void)
+{
+
+	return init_resource_pool(&iommu_pgtable_pool, MIN_PGTABLE_PAGES,
+		PAGE_SIZE_4K, GROW_PGTABLE_PAGES, iommu_rpool_alloc,
+		iommu_rpool_free);
+}
+
+static inline int
+iommu_domain_pool_init(void)
+{
+	return init_resource_pool(&iommu_domain_pool, MIN_DOMAIN_REQ,
+		sizeof(struct domain), GROW_DOMAIN_REQ, iommu_rpool_alloc,
+		iommu_rpool_free);
+}
+
+static inline int
+iommu_devinfo_pool_init(void)
+{
+	return init_resource_pool(&iommu_devinfo_pool, MIN_DEVINFO_REQ,
+		sizeof(struct device_domain_info),
+		GROW_DEVINFO_REQ, iommu_rpool_alloc,
+		iommu_rpool_free);
+}
+
+static inline int
+iommu_iova_pool_init(void)
+{
+	return init_resource_pool(&iommu_iova_pool, MIN_IOVA_REQ,
+		sizeof(struct iova),
+		GROW_IOVA_REQ, iommu_rpool_alloc, iommu_rpool_free);
+}
+
+static int iommu_init_mempool(void)
+{
+	int ret;
+	ret = iommu_iova_pool_init();
+	if (ret)
+		return ret;
+
+	ret = iommu_pgtable_pool_init();
+	if (ret)
+		goto pgtable_error;
+
+	ret = iommu_domain_pool_init();
+	if (ret)
+		goto domain_error;
+
+	ret = iommu_devinfo_pool_init();
+	if (!ret)
+		return ret;
+
+	destroy_resource_pool(&iommu_domain_pool);
+domain_error:
+	destroy_resource_pool(&iommu_pgtable_pool);
+pgtable_error:
+	destroy_resource_pool(&iommu_iova_pool);
+
+	return -ENOMEM;
+}
+
+static void iommu_exit_mempool(void)
+{
+	destroy_resource_pool(&iommu_devinfo_pool);
+	destroy_resource_pool(&iommu_domain_pool);
+	destroy_resource_pool(&iommu_pgtable_pool);
+	destroy_resource_pool(&iommu_iova_pool);
+}
+
+void __init detect_intel_iommu(void)
+{
+	if (swiotlb || no_iommu || iommu_detected || dmar_disabled)
+		return;
+	if (early_dmar_detect()) {
+		iommu_detected = 1;
+	}
+}
+
+static void __init init_no_remapping_devices(void)
+{
+	struct dmar_drhd_unit *drhd;
+
+	for_each_drhd_unit(drhd)
+		if (!drhd->include_all) {
+			int i;
+			for (i=0; i < drhd->devices_cnt; i++)
+				if (drhd->devices[i] != NULL)
+					break;
+			/* ignore DMAR unit if no pci devices exist */
+			if (i == drhd->devices_cnt)
+				drhd->ignored = 1;
+		}
+
+	if (dmar_map_gfx)
+		return;
+
+	for_each_drhd_unit(drhd) {
+		int i;
+		if (drhd->ignored || drhd->include_all)
+			continue;
+
+		for (i = 0; i < drhd->devices_cnt; i++)
+			if (drhd->devices[i] && !IS_GFX_DEVICE(drhd->devices[i]))
+				break;
+
+		if (i < drhd->devices_cnt)
+			continue;
+
+		/* bypass IOMMU if it is just for gfx devices */
+		drhd->ignored = 1;
+		for (i = 0; i < drhd->devices_cnt; i++) {
+			if (!drhd->devices[i])
+				continue;
+			drhd->devices[i]->sysdata = DUMMY_DEVICE_DOMAIN_INFO;
+		}
+	}
+}
+
+int __init intel_iommu_init(void)
+{
+	int ret = 0;
+
+	if (no_iommu || swiotlb || dmar_disabled)
+		return -ENODEV;
+
+	if (dmar_table_init())
+		return -ENODEV;
+
+	iommu_init_mempool();
+	dmar_init_reserved_ranges();
+
+	init_no_remapping_devices();
+
+	ret = init_dmars();
+	if (ret) {
+		printk(KERN_ERR "IOMMU: dmar init failed\n");
+		put_iova_domain(&reserved_iova_list);
+		iommu_exit_mempool();
+		return ret;
+	}
+	printk(KERN_INFO
+		"PCI-DMA: Intel(R) Virtualization Technology for Directed I/O\n");
+
+	force_iommu = 1;
+	dma_ops = &intel_dma_ops;
+	return 0;
+}
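
Since intel_iommu_init() installs intel_dma_ops as the global dma_ops, drivers
keep using the ordinary DMA API and are redirected into the mapping code above.
A minimal sketch of the caller side (the function name, buffer and error
handling are hypothetical, not part of this patch):

	/* hypothetical driver snippet -- needs linux/pci.h, linux/dma-mapping.h */
	static int example_send(struct pci_dev *pdev, void *buf, size_t len)
	{
		dma_addr_t bus;

		bus = dma_map_single(&pdev->dev, buf, len, DMA_TO_DEVICE);
		if (dma_mapping_error(bus))
			return -EIO;
		/* program the device with 'bus' -- an IOVA below pdev->dma_mask,
		 * not a physical address -- and wait for the transfer */
		dma_unmap_single(&pdev->dev, bus, len, DMA_TO_DEVICE);
		return 0;
	}
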
Index: linux-2.6.22-rc3/drivers/pci/intel-iommu.h
===================================================================
--- /dev/null	1970-01-01 00:00:00.000000000 +0000
+++ linux-2.6.22-rc3/drivers/pci/intel-iommu.h	2007-06-06 11:35:01.000000000 -0700
@@ -0,0 +1,296 @@
+/*
+ * Copyright (c) 2006, Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
+ * Place - Suite 330, Boston, MA 02111-1307 USA.
+ *
+ * Copyright (C) Ashok Raj <ashok.raj@intel.com>
+ */
+
+#ifndef _INTEL_IOMMU_H_
+#define _INTEL_IOMMU_H_
+
+#include <linux/types.h>
+#include <linux/msi.h>
+#include "iova.h"
+#include <asm/io.h>
+
+/*
+ * Intel IOMMU register specification per version 1.0 public spec.
+ */
+
+#define	DMAR_VER_REG	0x0	/* Arch version supported by this IOMMU */
+#define	DMAR_CAP_REG	0x8	/* Hardware supported capabilities */
+#define	DMAR_ECAP_REG	0x10	/* Extended capabilities supported */
+#define	DMAR_GCMD_REG	0x18	/* Global command register */
+#define	DMAR_GSTS_REG	0x1c	/* Global status register */
+#define	DMAR_RTADDR_REG	0x20	/* Root entry table */
+#define	DMAR_CCMD_REG	0x28	/* Context command reg */
+#define	DMAR_FSTS_REG	0x34	/* Fault Status register */
+#define	DMAR_FECTL_REG	0x38	/* Fault control register */
+#define	DMAR_FEDATA_REG	0x3c	/* Fault event interrupt data register */
+#define	DMAR_FEADDR_REG	0x40	/* Fault event interrupt addr register */
+#define	DMAR_FEUADDR_REG 0x44	/* Upper address register */
+#define	DMAR_AFLOG_REG	0x58	/* Advanced Fault control */
+#define	DMAR_PMEN_REG	0x64	/* Enable Protected Memory Region */
+#define	DMAR_PLMBASE_REG 0x68	/* PMRR Low addr */
+#define	DMAR_PLMLIMIT_REG 0x6c	/* PMRR low limit */
+#define	DMAR_PHMBASE_REG 0x70	/* pmrr high base addr */
+#define	DMAR_PHMLIMIT_REG 0x78	/* pmrr high limit */
+
+#define OFFSET_STRIDE		(9)
+#define dmar_readl(dmar, reg) readl(dmar + reg)
+#define dmar_writel(dmar, reg, val) writel((val), dmar + reg)
+#define dmar_readq(dmar, reg) ({ \
+		u32 lo, hi; \
+		lo = dmar_readl(dmar, reg); \
+		hi = dmar_readl(dmar, reg + 4); \
+		(((u64) hi) << 32) + lo; })
+#define dmar_writeq(dmar, reg, val) do {\
+		dmar_writel(dmar, reg, (u32)(val)); \
+		dmar_writel(dmar, reg + 4, (u32)((val) >> 32)); \
+	} while (0)
+
+#define VER_MAJOR(v)		(((v) & 0xf0) >> 4)
+#define VER_MINOR(v)		((v) & 0x0f)
+
+/*
+ * Decoding Capability Register
+ */
+#define cap_read_drain(c)	(((c) >> 55) & 1)
+#define cap_write_drain(c)	(((c) >> 54) & 1)
+#define cap_max_amask_val(c)	(((c) >> 48) & 0x3f)
+#define cap_num_fault_regs(c)	((((c) >> 40) & 0xff) + 1)
+#define cap_pgsel_inv(c)	(((c) >> 39) & 1)
+
+#define cap_super_page_val(c)	(((c) >> 34) & 0xf)
+#define cap_super_offset(c)	(((find_first_bit(&cap_super_page_val(c), 4)) \
+					* OFFSET_STRIDE) + 21)
+
+#define cap_fault_reg_offset(c)	((((c) >> 24) & 0x3ff) * 16)
+#define cap_max_fault_reg_offset(c) \
+	(cap_fault_reg_offset(c) + cap_num_fault_regs(c) * 16)
+
+#define cap_zlr(c)		(((c) >> 22) & 1)
+#define cap_isoch(c)		(((c) >> 23) & 1)
+#define cap_mgaw(c)		((((c) >> 16) & 0x3f) + 1)
+#define cap_sagaw(c)		(((c) >> 8) & 0x1f)
+#define cap_caching_mode(c)	(((c) >> 7) & 1)
+#define cap_phmr(c)		(((c) >> 6) & 1)
+#define cap_plmr(c)		(((c) >> 5) & 1)
+#define cap_rwbf(c)		(((c) >> 4) & 1)
+#define cap_afl(c)		(((c) >> 3) & 1)
+#define cap_ndoms(c)		(((unsigned long)1) << (4 + 2 * ((c) & 0x7)))
+/*
+ * Extended Capability Register
+ */
+
+#define ecap_niotlb_iunits(e)	((((e) >> 24) & 0xff) + 1)
+#define ecap_iotlb_offset(e) 	((((e) >> 8) & 0x3ff) * 16)
+#define ecap_max_iotlb_offset(e) \
+	(ecap_iotlb_offset(e) + ecap_niotlb_iunits(e) * 16)
+#define ecap_coherent(e)	((e) & 0x1)
+
+
+/* IOTLB_REG */
+#define DMA_TLB_GLOBAL_FLUSH (((u64)1) << 60)
+#define DMA_TLB_DSI_FLUSH (((u64)2) << 60)
+#define DMA_TLB_PSI_FLUSH (((u64)3) << 60)
+#define DMA_TLB_IIRG(type) ((type >> 60) & 7)
+#define DMA_TLB_IAIG(val) (((val) >> 57) & 7)
+#define DMA_TLB_READ_DRAIN (((u64)1) << 49)
+#define DMA_TLB_WRITE_DRAIN (((u64)1) << 48)
+#define DMA_TLB_DID(id)	(((u64)((id) & 0xffff)) << 32)
+#define DMA_TLB_IVT (((u64)1) << 63)
+#define DMA_TLB_IH_NONLEAF (((u64)1) << 6)
+#define DMA_TLB_MAX_SIZE (0x3f)
+
+/* GCMD_REG */
+#define DMA_GCMD_TE (((u32)1) << 31)
+#define DMA_GCMD_SRTP (((u32)1) << 30)
+#define DMA_GCMD_SFL (((u32)1) << 29)
+#define DMA_GCMD_EAFL (((u32)1) << 28)
+#define DMA_GCMD_WBF (((u32)1) << 27)
+
+/* GSTS_REG */
+#define DMA_GSTS_TES (((u32)1) << 31)
+#define DMA_GSTS_RTPS (((u32)1) << 30)
+#define DMA_GSTS_FLS (((u32)1) << 29)
+#define DMA_GSTS_AFLS (((u32)1) << 28)
+#define DMA_GSTS_WBFS (((u32)1) << 27)
+
+/* CCMD_REG */
+#define DMA_CCMD_ICC (((u64)1) << 63)
+#define DMA_CCMD_GLOBAL_INVL (((u64)1) << 61)
+#define DMA_CCMD_DOMAIN_INVL (((u64)2) << 61)
+#define DMA_CCMD_DEVICE_INVL (((u64)3) << 61)
+#define DMA_CCMD_FM(m) (((u64)((m) & 0x3)) << 32)
+#define DMA_CCMD_MASK_NOBIT 0
+#define DMA_CCMD_MASK_1BIT 1
+#define DMA_CCMD_MASK_2BIT 2
+#define DMA_CCMD_MASK_3BIT 3
+#define DMA_CCMD_SID(s) (((u64)((s) & 0xffff)) << 16)
+#define DMA_CCMD_DID(d) ((u64)((d) & 0xffff))
+
+/* FECTL_REG */
+#define DMA_FECTL_IM (((u32)1) << 31)
+
+/* FSTS_REG */
+#define DMA_FSTS_PPF ((u32)2)
+#define DMA_FSTS_PFO ((u32)1)
+#define dma_fsts_fault_record_index(s) (((s) >> 8) & 0xff)
+
+/* FRCD_REG, 32 bits access */
+#define DMA_FRCD_F (((u32)1) << 31)
+#define dma_frcd_type(d) ((d >> 30) & 1)
+#define dma_frcd_fault_reason(c) (c & 0xff)
+#define dma_frcd_source_id(c) (c & 0xffff)
+#define dma_frcd_page_addr(d) (d & (((u64)-1) << 12)) /* low 64 bit */
+
+/*
+ * 0: Present
+ * 1-11: Reserved
+ * 12-63: Context Ptr (12 - (haw-1))
+ * 64-127: Reserved
+ */
+struct root_entry {
+	u64	val;
+	u64	rsvd1;
+};
+#define ROOT_ENTRY_NR (PAGE_SIZE_4K/sizeof(struct root_entry))
+#define root_present(root)	((root).val & 1)
+#define set_root_present(root) do {(root).val |= 1;} while(0)
+
+struct context_entry;
+static inline struct context_entry *
+get_context_addr_from_root(struct root_entry root)
+{
+	return (struct context_entry *) (root_present(root) ?
+		phys_to_virt((root).val & PAGE_MASK_4K) :
+		NULL);
+}
+
+#define set_root_value(root, value) \
+	do {(root).val |= ((value) & PAGE_MASK_4K);} while(0)
+
+/*
+ * low 64 bits:
+ * 0: present
+ * 1: fault processing disable
+ * 2-3: translation type
+ * 12-63: address space root
+ * high 64 bits:
+ * 0-2: address width
+ * 3-6: aval
+ * 8-23: domain id
+ */
+struct context_entry {
+	u64 lo;
+	u64 hi;
+};
+#define context_present(c) ((c).lo & 1)
+#define context_fault_disable(c) (((c).lo >> 1) & 1)
+#define context_translation_type(c) (((c).lo >> 2) & 3)
+#define context_address_root(c) ((c).lo & PAGE_MASK_4K)
+#define context_address_width(c) ((c).hi &  7)
+#define context_domain_id(c) (((c).hi >> 8) & ((1 << 16) - 1))
+
+#define context_set_present(c) do {(c).lo |= 1;} while(0)
+#define context_set_fault_enable(c) \
+	do {(c).lo &= (((u64)-1) << 2) | 1;} while(0)
+#define context_set_translation_type(c, val) do { \
+		(c).lo &= (((u64)-1) << 4) | 3; \
+		(c).lo |= ((val) & 3) << 2; \
+	} while(0)
+#define CONTEXT_TT_MULTI_LEVEL 0
+#define context_set_address_root(c, val) \
+	do {(c).lo |= (val) & PAGE_MASK_4K;} while(0)
+#define context_set_address_width(c, val) do {(c).hi |= (val) & 7;} while(0)
+#define context_set_domain_id(c, val) \
+	do {(c).hi |= ((val) & ((1 << 16) - 1)) << 8;} while(0)
+#define context_clear_entry(c) do {(c).lo = 0; (c).hi = 0;} while(0)
+
+/*
+ * 0: readable
+ * 1: writable
+ * 2-6: reserved
+ * 7: super page
+ * 8-11: available
+ * 12-63: Host physical address
+ */
+struct dma_pte {
+	u64 val;
+};
+#define dma_clear_pte(p)	do {(p).val = 0;} while(0)
+
+#define DMA_PTE_READ (1)
+#define DMA_PTE_WRITE (2)
+
+#define dma_set_pte_readable(p) do {(p).val |= DMA_PTE_READ;} while(0)
+#define dma_set_pte_writable(p) do {(p).val |= DMA_PTE_WRITE;} while(0)
+#define dma_set_pte_prot(p, prot) do {\
+	(p).val = ((p).val & ~3) | ((prot) & 3); } while(0)
+#define dma_pte_addr(p) ((p).val & PAGE_MASK_4K)
+#define dma_set_pte_addr(p, addr) do {\
+		(p).val |= ((addr) & PAGE_MASK_4K); } while(0)
+#define dma_pte_present(p) (((p).val & 3) != 0)
+
+struct intel_iommu;
+
+struct domain {
+	int	id;			/* domain id */
+	struct intel_iommu *iommu;	/* back pointer to owning iommu */
+
+	struct list_head devices; 	/* all devices' list */
+	struct iova_domain iovad;	/* iova's that belong to this domain */
+
+	struct dma_pte	*pgd;		/* virtual address */
+	spinlock_t	mapping_lock;	/* page table lock */
+	int		gaw;		/* max guest address width */
+	int		agaw;		/* adjusted guest address width, 0 is level 2 30-bit */
+
+#define DOMAIN_FLAG_MULTIPLE_DEVICES 1
+	int		flags;
+};
+
+/* PCI domain-device relationship */
+struct device_domain_info {
+	struct list_head link;	/* link to domain siblings */
+	struct list_head global; /* link to global list */
+	u8 bus;			/* PCI bus number */
+	u8 devfn;		/* PCI devfn number */
+	struct pci_dev *dev; /* it's NULL for PCIE-to-PCI bridge */
+	struct domain *domain; /* pointer to domain */
+};
+
+extern int init_dmars(void);
+
+struct intel_iommu {
+	void __iomem	*reg; /* Pointer to hardware regs, virtual addr */
+	u64		cap;
+	u64		ecap;
+	unsigned long 	*domain_ids; /* bitmap of domains */
+	struct domain **domains; /* ptr to domains */
+	int		seg;
+	u32		gcmd; /* Holds TE, EAFL. Don't need SRTP, SFL, WBF */
+	spinlock_t	lock; /* protect context, domain ids */
+	spinlock_t	register_lock; /* protect register handling */
+	struct root_entry *root_entry; /* virtual address */
+
+	unsigned int irq;
+	unsigned char name[7];    /* Device Name */
+	struct msi_msg saved_msg;
+	struct sys_device sysdev;
+};
+
+#endif
Index: linux-2.6.22-rc3/include/linux/dmar.h
===================================================================
--- linux-2.6.22-rc3.orig/include/linux/dmar.h	2007-06-06 11:33:23.000000000 -0700
+++ linux-2.6.22-rc3/include/linux/dmar.h	2007-06-06 11:35:01.000000000 -0700
@@ -23,8 +23,15 @@
 
 #include <linux/acpi.h>
 #include <linux/types.h>
+#include <linux/msi.h>
 
 
+struct intel_iommu;
+
+/* Intel IOMMU detection and initialization functions */
+extern void detect_intel_iommu(void);
+extern int intel_iommu_init(void);
+
 extern int dmar_table_init(void);
 extern int early_dmar_detect(void);
 
@@ -49,4 +56,20 @@
 	int	devices_cnt;		/* target device count */
 };
 
+#define for_each_drhd_unit(drhd) \
+	list_for_each_entry(drhd, &dmar_drhd_units, list)
+#define for_each_rmrr_units(rmrr) \
+	list_for_each_entry(rmrr, &dmar_rmrr_units, list)
+#define begin_for_each_rmrr_device(rmrr, pdev) \
+	for_each_rmrr_units(rmrr) { \
+		int _i; \
+		for (_i = 0; _i < rmrr->devices_cnt; _i++) { \
+			pdev = rmrr->devices[_i]; \
+			/* some BIOSes list nonexistent devices in the DMAR table */\
+			if (!pdev) \
+				continue;
+#define end_for_each_rmrr_device(rmrr, pdev) \
+		} \
+	}
+
 #endif /* __DMAR_H__ */

-- 

^ permalink raw reply	[flat|nested] 64+ messages in thread

* [Intel-IOMMU 07/10] Intel iommu cmdline option - forcedac
  2007-06-06 18:56 [Intel-IOMMU 00/10] Intel IOMMU Support anil.s.keshavamurthy
                   ` (5 preceding siblings ...)
  2007-06-06 18:57 ` [Intel-IOMMU 06/10] Intel IOMMU driver anil.s.keshavamurthy
@ 2007-06-06 18:57 ` anil.s.keshavamurthy
  2007-06-07 23:58   ` Andrew Morton
  2007-06-06 18:57 ` [Intel-IOMMU 08/10] DMAR fault handling support anil.s.keshavamurthy
                   ` (2 subsequent siblings)
  9 siblings, 1 reply; 64+ messages in thread
From: anil.s.keshavamurthy @ 2007-06-06 18:57 UTC (permalink / raw)
  To: linux-kernel
  Cc: akpm, ak, gregkh, muli, asit.k.mallick, suresh.b.siddha,
	anil.s.keshavamurthy, arjan, ashok.raj, shaohua.li, davem

[-- Attachment #1: dmar_forcedac.patch --]
[-- Type: text/plain, Size: 2392 bytes --]

	Introduce the intel_iommu=forcedac command line option.
This option is helpful for verifying that a PCI device is capable
of handling physical DMA addresses greater than 4G.
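
Roughly speaking (illustrative boot line; any other intel_iommu
options would be appended comma separated):

	intel_iommu=forcedac

With this set, iommu_alloc_iova() is called with the card's full
dma_mask straight away instead of first trying for an IOVA below 4G,
so 64-bit capable cards end up driving dual address cycles.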

Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
---
 Documentation/kernel-parameters.txt |    7 +++++++
 drivers/pci/intel-iommu.c           |    6 +++++-
 2 files changed, 12 insertions(+), 1 deletion(-)

Index: linux-2.6.22-rc3/Documentation/kernel-parameters.txt
===================================================================
--- linux-2.6.22-rc3.orig/Documentation/kernel-parameters.txt	2007-06-04 12:40:29.000000000 -0700
+++ linux-2.6.22-rc3/Documentation/kernel-parameters.txt	2007-06-04 12:40:41.000000000 -0700
@@ -785,6 +785,13 @@
 			bypassed by not enabling DMAR with this option. In
 			this case, gfx device will use physical address for
 			DMA.
+		forcedac
+			With this option the IOMMU will not try to allocate
+			an IO virtual address below 32 bits, forcing dual
+			address cycles on the PCI bus for cards supporting
+			greater than 32 bit addressing. The default is to
+			look for a translation below 32 bits and, if that is
+			not available, to look in the higher range.
 
 	io7=		[HW] IO7 for Marvel based alpha systems
 			See comment before marvel_specify_io7 in
Index: linux-2.6.22-rc3/drivers/pci/intel-iommu.c
===================================================================
--- linux-2.6.22-rc3.orig/drivers/pci/intel-iommu.c	2007-06-04 12:40:29.000000000 -0700
+++ linux-2.6.22-rc3/drivers/pci/intel-iommu.c	2007-06-04 12:40:41.000000000 -0700
@@ -53,6 +53,7 @@
 
 static int dmar_disabled;
 static int __initdata dmar_map_gfx = 1;
+static int dmar_forcedac = 0;
 
 #define DUMMY_DEVICE_DOMAIN_INFO ((struct device_domain_info *)(-1))
 static DEFINE_SPINLOCK(device_domain_lock);
@@ -69,6 +70,9 @@
 		} else if (!strncmp(str, "igfx_off", 8)) {
 			dmar_map_gfx = 0;
 			printk(KERN_INFO"Intel-IOMMU: disable GFX device mapping\n");
+ 		} else if (!strncmp(str, "forcedac", 8)) {
+			printk (KERN_INFO"Intel-IOMMU: Enabling DAC for PCI supporting > 32Bit DMA\n");
+			dmar_forcedac = 1;
 		}
 
 		str += strcspn(str, ",");
@@ -1500,7 +1504,7 @@
 
 	start_addr = IOVA_START_ADDR;
 
-	if (pdev->dma_mask <= DMA_32BIT_MASK) {
+	if ((pdev->dma_mask <= DMA_32BIT_MASK) || (dmar_forcedac)) {
 		iova = iommu_alloc_iova(domain, addr, size, start_addr,
 			pdev->dma_mask);
 	} else  {

-- 

^ permalink raw reply	[flat|nested] 64+ messages in thread

* [Intel-IOMMU 08/10] DMAR fault handling support
  2007-06-06 18:56 [Intel-IOMMU 00/10] Intel IOMMU Support anil.s.keshavamurthy
                   ` (6 preceding siblings ...)
  2007-06-06 18:57 ` [Intel-IOMMU 07/10] Intel iommu cmdline option - forcedac anil.s.keshavamurthy
@ 2007-06-06 18:57 ` anil.s.keshavamurthy
  2007-06-06 18:57 ` [Intel-IOMMU 09/10] Iommu Gfx workaround anil.s.keshavamurthy
  2007-06-06 18:57 ` [Intel-IOMMU 10/10] Iommu floppy workaround anil.s.keshavamurthy
  9 siblings, 0 replies; 64+ messages in thread
From: anil.s.keshavamurthy @ 2007-06-06 18:57 UTC (permalink / raw)
  To: linux-kernel
  Cc: akpm, ak, gregkh, muli, asit.k.mallick, suresh.b.siddha,
	anil.s.keshavamurthy, arjan, ashok.raj, shaohua.li, davem

[-- Attachment #1: dmar_fault_handling_support.patch --]
[-- Type: text/plain, Size: 10194 bytes --]

	MSI interrupt handler registration and fault handling support
for Intel-IOMMU hardware.

Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
---
 Documentation/Intel-IOMMU.txt |   17 +++
 arch/x86_64/kernel/io_apic.c  |   59 ++++++++++++
 drivers/pci/intel-iommu.c     |  194 ++++++++++++++++++++++++++++++++++++++++++
 include/linux/dmar.h          |   12 ++
 4 files changed, 281 insertions(+), 1 deletion(-)

Index: linux-2.6.22-rc3/Documentation/Intel-IOMMU.txt
===================================================================
--- linux-2.6.22-rc3.orig/Documentation/Intel-IOMMU.txt	2007-06-04 12:40:29.000000000 -0700
+++ linux-2.6.22-rc3/Documentation/Intel-IOMMU.txt	2007-06-04 12:40:58.000000000 -0700
@@ -63,6 +63,15 @@
 The same is true for peer to peer transactions. Hence we reserve the
 address from PCI MMIO ranges so they are not allocated for IOVA addresses.
 
+
+Fault reporting
+---------------
+When errors are reported, the DMA engine signals via an interrupt. The fault
+reason and the device that caused the fault are printed on the console.
+
+See below for sample.
+
+
 Boot Message Sample
 -------------------
 
@@ -85,6 +94,14 @@
 
 PCI-DMA: Using DMAR IOMMU
 
+Fault reporting
+---------------
+
+DMAR:[DMA Write] Request device [00:02.0] fault addr 6df084000
+DMAR:[fault reason 05] PTE Write access is not set
+DMAR:[DMA Write] Request device [00:02.0] fault addr 6df084000
+DMAR:[fault reason 05] PTE Write access is not set
+
 TBD
 ----
 
Index: linux-2.6.22-rc3/arch/x86_64/kernel/io_apic.c
===================================================================
--- linux-2.6.22-rc3.orig/arch/x86_64/kernel/io_apic.c	2007-06-04 12:19:13.000000000 -0700
+++ linux-2.6.22-rc3/arch/x86_64/kernel/io_apic.c	2007-06-04 12:40:58.000000000 -0700
@@ -31,6 +31,7 @@
 #include <linux/sysdev.h>
 #include <linux/msi.h>
 #include <linux/htirq.h>
+#include <linux/dmar.h>
 #ifdef CONFIG_ACPI
 #include <acpi/acpi_bus.h>
 #endif
@@ -1972,8 +1973,64 @@
 	destroy_irq(irq);
 }
 
-#endif /* CONFIG_PCI_MSI */
+#ifdef CONFIG_DMAR
+#ifdef CONFIG_SMP
+static void dmar_msi_set_affinity(unsigned int irq, cpumask_t mask)
+{
+	struct irq_cfg *cfg = irq_cfg + irq;
+	struct msi_msg msg;
+	unsigned int dest;
+	cpumask_t tmp;
+
+	cpus_and(tmp, mask, cpu_online_map);
+	if (cpus_empty(tmp))
+		return;
+
+	if (assign_irq_vector(irq, mask))
+		return;
+
+	cpus_and(tmp, cfg->domain, mask);
+	dest = cpu_mask_to_apicid(tmp);
+
+	dmar_msi_read(irq, &msg);
+
+	msg.data &= ~MSI_DATA_VECTOR_MASK;
+	msg.data |= MSI_DATA_VECTOR(cfg->vector);
+	msg.address_lo &= ~MSI_ADDR_DEST_ID_MASK;
+	msg.address_lo |= MSI_ADDR_DEST_ID(dest);
+
+	dmar_msi_write(irq, &msg);
+	irq_desc[irq].affinity = mask;
+}
+#endif /* CONFIG_SMP */
+
+struct irq_chip dmar_msi_type = {
+	.name = "DMAR_MSI",
+	.unmask = dmar_msi_unmask,
+	.mask = dmar_msi_mask,
+	.ack = ack_apic_edge,
+#ifdef CONFIG_SMP
+	.set_affinity = dmar_msi_set_affinity,
+#endif
+	.retrigger = ioapic_retrigger_irq,
+};
+
+int arch_setup_dmar_msi(unsigned int irq)
+{
+	int ret;
+	struct msi_msg msg;
+
+	ret = msi_compose_msg(NULL, irq, &msg);
+	if (ret < 0)
+		return ret;
+	dmar_msi_write(irq, &msg);
+	set_irq_chip_and_handler_name(irq, &dmar_msi_type, handle_edge_irq,
+		"edge");
+	return 0;
+}
+#endif
 
+#endif /* CONFIG_PCI_MSI */
 /*
  * Hypertransport interrupt support
  */
Index: linux-2.6.22-rc3/drivers/pci/intel-iommu.c
===================================================================
--- linux-2.6.22-rc3.orig/drivers/pci/intel-iommu.c	2007-06-04 12:40:41.000000000 -0700
+++ linux-2.6.22-rc3/drivers/pci/intel-iommu.c	2007-06-04 12:40:58.000000000 -0700
@@ -684,6 +684,196 @@
 	return 0;
 }
 
+/* iommu interrupt handling. Most stuff are MSI-like. */
+
+static char *fault_reason_strings[] =
+{
+	"Software",
+	"Present bit in root entry is clear",
+	"Present bit in context entry is clear",
+	"Invalid context entry",
+	"Access beyond MGAW",
+	"PTE Write access is not set",
+	"PTE Read access is not set",
+	"Next page table ptr is invalid",
+	"Root table address invalid",
+	"Context table ptr is invalid",
+	"non-zero reserved fields in RTP",
+	"non-zero reserved fields in CTP",
+	"non-zero reserved fields in PTE",
+	"Unknown"
+};
+#define MAX_FAULT_REASON_IDX 	ARRAY_SIZE(fault_reason_strings)
+
+char *dmar_get_fault_reason(u8 fault_reason)
+{
+	if (fault_reason >= MAX_FAULT_REASON_IDX)
+		return fault_reason_strings[MAX_FAULT_REASON_IDX - 1];
+	else
+		return fault_reason_strings[fault_reason];
+}
+
+void dmar_msi_unmask(unsigned int irq)
+{
+	struct intel_iommu *iommu = get_irq_data(irq);
+	unsigned long flag;
+
+	/* unmask it */
+	spin_lock_irqsave(&iommu->register_lock, flag);
+	dmar_writel(iommu->reg, DMAR_FECTL_REG, 0);
+	/* Read a reg to force flush the post write */
+	dmar_readl(iommu->reg, DMAR_FECTL_REG);
+	spin_unlock_irqrestore(&iommu->register_lock, flag);
+}
+
+void dmar_msi_mask(unsigned int irq)
+{
+	unsigned long flag;
+	struct intel_iommu *iommu = get_irq_data(irq);
+
+	/* mask it */
+	spin_lock_irqsave(&iommu->register_lock, flag);
+	dmar_writel(iommu->reg, DMAR_FECTL_REG, DMA_FECTL_IM);
+	/* Read a reg to force flush the post write */
+	dmar_readl(iommu->reg, DMAR_FECTL_REG);
+	spin_unlock_irqrestore(&iommu->register_lock, flag);
+}
+
+void dmar_msi_write(int irq, struct msi_msg *msg)
+{
+	struct intel_iommu *iommu = get_irq_data(irq);
+	unsigned long flag;
+
+	spin_lock_irqsave(&iommu->register_lock, flag);
+	dmar_writel(iommu->reg, DMAR_FEDATA_REG, msg->data);
+	dmar_writel(iommu->reg, DMAR_FEADDR_REG, msg->address_lo);
+	dmar_writel(iommu->reg, DMAR_FEUADDR_REG, msg->address_hi);
+	spin_unlock_irqrestore(&iommu->register_lock, flag);
+}
+
+void dmar_msi_read(int irq, struct msi_msg *msg)
+{
+	struct intel_iommu *iommu = get_irq_data(irq);
+	unsigned long flag;
+
+	spin_lock_irqsave(&iommu->register_lock, flag);
+	msg->data = dmar_readl(iommu->reg, DMAR_FEDATA_REG);
+	msg->address_lo = dmar_readl(iommu->reg, DMAR_FEADDR_REG);
+	msg->address_hi = dmar_readl(iommu->reg, DMAR_FEUADDR_REG);
+	spin_unlock_irqrestore(&iommu->register_lock, flag);
+}
+
+static int iommu_page_fault_do_one(struct intel_iommu *iommu, int type,
+		u8 fault_reason, u16 source_id, u64 addr)
+{
+	char *reason;
+
+	reason = dmar_get_fault_reason(fault_reason);
+
+	printk(KERN_ERR
+		"DMAR:[%s] Request device [%02x:%02x.%d] "
+		"fault addr %llx \n"
+		"DMAR:[fault reason %02d] %s\n",
+		(type ? "DMA Read" : "DMA Write"),
+		(source_id >> 8), PCI_SLOT(source_id & 0xFF),
+		PCI_FUNC(source_id & 0xFF), addr, fault_reason, reason);
+	return 0;
+}
+
+#define PRIMARY_FAULT_REG_LEN (16)
+static irqreturn_t iommu_page_fault(int irq, void *dev_id)
+{
+	struct intel_iommu *iommu = dev_id;
+	int reg, fault_index;
+	u32 fault_status;
+	unsigned long flag;
+
+	spin_lock_irqsave(&iommu->register_lock, flag);
+	fault_status = dmar_readl(iommu->reg, DMAR_FSTS_REG);
+
+	/* TBD: ignore advanced fault log currently */
+	if (!(fault_status & DMA_FSTS_PPF))
+		goto clear_overflow;
+
+	fault_index = dma_fsts_fault_record_index(fault_status);
+	reg = cap_fault_reg_offset(iommu->cap);
+	while (1) {
+		u8 fault_reason;
+		u16 source_id;
+		u64 guest_addr;
+		int type;
+		u32 data;
+
+		/* highest 32 bits */
+		data = dmar_readl(iommu->reg, reg +
+				fault_index * PRIMARY_FAULT_REG_LEN + 12);
+		if (!(data & DMA_FRCD_F))
+			break;
+
+		fault_reason = dma_frcd_fault_reason(data);
+		type = dma_frcd_type(data);
+
+		data = dmar_readl(iommu->reg, reg +
+				fault_index * PRIMARY_FAULT_REG_LEN + 8);
+		source_id = dma_frcd_source_id(data);
+
+		guest_addr = dmar_readq(iommu->reg, reg +
+				fault_index * PRIMARY_FAULT_REG_LEN);
+		guest_addr = dma_frcd_page_addr(guest_addr);
+		/* clear the fault */
+		dmar_writel(iommu->reg, reg +
+			fault_index * PRIMARY_FAULT_REG_LEN + 12, DMA_FRCD_F);
+
+		spin_unlock_irqrestore(&iommu->register_lock, flag);
+
+		iommu_page_fault_do_one(iommu, type, fault_reason,
+				source_id, guest_addr);
+
+		fault_index++;
+		if (fault_index >= cap_num_fault_regs(iommu->cap))
+			fault_index = 0;
+		spin_lock_irqsave(&iommu->register_lock, flag);
+	}
+clear_overflow:
+	/* clear primary fault overflow */
+	fault_status = dmar_readl(iommu->reg, DMAR_FSTS_REG);
+	if (fault_status & DMA_FSTS_PFO)
+		dmar_writel(iommu->reg, DMAR_FSTS_REG, DMA_FSTS_PFO);
+
+	spin_unlock_irqrestore(&iommu->register_lock, flag);
+	return IRQ_HANDLED;
+}
+
+int dmar_set_interrupt(struct intel_iommu *iommu)
+{
+	int irq, ret;
+
+	irq = create_irq();
+	if (!irq) {
+		printk(KERN_ERR "IOMMU: no free vectors\n");
+		return -EINVAL;
+	}
+
+	set_irq_data(irq, iommu);
+	iommu->irq = irq;
+
+	ret = arch_setup_dmar_msi(irq);
+	if (ret) {
+		set_irq_data(irq, NULL);
+		iommu->irq = 0;
+		destroy_irq(irq);
+		return 0;
+	}
+
+	/* Force fault register is cleared */
+	iommu_page_fault(irq, iommu);
+
+	ret = request_irq(irq, iommu_page_fault, 0, iommu->name, iommu);
+	if (ret)
+		printk(KERN_ERR "IOMMU: can't request irq\n");
+	return ret;
+}
+
 static int iommu_init_domains(struct intel_iommu *iommu)
 {
 	unsigned long ndomains;
@@ -1436,6 +1626,10 @@
 
 		iommu_flush_write_buffer(iommu);
 
+		ret = dmar_set_interrupt(iommu);
+		if (ret)
+			goto error;
+
 		iommu_set_root_entry(iommu);
 
 		iommu_flush_context_global(iommu, 0);
Index: linux-2.6.22-rc3/include/linux/dmar.h
===================================================================
--- linux-2.6.22-rc3.orig/include/linux/dmar.h	2007-06-04 12:40:29.000000000 -0700
+++ linux-2.6.22-rc3/include/linux/dmar.h	2007-06-04 12:40:58.000000000 -0700
@@ -28,6 +28,18 @@
 
 struct intel_iommu;
 
+extern char *dmar_get_fault_reason(u8 fault_reason);
+
+/* Can't use the common MSI interrupt functions
+ * since DMAR is not a pci device
+ */
+extern void dmar_msi_unmask(unsigned int irq);
+extern void dmar_msi_mask(unsigned int irq);
+extern void dmar_msi_read(int irq, struct msi_msg *msg);
+extern void dmar_msi_write(int irq, struct msi_msg *msg);
+extern int dmar_set_interrupt(struct intel_iommu *iommu);
+extern int arch_setup_dmar_msi(unsigned int irq);
+
 /* Intel IOMMU detection and initialization functions */
 extern void detect_intel_iommu(void);
 extern int intel_iommu_init(void);

-- 

^ permalink raw reply	[flat|nested] 64+ messages in thread

* [Intel-IOMMU 09/10] Iommu Gfx workaround
  2007-06-06 18:56 [Intel-IOMMU 00/10] Intel IOMMU Support anil.s.keshavamurthy
                   ` (7 preceding siblings ...)
  2007-06-06 18:57 ` [Intel-IOMMU 08/10] DMAR fault handling support anil.s.keshavamurthy
@ 2007-06-06 18:57 ` anil.s.keshavamurthy
  2007-06-08  0:01   ` Andrew Morton
  2007-06-06 18:57 ` [Intel-IOMMU 10/10] Iommu floppy workaround anil.s.keshavamurthy
  9 siblings, 1 reply; 64+ messages in thread
From: anil.s.keshavamurthy @ 2007-06-06 18:57 UTC (permalink / raw)
  To: linux-kernel
  Cc: akpm, ak, gregkh, muli, asit.k.mallick, suresh.b.siddha,
	anil.s.keshavamurthy, arjan, ashok.raj, shaohua.li, davem

[-- Attachment #1: gfx_wrkaround.patch --]
[-- Type: text/plain, Size: 4348 bytes --]

Once all the open source gfx drivers are fixed to use the DMA API,
we can yank this config option out.

Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
---
 Documentation/Intel-IOMMU.txt |    5 +++++
 arch/x86_64/Kconfig           |   11 +++++++++++
 arch/x86_64/kernel/e820.c     |   19 +++++++++++++++++++
 drivers/pci/intel-iommu.c     |   32 ++++++++++++++++++++++++++++++++
 4 files changed, 67 insertions(+)

Index: linux-2.6.22-rc3/Documentation/Intel-IOMMU.txt
===================================================================
--- linux-2.6.22-rc3.orig/Documentation/Intel-IOMMU.txt	2007-06-04 12:40:58.000000000 -0700
+++ linux-2.6.22-rc3/Documentation/Intel-IOMMU.txt	2007-06-04 12:41:11.000000000 -0700
@@ -57,6 +57,11 @@
 If you encounter issues with graphics devices, you can try adding
 option intel_iommu=igfx_off to turn off the integrated graphics engine.
 
+If it happens to be a PCI device included in the INCLUDE_ALL engine,
+then try enabling CONFIG_DMAR_GFX_WA to set up a 1-1 map. We hear
+graphics drivers may be in the process of converting to the DMA API
+in the near future, at which point this option can be yanked out.
+
 Some exceptions to IOVA
 -----------------------
 Interrupt ranges are not address translated, (0xfee00000 - 0xfeefffff).
Index: linux-2.6.22-rc3/arch/x86_64/Kconfig
===================================================================
--- linux-2.6.22-rc3.orig/arch/x86_64/Kconfig	2007-06-04 12:35:19.000000000 -0700
+++ linux-2.6.22-rc3/arch/x86_64/Kconfig	2007-06-04 12:41:11.000000000 -0700
@@ -727,6 +727,17 @@
 	  and includes pci device scope covered by these DMA
 	  remapping device.
 
+config DMAR_GFX_WA
+	bool "Support for Graphics workaround"
+	depends on DMAR
+	default y
+	help
+	 Current graphics drivers tend to use physical addresses
+	 for DMA and avoid using the DMA API. Setting this config
+	 option permits the IOMMU driver to set up a unity map for
+	 all the OS visible memory. Hence the driver can continue
+	 to use physical addresses for DMA.
+
 source "drivers/pci/pcie/Kconfig"
 
 source "drivers/pci/Kconfig"
Index: linux-2.6.22-rc3/arch/x86_64/kernel/e820.c
===================================================================
--- linux-2.6.22-rc3.orig/arch/x86_64/kernel/e820.c	2007-06-04 12:19:13.000000000 -0700
+++ linux-2.6.22-rc3/arch/x86_64/kernel/e820.c	2007-06-04 12:41:11.000000000 -0700
@@ -717,3 +717,22 @@
 	printk(KERN_INFO "Allocating PCI resources starting at %lx (gap: %lx:%lx)\n",
 		pci_mem_start, gapstart, gapsize);
 }
+
+int __init arch_get_ram_range(int slot, u64 *addr, u64 *size)
+{
+	int i;
+
+	if (slot < 0 || slot >= e820.nr_map)
+		return -1;
+	for (i = slot; i < e820.nr_map; i++) {
+		if(e820.map[i].type != E820_RAM)
+			continue;
+		break;
+	}
+	if (i == e820.nr_map || e820.map[i].addr > (max_pfn << PAGE_SHIFT))
+		return -1;
+	*addr = e820.map[i].addr;
+	*size = min_t(u64, e820.map[i].size + e820.map[i].addr,
+		max_pfn << PAGE_SHIFT) - *addr;
+	return i + 1;
+}
Index: linux-2.6.22-rc3/drivers/pci/intel-iommu.c
===================================================================
--- linux-2.6.22-rc3.orig/drivers/pci/intel-iommu.c	2007-06-04 12:40:58.000000000 -0700
+++ linux-2.6.22-rc3/drivers/pci/intel-iommu.c	2007-06-04 12:41:11.000000000 -0700
@@ -1556,6 +1556,34 @@
 		rmrr->end_address + 1);
 }
 
+#ifdef CONFIG_DMAR_GFX_WA
+extern int arch_get_ram_range(int slot, u64 *addr, u64 *size);
+static void __init iommu_prepare_gfx_mapping(void)
+{
+	struct pci_dev *pdev = NULL;
+	u64 base, size;
+	int slot;
+	int ret;
+
+	for_each_pci_dev(pdev) {
+		if (pdev->sysdata == DUMMY_DEVICE_DOMAIN_INFO ||
+				!IS_GFX_DEVICE(pdev))
+			continue;
+		printk(KERN_INFO "IOMMU: gfx device %s 1-1 mapping\n",
+			pci_name(pdev));
+		slot = 0;
+		while ((slot = arch_get_ram_range(slot, &base, &size)) >= 0) {
+			ret = iommu_prepare_identity_map(pdev, base, base + size);
+			if (ret)
+				goto error;
+		}
+		continue;
+error:
+		printk(KERN_ERR "IOMMU: mapping reserved region failed\n");
+	}
+}
+#endif
+
 int __init init_dmars(void)
 {
 	struct dmar_drhd_unit *drhd;
@@ -1611,6 +1639,10 @@
 			printk(KERN_ERR "IOMMU: mapping reserved region failed\n");
 	end_for_each_rmrr_device(rmrr, pdev)
 
+#ifdef CONFIG_DMAR_GFX_WA
+	iommu_prepare_gfx_mapping();
+#endif
+
 	/*
 	 * for each drhd
 	 *   enable fault log

-- 

^ permalink raw reply	[flat|nested] 64+ messages in thread

* [Intel-IOMMU 10/10] Iommu floppy workaround
  2007-06-06 18:56 [Intel-IOMMU 00/10] Intel IOMMU Support anil.s.keshavamurthy
                   ` (8 preceding siblings ...)
  2007-06-06 18:57 ` [Intel-IOMMU 09/10] Iommu Gfx workaround anil.s.keshavamurthy
@ 2007-06-06 18:57 ` anil.s.keshavamurthy
  9 siblings, 0 replies; 64+ messages in thread
From: anil.s.keshavamurthy @ 2007-06-06 18:57 UTC (permalink / raw)
  To: linux-kernel
  Cc: akpm, ak, gregkh, muli, asit.k.mallick, suresh.b.siddha,
	anil.s.keshavamurthy, arjan, ashok.raj, shaohua.li, davem

[-- Attachment #1: floppy_disk_wrkaround.patch --]
[-- Type: text/plain, Size: 2143 bytes --]

	This config option (DMAR_FLPY_WA) sets up a 1:1 mapping for the
floppy device, so that the floppy device, which does not use
the DMA API, will continue to work.

Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
---
 arch/x86_64/Kconfig       |   10 ++++++++++
 drivers/pci/intel-iommu.c |   23 +++++++++++++++++++++++
 2 files changed, 33 insertions(+)

Index: linux-2.6.22-rc3/arch/x86_64/Kconfig
===================================================================
--- linux-2.6.22-rc3.orig/arch/x86_64/Kconfig	2007-06-05 12:58:08.000000000 -0700
+++ linux-2.6.22-rc3/arch/x86_64/Kconfig	2007-06-05 12:58:08.000000000 -0700
@@ -738,6 +738,16 @@
 	 all the OS visible memory. Hence the driver can continue
 	 to use physical addresses for DMA.
 
+config DMAR_FLPY_WA
+	bool "Support for Floppy disk workaround"
+	depends on DMAR
+	default y
+	help
+	 Floppy disk drivers are known to bypass DMA API calls,
+	 thereby failing to work when the IOMMU is enabled. This
+	 workaround will set up a 1 to 1 mapping for the first
+	 16M to make the floppy (an ISA device) work.
+
 source "drivers/pci/pcie/Kconfig"
 
 source "drivers/pci/Kconfig"
Index: linux-2.6.22-rc3/drivers/pci/intel-iommu.c
===================================================================
--- linux-2.6.22-rc3.orig/drivers/pci/intel-iommu.c	2007-06-05 12:58:08.000000000 -0700
+++ linux-2.6.22-rc3/drivers/pci/intel-iommu.c	2007-06-05 12:58:08.000000000 -0700
@@ -1584,6 +1584,25 @@
 }
 #endif
 
+#ifdef CONFIG_DMAR_FLPY_WA
+static void iommu_prepare_isa(void)
+{
+	struct pci_dev *pdev = NULL;
+	int ret;
+
+	pdev = pci_get_class (PCI_CLASS_BRIDGE_ISA << 8, NULL);
+	if (!pdev)
+		return;
+
+	printk (KERN_INFO "IOMMU: Prepare 0-16M unity mapping for LPC\n");
+	ret = iommu_prepare_identity_map(pdev, 0, 16*1024*1024);
+
+	if (ret)
+		printk ("IOMMU: Failed to create 0-64M identity map, Floppy might not work\n");
+
+}
+#endif
+
 int __init init_dmars(void)
 {
 	struct dmar_drhd_unit *drhd;
@@ -1643,6 +1662,10 @@
 	iommu_prepare_gfx_mapping();
 #endif
 
+#ifdef CONFIG_DMAR_FLPY_WA
+	iommu_prepare_isa();
+#endif
+
 	/*
 	 * for each drhd
 	 *   enable fault log

-- 

^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: [Intel-IOMMU 02/10] Library routine for pre-allocat pool handling
  2007-06-06 18:57 ` [Intel-IOMMU 02/10] Library routine for pre-allocat pool handling anil.s.keshavamurthy
@ 2007-06-07 23:27   ` Andrew Morton
  2007-06-08 18:21     ` Keshavamurthy, Anil S
  0 siblings, 1 reply; 64+ messages in thread
From: Andrew Morton @ 2007-06-07 23:27 UTC (permalink / raw)
  To: anil.s.keshavamurthy
  Cc: linux-kernel, ak, gregkh, muli, asit.k.mallick, suresh.b.siddha,
	arjan, ashok.raj, shaohua.li, davem

On Wed, 06 Jun 2007 11:57:00 -0700
anil.s.keshavamurthy@intel.com wrote:

> Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>

That was a terse changelog.

Obvious question: how does this differ from mempools, and would it be
better to fill in any gaps in mempool functionality instead of
implementing something similar-looking?

The changelog very much should describe all this, as well as explaining
what the dynamic behaviour of this new thing is, and what applications are
envisaged, what problems it solves, etc, etc.


> --- /dev/null	1970-01-01 00:00:00.000000000 +0000
> +++ linux-2.6.22-rc3/lib/respool.c	2007-06-06 11:34:46.000000000 -0700

There are a number of coding-style glitches in here, but
scripts/checkpatch.pl catches most of them.  Please run it, and fix.

> @@ -0,0 +1,222 @@
> +/*
> + * respool.c - library routines for handling generic pre-allocated pool of objects
> + *
> + * Copyright (c) 2006, Intel Corporation.
> + *
> + * This file is released under the GPLv2.
> + *
> + * Copyright (C) 2006 Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
> + */
> +
> +#include <linux/respool.h>
> +
> +/**
> + * get_resource_pool_obj - gets an object from the pool
> + * @ppool - resource pool in question
> + * This function gets an object from the pool and
> + * if the pool count drops below min_count, this
> + * function schedules work to grow the pool. If
> + * no elements are fount in the pool then this function
> + * tries to get memory from kernel.
> + */
> +void * get_resource_pool_obj(struct resource_pool *ppool)
> +{
> +	unsigned long	flags;
> +	struct list_head *plist;
> +	bool queue_work = 0;
> +
> +	spin_lock_irqsave(&ppool->pool_lock, flags);
> +	if (!list_empty(&ppool->pool_head)) {
> +		plist = ppool->pool_head.next;
> +		list_del(plist);
> +		ppool->curr_count--;
> +	} else {
> +		/*Making sure that curr_count is 0 when list is empty */
> +		plist = NULL;
> +		BUG_ON(ppool->curr_count != 0);
> +	}
> +
> +	/* Check if pool needs to grow */
> +	if (ppool->curr_count <= ppool->min_count)
> +		queue_work = 1;
> +	spin_unlock_irqrestore(&ppool->pool_lock, flags);
> +
> +	if (queue_work)
> +		schedule_work(&ppool->work); /* queue work to grow the pool */
> +
> +
> +	if (plist) {
> +		memset(plist, 0, ppool->alloc_size); /* Zero out memory */
> +		return plist;
> +	}
> +
> +	/* Out of luck, try to get memory from kernel */
> +	plist = (struct list_head *)ppool->alloc_mem(ppool->alloc_size,
> +			GFP_ATOMIC);
> +
> +	return plist;
> +}

A function like this should take a gfp_t from the caller, and pass it on.

> +/**
> + * put_resource_pool_obj - puts an object back to the pool
> + * @vaddr - object's address
> + * @ppool - resource pool in question.
> + * This function puts an object back to the pool.
> + */
> +void put_resource_pool_obj(void * vaddr, struct resource_pool *ppool)
> +{
> +	unsigned long	flags;
> +	struct list_head *plist = (struct list_head *)vaddr;
> +	bool queue_work = 0;
> +
> +	BUG_ON(!vaddr);
> +	BUG_ON(!ppool);
> +
> +	spin_lock_irqsave(&ppool->pool_lock, flags);
> +	list_add(plist, &ppool->pool_head);
> +	ppool->curr_count++;
> +	if (ppool->curr_count > (ppool->min_count +
> +		ppool->grow_count * 2))
> +		queue_work = 1;

Some of the indenting is a bit funny-looking in here.

> +	spin_unlock_irqrestore(&ppool->pool_lock, flags);
> +
> +	if (queue_work)
> +		schedule_work(&ppool->work); /* queue work to shrink the pool */
> +}
> +
> +void
> +__grow_resource_pool(struct resource_pool *ppool,
> +	unsigned int grow_count)
> +{
> +	unsigned long	flags;
> +	struct list_head *plist;
> +
> +	while(grow_count) {
> +		plist = (struct list_head *)ppool->alloc_mem(ppool->alloc_size,
> +			GFP_KERNEL);

resource_pool.alloc_mem() already returns void *, so there is never a need
to cast its return value.

> +		if (!plist)
> +			break;
> +
> +		/* Add the element to the list */
> +		spin_lock_irqsave(&ppool->pool_lock, flags);
> +		list_add(plist, &ppool->pool_head);
> +		ppool->curr_count++;
> +		spin_unlock_irqrestore(&ppool->pool_lock, flags);
> +		grow_count--;
> +	}
> +}
> +


^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: [Intel-IOMMU 05/10] IOVA allocation and management routines
  2007-06-06 18:57 ` [Intel-IOMMU 05/10] IOVA allocation and management routines anil.s.keshavamurthy
@ 2007-06-07 23:34   ` Andrew Morton
  2007-06-08 18:25     ` Keshavamurthy, Anil S
  0 siblings, 1 reply; 64+ messages in thread
From: Andrew Morton @ 2007-06-07 23:34 UTC (permalink / raw)
  To: anil.s.keshavamurthy
  Cc: linux-kernel, ak, gregkh, muli, asit.k.mallick, suresh.b.siddha,
	arjan, ashok.raj, shaohua.li, davem

On Wed, 06 Jun 2007 11:57:03 -0700
anil.s.keshavamurthy@intel.com wrote:

> 	This code implements a generic IOVA allocation and 
> management. As per Dave's suggestion we are now allocating
> IO virtual address from Higher DMA limit address rather
> than lower end address and this eliminated the need to preserve
> the IO virtual address for multiple devices sharing the same
> domain virtual address.
> 
> Also this code uses red black trees to store the allocated and
> reserved iova nodes. This showed a good performance improvements
> over previous linear linked list.
> 
> ...
>
> +
> +/**
> + * alloc_iova - allocates an iova
> + * @iovad - iova domain in question
> + * @size - size of page frames to allocate
> + * @limit_pfn - max limit address
> + * This function allocates an iova in the range limit_pfn to IOVA_START_PFN
> + * looking from limit_pfn instead from IOVA_START_PFN.
> + */
> +
> +struct iova *
> +alloc_iova(struct iova_domain *iovad, unsigned long size, unsigned long limit_pfn)

Generally we omit the blank line between the end-of-comment and the
function definition.

> +{
> +	unsigned long flags,flags1;
> +	struct iova *new_iova;
> +	int ret;
> +
> +	new_iova = alloc_iova_mem();
> +	if (!new_iova)
> +		return NULL;
> +
> +	spin_lock_irqsave(&iovad->iova_alloc_lock, flags1);
> +	ret = __alloc_iova_range(iovad, size, limit_pfn, new_iova);
> +
> +	if (ret) {
> +		spin_unlock_irqrestore(&iovad->iova_alloc_lock, flags1);
> +		free_iova_mem(new_iova);
> +		return NULL;
> +	}
> +
> +	/* Insert the new_iova into domain rbtree by holding writer lock */
> +	spin_lock_irqsave(&iovad->iova_rbtree_lock, flags);
> +	iova_insert_rbtree(&iovad->rbroot, new_iova);
> +	__cached_rbnode_insert_update(iovad, limit_pfn, new_iova);
> +	spin_unlock_irqrestore(&iovad->iova_rbtree_lock, flags);
> +
> +	spin_unlock_irqrestore(&iovad->iova_alloc_lock, flags1);

spin_unlock_irqrestore() within spin_unlock_irqrestore() is fairly
pointless.  You can just use spin_lock()/spin_unlock() for the innermost
pair.

> +	return new_iova;
> +}
>
> ...
>
> +__is_range_overlap(struct rb_node *node, unsigned long pfn_lo, unsigned long pfn_hi)
> +{
> +	struct iova * iova = container_of(node, struct iova, node);

run checkpatch.pl, please.

> +	if ((pfn_lo <= iova->pfn_hi) && (pfn_hi >= iova->pfn_lo))
> +		return 1;
> +	return 0;
> +}
> +
>
> ...
>
> +struct iova *
> +reserve_iova(struct iova_domain *iovad, unsigned long pfn_lo, unsigned long pfn_hi)
> +{
> +	struct rb_node *node;
> +	unsigned long flags, flags1;
> +	struct iova *iova;
> +	unsigned int overlap = 0;
> +
> +	spin_lock_irqsave(&iovad->iova_alloc_lock, flags);
> +	spin_lock_irqsave(&iovad->iova_rbtree_lock, flags1);
> +	for (node = rb_first(&iovad->rbroot); node; node = rb_next(node)) {
> +		if (__is_range_overlap(node, pfn_lo, pfn_hi)) {
> +			iova = container_of(node, struct iova, node);
> +			__adjust_overlap_range(iova, &pfn_lo, &pfn_hi);
> +			if ((pfn_lo >= iova->pfn_lo) &&
> +				(pfn_hi <= iova->pfn_hi))
> +				goto finish;
> +			overlap = 1;
> +
> +		} else if (overlap)
> +				break;
> +	}
> +
> +	/* We are here either becasue this is the first reserver node
> +	 * or need to insert remaining non overlap addr range
> +	 */
> +	iova = __insert_new_range(iovad, pfn_lo, pfn_hi);
> +finish:
> +
> +	spin_unlock_irqrestore(&iovad->iova_rbtree_lock, flags1);
> +	spin_unlock_irqrestore(&iovad->iova_alloc_lock, flags);

ditto

> +	return iova;
> +}
> +
> +/**
> + * copy_reserved_iova - copies the reserved between domains
> + * @from: - source doamin from where to copy
> + * @to: - destination domin where to copy
> + * This function copies reserved iova's from one doamin to
> + * other.
> + */
> +void
> +copy_reserved_iova(struct iova_domain *from, struct iova_domain *to)
> +{
> +	unsigned long flags, flags1;
> +	struct rb_node *node;
> +	spin_lock_irqsave(&from->iova_alloc_lock, flags);
> +	spin_lock_irqsave(&from->iova_rbtree_lock, flags1);

Add a blank line between the end-of-locals and the start-of-statements

> +	for (node = rb_first(&from->rbroot); node; node = rb_next(node)) {
> +		struct iova *iova = container_of(node, struct iova, node);
> +		struct iova *new_iova;
> +		new_iova = reserve_iova(to, iova->pfn_lo, iova->pfn_hi);
> +		if (!new_iova)
> +			printk(KERN_ERR "Reserve iova range %lx@%lx failed\n",
> +				iova->pfn_lo, iova->pfn_lo);
> +	}
> +	spin_unlock_irqrestore(&from->iova_rbtree_lock, flags1);
> +	spin_unlock_irqrestore(&from->iova_alloc_lock, flags);

ditto

> +}
> Index: linux-2.6.22-rc3/drivers/pci/iova.h
> ===================================================================
> --- /dev/null	1970-01-01 00:00:00.000000000 +0000
> +++ linux-2.6.22-rc3/drivers/pci/iova.h	2007-06-04 12:40:20.000000000 -0700
> @@ -0,0 +1,57 @@
> +/*
> + * Copyright (c) 2006, Intel Corporation.
> + *
> + * This file is released under the GPLv2.
> + *
> + * Copyright (C) 2006 Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
> + *
> + */
> +
> +#ifndef _IOVA_H_
> +#define _IOVA_H_
> +
> +#include <linux/types.h>
> +#include <linux/kernel.h>
> +#include <linux/rbtree.h>
> +#include <linux/dma-mapping.h>
> +
> +
> +#define PAGE_SHIFT_4K		(12)
> +#define PAGE_SIZE_4K		(1UL << PAGE_SHIFT_4K)
> +#define PAGE_MASK_4K		(((u64)-1) << PAGE_SHIFT_4K)
> +#define PAGE_ALIGN_4K(addr)	(((addr) + PAGE_SIZE_4K - 1) & PAGE_MASK_4K)

hm.  We can't use the architecture's PAGE_SHIFT and friends here?

> +#define IOVA_START_ADDR		(0x1000)
> +#define IOVA_START_PFN		(IOVA_START_ADDR >> PAGE_SHIFT_4K)
> +
> +#define IOVA_PFN(addr)		((addr) >> PAGE_SHIFT_4K)
> +#define DMA_32BIT_PFN	IOVA_PFN(DMA_32BIT_MASK)
> +#define DMA_64BIT_PFN	IOVA_PFN(DMA_64BIT_MASK)
> +
> +/* iova structure */
> +struct iova {
> +	struct rb_node	node;
> +	unsigned long	pfn_hi; /* IOMMU dish out addr hi */
> +	unsigned long	pfn_lo; /* IOMMU dish out addr lo */
> +};
> +
> +/* holds all the iova translations for a domain */
> +struct iova_domain {
> +	spinlock_t	iova_alloc_lock;/* Lock to protect iova  allocation */
> +	spinlock_t	iova_rbtree_lock; /* Lock to protect update of rbtree */
> +	struct rb_root	rbroot;		/* iova domain rbtree root */
> +	struct rb_node	*cached32_node; /* Save last alloced node to optimize alloc */
> +};
> +
> +struct iova *alloc_iova_mem(void);
> +void free_iova_mem(struct iova *iova);
> +void free_iova(struct iova_domain *iovad, unsigned long pfn);
> +void __free_iova(struct iova_domain *iovad, struct iova *iova);
> +struct iova * alloc_iova(struct iova_domain *iovad, unsigned long size, unsigned long limit_pfn);
> +struct iova * reserve_iova(struct iova_domain *iovad, unsigned long pfn_lo, unsigned long pfn_hi);
> +void copy_reserved_iova(struct iova_domain *from, struct iova_domain *to);
> +void init_iova_domain(struct iova_domain *iovad);
> +struct iova * find_iova(struct iova_domain *iovad, unsigned long pfn);
> +void put_iova_domain(struct iova_domain *iovad);
> +
> +#endif
> 
> -- 

^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: [Intel-IOMMU 06/10] Intel IOMMU driver
  2007-06-06 18:57 ` [Intel-IOMMU 06/10] Intel IOMMU driver anil.s.keshavamurthy
@ 2007-06-07 23:57   ` Andrew Morton
  2007-06-08 22:30     ` Christoph Lameter
  2007-06-13 20:20     ` Keshavamurthy, Anil S
  0 siblings, 2 replies; 64+ messages in thread
From: Andrew Morton @ 2007-06-07 23:57 UTC (permalink / raw)
  To: anil.s.keshavamurthy
  Cc: linux-kernel, ak, gregkh, muli, asit.k.mallick, suresh.b.siddha,
	arjan, ashok.raj, shaohua.li, davem

On Wed, 06 Jun 2007 11:57:04 -0700
anil.s.keshavamurthy@intel.com wrote:

> 	Actual intel IOMMU driver. Hardware spec can be found at:
> http://www.intel.com/technology/virtualization
> 
> This driver sets X86_64 'dma_ops', so hook into standard DMA APIs. In this way,
> PCI driver will get virtual DMA address. This change is transparent to PCI
> drivers.
> 
> ...
>  
> +#ifdef CONFIG_DMAR
> +	detect_intel_iommu();
> +#endif
> +
>  #ifdef CONFIG_SWIOTLB
>  	pci_swiotlb_init();
>  #endif
> @@ -314,6 +319,10 @@
>  	calgary_iommu_init();
>  #endif
>  
> +#ifdef CONFIG_DMAR
> +	intel_iommu_init();
> +#endif

We'd prefer that the header file have suitable #ifndef CONFIG_DMAR stubs,
so the ifdefs here become unneeded.
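
ie, something like this in dmar.h (illustrative only, not compiled):

#ifdef CONFIG_DMAR
extern void detect_intel_iommu(void);
extern int intel_iommu_init(void);
#else
static inline void detect_intel_iommu(void)
{
}
static inline int intel_iommu_init(void)
{
	return 0;
}
#endif

so the callers above can just call these unconditionally.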

> +/* context entry handling */
> +static struct context_entry * device_to_context_entry(struct intel_iommu *iommu,
> +		u8 bus, u8 devfn)
> +{
> +	struct root_entry *root;
> +	struct context_entry *context;
> +	unsigned long phy_addr;
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&iommu->lock, flags);
> +	root = &iommu->root_entry[bus];
> +	if (!(context = get_context_addr_from_root(*root))) {
> +		context = (struct context_entry *)alloc_pgtable_page();
> +		if (!context) {
> +			spin_unlock_irqrestore(&iommu->lock, flags);
> +			return NULL;
> +		}
> +		__iommu_flush_cache(iommu, (void *)context, PAGE_SIZE_4K);
> +		phy_addr = virt_to_phys((void *)context);
> +		set_root_value(*root, phy_addr);
> +		set_root_present(*root);
> +		__iommu_flush_cache(iommu, root, sizeof(*root));
> +	}
> +	spin_unlock_irqrestore(&iommu->lock, flags);
> +	return &context[devfn];
> +}

checkpatch.pl has lots of fun with this patch.

> +/* page table handling */
> +#define LEVEL_STRIDE		(9)
> +#define LEVEL_MASK		(((u64)1 << LEVEL_STRIDE) - 1)
> +#define agaw_to_level(val) ((val) + 2)
> +#define agaw_to_width(val) (30 + val * LEVEL_STRIDE)
> +#define width_to_agaw(w)  ((w - 30)/LEVEL_STRIDE)
> +#define level_to_offset_bits(l) (12 + (l - 1) * LEVEL_STRIDE)
> +#define address_level_offset(addr, level) \
> +	((addr >> level_to_offset_bits(level)) & LEVEL_MASK)
> +#define level_mask(l) (((u64)(-1)) << level_to_offset_bits(l))
> +#define level_size(l) ((u64)1 << level_to_offset_bits(l))
> +#define align_to_level(addr, l) ((addr + level_size(l) - 1) & level_mask(l))

static inlines are better than macros - please consider.

If you're going to stick with macros here then you'll find that many of the
above macro's arguments are insufficiently parenthesised.
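
eg (untested), the first couple could become

static inline int agaw_to_level(int agaw)
{
	return agaw + 2;
}

static inline int agaw_to_width(int agaw)
{
	return 30 + agaw * LEVEL_STRIDE;
}

which gets you type checking and makes the parenthesisation problem
go away for free.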

> +#define IOMMU_WAIT_OP(iommu, offset, op, cond, sts) \
> +{\
> +	unsigned long start_time = jiffies;\
> +	while (1) {\
> +		sts = op (iommu->reg, offset);\
> +		if (cond)\
> +			break;\
> +		if (time_after(jiffies, start_time + DMAR_OPERATION_TIMEOUT))\
> +			panic("DMAR hardware is malfunctional, please disable IOMMU\n");\
> +		cpu_relax();\
> +	}\
> +}

wow, harsh treatment.

"malfunctioning" might parse better here.

> +static int inline get_alignment(u64 base, unsigned int size)
> +{
> +	int t = 0;
> +	u64 end;
> +
> +	end = base + size - 1;
> +	while (base != end) {
> +		t++;
> +		base >>= 1;
> +		end >>= 1;
> +	}
> +	return t;
> +}

What's this (too large to inline) function doing?  I suspect we might have
helper functions which already do it...  If not, perhaps we should.
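
afaict the loop just finds the most significant bit in which base and
(base + size - 1) differ, so something like this (untested, assuming
fls64() from linux/bitops.h is usable here) would do the same job:

static int get_alignment(u64 base, unsigned int size)
{
	return fls64(base ^ (base + size - 1));
}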


> +static int inline iommu_flush_iotlb_psi(struct intel_iommu *iommu, u16 did,
> +	u64 addr, unsigned int pages, int non_present_entry_flush)
> +{
> +	unsigned int align;
> +
> +	BUG_ON(addr & (~PAGE_MASK_4K));
> +	BUG_ON(pages == 0);
> +
> +	/* Fallback to domain selective flush if no PSI support */
> +	if (!cap_pgsel_inv(iommu->cap))
> +		return iommu_flush_iotlb_dsi(iommu, did,
> +			non_present_entry_flush);
> +
> +	/*
> +	 * PSI requires page size is 2 ^ x, and the base address is naturally
> +	 * aligned to the size
> +	 */
> +	align = get_alignment(addr >> PAGE_SHIFT_4K, pages);
> +	/* Fallback to domain selective flush if size is too big */
> +	if (align > cap_max_amask_val(iommu->cap))
> +		return iommu_flush_iotlb_dsi(iommu, did,
> +			non_present_entry_flush);
> +
> +	addr >>= PAGE_SHIFT_4K + align;
> +	addr <<= PAGE_SHIFT_4K + align;
> +
> +	return __iommu_flush_iotlb(iommu, did, addr, align,
> +		DMA_TLB_PSI_FLUSH, non_present_entry_flush);
> +}

too large for inlining.

> +static int iommu_enable_translation(struct intel_iommu *iommu)
> +{
> +	u32 sts;
> +	unsigned long flag;

we conventionally use "flags" for this.

> +	spin_lock_irqsave(&iommu->register_lock, flag);
> +	dmar_writel(iommu->reg, DMAR_GCMD_REG, iommu->gcmd|DMA_GCMD_TE);
> +
> +	/* Make sure hardware complete it */
> +	IOMMU_WAIT_OP(iommu, DMAR_GSTS_REG, dmar_readl, (sts & DMA_GSTS_TES), sts);
> +
> +	iommu->gcmd |= DMA_GCMD_TE;
> +	spin_unlock_irqrestore(&iommu->register_lock, flag);
> +	return 0;
> +}
>
> ...
>
>
> +#define aligned_size(host_addr, size) \
> +	PAGE_ALIGN_4K((host_addr & (~PAGE_MASK_4K)) + size)

insufficiently parenthesized.  Consider using a static inline.

> +struct dma_mapping_ops intel_dma_ops = {
> +	.alloc_coherent = intel_alloc_coherent,
> +	.free_coherent = intel_free_coherent,
> +	.map_single = intel_map_single,
> +	.unmap_single = intel_unmap_single,
> +	.map_sg = intel_map_sg,
> +	.unmap_sg = intel_unmap_sg,
> +};

can it be static?

> +void *iommu_rpool_alloc(unsigned int size, gfp_t flag)
> +{
> +	if (size == PAGE_SIZE_4K)
> +		return(void *)get_zeroed_page(flag);
> +	else
> +		return kzalloc(size, flag);
> +}

kmalloc(4k) is pretty efficient and (I think) is guaranteed to return a
page-aligned address.

iow: can we just use kmalloc here?

> +
> +static inline int
> +iommu_devinfo_pool_init(void)
> +{
> +	return init_resource_pool(&iommu_devinfo_pool, MIN_DEVINFO_REQ,
> +		sizeof(struct device_domain_info),
> +		GROW_DEVINFO_REQ, iommu_rpool_alloc,
> +		iommu_rpool_free);
> +}
> +
> +static inline int
> +iommu_iova_pool_init(void)
> +{
> +	return init_resource_pool(&iommu_iova_pool, MIN_IOVA_REQ,
> +		sizeof(struct iova),
> +		GROW_IOVA_REQ, iommu_rpool_alloc, iommu_rpool_free);
> +}
> +
> +static int iommu_init_mempool(void)
> +{
> +	int ret;
> +	ret = iommu_iova_pool_init();
> +	if (ret)
> +		return ret;
> +
> +	ret = iommu_pgtable_pool_init();
> +	if (ret)
> +		goto pgtable_error;
> +
> +	ret = iommu_domain_pool_init();
> +	if (ret)
> +		goto domain_error;
> +
> +	ret = iommu_devinfo_pool_init();
> +	if (!ret)
> +		return ret;
> +
> +	destroy_resource_pool(&iommu_domain_pool);
> +domain_error:
> +	destroy_resource_pool(&iommu_pgtable_pool);
> +pgtable_error:
> +	destroy_resource_pool(&iommu_iova_pool);
> +
> +	return -ENOMEM;
> +}

can be __init

> +static void iommu_exit_mempool(void)
> +{
> +	destroy_resource_pool(&iommu_devinfo_pool);
> +	destroy_resource_pool(&iommu_domain_pool);
> +	destroy_resource_pool(&iommu_pgtable_pool);
> +	destroy_resource_pool(&iommu_iova_pool);
> +}

ditto (unexpectedly)

> +void __init detect_intel_iommu(void)
> +{
> +	if (swiotlb || no_iommu || iommu_detected || dmar_disabled)
> +		return;
> +	if (early_dmar_detect()) {
> +		iommu_detected = 1;
> +	}
> +}
> +
> +static void __init init_no_remapping_devices(void)
> +{
> +	struct dmar_drhd_unit *drhd;
> +
> +	for_each_drhd_unit(drhd)
> +		if (!drhd->include_all) {
> +			int i;
> +			for (i=0; i < drhd->devices_cnt; i++)
> +				if (drhd->devices[i] != NULL)
> +					break;
> +			/* ignore DMAR unit if no pci devices exist */
> +			if (i == drhd->devices_cnt)
> +				drhd->ignored = 1;
> +		}

looks weird - I'd add the extra braces here.

> +	if (dmar_map_gfx)
> +		return;
> +
> +	for_each_drhd_unit(drhd) {
> +		int i;
> +		if (drhd->ignored || drhd->include_all)
> +			continue;
> +
> +		for (i = 0; i < drhd->devices_cnt; i++)
> +			if (drhd->devices[i] && !IS_GFX_DEVICE(drhd->devices[i]))
> +				break;
> +
> +		if (i < drhd->devices_cnt)
> +			continue;
> +
> +		/* bypass IOMMU if it is just for gfx devices */
> +		drhd->ignored = 1;
> +		for (i = 0; i < drhd->devices_cnt; i++) {
> +			if (!drhd->devices[i])
> +				continue;
> +			drhd->devices[i]->sysdata = DUMMY_DEVICE_DOMAIN_INFO;
> +		}
> +	}
> +}
> +
>
> ...
>
> +#define OFFSET_STRIDE		(9)
> +#define dmar_readl(dmar, reg) readl(dmar + reg)
> +#define dmar_writel(dmar, reg, val) writel((val), dmar + reg)

Is this pointless obfuscation?

> +#define dmar_readq(dmar, reg) ({ \
> +		u32 lo, hi; \
> +		lo = dmar_readl(dmar, reg); \
> +		hi = dmar_readl(dmar, reg + 4); \
> +		(((u64) hi) << 32) + lo; })
> +#define dmar_writeq(dmar, reg, val) do {\
> +		dmar_writel(dmar, reg, (u32)(val)); \
> +		dmar_writel(dmar, reg + 4, (u32)((val) >> 32)); \
> +	} while (0)

Do these need to be macros?  If not, a regular C function would be better. 
Not necessarily an inlined one, either.
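
eg (untested):

static u64 dmar_readq(void __iomem *dmar, int reg)
{
	u32 lo, hi;

	lo = readl(dmar + reg);
	hi = readl(dmar + reg + 4);
	return ((u64)hi << 32) + lo;
}

(plus a matching dmar_writeq()) reads rather better than the macro soup.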

> +#define VER_MAJOR(v)		(((v) & 0xf0) >> 4)
> +#define VER_MINOR(v)		((v) & 0x0f)

We already have several VER_MAJORs defined in the tree, so adding a new one
is asking for trouble.  Suggest the use of a more specific identifier.

> +#define set_root_value(root, value) \
> +	do {(root).val |= ((value) & PAGE_MASK_4K);} while(0)

Methinks that could be written in C too.

> +struct domain {

hm, "domain" is a pretty generic term.  I think there's a bit of namespace
hogging here..

> +	int	id;			/* domain id */
> +	struct intel_iommu *iommu;	/* back pointer to owning iommu */
> +
> +	struct list_head devices; 	/* all devices' list */
> +	struct iova_domain iovad;	/* iova's that belong to this domain */
> +
> +	struct dma_pte	*pgd;		/* virtual address */
> +	spinlock_t	mapping_lock;	/* page table lock */
> +	int		gaw;		/* max guest address width */
> +	int		agaw;		/* adjusted guest address width, 0 is level 2 30-bit */
> +
> +#define DOMAIN_FLAG_MULTIPLE_DEVICES 1
> +	int		flags;
> +};
> +
>
> ...
>
> +#define for_each_drhd_unit(drhd) \
> +	list_for_each_entry(drhd, &dmar_drhd_units, list)
> +#define for_each_rmrr_units(rmrr) \
> +	list_for_each_entry(rmrr, &dmar_rmrr_units, list)
> +#define begin_for_each_rmrr_device(rmrr, pdev) \
> +	for_each_rmrr_units(rmrr) { \
> +		int _i; \
> +		for (_i = 0; _i < rmrr->devices_cnt; _i++) { \
> +			pdev = rmrr->devices[_i]; \
> +			/* some BIOSes list nonexistent devices in the DMAR table */\
> +			if (!pdev) \
> +				continue;
> +#define end_for_each_rmrr_device(rmrr, pdev) \
> +		} \
> +	}
> +

Are these used often enough to justify their inclusion?

Would it be possible to implement these as regular C functions which are
passed the address of a callback function?
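
Something along these lines, perhaps (just a sketch - the function name
and callback signature are invented, and I'm assuming the rmrr unit
struct is called dmar_rmrr_unit):

static int for_each_rmrr_device(int (*fn)(struct dmar_rmrr_unit *rmrr,
				struct pci_dev *pdev, void *arg), void *arg)
{
	struct dmar_rmrr_unit *rmrr;
	int i, ret;

	for_each_rmrr_units(rmrr) {
		for (i = 0; i < rmrr->devices_cnt; i++) {
			/* some BIOSes list nonexistent devices in the DMAR table */
			if (!rmrr->devices[i])
				continue;
			ret = fn(rmrr, rmrr->devices[i], arg);
			if (ret)
				return ret;
		}
	}
	return 0;
}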


^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: [Intel-IOMMU 07/10] Intel iommu cmdline option - forcedac
  2007-06-06 18:57 ` [Intel-IOMMU 07/10] Intel iommu cmdline option - forcedac anil.s.keshavamurthy
@ 2007-06-07 23:58   ` Andrew Morton
  0 siblings, 0 replies; 64+ messages in thread
From: Andrew Morton @ 2007-06-07 23:58 UTC (permalink / raw)
  To: anil.s.keshavamurthy
  Cc: linux-kernel, ak, gregkh, muli, asit.k.mallick, suresh.b.siddha,
	arjan, ashok.raj, shaohua.li, davem

On Wed, 06 Jun 2007 11:57:05 -0700
anil.s.keshavamurthy@intel.com wrote:

> --- linux-2.6.22-rc3.orig/Documentation/kernel-parameters.txt	2007-06-04 12:40:29.000000000 -0700
> +++ linux-2.6.22-rc3/Documentation/kernel-parameters.txt	2007-06-04 12:40:41.000000000 -0700
> @@ -785,6 +785,13 @@
>  			bypassed by not enabling DMAR with this option. In
>  			this case, gfx device will use physical address for
>  			DMA.
> +		forcedac

You want
		forcedac [X86-64]

> +			With this option the IOMMU will not try to allocate
> +			an IO virtual address below 32 bits, forcing dual
> +			address cycles on the PCI bus for cards supporting
> +			greater than 32 bit addressing. The default is to
> +			look for a translation below 32 bits and, if that is
> +			not available, to look in the higher range.


^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: [Intel-IOMMU 09/10] Iommu Gfx workaround
  2007-06-06 18:57 ` [Intel-IOMMU 09/10] Iommu Gfx workaround anil.s.keshavamurthy
@ 2007-06-08  0:01   ` Andrew Morton
  0 siblings, 0 replies; 64+ messages in thread
From: Andrew Morton @ 2007-06-08  0:01 UTC (permalink / raw)
  To: anil.s.keshavamurthy
  Cc: linux-kernel, ak, gregkh, muli, asit.k.mallick, suresh.b.siddha,
	arjan, ashok.raj, shaohua.li, davem

On Wed, 06 Jun 2007 11:57:07 -0700
anil.s.keshavamurthy@intel.com wrote:

> +#ifdef CONFIG_DMAR_GFX_WA
> +	iommu_prepare_gfx_mapping();
> +#endif

Please do

#ifndef CONFIG_DMAR_GFX_WA
static inline void iommu_prepare_gfx_mapping(void)
{
}
#endif

in the head file instead (whole patchset)

^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: [Intel-IOMMU 02/10] Library routine for pre-allocat pool handling
  2007-06-07 23:27   ` Andrew Morton
@ 2007-06-08 18:21     ` Keshavamurthy, Anil S
  2007-06-08 19:01       ` Andrew Morton
  2007-06-08 22:32       ` Christoph Lameter
  0 siblings, 2 replies; 64+ messages in thread
From: Keshavamurthy, Anil S @ 2007-06-08 18:21 UTC (permalink / raw)
  To: Andrew Morton
  Cc: anil.s.keshavamurthy, linux-kernel, ak, gregkh, muli,
	asit.k.mallick, suresh.b.siddha, arjan, ashok.raj, shaohua.li,
	davem

On Thu, Jun 07, 2007 at 04:27:26PM -0700, Andrew Morton wrote:
> On Wed, 06 Jun 2007 11:57:00 -0700
> anil.s.keshavamurthy@intel.com wrote:
> 
> > Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
> 
> That was a terse changelog.
> 
> Obvious question: how does this differ from mempools, and would it be
> better to fill in any gaps in mempool functionality instead of
> implementing something similar-looking?

Very good question. Mempool pre-allocates the elements
to the required minimum count size during its initialization time.
However, when mempool_alloc() is called it tries to obtain the
element from the OS and if that fails then it looks for the element in
its pool. If there are no elements in its pool and the gfp_t
flags say it can wait, then it waits until someone puts an element
back into the pool; if the gfp_t flags say it can't wait, it returns NULL.
In other words, mempool acts as an *emergency* pool, i.e. only if the OS fails
to allocate the required memory is the pool object used.


In the IOMMU case, we need exactly the opposite of what mempool provides,
i.e. we always want to look for the element in the pool and only if the pool
has no elements do we go to the OS as a worst case. These resource pool
library routines do exactly that. Again, the resource pool
grows and shrinks automatically in the background to maintain the minimum
number of pool elements. I am not sure whether this totally
opposite functionality of mempools and resource pools can be
merged.
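
To be concrete, the intended usage pattern is (sketch taken from the
current patches, error handling omitted):

	/* init time: pre-populate the pool */
	init_resource_pool(&iommu_iova_pool, MIN_IOVA_REQ,
		sizeof(struct iova), GROW_IOVA_REQ,
		iommu_rpool_alloc, iommu_rpool_free);

	/* hot path, typically with interrupts disabled */
	struct iova *iova = get_resource_pool_obj(&iommu_iova_pool);
	...
	put_resource_pool_obj(iova, &iommu_iova_pool);

The hot path almost always finds a pre-allocated object in the pool, and
the pool is grown or shrunk later from keventd.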

In fact the very first version of this IOMMU patch used mempools
and the performance was worse, because mempool did not help: the
IOMMU did very frequent allocs and frees of pool objects and
every alloc/free call used to go to the OS. Andi Kleen
noticed this and told us that mempool usage for the IOMMU is wrong and
hence we came up with the resource pool concept.

> 
> The changelog very much should describe all this, as well as explaining
> what the dynamic behaviour of this new thing is, and what applications are
> envisaged, what problems it solves, etc, etc.

I can gladly update the changelog if the resource pool concept is 
approved. I will fix all the below minor comments.

I envision that this might be useful for all vendors' (IBM, AMD, Intel, etc.) IOMMU drivers
and for any kernel component which does lots of dynamic alloc/free of objects of the same size.

thanks,
Anil

^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: [Intel-IOMMU 05/10] IOVA allocation and management routines
  2007-06-07 23:34   ` Andrew Morton
@ 2007-06-08 18:25     ` Keshavamurthy, Anil S
  0 siblings, 0 replies; 64+ messages in thread
From: Keshavamurthy, Anil S @ 2007-06-08 18:25 UTC (permalink / raw)
  To: Andrew Morton
  Cc: anil.s.keshavamurthy, linux-kernel, ak, gregkh, muli,
	asit.k.mallick, suresh.b.siddha, arjan, ashok.raj, shaohua.li,
	davem

On Thu, Jun 07, 2007 at 04:34:17PM -0700, Andrew Morton wrote:
> On Wed, 06 Jun 2007 11:57:03 -0700
> anil.s.keshavamurthy@intel.com wrote:
> 
> > +
> > +#define PAGE_SHIFT_4K		(12)
> > +#define PAGE_SIZE_4K		(1UL << PAGE_SHIFT_4K)
> > +#define PAGE_MASK_4K		(((u64)-1) << PAGE_SHIFT_4K)
> > +#define PAGE_ALIGN_4K(addr)	(((addr) + PAGE_SIZE_4K - 1) & PAGE_MASK_4K)
> 
> hm.  We can't use the architecture's PAGE_SHIFT and friends here?
I will see how I can get rid of this.

-Anil

^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: [Intel-IOMMU 02/10] Library routine for pre-allocat pool handling
  2007-06-08 18:21     ` Keshavamurthy, Anil S
@ 2007-06-08 19:01       ` Andrew Morton
  2007-06-08 20:12         ` Keshavamurthy, Anil S
  2007-06-08 20:43         ` Andreas Kleen
  2007-06-08 22:32       ` Christoph Lameter
  1 sibling, 2 replies; 64+ messages in thread
From: Andrew Morton @ 2007-06-08 19:01 UTC (permalink / raw)
  To: Keshavamurthy, Anil S
  Cc: linux-kernel, ak, gregkh, muli, asit.k.mallick, suresh.b.siddha,
	arjan, ashok.raj, shaohua.li, davem

On Fri, 8 Jun 2007 11:21:57 -0700
"Keshavamurthy, Anil S" <anil.s.keshavamurthy@intel.com> wrote:

> On Thu, Jun 07, 2007 at 04:27:26PM -0700, Andrew Morton wrote:
> > On Wed, 06 Jun 2007 11:57:00 -0700
> > anil.s.keshavamurthy@intel.com wrote:
> > 
> > > Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
> > 
> > That was a terse changelog.
> > 
> > Obvious question: how does this differ from mempools, and would it be
> > better to fill in any gaps in mempool functionality instead of
> > implementing something similar-looking?
> 
> Very good question. Mempool pre-allocates the elements
> to the required minimum count size during its initialization time.
> However, when mempool_alloc() is called it tries to obtain the
> element from the OS and if that fails then it looks for the element in
> its pool. If there are no elements in its pool and the gfp_t
> flags say it can wait, then it waits until someone puts an element
> back into the pool; if the gfp_t flags say it can't wait, it returns NULL.
> In other words, mempool acts as an *emergency* pool, i.e. only if the OS fails
> to allocate the required memory is the pool object used.
> 
> 
> In the IOMMU case, we need exactly opposite of what mempool provides,
> i.e we always want to look for the element in the pool and if the pool
> has no element then go to OS as a worst case. This resource pool
> library routines do the same. Again, this resource pools 
> grows and shrinks automatically to maintain the minimum pool 
> elements in the background. I am not sure whether this totally
> opposite functionality of mempools and resource pools can be 
> merged.

Confused.

If resource pools are not designed to provide extra robustness via an
emergency pool, then what _are_ they designed for?  (Boy this is a hard way
to write a changelog!)

> In fact the very first version of this IOMMU patch used mempools
> and the performance was worse because mempool did not help as
> IOMMU did a very frequent alloc and free of pool objects and
> every call to alloc/free used to go to os. Andi Kleen, 
> noticied and told us that mempool usage for IOMMU is wrong and
> hence we came up with resource pool concept.

You _seem_ to be saying that the resource pools are there purely for
alloc/free performance reasons.  If so, I'd be skeptical: slab is pretty
darned fast.

> > 
> > The changelog very much should describe all this, as well as explaining
> > what the dynamic behaviour of this new thing is, and what applications are
> > envisaged, what problems it solves, etc, etc.
> 
> I can gladly update the changelog if the resource pool concept is 
> approved. I will fix all the below minor comments.
> 
> I envision that this might be useful for all vendor's (IBM, AMD, Intel, etc) IOMMU driver
> and for any kernel component which does lots of dynamic alloc/free an object of same size.
> 

That's what kmem_cache_alloc() is for?!?!

^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: [Intel-IOMMU 02/10] Library routine for pre-allocat pool handling
  2007-06-08 19:01       ` Andrew Morton
@ 2007-06-08 20:12         ` Keshavamurthy, Anil S
  2007-06-08 20:40           ` Siddha, Suresh B
                             ` (2 more replies)
  2007-06-08 20:43         ` Andreas Kleen
  1 sibling, 3 replies; 64+ messages in thread
From: Keshavamurthy, Anil S @ 2007-06-08 20:12 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Keshavamurthy, Anil S, linux-kernel, ak, gregkh, muli,
	asit.k.mallick, suresh.b.siddha, arjan, ashok.raj, shaohua.li,
	davem

On Fri, Jun 08, 2007 at 12:01:07PM -0700, Andrew Morton wrote:
> On Fri, 8 Jun 2007 11:21:57 -0700
> "Keshavamurthy, Anil S" <anil.s.keshavamurthy@intel.com> wrote:
> 
> > On Thu, Jun 07, 2007 at 04:27:26PM -0700, Andrew Morton wrote:
> > > On Wed, 06 Jun 2007 11:57:00 -0700
> > > anil.s.keshavamurthy@intel.com wrote:
> > > 
> > > > Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
> > > 
> > > That was a terse changelog.
> > > 
> > > Obvious question: how does this differ from mempools, and would it be
> > > better to fill in any gaps in mempool functionality instead of
> > > implementing something similar-looking?
> > 
> > Very good question. Mempool pre-allocates the elements
> > to the required minimum count size during its initilization time.
> > However when mempool_alloc() is called it tries to obtain the
> > element from OS and if that fails then it looks for the element in 
> > its pool. If there are no elements in its pool and if the gpf_t 
> > flags says it can wait then it waits untill someone puts the element 
> > back to pool, else if gpf_t flag say it can;t wait then it returns NULL. 
> > In other words, mempool acts as *emergency* pool, i.e only if the OS fails 
> > to allocate the required memory, then the pool object is used.
> > 
> > 
> > In the IOMMU case, we need exactly opposite of what mempool provides,
> > i.e we always want to look for the element in the pool and if the pool
> > has no element then go to OS as a worst case. This resource pool
> > library routines do the same. Again, this resource pools 
> > grows and shrinks automatically to maintain the minimum pool 
> > elements in the background. I am not sure whether this totally
> > opposite functionality of mempools and resource pools can be 
> > merged.
> 
> Confused.
> 
> If resource pools are not designed to provide extra robustness via an
> emergency pool, then what _are_ they designed for?  (Boy this is a hard way
> to write a changelog!)

The resource pool does indeed provide extra robustness; the initial pool size
is min_count + grow_count. If the pool object count goes below min_count,
the pool grows in the background while continuing to serve as an emergency
pool with min_count objects in it. If we run out of emergency pool objects
before the pool has grown in the background, then we go to the OS for allocation.

Similarly, if the pool object count grows above the max threshold,
objects are freed back to the OS in a background thread, keeping
the pool size close to min_count + grow_count.
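
(Continuing the hypothetical rpool sketch from my earlier mail -- again just
an illustration, not the patch code -- the background refill queued to keventd
would look roughly like this:)

#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/workqueue.h>

/* Runs in process context, so GFP_KERNEL is fine here. */
static void rpool_grow_work(struct work_struct *work)
{
        struct rpool *rp = container_of(work, struct rpool, grow_work);
        unsigned long flags;
        unsigned int want;

        spin_lock_irqsave(&rp->lock, flags);
        want = rp->min_count + rp->grow_count;
        want = (rp->count < want) ? want - rp->count : 0;
        spin_unlock_irqrestore(&rp->lock, flags);

        while (want--) {
                struct list_head *obj = kzalloc(rp->obj_size, GFP_KERNEL);

                if (!obj)
                        break;          /* retry next time the pool runs low */
                spin_lock_irqsave(&rp->lock, flags);
                list_add(obj, &rp->free_list);
                rp->count++;
                spin_unlock_irqrestore(&rp->lock, flags);
        }
}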


> 
> > In fact the very first version of this IOMMU patch used mempools
> > and the performance was worse because mempool did not help as
> > IOMMU did a very frequent alloc and free of pool objects and
> > every call to alloc/free used to go to os. Andi Kleen, 
> > noticied and told us that mempool usage for IOMMU is wrong and
> > hence we came up with resource pool concept.
> 
> You _seem_ to be saying that the resource pools are there purely for
> alloc/free performance reasons.  If so, I'd be skeptical: slab is pretty
> darned fast.
We need many objects of a small size (say 4 * sizeof(u64)) and reuse
them in the dma map/unmap API calls to manage the I/O virtual addresses that
this driver has dished out. Hence having a pool of objects, where we put
an element on a linked list and get it back from the linked list, is pretty
fast compared to slab.
> 
> > > 
> > > The changelog very much should describe all this, as well as explaining
> > > what the dynamic behaviour of this new thing is, and what applications are
> > > envisaged, what problems it solves, etc, etc.
> > 
> > I can gladly update the changelog if the resource pool concept is 
> > approved. I will fix all the below minor comments.
> > 
> > I envision that this might be useful for all vendor's (IBM, AMD, Intel, etc) IOMMU driver
> > and for any kernel component which does lots of dynamic alloc/free an object of same size.
> > 
> 
> That's what kmem_cache_alloc() is for?!?!
We had this kmem_cache_alloc() plus mempool approach earlier, and Andi suggested
we come up with a pre-allocated pool.
Andi, can you chime in please?

-Anil

^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: [Intel-IOMMU 02/10] Library routine for pre-allocat pool handling
  2007-06-08 20:12         ` Keshavamurthy, Anil S
@ 2007-06-08 20:40           ` Siddha, Suresh B
  2007-06-08 20:44           ` Andrew Morton
  2007-06-08 22:33           ` Christoph Lameter
  2 siblings, 0 replies; 64+ messages in thread
From: Siddha, Suresh B @ 2007-06-08 20:40 UTC (permalink / raw)
  To: Keshavamurthy, Anil S
  Cc: Andrew Morton, linux-kernel, ak, gregkh, muli, asit.k.mallick,
	suresh.b.siddha, arjan, ashok.raj, shaohua.li, davem

On Fri, Jun 08, 2007 at 01:12:00PM -0700, Keshavamurthy, Anil S wrote:
> The resource pool indeed provide extra robustness, the initial pool size will
> be equal to min_count + grow_count. If the pool object count goes below
> min_count, then pool grows in the background while serving as emergency
> pool with min_count of objects in it. If we run out of emergency pool objects
> before the pool grow in the background, then we go to OS for allocation.
> 
> Similary, if the pool objects grows above the max threshold,
> the objects are freed to OS in the background thread maintaining
> the pool objects close to min_count + grow_count size.

slab already has this, and it has additional functionality like reaping
over time when there is no activity...

> We need several objects of size say( 4 * sizeof(u64)) and reuse
> them in dma map/unmap api calls for managing io virtual allocation address that
> this driver has dished out. Hence having pool of objects where we put 
> the element in the linked list and and get it from the linked list is pretty
> fast compared to slab.

Not sure how this is faster than slab. At least slab is lockless in the
fast case.

> We had this kmem_cache_alloc() with mempool concept earlier and Andi suggest to 
> come up with something pre-allocated pool.  
> Andi, Can you chime in please.

In the initial patches, only for the iova were we using slab + mempool. But
for others, like pgtable_mempool, we were using simple mempools.

Even slab + mempool is not the same as just using slab: slab is lockless
in the fast case.

thanks,
suresh

^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: [Intel-IOMMU 02/10] Library routine for pre-allocat pool handling
  2007-06-08 19:01       ` Andrew Morton
  2007-06-08 20:12         ` Keshavamurthy, Anil S
@ 2007-06-08 20:43         ` Andreas Kleen
  2007-06-08 20:55           ` Andrew Morton
                             ` (2 more replies)
  1 sibling, 3 replies; 64+ messages in thread
From: Andreas Kleen @ 2007-06-08 20:43 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Keshavamurthy, Anil S, linux-kernel, gregkh, muli,
	asit.k.mallick, suresh.b.siddha, arjan, ashok.raj, shaohua.li,
	davem

Am Fr 08.06.2007 21:01 schrieb Andrew Morton
<akpm@linux-foundation.org>:

> On Fri, 8 Jun 2007 11:21:57 -0700
> "Keshavamurthy, Anil S" <anil.s.keshavamurthy@intel.com> wrote:
>
> > On Thu, Jun 07, 2007 at 04:27:26PM -0700, Andrew Morton wrote:
> > > On Wed, 06 Jun 2007 11:57:00 -0700
> > > anil.s.keshavamurthy@intel.com wrote:
> > >
> > > > Signed-off-by: Anil S Keshavamurthy
> > > > <anil.s.keshavamurthy@intel.com>
> > >
> > > That was a terse changelog.
> > >
> > > Obvious question: how does this differ from mempools, and would it
> > > be
> > > better to fill in any gaps in mempool functionality instead of
> > > implementing something similar-looking?
> >
> > Very good question. Mempool pre-allocates the elements
> > to the required minimum count size during its initilization time.
> > However when mempool_alloc() is called it tries to obtain the
> > element from OS and if that fails then it looks for the element in
> > its pool. If there are no elements in its pool and if the gpf_t
> > flags says it can wait then it waits untill someone puts the element
> > back to pool, else if gpf_t flag say it can;t wait then it returns
> > NULL.
> > In other words, mempool acts as *emergency* pool, i.e only if the OS
> > fails
> > to allocate the required memory, then the pool object is used.
> >
> >
> > In the IOMMU case, we need exactly opposite of what mempool
> > provides,
> > i.e we always want to look for the element in the pool and if the
> > pool
> > has no element then go to OS as a worst case. This resource pool
> > library routines do the same. Again, this resource pools
> > grows and shrinks automatically to maintain the minimum pool
> > elements in the background. I am not sure whether this totally
> > opposite functionality of mempools and resource pools can be
> > merged.
>
> Confused.
>
> If resource pools are not designed to provide extra robustness via an
> emergency pool, then what _are_ they designed for? (Boy this is a hard
> way
> to write a changelog!)

mempools are designed to manage a limited resource pool by sleeping
if necessary until someone else frees a resource. It's basically similar
to how the main VM works with a sleeping allocation, just in a "private
user group".

In the IOMMU case sleeping is not allowed because pci_map_* typically
happens inside spinlocks.  But the IOMMU code might need to allocate
new page tables and other datastructures in there.

This means mempools don't work for those (the previous version had
nonsensical constructs like GFP_ATOMIC mempool calls).

I haven't looked at Anil's code, but I suspect the only really robust
way to handle this case is to always preallocate everything. But I'm not
sure why that would need new library functions; it should be just some
simple lists that could be open coded.

If it needs to fall back to the OS for anything not preallocated then it
will likely be flaky under high load. Now that might be ok in some cases
-- apparently the block layer is much better at handling this than it used
to be, and networking has to handle it anyway -- but it might still be
an unpleasant surprise for many drivers. One generic problem is that
there are no upcalls when such resources become available again,
so the upper layers would need to poll to know when to resubmit
a request.

It's a pretty messy problem unfortunately.

One relatively easy way out would be to just preallocate
a static aperture fully and always map into it. Not sure
how much memory that would need -- when it's too large
it might take a lot of memory for page tables always and when it's
too small it might overflow under high load.
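
(For illustration, a swiotlb/GART-style static aperture allocator could be as
simple as the sketch below -- hypothetical names, a 256MB window picked purely
as an example, aperture_base assumed nonzero and the page tables for the
window assumed to be populated once at init time:)

#include <linux/bitmap.h>
#include <linux/spinlock.h>

#define APERTURE_PAGES  ((256UL << 20) >> 12)   /* 256MB window in 4K pages */

static DEFINE_SPINLOCK(aperture_lock);
static DECLARE_BITMAP(aperture_map, APERTURE_PAGES);
static unsigned long aperture_base;             /* IOVA base, set at init */

/* No memory allocation here, so this is usable from interrupt context.
 * Returns the IOVA of a 2^order page region, or 0 if the window is full. */
static unsigned long aperture_alloc(int order)
{
        unsigned long flags;
        int pos;

        spin_lock_irqsave(&aperture_lock, flags);
        pos = bitmap_find_free_region(aperture_map, APERTURE_PAGES, order);
        spin_unlock_irqrestore(&aperture_lock, flags);

        return pos < 0 ? 0 : aperture_base + ((unsigned long)pos << 12);
}

static void aperture_free(unsigned long iova, int order)
{
        unsigned long flags;

        spin_lock_irqsave(&aperture_lock, flags);
        bitmap_release_region(aperture_map, (iova - aperture_base) >> 12, order);
        spin_unlock_irqrestore(&aperture_lock, flags);
}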

> That's what kmem_cache_alloc() is for?!?!

Traditionally that was not allowed in the block layer path. Not sure
that restriction is fully obsolete with the recent dirty tracking work; probably not.

Besides, it would need to be GFP_ATOMIC, and the default
atomic pools are not that big.

-Andi



^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: [Intel-IOMMU 02/10] Library routine for pre-allocat pool handling
  2007-06-08 20:12         ` Keshavamurthy, Anil S
  2007-06-08 20:40           ` Siddha, Suresh B
@ 2007-06-08 20:44           ` Andrew Morton
  2007-06-08 22:33           ` Christoph Lameter
  2 siblings, 0 replies; 64+ messages in thread
From: Andrew Morton @ 2007-06-08 20:44 UTC (permalink / raw)
  To: Keshavamurthy, Anil S
  Cc: linux-kernel, ak, gregkh, muli, asit.k.mallick, suresh.b.siddha,
	arjan, ashok.raj, shaohua.li, davem

On Fri, 8 Jun 2007 13:12:00 -0700
"Keshavamurthy, Anil S" <anil.s.keshavamurthy@intel.com> wrote:

> On Fri, Jun 08, 2007 at 12:01:07PM -0700, Andrew Morton wrote:
> > On Fri, 8 Jun 2007 11:21:57 -0700
> > "Keshavamurthy, Anil S" <anil.s.keshavamurthy@intel.com> wrote:
> > 
> > > On Thu, Jun 07, 2007 at 04:27:26PM -0700, Andrew Morton wrote:
> > > > On Wed, 06 Jun 2007 11:57:00 -0700
> > > > anil.s.keshavamurthy@intel.com wrote:
> > > > 
> > > > > Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
> > > > 
> > > > That was a terse changelog.
> > > > 
> > > > Obvious question: how does this differ from mempools, and would it be
> > > > better to fill in any gaps in mempool functionality instead of
> > > > implementing something similar-looking?
> > > 
> > > Very good question. Mempool pre-allocates the elements
> > > to the required minimum count size during its initilization time.
> > > However when mempool_alloc() is called it tries to obtain the
> > > element from OS and if that fails then it looks for the element in 
> > > its pool. If there are no elements in its pool and if the gpf_t 
> > > flags says it can wait then it waits untill someone puts the element 
> > > back to pool, else if gpf_t flag say it can;t wait then it returns NULL. 
> > > In other words, mempool acts as *emergency* pool, i.e only if the OS fails 
> > > to allocate the required memory, then the pool object is used.
> > > 
> > > 
> > > In the IOMMU case, we need exactly opposite of what mempool provides,
> > > i.e we always want to look for the element in the pool and if the pool
> > > has no element then go to OS as a worst case. This resource pool
> > > library routines do the same. Again, this resource pools 
> > > grows and shrinks automatically to maintain the minimum pool 
> > > elements in the background. I am not sure whether this totally
> > > opposite functionality of mempools and resource pools can be 
> > > merged.
> > 
> > Confused.
> > 
> > If resource pools are not designed to provide extra robustness via an
> > emergency pool, then what _are_ they designed for?  (Boy this is a hard way
> > to write a changelog!)
> 
> The resource pool indeed provide extra robustness, the initial pool size will
> be equal to min_count + grow_count. If the pool object count goes below
> min_count, then pool grows in the background while serving as emergency
> pool with min_count of objects in it. If we run out of emergency pool objects
> before the pool grow in the background, then we go to OS for allocation.

This wholly duplicates kswapd functionality.

> Similary, if the pool objects grows above the max threshold,
> the objects are freed to OS in the background thread maintaining
> the pool objects close to min_count + grow_count size.

That problem was _introduced_ by resource-pools, so yes, it also needs to
be solved there.

> 
> > 
> > > In fact the very first version of this IOMMU patch used mempools
> > > and the performance was worse because mempool did not help as
> > > IOMMU did a very frequent alloc and free of pool objects and
> > > every call to alloc/free used to go to os. Andi Kleen, 
> > > noticied and told us that mempool usage for IOMMU is wrong and
> > > hence we came up with resource pool concept.
> > 
> > You _seem_ to be saying that the resource pools are there purely for
> > alloc/free performance reasons.  If so, I'd be skeptical: slab is pretty
> > darned fast.
> We need several objects of size say( 4 * sizeof(u64)) and reuse
> them in dma map/unmap api calls for managing io virtual allocation address that
> this driver has dished out. Hence having pool of objects where we put 
> the element in the linked list and and get it from the linked list is pretty
> fast compared to slab.

slab is fast (and IO is slow!).  Do you have benchmark results?

> > 
> > > > 
> > > > The changelog very much should describe all this, as well as explaining
> > > > what the dynamic behaviour of this new thing is, and what applications are
> > > > envisaged, what problems it solves, etc, etc.
> > > 
> > > I can gladly update the changelog if the resource pool concept is 
> > > approved. I will fix all the below minor comments.
> > > 
> > > I envision that this might be useful for all vendor's (IBM, AMD, Intel, etc) IOMMU driver
> > > and for any kernel component which does lots of dynamic alloc/free an object of same size.
> > > 
> > 
> > That's what kmem_cache_alloc() is for?!?!
> We had this kmem_cache_alloc() with mempool concept earlier and Andi suggest to 
> come up with something pre-allocated pool.  
> Andi, Can you chime in please.



^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: [Intel-IOMMU 02/10] Library routine for pre-allocat pool handling
  2007-06-08 20:43         ` Andreas Kleen
@ 2007-06-08 20:55           ` Andrew Morton
  2007-06-08 22:31             ` Andi Kleen
  2007-06-08 21:20           ` Keshavamurthy, Anil S
  2007-06-08 22:36           ` Christoph Lameter
  2 siblings, 1 reply; 64+ messages in thread
From: Andrew Morton @ 2007-06-08 20:55 UTC (permalink / raw)
  To: Andreas Kleen
  Cc: Keshavamurthy, Anil S, linux-kernel, gregkh, muli,
	asit.k.mallick, suresh.b.siddha, arjan, ashok.raj, shaohua.li,
	davem

On Fri, 8 Jun 2007 22:43:10 +0200 (CEST)
Andreas Kleen <ak@suse.de> wrote:

> > That's what kmem_cache_alloc() is for?!?!
> 
> Tradtionally that was not allowed in block layer path. Not sure
> it is fully obsolete with the recent dirty tracking work, probably not.
> 
> Besides it would need to be GFP_ATOMIC and the default
> atomic pools are not that big.

That in itself is a problem.  What do we have to do to be able
to make these allocations use the *much* stronger GFP_NOIO?

We could perhaps talk to Christoph about arranging for each slabcache to
have an optional private reserve page pool.  But fixing the GFP_ATOMIC
would be better.

^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: [Intel-IOMMU 02/10] Library routine for pre-allocat pool handling
  2007-06-08 20:43         ` Andreas Kleen
  2007-06-08 20:55           ` Andrew Morton
@ 2007-06-08 21:20           ` Keshavamurthy, Anil S
  2007-06-08 21:42             ` Andrew Morton
  2007-06-08 22:36           ` Christoph Lameter
  2 siblings, 1 reply; 64+ messages in thread
From: Keshavamurthy, Anil S @ 2007-06-08 21:20 UTC (permalink / raw)
  To: Andreas Kleen
  Cc: Andrew Morton, Keshavamurthy, Anil S, linux-kernel, gregkh, muli,
	asit.k.mallick, suresh.b.siddha, arjan, ashok.raj, shaohua.li,
	davem

On Fri, Jun 08, 2007 at 10:43:10PM +0200, Andreas Kleen wrote:
> Am Fr 08.06.2007 21:01 schrieb Andrew Morton
> <akpm@linux-foundation.org>:
> 
> > On Fri, 8 Jun 2007 11:21:57 -0700
> > "Keshavamurthy, Anil S" <anil.s.keshavamurthy@intel.com> wrote:
> >
> > > On Thu, Jun 07, 2007 at 04:27:26PM -0700, Andrew Morton wrote:
> > > > On Wed, 06 Jun 2007 11:57:00 -0700
> > > > anil.s.keshavamurthy@intel.com wrote:
> > > >
> > > > > Signed-off-by: Anil S Keshavamurthy
> > > > > <anil.s.keshavamurthy@intel.com>
> > > >
> > > > That was a terse changelog.
> > > >
> > > > Obvious question: how does this differ from mempools, and would it
> > > > be
> > > > better to fill in any gaps in mempool functionality instead of
> > > > implementing something similar-looking?
> > >
> > > Very good question. Mempool pre-allocates the elements
> > > to the required minimum count size during its initilization time.
> > > However when mempool_alloc() is called it tries to obtain the
> > > element from OS and if that fails then it looks for the element in
> > > its pool. If there are no elements in its pool and if the gpf_t
> > > flags says it can wait then it waits untill someone puts the element
> > > back to pool, else if gpf_t flag say it can;t wait then it returns
> > > NULL.
> > > In other words, mempool acts as *emergency* pool, i.e only if the OS
> > > fails
> > > to allocate the required memory, then the pool object is used.
> > >
> > >
> > > In the IOMMU case, we need exactly opposite of what mempool
> > > provides,
> > > i.e we always want to look for the element in the pool and if the
> > > pool
> > > has no element then go to OS as a worst case. This resource pool
> > > library routines do the same. Again, this resource pools
> > > grows and shrinks automatically to maintain the minimum pool
> > > elements in the background. I am not sure whether this totally
> > > opposite functionality of mempools and resource pools can be
> > > merged.
> >
> > Confused.
> >
> > If resource pools are not designed to provide extra robustness via an
> > emergency pool, then what _are_ they designed for? (Boy this is a hard
> > way
> > to write a changelog!)
> 
> mempools are designed to manage a limited resource pool by sleeping
> if necessary until someone else frees a resource. It's basically similar
> how to main VM works with a sleeping allocation, just in a "private user
> group"
> 
> In the IOMMU case sleeping is not allowed because pci_map_* typically
> happens inside spinlocks.  But the IOMMU code might need to allocate
> new page tables and other datastructures in there.
> 
> This means mempools don't work for those (the previous version had non
> sensical
> constructs like GFP_ATOMIC mempool calls)
> 
>  I haven't looked at Anil's code, but I suspect the only really robust
> way to handle this case is to always preallocate everything. But I'm not
> sure
> why that would need new library functions; it should be just some simple
> lists that could be open coded.

Since it is practically impossible to predict how much to preallocate,
we keep min_count + grow_count objects allocated and always allocate from this
pool. If the object count goes below a certain low threshold (at which point
the remainder acts as an emergency pool), the pool grows by allocating new
objects and adding them to the pool from the worker (keventd) thread.
Again, once the I/O pressure is over, the PCI driver
does the unmap calls and we put the objects back into the preallocated pool.
The smartness is built into the pool: as elements are put back, it detects
that the pool count is greater than the threshold and automagically queues
work to free the excess objects, bringing the pre-allocated object count
back down to the threshold.
Thus this preallocated pool grows and shrinks based on demand, while
acting as both a pre-allocated pool and an emergency pool.
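
(The free side of the hypothetical rpool sketch from earlier in the thread --
illustrative only, and assuming struct rpool also carries a second
work_struct, shrink_work, whose handler kfree()s the excess objects from
process context -- would then be roughly:)

/* Return an object to the pool; shed any excess in the background. */
static void rpool_free(struct rpool *rp, void *obj)
{
        unsigned long flags;
        struct list_head *e = obj;      /* list_head embedded at the start */

        spin_lock_irqsave(&rp->lock, flags);
        list_add(e, &rp->free_list);
        rp->count++;
        /* Too many cached objects: have the worker kfree the excess
         * back to the OS from process context. */
        if (rp->count > rp->min_count + rp->grow_count)
                schedule_work(&rp->shrink_work);
        spin_unlock_irqrestore(&rp->lock, flags);
}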

Currently I have made this a library; if that is not the right approach,
we can pull it in and make it part of the Intel IOMMU driver itself.
Please do let me know your suggestions.

> 
> If it needs to fall back to the OS for any non pre allocation then it
> will
> likely be flakey under high load. Now that might be ok in some cases
> -- apparently block layer is much better at handling this than it used
> to be and networking has to handle it anyways, but it might be still
> a unpleasant surprise for many drivers. One generic problem is that
> there are no upcalls when such resources become avaialable again
> so the upper layers would need to poll to know when to resubmit
> a request.
> 
> It's a pretty messy problem unfortunately.
I agree. In the worst case, if the element is not available in the
pool we need to fall back to the OS, and if the OS fails then it
is tough luck.

> 
> One relatively easy way out would be to just preallocate
> a static aperture fully and always map into it. Not sure
> how much memory that would need -- when it's too large
> it might take a lot of memory for page tables always and when it's
> too small it might overflow under high load.
Yup, and since we need to use the same driver from
desktops to servers, a preallocation count sized for servers
may not be suitable for desktops.

> 
> > That's what kmem_cache_alloc() is for?!?!
> 
> Tradtionally that was not allowed in block layer path. Not sure
> it is fully obsolete with the recent dirty tracking work, probably not.
> 
> Besides it would need to be GFP_ATOMIC and the default
> atomic pools are not that big.
> 
> -Andi

^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: [Intel-IOMMU 02/10] Library routine for pre-allocat pool handling
  2007-06-08 21:20           ` Keshavamurthy, Anil S
@ 2007-06-08 21:42             ` Andrew Morton
  2007-06-08 22:17               ` Arjan van de Ven
  2007-06-08 22:18               ` Siddha, Suresh B
  0 siblings, 2 replies; 64+ messages in thread
From: Andrew Morton @ 2007-06-08 21:42 UTC (permalink / raw)
  To: Keshavamurthy, Anil S
  Cc: Andreas Kleen, linux-kernel, gregkh, muli, asit.k.mallick,
	suresh.b.siddha, arjan, ashok.raj, shaohua.li, davem

On Fri, 8 Jun 2007 14:20:54 -0700
"Keshavamurthy, Anil S" <anil.s.keshavamurthy@intel.com> wrote:

> > This means mempools don't work for those (the previous version had non
> > sensical
> > constructs like GFP_ATOMIC mempool calls)
> > 
> >  I haven't looked at Anil's code, but I suspect the only really robust
> > way to handle this case is to always preallocate everything. But I'm not
> > sure
> > why that would need new library functions; it should be just some simple
> > lists that could be open coded.
> 
> Since it is practically impossible to predicit how much to preallocate,
> we have a min_count+grow_count of object allocated and we always use from this
> pool. If the object count goes below certain low threshold(which acts as 
> emergency pool from this point), the pool grows by allocating and 
> adding the newly allocated object into the pool in the
> worker (keventd) thread. 

Asking keventd to do this might be problematic: there may be code in various
dark corners of device drivers which also depend upon keventd services for
IO completion, in which case there might be obscure deadlocks, dunno. 
otoh, keventd already surely does GFP_KERNEL allocations...

But still, the whole thing seems pointless: kswapd is already doing all of
this, replenishing the page reserves.  So why not use that?

> Again, once the IO pressure is over, the PCI driver
> does the unmap calls and we put back the objects back to preallocate pools.
> The smartness is builtin to the pool as the elements are put back to the 
> pool it detects that the pool count is greater then the threshold and 
> it automagically queues the work to free the objects and bring back the 
> pre-allocated object count back to minimum threshold.
> Thus this preallocated pool grow and shrinks based on the demand, while
> acting as both pre-allocated pools and as emergency pool.
> 
> Currently I have made this as a libray functions, if that is not correct,
> we can pull this and make it part of the Intel IOMMU driver itself.
> Please do let me know your suggestions.

I'd say just remove the whole thing and use kmem_cache_alloc().

Put much effort into removing the GFP_ATOMIC and using GFP_NOIO instead:
there's your problem right there.
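
(i.e. something like the sketch below -- placeholder struct, cache and
function names, nothing from the actual patch; note the 2.6.22-era
kmem_cache_create() still takes a dtor argument, and whether the flag can be
GFP_NOIO or has to stay GFP_ATOMIC is exactly the point at issue:)

#include <linux/errno.h>
#include <linux/slab.h>
#include <linux/types.h>

struct iova_entry {                     /* placeholder, ~4 * sizeof(u64) */
        u64 start, end, pad[2];
};

static struct kmem_cache *iova_cachep;

static int iova_cache_init(void)
{
        iova_cachep = kmem_cache_create("iommu_iova", sizeof(struct iova_entry),
                                        0, SLAB_HWCACHE_ALIGN, NULL, NULL);
        return iova_cachep ? 0 : -ENOMEM;
}

/* In the dma_map_*() path: GFP_NOIO if the callers can sleep, GFP_ATOMIC if not. */
static struct iova_entry *iova_entry_get(gfp_t flags)
{
        return kmem_cache_alloc(iova_cachep, flags);
}

/* In the dma_unmap_*() path: */
static void iova_entry_put(struct iova_entry *e)
{
        kmem_cache_free(iova_cachep, e);
}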

If for some reason you really can't do that (and a requirement for
allocation-in-interrupt is the only valid reason, really) and if you indeed
can demonstrate memory allocation failures with certain workloads then
let's take a look at that.  As I said, attaching a reserve pool to your
slab cache might be a suitable approach.  But none of these things are
magic: if memory allocation failures or deadlocks or livelocks are
demonstrable with the reserves absent, then they'll also be possible with
the reserves present.

Unless you use mempools, and can sleep.

^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: [Intel-IOMMU 02/10] Library routine for pre-allocat pool handling
  2007-06-08 21:42             ` Andrew Morton
@ 2007-06-08 22:17               ` Arjan van de Ven
  2007-06-08 22:18               ` Siddha, Suresh B
  1 sibling, 0 replies; 64+ messages in thread
From: Arjan van de Ven @ 2007-06-08 22:17 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Keshavamurthy, Anil S, Andreas Kleen, linux-kernel, gregkh, muli,
	asit.k.mallick, suresh.b.siddha, ashok.raj, shaohua.li, davem

Andrew Morton wrote:
> Put much effort into removing the GFP_ATOMIC and using GFP_NOIO instead:
> there's your problem right there.
> 
> If for some reason you really can't do that (and a requirement for
> allocation-in-interrupt is the only valid reason, really)

and that's the case here; IO gets submitted from IRQ handlers (both 
network and block).......

^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: [Intel-IOMMU 02/10] Library routine for pre-allocat pool handling
  2007-06-08 21:42             ` Andrew Morton
  2007-06-08 22:17               ` Arjan van de Ven
@ 2007-06-08 22:18               ` Siddha, Suresh B
  2007-06-08 22:38                 ` Christoph Lameter
  1 sibling, 1 reply; 64+ messages in thread
From: Siddha, Suresh B @ 2007-06-08 22:18 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Keshavamurthy, Anil S, Andreas Kleen, linux-kernel, gregkh, muli,
	asit.k.mallick, suresh.b.siddha, arjan, ashok.raj, shaohua.li,
	davem, clameter

On Fri, Jun 08, 2007 at 02:42:07PM -0700, Andrew Morton wrote:
> I'd say just remove the whole thing and use kmem_cache_alloc().

We will try that.

> Put much effort into removing the GFP_ATOMIC and using GFP_NOIO instead:
> there's your problem right there.

As these are called from interrupt handlers, we can't use GFP_NOIO.

> If for some reason you really can't do that (and a requirement for
> allocation-in-interrupt is the only valid reason, really) and if you indeed
> can demonstrate memory allocation failures with certain workloads then
> let's take a look at that.  As I said, attaching a reserve pool to your
> slab cache might be a suitable approach.  But none of these things are

I agree. We are better off with enhancing slab infrastructure for this, if
needed.

> magic: if memory allcoation failures or deadlocks or livelocks are
> demonstrable with the reserves absent, then they'll also be possible with
> the reserves present.
> 
> Unless you use mempools, and can sleep.

^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: [Intel-IOMMU 06/10] Intel IOMMU driver
  2007-06-07 23:57   ` Andrew Morton
@ 2007-06-08 22:30     ` Christoph Lameter
  2007-06-13 20:20     ` Keshavamurthy, Anil S
  1 sibling, 0 replies; 64+ messages in thread
From: Christoph Lameter @ 2007-06-08 22:30 UTC (permalink / raw)
  To: Andrew Morton
  Cc: anil.s.keshavamurthy, linux-kernel, ak, gregkh, muli,
	asit.k.mallick, suresh.b.siddha, arjan, ashok.raj, shaohua.li,
	davem

On Thu, 7 Jun 2007, Andrew Morton wrote:

> > +void *iommu_rpool_alloc(unsigned int size, gfp_t flag)
> > +{
> > +	if (size == PAGE_SIZE_4K)
> > +		return(void *)get_zeroed_page(flag);
> > +	else
> > +		return kzalloc(size, flag);
> > +}
> 
> kmalloc(4k) is pretty efficient and (I think) is guaranteed to return a
> page-aligned address.

Page allocations should be done through the page allocator. 4k 
allocations benefit from the per cpu caches of the page allocator which 
makes the use of the page allocator fastest and best for 4k allocs.

kmalloc allocations are not guaranteed to be aligned to 4k boundaries. 
They usually are but ...



^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: [Intel-IOMMU 02/10] Library routine for pre-allocat pool handling
  2007-06-08 20:55           ` Andrew Morton
@ 2007-06-08 22:31             ` Andi Kleen
  0 siblings, 0 replies; 64+ messages in thread
From: Andi Kleen @ 2007-06-08 22:31 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Keshavamurthy, Anil S, linux-kernel, gregkh, muli,
	asit.k.mallick, suresh.b.siddha, arjan, ashok.raj, shaohua.li,
	davem

On Friday 08 June 2007 22:55, Andrew Morton wrote:
> On Fri, 8 Jun 2007 22:43:10 +0200 (CEST)
>
> Andreas Kleen <ak@suse.de> wrote:
> > > That's what kmem_cache_alloc() is for?!?!
> >
> > Tradtionally that was not allowed in block layer path. Not sure
> > it is fully obsolete with the recent dirty tracking work, probably not.
> >
> > Besides it would need to be GFP_ATOMIC and the default
> > atomic pools are not that big.
>
> That in itself is a problem.  What do we have to do to be able
> to make these allocations use the *much* stronger GFP_NOIO?

That still sleeps.

Allow networking and block drivers (and other device drivers) to
map SGs without holding spinlocks. 

Would be major work. I've been asking for it for years
because it would also avoid other nasty problems, like the panic-on-overflow
issues with swiotlb/AMD GART iommu -- it could just block for free
space instead.

-Andi

^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: [Intel-IOMMU 02/10] Library routine for pre-allocat pool handling
  2007-06-08 18:21     ` Keshavamurthy, Anil S
  2007-06-08 19:01       ` Andrew Morton
@ 2007-06-08 22:32       ` Christoph Lameter
  2007-06-08 22:45         ` Keshavamurthy, Anil S
  1 sibling, 1 reply; 64+ messages in thread
From: Christoph Lameter @ 2007-06-08 22:32 UTC (permalink / raw)
  To: Keshavamurthy, Anil S
  Cc: Andrew Morton, linux-kernel, ak, gregkh, muli, asit.k.mallick,
	suresh.b.siddha, arjan, ashok.raj, shaohua.li, davem

On Fri, 8 Jun 2007, Keshavamurthy, Anil S wrote:

> In the IOMMU case, we need exactly opposite of what mempool provides,
> i.e we always want to look for the element in the pool and if the pool
> has no element then go to OS as a worst case. This resource pool
> library routines do the same. Again, this resource pools 
> grows and shrinks automatically to maintain the minimum pool 
> elements in the background. I am not sure whether this totally
> opposite functionality of mempools and resource pools can be 
> merged.

What functionality are you missing in the page allocator? It seems that it
does what you want?

^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: [Intel-IOMMU 02/10] Library routine for pre-allocat pool handling
  2007-06-08 20:12         ` Keshavamurthy, Anil S
  2007-06-08 20:40           ` Siddha, Suresh B
  2007-06-08 20:44           ` Andrew Morton
@ 2007-06-08 22:33           ` Christoph Lameter
  2007-06-08 22:49             ` Keshavamurthy, Anil S
  2 siblings, 1 reply; 64+ messages in thread
From: Christoph Lameter @ 2007-06-08 22:33 UTC (permalink / raw)
  To: Keshavamurthy, Anil S
  Cc: Andrew Morton, linux-kernel, ak, gregkh, muli, asit.k.mallick,
	suresh.b.siddha, arjan, ashok.raj, shaohua.li, davem

On Fri, 8 Jun 2007, Keshavamurthy, Anil S wrote:

> > You _seem_ to be saying that the resource pools are there purely for
> > alloc/free performance reasons.  If so, I'd be skeptical: slab is pretty
> > darned fast.
> We need several objects of size say( 4 * sizeof(u64)) and reuse
> them in dma map/unmap api calls for managing io virtual allocation address that
> this driver has dished out. Hence having pool of objects where we put 
> the element in the linked list and and get it from the linked list is pretty
> fast compared to slab.

SLUB also manages objects using a linked list. Is there a real performance 
difference?

^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: [Intel-IOMMU 02/10] Library routine for pre-allocat pool handling
  2007-06-08 20:43         ` Andreas Kleen
  2007-06-08 20:55           ` Andrew Morton
  2007-06-08 21:20           ` Keshavamurthy, Anil S
@ 2007-06-08 22:36           ` Christoph Lameter
  2007-06-08 22:56             ` Andi Kleen
  2 siblings, 1 reply; 64+ messages in thread
From: Christoph Lameter @ 2007-06-08 22:36 UTC (permalink / raw)
  To: Andreas Kleen
  Cc: Andrew Morton, Keshavamurthy, Anil S, linux-kernel, gregkh, muli,
	asit.k.mallick, suresh.b.siddha, arjan, ashok.raj, shaohua.li,
	davem

On Fri, 8 Jun 2007, Andreas Kleen wrote:

> > That's what kmem_cache_alloc() is for?!?!
> 
> Tradtionally that was not allowed in block layer path. Not sure
> it is fully obsolete with the recent dirty tracking work, probably not.

Why was it not allowed? Because interrupts are disabled?
 
> Besides it would need to be GFP_ATOMIC and the default
> atomic pools are not that big.

Those could be increased. I think Mel has them already increased in mm.

^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: [Intel-IOMMU 02/10] Library routine for pre-allocat pool handling
  2007-06-08 22:18               ` Siddha, Suresh B
@ 2007-06-08 22:38                 ` Christoph Lameter
  0 siblings, 0 replies; 64+ messages in thread
From: Christoph Lameter @ 2007-06-08 22:38 UTC (permalink / raw)
  To: Siddha, Suresh B
  Cc: Andrew Morton, Keshavamurthy, Anil S, Andreas Kleen,
	linux-kernel, gregkh, muli, asit.k.mallick, arjan, ashok.raj,
	shaohua.li, davem

On Fri, 8 Jun 2007, Siddha, Suresh B wrote:

> > If for some reason you really can't do that (and a requirement for
> > allocation-in-interrupt is the only valid reason, really) and if you indeed
> > can demonstrate memory allocation failures with certain workloads then
> > let's take a look at that.  As I said, attaching a reserve pool to your
> > slab cache might be a suitable approach.  But none of these things are
> 
> I agree. We are better off with enhancing slab infrastructure for this, if
> needed.

The slab allocators already use the page allocator's atomic reserves if
called with GFP_ATOMIC.

^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: [Intel-IOMMU 02/10] Library routine for pre-allocat pool handling
  2007-06-08 22:32       ` Christoph Lameter
@ 2007-06-08 22:45         ` Keshavamurthy, Anil S
  2007-06-08 22:55           ` Christoph Lameter
  0 siblings, 1 reply; 64+ messages in thread
From: Keshavamurthy, Anil S @ 2007-06-08 22:45 UTC (permalink / raw)
  To: Christoph Lameter
  Cc: Keshavamurthy, Anil S, Andrew Morton, linux-kernel, ak, gregkh,
	muli, asit.k.mallick, suresh.b.siddha, arjan, ashok.raj,
	shaohua.li, davem

On Fri, Jun 08, 2007 at 03:32:08PM -0700, Christoph Lameter wrote:
> On Fri, 8 Jun 2007, Keshavamurthy, Anil S wrote:
> 
> > In the IOMMU case, we need exactly opposite of what mempool provides,
> > i.e we always want to look for the element in the pool and if the pool
> > has no element then go to OS as a worst case. This resource pool
> > library routines do the same. Again, this resource pools 
> > grows and shrinks automatically to maintain the minimum pool 
> > elements in the background. I am not sure whether this totally
> > opposite functionality of mempools and resource pools can be 
> > merged.
> 
> What functionality are you missing in the page allocator? It seems that is 
> does what you want?
Humm..I basically want to allocate memory during interrupt context and 
expect not to fail. I know this is a hard requirement :)
I want to be able to reserve a certain amount of memory specifically for
IOMMU purposes.

-Anil

^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: [Intel-IOMMU 02/10] Library routine for pre-allocat pool handling
  2007-06-08 22:33           ` Christoph Lameter
@ 2007-06-08 22:49             ` Keshavamurthy, Anil S
  0 siblings, 0 replies; 64+ messages in thread
From: Keshavamurthy, Anil S @ 2007-06-08 22:49 UTC (permalink / raw)
  To: Christoph Lameter
  Cc: Keshavamurthy, Anil S, Andrew Morton, linux-kernel, ak, gregkh,
	muli, asit.k.mallick, suresh.b.siddha, arjan, ashok.raj,
	shaohua.li, davem

On Fri, Jun 08, 2007 at 03:33:39PM -0700, Christoph Lameter wrote:
> On Fri, 8 Jun 2007, Keshavamurthy, Anil S wrote:
> 
> > > You _seem_ to be saying that the resource pools are there purely for
> > > alloc/free performance reasons.  If so, I'd be skeptical: slab is pretty
> > > darned fast.
> > We need several objects of size say( 4 * sizeof(u64)) and reuse
> > them in dma map/unmap api calls for managing io virtual allocation address that
> > this driver has dished out. Hence having pool of objects where we put 
> > the element in the linked list and and get it from the linked list is pretty
> > fast compared to slab.
> 
> SLUB also manages objects using a linked list. Is there a real performance 
> difference?

Sorry, I have not tried using SLUB, I will surely check this out.

-Anil

^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: [Intel-IOMMU 02/10] Library routine for pre-allocat pool handling
  2007-06-08 22:45         ` Keshavamurthy, Anil S
@ 2007-06-08 22:55           ` Christoph Lameter
  2007-06-10 16:38             ` Arjan van de Ven
  0 siblings, 1 reply; 64+ messages in thread
From: Christoph Lameter @ 2007-06-08 22:55 UTC (permalink / raw)
  To: Keshavamurthy, Anil S
  Cc: Andrew Morton, linux-kernel, ak, gregkh, muli, asit.k.mallick,
	suresh.b.siddha, arjan, ashok.raj, shaohua.li, davem

On Fri, 8 Jun 2007, Keshavamurthy, Anil S wrote:

> > What functionality are you missing in the page allocator? It seems that is 
> > does what you want?
> Humm..I basically want to allocate memory during interrupt context and 
> expect not to fail. I know this is a hard requirement :)

The page allocator can do that for you. The reserves are configurable. Not 
failing is a thing unseen in the computer world.

> I want to be able to reserve certain amount of memory specifically for 
> IOMMU purpose.

That is of course a problem unless you allocate memory beforehand. 
Mempool?




^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: [Intel-IOMMU 02/10] Library routine for pre-allocat pool handling
  2007-06-08 22:36           ` Christoph Lameter
@ 2007-06-08 22:56             ` Andi Kleen
  2007-06-08 22:59               ` Christoph Lameter
  0 siblings, 1 reply; 64+ messages in thread
From: Andi Kleen @ 2007-06-08 22:56 UTC (permalink / raw)
  To: Christoph Lameter
  Cc: Andrew Morton, Keshavamurthy, Anil S, linux-kernel, gregkh, muli,
	asit.k.mallick, suresh.b.siddha, arjan, ashok.raj, shaohua.li,
	davem

On Saturday 09 June 2007 00:36, Christoph Lameter wrote:
> On Fri, 8 Jun 2007, Andreas Kleen wrote:
> > > That's what kmem_cache_alloc() is for?!?!
> >
> > Tradtionally that was not allowed in block layer path. Not sure
> > it is fully obsolete with the recent dirty tracking work, probably not.
>
> Why was it not allowed? Because interrupts are disabled?

Allocating memory during page out under low memory could 
lead to deadlocks. That is because Linux used to make no attempt
to limit dirty pages for anonymous mappings and then you could
end up with most of your memory dirty and not enough 
memory cleanable for page out and then when page out 
needs more memory you could be dead.

[yes that implies that mmap over NFS was always broken]

Now there has been an anon dirty limit for a few releases, but I'm not
fully convinced it solves the problem completely.

> > Besides it would need to be GFP_ATOMIC and the default
> > atomic pools are not that big.
>
> Those could be increased. I think Mel has them already increased in mm.

That would be lots of wasted memory. I already got >100 MB free
on my workstation under steady caching state. Already far too much imho.

-Andi

^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: [Intel-IOMMU 02/10] Library routine for pre-allocat pool handling
  2007-06-08 22:56             ` Andi Kleen
@ 2007-06-08 22:59               ` Christoph Lameter
  2007-06-09  9:47                 ` Andi Kleen
  0 siblings, 1 reply; 64+ messages in thread
From: Christoph Lameter @ 2007-06-08 22:59 UTC (permalink / raw)
  To: Andi Kleen
  Cc: Andrew Morton, Keshavamurthy, Anil S, linux-kernel, gregkh, muli,
	asit.k.mallick, suresh.b.siddha, arjan, ashok.raj, shaohua.li,
	davem

On Sat, 9 Jun 2007, Andi Kleen wrote:

> > Why was it not allowed? Because interrupts are disabled?
> 
> Allocating memory during page out under low memory could 
> lead to deadlocks. That is because Linux used to make no attempt
> to limit dirty pages for anonymous mappings and then you could
> end up with most of your memory dirty and not enough 
> memory cleanable for page out and then when page out 
> needs more memory you could be dead.
> 
> [yes that implies that mmap over NFS was always broken]

Right. We got that fixed in 2.6.19.

> Now there is a anon dirty limit since a few releases, but I'm not
> fully convinced it solves the problem completely.

A gut feeling or is there more?


^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: [Intel-IOMMU 02/10] Library routine for pre-allocat pool handling
  2007-06-08 22:59               ` Christoph Lameter
@ 2007-06-09  9:47                 ` Andi Kleen
  2007-06-11 20:44                   ` Keshavamurthy, Anil S
  0 siblings, 1 reply; 64+ messages in thread
From: Andi Kleen @ 2007-06-09  9:47 UTC (permalink / raw)
  To: Christoph Lameter
  Cc: Andrew Morton, Keshavamurthy, Anil S, linux-kernel, gregkh, muli,
	asit.k.mallick, suresh.b.siddha, arjan, ashok.raj, shaohua.li,
	davem


> > Now there is a anon dirty limit since a few releases, but I'm not
> > fully convinced it solves the problem completely.
> 
> A gut feeling or is there more?

Lots of other subsystem can allocate a lot of memory
and they usually don't cooperate and have similar dirty limit concepts.
So you could run out of usable memory anyways and then have a similar
issue.

For example a flood of network packets could always steal your
GFP_ATOMIC pools very quickly in the background (gigabit or 10gig 
can transfer a lot of data very quickly) 

Also iirc try_to_free_pages() is not completely fair and might fail
under extreme load for some requesters.

Not requiring memory allocation for any IO would be certainly safer.

Anyways, it's a theoretic question because you can't sleep in 
there anyways unless something drastic changes in the driver interfaces.

-Andi

^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: [Intel-IOMMU 02/10] Library routine for pre-allocat pool handling
  2007-06-08 22:55           ` Christoph Lameter
@ 2007-06-10 16:38             ` Arjan van de Ven
  2007-06-11 16:10               ` Christoph Lameter
  0 siblings, 1 reply; 64+ messages in thread
From: Arjan van de Ven @ 2007-06-10 16:38 UTC (permalink / raw)
  To: Christoph Lameter
  Cc: Keshavamurthy, Anil S, Andrew Morton, linux-kernel, ak, gregkh,
	muli, asit.k.mallick, suresh.b.siddha, ashok.raj, shaohua.li,
	davem

Christoph Lameter wrote:
> On Fri, 8 Jun 2007, Keshavamurthy, Anil S wrote:
> 
>>> What functionality are you missing in the page allocator? It seems that is 
>>> does what you want?
>> Humm..I basically want to allocate memory during interrupt context and 
>> expect not to fail. I know this is a hard requirement :)
> 
> The page allocator can do that for you. The reserves are configurable. Not 
> failing is a thing unseen in the computer world.

but the page allocator reserve is shared.. and you will need this one 
EXACTLY when the shared pool is getting low... it's not an 
uncorrelated thing!

^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: [Intel-IOMMU 02/10] Library routine for pre-allocat pool handling
  2007-06-11 21:14                     ` Andrew Morton
@ 2007-06-11  9:46                       ` Ashok Raj
  2007-06-11 22:16                       ` Andi Kleen
  2007-06-11 23:52                       ` Keshavamurthy, Anil S
  2 siblings, 0 replies; 64+ messages in thread
From: Ashok Raj @ 2007-06-11  9:46 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Keshavamurthy, Anil S, Andi Kleen, Christoph Lameter,
	linux-kernel, gregkh, muli, asit.k.mallick, suresh.b.siddha,
	arjan, ashok.raj, shaohua.li, davem

On Mon, Jun 11, 2007 at 02:14:49PM -0700, Andrew Morton wrote:
> > 
> > Again, if dma_map_{single|sg} API's fails due to 
> > failure to allocate memory, the only thing that can
> > be done is to panic as this is what few of the other 
> > IOMMU implementation is doing today. 
> 
> If the only option is to panic then something's busted.  If it's network IO
> then there should be a way of dropping the frame.  If it's disk IO then we
> should report the failure and cause an IO error.

Just looking at the code, it appears that quite a few popular drivers (or should I say most) don't even look at the dma_addr_t returned to check for failure.

That's going to be another major cleanup effort.
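
(i.e. every such driver would need something along the lines of the check
below; the function and parameters are just whatever the driver already has
at hand, and note that dma_mapping_error() in this era takes only the
dma_addr_t -- it grew a device argument in later kernels:)

#include <linux/dma-mapping.h>
#include <linux/errno.h>
#include <linux/types.h>

static int my_map_buffer(struct device *dev, void *buf, size_t len,
                         dma_addr_t *handle)
{
        *handle = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
        if (dma_mapping_error(*handle))
                return -ENOMEM; /* drop the frame / fail the I/O, don't panic */
        return 0;
}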

^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: [Intel-IOMMU 02/10] Library routine for pre-allocat pool handling
  2007-06-11 22:25                     ` Andi Kleen
@ 2007-06-11 11:29                       ` Ashok Raj
  2007-06-11 23:15                       ` Keshavamurthy, Anil S
  1 sibling, 0 replies; 64+ messages in thread
From: Ashok Raj @ 2007-06-11 11:29 UTC (permalink / raw)
  To: Andi Kleen
  Cc: Keshavamurthy, Anil S, Christoph Lameter, Andrew Morton,
	linux-kernel, gregkh, muli, asit.k.mallick, suresh.b.siddha,
	arjan, ashok.raj, shaohua.li, davem

On Tue, Jun 12, 2007 at 12:25:57AM +0200, Andi Kleen wrote:
> 
> > Please advice.
> 
> I think the short term only safe option would be to fully preallocate an aperture.
> If it is too small you can try GFP_ATOMIC but it would be just
> a unreliable fallback. For safety you could perhaps have some kernel thread
> that tries to enlarge it in the background depending on current
> use. That would be not 100% guaranteed to keep up with load,
> but would at least keep up if the system is not too busy.
> 
> That is basically what your resource pools do, but they seem
> to be unnecessarily convoluted for the task :- after all you
> could just preallocate the page tables and rewrite/flush them without
> having some kind of allocator inbetween, can't you?

Each IOMMU has multiple domains, where each domain represents an
address space. A PCI Express endpoint can be placed in its own domain
for address protection reasons, and each domain also has its own tag for the IOTLB cache.

Each address space can use either a 3- or 4-level page table, so it would be hard to predict
how much to set up ahead of time for each domain/device.

It's not a simple single-level table with a small window like the GART case.

By just keeping a pool of page-sized pages it is easy to respond and use them where
really necessary, without having to lock pages down before knowing the real demand.

The address space is plentiful, so growing on demand is the best use of the memory
available.
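
(To put rough numbers on the page-table question quoted below, assuming the
usual 4K page-table pages with 512 8-byte entries per level: one leaf table
maps 512 * 4K = 2MB of I/O virtual space, so even a 256MB statically mapped
window costs only 256MB / 2MB = 128 leaf tables = 512KB per domain, plus a
handful of upper-level tables -- but fully pre-populating a 3-level (39-bit)
or 4-level (48-bit) address space for every domain up front is clearly not
an option.)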

> If you make the start value large enough (256+MB?) that might reasonably
> work. How much memory in page tables would that take? Or perhaps scale
> it with available memory or available devices. 
> 
> In theory it could also be precomputed from the block/network device queue 
> lengths etc.; the trouble is just such checks would need to be added to all kinds of 
> other odd subsystems that manage devices too.  That would be much more work.
> 
> Some investigation how to do sleeping block/network submit would be
> also interesting (e.g. replace the spinlocks there with mutexes and see how
> much it affects performance). For networking you would need to keep 
> at least a non sleeping path though because packets can be legally
> submitted from interrupt context. If it works out then sleeping
> interfaces to the IOMMU code could be added.
> 
> -Andi

^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: [Intel-IOMMU 02/10] Library routine for pre-allocat pool handling
  2007-06-10 16:38             ` Arjan van de Ven
@ 2007-06-11 16:10               ` Christoph Lameter
  0 siblings, 0 replies; 64+ messages in thread
From: Christoph Lameter @ 2007-06-11 16:10 UTC (permalink / raw)
  To: Arjan van de Ven
  Cc: Keshavamurthy, Anil S, Andrew Morton, linux-kernel, ak, gregkh,
	muli, asit.k.mallick, suresh.b.siddha, ashok.raj, shaohua.li,
	davem

On Sun, 10 Jun 2007, Arjan van de Ven wrote:

> Christoph Lameter wrote:
> > On Fri, 8 Jun 2007, Keshavamurthy, Anil S wrote:
> > 
> > > > What functionality are you missing in the page allocator? It seems that
> > > > is does what you want?
> > > Humm..I basically want to allocate memory during interrupt context and
> > > expect not to fail. I know this is a hard requirement :)
> > 
> > The page allocator can do that for you. The reserves are configurable. Not
> > failing is a thing unseen in the computer world.
> 
> but the page allocator reserve is shared.. and you will need this one EXACTLY
> when the shared pool is getting low... it's not an uncorrelated thing!

I think what needs to be done first is to define under what conditions the
allocation should not fail. There is certainly no way to have a GFP_ATOMIC
allocation that never fails. After all, memory is limited, and at some point
we need to reclaim memory.
 

^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: [Intel-IOMMU 02/10] Library routine for pre-allocat pool handling
  2007-06-09  9:47                 ` Andi Kleen
@ 2007-06-11 20:44                   ` Keshavamurthy, Anil S
  2007-06-11 21:14                     ` Andrew Morton
                                       ` (2 more replies)
  0 siblings, 3 replies; 64+ messages in thread
From: Keshavamurthy, Anil S @ 2007-06-11 20:44 UTC (permalink / raw)
  To: Andi Kleen
  Cc: Christoph Lameter, Andrew Morton, Keshavamurthy, Anil S,
	linux-kernel, gregkh, muli, asit.k.mallick, suresh.b.siddha,
	arjan, ashok.raj, shaohua.li, davem

On Sat, Jun 09, 2007 at 11:47:23AM +0200, Andi Kleen wrote:
> 
> > > Now there is a anon dirty limit since a few releases, but I'm not
> > > fully convinced it solves the problem completely.
> > 
> > A gut feeling or is there more?
> 
> Lots of other subsystem can allocate a lot of memory
> and they usually don't cooperate and have similar dirty limit concepts.
> So you could run out of usable memory anyways and then have a similar
> issue.
> 
> For example a flood of network packets could always steal your
> GFP_ATOMIC pools very quickly in the background (gigabit or 10gig 
> can transfer a lot of data very quickly)
> 
> Also iirc try_to_free_pages() is not completely fair and might fail
> under extreme load for some requesters.
> 
> Not requiring memory allocation for any IO would be certainly safer.
> 
> Anyways, it's a theoretic question because you can't sleep in 
> there anyways unless something drastic changes in the driver interfaces.

Agreed that the ideal thing would be to change the driver
interfaces so that the dma_map_{single|sg} API's are not called in
interrupt context and/or with spinlocks held, thereby leaving IOMMU drivers
free to block when memory is not available. This seems to be a
noble goal involving huge changes and testing beyond the scope of the
current IOMMU driver. I guess it would be ideal if this gets discussed
and resolved at the kernel summit.

Assuming that we may have to live with the above limitations for a
while, what is the best way to allocate memory in the
dma_map_{single|sg} API's for the IOMMU drivers? (This memory
is required to set up the internal IOMMU page tables etc.)

In our first implementation, we used the mempool API's to
allocate memory and were told that mempools with GFP_ATOMIC are
useless; hence in the second implementation we came up with
resource pools (which are preallocated pools), and again, as I understand it,
the argument is why create another allocator when we have the slab allocator, which
is similar to these resource pools.

Hence, can I assume that the conclusion of this 
discussion is to use kmem_cache_alloc() functions 
to allocate memory in dma_map_{single|sg} API's?

Again, if the dma_map_{single|sg} API's fail due to a memory
allocation failure, the only thing that can be done is to panic, which
is what a few of the other IOMMU implementations do today.

Please advise.

Thanks,
Anil

^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: [Intel-IOMMU 02/10] Library routine for pre-allocat pool handling
  2007-06-11 20:44                   ` Keshavamurthy, Anil S
@ 2007-06-11 21:14                     ` Andrew Morton
  2007-06-11  9:46                       ` Ashok Raj
                                         ` (2 more replies)
  2007-06-11 21:29                     ` Christoph Lameter
  2007-06-11 22:25                     ` Andi Kleen
  2 siblings, 3 replies; 64+ messages in thread
From: Andrew Morton @ 2007-06-11 21:14 UTC (permalink / raw)
  To: Keshavamurthy, Anil S
  Cc: Andi Kleen, Christoph Lameter, linux-kernel, gregkh, muli,
	asit.k.mallick, suresh.b.siddha, arjan, ashok.raj, shaohua.li,
	davem

On Mon, 11 Jun 2007 13:44:42 -0700
"Keshavamurthy, Anil S" <anil.s.keshavamurthy@intel.com> wrote:

> In the first implementation of ours, we had used mempools api's to 
> allocate memory and we were told that mempools with GFP_ATOMIC is
> useless and hence in the second implementation we came up with
> resource pools ( which is preallocate pools) and again as I understand
> the argument is why create another when we have slab allocation which
> is similar to this resource pools.

Odd.  A mempool with GFP_ATOMIC is basically equivalent to your
resource pools, isn't it? We'll try the slab allocator and, if that
fails, fall back to the reserves.

It's missing the recharge-from-a-kernel-thread functionality but that can be
added easily enough if it's useful.  It's slightly abusive of the mempool
philosophy, but it's probably better to do that than to create a new and
very-similar thing.
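
A minimal sketch of that pattern, with purely illustrative names (a
hypothetical iova_cachep slab and a reserve of 64 objects, not the
driver's actual code):

	/* init time: back the pool with an existing slab cache and
	 * keep a small reserve for when the slab allocation fails */
	pool = mempool_create_slab_pool(64, iova_cachep);

	/* map path, atomic context: mempool_alloc() tries the slab
	 * first and only dips into the reserve when that fails */
	obj = mempool_alloc(pool, GFP_ATOMIC);
	if (!obj)
		return NULL;	/* reserve exhausted as well */

	/* ... and when the mapping is torn down: */
	mempool_free(obj, pool);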

> Hence, can I assume that the conclusion of this 
> discussion is to use kmem_cache_alloc() functions 
> to allocate memory in dma_map_{single|sg} API's?
> 
> Again, if dma_map_{single|sg} API's fails due to 
> failure to allocate memory, the only thing that can
> be done is to panic as this is what few of the other 
> IOMMU implementation is doing today. 

If the only option is to panic then something's busted.  If it's network IO
then there should be a way of dropping the frame.  If it's disk IO then we
should report the failure and cause an IO error.


^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: [Intel-IOMMU 02/10] Library routine for pre-allocat pool handling
  2007-06-11 20:44                   ` Keshavamurthy, Anil S
  2007-06-11 21:14                     ` Andrew Morton
@ 2007-06-11 21:29                     ` Christoph Lameter
  2007-06-11 21:40                       ` Keshavamurthy, Anil S
  2007-06-11 22:25                     ` Andi Kleen
  2 siblings, 1 reply; 64+ messages in thread
From: Christoph Lameter @ 2007-06-11 21:29 UTC (permalink / raw)
  To: Keshavamurthy, Anil S
  Cc: Andi Kleen, Andrew Morton, linux-kernel, gregkh, muli,
	asit.k.mallick, suresh.b.siddha, arjan, ashok.raj, shaohua.li,
	davem

On Mon, 11 Jun 2007, Keshavamurthy, Anil S wrote:

> Hence, can I assume that the conclusion of this 
> discussion is to use kmem_cache_alloc() functions 
> to allocate memory in dma_map_{single|sg} API's?


Use the page allocator for page-sized allocations. If you need
specially aligned memory in less-than-page-sized chunks, then use
kmem_cache_alloc with a specially configured slab.
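
A minimal sketch of the page-allocator half of that, with illustrative
helper names (not the actual driver code):

	/* A VT-d page table is exactly one 4 KiB page, so the page
	 * allocator covers it; GFP_ATOMIC because the map path may
	 * run under a spinlock or in interrupt context. */
	static void *alloc_pgtable_page(void)
	{
		return (void *)get_zeroed_page(GFP_ATOMIC);
	}

	static void free_pgtable_page(void *vaddr)
	{
		free_page((unsigned long)vaddr);
	}

For the sub-page objects (e.g. the IOVA nodes) a dedicated, suitably
aligned slab plus kmem_cache_alloc(cachep, GFP_ATOMIC) would be the
equivalent.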
> 
> Again, if dma_map_{single|sg} API's fails due to 
> failure to allocate memory, the only thing that can
> be done is to panic as this is what few of the other 
> IOMMU implementation is doing today. 

Why does it have to be so severe? The I/O operation just fails, right?


^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: [Intel-IOMMU 02/10] Library routine for pre-allocat pool handling
  2007-06-11 21:29                     ` Christoph Lameter
@ 2007-06-11 21:40                       ` Keshavamurthy, Anil S
  0 siblings, 0 replies; 64+ messages in thread
From: Keshavamurthy, Anil S @ 2007-06-11 21:40 UTC (permalink / raw)
  To: Christoph Lameter
  Cc: Keshavamurthy, Anil S, Andi Kleen, Andrew Morton, linux-kernel,
	gregkh, muli, asit.k.mallick, suresh.b.siddha, arjan, ashok.raj,
	shaohua.li, davem

On Mon, Jun 11, 2007 at 02:29:56PM -0700, Christoph Lameter wrote:
> On Mon, 11 Jun 2007, Keshavamurthy, Anil S wrote:
> 
> > Hence, can I assume that the conclusion of this 
> > discussion is to use kmem_cache_alloc() functions 
> > to allocate memory in dma_map_{single|sg} API's?
> 
> 
> Use the page allocator for page size allocations. If you need to have 
> specially aligned memory in less than page sized chunks then use 
> kmem_cache_alloc with a specially configured slab.

Okay, will do this change and get back to the list.


> > 
> > Again, if dma_map_{single|sg} API's fails due to 
> > failure to allocate memory, the only thing that can
> > be done is to panic as this is what few of the other 
> > IOMMU implementation is doing today. 
> 
> Why does it have to be so severe? The I/O operation fails right?
Not sure. Also, most of the callers today don't check for failures.

-Anil

^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: [Intel-IOMMU 02/10] Library routine for pre-allocat pool handling
  2007-06-11 21:14                     ` Andrew Morton
  2007-06-11  9:46                       ` Ashok Raj
@ 2007-06-11 22:16                       ` Andi Kleen
  2007-06-11 23:28                         ` Christoph Lameter
  2007-06-11 23:52                       ` Keshavamurthy, Anil S
  2 siblings, 1 reply; 64+ messages in thread
From: Andi Kleen @ 2007-06-11 22:16 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Keshavamurthy, Anil S, Christoph Lameter, linux-kernel, gregkh,
	muli, asit.k.mallick, suresh.b.siddha, arjan, ashok.raj,
	shaohua.li, davem


> 
> If the only option is to panic then something's busted.  If it's network IO
> then there should be a way of dropping the frame.  If it's disk IO then we
> should report the failure and cause an IO error.

A block IO error is basically catastrophic for the system too. There isn't
really a concept of a "temporary IO error that will resolve itself" in Unix.

There are still lots of users of pci_map_single() that don't check the
return value, unfortunately. That is mostly in old drivers; it is generally
picked on in reviews now. But then there is no guarantee that these rarely
used, likely untested error-handling paths actually work.
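
For reference, the driver-side check that reviews ask for looks roughly
like this (using the single-argument dma_mapping_error() form of that
era; the exact signature has varied between kernel versions):

	dma_addr_t handle;

	handle = pci_map_single(pdev, buf, len, PCI_DMA_TODEVICE);
	if (dma_mapping_error(handle)) {
		/* drop the packet / fail the request rather than
		 * handing junk to the hardware */
		return -ENOMEM;
	}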

The alternative is writing out random junk which is somewhat risky.

We fixed over time all the pci_map_sg()s at least to do the checks correctly.

When I wrote the IOMMU code originally this wasn't the case, and it destroyed
several file systems on test systems due to IOMMU leaks in drivers
(writing junk over the superblock when the IOMMU is full doesn't make
mount happy after the next reboot).

Because of these experiences I'm more inclined towards the panic option,
although x86-64 defaults to not panic these days.

It would be really much better if sleeping was allowed, but it is hard.

-Andi

^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: [Intel-IOMMU 02/10] Library routine for pre-allocat pool handling
  2007-06-11 20:44                   ` Keshavamurthy, Anil S
  2007-06-11 21:14                     ` Andrew Morton
  2007-06-11 21:29                     ` Christoph Lameter
@ 2007-06-11 22:25                     ` Andi Kleen
  2007-06-11 11:29                       ` Ashok Raj
  2007-06-11 23:15                       ` Keshavamurthy, Anil S
  2 siblings, 2 replies; 64+ messages in thread
From: Andi Kleen @ 2007-06-11 22:25 UTC (permalink / raw)
  To: Keshavamurthy, Anil S
  Cc: Christoph Lameter, Andrew Morton, linux-kernel, gregkh, muli,
	asit.k.mallick, suresh.b.siddha, arjan, ashok.raj, shaohua.li,
	davem


> Please advice.

I think the only safe short-term option would be to fully preallocate an
aperture. If it is too small you can try GFP_ATOMIC, but that would just be
an unreliable fallback. For safety you could perhaps have some kernel thread
that tries to enlarge it in the background depending on current
use. That would not be 100% guaranteed to keep up with load,
but would at least keep up if the system is not too busy.

That is basically what your resource pools do, but they seem
unnecessarily convoluted for the task: after all, you could just
preallocate the page tables and rewrite/flush them without
having some kind of allocator in between, couldn't you?

If you make the start value large enough (256+MB?) that might reasonably
work. How much memory in page tables would that take? Or perhaps scale
it with available memory or available devices. 
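
As a rough data point (assuming 4 KiB pages and 8-byte VT-d page-table
entries, i.e. 512 entries per table page), a 256 MB aperture works out to:

	256 MB / 4 KiB = 65536 mapped pages
	65536 / 512    = 128 leaf table pages
	128 * 4 KiB    = 512 KiB of leaf page tables per domain

plus a handful of pages for the upper levels.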

In theory it could also be precomputed from the block/network device queue
lengths etc.; the trouble is just that such checks would need to be added to
all kinds of other odd subsystems that manage devices too. That would be much
more work.

Some investigation into how to do sleeping block/network submission would
also be interesting (e.g. replace the spinlocks there with mutexes and see
how much it affects performance). For networking you would need to keep
at least a non-sleeping path though, because packets can legally be
submitted from interrupt context. If that works out, then sleeping
interfaces to the IOMMU code could be added.

-Andi

^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: [Intel-IOMMU 02/10] Library routine for pre-allocat pool handling
  2007-06-11 22:25                     ` Andi Kleen
  2007-06-11 11:29                       ` Ashok Raj
@ 2007-06-11 23:15                       ` Keshavamurthy, Anil S
  1 sibling, 0 replies; 64+ messages in thread
From: Keshavamurthy, Anil S @ 2007-06-11 23:15 UTC (permalink / raw)
  To: Andi Kleen
  Cc: Keshavamurthy, Anil S, Christoph Lameter, Andrew Morton,
	linux-kernel, gregkh, muli, asit.k.mallick, suresh.b.siddha,
	arjan, ashok.raj, shaohua.li, davem

On Tue, Jun 12, 2007 at 12:25:57AM +0200, Andi Kleen wrote:
> 
> > Please advice.
> 
> I think the short term only safe option would be to fully preallocate an aperture.
> If it is too small you can try GFP_ATOMIC but it would be just
> a unreliable fallback. For safety you could perhaps have some kernel thread
> that tries to enlarge it in the background depending on current
> use. That would be not 100% guaranteed to keep up with load,
> but would at least keep up if the system is not too busy.
> 
> That is basically what your resource pools do, but they seem
> to be unnecessarily convoluted for the task :- after all you
> could just preallocate the page tables and rewrite/flush them without
> having some kind of allocator inbetween, can't you?
Nope, it is not convoluted for the task. If you look carefully at how
the IO virtual address is obtained, I am basically reusing a previously
translated virtual address once it is freed, instead of always handing
out new, contiguous IO virtual addresses. Because of this reuse of IO
virtual addresses, the addresses tend to map to the same page tables,
and hence the memory for the page tables itself does not grow unless
there is so much IO going on in the system that all entries in the page
tables are full (which means that much IO is in flight).

The only defect I see in the current resource pool is that
I am queuing the work to grow the pool on keventd, which could
be a problem, as many other subsystems in the kernel depend
on keventd and, as Andrew pointed out, we could deadlock.
If we have a separate worker thread to grow the pool, then
the deadlock issue is taken care of.
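
A rough sketch of what I mean by a separate thread (the names here are
illustrative, not the actual patch code):

	static struct workqueue_struct *respool_wq;	/* our own thread, not keventd */

	/* at init time: */
	respool_wq = create_singlethread_workqueue("iommu-respool");
	INIT_WORK(&pool->grow_work, respool_grow_fn);

	/* in the allocation path, when the pool drops below the low mark: */
	if (pool->count < pool->low_mark)
		queue_work(respool_wq, &pool->grow_work);

respool_grow_fn() would then do the GFP_KERNEL allocations in process
context to top the pool back up, so nothing in the map path ever sleeps.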

I would still love to get my current resource pool implementation
(just moving the pool-growing work from keventd to a separate thread)
into Linus's kernel rather than a kmem_cache_alloc() implementation,
as I don't see any benefit in moving to kmem_cache_alloc(). But if
people want, I can provide a kmem_cache_alloc() implementation too,
just for comparison. That still does not solve the fundamental
problem that we have today, though.

So ideally for IOMMU's we should have some preallocated buffers, and
if the buffers reach a certain minimum threshold the pool should
grow in the background; all of these features are in the resource pool
implementation. Since we did not see any problems, can we
at least try this resource pool implementation in the -mm
kernels? If it turns out badly, then I will change to the
kmem_cache_alloc() version. If this testing is okay, then
I will refresh my patch for coding style etc. and
resubmit with the resource pool implementation. Andrew??


> 
> If you make the start value large enough (256+MB?) that might reasonably
> work. How much memory in page tables would that take? Or perhaps scale
> it with available memory or available devices. 

What you are suggesting is to preallocate and set up the page tables at the
beginning. But this would waste a lot of memory, because we don't know ahead
of time how large the page table setup should be, and in the future our
hardware can support 64K domains, where each domain can dish out independent
addresses from its own start-to-end address range. Pre-setting up tables for
all of the 64K domains is not feasible.

> 
> In theory it could also be precomputed from the block/network device queue 
> lengths etc.; the trouble is just such checks would need to be added to all kinds of 
> other odd subsystems that manage devices too.  That would be much more work.
> 
> Some investigation how to do sleeping block/network submit would be
> also interesting (e.g. replace the spinlocks there with mutexes and see how
> much it affects performance). For networking you would need to keep 
> at least a non sleeping path though because packets can be legally
> submitted from interrupt context. If it works out then sleeping
> interfaces to the IOMMU code could be added.

Yup, these investigations need to happen, and the sooner the better for all
and for the general Linux community.

> 
> -Andi

^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: [Intel-IOMMU 02/10] Library routine for pre-allocat pool handling
  2007-06-11 22:16                       ` Andi Kleen
@ 2007-06-11 23:28                         ` Christoph Lameter
  0 siblings, 0 replies; 64+ messages in thread
From: Christoph Lameter @ 2007-06-11 23:28 UTC (permalink / raw)
  To: Andi Kleen
  Cc: Andrew Morton, Keshavamurthy, Anil S, linux-kernel, gregkh, muli,
	asit.k.mallick, suresh.b.siddha, arjan, ashok.raj, shaohua.li,
	davem

On Tue, 12 Jun 2007, Andi Kleen wrote:

> > If the only option is to panic then something's busted.  If it's network IO
> > then there should be a way of dropping the frame.  If it's disk IO then we
> > should report the failure and cause an IO error.
> 
> An block IO error is basically catastrophic for the system too. There isn't really
> a concept of "temporary IO error that will resolve itself" concept in Unix.

In Unix? You mean the block layer cannot handle an I/O error? I'm not too
familiar with it, but from what I can tell an I/O operation can be aborted
in the request function.

> There are still lots of users of pci_map_single() that don't check the return
> value unfortunately.   That is mostly in old drivers; it is generally
> picked on in reviews now. But then there is no guarantee that these rarely
> used likely untested error handling paths actually work.

Then we need to review the code and clean these up.

^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: [Intel-IOMMU 02/10] Library routine for pre-allocat pool handling
  2007-06-11 21:14                     ` Andrew Morton
  2007-06-11  9:46                       ` Ashok Raj
  2007-06-11 22:16                       ` Andi Kleen
@ 2007-06-11 23:52                       ` Keshavamurthy, Anil S
  2007-06-12  0:30                         ` Andrew Morton
  2007-06-12  0:38                         ` Christoph Lameter
  2 siblings, 2 replies; 64+ messages in thread
From: Keshavamurthy, Anil S @ 2007-06-11 23:52 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Keshavamurthy, Anil S, Andi Kleen, Christoph Lameter,
	linux-kernel, gregkh, muli, asit.k.mallick, suresh.b.siddha,
	arjan, ashok.raj, shaohua.li, davem

On Mon, Jun 11, 2007 at 02:14:49PM -0700, Andrew Morton wrote:
> On Mon, 11 Jun 2007 13:44:42 -0700
> "Keshavamurthy, Anil S" <anil.s.keshavamurthy@intel.com> wrote:
> 
> > In the first implementation of ours, we had used mempools api's to 
> > allocate memory and we were told that mempools with GFP_ATOMIC is
> > useless and hence in the second implementation we came up with
> > resource pools ( which is preallocate pools) and again as I understand
> > the argument is why create another when we have slab allocation which
> > is similar to this resource pools.
> 
> Odd.  mempool with GFP_ATOMIC is basically equivalent to your
> resource-pools, isn't it?: we'll try the slab allocator and if that failed,
> fall back to the reserves.

Slab allocators don't reserve the memory; in other words, this memory
can be consumed by the VM under memory pressure, which we don't want in
the IOMMU case.

Nope, they are exactly opposite.
A mempool with GFP_ATOMIC first tries to get memory from the OS, and
if that fails it takes an object from the pool and returns it.

The resource pool is the exact opposite of the mempool: each time
it first looks for an object in the pool and, if one exists,
returns that object; otherwise it tries to get the memory from the OS
while scheduling work to grow the pool objects. In fact, the work
to grow the pool is scheduled when the low-threshold point is hit.
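
In pseudo-C, the fast path looks roughly like this (illustrative names;
the real code also takes a lock around the pool):

	obj = respool_take(pool);			/* pool first */
	if (!obj)
		obj = kmalloc(size, GFP_ATOMIC);	/* OS only as a fallback */
	if (pool->count < pool->low_mark)
		schedule_work(&pool->grow_work);	/* refill in the background */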

Hence, I still feel the resource pool implementation is the best choice
for the IOMMU.

-Anil

^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: [Intel-IOMMU 02/10] Library routine for pre-allocat pool handling
  2007-06-11 23:52                       ` Keshavamurthy, Anil S
@ 2007-06-12  0:30                         ` Andrew Morton
  2007-06-12  1:10                           ` Arjan van de Ven
  2007-06-12  0:38                         ` Christoph Lameter
  1 sibling, 1 reply; 64+ messages in thread
From: Andrew Morton @ 2007-06-12  0:30 UTC (permalink / raw)
  To: Keshavamurthy, Anil S
  Cc: Andi Kleen, Christoph Lameter, linux-kernel, gregkh, muli,
	asit.k.mallick, suresh.b.siddha, arjan, ashok.raj, shaohua.li,
	davem

On Mon, 11 Jun 2007 16:52:08 -0700 "Keshavamurthy, Anil S" <anil.s.keshavamurthy@intel.com> wrote:

> On Mon, Jun 11, 2007 at 02:14:49PM -0700, Andrew Morton wrote:
> > On Mon, 11 Jun 2007 13:44:42 -0700
> > "Keshavamurthy, Anil S" <anil.s.keshavamurthy@intel.com> wrote:
> > 
> > > In the first implementation of ours, we had used mempools api's to 
> > > allocate memory and we were told that mempools with GFP_ATOMIC is
> > > useless and hence in the second implementation we came up with
> > > resource pools ( which is preallocate pools) and again as I understand
> > > the argument is why create another when we have slab allocation which
> > > is similar to this resource pools.
> > 
> > Odd.  mempool with GFP_ATOMIC is basically equivalent to your
> > resource-pools, isn't it?: we'll try the slab allocator and if that failed,
> > fall back to the reserves.
> 
> slab allocators don;t reserve the memory, in other words this memory 
> can be consumed by VM under memory pressure which we don;t want in
> IOMMU case.
> 
> Nope,they both are exactly opposite. 
> mempool with GFP_ATOMIC, first tries to get memory from OS and
> if that fails, it looks for the object in the pool and returns.
> 
> Where as resource pool is exactly opposite of mempool, where each 
> time it looks for an object in the pool and if it exist then we 
> return that object else we try to get the memory for OS while 
> scheduling the work to grow the pool objects. In fact, the  work
> is schedule to grow the pool when the low threshold point is hit.

I realise all that.  But I'd have thought that the mempool approach is
actually better: use the page allocator and only deplete your reserve pool
when the page allocator fails.

The refill-the-pool-in-the-background feature sounds pretty worthless to
me.  On a uniprocessor machine (for example), the kernel thread may easily
not get scheduled for tens of milliseconds, which is far, far longer than it
takes for that reserve pool to become fully consumed.


^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: [Intel-IOMMU 02/10] Library routine for pre-allocat pool handling
  2007-06-11 23:52                       ` Keshavamurthy, Anil S
  2007-06-12  0:30                         ` Andrew Morton
@ 2007-06-12  0:38                         ` Christoph Lameter
  1 sibling, 0 replies; 64+ messages in thread
From: Christoph Lameter @ 2007-06-12  0:38 UTC (permalink / raw)
  To: Keshavamurthy, Anil S
  Cc: Andrew Morton, Andi Kleen, linux-kernel, gregkh, muli,
	asit.k.mallick, suresh.b.siddha, arjan, ashok.raj, shaohua.li,
	davem

On Mon, 11 Jun 2007, Keshavamurthy, Anil S wrote:

> slab allocators don;t reserve the memory, in other words this memory 
> can be consumed by VM under memory pressure which we don;t want in
> IOMMU case.

So mempools....

> Nope,they both are exactly opposite. 
> mempool with GFP_ATOMIC, first tries to get memory from OS and
> if that fails, it looks for the object in the pool and returns.

How does the difference matter? In both cases you get the memory you want.

> Where as resource pool is exactly opposite of mempool, where each 
> time it looks for an object in the pool and if it exist then we 
> return that object else we try to get the memory for OS while 
> scheduling the work to grow the pool objects. In fact, the  work
> is schedule to grow the pool when the low threshold point is hit.

Grow the mempool when the low-threshold point is hit? Or equip mempools with
the functionality that you want?

^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: [Intel-IOMMU 02/10] Library routine for pre-allocat pool handling
  2007-06-12  0:30                         ` Andrew Morton
@ 2007-06-12  1:10                           ` Arjan van de Ven
  2007-06-12  1:30                             ` Christoph Lameter
  2007-06-12  1:35                             ` Andrew Morton
  0 siblings, 2 replies; 64+ messages in thread
From: Arjan van de Ven @ 2007-06-12  1:10 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Keshavamurthy, Anil S, Andi Kleen, Christoph Lameter,
	linux-kernel, gregkh, muli, asit.k.mallick, suresh.b.siddha,
	ashok.raj, shaohua.li, davem

Andrew Morton wrote:
>> Where as resource pool is exactly opposite of mempool, where each 
>> time it looks for an object in the pool and if it exist then we 
>> return that object else we try to get the memory for OS while 
>> scheduling the work to grow the pool objects. In fact, the  work
>> is schedule to grow the pool when the low threshold point is hit.
> 
> I realise all that.  But I'd have thought that the mempool approach is
> actually better: use the page allocator and only deplete your reserve pool
> when the page allocator fails.

The problem with that is that if anything downstream from the iommu
layer ALSO needs memory, we've now eaten up the last free page and
things go splat.

In terms of deadlock avoidance... I wonder if we need something
similar to the swap token: once a process dips into the emergency
pool, it becomes the only one that gets to use this pool, so that its
entire chain of allocations will succeed, rather than each process
only getting halfway through...

But yeah, these are minute details, and you can argue that either way is
the right approach.

You can even argue for the old highmem.c approach: go into half the
pool before going to the VM, then to kmalloc(), and if that fails dip
into the second half of the pool.
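
Roughly, with hypothetical helpers just to show the ordering:

	/* stage 1: first half of the reserve, cheap and lock-only */
	if (pool->count > pool->min_nr / 2)
		return pool_take(pool);

	/* stage 2: the normal allocator */
	obj = kmalloc(size, GFP_ATOMIC);
	if (obj)
		return obj;

	/* stage 3: last resort, the second half of the reserve */
	return pool_take(pool);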

^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: [Intel-IOMMU 02/10] Library routine for pre-allocat pool handling
  2007-06-12  1:10                           ` Arjan van de Ven
@ 2007-06-12  1:30                             ` Christoph Lameter
  2007-06-12  1:35                             ` Andrew Morton
  1 sibling, 0 replies; 64+ messages in thread
From: Christoph Lameter @ 2007-06-12  1:30 UTC (permalink / raw)
  To: Arjan van de Ven
  Cc: Andrew Morton, Keshavamurthy, Anil S, Andi Kleen, linux-kernel,
	gregkh, muli, asit.k.mallick, suresh.b.siddha, ashok.raj,
	shaohua.li, davem

On Mon, 11 Jun 2007, Arjan van de Ven wrote:

> the problem with that is that if anything downstream from the iommu layer ALSO
> needs memory, we've now eaten up the last free page and things go splat.

Hmmm... We need something like a reservation system, right? Higher levels
in an atomic context could register their future needs. Then we can avoid
overallocating in the iommu layer.


^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: [Intel-IOMMU 02/10] Library routine for pre-allocat pool handling
  2007-06-12  1:10                           ` Arjan van de Ven
  2007-06-12  1:30                             ` Christoph Lameter
@ 2007-06-12  1:35                             ` Andrew Morton
  2007-06-12  1:55                               ` Arjan van de Ven
  1 sibling, 1 reply; 64+ messages in thread
From: Andrew Morton @ 2007-06-12  1:35 UTC (permalink / raw)
  To: Arjan van de Ven
  Cc: Keshavamurthy, Anil S, Andi Kleen, Christoph Lameter,
	linux-kernel, gregkh, muli, asit.k.mallick, suresh.b.siddha,
	ashok.raj, shaohua.li, davem

On Mon, 11 Jun 2007 18:10:40 -0700 Arjan van de Ven <arjan@linux.intel.com> wrote:

> Andrew Morton wrote:
> >> Where as resource pool is exactly opposite of mempool, where each 
> >> time it looks for an object in the pool and if it exist then we 
> >> return that object else we try to get the memory for OS while 
> >> scheduling the work to grow the pool objects. In fact, the  work
> >> is schedule to grow the pool when the low threshold point is hit.
> > 
> > I realise all that.  But I'd have thought that the mempool approach is
> > actually better: use the page allocator and only deplete your reserve pool
> > when the page allocator fails.
> 
> the problem with that is that if anything downstream from the iommu 
> layer ALSO needs memory, we've now eaten up the last free page and 
> things go splat.

If that happens, we still have the mempool reserve to fall back to.

I don't see why it is better to consume the reserves before going to the
page allocator instead of holding them, err, in reserve.


^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: [Intel-IOMMU 02/10] Library routine for pre-allocat pool handling
  2007-06-12  1:35                             ` Andrew Morton
@ 2007-06-12  1:55                               ` Arjan van de Ven
  2007-06-12  2:08                                 ` Siddha, Suresh B
  2007-06-13 18:40                                 ` Matt Mackall
  0 siblings, 2 replies; 64+ messages in thread
From: Arjan van de Ven @ 2007-06-12  1:55 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Keshavamurthy, Anil S, Andi Kleen, Christoph Lameter,
	linux-kernel, gregkh, muli, asit.k.mallick, suresh.b.siddha,
	ashok.raj, shaohua.li, davem

Andrew Morton wrote:
> On Mon, 11 Jun 2007 18:10:40 -0700 Arjan van de Ven <arjan@linux.intel.com> wrote:
> 
>> Andrew Morton wrote:
>>>> Where as resource pool is exactly opposite of mempool, where each 
>>>> time it looks for an object in the pool and if it exist then we 
>>>> return that object else we try to get the memory for OS while 
>>>> scheduling the work to grow the pool objects. In fact, the  work
>>>> is schedule to grow the pool when the low threshold point is hit.
>>> I realise all that.  But I'd have thought that the mempool approach is
>>> actually better: use the page allocator and only deplete your reserve pool
>>> when the page allocator fails.
>> the problem with that is that if anything downstream from the iommu 
>> layer ALSO needs memory, we've now eaten up the last free page and 
>> things go splat.
> 
> If that happens, we still have the mempool reserve to fall back to.

We do, except that we just ate the memory the downstream code would
use and get... so THAT can't get any.

^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: [Intel-IOMMU 02/10] Library routine for pre-allocat pool handling
  2007-06-12  1:55                               ` Arjan van de Ven
@ 2007-06-12  2:08                                 ` Siddha, Suresh B
  2007-06-13 18:40                                 ` Matt Mackall
  1 sibling, 0 replies; 64+ messages in thread
From: Siddha, Suresh B @ 2007-06-12  2:08 UTC (permalink / raw)
  To: Arjan van de Ven
  Cc: Andrew Morton, Keshavamurthy, Anil S, Andi Kleen,
	Christoph Lameter, linux-kernel, gregkh, muli, asit.k.mallick,
	suresh.b.siddha, ashok.raj, shaohua.li, davem

On Mon, Jun 11, 2007 at 06:55:46PM -0700, Arjan van de Ven wrote:
> Andrew Morton wrote:
> >On Mon, 11 Jun 2007 18:10:40 -0700 Arjan van de Ven 
> ><arjan@linux.intel.com> wrote:
> >
> >>Andrew Morton wrote:
> >>>>Where as resource pool is exactly opposite of mempool, where each 
> >>>>time it looks for an object in the pool and if it exist then we 
> >>>>return that object else we try to get the memory for OS while 
> >>>>scheduling the work to grow the pool objects. In fact, the  work
> >>>>is schedule to grow the pool when the low threshold point is hit.
> >>>I realise all that.  But I'd have thought that the mempool approach is
> >>>actually better: use the page allocator and only deplete your reserve 
> >>>pool
> >>>when the page allocator fails.
> >>the problem with that is that if anything downstream from the iommu 
> >>layer ALSO needs memory, we've now eaten up the last free page and 
> >>things go splat.
> >
> >If that happens, we still have the mempool reserve to fall back to.
> 
> we do, except that we just ate the memory the downstream code would 
> use and get ... so THAT can't get any.

Then this problem can happen irrespective of the changes we are
reviewing in this patch set, can't it?

thanks,
suresh

^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: [Intel-IOMMU 02/10] Library routine for pre-allocat pool handling
  2007-06-12  1:55                               ` Arjan van de Ven
  2007-06-12  2:08                                 ` Siddha, Suresh B
@ 2007-06-13 18:40                                 ` Matt Mackall
  2007-06-13 19:04                                   ` Andi Kleen
  1 sibling, 1 reply; 64+ messages in thread
From: Matt Mackall @ 2007-06-13 18:40 UTC (permalink / raw)
  To: Arjan van de Ven
  Cc: Andrew Morton, Keshavamurthy, Anil S, Andi Kleen,
	Christoph Lameter, linux-kernel, gregkh, muli, asit.k.mallick,
	suresh.b.siddha, ashok.raj, shaohua.li, davem

On Mon, Jun 11, 2007 at 06:55:46PM -0700, Arjan van de Ven wrote:
> Andrew Morton wrote:
> >On Mon, 11 Jun 2007 18:10:40 -0700 Arjan van de Ven 
> ><arjan@linux.intel.com> wrote:
> >
> >>Andrew Morton wrote:
> >>>>Where as resource pool is exactly opposite of mempool, where each 
> >>>>time it looks for an object in the pool and if it exist then we 
> >>>>return that object else we try to get the memory for OS while 
> >>>>scheduling the work to grow the pool objects. In fact, the  work
> >>>>is schedule to grow the pool when the low threshold point is hit.
> >>>I realise all that.  But I'd have thought that the mempool approach is
> >>>actually better: use the page allocator and only deplete your reserve 
> >>>pool
> >>>when the page allocator fails.
> >>the problem with that is that if anything downstream from the iommu 
> >>layer ALSO needs memory, we've now eaten up the last free page and 
> >>things go splat.
> >
> >If that happens, we still have the mempool reserve to fall back to.
> 
> we do, except that we just ate the memory the downstream code would 
> use and get ... so THAT can't get any.

Then the downstream ought to be using a mempool?

-- 
Mathematics is the supreme nostalgia of our time.

^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: [Intel-IOMMU 02/10] Library routine for pre-allocat pool handling
  2007-06-13 18:40                                 ` Matt Mackall
@ 2007-06-13 19:04                                   ` Andi Kleen
  0 siblings, 0 replies; 64+ messages in thread
From: Andi Kleen @ 2007-06-13 19:04 UTC (permalink / raw)
  To: Matt Mackall
  Cc: Arjan van de Ven, Andrew Morton, Keshavamurthy, Anil S,
	Christoph Lameter, linux-kernel, gregkh, muli, asit.k.mallick,
	suresh.b.siddha, ashok.raj, shaohua.li, davem

On Wednesday 13 June 2007 20:40:11 Matt Mackall wrote:
> On Mon, Jun 11, 2007 at 06:55:46PM -0700, Arjan van de Ven wrote:
> > Andrew Morton wrote:
> > >On Mon, 11 Jun 2007 18:10:40 -0700 Arjan van de Ven 
> > ><arjan@linux.intel.com> wrote:
> > >
> > >>Andrew Morton wrote:
> > >>>>Where as resource pool is exactly opposite of mempool, where each 
> > >>>>time it looks for an object in the pool and if it exist then we 
> > >>>>return that object else we try to get the memory for OS while 
> > >>>>scheduling the work to grow the pool objects. In fact, the  work
> > >>>>is schedule to grow the pool when the low threshold point is hit.
> > >>>I realise all that.  But I'd have thought that the mempool approach is
> > >>>actually better: use the page allocator and only deplete your reserve 
> > >>>pool
> > >>>when the page allocator fails.
> > >>the problem with that is that if anything downstream from the iommu 
> > >>layer ALSO needs memory, we've now eaten up the last free page and 
> > >>things go splat.
> > >
> > >If that happens, we still have the mempool reserve to fall back to.
> > 
> > we do, except that we just ate the memory the downstream code would 
> > use and get ... so THAT can't get any.
> 
> Then the downstream ought to be using a mempool?

Normally there shouldn't be a downstream. PCI IO tends not to be stacked,
but to sit at the edges (unless you're talking about hypervisors with virtual
devices, but those definitely have separate memory pools). And the drivers
I'm familiar with tend to first grab whatever resources they need and then
set up the DMA mappings.

-Andi

^ permalink raw reply	[flat|nested] 64+ messages in thread

* Re: [Intel-IOMMU 06/10] Intel IOMMU driver
  2007-06-07 23:57   ` Andrew Morton
  2007-06-08 22:30     ` Christoph Lameter
@ 2007-06-13 20:20     ` Keshavamurthy, Anil S
  1 sibling, 0 replies; 64+ messages in thread
From: Keshavamurthy, Anil S @ 2007-06-13 20:20 UTC (permalink / raw)
  To: Andrew Morton
  Cc: anil.s.keshavamurthy, linux-kernel, ak, gregkh, muli,
	asit.k.mallick, suresh.b.siddha, arjan, ashok.raj, shaohua.li,
	davem

On Thu, Jun 07, 2007 at 04:57:39PM -0700, Andrew Morton wrote:
> On Wed, 06 Jun 2007 11:57:04 -0700
> anil.s.keshavamurthy@intel.com wrote:
Andrew,
	Most of your comments make sense; I will fix all of them and resend soon.
Thanks for taking the time to look at my patch.

-Anil

^ permalink raw reply	[flat|nested] 64+ messages in thread

* [Intel-IOMMU 00/10] Intel IOMMU support
@ 2007-06-04 21:02 anil.s.keshavamurthy
  0 siblings, 0 replies; 64+ messages in thread
From: anil.s.keshavamurthy @ 2007-06-04 21:02 UTC (permalink / raw)
  To: linux-kernel
  Cc: akpm, ak, gregkh, muli, asit.k.mallick, suresh.b.siddha,
	anil.s.keshavamurthy, arjan, ashok.raj, shaohua.li, davem

Hi,
	We are pleased to announce the revised version of
the Intel IOMMU driver. This driver incorporates several
feedback received from Andi Kleen, David Miller and 
several others.

Most notable changes from previous postings (apart from
general code cleanup) are

1) Replaced linear linked list with RB tree to manage IOVA's.

2) IOVA address is now being allocated from the cards MAX DMA address capability or 
DMA32bit limit which ever is lower. This allowed us to get rid of having to
preserve certain address range when multiple cards of different DMA address
capabilities share the same domain.

3)Implements generic pre-allocated pools a.k.a. resource pool to allocate
memory for IOVA's and for vt-d page tables. This resource pools grows
automagically in the background (work queued to keventd) based
on the demand.

4) Did some tuning in terms of locking for iova allocation and freeing.

5) Changed command line options for isa and gfx workaround to CONFIG options,
so that when we have all the components adhere to PCI-DMA api's we can
easily yank this workarounds.

With all the above changes, the performance greatly improved and
the results showed that performance with IOMMU was comparable to 
without IOMMU configured.


Once again, thanks for providing valuable feedback, please
apply this set of patches to MM if you have no further objections.


Cheers,
-Anil Keshavamurthy
e-mail: anil.s.keshavamurthy@intel.com
Open Source Technology Center
Intel Corp.
-- 

^ permalink raw reply	[flat|nested] 64+ messages in thread

end of thread, other threads:[~2007-06-13 20:25 UTC | newest]

Thread overview: 64+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2007-06-06 18:56 [Intel-IOMMU 00/10] Intel IOMMU Support anil.s.keshavamurthy
2007-06-06 18:56 ` [Intel-IOMMU 01/10] DMAR detection and parsing logic anil.s.keshavamurthy
2007-06-06 18:57 ` [Intel-IOMMU 02/10] Library routine for pre-allocat pool handling anil.s.keshavamurthy
2007-06-07 23:27   ` Andrew Morton
2007-06-08 18:21     ` Keshavamurthy, Anil S
2007-06-08 19:01       ` Andrew Morton
2007-06-08 20:12         ` Keshavamurthy, Anil S
2007-06-08 20:40           ` Siddha, Suresh B
2007-06-08 20:44           ` Andrew Morton
2007-06-08 22:33           ` Christoph Lameter
2007-06-08 22:49             ` Keshavamurthy, Anil S
2007-06-08 20:43         ` Andreas Kleen
2007-06-08 20:55           ` Andrew Morton
2007-06-08 22:31             ` Andi Kleen
2007-06-08 21:20           ` Keshavamurthy, Anil S
2007-06-08 21:42             ` Andrew Morton
2007-06-08 22:17               ` Arjan van de Ven
2007-06-08 22:18               ` Siddha, Suresh B
2007-06-08 22:38                 ` Christoph Lameter
2007-06-08 22:36           ` Christoph Lameter
2007-06-08 22:56             ` Andi Kleen
2007-06-08 22:59               ` Christoph Lameter
2007-06-09  9:47                 ` Andi Kleen
2007-06-11 20:44                   ` Keshavamurthy, Anil S
2007-06-11 21:14                     ` Andrew Morton
2007-06-11  9:46                       ` Ashok Raj
2007-06-11 22:16                       ` Andi Kleen
2007-06-11 23:28                         ` Christoph Lameter
2007-06-11 23:52                       ` Keshavamurthy, Anil S
2007-06-12  0:30                         ` Andrew Morton
2007-06-12  1:10                           ` Arjan van de Ven
2007-06-12  1:30                             ` Christoph Lameter
2007-06-12  1:35                             ` Andrew Morton
2007-06-12  1:55                               ` Arjan van de Ven
2007-06-12  2:08                                 ` Siddha, Suresh B
2007-06-13 18:40                                 ` Matt Mackall
2007-06-13 19:04                                   ` Andi Kleen
2007-06-12  0:38                         ` Christoph Lameter
2007-06-11 21:29                     ` Christoph Lameter
2007-06-11 21:40                       ` Keshavamurthy, Anil S
2007-06-11 22:25                     ` Andi Kleen
2007-06-11 11:29                       ` Ashok Raj
2007-06-11 23:15                       ` Keshavamurthy, Anil S
2007-06-08 22:32       ` Christoph Lameter
2007-06-08 22:45         ` Keshavamurthy, Anil S
2007-06-08 22:55           ` Christoph Lameter
2007-06-10 16:38             ` Arjan van de Ven
2007-06-11 16:10               ` Christoph Lameter
2007-06-06 18:57 ` [Intel-IOMMU 03/10] PCI generic helper function anil.s.keshavamurthy
2007-06-06 18:57 ` [Intel-IOMMU 04/10] clflush_cache_range now takes size param anil.s.keshavamurthy
2007-06-06 18:57 ` [Intel-IOMMU 05/10] IOVA allocation and management routines anil.s.keshavamurthy
2007-06-07 23:34   ` Andrew Morton
2007-06-08 18:25     ` Keshavamurthy, Anil S
2007-06-06 18:57 ` [Intel-IOMMU 06/10] Intel IOMMU driver anil.s.keshavamurthy
2007-06-07 23:57   ` Andrew Morton
2007-06-08 22:30     ` Christoph Lameter
2007-06-13 20:20     ` Keshavamurthy, Anil S
2007-06-06 18:57 ` [Intel-IOMMU 07/10] Intel iommu cmdline option - forcedac anil.s.keshavamurthy
2007-06-07 23:58   ` Andrew Morton
2007-06-06 18:57 ` [Intel-IOMMU 08/10] DMAR fault handling support anil.s.keshavamurthy
2007-06-06 18:57 ` [Intel-IOMMU 09/10] Iommu Gfx workaround anil.s.keshavamurthy
2007-06-08  0:01   ` Andrew Morton
2007-06-06 18:57 ` [Intel-IOMMU 10/10] Iommu floppy workaround anil.s.keshavamurthy
  -- strict thread matches above, loose matches on Subject: below --
2007-06-04 21:02 [Intel-IOMMU 00/10] Intel IOMMU support anil.s.keshavamurthy
