linux-kernel.vger.kernel.org archive mirror
* [PATCH 0/7] I/OAT: Add support for DCA - Direct Cache Access
@ 2007-07-20  0:44 Shannon Nelson
  2007-07-20  0:44 ` [PATCH 1/7] I/OAT: New device ids Shannon Nelson
                   ` (6 more replies)
  0 siblings, 7 replies; 33+ messages in thread
From: Shannon Nelson @ 2007-07-20  0:44 UTC
  To: akpm, linux-kernel
  Cc: davem, jeff, dan.j.williams, christopher.leech, peter.p.waskiewicz.jr

The following series implements support for providers and clients of
Direct Cache Access (DCA), a mechanism that lets an I/O device warm the
target CPU's cache with incoming data before the CPU needs it.
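
For readers new to the model, here is a rough client-side sketch (not
part of this series) of how a NIC driver might use the DCA services
added by patches 6 and 7.  The calls below come from the dca.h interface
introduced later in the series; treat the exact signatures as
approximate, and the control register as purely device-specific:

	#include <linux/dca.h>

	static void example_setup_dca(struct device *dev, void __iomem *dca_ctrl)
	{
		u8 tag;

		if (dca_add_requester(dev) == 0) {
			/* ask the provider for the tag that steers this
			 * device's writes toward CPU 0's cache */
			tag = dca_get_tag(0);
			/* program the device-specific DCA control register */
			writeb(tag, dca_ctrl);
		}
	}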

This series applies on top of git commit 5bae7ac9feba925fd0099057f6b23d7be80b7b41.

ioat-new-device-ids.patch
	- add device IDs for newer Intel chipsets that support DMA and DCA
ioat-rename-source-file.patch
	- prepare for adding new functionality
ioat-dma-cleanups.patch
	- clean up some code ugliness
ioat-split-startup-code.patch
	- split the DMA support code from the PCI startup
ioat-add-msi-msix-support.patch
	- add support for various interrupt handling schemes (see the
	  sketch after this list)
ioat-add-dca-support.patch
	- add the dca driver
ioat-add-ioat-dca.patch
	- add DCA services to the ioatdma driver
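
The sketch referenced in the interrupt item above: the driver already
tries MSI and falls back to a shared legacy interrupt, as visible in
ioat_probe() in patch 2/7; the MSI/MSI-X patch generalizes this.
Condensed from the #ifdef block in that patch:

	#ifdef CONFIG_PCI_MSI
		device->msi = (pci_enable_msi(pdev) == 0);
	#endif
		err = request_irq(pdev->irq, &ioat_do_interrupt, IRQF_SHARED,
				  "ioat", device);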

Please pull from my git tree at
	git://lost.foo-projects.org/~sln/linux-2.6 dca-upstream

Thanks to Dan Williams, Auke Kok, PJ Waskiewicz, and Chris Leech for their
help.

sln
--
======================================================================
Mr. Shannon Nelson                 LAN Access Division, Intel Corp.
Shannon.Nelson@intel.com                I don't speak for Intel
(503) 712-7659                    Parents can't afford to be squeamish. 


* [PATCH 1/7] I/OAT: New device ids
  2007-07-20  0:44 [PATCH 0/7] I/OAT: Add support for DCA - Direct Cache Access Shannon Nelson
@ 2007-07-20  0:44 ` Shannon Nelson
  2007-07-20  0:49   ` David Miller
  2007-07-20  0:44 ` [PATCH 2/7] I/OAT: Rename the source file Shannon Nelson
                   ` (5 subsequent siblings)
  6 siblings, 1 reply; 33+ messages in thread
From: Shannon Nelson @ 2007-07-20  0:44 UTC
  To: akpm, linux-kernel
  Cc: davem, jeff, dan.j.williams, christopher.leech, peter.p.waskiewicz.jr

Add device IDs for new revisions of the Intel I/OAT DMA engine.
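
For context, an entry in the driver's pci_device_id table is all the PCI
core needs to bind the driver to the new parts at probe time.  A minimal
standalone sketch of the pattern (example_tbl is hypothetical; the two
IDs are the real ones added below):

	static struct pci_device_id example_tbl[] = {
		{ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_IOAT_CNB)  },
		{ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_IOAT_SCNB) },
		{ 0, }
	};
	MODULE_DEVICE_TABLE(pci, example_tbl);	/* enable hotplug autoload */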

Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
---

 drivers/dma/ioatdma.c   |    5 +++--
 include/linux/pci_ids.h |    2 ++
 2 files changed, 5 insertions(+), 2 deletions(-)

diff --git a/drivers/dma/ioatdma.c b/drivers/dma/ioatdma.c
index 5fbe56b..52e2ac2 100644
--- a/drivers/dma/ioatdma.c
+++ b/drivers/dma/ioatdma.c
@@ -517,8 +517,9 @@ static enum dma_status ioat_dma_is_complete(struct dma_chan *chan,
 
 static struct pci_device_id ioat_pci_tbl[] = {
 	{ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_IOAT) },
-	{ PCI_DEVICE(PCI_VENDOR_ID_UNISYS,
-		     PCI_DEVICE_ID_UNISYS_DMA_DIRECTOR) },
+	{ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_IOAT_CNB)  },
+	{ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_IOAT_SCNB) },
+	{ PCI_DEVICE(PCI_VENDOR_ID_UNISYS, PCI_DEVICE_ID_UNISYS_DMA_DIRECTOR) },
 	{ 0, }
 };
 
diff --git a/include/linux/pci_ids.h b/include/linux/pci_ids.h
index 2c7add1..fd7b79e 100644
--- a/include/linux/pci_ids.h
+++ b/include/linux/pci_ids.h
@@ -2258,6 +2258,8 @@
 #define PCI_DEVICE_ID_INTEL_MCH_PC	0x3599
 #define PCI_DEVICE_ID_INTEL_MCH_PC1	0x359a
 #define PCI_DEVICE_ID_INTEL_E7525_MCH	0x359e
+#define PCI_DEVICE_ID_INTEL_IOAT_CNB	0x360b
+#define PCI_DEVICE_ID_INTEL_IOAT_SCNB	0x65ff
 #define PCI_DEVICE_ID_INTEL_82371SB_0	0x7000
 #define PCI_DEVICE_ID_INTEL_82371SB_1	0x7010
 #define PCI_DEVICE_ID_INTEL_82371SB_2	0x7020


* [PATCH 2/7] I/OAT: Rename the source file
  2007-07-20  0:44 [PATCH 0/7] I/OAT: Add support for DCA - Direct Cache Access Shannon Nelson
  2007-07-20  0:44 ` [PATCH 1/7] I/OAT: New device ids Shannon Nelson
@ 2007-07-20  0:44 ` Shannon Nelson
  2007-07-20  0:49   ` David Miller
  2007-07-20  0:45 ` [PATCH 3/7] I/OAT: code cleanup from checkpatch output Shannon Nelson
                   ` (4 subsequent siblings)
  6 siblings, 1 reply; 33+ messages in thread
From: Shannon Nelson @ 2007-07-20  0:44 UTC
  To: akpm, linux-kernel
  Cc: davem, jeff, dan.j.williams, christopher.leech, peter.p.waskiewicz.jr

Rename ioatdma.c to ioat_dma.c in preparation for splitting the driver
into multiple source files, which will make it easier to add new
functionality.
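
The module keeps its ioatdma name: kbuild's composite-object syntax in
the Makefile change below builds the old module from the renamed source.
A small sketch of how that pattern grows once the split happens
(ioat_pci.o is illustrative only, not a file this patch adds):

	obj-$(CONFIG_INTEL_IOATDMA) += ioatdma.o
	ioatdma-objs := ioat_dma.o
	# after a later split the list simply grows, e.g.:
	# ioatdma-objs := ioat_pci.o ioat_dma.o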

Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
---

 drivers/dma/Makefile   |    1 +
 drivers/dma/ioat_dma.c |  829 ++++++++++++++++++++++++++++++++++++++++++++++++
 drivers/dma/ioatdma.c  |  829 ------------------------------------------------
 3 files changed, 830 insertions(+), 829 deletions(-)

diff --git a/drivers/dma/Makefile b/drivers/dma/Makefile
index b3839b6..77bee99 100644
--- a/drivers/dma/Makefile
+++ b/drivers/dma/Makefile
@@ -1,4 +1,5 @@
 obj-$(CONFIG_DMA_ENGINE) += dmaengine.o
 obj-$(CONFIG_NET_DMA) += iovlock.o
 obj-$(CONFIG_INTEL_IOATDMA) += ioatdma.o
+ioatdma-objs := ioat_dma.o
 obj-$(CONFIG_INTEL_IOP_ADMA) += iop-adma.o
diff --git a/drivers/dma/ioat_dma.c b/drivers/dma/ioat_dma.c
new file mode 100644
index 0000000..52e2ac2
--- /dev/null
+++ b/drivers/dma/ioat_dma.c
@@ -0,0 +1,829 @@
+/*
+ * Copyright(c) 2004 - 2006 Intel Corporation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc., 59
+ * Temple Place - Suite 330, Boston, MA  02111-1307, USA.
+ *
+ * The full GNU General Public License is included in this distribution in the
+ * file called COPYING.
+ */
+
+/*
+ * This driver supports an Intel I/OAT DMA engine, which does asynchronous
+ * copy operations.
+ */
+
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/pci.h>
+#include <linux/interrupt.h>
+#include <linux/dmaengine.h>
+#include <linux/delay.h>
+#include <linux/dma-mapping.h>
+#include "ioatdma.h"
+#include "ioatdma_registers.h"
+#include "ioatdma_hw.h"
+
+#define to_ioat_chan(chan) container_of(chan, struct ioat_dma_chan, common)
+#define to_ioat_device(dev) container_of(dev, struct ioat_device, common)
+#define to_ioat_desc(lh) container_of(lh, struct ioat_desc_sw, node)
+#define tx_to_ioat_desc(tx) container_of(tx, struct ioat_desc_sw, async_tx)
+
+/* internal functions */
+static int __devinit ioat_probe(struct pci_dev *pdev, const struct pci_device_id *ent);
+static void ioat_shutdown(struct pci_dev *pdev);
+static void __devexit ioat_remove(struct pci_dev *pdev);
+
+static int enumerate_dma_channels(struct ioat_device *device)
+{
+	u8 xfercap_scale;
+	u32 xfercap;
+	int i;
+	struct ioat_dma_chan *ioat_chan;
+
+	device->common.chancnt = readb(device->reg_base + IOAT_CHANCNT_OFFSET);
+	xfercap_scale = readb(device->reg_base + IOAT_XFERCAP_OFFSET);
+	xfercap = (xfercap_scale == 0 ? -1 : (1UL << xfercap_scale));
+
+	for (i = 0; i < device->common.chancnt; i++) {
+		ioat_chan = kzalloc(sizeof(*ioat_chan), GFP_KERNEL);
+		if (!ioat_chan) {
+			device->common.chancnt = i;
+			break;
+		}
+
+		ioat_chan->device = device;
+		ioat_chan->reg_base = device->reg_base + (0x80 * (i + 1));
+		ioat_chan->xfercap = xfercap;
+		spin_lock_init(&ioat_chan->cleanup_lock);
+		spin_lock_init(&ioat_chan->desc_lock);
+		INIT_LIST_HEAD(&ioat_chan->free_desc);
+		INIT_LIST_HEAD(&ioat_chan->used_desc);
+		/* This should be made common somewhere in dmaengine.c */
+		ioat_chan->common.device = &device->common;
+		list_add_tail(&ioat_chan->common.device_node,
+		              &device->common.channels);
+	}
+	return device->common.chancnt;
+}
+
+static void
+ioat_set_src(dma_addr_t addr, struct dma_async_tx_descriptor *tx, int index)
+{
+	struct ioat_desc_sw *iter, *desc = tx_to_ioat_desc(tx);
+	struct ioat_dma_chan *ioat_chan = to_ioat_chan(tx->chan);
+
+	pci_unmap_addr_set(desc, src, addr);
+
+	list_for_each_entry(iter, &desc->async_tx.tx_list, node) {
+		iter->hw->src_addr = addr;
+		addr += ioat_chan->xfercap;
+	}
+
+}
+
+static void
+ioat_set_dest(dma_addr_t addr, struct dma_async_tx_descriptor *tx, int index)
+{
+	struct ioat_desc_sw *iter, *desc = tx_to_ioat_desc(tx);
+	struct ioat_dma_chan *ioat_chan = to_ioat_chan(tx->chan);
+
+	pci_unmap_addr_set(desc, dst, addr);
+
+	list_for_each_entry(iter, &desc->async_tx.tx_list, node) {
+		iter->hw->dst_addr = addr;
+		addr += ioat_chan->xfercap;
+	}
+}
+
+static dma_cookie_t
+ioat_tx_submit(struct dma_async_tx_descriptor *tx)
+{
+	struct ioat_dma_chan *ioat_chan = to_ioat_chan(tx->chan);
+	struct ioat_desc_sw *desc = tx_to_ioat_desc(tx);
+	int append = 0;
+	dma_cookie_t cookie;
+	struct ioat_desc_sw *group_start;
+
+	group_start = list_entry(desc->async_tx.tx_list.next,
+				 struct ioat_desc_sw, node);
+	spin_lock_bh(&ioat_chan->desc_lock);
+	/* cookie incr and addition to used_list must be atomic */
+	cookie = ioat_chan->common.cookie;
+	cookie++;
+	if (cookie < 0)
+		cookie = 1;
+	ioat_chan->common.cookie = desc->async_tx.cookie = cookie;
+
+	/* write address into NextDescriptor field of last desc in chain */
+	to_ioat_desc(ioat_chan->used_desc.prev)->hw->next =
+						group_start->async_tx.phys;
+	list_splice_init(&desc->async_tx.tx_list, ioat_chan->used_desc.prev);
+
+	ioat_chan->pending += desc->tx_cnt;
+	if (ioat_chan->pending >= 4) {
+		append = 1;
+		ioat_chan->pending = 0;
+	}
+	spin_unlock_bh(&ioat_chan->desc_lock);
+
+	if (append)
+		writeb(IOAT_CHANCMD_APPEND,
+			ioat_chan->reg_base + IOAT_CHANCMD_OFFSET);
+	
+	return cookie;
+}
+
+static struct ioat_desc_sw *ioat_dma_alloc_descriptor(
+	struct ioat_dma_chan *ioat_chan,
+	gfp_t flags)
+{
+	struct ioat_dma_descriptor *desc;
+	struct ioat_desc_sw *desc_sw;
+	struct ioat_device *ioat_device;
+	dma_addr_t phys;
+
+	ioat_device = to_ioat_device(ioat_chan->common.device);
+	desc = pci_pool_alloc(ioat_device->dma_pool, flags, &phys);
+	if (unlikely(!desc))
+		return NULL;
+
+	desc_sw = kzalloc(sizeof(*desc_sw), flags);
+	if (unlikely(!desc_sw)) {
+		pci_pool_free(ioat_device->dma_pool, desc, phys);
+		return NULL;
+	}
+
+	memset(desc, 0, sizeof(*desc));
+	dma_async_tx_descriptor_init(&desc_sw->async_tx, &ioat_chan->common);
+	desc_sw->async_tx.tx_set_src = ioat_set_src;
+	desc_sw->async_tx.tx_set_dest = ioat_set_dest;
+	desc_sw->async_tx.tx_submit = ioat_tx_submit;
+	INIT_LIST_HEAD(&desc_sw->async_tx.tx_list);
+	desc_sw->hw = desc;
+	desc_sw->async_tx.phys = phys;
+
+	return desc_sw;
+}
+
+#define INITIAL_IOAT_DESC_COUNT 128
+
+static void ioat_start_null_desc(struct ioat_dma_chan *ioat_chan);
+
+/* returns the actual number of allocated descriptors */
+static int ioat_dma_alloc_chan_resources(struct dma_chan *chan)
+{
+	struct ioat_dma_chan *ioat_chan = to_ioat_chan(chan);
+	struct ioat_desc_sw *desc = NULL;
+	u16 chanctrl;
+	u32 chanerr;
+	int i;
+	LIST_HEAD(tmp_list);
+
+	/*
+	 * In-use bit automatically set by reading chanctrl
+	 * If 0, we got it, if 1, someone else did
+	 */
+	chanctrl = readw(ioat_chan->reg_base + IOAT_CHANCTRL_OFFSET);
+	if (chanctrl & IOAT_CHANCTRL_CHANNEL_IN_USE)
+		return -EBUSY;
+
+        /* Setup register to interrupt and write completion status on error */
+	chanctrl = IOAT_CHANCTRL_CHANNEL_IN_USE |
+		IOAT_CHANCTRL_ERR_INT_EN |
+		IOAT_CHANCTRL_ANY_ERR_ABORT_EN |
+		IOAT_CHANCTRL_ERR_COMPLETION_EN;
+        writew(chanctrl, ioat_chan->reg_base + IOAT_CHANCTRL_OFFSET);
+
+	chanerr = readl(ioat_chan->reg_base + IOAT_CHANERR_OFFSET);
+	if (chanerr) {
+		printk("IOAT: CHANERR = %x, clearing\n", chanerr);
+		writel(chanerr, ioat_chan->reg_base + IOAT_CHANERR_OFFSET);
+	}
+
+	/* Allocate descriptors */
+	for (i = 0; i < INITIAL_IOAT_DESC_COUNT; i++) {
+		desc = ioat_dma_alloc_descriptor(ioat_chan, GFP_KERNEL);
+		if (!desc) {
+			printk(KERN_ERR "IOAT: Only %d initial descriptors\n", i);
+			break;
+		}
+		list_add_tail(&desc->node, &tmp_list);
+	}
+	spin_lock_bh(&ioat_chan->desc_lock);
+	list_splice(&tmp_list, &ioat_chan->free_desc);
+	spin_unlock_bh(&ioat_chan->desc_lock);
+
+	/* allocate a completion writeback area */
+	/* doing 2 32bit writes to mmio since 1 64b write doesn't work */
+	ioat_chan->completion_virt =
+		pci_pool_alloc(ioat_chan->device->completion_pool,
+		               GFP_KERNEL,
+		               &ioat_chan->completion_addr);
+	memset(ioat_chan->completion_virt, 0,
+	       sizeof(*ioat_chan->completion_virt));
+	writel(((u64) ioat_chan->completion_addr) & 0x00000000FFFFFFFF,
+	       ioat_chan->reg_base + IOAT_CHANCMP_OFFSET_LOW);
+	writel(((u64) ioat_chan->completion_addr) >> 32,
+	       ioat_chan->reg_base + IOAT_CHANCMP_OFFSET_HIGH);
+
+	ioat_start_null_desc(ioat_chan);
+	return i;
+}
+
+static void ioat_dma_memcpy_cleanup(struct ioat_dma_chan *ioat_chan);
+
+static void ioat_dma_free_chan_resources(struct dma_chan *chan)
+{
+	struct ioat_dma_chan *ioat_chan = to_ioat_chan(chan);
+	struct ioat_device *ioat_device = to_ioat_device(chan->device);
+	struct ioat_desc_sw *desc, *_desc;
+	u16 chanctrl;
+	int in_use_descs = 0;
+
+	ioat_dma_memcpy_cleanup(ioat_chan);
+
+	writeb(IOAT_CHANCMD_RESET, ioat_chan->reg_base + IOAT_CHANCMD_OFFSET);
+
+	spin_lock_bh(&ioat_chan->desc_lock);
+	list_for_each_entry_safe(desc, _desc, &ioat_chan->used_desc, node) {
+		in_use_descs++;
+		list_del(&desc->node);
+		pci_pool_free(ioat_device->dma_pool, desc->hw,
+			      desc->async_tx.phys);
+		kfree(desc);
+	}
+	list_for_each_entry_safe(desc, _desc, &ioat_chan->free_desc, node) {
+		list_del(&desc->node);
+		pci_pool_free(ioat_device->dma_pool, desc->hw,
+			      desc->async_tx.phys);
+		kfree(desc);
+	}
+	spin_unlock_bh(&ioat_chan->desc_lock);
+
+	pci_pool_free(ioat_device->completion_pool,
+	              ioat_chan->completion_virt,
+	              ioat_chan->completion_addr);
+
+	/* one is ok since we left it on there on purpose */
+	if (in_use_descs > 1)
+		printk(KERN_ERR "IOAT: Freeing %d in use descriptors!\n",
+			in_use_descs - 1);
+
+	ioat_chan->last_completion = ioat_chan->completion_addr = 0;
+
+	/* Tell hw the chan is free */
+	chanctrl = readw(ioat_chan->reg_base + IOAT_CHANCTRL_OFFSET);
+	chanctrl &= ~IOAT_CHANCTRL_CHANNEL_IN_USE;
+	writew(chanctrl, ioat_chan->reg_base + IOAT_CHANCTRL_OFFSET);
+}
+
+static struct dma_async_tx_descriptor *
+ioat_dma_prep_memcpy(struct dma_chan *chan, size_t len, int int_en)
+{
+	struct ioat_dma_chan *ioat_chan = to_ioat_chan(chan);
+	struct ioat_desc_sw *first, *prev, *new;
+	LIST_HEAD(new_chain);
+	u32 copy;
+	size_t orig_len;
+	int desc_count = 0;
+
+	if (!len)
+		return NULL;
+
+	orig_len = len;
+
+	first = NULL;
+	prev = NULL;
+
+	spin_lock_bh(&ioat_chan->desc_lock);
+	while (len) {
+		if (!list_empty(&ioat_chan->free_desc)) {
+			new = to_ioat_desc(ioat_chan->free_desc.next);
+			list_del(&new->node);
+		} else {
+			/* try to get another desc */
+			new = ioat_dma_alloc_descriptor(ioat_chan, GFP_ATOMIC);
+			/* will this ever happen? */
+			/* TODO add upper limit on these */
+			BUG_ON(!new);
+		}
+
+		copy = min((u32) len, ioat_chan->xfercap);
+
+		new->hw->size = copy;
+		new->hw->ctl = 0;
+		new->async_tx.cookie = 0;
+		new->async_tx.ack = 1;
+
+		/* chain together the physical address list for the HW */
+		if (!first)
+			first = new;
+		else
+			prev->hw->next = (u64) new->async_tx.phys;
+
+		prev = new;
+		len  -= copy;
+		list_add_tail(&new->node, &new_chain);
+		desc_count++;
+	}
+
+	list_splice(&new_chain, &new->async_tx.tx_list);
+
+	new->hw->ctl = IOAT_DMA_DESCRIPTOR_CTL_CP_STS;
+	new->hw->next = 0;
+	new->tx_cnt = desc_count;
+	new->async_tx.ack = 0; /* client is in control of this ack */
+	new->async_tx.cookie = -EBUSY;
+
+	pci_unmap_len_set(new, src_len, orig_len);
+	pci_unmap_len_set(new, dst_len, orig_len);
+	spin_unlock_bh(&ioat_chan->desc_lock);
+
+	return new ? &new->async_tx : NULL;
+}
+
+
+/**
+ * ioat_dma_memcpy_issue_pending - push potentially unrecognized appended descriptors to hw
+ * @chan: DMA channel handle
+ */
+
+static void ioat_dma_memcpy_issue_pending(struct dma_chan *chan)
+{
+	struct ioat_dma_chan *ioat_chan = to_ioat_chan(chan);
+
+	if (ioat_chan->pending != 0) {
+		ioat_chan->pending = 0;
+		writeb(IOAT_CHANCMD_APPEND,
+		       ioat_chan->reg_base + IOAT_CHANCMD_OFFSET);
+	}
+}
+
+static void ioat_dma_memcpy_cleanup(struct ioat_dma_chan *chan)
+{
+	unsigned long phys_complete;
+	struct ioat_desc_sw *desc, *_desc;
+	dma_cookie_t cookie = 0;
+
+	prefetch(chan->completion_virt);
+
+	if (!spin_trylock(&chan->cleanup_lock))
+		return;
+
+	/* The completion writeback can happen at any time,
+	   so reads by the driver need to be atomic operations
+	   The descriptor physical addresses are limited to 32-bits
+	   when the CPU can only do a 32-bit mov */
+
+#if (BITS_PER_LONG == 64)
+	phys_complete =
+	chan->completion_virt->full & IOAT_CHANSTS_COMPLETED_DESCRIPTOR_ADDR;
+#else
+	phys_complete = chan->completion_virt->low & IOAT_LOW_COMPLETION_MASK;
+#endif
+
+	if ((chan->completion_virt->full & IOAT_CHANSTS_DMA_TRANSFER_STATUS) ==
+		IOAT_CHANSTS_DMA_TRANSFER_STATUS_HALTED) {
+		printk("IOAT: Channel halted, chanerr = %x\n",
+			readl(chan->reg_base + IOAT_CHANERR_OFFSET));
+
+		/* TODO do something to salvage the situation */
+	}
+
+	if (phys_complete == chan->last_completion) {
+		spin_unlock(&chan->cleanup_lock);
+		return;
+	}
+
+	spin_lock_bh(&chan->desc_lock);
+	list_for_each_entry_safe(desc, _desc, &chan->used_desc, node) {
+
+		/*
+		 * Incoming DMA requests may use multiple descriptors, due to
+		 * exceeding xfercap, perhaps. If so, only the last one will
+		 * have a cookie, and require unmapping.
+		 */
+		if (desc->async_tx.cookie) {
+			cookie = desc->async_tx.cookie;
+
+			/* yes we are unmapping both _page and _single alloc'd
+			   regions with unmap_page. Is this *really* that bad?
+			*/
+			pci_unmap_page(chan->device->pdev,
+					pci_unmap_addr(desc, dst),
+					pci_unmap_len(desc, dst_len),
+					PCI_DMA_FROMDEVICE);
+			pci_unmap_page(chan->device->pdev,
+					pci_unmap_addr(desc, src),
+					pci_unmap_len(desc, src_len),
+					PCI_DMA_TODEVICE);
+		}
+
+		if (desc->async_tx.phys != phys_complete) {
+			/* a completed entry, but not the last, so cleanup
+			 * if the client is done with the descriptor
+			 */
+			if (desc->async_tx.ack) {
+				list_del(&desc->node);
+				list_add_tail(&desc->node, &chan->free_desc);
+			} else
+				desc->async_tx.cookie = 0;
+		} else {
+			/* last used desc. Do not remove, so we can append from
+			   it, but don't look at it next time, either */
+			desc->async_tx.cookie = 0;
+
+			/* TODO check status bits? */
+			break;
+		}
+	}
+
+	spin_unlock_bh(&chan->desc_lock);
+
+	chan->last_completion = phys_complete;
+	if (cookie != 0)
+		chan->completed_cookie = cookie;
+
+	spin_unlock(&chan->cleanup_lock);
+}
+
+static void ioat_dma_dependency_added(struct dma_chan *chan)
+{
+	struct ioat_dma_chan *ioat_chan = to_ioat_chan(chan);
+	spin_lock_bh(&ioat_chan->desc_lock);
+	if (ioat_chan->pending == 0) {
+		spin_unlock_bh(&ioat_chan->desc_lock);
+		ioat_dma_memcpy_cleanup(ioat_chan);
+	} else
+		spin_unlock_bh(&ioat_chan->desc_lock);
+}
+
+/**
+ * ioat_dma_is_complete - poll the status of a IOAT DMA transaction
+ * @chan: IOAT DMA channel handle
+ * @cookie: DMA transaction identifier
+ * @done: if not %NULL, updated with last completed transaction
+ * @used: if not %NULL, updated with last used transaction
+ */
+
+static enum dma_status ioat_dma_is_complete(struct dma_chan *chan,
+                                            dma_cookie_t cookie,
+                                            dma_cookie_t *done,
+                                            dma_cookie_t *used)
+{
+	struct ioat_dma_chan *ioat_chan = to_ioat_chan(chan);
+	dma_cookie_t last_used;
+	dma_cookie_t last_complete;
+	enum dma_status ret;
+
+	last_used = chan->cookie;
+	last_complete = ioat_chan->completed_cookie;
+
+	if (done)
+		*done= last_complete;
+	if (used)
+		*used = last_used;
+
+	ret = dma_async_is_complete(cookie, last_complete, last_used);
+	if (ret == DMA_SUCCESS)
+		return ret;
+
+	ioat_dma_memcpy_cleanup(ioat_chan);
+
+	last_used = chan->cookie;
+	last_complete = ioat_chan->completed_cookie;
+
+	if (done)
+		*done= last_complete;
+	if (used)
+		*used = last_used;
+
+	return dma_async_is_complete(cookie, last_complete, last_used);
+}
+
+/* PCI API */
+
+static struct pci_device_id ioat_pci_tbl[] = {
+	{ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_IOAT) },
+	{ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_IOAT_CNB)  },
+	{ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_IOAT_SCNB) },
+	{ PCI_DEVICE(PCI_VENDOR_ID_UNISYS, PCI_DEVICE_ID_UNISYS_DMA_DIRECTOR) },
+	{ 0, }
+};
+
+static struct pci_driver ioat_pci_driver = {
+	.name 	= "ioatdma",
+	.id_table = ioat_pci_tbl,
+	.probe	= ioat_probe,
+	.shutdown = ioat_shutdown,
+	.remove	= __devexit_p(ioat_remove),
+};
+
+static irqreturn_t ioat_do_interrupt(int irq, void *data)
+{
+	struct ioat_device *instance = data;
+	unsigned long attnstatus;
+	u8 intrctrl;
+
+	intrctrl = readb(instance->reg_base + IOAT_INTRCTRL_OFFSET);
+
+	if (!(intrctrl & IOAT_INTRCTRL_MASTER_INT_EN))
+		return IRQ_NONE;
+
+	if (!(intrctrl & IOAT_INTRCTRL_INT_STATUS)) {
+		writeb(intrctrl, instance->reg_base + IOAT_INTRCTRL_OFFSET);
+		return IRQ_NONE;
+	}
+
+	attnstatus = readl(instance->reg_base + IOAT_ATTNSTATUS_OFFSET);
+
+	printk(KERN_ERR "ioatdma error: interrupt! status %lx\n", attnstatus);
+
+	writeb(intrctrl, instance->reg_base + IOAT_INTRCTRL_OFFSET);
+	return IRQ_HANDLED;
+}
+
+static void ioat_start_null_desc(struct ioat_dma_chan *ioat_chan)
+{
+	struct ioat_desc_sw *desc;
+
+	spin_lock_bh(&ioat_chan->desc_lock);
+
+	if (!list_empty(&ioat_chan->free_desc)) {
+		desc = to_ioat_desc(ioat_chan->free_desc.next);
+		list_del(&desc->node);
+	} else {
+		/* try to get another desc */
+		spin_unlock_bh(&ioat_chan->desc_lock);
+		desc = ioat_dma_alloc_descriptor(ioat_chan, GFP_KERNEL);
+		spin_lock_bh(&ioat_chan->desc_lock);
+		/* will this ever happen? */
+		BUG_ON(!desc);
+	}
+
+	desc->hw->ctl = IOAT_DMA_DESCRIPTOR_NUL;
+	desc->hw->next = 0;
+	desc->async_tx.ack = 1;
+
+	list_add_tail(&desc->node, &ioat_chan->used_desc);
+	spin_unlock_bh(&ioat_chan->desc_lock);
+
+	writel(((u64) desc->async_tx.phys) & 0x00000000FFFFFFFF,
+	       ioat_chan->reg_base + IOAT_CHAINADDR_OFFSET_LOW);
+	writel(((u64) desc->async_tx.phys) >> 32,
+	       ioat_chan->reg_base + IOAT_CHAINADDR_OFFSET_HIGH);
+
+	writeb(IOAT_CHANCMD_START, ioat_chan->reg_base + IOAT_CHANCMD_OFFSET);
+}
+
+/*
+ * Perform a IOAT transaction to verify the HW works.
+ */
+#define IOAT_TEST_SIZE 2000
+
+static int ioat_self_test(struct ioat_device *device)
+{
+	int i;
+	u8 *src;
+	u8 *dest;
+	struct dma_chan *dma_chan;
+	struct dma_async_tx_descriptor *tx;
+	dma_addr_t addr;
+	dma_cookie_t cookie;
+	int err = 0;
+
+	src = kzalloc(sizeof(u8) * IOAT_TEST_SIZE, GFP_KERNEL);
+	if (!src)
+		return -ENOMEM;
+	dest = kzalloc(sizeof(u8) * IOAT_TEST_SIZE, GFP_KERNEL);
+	if (!dest) {
+		kfree(src);
+		return -ENOMEM;
+	}
+
+	/* Fill in src buffer */
+	for (i = 0; i < IOAT_TEST_SIZE; i++)
+		src[i] = (u8)i;
+
+	/* Start copy, using first DMA channel */
+	dma_chan = container_of(device->common.channels.next,
+	                        struct dma_chan,
+	                        device_node);
+	if (ioat_dma_alloc_chan_resources(dma_chan) < 1) {
+		err = -ENODEV;
+		goto out;
+	}
+
+	tx = ioat_dma_prep_memcpy(dma_chan, IOAT_TEST_SIZE, 0);
+	async_tx_ack(tx);
+	addr = dma_map_single(dma_chan->device->dev, src, IOAT_TEST_SIZE,
+			DMA_TO_DEVICE);
+	ioat_set_src(addr, tx, 0);
+	addr = dma_map_single(dma_chan->device->dev, dest, IOAT_TEST_SIZE,
+			DMA_FROM_DEVICE);
+	ioat_set_dest(addr, tx, 0);
+	cookie = ioat_tx_submit(tx);
+	ioat_dma_memcpy_issue_pending(dma_chan);
+	msleep(1);
+
+	if (ioat_dma_is_complete(dma_chan, cookie, NULL, NULL) != DMA_SUCCESS) {
+		printk(KERN_ERR "ioatdma: Self-test copy timed out, disabling\n");
+		err = -ENODEV;
+		goto free_resources;
+	}
+	if (memcmp(src, dest, IOAT_TEST_SIZE)) {
+		printk(KERN_ERR "ioatdma: Self-test copy failed compare, disabling\n");
+		err = -ENODEV;
+		goto free_resources;
+	}
+
+free_resources:
+	ioat_dma_free_chan_resources(dma_chan);
+out:
+	kfree(src);
+	kfree(dest);
+	return err;
+}
+
+static int __devinit ioat_probe(struct pci_dev *pdev,
+                                const struct pci_device_id *ent)
+{
+	int err;
+	unsigned long mmio_start, mmio_len;
+	void __iomem *reg_base;
+	struct ioat_device *device;
+
+	err = pci_enable_device(pdev);
+	if (err)
+		goto err_enable_device;
+
+	err = pci_set_dma_mask(pdev, DMA_64BIT_MASK);
+	if (err)
+		err = pci_set_dma_mask(pdev, DMA_32BIT_MASK);
+	if (err)
+		goto err_set_dma_mask;
+
+	err = pci_request_regions(pdev, ioat_pci_driver.name);
+	if (err)
+		goto err_request_regions;
+
+	mmio_start = pci_resource_start(pdev, 0);
+	mmio_len = pci_resource_len(pdev, 0);
+
+	reg_base = ioremap(mmio_start, mmio_len);
+	if (!reg_base) {
+		err = -ENOMEM;
+		goto err_ioremap;
+	}
+
+	device = kzalloc(sizeof(*device), GFP_KERNEL);
+	if (!device) {
+		err = -ENOMEM;
+		goto err_kzalloc;
+	}
+
+	/* DMA coherent memory pool for DMA descriptor allocations */
+	device->dma_pool = pci_pool_create("dma_desc_pool", pdev,
+		sizeof(struct ioat_dma_descriptor), 64, 0);
+	if (!device->dma_pool) {
+		err = -ENOMEM;
+		goto err_dma_pool;
+	}
+
+	device->completion_pool = pci_pool_create("completion_pool", pdev, sizeof(u64), SMP_CACHE_BYTES, SMP_CACHE_BYTES);
+	if (!device->completion_pool) {
+		err = -ENOMEM;
+		goto err_completion_pool;
+	}
+
+	device->pdev = pdev;
+	pci_set_drvdata(pdev, device);
+#ifdef CONFIG_PCI_MSI
+	if (pci_enable_msi(pdev) == 0) {
+		device->msi = 1;
+	} else {
+		device->msi = 0;
+	}
+#endif
+	err = request_irq(pdev->irq, &ioat_do_interrupt, IRQF_SHARED, "ioat",
+		device);
+	if (err)
+		goto err_irq;
+
+	device->reg_base = reg_base;
+
+	writeb(IOAT_INTRCTRL_MASTER_INT_EN, device->reg_base + IOAT_INTRCTRL_OFFSET);
+	pci_set_master(pdev);
+
+	INIT_LIST_HEAD(&device->common.channels);
+	enumerate_dma_channels(device);
+
+	dma_cap_set(DMA_MEMCPY, device->common.cap_mask);
+	device->common.device_alloc_chan_resources = ioat_dma_alloc_chan_resources;
+	device->common.device_free_chan_resources = ioat_dma_free_chan_resources;
+	device->common.device_prep_dma_memcpy = ioat_dma_prep_memcpy;
+	device->common.device_is_tx_complete = ioat_dma_is_complete;
+	device->common.device_issue_pending = ioat_dma_memcpy_issue_pending;
+	device->common.device_dependency_added = ioat_dma_dependency_added;
+	device->common.dev = &pdev->dev;
+	printk(KERN_INFO "Intel(R) I/OAT DMA Engine found, %d channels\n",
+		device->common.chancnt);
+
+	err = ioat_self_test(device);
+	if (err)
+		goto err_self_test;
+
+	dma_async_device_register(&device->common);
+
+	return 0;
+
+err_self_test:
+err_irq:
+	pci_pool_destroy(device->completion_pool);
+err_completion_pool:
+	pci_pool_destroy(device->dma_pool);
+err_dma_pool:
+	kfree(device);
+err_kzalloc:
+	iounmap(reg_base);
+err_ioremap:
+	pci_release_regions(pdev);
+err_request_regions:
+err_set_dma_mask:
+	pci_disable_device(pdev);
+err_enable_device:
+
+	printk(KERN_ERR "Intel(R) I/OAT DMA Engine initialization failed\n");
+
+	return err;
+}
+
+static void ioat_shutdown(struct pci_dev *pdev)
+{
+	struct ioat_device *device;
+	device = pci_get_drvdata(pdev);
+
+	dma_async_device_unregister(&device->common);
+}
+
+static void __devexit ioat_remove(struct pci_dev *pdev)
+{
+	struct ioat_device *device;
+	struct dma_chan *chan, *_chan;
+	struct ioat_dma_chan *ioat_chan;
+
+	device = pci_get_drvdata(pdev);
+	dma_async_device_unregister(&device->common);
+
+	free_irq(device->pdev->irq, device);
+#ifdef CONFIG_PCI_MSI
+	if (device->msi)
+		pci_disable_msi(device->pdev);
+#endif
+	pci_pool_destroy(device->dma_pool);
+	pci_pool_destroy(device->completion_pool);
+	iounmap(device->reg_base);
+	pci_release_regions(pdev);
+	pci_disable_device(pdev);
+	list_for_each_entry_safe(chan, _chan, &device->common.channels, device_node) {
+		ioat_chan = to_ioat_chan(chan);
+		list_del(&chan->device_node);
+		kfree(ioat_chan);
+	}
+	kfree(device);
+}
+
+/* MODULE API */
+MODULE_VERSION("1.9");
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Intel Corporation");
+
+static int __init ioat_init_module(void)
+{
+	/* it's currently unsafe to unload this module */
+	/* if forced, worst case is that rmmod hangs */
+	__unsafe(THIS_MODULE);
+
+	return pci_register_driver(&ioat_pci_driver);
+}
+
+module_init(ioat_init_module);
+
+static void __exit ioat_exit_module(void)
+{
+	pci_unregister_driver(&ioat_pci_driver);
+}
+
+module_exit(ioat_exit_module);
diff --git a/drivers/dma/ioatdma.c b/drivers/dma/ioatdma.c
deleted file mode 100644
index 52e2ac2..0000000
--- a/drivers/dma/ioatdma.c
+++ /dev/null
@@ -1,829 +0,0 @@
-/*
- * Copyright(c) 2004 - 2006 Intel Corporation. All rights reserved.
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms of the GNU General Public License as published by the Free
- * Software Foundation; either version 2 of the License, or (at your option)
- * any later version.
- *
- * This program is distributed in the hope that it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
- * more details.
- *
- * You should have received a copy of the GNU General Public License along with
- * this program; if not, write to the Free Software Foundation, Inc., 59
- * Temple Place - Suite 330, Boston, MA  02111-1307, USA.
- *
- * The full GNU General Public License is included in this distribution in the
- * file called COPYING.
- */
-
-/*
- * This driver supports an Intel I/OAT DMA engine, which does asynchronous
- * copy operations.
- */
-
-#include <linux/init.h>
-#include <linux/module.h>
-#include <linux/pci.h>
-#include <linux/interrupt.h>
-#include <linux/dmaengine.h>
-#include <linux/delay.h>
-#include <linux/dma-mapping.h>
-#include "ioatdma.h"
-#include "ioatdma_registers.h"
-#include "ioatdma_hw.h"
-
-#define to_ioat_chan(chan) container_of(chan, struct ioat_dma_chan, common)
-#define to_ioat_device(dev) container_of(dev, struct ioat_device, common)
-#define to_ioat_desc(lh) container_of(lh, struct ioat_desc_sw, node)
-#define tx_to_ioat_desc(tx) container_of(tx, struct ioat_desc_sw, async_tx)
-
-/* internal functions */
-static int __devinit ioat_probe(struct pci_dev *pdev, const struct pci_device_id *ent);
-static void ioat_shutdown(struct pci_dev *pdev);
-static void __devexit ioat_remove(struct pci_dev *pdev);
-
-static int enumerate_dma_channels(struct ioat_device *device)
-{
-	u8 xfercap_scale;
-	u32 xfercap;
-	int i;
-	struct ioat_dma_chan *ioat_chan;
-
-	device->common.chancnt = readb(device->reg_base + IOAT_CHANCNT_OFFSET);
-	xfercap_scale = readb(device->reg_base + IOAT_XFERCAP_OFFSET);
-	xfercap = (xfercap_scale == 0 ? -1 : (1UL << xfercap_scale));
-
-	for (i = 0; i < device->common.chancnt; i++) {
-		ioat_chan = kzalloc(sizeof(*ioat_chan), GFP_KERNEL);
-		if (!ioat_chan) {
-			device->common.chancnt = i;
-			break;
-		}
-
-		ioat_chan->device = device;
-		ioat_chan->reg_base = device->reg_base + (0x80 * (i + 1));
-		ioat_chan->xfercap = xfercap;
-		spin_lock_init(&ioat_chan->cleanup_lock);
-		spin_lock_init(&ioat_chan->desc_lock);
-		INIT_LIST_HEAD(&ioat_chan->free_desc);
-		INIT_LIST_HEAD(&ioat_chan->used_desc);
-		/* This should be made common somewhere in dmaengine.c */
-		ioat_chan->common.device = &device->common;
-		list_add_tail(&ioat_chan->common.device_node,
-		              &device->common.channels);
-	}
-	return device->common.chancnt;
-}
-
-static void
-ioat_set_src(dma_addr_t addr, struct dma_async_tx_descriptor *tx, int index)
-{
-	struct ioat_desc_sw *iter, *desc = tx_to_ioat_desc(tx);
-	struct ioat_dma_chan *ioat_chan = to_ioat_chan(tx->chan);
-
-	pci_unmap_addr_set(desc, src, addr);
-
-	list_for_each_entry(iter, &desc->async_tx.tx_list, node) {
-		iter->hw->src_addr = addr;
-		addr += ioat_chan->xfercap;
-	}
-
-}
-
-static void
-ioat_set_dest(dma_addr_t addr, struct dma_async_tx_descriptor *tx, int index)
-{
-	struct ioat_desc_sw *iter, *desc = tx_to_ioat_desc(tx);
-	struct ioat_dma_chan *ioat_chan = to_ioat_chan(tx->chan);
-
-	pci_unmap_addr_set(desc, dst, addr);
-
-	list_for_each_entry(iter, &desc->async_tx.tx_list, node) {
-		iter->hw->dst_addr = addr;
-		addr += ioat_chan->xfercap;
-	}
-}
-
-static dma_cookie_t
-ioat_tx_submit(struct dma_async_tx_descriptor *tx)
-{
-	struct ioat_dma_chan *ioat_chan = to_ioat_chan(tx->chan);
-	struct ioat_desc_sw *desc = tx_to_ioat_desc(tx);
-	int append = 0;
-	dma_cookie_t cookie;
-	struct ioat_desc_sw *group_start;
-
-	group_start = list_entry(desc->async_tx.tx_list.next,
-				 struct ioat_desc_sw, node);
-	spin_lock_bh(&ioat_chan->desc_lock);
-	/* cookie incr and addition to used_list must be atomic */
-	cookie = ioat_chan->common.cookie;
-	cookie++;
-	if (cookie < 0)
-		cookie = 1;
-	ioat_chan->common.cookie = desc->async_tx.cookie = cookie;
-
-	/* write address into NextDescriptor field of last desc in chain */
-	to_ioat_desc(ioat_chan->used_desc.prev)->hw->next =
-						group_start->async_tx.phys;
-	list_splice_init(&desc->async_tx.tx_list, ioat_chan->used_desc.prev);
-
-	ioat_chan->pending += desc->tx_cnt;
-	if (ioat_chan->pending >= 4) {
-		append = 1;
-		ioat_chan->pending = 0;
-	}
-	spin_unlock_bh(&ioat_chan->desc_lock);
-
-	if (append)
-		writeb(IOAT_CHANCMD_APPEND,
-			ioat_chan->reg_base + IOAT_CHANCMD_OFFSET);
-	
-	return cookie;
-}
-
-static struct ioat_desc_sw *ioat_dma_alloc_descriptor(
-	struct ioat_dma_chan *ioat_chan,
-	gfp_t flags)
-{
-	struct ioat_dma_descriptor *desc;
-	struct ioat_desc_sw *desc_sw;
-	struct ioat_device *ioat_device;
-	dma_addr_t phys;
-
-	ioat_device = to_ioat_device(ioat_chan->common.device);
-	desc = pci_pool_alloc(ioat_device->dma_pool, flags, &phys);
-	if (unlikely(!desc))
-		return NULL;
-
-	desc_sw = kzalloc(sizeof(*desc_sw), flags);
-	if (unlikely(!desc_sw)) {
-		pci_pool_free(ioat_device->dma_pool, desc, phys);
-		return NULL;
-	}
-
-	memset(desc, 0, sizeof(*desc));
-	dma_async_tx_descriptor_init(&desc_sw->async_tx, &ioat_chan->common);
-	desc_sw->async_tx.tx_set_src = ioat_set_src;
-	desc_sw->async_tx.tx_set_dest = ioat_set_dest;
-	desc_sw->async_tx.tx_submit = ioat_tx_submit;
-	INIT_LIST_HEAD(&desc_sw->async_tx.tx_list);
-	desc_sw->hw = desc;
-	desc_sw->async_tx.phys = phys;
-
-	return desc_sw;
-}
-
-#define INITIAL_IOAT_DESC_COUNT 128
-
-static void ioat_start_null_desc(struct ioat_dma_chan *ioat_chan);
-
-/* returns the actual number of allocated descriptors */
-static int ioat_dma_alloc_chan_resources(struct dma_chan *chan)
-{
-	struct ioat_dma_chan *ioat_chan = to_ioat_chan(chan);
-	struct ioat_desc_sw *desc = NULL;
-	u16 chanctrl;
-	u32 chanerr;
-	int i;
-	LIST_HEAD(tmp_list);
-
-	/*
-	 * In-use bit automatically set by reading chanctrl
-	 * If 0, we got it, if 1, someone else did
-	 */
-	chanctrl = readw(ioat_chan->reg_base + IOAT_CHANCTRL_OFFSET);
-	if (chanctrl & IOAT_CHANCTRL_CHANNEL_IN_USE)
-		return -EBUSY;
-
-        /* Setup register to interrupt and write completion status on error */
-	chanctrl = IOAT_CHANCTRL_CHANNEL_IN_USE |
-		IOAT_CHANCTRL_ERR_INT_EN |
-		IOAT_CHANCTRL_ANY_ERR_ABORT_EN |
-		IOAT_CHANCTRL_ERR_COMPLETION_EN;
-        writew(chanctrl, ioat_chan->reg_base + IOAT_CHANCTRL_OFFSET);
-
-	chanerr = readl(ioat_chan->reg_base + IOAT_CHANERR_OFFSET);
-	if (chanerr) {
-		printk("IOAT: CHANERR = %x, clearing\n", chanerr);
-		writel(chanerr, ioat_chan->reg_base + IOAT_CHANERR_OFFSET);
-	}
-
-	/* Allocate descriptors */
-	for (i = 0; i < INITIAL_IOAT_DESC_COUNT; i++) {
-		desc = ioat_dma_alloc_descriptor(ioat_chan, GFP_KERNEL);
-		if (!desc) {
-			printk(KERN_ERR "IOAT: Only %d initial descriptors\n", i);
-			break;
-		}
-		list_add_tail(&desc->node, &tmp_list);
-	}
-	spin_lock_bh(&ioat_chan->desc_lock);
-	list_splice(&tmp_list, &ioat_chan->free_desc);
-	spin_unlock_bh(&ioat_chan->desc_lock);
-
-	/* allocate a completion writeback area */
-	/* doing 2 32bit writes to mmio since 1 64b write doesn't work */
-	ioat_chan->completion_virt =
-		pci_pool_alloc(ioat_chan->device->completion_pool,
-		               GFP_KERNEL,
-		               &ioat_chan->completion_addr);
-	memset(ioat_chan->completion_virt, 0,
-	       sizeof(*ioat_chan->completion_virt));
-	writel(((u64) ioat_chan->completion_addr) & 0x00000000FFFFFFFF,
-	       ioat_chan->reg_base + IOAT_CHANCMP_OFFSET_LOW);
-	writel(((u64) ioat_chan->completion_addr) >> 32,
-	       ioat_chan->reg_base + IOAT_CHANCMP_OFFSET_HIGH);
-
-	ioat_start_null_desc(ioat_chan);
-	return i;
-}
-
-static void ioat_dma_memcpy_cleanup(struct ioat_dma_chan *ioat_chan);
-
-static void ioat_dma_free_chan_resources(struct dma_chan *chan)
-{
-	struct ioat_dma_chan *ioat_chan = to_ioat_chan(chan);
-	struct ioat_device *ioat_device = to_ioat_device(chan->device);
-	struct ioat_desc_sw *desc, *_desc;
-	u16 chanctrl;
-	int in_use_descs = 0;
-
-	ioat_dma_memcpy_cleanup(ioat_chan);
-
-	writeb(IOAT_CHANCMD_RESET, ioat_chan->reg_base + IOAT_CHANCMD_OFFSET);
-
-	spin_lock_bh(&ioat_chan->desc_lock);
-	list_for_each_entry_safe(desc, _desc, &ioat_chan->used_desc, node) {
-		in_use_descs++;
-		list_del(&desc->node);
-		pci_pool_free(ioat_device->dma_pool, desc->hw,
-			      desc->async_tx.phys);
-		kfree(desc);
-	}
-	list_for_each_entry_safe(desc, _desc, &ioat_chan->free_desc, node) {
-		list_del(&desc->node);
-		pci_pool_free(ioat_device->dma_pool, desc->hw,
-			      desc->async_tx.phys);
-		kfree(desc);
-	}
-	spin_unlock_bh(&ioat_chan->desc_lock);
-
-	pci_pool_free(ioat_device->completion_pool,
-	              ioat_chan->completion_virt,
-	              ioat_chan->completion_addr);
-
-	/* one is ok since we left it on there on purpose */
-	if (in_use_descs > 1)
-		printk(KERN_ERR "IOAT: Freeing %d in use descriptors!\n",
-			in_use_descs - 1);
-
-	ioat_chan->last_completion = ioat_chan->completion_addr = 0;
-
-	/* Tell hw the chan is free */
-	chanctrl = readw(ioat_chan->reg_base + IOAT_CHANCTRL_OFFSET);
-	chanctrl &= ~IOAT_CHANCTRL_CHANNEL_IN_USE;
-	writew(chanctrl, ioat_chan->reg_base + IOAT_CHANCTRL_OFFSET);
-}
-
-static struct dma_async_tx_descriptor *
-ioat_dma_prep_memcpy(struct dma_chan *chan, size_t len, int int_en)
-{
-	struct ioat_dma_chan *ioat_chan = to_ioat_chan(chan);
-	struct ioat_desc_sw *first, *prev, *new;
-	LIST_HEAD(new_chain);
-	u32 copy;
-	size_t orig_len;
-	int desc_count = 0;
-
-	if (!len)
-		return NULL;
-
-	orig_len = len;
-
-	first = NULL;
-	prev = NULL;
-
-	spin_lock_bh(&ioat_chan->desc_lock);
-	while (len) {
-		if (!list_empty(&ioat_chan->free_desc)) {
-			new = to_ioat_desc(ioat_chan->free_desc.next);
-			list_del(&new->node);
-		} else {
-			/* try to get another desc */
-			new = ioat_dma_alloc_descriptor(ioat_chan, GFP_ATOMIC);
-			/* will this ever happen? */
-			/* TODO add upper limit on these */
-			BUG_ON(!new);
-		}
-
-		copy = min((u32) len, ioat_chan->xfercap);
-
-		new->hw->size = copy;
-		new->hw->ctl = 0;
-		new->async_tx.cookie = 0;
-		new->async_tx.ack = 1;
-
-		/* chain together the physical address list for the HW */
-		if (!first)
-			first = new;
-		else
-			prev->hw->next = (u64) new->async_tx.phys;
-
-		prev = new;
-		len  -= copy;
-		list_add_tail(&new->node, &new_chain);
-		desc_count++;
-	}
-
-	list_splice(&new_chain, &new->async_tx.tx_list);
-
-	new->hw->ctl = IOAT_DMA_DESCRIPTOR_CTL_CP_STS;
-	new->hw->next = 0;
-	new->tx_cnt = desc_count;
-	new->async_tx.ack = 0; /* client is in control of this ack */
-	new->async_tx.cookie = -EBUSY;
-
-	pci_unmap_len_set(new, src_len, orig_len);
-	pci_unmap_len_set(new, dst_len, orig_len);
-	spin_unlock_bh(&ioat_chan->desc_lock);
-
-	return new ? &new->async_tx : NULL;
-}
-
-
-/**
- * ioat_dma_memcpy_issue_pending - push potentially unrecognized appended descriptors to hw
- * @chan: DMA channel handle
- */
-
-static void ioat_dma_memcpy_issue_pending(struct dma_chan *chan)
-{
-	struct ioat_dma_chan *ioat_chan = to_ioat_chan(chan);
-
-	if (ioat_chan->pending != 0) {
-		ioat_chan->pending = 0;
-		writeb(IOAT_CHANCMD_APPEND,
-		       ioat_chan->reg_base + IOAT_CHANCMD_OFFSET);
-	}
-}
-
-static void ioat_dma_memcpy_cleanup(struct ioat_dma_chan *chan)
-{
-	unsigned long phys_complete;
-	struct ioat_desc_sw *desc, *_desc;
-	dma_cookie_t cookie = 0;
-
-	prefetch(chan->completion_virt);
-
-	if (!spin_trylock(&chan->cleanup_lock))
-		return;
-
-	/* The completion writeback can happen at any time,
-	   so reads by the driver need to be atomic operations
-	   The descriptor physical addresses are limited to 32-bits
-	   when the CPU can only do a 32-bit mov */
-
-#if (BITS_PER_LONG == 64)
-	phys_complete =
-	chan->completion_virt->full & IOAT_CHANSTS_COMPLETED_DESCRIPTOR_ADDR;
-#else
-	phys_complete = chan->completion_virt->low & IOAT_LOW_COMPLETION_MASK;
-#endif
-
-	if ((chan->completion_virt->full & IOAT_CHANSTS_DMA_TRANSFER_STATUS) ==
-		IOAT_CHANSTS_DMA_TRANSFER_STATUS_HALTED) {
-		printk("IOAT: Channel halted, chanerr = %x\n",
-			readl(chan->reg_base + IOAT_CHANERR_OFFSET));
-
-		/* TODO do something to salvage the situation */
-	}
-
-	if (phys_complete == chan->last_completion) {
-		spin_unlock(&chan->cleanup_lock);
-		return;
-	}
-
-	spin_lock_bh(&chan->desc_lock);
-	list_for_each_entry_safe(desc, _desc, &chan->used_desc, node) {
-
-		/*
-		 * Incoming DMA requests may use multiple descriptors, due to
-		 * exceeding xfercap, perhaps. If so, only the last one will
-		 * have a cookie, and require unmapping.
-		 */
-		if (desc->async_tx.cookie) {
-			cookie = desc->async_tx.cookie;
-
-			/* yes we are unmapping both _page and _single alloc'd
-			   regions with unmap_page. Is this *really* that bad?
-			*/
-			pci_unmap_page(chan->device->pdev,
-					pci_unmap_addr(desc, dst),
-					pci_unmap_len(desc, dst_len),
-					PCI_DMA_FROMDEVICE);
-			pci_unmap_page(chan->device->pdev,
-					pci_unmap_addr(desc, src),
-					pci_unmap_len(desc, src_len),
-					PCI_DMA_TODEVICE);
-		}
-
-		if (desc->async_tx.phys != phys_complete) {
-			/* a completed entry, but not the last, so cleanup
-			 * if the client is done with the descriptor
-			 */
-			if (desc->async_tx.ack) {
-				list_del(&desc->node);
-				list_add_tail(&desc->node, &chan->free_desc);
-			} else
-				desc->async_tx.cookie = 0;
-		} else {
-			/* last used desc. Do not remove, so we can append from
-			   it, but don't look at it next time, either */
-			desc->async_tx.cookie = 0;
-
-			/* TODO check status bits? */
-			break;
-		}
-	}
-
-	spin_unlock_bh(&chan->desc_lock);
-
-	chan->last_completion = phys_complete;
-	if (cookie != 0)
-		chan->completed_cookie = cookie;
-
-	spin_unlock(&chan->cleanup_lock);
-}
-
-static void ioat_dma_dependency_added(struct dma_chan *chan)
-{
-	struct ioat_dma_chan *ioat_chan = to_ioat_chan(chan);
-	spin_lock_bh(&ioat_chan->desc_lock);
-	if (ioat_chan->pending == 0) {
-		spin_unlock_bh(&ioat_chan->desc_lock);
-		ioat_dma_memcpy_cleanup(ioat_chan);
-	} else
-		spin_unlock_bh(&ioat_chan->desc_lock);
-}
-
-/**
- * ioat_dma_is_complete - poll the status of a IOAT DMA transaction
- * @chan: IOAT DMA channel handle
- * @cookie: DMA transaction identifier
- * @done: if not %NULL, updated with last completed transaction
- * @used: if not %NULL, updated with last used transaction
- */
-
-static enum dma_status ioat_dma_is_complete(struct dma_chan *chan,
-                                            dma_cookie_t cookie,
-                                            dma_cookie_t *done,
-                                            dma_cookie_t *used)
-{
-	struct ioat_dma_chan *ioat_chan = to_ioat_chan(chan);
-	dma_cookie_t last_used;
-	dma_cookie_t last_complete;
-	enum dma_status ret;
-
-	last_used = chan->cookie;
-	last_complete = ioat_chan->completed_cookie;
-
-	if (done)
-		*done= last_complete;
-	if (used)
-		*used = last_used;
-
-	ret = dma_async_is_complete(cookie, last_complete, last_used);
-	if (ret == DMA_SUCCESS)
-		return ret;
-
-	ioat_dma_memcpy_cleanup(ioat_chan);
-
-	last_used = chan->cookie;
-	last_complete = ioat_chan->completed_cookie;
-
-	if (done)
-		*done= last_complete;
-	if (used)
-		*used = last_used;
-
-	return dma_async_is_complete(cookie, last_complete, last_used);
-}
-
-/* PCI API */
-
-static struct pci_device_id ioat_pci_tbl[] = {
-	{ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_IOAT) },
-	{ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_IOAT_CNB)  },
-	{ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_IOAT_SCNB) },
-	{ PCI_DEVICE(PCI_VENDOR_ID_UNISYS, PCI_DEVICE_ID_UNISYS_DMA_DIRECTOR) },
-	{ 0, }
-};
-
-static struct pci_driver ioat_pci_driver = {
-	.name 	= "ioatdma",
-	.id_table = ioat_pci_tbl,
-	.probe	= ioat_probe,
-	.shutdown = ioat_shutdown,
-	.remove	= __devexit_p(ioat_remove),
-};
-
-static irqreturn_t ioat_do_interrupt(int irq, void *data)
-{
-	struct ioat_device *instance = data;
-	unsigned long attnstatus;
-	u8 intrctrl;
-
-	intrctrl = readb(instance->reg_base + IOAT_INTRCTRL_OFFSET);
-
-	if (!(intrctrl & IOAT_INTRCTRL_MASTER_INT_EN))
-		return IRQ_NONE;
-
-	if (!(intrctrl & IOAT_INTRCTRL_INT_STATUS)) {
-		writeb(intrctrl, instance->reg_base + IOAT_INTRCTRL_OFFSET);
-		return IRQ_NONE;
-	}
-
-	attnstatus = readl(instance->reg_base + IOAT_ATTNSTATUS_OFFSET);
-
-	printk(KERN_ERR "ioatdma error: interrupt! status %lx\n", attnstatus);
-
-	writeb(intrctrl, instance->reg_base + IOAT_INTRCTRL_OFFSET);
-	return IRQ_HANDLED;
-}
-
-static void ioat_start_null_desc(struct ioat_dma_chan *ioat_chan)
-{
-	struct ioat_desc_sw *desc;
-
-	spin_lock_bh(&ioat_chan->desc_lock);
-
-	if (!list_empty(&ioat_chan->free_desc)) {
-		desc = to_ioat_desc(ioat_chan->free_desc.next);
-		list_del(&desc->node);
-	} else {
-		/* try to get another desc */
-		spin_unlock_bh(&ioat_chan->desc_lock);
-		desc = ioat_dma_alloc_descriptor(ioat_chan, GFP_KERNEL);
-		spin_lock_bh(&ioat_chan->desc_lock);
-		/* will this ever happen? */
-		BUG_ON(!desc);
-	}
-
-	desc->hw->ctl = IOAT_DMA_DESCRIPTOR_NUL;
-	desc->hw->next = 0;
-	desc->async_tx.ack = 1;
-
-	list_add_tail(&desc->node, &ioat_chan->used_desc);
-	spin_unlock_bh(&ioat_chan->desc_lock);
-
-	writel(((u64) desc->async_tx.phys) & 0x00000000FFFFFFFF,
-	       ioat_chan->reg_base + IOAT_CHAINADDR_OFFSET_LOW);
-	writel(((u64) desc->async_tx.phys) >> 32,
-	       ioat_chan->reg_base + IOAT_CHAINADDR_OFFSET_HIGH);
-
-	writeb(IOAT_CHANCMD_START, ioat_chan->reg_base + IOAT_CHANCMD_OFFSET);
-}
-
-/*
- * Perform a IOAT transaction to verify the HW works.
- */
-#define IOAT_TEST_SIZE 2000
-
-static int ioat_self_test(struct ioat_device *device)
-{
-	int i;
-	u8 *src;
-	u8 *dest;
-	struct dma_chan *dma_chan;
-	struct dma_async_tx_descriptor *tx;
-	dma_addr_t addr;
-	dma_cookie_t cookie;
-	int err = 0;
-
-	src = kzalloc(sizeof(u8) * IOAT_TEST_SIZE, GFP_KERNEL);
-	if (!src)
-		return -ENOMEM;
-	dest = kzalloc(sizeof(u8) * IOAT_TEST_SIZE, GFP_KERNEL);
-	if (!dest) {
-		kfree(src);
-		return -ENOMEM;
-	}
-
-	/* Fill in src buffer */
-	for (i = 0; i < IOAT_TEST_SIZE; i++)
-		src[i] = (u8)i;
-
-	/* Start copy, using first DMA channel */
-	dma_chan = container_of(device->common.channels.next,
-	                        struct dma_chan,
-	                        device_node);
-	if (ioat_dma_alloc_chan_resources(dma_chan) < 1) {
-		err = -ENODEV;
-		goto out;
-	}
-
-	tx = ioat_dma_prep_memcpy(dma_chan, IOAT_TEST_SIZE, 0);
-	async_tx_ack(tx);
-	addr = dma_map_single(dma_chan->device->dev, src, IOAT_TEST_SIZE,
-			DMA_TO_DEVICE);
-	ioat_set_src(addr, tx, 0);
-	addr = dma_map_single(dma_chan->device->dev, dest, IOAT_TEST_SIZE,
-			DMA_FROM_DEVICE);
-	ioat_set_dest(addr, tx, 0);
-	cookie = ioat_tx_submit(tx);
-	ioat_dma_memcpy_issue_pending(dma_chan);
-	msleep(1);
-
-	if (ioat_dma_is_complete(dma_chan, cookie, NULL, NULL) != DMA_SUCCESS) {
-		printk(KERN_ERR "ioatdma: Self-test copy timed out, disabling\n");
-		err = -ENODEV;
-		goto free_resources;
-	}
-	if (memcmp(src, dest, IOAT_TEST_SIZE)) {
-		printk(KERN_ERR "ioatdma: Self-test copy failed compare, disabling\n");
-		err = -ENODEV;
-		goto free_resources;
-	}
-
-free_resources:
-	ioat_dma_free_chan_resources(dma_chan);
-out:
-	kfree(src);
-	kfree(dest);
-	return err;
-}
-
-static int __devinit ioat_probe(struct pci_dev *pdev,
-                                const struct pci_device_id *ent)
-{
-	int err;
-	unsigned long mmio_start, mmio_len;
-	void __iomem *reg_base;
-	struct ioat_device *device;
-
-	err = pci_enable_device(pdev);
-	if (err)
-		goto err_enable_device;
-
-	err = pci_set_dma_mask(pdev, DMA_64BIT_MASK);
-	if (err)
-		err = pci_set_dma_mask(pdev, DMA_32BIT_MASK);
-	if (err)
-		goto err_set_dma_mask;
-
-	err = pci_request_regions(pdev, ioat_pci_driver.name);
-	if (err)
-		goto err_request_regions;
-
-	mmio_start = pci_resource_start(pdev, 0);
-	mmio_len = pci_resource_len(pdev, 0);
-
-	reg_base = ioremap(mmio_start, mmio_len);
-	if (!reg_base) {
-		err = -ENOMEM;
-		goto err_ioremap;
-	}
-
-	device = kzalloc(sizeof(*device), GFP_KERNEL);
-	if (!device) {
-		err = -ENOMEM;
-		goto err_kzalloc;
-	}
-
-	/* DMA coherent memory pool for DMA descriptor allocations */
-	device->dma_pool = pci_pool_create("dma_desc_pool", pdev,
-		sizeof(struct ioat_dma_descriptor), 64, 0);
-	if (!device->dma_pool) {
-		err = -ENOMEM;
-		goto err_dma_pool;
-	}
-
-	device->completion_pool = pci_pool_create("completion_pool", pdev, sizeof(u64), SMP_CACHE_BYTES, SMP_CACHE_BYTES);
-	if (!device->completion_pool) {
-		err = -ENOMEM;
-		goto err_completion_pool;
-	}
-
-	device->pdev = pdev;
-	pci_set_drvdata(pdev, device);
-#ifdef CONFIG_PCI_MSI
-	if (pci_enable_msi(pdev) == 0) {
-		device->msi = 1;
-	} else {
-		device->msi = 0;
-	}
-#endif
-	err = request_irq(pdev->irq, &ioat_do_interrupt, IRQF_SHARED, "ioat",
-		device);
-	if (err)
-		goto err_irq;
-
-	device->reg_base = reg_base;
-
-	writeb(IOAT_INTRCTRL_MASTER_INT_EN, device->reg_base + IOAT_INTRCTRL_OFFSET);
-	pci_set_master(pdev);
-
-	INIT_LIST_HEAD(&device->common.channels);
-	enumerate_dma_channels(device);
-
-	dma_cap_set(DMA_MEMCPY, device->common.cap_mask);
-	device->common.device_alloc_chan_resources = ioat_dma_alloc_chan_resources;
-	device->common.device_free_chan_resources = ioat_dma_free_chan_resources;
-	device->common.device_prep_dma_memcpy = ioat_dma_prep_memcpy;
-	device->common.device_is_tx_complete = ioat_dma_is_complete;
-	device->common.device_issue_pending = ioat_dma_memcpy_issue_pending;
-	device->common.device_dependency_added = ioat_dma_dependency_added;
-	device->common.dev = &pdev->dev;
-	printk(KERN_INFO "Intel(R) I/OAT DMA Engine found, %d channels\n",
-		device->common.chancnt);
-
-	err = ioat_self_test(device);
-	if (err)
-		goto err_self_test;
-
-	dma_async_device_register(&device->common);
-
-	return 0;
-
-err_self_test:
-err_irq:
-	pci_pool_destroy(device->completion_pool);
-err_completion_pool:
-	pci_pool_destroy(device->dma_pool);
-err_dma_pool:
-	kfree(device);
-err_kzalloc:
-	iounmap(reg_base);
-err_ioremap:
-	pci_release_regions(pdev);
-err_request_regions:
-err_set_dma_mask:
-	pci_disable_device(pdev);
-err_enable_device:
-
-	printk(KERN_ERR "Intel(R) I/OAT DMA Engine initialization failed\n");
-
-	return err;
-}
-
-static void ioat_shutdown(struct pci_dev *pdev)
-{
-	struct ioat_device *device;
-	device = pci_get_drvdata(pdev);
-
-	dma_async_device_unregister(&device->common);
-}
-
-static void __devexit ioat_remove(struct pci_dev *pdev)
-{
-	struct ioat_device *device;
-	struct dma_chan *chan, *_chan;
-	struct ioat_dma_chan *ioat_chan;
-
-	device = pci_get_drvdata(pdev);
-	dma_async_device_unregister(&device->common);
-
-	free_irq(device->pdev->irq, device);
-#ifdef CONFIG_PCI_MSI
-	if (device->msi)
-		pci_disable_msi(device->pdev);
-#endif
-	pci_pool_destroy(device->dma_pool);
-	pci_pool_destroy(device->completion_pool);
-	iounmap(device->reg_base);
-	pci_release_regions(pdev);
-	pci_disable_device(pdev);
-	list_for_each_entry_safe(chan, _chan, &device->common.channels, device_node) {
-		ioat_chan = to_ioat_chan(chan);
-		list_del(&chan->device_node);
-		kfree(ioat_chan);
-	}
-	kfree(device);
-}
-
-/* MODULE API */
-MODULE_VERSION("1.9");
-MODULE_LICENSE("GPL");
-MODULE_AUTHOR("Intel Corporation");
-
-static int __init ioat_init_module(void)
-{
-	/* it's currently unsafe to unload this module */
-	/* if forced, worst case is that rmmod hangs */
-	__unsafe(THIS_MODULE);
-
-	return pci_register_driver(&ioat_pci_driver);
-}
-
-module_init(ioat_init_module);
-
-static void __exit ioat_exit_module(void)
-{
-	pci_unregister_driver(&ioat_pci_driver);
-}
-
-module_exit(ioat_exit_module);


* [PATCH 3/7] I/OAT: code cleanup from checkpatch output
  2007-07-20  0:44 [PATCH 0/7] I/OAT: Add support for DCA - Direct Cache Access Shannon Nelson
  2007-07-20  0:44 ` [PATCH 1/7] I/OAT: New device ids Shannon Nelson
  2007-07-20  0:44 ` [PATCH 2/7] I/OAT: Rename the source file Shannon Nelson
@ 2007-07-20  0:45 ` Shannon Nelson
  2007-07-20  0:49   ` David Miller
  2007-07-20  0:45 ` [PATCH 4/7] I/OAT: Split PCI startup from DMA handling code Shannon Nelson
                   ` (3 subsequent siblings)
  6 siblings, 1 reply; 33+ messages in thread
From: Shannon Nelson @ 2007-07-20  0:45 UTC
  To: akpm, linux-kernel
  Cc: davem, jeff, dan.j.williams, christopher.leech, peter.p.waskiewicz.jr

Take care of a number of small code nits in the ioatdma files, most of
them reported by checkpatch.
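
One representative nit, to show the class of change (both versions are
taken from the diff below): bare printk() calls gain a severity and a
device context by becoming dev_err() against the owning PCI device:

	/* before: no KERN_ level, no hint of which device complained */
	printk("IOAT: CHANERR = %x, clearing\n", chanerr);

	/* after: severity plus the exact PCI device in the message */
	dev_err(&ioat_chan->device->pdev->dev,
		"ioatdma: CHANERR = %x, clearing\n", chanerr);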

Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
---

 drivers/dma/ioat_dma.c |  200 +++++++++++++++++++++++++++---------------------
 1 files changed, 111 insertions(+), 89 deletions(-)

diff --git a/drivers/dma/ioat_dma.c b/drivers/dma/ioat_dma.c
index 52e2ac2..0a56361 100644
--- a/drivers/dma/ioat_dma.c
+++ b/drivers/dma/ioat_dma.c
@@ -1,10 +1,10 @@
 /*
- * Copyright(c) 2004 - 2006 Intel Corporation. All rights reserved.
+ * Intel I/OAT DMA Linux driver
+ * Copyright(c) 2004 - 2007 Intel Corporation.
  *
  * This program is free software; you can redistribute it and/or modify it
- * under the terms of the GNU General Public License as published by the Free
- * Software Foundation; either version 2 of the License, or (at your option)
- * any later version.
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
  *
  * This program is distributed in the hope that it will be useful, but WITHOUT
  * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
@@ -12,11 +12,12 @@
  * more details.
  *
  * You should have received a copy of the GNU General Public License along with
- * this program; if not, write to the Free Software Foundation, Inc., 59
- * Temple Place - Suite 330, Boston, MA  02111-1307, USA.
+ * this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
+ *
+ * The full GNU General Public License is included in this distribution in
+ * the file called "COPYING".
  *
- * The full GNU General Public License is included in this distribution in the
- * file called COPYING.
  */
 
 /*
@@ -35,17 +36,22 @@
 #include "ioatdma_registers.h"
 #include "ioatdma_hw.h"
 
+#define INITIAL_IOAT_DESC_COUNT 128
+
 #define to_ioat_chan(chan) container_of(chan, struct ioat_dma_chan, common)
 #define to_ioat_device(dev) container_of(dev, struct ioat_device, common)
 #define to_ioat_desc(lh) container_of(lh, struct ioat_desc_sw, node)
 #define tx_to_ioat_desc(tx) container_of(tx, struct ioat_desc_sw, async_tx)
 
 /* internal functions */
-static int __devinit ioat_probe(struct pci_dev *pdev, const struct pci_device_id *ent);
+static void ioat_dma_start_null_desc(struct ioat_dma_chan *ioat_chan);
+static void ioat_dma_memcpy_cleanup(struct ioat_dma_chan *ioat_chan);
+static int __devinit ioat_probe(struct pci_dev *pdev,
+				const struct pci_device_id *ent);
 static void ioat_shutdown(struct pci_dev *pdev);
 static void __devexit ioat_remove(struct pci_dev *pdev);
 
-static int enumerate_dma_channels(struct ioat_device *device)
+static int ioat_dma_enumerate_channels(struct ioat_device *device)
 {
 	u8 xfercap_scale;
 	u32 xfercap;
@@ -73,13 +79,14 @@ static int enumerate_dma_channels(struct ioat_device *device)
 		/* This should be made common somewhere in dmaengine.c */
 		ioat_chan->common.device = &device->common;
 		list_add_tail(&ioat_chan->common.device_node,
-		              &device->common.channels);
+			      &device->common.channels);
 	}
 	return device->common.chancnt;
 }
 
-static void
-ioat_set_src(dma_addr_t addr, struct dma_async_tx_descriptor *tx, int index)
+static void ioat_set_src(dma_addr_t addr,
+			 struct dma_async_tx_descriptor *tx,
+			 int index)
 {
 	struct ioat_desc_sw *iter, *desc = tx_to_ioat_desc(tx);
 	struct ioat_dma_chan *ioat_chan = to_ioat_chan(tx->chan);
@@ -93,8 +100,9 @@ ioat_set_src(dma_addr_t addr, struct dma_async_tx_descriptor *tx, int index)
 
 }
 
-static void
-ioat_set_dest(dma_addr_t addr, struct dma_async_tx_descriptor *tx, int index)
+static void ioat_set_dest(dma_addr_t addr,
+			  struct dma_async_tx_descriptor *tx,
+			  int index)
 {
 	struct ioat_desc_sw *iter, *desc = tx_to_ioat_desc(tx);
 	struct ioat_dma_chan *ioat_chan = to_ioat_chan(tx->chan);
@@ -107,8 +115,7 @@ ioat_set_dest(dma_addr_t addr, struct dma_async_tx_descriptor *tx, int index)
 	}
 }
 
-static dma_cookie_t
-ioat_tx_submit(struct dma_async_tx_descriptor *tx)
+static dma_cookie_t ioat_tx_submit(struct dma_async_tx_descriptor *tx)
 {
 	struct ioat_dma_chan *ioat_chan = to_ioat_chan(tx->chan);
 	struct ioat_desc_sw *desc = tx_to_ioat_desc(tx);
@@ -141,13 +148,13 @@ ioat_tx_submit(struct dma_async_tx_descriptor *tx)
 	if (append)
 		writeb(IOAT_CHANCMD_APPEND,
 			ioat_chan->reg_base + IOAT_CHANCMD_OFFSET);
-	
+
 	return cookie;
 }
 
 static struct ioat_desc_sw *ioat_dma_alloc_descriptor(
-	struct ioat_dma_chan *ioat_chan,
-	gfp_t flags)
+					struct ioat_dma_chan *ioat_chan,
+					gfp_t flags)
 {
 	struct ioat_dma_descriptor *desc;
 	struct ioat_desc_sw *desc_sw;
@@ -177,10 +184,6 @@ static struct ioat_desc_sw *ioat_dma_alloc_descriptor(
 	return desc_sw;
 }
 
-#define INITIAL_IOAT_DESC_COUNT 128
-
-static void ioat_start_null_desc(struct ioat_dma_chan *ioat_chan);
-
 /* returns the actual number of allocated descriptors */
 static int ioat_dma_alloc_chan_resources(struct dma_chan *chan)
 {
@@ -199,16 +202,17 @@ static int ioat_dma_alloc_chan_resources(struct dma_chan *chan)
 	if (chanctrl & IOAT_CHANCTRL_CHANNEL_IN_USE)
 		return -EBUSY;
 
-        /* Setup register to interrupt and write completion status on error */
+	/* Setup register to interrupt and write completion status on error */
 	chanctrl = IOAT_CHANCTRL_CHANNEL_IN_USE |
 		IOAT_CHANCTRL_ERR_INT_EN |
 		IOAT_CHANCTRL_ANY_ERR_ABORT_EN |
 		IOAT_CHANCTRL_ERR_COMPLETION_EN;
-        writew(chanctrl, ioat_chan->reg_base + IOAT_CHANCTRL_OFFSET);
+	writew(chanctrl, ioat_chan->reg_base + IOAT_CHANCTRL_OFFSET);
 
 	chanerr = readl(ioat_chan->reg_base + IOAT_CHANERR_OFFSET);
 	if (chanerr) {
-		printk("IOAT: CHANERR = %x, clearing\n", chanerr);
+		dev_err(&ioat_chan->device->pdev->dev,
+			"ioatdma: CHANERR = %x, clearing\n", chanerr);
 		writel(chanerr, ioat_chan->reg_base + IOAT_CHANERR_OFFSET);
 	}
 
@@ -216,7 +220,8 @@ static int ioat_dma_alloc_chan_resources(struct dma_chan *chan)
 	for (i = 0; i < INITIAL_IOAT_DESC_COUNT; i++) {
 		desc = ioat_dma_alloc_descriptor(ioat_chan, GFP_KERNEL);
 		if (!desc) {
-			printk(KERN_ERR "IOAT: Only %d initial descriptors\n", i);
+			dev_err(&ioat_chan->device->pdev->dev,
+				"ioatdma: Only %d initial descriptors\n", i);
 			break;
 		}
 		list_add_tail(&desc->node, &tmp_list);
@@ -229,8 +234,8 @@ static int ioat_dma_alloc_chan_resources(struct dma_chan *chan)
 	/* doing 2 32bit writes to mmio since 1 64b write doesn't work */
 	ioat_chan->completion_virt =
 		pci_pool_alloc(ioat_chan->device->completion_pool,
-		               GFP_KERNEL,
-		               &ioat_chan->completion_addr);
+			       GFP_KERNEL,
+			       &ioat_chan->completion_addr);
 	memset(ioat_chan->completion_virt, 0,
 	       sizeof(*ioat_chan->completion_virt));
 	writel(((u64) ioat_chan->completion_addr) & 0x00000000FFFFFFFF,
@@ -238,12 +243,10 @@ static int ioat_dma_alloc_chan_resources(struct dma_chan *chan)
 	writel(((u64) ioat_chan->completion_addr) >> 32,
 	       ioat_chan->reg_base + IOAT_CHANCMP_OFFSET_HIGH);
 
-	ioat_start_null_desc(ioat_chan);
+	ioat_dma_start_null_desc(ioat_chan);
 	return i;
 }
 
-static void ioat_dma_memcpy_cleanup(struct ioat_dma_chan *ioat_chan);
-
 static void ioat_dma_free_chan_resources(struct dma_chan *chan)
 {
 	struct ioat_dma_chan *ioat_chan = to_ioat_chan(chan);
@@ -273,12 +276,13 @@ static void ioat_dma_free_chan_resources(struct dma_chan *chan)
 	spin_unlock_bh(&ioat_chan->desc_lock);
 
 	pci_pool_free(ioat_device->completion_pool,
-	              ioat_chan->completion_virt,
-	              ioat_chan->completion_addr);
+		      ioat_chan->completion_virt,
+		      ioat_chan->completion_addr);
 
 	/* one is ok since we left it on there on purpose */
 	if (in_use_descs > 1)
-		printk(KERN_ERR "IOAT: Freeing %d in use descriptors!\n",
+		dev_err(&ioat_chan->device->pdev->dev,
+			"ioatdma: Freeing %d in use descriptors!\n",
 			in_use_descs - 1);
 
 	ioat_chan->last_completion = ioat_chan->completion_addr = 0;
@@ -289,8 +293,10 @@ static void ioat_dma_free_chan_resources(struct dma_chan *chan)
 	writew(chanctrl, ioat_chan->reg_base + IOAT_CHANCTRL_OFFSET);
 }
 
-static struct dma_async_tx_descriptor *
-ioat_dma_prep_memcpy(struct dma_chan *chan, size_t len, int int_en)
+static struct dma_async_tx_descriptor *ioat_dma_prep_memcpy(
+						struct dma_chan *chan,
+						size_t len,
+						int int_en)
 {
 	struct ioat_dma_chan *ioat_chan = to_ioat_chan(chan);
 	struct ioat_desc_sw *first, *prev, *new;
@@ -354,12 +360,11 @@ ioat_dma_prep_memcpy(struct dma_chan *chan, size_t len, int int_en)
 	return new ? &new->async_tx : NULL;
 }
 
-
 /**
- * ioat_dma_memcpy_issue_pending - push potentially unrecognized appended descriptors to hw
+ * ioat_dma_memcpy_issue_pending - push potentially unrecognized appended
+ *                                 descriptors to hw
  * @chan: DMA channel handle
  */
-
 static void ioat_dma_memcpy_issue_pending(struct dma_chan *chan)
 {
 	struct ioat_dma_chan *ioat_chan = to_ioat_chan(chan);
@@ -371,15 +376,15 @@ static void ioat_dma_memcpy_issue_pending(struct dma_chan *chan)
 	}
 }
 
-static void ioat_dma_memcpy_cleanup(struct ioat_dma_chan *chan)
+static void ioat_dma_memcpy_cleanup(struct ioat_dma_chan *ioat_chan)
 {
 	unsigned long phys_complete;
 	struct ioat_desc_sw *desc, *_desc;
 	dma_cookie_t cookie = 0;
 
-	prefetch(chan->completion_virt);
+	prefetch(ioat_chan->completion_virt);
 
-	if (!spin_trylock(&chan->cleanup_lock))
+	if (!spin_trylock(&ioat_chan->cleanup_lock))
 		return;
 
 	/* The completion writeback can happen at any time,
@@ -389,26 +394,27 @@ static void ioat_dma_memcpy_cleanup(struct ioat_dma_chan *chan)
 
 #if (BITS_PER_LONG == 64)
 	phys_complete =
-	chan->completion_virt->full & IOAT_CHANSTS_COMPLETED_DESCRIPTOR_ADDR;
+	ioat_chan->completion_virt->full & IOAT_CHANSTS_COMPLETED_DESCRIPTOR_ADDR;
 #else
-	phys_complete = chan->completion_virt->low & IOAT_LOW_COMPLETION_MASK;
+	phys_complete = ioat_chan->completion_virt->low & IOAT_LOW_COMPLETION_MASK;
 #endif
 
-	if ((chan->completion_virt->full & IOAT_CHANSTS_DMA_TRANSFER_STATUS) ==
-		IOAT_CHANSTS_DMA_TRANSFER_STATUS_HALTED) {
-		printk("IOAT: Channel halted, chanerr = %x\n",
-			readl(chan->reg_base + IOAT_CHANERR_OFFSET));
+	if ((ioat_chan->completion_virt->full & IOAT_CHANSTS_DMA_TRANSFER_STATUS) ==
+				IOAT_CHANSTS_DMA_TRANSFER_STATUS_HALTED) {
+		dev_err(&ioat_chan->device->pdev->dev,
+			"ioatdma: Channel halted, chanerr = %x\n",
+			readl(ioat_chan->reg_base + IOAT_CHANERR_OFFSET));
 
 		/* TODO do something to salvage the situation */
 	}
 
-	if (phys_complete == chan->last_completion) {
-		spin_unlock(&chan->cleanup_lock);
+	if (phys_complete == ioat_chan->last_completion) {
+		spin_unlock(&ioat_chan->cleanup_lock);
 		return;
 	}
 
-	spin_lock_bh(&chan->desc_lock);
-	list_for_each_entry_safe(desc, _desc, &chan->used_desc, node) {
+	spin_lock_bh(&ioat_chan->desc_lock);
+	list_for_each_entry_safe(desc, _desc, &ioat_chan->used_desc, node) {
 
 		/*
 		 * Incoming DMA requests may use multiple descriptors, due to
@@ -418,31 +424,36 @@ static void ioat_dma_memcpy_cleanup(struct ioat_dma_chan *chan)
 		if (desc->async_tx.cookie) {
 			cookie = desc->async_tx.cookie;
 
-			/* yes we are unmapping both _page and _single alloc'd
-			   regions with unmap_page. Is this *really* that bad?
-			*/
-			pci_unmap_page(chan->device->pdev,
+			/*
+			 * yes we are unmapping both _page and _single alloc'd
+			 * regions with unmap_page. Is this *really* that bad?
+			 */
+			pci_unmap_page(ioat_chan->device->pdev,
 					pci_unmap_addr(desc, dst),
 					pci_unmap_len(desc, dst_len),
 					PCI_DMA_FROMDEVICE);
-			pci_unmap_page(chan->device->pdev,
+			pci_unmap_page(ioat_chan->device->pdev,
 					pci_unmap_addr(desc, src),
 					pci_unmap_len(desc, src_len),
 					PCI_DMA_TODEVICE);
 		}
 
 		if (desc->async_tx.phys != phys_complete) {
-			/* a completed entry, but not the last, so cleanup
+			/*
+			 * a completed entry, but not the last, so cleanup
 			 * if the client is done with the descriptor
 			 */
 			if (desc->async_tx.ack) {
 				list_del(&desc->node);
-				list_add_tail(&desc->node, &chan->free_desc);
+				list_add_tail(&desc->node,
+					      &ioat_chan->free_desc);
 			} else
 				desc->async_tx.cookie = 0;
 		} else {
-			/* last used desc. Do not remove, so we can append from
-			   it, but don't look at it next time, either */
+			/*
+			 * last used desc. Do not remove, so we can append from
+			 * it, but don't look at it next time, either
+			 */
 			desc->async_tx.cookie = 0;
 
 			/* TODO check status bits? */
@@ -450,13 +461,13 @@ static void ioat_dma_memcpy_cleanup(struct ioat_dma_chan *chan)
 		}
 	}
 
-	spin_unlock_bh(&chan->desc_lock);
+	spin_unlock_bh(&ioat_chan->desc_lock);
 
-	chan->last_completion = phys_complete;
+	ioat_chan->last_completion = phys_complete;
 	if (cookie != 0)
-		chan->completed_cookie = cookie;
+		ioat_chan->completed_cookie = cookie;
 
-	spin_unlock(&chan->cleanup_lock);
+	spin_unlock(&ioat_chan->cleanup_lock);
 }
 
 static void ioat_dma_dependency_added(struct dma_chan *chan)
@@ -477,11 +488,10 @@ static void ioat_dma_dependency_added(struct dma_chan *chan)
  * @done: if not %NULL, updated with last completed transaction
  * @used: if not %NULL, updated with last used transaction
  */
-
 static enum dma_status ioat_dma_is_complete(struct dma_chan *chan,
-                                            dma_cookie_t cookie,
-                                            dma_cookie_t *done,
-                                            dma_cookie_t *used)
+					    dma_cookie_t cookie,
+					    dma_cookie_t *done,
+					    dma_cookie_t *used)
 {
 	struct ioat_dma_chan *ioat_chan = to_ioat_chan(chan);
 	dma_cookie_t last_used;
@@ -492,7 +502,7 @@ static enum dma_status ioat_dma_is_complete(struct dma_chan *chan,
 	last_complete = ioat_chan->completed_cookie;
 
 	if (done)
-		*done= last_complete;
+		*done = last_complete;
 	if (used)
 		*used = last_used;
 
@@ -506,7 +516,7 @@ static enum dma_status ioat_dma_is_complete(struct dma_chan *chan,
 	last_complete = ioat_chan->completed_cookie;
 
 	if (done)
-		*done= last_complete;
+		*done = last_complete;
 	if (used)
 		*used = last_used;
 
@@ -549,13 +559,13 @@ static irqreturn_t ioat_do_interrupt(int irq, void *data)
 
 	attnstatus = readl(instance->reg_base + IOAT_ATTNSTATUS_OFFSET);
 
-	printk(KERN_ERR "ioatdma error: interrupt! status %lx\n", attnstatus);
+	printk(KERN_ERR "ioatdma: interrupt! status %lx\n", attnstatus);
 
 	writeb(intrctrl, instance->reg_base + IOAT_INTRCTRL_OFFSET);
 	return IRQ_HANDLED;
 }
 
-static void ioat_start_null_desc(struct ioat_dma_chan *ioat_chan)
+static void ioat_dma_start_null_desc(struct ioat_dma_chan *ioat_chan)
 {
 	struct ioat_desc_sw *desc;
 
@@ -619,9 +629,11 @@ static int ioat_self_test(struct ioat_device *device)
 
 	/* Start copy, using first DMA channel */
 	dma_chan = container_of(device->common.channels.next,
-	                        struct dma_chan,
-	                        device_node);
+				struct dma_chan,
+				device_node);
 	if (ioat_dma_alloc_chan_resources(dma_chan) < 1) {
+		dev_err(&device->pdev->dev,
+			"selftest cannot allocate chan resource\n");
 		err = -ENODEV;
 		goto out;
 	}
@@ -639,12 +651,14 @@ static int ioat_self_test(struct ioat_device *device)
 	msleep(1);
 
 	if (ioat_dma_is_complete(dma_chan, cookie, NULL, NULL) != DMA_SUCCESS) {
-		printk(KERN_ERR "ioatdma: Self-test copy timed out, disabling\n");
+		dev_err(&device->pdev->dev,
+			"ioatdma: Self-test copy timed out, disabling\n");
 		err = -ENODEV;
 		goto free_resources;
 	}
 	if (memcmp(src, dest, IOAT_TEST_SIZE)) {
-		printk(KERN_ERR "ioatdma: Self-test copy failed compare, disabling\n");
+		dev_err(&device->pdev->dev,
+			"ioatdma: Self-test copy failed compare, disabling\n");
 		err = -ENODEV;
 		goto free_resources;
 	}
@@ -658,7 +672,7 @@ out:
 }
 
 static int __devinit ioat_probe(struct pci_dev *pdev,
-                                const struct pci_device_id *ent)
+				const struct pci_device_id *ent)
 {
 	int err;
 	unsigned long mmio_start, mmio_len;
@@ -702,7 +716,9 @@ static int __devinit ioat_probe(struct pci_dev *pdev,
 		goto err_dma_pool;
 	}
 
-	device->completion_pool = pci_pool_create("completion_pool", pdev, sizeof(u64), SMP_CACHE_BYTES, SMP_CACHE_BYTES);
+	device->completion_pool = pci_pool_create("completion_pool", pdev,
+						  sizeof(u64), SMP_CACHE_BYTES,
+						  SMP_CACHE_BYTES);
 	if (!device->completion_pool) {
 		err = -ENOMEM;
 		goto err_completion_pool;
@@ -724,22 +740,26 @@ static int __devinit ioat_probe(struct pci_dev *pdev,
 
 	device->reg_base = reg_base;
 
-	writeb(IOAT_INTRCTRL_MASTER_INT_EN, device->reg_base + IOAT_INTRCTRL_OFFSET);
+	writeb(IOAT_INTRCTRL_MASTER_INT_EN,
+	       device->reg_base + IOAT_INTRCTRL_OFFSET);
 	pci_set_master(pdev);
 
 	INIT_LIST_HEAD(&device->common.channels);
-	enumerate_dma_channels(device);
+	ioat_dma_enumerate_channels(device);
 
 	dma_cap_set(DMA_MEMCPY, device->common.cap_mask);
-	device->common.device_alloc_chan_resources = ioat_dma_alloc_chan_resources;
-	device->common.device_free_chan_resources = ioat_dma_free_chan_resources;
+	device->common.device_alloc_chan_resources =
+						ioat_dma_alloc_chan_resources;
+	device->common.device_free_chan_resources =
+						ioat_dma_free_chan_resources;
 	device->common.device_prep_dma_memcpy = ioat_dma_prep_memcpy;
 	device->common.device_is_tx_complete = ioat_dma_is_complete;
 	device->common.device_issue_pending = ioat_dma_memcpy_issue_pending;
 	device->common.device_dependency_added = ioat_dma_dependency_added;
 	device->common.dev = &pdev->dev;
-	printk(KERN_INFO "Intel(R) I/OAT DMA Engine found, %d channels\n",
-		device->common.chancnt);
+	printk(KERN_INFO " "
+		 "ioatdma: Intel(R) I/OAT DMA Engine found, %d channels\n",
+		 device->common.chancnt);
 
 	err = ioat_self_test(device);
 	if (err)
@@ -765,7 +785,8 @@ err_set_dma_mask:
 	pci_disable_device(pdev);
 err_enable_device:
 
-	printk(KERN_ERR "Intel(R) I/OAT DMA Engine initialization failed\n");
+	printk(KERN_INFO " "
+		"ioatdma: Intel(R) I/OAT DMA Engine initialization failed\n");
 
 	return err;
 }
@@ -797,7 +818,8 @@ static void __devexit ioat_remove(struct pci_dev *pdev)
 	iounmap(device->reg_base);
 	pci_release_regions(pdev);
 	pci_disable_device(pdev);
-	list_for_each_entry_safe(chan, _chan, &device->common.channels, device_node) {
+	list_for_each_entry_safe(chan, _chan,
+				 &device->common.channels, device_node) {
 		ioat_chan = to_ioat_chan(chan);
 		list_del(&chan->device_node);
 		kfree(ioat_chan);

^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH 4/7] I/OAT: Split PCI startup from DMA handling code
  2007-07-20  0:44 [PATCH 0/7] I/OAT: Add support for DCA - Direct Cache Access Shannon Nelson
                   ` (2 preceding siblings ...)
  2007-07-20  0:45 ` [PATCH 3/7] I/OAT: code cleanup from checkpatch output Shannon Nelson
@ 2007-07-20  0:45 ` Shannon Nelson
  2007-07-20  0:50   ` David Miller
  2007-07-20 10:53   ` Andrey Panin
  2007-07-20  0:45 ` [PATCH 5/7] I/OAT: Add support for MSI and MSI-X Shannon Nelson
                   ` (2 subsequent siblings)
  6 siblings, 2 replies; 33+ messages in thread
From: Shannon Nelson @ 2007-07-20  0:45 UTC (permalink / raw)
  To: akpm, linux-kernel
  Cc: davem, jeff, dan.j.williams, christopher.leech, peter.p.waskiewicz.jr

Split the general PCI startup from the DMA handling code in order to
prepare for adding support for DCA services and future versions of the
ioatdma device.
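
As a sketch of where the split is headed, the version dispatch in the
new ioat.c gains a case per hardware revision; the IOAT_VER_2_0 case
and ioat2_dma_probe() below are hypothetical, since only IOAT_VER_1_2
exists as of this series:

	version = readb(iobase + IOAT_VER_OFFSET);
	switch (version) {
	case IOAT_VER_1_2:
		device->dma = ioat_dma_probe(pdev, iobase);
		break;
	case IOAT_VER_2_0:	/* hypothetical future revision */
		device->dma = ioat2_dma_probe(pdev, iobase);
		break;
	default:
		err = -ENODEV;
		break;
	}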

Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
---

 drivers/dma/Makefile     |    2 
 drivers/dma/ioat.c       |  186 ++++++++++++++++++++++++++++++++++++++++++++
 drivers/dma/ioat_dma.c   |  196 +++++++++++-----------------------------------
 drivers/dma/ioatdma.h    |   16 +++-
 drivers/dma/ioatdma_hw.h |    2 
 5 files changed, 245 insertions(+), 157 deletions(-)

diff --git a/drivers/dma/Makefile b/drivers/dma/Makefile
index 77bee99..cec0c9c 100644
--- a/drivers/dma/Makefile
+++ b/drivers/dma/Makefile
@@ -1,5 +1,5 @@
 obj-$(CONFIG_DMA_ENGINE) += dmaengine.o
 obj-$(CONFIG_NET_DMA) += iovlock.o
 obj-$(CONFIG_INTEL_IOATDMA) += ioatdma.o
-ioatdma-objs := ioat_dma.o
+ioatdma-objs := ioat.o ioat_dma.o
 obj-$(CONFIG_INTEL_IOP_ADMA) += iop-adma.o
diff --git a/drivers/dma/ioat.c b/drivers/dma/ioat.c
new file mode 100644
index 0000000..9d9f672
--- /dev/null
+++ b/drivers/dma/ioat.c
@@ -0,0 +1,186 @@
+/*
+ * Intel I/OAT DMA Linux driver
+ * Copyright(c) 2004 - 2007 Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
+ *
+ * The full GNU General Public License is included in this distribution in
+ * the file called "COPYING".
+ *
+ */
+
+/*
+ * This driver supports an Intel I/OAT DMA engine, which does asynchronous
+ * copy operations.
+ */
+
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/pci.h>
+#include <linux/interrupt.h>
+#include "ioatdma.h"
+#include "ioatdma_registers.h"
+#include "ioatdma_hw.h"
+
+MODULE_VERSION("1.24");
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Intel Corporation");
+
+static struct pci_device_id ioat_pci_tbl[] = {
+	{ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_IOAT) },
+	{ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_IOAT_CNB)  },
+	{ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_IOAT_SCNB) },
+	{ PCI_DEVICE(PCI_VENDOR_ID_UNISYS, PCI_DEVICE_ID_UNISYS_DMA_DIRECTOR) },
+	{ 0, }
+};
+
+struct ioat_device {
+	struct pci_dev		*pdev;
+	void __iomem		*iobase;
+	struct ioatdma_device	*dma;
+};
+
+static int __devinit ioat_probe(struct pci_dev *pdev,
+				const struct pci_device_id *id);
+static void __devexit ioat_remove(struct pci_dev *pdev);
+
+static int ioat_setup_functionality(struct pci_dev *pdev, void __iomem *iobase)
+{
+	struct ioat_device *device = pci_get_drvdata(pdev);
+	u8 version;
+	int err = 0;
+
+	version = readb(iobase + IOAT_VER_OFFSET);
+	switch (version) {
+	case IOAT_VER_1_2:
+		device->dma = ioat_dma_probe(pdev, iobase);
+		break;
+	default:
+		err = -ENODEV;
+		break;
+	}
+	return err;
+}
+
+static void ioat_shutdown_functionality(struct pci_dev *pdev)
+{
+	struct ioat_device *device = pci_get_drvdata(pdev);
+
+	if (device->dma) {
+		ioat_dma_remove(device->dma);
+		device->dma = NULL;
+	}
+}
+
+static struct pci_driver ioat_pci_drv = {
+	.name		= "ioatdma",
+	.id_table	= ioat_pci_tbl,
+	.probe		= ioat_probe,
+	.shutdown	= ioat_shutdown_functionality,
+	.remove		= __devexit_p(ioat_remove),
+};
+
+static int __devinit ioat_probe(struct pci_dev *pdev,
+				const struct pci_device_id *id)
+{
+	void __iomem *iobase;
+	struct ioat_device *device;
+	unsigned long mmio_start, mmio_len;
+	int err;
+
+	err = pci_enable_device(pdev);
+	if (err)
+		goto err_enable_device;
+
+	err = pci_request_regions(pdev, ioat_pci_drv.name);
+	if (err)
+		goto err_request_regions;
+
+	err = pci_set_dma_mask(pdev, DMA_64BIT_MASK);
+	if (err)
+		err = pci_set_dma_mask(pdev, DMA_32BIT_MASK);
+	if (err)
+		goto err_set_dma_mask;
+
+	err = pci_set_consistent_dma_mask(pdev, DMA_64BIT_MASK);
+	if (err)
+		err = pci_set_consistent_dma_mask(pdev, DMA_32BIT_MASK);
+	if (err)
+		goto err_set_dma_mask;
+
+	mmio_start = pci_resource_start(pdev, 0);
+	mmio_len = pci_resource_len(pdev, 0);
+	iobase = ioremap(mmio_start, mmio_len);
+	if (!iobase) {
+		err = -ENOMEM;
+		goto err_ioremap;
+	}
+
+	device = kzalloc(sizeof(*device), GFP_KERNEL);
+	if (!device) {
+		err = -ENOMEM;
+		goto err_kzalloc;
+	}
+	device->pdev = pdev;
+	pci_set_drvdata(pdev, device);
+	device->iobase = iobase;
+
+	pci_set_master(pdev);
+
+	err = ioat_setup_functionality(pdev, iobase);
+	if (err)
+		goto err_version;
+
+	return 0;
+
+err_version:
+	kfree(device);
+err_kzalloc:
+	iounmap(iobase);
+err_ioremap:
+err_set_dma_mask:
+	pci_release_regions(pdev);
+	pci_disable_device(pdev);
+err_request_regions:
+err_enable_device:
+	return err;
+}
+
+static void __devexit ioat_remove(struct pci_dev *pdev)
+{
+	struct ioat_device *device = pci_get_drvdata(pdev);
+
+	ioat_shutdown_functionality(pdev);
+
+	iounmap(device->iobase);
+	pci_release_regions(pdev);
+	pci_disable_device(pdev);
+
+	kfree(device);
+}
+
+static int __init ioat_init_module(void)
+{
+	/* it's currently unsafe to unload this module */
+	/* if forced, worst case is that rmmod hangs */
+	__unsafe(THIS_MODULE);
+	return pci_register_driver(&ioat_pci_drv);
+}
+module_init(ioat_init_module);
+
+static void __exit ioat_exit_module(void)
+{
+	pci_unregister_driver(&ioat_pci_drv);
+}
+module_exit(ioat_exit_module);
diff --git a/drivers/dma/ioat_dma.c b/drivers/dma/ioat_dma.c
index 0a56361..62bea23 100644
--- a/drivers/dma/ioat_dma.c
+++ b/drivers/dma/ioat_dma.c
@@ -39,19 +39,15 @@
 #define INITIAL_IOAT_DESC_COUNT 128
 
 #define to_ioat_chan(chan) container_of(chan, struct ioat_dma_chan, common)
-#define to_ioat_device(dev) container_of(dev, struct ioat_device, common)
+#define to_ioatdma_device(dev) container_of(dev, struct ioatdma_device, common)
 #define to_ioat_desc(lh) container_of(lh, struct ioat_desc_sw, node)
 #define tx_to_ioat_desc(tx) container_of(tx, struct ioat_desc_sw, async_tx)
 
 /* internal functions */
 static void ioat_dma_start_null_desc(struct ioat_dma_chan *ioat_chan);
 static void ioat_dma_memcpy_cleanup(struct ioat_dma_chan *ioat_chan);
-static int __devinit ioat_probe(struct pci_dev *pdev,
-				const struct pci_device_id *ent);
-static void ioat_shutdown(struct pci_dev *pdev);
-static void __devexit ioat_remove(struct pci_dev *pdev);
 
-static int ioat_dma_enumerate_channels(struct ioat_device *device)
+static int ioat_dma_enumerate_channels(struct ioatdma_device *device)
 {
 	u8 xfercap_scale;
 	u32 xfercap;
@@ -158,17 +154,17 @@ static struct ioat_desc_sw *ioat_dma_alloc_descriptor(
 {
 	struct ioat_dma_descriptor *desc;
 	struct ioat_desc_sw *desc_sw;
-	struct ioat_device *ioat_device;
+	struct ioatdma_device *ioatdma_device;
 	dma_addr_t phys;
 
-	ioat_device = to_ioat_device(ioat_chan->common.device);
-	desc = pci_pool_alloc(ioat_device->dma_pool, flags, &phys);
+	ioatdma_device = to_ioatdma_device(ioat_chan->common.device);
+	desc = pci_pool_alloc(ioatdma_device->dma_pool, flags, &phys);
 	if (unlikely(!desc))
 		return NULL;
 
 	desc_sw = kzalloc(sizeof(*desc_sw), flags);
 	if (unlikely(!desc_sw)) {
-		pci_pool_free(ioat_device->dma_pool, desc, phys);
+		pci_pool_free(ioatdma_device->dma_pool, desc, phys);
 		return NULL;
 	}
 
@@ -194,17 +190,12 @@ static int ioat_dma_alloc_chan_resources(struct dma_chan *chan)
 	int i;
 	LIST_HEAD(tmp_list);
 
-	/*
-	 * In-use bit automatically set by reading chanctrl
-	 * If 0, we got it, if 1, someone else did
-	 */
-	chanctrl = readw(ioat_chan->reg_base + IOAT_CHANCTRL_OFFSET);
-	if (chanctrl & IOAT_CHANCTRL_CHANNEL_IN_USE)
+	/* have we already been set up? */
+	if (ioat_chan->free_desc.next != &ioat_chan->free_desc)
 		return -EBUSY;
 
 	/* Setup register to interrupt and write completion status on error */
-	chanctrl = IOAT_CHANCTRL_CHANNEL_IN_USE |
-		IOAT_CHANCTRL_ERR_INT_EN |
+	chanctrl = IOAT_CHANCTRL_ERR_INT_EN |
 		IOAT_CHANCTRL_ANY_ERR_ABORT_EN |
 		IOAT_CHANCTRL_ERR_COMPLETION_EN;
 	writew(chanctrl, ioat_chan->reg_base + IOAT_CHANCTRL_OFFSET);
@@ -250,9 +241,8 @@ static int ioat_dma_alloc_chan_resources(struct dma_chan *chan)
 static void ioat_dma_free_chan_resources(struct dma_chan *chan)
 {
 	struct ioat_dma_chan *ioat_chan = to_ioat_chan(chan);
-	struct ioat_device *ioat_device = to_ioat_device(chan->device);
+	struct ioatdma_device *ioatdma_device = to_ioatdma_device(chan->device);
 	struct ioat_desc_sw *desc, *_desc;
-	u16 chanctrl;
 	int in_use_descs = 0;
 
 	ioat_dma_memcpy_cleanup(ioat_chan);
@@ -263,19 +253,19 @@ static void ioat_dma_free_chan_resources(struct dma_chan *chan)
 	list_for_each_entry_safe(desc, _desc, &ioat_chan->used_desc, node) {
 		in_use_descs++;
 		list_del(&desc->node);
-		pci_pool_free(ioat_device->dma_pool, desc->hw,
+		pci_pool_free(ioatdma_device->dma_pool, desc->hw,
 			      desc->async_tx.phys);
 		kfree(desc);
 	}
 	list_for_each_entry_safe(desc, _desc, &ioat_chan->free_desc, node) {
 		list_del(&desc->node);
-		pci_pool_free(ioat_device->dma_pool, desc->hw,
+		pci_pool_free(ioatdma_device->dma_pool, desc->hw,
 			      desc->async_tx.phys);
 		kfree(desc);
 	}
 	spin_unlock_bh(&ioat_chan->desc_lock);
 
-	pci_pool_free(ioat_device->completion_pool,
+	pci_pool_free(ioatdma_device->completion_pool,
 		      ioat_chan->completion_virt,
 		      ioat_chan->completion_addr);
 
@@ -286,11 +276,6 @@ static void ioat_dma_free_chan_resources(struct dma_chan *chan)
 			in_use_descs - 1);
 
 	ioat_chan->last_completion = ioat_chan->completion_addr = 0;
-
-	/* Tell hw the chan is free */
-	chanctrl = readw(ioat_chan->reg_base + IOAT_CHANCTRL_OFFSET);
-	chanctrl &= ~IOAT_CHANCTRL_CHANNEL_IN_USE;
-	writew(chanctrl, ioat_chan->reg_base + IOAT_CHANCTRL_OFFSET);
 }
 
 static struct dma_async_tx_descriptor *ioat_dma_prep_memcpy(
@@ -525,25 +510,9 @@ static enum dma_status ioat_dma_is_complete(struct dma_chan *chan,
 
 /* PCI API */
 
-static struct pci_device_id ioat_pci_tbl[] = {
-	{ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_IOAT) },
-	{ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_IOAT_CNB)  },
-	{ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_IOAT_SCNB) },
-	{ PCI_DEVICE(PCI_VENDOR_ID_UNISYS, PCI_DEVICE_ID_UNISYS_DMA_DIRECTOR) },
-	{ 0, }
-};
-
-static struct pci_driver ioat_pci_driver = {
-	.name 	= "ioatdma",
-	.id_table = ioat_pci_tbl,
-	.probe	= ioat_probe,
-	.shutdown = ioat_shutdown,
-	.remove	= __devexit_p(ioat_remove),
-};
-
 static irqreturn_t ioat_do_interrupt(int irq, void *data)
 {
-	struct ioat_device *instance = data;
+	struct ioatdma_device *instance = data;
 	unsigned long attnstatus;
 	u8 intrctrl;
 
@@ -603,7 +572,7 @@ static void ioat_dma_start_null_desc(struct ioat_dma_chan *ioat_chan)
  */
 #define IOAT_TEST_SIZE 2000
 
-static int ioat_self_test(struct ioat_device *device)
+static int ioat_self_test(struct ioatdma_device *device)
 {
 	int i;
 	u8 *src;
@@ -671,46 +640,25 @@ out:
 	return err;
 }
 
-static int __devinit ioat_probe(struct pci_dev *pdev,
-				const struct pci_device_id *ent)
+struct ioatdma_device *ioat_dma_probe(struct pci_dev *pdev,
+				      void __iomem *iobase)
 {
 	int err;
-	unsigned long mmio_start, mmio_len;
-	void __iomem *reg_base;
-	struct ioat_device *device;
-
-	err = pci_enable_device(pdev);
-	if (err)
-		goto err_enable_device;
-
-	err = pci_set_dma_mask(pdev, DMA_64BIT_MASK);
-	if (err)
-		err = pci_set_dma_mask(pdev, DMA_32BIT_MASK);
-	if (err)
-		goto err_set_dma_mask;
-
-	err = pci_request_regions(pdev, ioat_pci_driver.name);
-	if (err)
-		goto err_request_regions;
-
-	mmio_start = pci_resource_start(pdev, 0);
-	mmio_len = pci_resource_len(pdev, 0);
-
-	reg_base = ioremap(mmio_start, mmio_len);
-	if (!reg_base) {
-		err = -ENOMEM;
-		goto err_ioremap;
-	}
+	struct ioatdma_device *device;
 
 	device = kzalloc(sizeof(*device), GFP_KERNEL);
 	if (!device) {
 		err = -ENOMEM;
 		goto err_kzalloc;
 	}
+	device->pdev = pdev;
+	device->reg_base = iobase;
+	device->version = readb(device->reg_base + IOAT_VER_OFFSET);
 
 	/* DMA coherent memory pool for DMA descriptor allocations */
 	device->dma_pool = pci_pool_create("dma_desc_pool", pdev,
-		sizeof(struct ioat_dma_descriptor), 64, 0);
+					   sizeof(struct ioat_dma_descriptor),
+					   64, 0);
 	if (!device->dma_pool) {
 		err = -ENOMEM;
 		goto err_dma_pool;
@@ -724,26 +672,6 @@ static int __devinit ioat_probe(struct pci_dev *pdev,
 		goto err_completion_pool;
 	}
 
-	device->pdev = pdev;
-	pci_set_drvdata(pdev, device);
-#ifdef CONFIG_PCI_MSI
-	if (pci_enable_msi(pdev) == 0) {
-		device->msi = 1;
-	} else {
-		device->msi = 0;
-	}
-#endif
-	err = request_irq(pdev->irq, &ioat_do_interrupt, IRQF_SHARED, "ioat",
-		device);
-	if (err)
-		goto err_irq;
-
-	device->reg_base = reg_base;
-
-	writeb(IOAT_INTRCTRL_MASTER_INT_EN,
-	       device->reg_base + IOAT_INTRCTRL_OFFSET);
-	pci_set_master(pdev);
-
 	INIT_LIST_HEAD(&device->common.channels);
 	ioat_dma_enumerate_channels(device);
 
@@ -757,9 +685,19 @@ static int __devinit ioat_probe(struct pci_dev *pdev,
 	device->common.device_issue_pending = ioat_dma_memcpy_issue_pending;
 	device->common.device_dependency_added = ioat_dma_dependency_added;
 	device->common.dev = &pdev->dev;
-	printk(KERN_INFO " "
-		 "ioatdma: Intel(R) I/OAT DMA Engine found, %d channels\n",
-		 device->common.chancnt);
+	printk(KERN_INFO "ioatdma: Intel(R) I/OAT DMA Engine found,"
+	       " %d channels, device version 0x%02x\n",
+	       device->common.chancnt, device->version);
+
+	pci_set_drvdata(pdev, device);
+	err = request_irq(pdev->irq, &ioat_do_interrupt, IRQF_SHARED, "ioat",
+		device);
+	if (err)
+		goto err_irq;
+
+	writeb(IOAT_INTRCTRL_MASTER_INT_EN,
+	       device->reg_base + IOAT_INTRCTRL_OFFSET);
+	pci_set_master(pdev);
 
 	err = ioat_self_test(device);
 	if (err)
@@ -767,9 +705,10 @@ static int __devinit ioat_probe(struct pci_dev *pdev,
 
 	dma_async_device_register(&device->common);
 
-	return 0;
+	return device;
 
 err_self_test:
+	free_irq(device->pdev->irq, device);
 err_irq:
 	pci_pool_destroy(device->completion_pool);
 err_completion_pool:
@@ -777,47 +716,24 @@ err_completion_pool:
 err_dma_pool:
 	kfree(device);
 err_kzalloc:
-	iounmap(reg_base);
-err_ioremap:
-	pci_release_regions(pdev);
-err_request_regions:
-err_set_dma_mask:
-	pci_disable_device(pdev);
-err_enable_device:
-
-	printk(KERN_INFO " "
-		"ioatdma: Intel(R) I/OAT DMA Engine initialization failed\n");
-
-	return err;
+	iounmap(iobase);
+	printk(KERN_ERR " "
+	       "ioatdma: Intel(R) I/OAT DMA Engine initialization failed\n");
+	return NULL;
 }
 
-static void ioat_shutdown(struct pci_dev *pdev)
+void ioat_dma_remove(struct ioatdma_device *device)
 {
-	struct ioat_device *device;
-	device = pci_get_drvdata(pdev);
-
-	dma_async_device_unregister(&device->common);
-}
-
-static void __devexit ioat_remove(struct pci_dev *pdev)
-{
-	struct ioat_device *device;
 	struct dma_chan *chan, *_chan;
 	struct ioat_dma_chan *ioat_chan;
 
-	device = pci_get_drvdata(pdev);
 	dma_async_device_unregister(&device->common);
 
 	free_irq(device->pdev->irq, device);
-#ifdef CONFIG_PCI_MSI
-	if (device->msi)
-		pci_disable_msi(device->pdev);
-#endif
+
 	pci_pool_destroy(device->dma_pool);
 	pci_pool_destroy(device->completion_pool);
-	iounmap(device->reg_base);
-	pci_release_regions(pdev);
-	pci_disable_device(pdev);
+
 	list_for_each_entry_safe(chan, _chan,
 				 &device->common.channels, device_node) {
 		ioat_chan = to_ioat_chan(chan);
@@ -827,25 +743,3 @@ static void __devexit ioat_remove(struct pci_dev *pdev)
 	kfree(device);
 }
 
-/* MODULE API */
-MODULE_VERSION("1.9");
-MODULE_LICENSE("GPL");
-MODULE_AUTHOR("Intel Corporation");
-
-static int __init ioat_init_module(void)
-{
-	/* it's currently unsafe to unload this module */
-	/* if forced, worst case is that rmmod hangs */
-	__unsafe(THIS_MODULE);
-
-	return pci_register_driver(&ioat_pci_driver);
-}
-
-module_init(ioat_init_module);
-
-static void __exit ioat_exit_module(void)
-{
-	pci_unregister_driver(&ioat_pci_driver);
-}
-
-module_exit(ioat_exit_module);
diff --git a/drivers/dma/ioatdma.h b/drivers/dma/ioatdma.h
index d372647..87c5e00 100644
--- a/drivers/dma/ioatdma.h
+++ b/drivers/dma/ioatdma.h
@@ -31,7 +31,7 @@
 #define IOAT_LOW_COMPLETION_MASK	0xffffffc0
 
 /**
- * struct ioat_device - internal representation of a IOAT device
+ * struct ioatdma_device - internal representation of an IOAT device
  * @pdev: PCI-Express device
  * @reg_base: MMIO register space base address
  * @dma_pool: for allocating DMA descriptors
@@ -39,14 +39,14 @@
  * @msi: Message Signaled Interrupt number
  */
 
-struct ioat_device {
+struct ioatdma_device {
 	struct pci_dev *pdev;
 	void __iomem *reg_base;
 	struct pci_pool *dma_pool;
 	struct pci_pool *completion_pool;
 
 	struct dma_device common;
-	u8 msi;
+	u8 version;
 };
 
 /**
@@ -84,7 +84,7 @@ struct ioat_dma_chan {
 
 	int pending;
 
-	struct ioat_device *device;
+	struct ioatdma_device *device;
 	struct dma_chan common;
 
 	dma_addr_t completion_addr;
@@ -118,4 +118,12 @@ struct ioat_desc_sw {
 	struct dma_async_tx_descriptor async_tx;
 };
 
+#if defined(CONFIG_INTEL_IOATDMA) || defined(CONFIG_INTEL_IOATDMA_MODULE)
+struct ioatdma_device *ioat_dma_probe(struct pci_dev *, void __iomem *);
+void ioat_dma_remove(struct ioatdma_device *device);
+#else
+#define ioat_dma_probe(pdev, io)                NULL
+#define ioat_dma_remove(dev)            do { } while (0)
+#endif
+
 #endif /* IOATDMA_H */
diff --git a/drivers/dma/ioatdma_hw.h b/drivers/dma/ioatdma_hw.h
index 4d7a128..9e7434e 100644
--- a/drivers/dma/ioatdma_hw.h
+++ b/drivers/dma/ioatdma_hw.h
@@ -27,7 +27,7 @@
 #define IOAT_PCI_RID			0x00
 #define IOAT_PCI_SVID			0x8086
 #define IOAT_PCI_SID			0x8086
-#define IOAT_VER			0x12	/* Version 1.2 */
+#define IOAT_VER_1_2			0x12	/* Version 1.2 */
 
 struct ioat_dma_descriptor {
 	uint32_t	size;

^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH 5/7] I/OAT: Add support for MSI and MSI-X
  2007-07-20  0:44 [PATCH 0/7] I/OAT: Add support for DCA - Direct Cache Access Shannon Nelson
                   ` (3 preceding siblings ...)
  2007-07-20  0:45 ` [PATCH 4/7] I/OAT: Split PCI startup from DMA handling code Shannon Nelson
@ 2007-07-20  0:45 ` Shannon Nelson
  2007-07-20  0:51   ` David Miller
  2007-07-20 17:43   ` Roland Dreier
  2007-07-20  0:45 ` [PATCH 6/7] DCA: Add Direct Cache Access driver Shannon Nelson
  2007-07-20  0:45 ` [PATCH 7/7] I/OAT: Add DCA services Shannon Nelson
  6 siblings, 2 replies; 33+ messages in thread
From: Shannon Nelson @ 2007-07-20  0:45 UTC (permalink / raw)
  To: akpm, linux-kernel
  Cc: davem, jeff, dan.j.williams, christopher.leech, peter.p.waskiewicz.jr

Add support for MSI and MSI-X interrupt handling, including the ability
to choose the desired interrupt method.
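
The style is selected at module load time with the ioat_interrupt_style
parameter added below, and the driver falls back step by step toward
legacy INTx if the requested style cannot be set up.  An illustrative
modprobe line:

	modprobe ioatdma ioat_interrupt_style=msi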

Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
---

 drivers/dma/ioat_dma.c          |  353 ++++++++++++++++++++++++++++++++-------
 drivers/dma/ioatdma.h           |   12 +
 drivers/dma/ioatdma_registers.h |    6 +
 3 files changed, 305 insertions(+), 66 deletions(-)

diff --git a/drivers/dma/ioat_dma.c b/drivers/dma/ioat_dma.c
index 62bea23..55f4179 100644
--- a/drivers/dma/ioat_dma.c
+++ b/drivers/dma/ioat_dma.c
@@ -47,6 +47,71 @@
 static void ioat_dma_start_null_desc(struct ioat_dma_chan *ioat_chan);
 static void ioat_dma_memcpy_cleanup(struct ioat_dma_chan *ioat_chan);
 
+#define for_each_bit(bit, addr, size) \
+	for ((bit) = find_first_bit((addr), (size)); \
+	     (bit) < (size); \
+	     (bit) = find_next_bit((addr), (size), (bit) + 1))
+
+
+struct ioat_dma_chan *ioat_lookup_chan_by_index(struct ioatdma_device *device,
+						int index)
+{
+	return device->idx[index];
+}
+
+/**
+ * ioat_dma_do_interrupt - handler used for single vector interrupt mode
+ * @irq: interrupt id
+ * @data: interrupt data
+ */
+static irqreturn_t ioat_dma_do_interrupt(int irq, void *data)
+{
+	struct ioatdma_device *instance = data;
+	struct ioat_dma_chan *ioat_chan;
+	unsigned long attnstatus;
+	int bit;
+	u8 intrctrl;
+
+	intrctrl = readb(instance->reg_base + IOAT_INTRCTRL_OFFSET);
+
+	if (!(intrctrl & IOAT_INTRCTRL_MASTER_INT_EN))
+		return IRQ_NONE;
+
+	if (!(intrctrl & IOAT_INTRCTRL_INT_STATUS)) {
+		writeb(intrctrl, instance->reg_base + IOAT_INTRCTRL_OFFSET);
+		return IRQ_NONE;
+	}
+
+	attnstatus = readl(instance->reg_base + IOAT_ATTNSTATUS_OFFSET);
+	for_each_bit (bit, &attnstatus, BITS_PER_LONG) {
+		ioat_chan = ioat_lookup_chan_by_index(instance, bit);
+		tasklet_schedule(&ioat_chan->cleanup_task);
+	}
+
+	writeb(intrctrl, instance->reg_base + IOAT_INTRCTRL_OFFSET);
+	return IRQ_HANDLED;
+}
+
+/**
+ * ioat_dma_do_interrupt_msix - handler used for vector-per-channel interrupt mode
+ * @irq: interrupt id
+ * @data: interrupt data
+ */
+static irqreturn_t ioat_dma_do_interrupt_msix(int irq, void *data)
+{
+	struct ioat_dma_chan *ioat_chan = data;
+
+	tasklet_schedule(&ioat_chan->cleanup_task);
+
+	return IRQ_HANDLED;
+}
+
+static void ioat_dma_cleanup_tasklet(unsigned long data);
+
+/**
+ * ioat_dma_enumerate_channels - find and initialize the device's channels
+ * @device: the device to be enumerated
+ */
 static int ioat_dma_enumerate_channels(struct ioatdma_device *device)
 {
 	u8 xfercap_scale;
@@ -76,6 +141,11 @@ static int ioat_dma_enumerate_channels(struct ioatdma_device *device)
 		ioat_chan->common.device = &device->common;
 		list_add_tail(&ioat_chan->common.device_node,
 			      &device->common.channels);
+		device->idx[i] = ioat_chan;
+		tasklet_init(&ioat_chan->cleanup_task,
+			     ioat_dma_cleanup_tasklet,
+			     (unsigned long) ioat_chan);
+		tasklet_disable(&ioat_chan->cleanup_task);
 	}
 	return device->common.chancnt;
 }
@@ -234,6 +304,7 @@ static int ioat_dma_alloc_chan_resources(struct dma_chan *chan)
 	writel(((u64) ioat_chan->completion_addr) >> 32,
 	       ioat_chan->reg_base + IOAT_CHANCMP_OFFSET_HIGH);
 
+	tasklet_enable(&ioat_chan->cleanup_task);
 	ioat_dma_start_null_desc(ioat_chan);
 	return i;
 }
@@ -245,9 +316,14 @@ static void ioat_dma_free_chan_resources(struct dma_chan *chan)
 	struct ioat_desc_sw *desc, *_desc;
 	int in_use_descs = 0;
 
+	tasklet_disable(&ioat_chan->cleanup_task);
 	ioat_dma_memcpy_cleanup(ioat_chan);
 
+	/* Delay 100ms after reset to allow internal DMA logic to quiesce
+	 * before removing DMA descriptor resources.
+	 */
 	writeb(IOAT_CHANCMD_RESET, ioat_chan->reg_base + IOAT_CHANCMD_OFFSET);
+	mdelay(100);
 
 	spin_lock_bh(&ioat_chan->desc_lock);
 	list_for_each_entry_safe(desc, _desc, &ioat_chan->used_desc, node) {
@@ -276,6 +352,34 @@ static void ioat_dma_free_chan_resources(struct dma_chan *chan)
 			in_use_descs - 1);
 
 	ioat_chan->last_completion = ioat_chan->completion_addr = 0;
+	ioat_chan->pending = 0;
+}
+/**
+ * ioat_dma_get_next_descriptor - return the next available descriptor from
+ *				  the chain
+ * @ioat_chan: IOAT DMA channel handle
+ * Gets the next descriptor from the chain, and must be called with the
+ * channel's desc_lock held.  Allocates more descriptors if the channel
+ * has run out.
+ */
+static struct ioat_desc_sw *ioat_dma_get_next_descriptor(
+						struct ioat_dma_chan *ioat_chan)
+{
+	struct ioat_desc_sw *new = NULL;
+
+	if (!list_empty(&ioat_chan->free_desc)) {
+		new = to_ioat_desc(ioat_chan->free_desc.next);
+		list_del(&new->node);
+	} else {
+		/* try to get another desc */
+		new = ioat_dma_alloc_descriptor(ioat_chan, GFP_ATOMIC);
+		/* will this ever happen? */
+		/* TODO add upper limit on these */
+		BUG_ON(!new);
+	}
+
+	prefetch(new->hw);
+	return new;
 }
 
 static struct dma_async_tx_descriptor *ioat_dma_prep_memcpy(
@@ -300,17 +404,7 @@ static struct dma_async_tx_descriptor *ioat_dma_prep_memcpy(
 
 	spin_lock_bh(&ioat_chan->desc_lock);
 	while (len) {
-		if (!list_empty(&ioat_chan->free_desc)) {
-			new = to_ioat_desc(ioat_chan->free_desc.next);
-			list_del(&new->node);
-		} else {
-			/* try to get another desc */
-			new = ioat_dma_alloc_descriptor(ioat_chan, GFP_ATOMIC);
-			/* will this ever happen? */
-			/* TODO add upper limit on these */
-			BUG_ON(!new);
-		}
-
+		new = ioat_dma_get_next_descriptor(ioat_chan);
 		copy = min((u32) len, ioat_chan->xfercap);
 
 		new->hw->size = copy;
@@ -361,6 +455,14 @@ static void ioat_dma_memcpy_issue_pending(struct dma_chan *chan)
 	}
 }
 
+static void ioat_dma_cleanup_tasklet(unsigned long data)
+{
+	struct ioat_dma_chan *chan = (void *)data;
+	ioat_dma_memcpy_cleanup(chan);
+	writew(IOAT_CHANCTRL_INT_DISABLE,
+	       chan->reg_base + IOAT_CHANCTRL_OFFSET);
+}
+
 static void ioat_dma_memcpy_cleanup(struct ioat_dma_chan *ioat_chan)
 {
 	unsigned long phys_complete;
@@ -398,6 +500,7 @@ static void ioat_dma_memcpy_cleanup(struct ioat_dma_chan *ioat_chan)
 		return;
 	}
 
+	cookie = 0;
 	spin_lock_bh(&ioat_chan->desc_lock);
 	list_for_each_entry_safe(desc, _desc, &ioat_chan->used_desc, node) {
 
@@ -510,48 +613,13 @@ static enum dma_status ioat_dma_is_complete(struct dma_chan *chan,
 
 /* PCI API */
 
-static irqreturn_t ioat_do_interrupt(int irq, void *data)
-{
-	struct ioatdma_device *instance = data;
-	unsigned long attnstatus;
-	u8 intrctrl;
-
-	intrctrl = readb(instance->reg_base + IOAT_INTRCTRL_OFFSET);
-
-	if (!(intrctrl & IOAT_INTRCTRL_MASTER_INT_EN))
-		return IRQ_NONE;
-
-	if (!(intrctrl & IOAT_INTRCTRL_INT_STATUS)) {
-		writeb(intrctrl, instance->reg_base + IOAT_INTRCTRL_OFFSET);
-		return IRQ_NONE;
-	}
-
-	attnstatus = readl(instance->reg_base + IOAT_ATTNSTATUS_OFFSET);
-
-	printk(KERN_ERR "ioatdma: interrupt! status %lx\n", attnstatus);
-
-	writeb(intrctrl, instance->reg_base + IOAT_INTRCTRL_OFFSET);
-	return IRQ_HANDLED;
-}
-
 static void ioat_dma_start_null_desc(struct ioat_dma_chan *ioat_chan)
 {
 	struct ioat_desc_sw *desc;
 
 	spin_lock_bh(&ioat_chan->desc_lock);
 
-	if (!list_empty(&ioat_chan->free_desc)) {
-		desc = to_ioat_desc(ioat_chan->free_desc.next);
-		list_del(&desc->node);
-	} else {
-		/* try to get another desc */
-		spin_unlock_bh(&ioat_chan->desc_lock);
-		desc = ioat_dma_alloc_descriptor(ioat_chan, GFP_KERNEL);
-		spin_lock_bh(&ioat_chan->desc_lock);
-		/* will this ever happen? */
-		BUG_ON(!desc);
-	}
-
+	desc = ioat_dma_get_next_descriptor(ioat_chan);
 	desc->hw->ctl = IOAT_DMA_DESCRIPTOR_NUL;
 	desc->hw->next = 0;
 	desc->async_tx.ack = 1;
@@ -572,7 +640,11 @@ static void ioat_dma_start_null_desc(struct ioat_dma_chan *ioat_chan)
  */
 #define IOAT_TEST_SIZE 2000
 
-static int ioat_self_test(struct ioatdma_device *device)
+/**
+ * ioat_dma_self_test - Perform an IOAT transaction to verify the HW works.
+ * @device: device to be tested
+ */
+static int ioat_dma_self_test(struct ioatdma_device *device)
 {
 	int i;
 	u8 *src;
@@ -640,6 +712,162 @@ out:
 	return err;
 }
 
+static char ioat_interrupt_style[32];
+module_param_string(ioat_interrupt_style, ioat_interrupt_style,
+		    sizeof(ioat_interrupt_style), 0644);
+MODULE_PARM_DESC(ioat_interrupt_style,
+		 "set ioat interrupt style: msix (default), "
+		 "msix-single-vector, msi, intx)");
+
+/**
+ * ioat_dma_setup_interrupts - setup interrupt handler, choosing between
+ *                             msix, msi, and legacy
+ * @device: ioat device
+ */
+int ioat_dma_setup_interrupts(struct ioatdma_device *device)
+{
+	struct ioat_dma_chan *ioat_chan;
+	int err, i, j, msixcnt;
+	u8 intrctrl = 0;
+
+	if (!strcmp(ioat_interrupt_style, "msix"))
+		goto msix;
+	else if (!strcmp(ioat_interrupt_style, "msix-single-vector"))
+		goto msix_single_vector;
+	else if (!strcmp(ioat_interrupt_style, "msi"))
+		goto msi;
+	else if (!strcmp(ioat_interrupt_style, "intx"))
+		goto intx;
+
+msix:
+	/* The number of MSI-X vectors should equal the number of channels */
+	msixcnt = device->common.chancnt;
+	for (i = 0; i < msixcnt; i++)
+		device->msix_entries[i].entry = i;
+
+	err = pci_enable_msix(device->pdev, device->msix_entries, msixcnt);
+	if (err < 0)
+		goto msi;
+	if (err > 0)
+		goto msix_single_vector;
+
+	for (i = 0; i < msixcnt; i++) {
+		ioat_chan = ioat_lookup_chan_by_index(device, i);
+		err = request_irq(device->msix_entries[i].vector,
+				  ioat_dma_do_interrupt_msix,
+				  0, "ioat-msix", ioat_chan);
+		if (err) {
+			for (j = 0; j < i; j++) {
+				ioat_chan =
+					ioat_lookup_chan_by_index(device, j);
+				free_irq(device->msix_entries[j].vector,
+					 ioat_chan);
+			}
+			goto msix_single_vector;
+		}
+	}
+	intrctrl |= IOAT_INTRCTRL_MSIX_VECTOR_CONTROL;
+	device->irq_mode = msix_multi_vector;
+	goto done;
+
+msix_single_vector:
+	device->msix_entries[0].entry = 0;
+	err = pci_enable_msix(device->pdev, device->msix_entries, 1);
+	if (err)
+		goto msi;
+
+	err = request_irq(device->msix_entries[0].vector, ioat_dma_do_interrupt,
+			  0, "ioat-msix", device);
+	if (err) {
+		pci_disable_msix(device->pdev);
+		goto msi;
+	}
+	device->irq_mode = msix_single_vector;
+	goto done;
+
+msi:
+	err = pci_enable_msi(device->pdev);
+	if (err)
+		goto intx;
+
+	err = request_irq(device->pdev->irq, ioat_dma_do_interrupt,
+			  0, "ioat-msi", device);
+	if (err) {
+		pci_disable_msi(device->pdev);
+		goto intx;
+	}
+	/*
+	 * CB 1.2 devices need a bit set in configuration space to enable MSI
+	 */
+	if (device->version == IOAT_VER_1_2) {
+		u32 dmactrl;
+		pci_read_config_dword(device->pdev,
+				      IOAT_PCI_DMACTRL_OFFSET, &dmactrl);
+		dmactrl |= IOAT_PCI_DMACTRL_MSI_EN;
+		pci_write_config_dword(device->pdev,
+				       IOAT_PCI_DMACTRL_OFFSET, dmactrl);
+	}
+	device->irq_mode = msi;
+	goto done;
+
+intx:
+	err = request_irq(device->pdev->irq, ioat_dma_do_interrupt,
+			  IRQF_SHARED, "ioat-intx", device);
+	if (err)
+		goto err_no_irq;
+	device->irq_mode = intx;
+
+done:
+	intrctrl |= IOAT_INTRCTRL_MASTER_INT_EN;
+	writeb(intrctrl, device->reg_base + IOAT_INTRCTRL_OFFSET);
+	return 0;
+
+err_no_irq:
+	/* Disable all interrupt generation */
+	writeb(0, device->reg_base + IOAT_INTRCTRL_OFFSET);
+	dev_err(&device->pdev->dev, "no usable interrupts\n");
+	device->irq_mode = none;
+	return -1;
+}
+
+/**
+ * ioat_dma_remove_interrupts - remove whatever interrupts were set
+ * @device: ioat device
+ */
+void ioat_dma_remove_interrupts(struct ioatdma_device *device)
+{
+	struct ioat_dma_chan *ioat_chan;
+	int i;
+
+	/* Disable all interrupt generation */
+	writeb(0, device->reg_base + IOAT_INTRCTRL_OFFSET);
+
+	switch (device->irq_mode) {
+	case msix_multi_vector:
+		for (i = 0; i < device->common.chancnt; i++) {
+			ioat_chan = ioat_lookup_chan_by_index(device, i);
+			free_irq(device->msix_entries[i].vector, ioat_chan);
+		}
+		pci_disable_msix(device->pdev);
+		break;
+	case msix_single_vector:
+		free_irq(device->msix_entries[0].vector, device);
+		pci_disable_msix(device->pdev);
+		break;
+	case msi:
+		free_irq(device->pdev->irq, device);
+		pci_disable_msi(device->pdev);
+		break;
+	case intx:
+		free_irq(device->pdev->irq, device);
+		break;
+	case none:
+		dev_warn(&device->pdev->dev,
+			 "call to %s without interrupts setup\n", __func__);
+	}
+	device->irq_mode = none;
+}
+
 struct ioatdma_device *ioat_dma_probe(struct pci_dev *pdev,
 				      void __iomem *iobase)
 {
@@ -685,21 +913,16 @@ struct ioatdma_device *ioat_dma_probe(struct pci_dev *pdev,
 	device->common.device_issue_pending = ioat_dma_memcpy_issue_pending;
 	device->common.device_dependency_added = ioat_dma_dependency_added;
 	device->common.dev = &pdev->dev;
-	printk(KERN_INFO "ioatdma: Intel(R) I/OAT DMA Engine found,"
-	       " %d channels, device version 0x%02x\n",
-	       device->common.chancnt, device->version);
+	dev_err(&device->pdev->dev,
+		"ioatdma: Intel(R) I/OAT DMA Engine found,"
+		" %d channels, device version 0x%02x\n",
+		device->common.chancnt, device->version);
 
-	pci_set_drvdata(pdev, device);
-	err = request_irq(pdev->irq, &ioat_do_interrupt, IRQF_SHARED, "ioat",
-		device);
+	err = ioat_dma_setup_interrupts(device);
 	if (err)
-		goto err_irq;
-
-	writeb(IOAT_INTRCTRL_MASTER_INT_EN,
-	       device->reg_base + IOAT_INTRCTRL_OFFSET);
-	pci_set_master(pdev);
+		goto err_setup_interrupts;
 
-	err = ioat_self_test(device);
+	err = ioat_dma_self_test(device);
 	if (err)
 		goto err_self_test;
 
@@ -708,8 +931,8 @@ struct ioatdma_device *ioat_dma_probe(struct pci_dev *pdev,
 	return device;
 
 err_self_test:
-	free_irq(device->pdev->irq, device);
-err_irq:
+	ioat_dma_remove_interrupts(device);
+err_setup_interrupts:
 	pci_pool_destroy(device->completion_pool);
 err_completion_pool:
 	pci_pool_destroy(device->dma_pool);
@@ -717,8 +940,8 @@ err_dma_pool:
 	kfree(device);
 err_kzalloc:
 	iounmap(iobase);
-	printk(KERN_ERR " "
-	       "ioatdma: Intel(R) I/OAT DMA Engine initialization failed\n");
+	dev_err(&device->pdev->dev,
+		"ioatdma: Intel(R) I/OAT DMA Engine initialization failed\n");
 	return NULL;
 }
 
@@ -729,7 +952,7 @@ void ioat_dma_remove(struct ioatdma_device *device)
 
 	dma_async_device_unregister(&device->common);
 
-	free_irq(device->pdev->irq, device);
+	ioat_dma_remove_interrupts(device);
 
 	pci_pool_destroy(device->dma_pool);
 	pci_pool_destroy(device->completion_pool);
diff --git a/drivers/dma/ioatdma.h b/drivers/dma/ioatdma.h
index 87c5e00..2b499d5 100644
--- a/drivers/dma/ioatdma.h
+++ b/drivers/dma/ioatdma.h
@@ -28,6 +28,14 @@
 #include <linux/cache.h>
 #include <linux/pci_ids.h>
 
+enum ioat_interrupt {
+	none = 0,
+	msix_multi_vector = 1,
+	msix_single_vector = 2,
+	msi = 3,
+	intx = 4,
+};
+
 #define IOAT_LOW_COMPLETION_MASK	0xffffffc0
 
 /**
@@ -47,6 +55,9 @@ struct ioatdma_device {
 
 	struct dma_device common;
 	u8 version;
+	enum ioat_interrupt irq_mode;
+	struct msix_entry msix_entries[4];
+	struct ioat_dma_chan *idx[4];
 };
 
 /**
@@ -95,6 +106,7 @@ struct ioat_dma_chan {
 			u32 high;
 		};
 	} *completion_virt;
+	struct tasklet_struct cleanup_task;
 };
 
 /* wrapper around hardware descriptor format + additional software fields */
diff --git a/drivers/dma/ioatdma_registers.h b/drivers/dma/ioatdma_registers.h
index a30c734..baaab5e 100644
--- a/drivers/dma/ioatdma_registers.h
+++ b/drivers/dma/ioatdma_registers.h
@@ -1,5 +1,5 @@
 /*
- * Copyright(c) 2004 - 2006 Intel Corporation. All rights reserved.
+ * Copyright(c) 2004 - 2007 Intel Corporation. All rights reserved.
  *
  * This program is free software; you can redistribute it and/or modify it
  * under the terms of the GNU General Public License as published by the Free
@@ -21,6 +21,9 @@
 #ifndef _IOAT_REGISTERS_H_
 #define _IOAT_REGISTERS_H_
 
+#define IOAT_PCI_DMACTRL_OFFSET			0x48
+#define IOAT_PCI_DMACTRL_DMA_EN			0x00000001
+#define IOAT_PCI_DMACTRL_MSI_EN			0x00000002
 
 /* MMIO Device Registers */
 #define IOAT_CHANCNT_OFFSET			0x00	/*  8-bit */
@@ -39,6 +42,7 @@
 #define IOAT_INTRCTRL_MASTER_INT_EN		0x01	/* Master Interrupt Enable */
 #define IOAT_INTRCTRL_INT_STATUS		0x02	/* ATTNSTATUS -or- Channel Int */
 #define IOAT_INTRCTRL_INT			0x04	/* INT_STATUS -and- MASTER_INT_EN */
+#define IOAT_INTRCTRL_MSIX_VECTOR_CONTROL	0x08    /* Enable all MSI-X vectors */
 
 #define IOAT_ATTNSTATUS_OFFSET			0x04	/* Each bit is a channel */
 

^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH 6/7] DCA: Add Direct Cache Access driver
  2007-07-20  0:44 [PATCH 0/7] I/OAT: Add support for DCA - Direct Cache Access Shannon Nelson
                   ` (4 preceding siblings ...)
  2007-07-20  0:45 ` [PATCH 5/7] I/OAT: Add support for MSI and MSI-X Shannon Nelson
@ 2007-07-20  0:45 ` Shannon Nelson
  2007-07-20  0:52   ` David Miller
  2007-07-20  0:45 ` [PATCH 7/7] I/OAT: Add DCA services Shannon Nelson
  6 siblings, 1 reply; 33+ messages in thread
From: Shannon Nelson @ 2007-07-20  0:45 UTC (permalink / raw)
  To: akpm, linux-kernel
  Cc: davem, jeff, dan.j.williams, christopher.leech, peter.p.waskiewicz.jr

Direct Cache Access (DCA) is a method for warming the CPU cache before data
is used, with the intent of lessening the impact of cache misses.  This
patch adds a manager and interface for matching up client requests for DCA
services with devices that offer DCA services.

In order to use DCA, a module must do bus writes with the appropriate tag
bits set to trigger a cache read for a specific CPU.  However, different
CPUs and chipsets can require different sets of tag bits, and the methods
for determining the correct bits may be simple hardcoding or may be a
hardware specific magic incantation.  This interface is a way for DCA
clients to find the correct tag bits for the targeted CPU without needing
to know the specifics.
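
A rough sketch of the client-side usage this enables (the surrounding
driver context is hypothetical; the dca_* calls are the ones exported
by this patch):

	/* sign this device up with the system's DCA provider */
	err = dca_add_requester(&pdev->dev);
	if (err)
		return err;

	/* tag bits for bus writes meant to warm a given CPU's cache */
	tag = dca_get_tag(cpu);

	/* ...program the tag into the device's descriptors... */

	dca_remove_requester(&pdev->dev);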

Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
---

 drivers/Kconfig         |    2 +
 drivers/Makefile        |    1 
 drivers/dca/Kconfig     |   11 +++
 drivers/dca/Makefile    |    2 +
 drivers/dca/dca-core.c  |  170 +++++++++++++++++++++++++++++++++++++++++++++++
 drivers/dca/dca-sysfs.c |   88 ++++++++++++++++++++++++
 include/linux/dca.h     |   47 +++++++++++++
 7 files changed, 321 insertions(+), 0 deletions(-)

diff --git a/drivers/Kconfig b/drivers/Kconfig
index 7916f4b..98b7e10 100644
--- a/drivers/Kconfig
+++ b/drivers/Kconfig
@@ -80,6 +80,8 @@ source "drivers/rtc/Kconfig"
 
 source "drivers/dma/Kconfig"
 
+source "drivers/dca/Kconfig"
+
 source "drivers/auxdisplay/Kconfig"
 
 source "drivers/kvm/Kconfig"
diff --git a/drivers/Makefile b/drivers/Makefile
index 6d9d7fa..4e03de6 100644
--- a/drivers/Makefile
+++ b/drivers/Makefile
@@ -82,5 +82,6 @@ obj-$(CONFIG_CRYPTO)		+= crypto/
 obj-$(CONFIG_SUPERH)		+= sh/
 obj-$(CONFIG_GENERIC_TIME)	+= clocksource/
 obj-$(CONFIG_DMA_ENGINE)	+= dma/
+obj-$(CONFIG_DCA)		+= dca/
 obj-$(CONFIG_HID)		+= hid/
 obj-$(CONFIG_PPC_PS3)		+= ps3/
diff --git a/drivers/dca/Kconfig b/drivers/dca/Kconfig
new file mode 100644
index 0000000..d901615
--- /dev/null
+++ b/drivers/dca/Kconfig
@@ -0,0 +1,11 @@
+#
+# DCA server configuration
+#
+
+config DCA
+	tristate "DCA support for clients and providers"
+	default m
+	---help---
+	  This is a server to help modules that want to use Direct Cache
+	  Access to find DCA providers that will supply correct CPU tags.
+
diff --git a/drivers/dca/Makefile b/drivers/dca/Makefile
new file mode 100644
index 0000000..b2db56b
--- /dev/null
+++ b/drivers/dca/Makefile
@@ -0,0 +1,2 @@
+obj-$(CONFIG_DCA) += dca.o
+dca-objs := dca-core.o dca-sysfs.o
diff --git a/drivers/dca/dca-core.c b/drivers/dca/dca-core.c
new file mode 100644
index 0000000..864f469
--- /dev/null
+++ b/drivers/dca/dca-core.c
@@ -0,0 +1,170 @@
+/*
+ * Copyright(c) 2007 Intel Corporation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc., 59
+ * Temple Place - Suite 330, Boston, MA  02111-1307, USA.
+ *
+ * The full GNU General Public License is included in this distribution in the
+ * file called COPYING.
+ */
+
+/*
+ * This driver supports an interface for DCA clients and providers to meet.
+ */
+
+#include <linux/kernel.h>
+#include <linux/notifier.h>
+#include <linux/device.h>
+#include <linux/dca.h>
+
+MODULE_LICENSE("GPL");
+
+/* For now we're assuming a single, global, DCA provider for the system. */
+
+static spinlock_t dca_lock;
+
+struct dca_provider *global_dca = NULL;
+
+int dca_add_requester(struct device *dev)
+{
+	int err, slot;
+
+	if (!global_dca)
+		return -ENODEV;
+
+	spin_lock(&dca_lock);
+	slot = global_dca->ops->add_requester(global_dca, dev);
+	spin_unlock(&dca_lock);
+	if (slot < 0)
+		return slot;
+
+	err = dca_sysfs_add_req(global_dca, dev, slot);
+	if (err) {
+		spin_lock(&dca_lock);
+		global_dca->ops->remove_requester(global_dca, dev);
+		spin_unlock(&dca_lock);
+		return err;
+	}
+
+	return 0;
+}
+EXPORT_SYMBOL(dca_add_requester);
+
+int dca_remove_requester(struct device *dev)
+{
+	int slot;
+	if (!global_dca)
+		return -ENODEV;
+
+	spin_lock(&dca_lock);
+	slot = global_dca->ops->remove_requester(global_dca, dev);
+	spin_unlock(&dca_lock);
+	if (slot < 0)
+		return slot;
+
+	dca_sysfs_remove_req(global_dca, slot);
+	return 0;
+}
+EXPORT_SYMBOL(dca_remove_requester);
+
+u8 dca_get_tag(int cpu)
+{
+	if (!global_dca)
+		return -ENODEV;
+	return global_dca->ops->get_tag(global_dca, cpu);
+}
+EXPORT_SYMBOL(dca_get_tag);
+
+struct dca_provider *alloc_dca_provider(struct dca_ops *ops, int priv_size)
+{
+	struct dca_provider *dca;
+	int alloc_size;
+
+	alloc_size = (sizeof(*dca) + priv_size);
+	dca = kzalloc(alloc_size, GFP_KERNEL);
+	if (!dca)
+		return NULL;
+	dca->ops = ops;
+
+	return dca;
+}
+EXPORT_SYMBOL(alloc_dca_provider);
+
+void free_dca_provider(struct dca_provider *dca)
+{
+	kfree(dca);
+}
+EXPORT_SYMBOL(free_dca_provider);
+
+static BLOCKING_NOTIFIER_HEAD(dca_provider_chain);
+
+int register_dca_provider(struct dca_provider *dca, struct device *dev)
+{
+	int err;
+
+	if (global_dca)
+		return -EEXIST;
+	err = dca_sysfs_add_provider(dca, dev);
+	if (err)
+		return err;
+	global_dca = dca;
+	blocking_notifier_call_chain(&dca_provider_chain,
+				     DCA_PROVIDER_ADD, NULL);
+	return 0;
+}
+EXPORT_SYMBOL(register_dca_provider);
+
+void unregister_dca_provider(struct dca_provider *dca)
+{
+	if (!global_dca)
+		return;
+	blocking_notifier_call_chain(&dca_provider_chain,
+				     DCA_PROVIDER_REMOVE, NULL);
+	global_dca = NULL;
+	dca_sysfs_remove_provider(dca);
+}
+EXPORT_SYMBOL(unregister_dca_provider);
+
+void dca_register_notify(struct notifier_block *nb)
+{
+	blocking_notifier_chain_register(&dca_provider_chain, nb);
+}
+EXPORT_SYMBOL(dca_register_notify);
+
+void dca_unregister_notify(struct notifier_block *nb)
+{
+	blocking_notifier_chain_unregister(&dca_provider_chain, nb);
+}
+EXPORT_SYMBOL(dca_unregister_notify);
+
+static int __init dca_init(void)
+{
+	int err;
+
+	spin_lock_init(&dca_lock);
+
+	err = dca_sysfs_init();
+	if (err)
+		return err;
+	return 0;
+}
+
+static void __exit dca_exit(void)
+{
+	dca_sysfs_exit();
+}
+
+module_init(dca_init);
+module_exit(dca_exit);
+
diff --git a/drivers/dca/dca-sysfs.c b/drivers/dca/dca-sysfs.c
new file mode 100644
index 0000000..24a263b
--- /dev/null
+++ b/drivers/dca/dca-sysfs.c
@@ -0,0 +1,88 @@
+#include <linux/kernel.h>
+#include <linux/spinlock.h>
+#include <linux/device.h>
+#include <linux/idr.h>
+#include <linux/kdev_t.h>
+#include <linux/err.h>
+#include <linux/dca.h>
+
+static struct class *dca_class;
+static struct idr dca_idr;
+static spinlock_t dca_idr_lock;
+
+int dca_sysfs_add_req(struct dca_provider *dca, struct device *dev, int slot)
+{
+	struct class_device *cd;
+
+	cd = class_device_create(dca_class, dca->cd, MKDEV(0, slot + 1),
+				 dev, "requester%d", slot);
+	if (IS_ERR(cd))
+		return PTR_ERR(cd);
+	return 0;
+}
+
+void dca_sysfs_remove_req(struct dca_provider *dca, int slot)
+{
+	class_device_destroy(dca_class, MKDEV(0, slot + 1));
+}
+
+int dca_sysfs_add_provider(struct dca_provider *dca, struct device *dev)
+{
+	struct class_device *cd;
+	int err = 0;
+
+idr_try_again:
+	if (!idr_pre_get(&dca_idr, GFP_KERNEL))
+		return -ENOMEM;
+	spin_lock(&dca_idr_lock);
+	err = idr_get_new(&dca_idr, dca, &dca->id);
+	spin_unlock(&dca_idr_lock);
+	switch (err) {
+	case 0:
+		break;
+	case -EAGAIN:
+		goto idr_try_again;
+	default:
+		return err;
+	}
+
+	cd = class_device_create(dca_class, NULL, MKDEV(0, 0),
+				 dev, "dca%d", dca->id);
+	if (IS_ERR(cd)) {
+		spin_lock(&dca_idr_lock);
+		idr_remove(&dca_idr, dca->id);
+		spin_unlock(&dca_idr_lock);
+		return PTR_ERR(cd);
+	}
+	dca->cd = cd;
+	return 0;
+}
+
+void dca_sysfs_remove_provider(struct dca_provider *dca)
+{
+	class_device_unregister(dca->cd);
+	dca->cd = NULL;
+	spin_lock(&dca_idr_lock);
+	idr_remove(&dca_idr, dca->id);
+	spin_unlock(&dca_idr_lock);
+}
+
+int __init dca_sysfs_init(void)
+{
+	idr_init(&dca_idr);
+	spin_lock_init(&dca_idr_lock);
+
+	dca_class = class_create(THIS_MODULE, "dca");
+	if (IS_ERR(dca_class)) {
+		idr_destroy(&dca_idr);
+		return PTR_ERR(dca_class);
+	}
+	return 0;
+}
+
+void __exit dca_sysfs_exit(void)
+{
+	class_destroy(dca_class);
+	idr_destroy(&dca_idr);
+}
+
diff --git a/include/linux/dca.h b/include/linux/dca.h
new file mode 100644
index 0000000..83eaecc
--- /dev/null
+++ b/include/linux/dca.h
@@ -0,0 +1,47 @@
+#ifndef DCA_H
+#define DCA_H
+/* DCA Provider API */
+
+/* DCA Notifier Interface */
+void dca_register_notify(struct notifier_block *nb);
+void dca_unregister_notify(struct notifier_block *nb);
+
+#define DCA_PROVIDER_ADD     0x0001
+#define DCA_PROVIDER_REMOVE  0x0002
+
+struct dca_provider {
+	struct dca_ops		*ops;
+	struct class_device 	*cd;
+	int			 id;
+};
+
+struct dca_ops {
+	int	(*add_requester)    (struct dca_provider *, struct device *);
+	int	(*remove_requester) (struct dca_provider *, struct device *);
+	u8	(*get_tag)	    (struct dca_provider *, int cpu);
+};
+
+struct dca_provider *alloc_dca_provider(struct dca_ops *ops, int priv_size);
+void free_dca_provider(struct dca_provider *dca);
+int register_dca_provider(struct dca_provider *dca, struct device *dev);
+void unregister_dca_provider(struct dca_provider *dca);
+
+static inline void *dca_priv(struct dca_provider *dca)
+{
+	return (void *)dca + sizeof(struct dca_provider);
+}
+
+/* Requester API */
+int dca_add_requester(struct device *dev);
+int dca_remove_requester(struct device *dev);
+u8 dca_get_tag(int cpu);
+
+/* internal stuff */
+int __init dca_sysfs_init(void);
+void __exit dca_sysfs_exit(void);
+int dca_sysfs_add_provider(struct dca_provider *dca, struct device *dev);
+void dca_sysfs_remove_provider(struct dca_provider *dca);
+int dca_sysfs_add_req(struct dca_provider *dca, struct device *dev, int slot);
+void dca_sysfs_remove_req(struct dca_provider *dca, int slot);
+
+#endif /* DCA_H */

^ permalink raw reply related	[flat|nested] 33+ messages in thread
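
To see how the pieces above fit together, here is a minimal sketch of
a DCA client driver built on this API.  The function names
(my_dca_event, my_setup_dca) and the use of a PCI device are
illustrative assumptions, not part of the patch:

#include <linux/dca.h>
#include <linux/notifier.h>
#include <linux/pci.h>
#include <linux/smp.h>

static int my_dca_event(struct notifier_block *nb,
			unsigned long event, void *data)
{
	/* event is DCA_PROVIDER_ADD or DCA_PROVIDER_REMOVE */
	return 0;
}

static struct notifier_block my_dca_nb = {
	.notifier_call = my_dca_event,
};

static int my_setup_dca(struct pci_dev *pdev)
{
	int cpu, err;
	u8 tag;

	/* hear about providers coming and going */
	dca_register_notify(&my_dca_nb);

	/* attach this device; returns -ENODEV if no provider exists */
	err = dca_add_requester(&pdev->dev);
	if (err)
		return err;

	cpu = get_cpu();
	tag = dca_get_tag(cpu);	/* tag to program into the device */
	put_cpu();

	return 0;
}

The notifier matters because there is a single global provider: a
client can use the DCA_PROVIDER_ADD event to re-add itself as a
requester if the provider module is unloaded and reloaded.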

* [PATCH 7/7] I/OAT: Add DCA services
  2007-07-20  0:44 [PATCH 0/7] I/OAT: Add support for DCA - Direct Cache Access Shannon Nelson
                   ` (5 preceding siblings ...)
  2007-07-20  0:45 ` [PATCH 6/7] DCA: Add Direct Cache Access driver Shannon Nelson
@ 2007-07-20  0:45 ` Shannon Nelson
  2007-07-20  0:52   ` David Miller
  6 siblings, 1 reply; 33+ messages in thread
From: Shannon Nelson @ 2007-07-20  0:45 UTC (permalink / raw)
  To: akpm, linux-kernel
  Cc: davem, jeff, dan.j.williams, christopher.leech, peter.p.waskiewicz.jr

Add code to connect to the DCA driver and provide cpu tags for use by
drivers that would like to use Direct Cache Access hints.

Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
---

 drivers/dma/Makefile   |    2 
 drivers/dma/ioat.c     |   12 ++
 drivers/dma/ioat_dca.c |  259 ++++++++++++++++++++++++++++++++++++++++++++++++
 drivers/dma/ioatdma.h  |    2 
 4 files changed, 273 insertions(+), 2 deletions(-)

diff --git a/drivers/dma/Makefile b/drivers/dma/Makefile
index cec0c9c..b152cd8 100644
--- a/drivers/dma/Makefile
+++ b/drivers/dma/Makefile
@@ -1,5 +1,5 @@
 obj-$(CONFIG_DMA_ENGINE) += dmaengine.o
 obj-$(CONFIG_NET_DMA) += iovlock.o
 obj-$(CONFIG_INTEL_IOATDMA) += ioatdma.o
-ioatdma-objs := ioat.o ioat_dma.o
+ioatdma-objs := ioat.o ioat_dma.o ioat_dca.o
 obj-$(CONFIG_INTEL_IOP_ADMA) += iop-adma.o
diff --git a/drivers/dma/ioat.c b/drivers/dma/ioat.c
index 9d9f672..8ae8c53 100644
--- a/drivers/dma/ioat.c
+++ b/drivers/dma/ioat.c
@@ -1,6 +1,6 @@
 /*
  * Intel I/OAT DMA Linux driver
- * Copyright(c) 2004 - 2007 Intel Corporation.
+ * Copyright(c) 2007 Intel Corporation.
  *
  * This program is free software; you can redistribute it and/or modify it
  * under the terms and conditions of the GNU General Public License,
@@ -29,6 +29,7 @@
 #include <linux/module.h>
 #include <linux/pci.h>
 #include <linux/interrupt.h>
+#include <linux/dca.h>
 #include "ioatdma.h"
 #include "ioatdma_registers.h"
 #include "ioatdma_hw.h"
@@ -49,6 +50,7 @@ struct ioat_device {
 	struct pci_dev		*pdev;
 	void __iomem		*iobase;
 	struct ioatdma_device	*dma;
+	struct dca_provider	*dca;
 };
 
 static int __devinit ioat_probe(struct pci_dev *pdev,
@@ -65,6 +67,7 @@ static int ioat_setup_functionality(struct pci_dev *pdev, void __iomem *iobase)
 	switch (version) {
 	case IOAT_VER_1_2:
 		device->dma = ioat_dma_probe(pdev, iobase);
+		device->dca = ioat_dca_init(pdev, iobase);
 		break;
 	default:
 		err = -ENODEV;
@@ -81,6 +84,13 @@ static void ioat_shutdown_functionality(struct pci_dev *pdev)
 		ioat_dma_remove(device->dma);
 		device->dma = NULL;
 	}
+
+	if (device->dca) {
+		unregister_dca_provider(device->dca);
+		free_dca_provider(device->dca);
+		device->dca = NULL;
+	}
+
 }
 
 static struct pci_driver ioat_pci_drv = {
diff --git a/drivers/dma/ioat_dca.c b/drivers/dma/ioat_dca.c
new file mode 100644
index 0000000..c3a500b
--- /dev/null
+++ b/drivers/dma/ioat_dca.c
@@ -0,0 +1,259 @@
+/*
+ * Intel I/OAT DMA Linux driver
+ * Copyright(c) 2007 Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
+ *
+ * The full GNU General Public License is included in this distribution in
+ * the file called "COPYING".
+ *
+ */
+
+#include <linux/kernel.h>
+#include <linux/pci.h>
+#include <linux/smp.h>
+#include <linux/interrupt.h>
+#include <linux/dca.h>
+#include "ioatdma.h"
+#include "ioatdma_registers.h"
+
+/* THIS STUFF NEEDS TO LIVE SOMEWHERE ELSE */
+#define X86_FEATURE_DCA	(4*32+18) /* Direct Cache Access */
+/* / THIS STUFF NEEDS TO LIVE SOMEWHERE ELSE */
+
+/*
+ * Bit 7 of a tag map entry is the "valid" bit, if it is set then bits 0:6
+ * contain the bit number of the APIC ID to map into the DCA tag.  If the valid
+ * bit is not set, then the value must be 0 or 1 and defines the bit in the tag.
+ */
+#define DCA_TAG_MAP_VALID 0x80
+
+/*
+ * "Legacy" DCA systems do not implement the DCA register set in the
+ * I/OAT device.  Software needs built-in knowledge of their tag mappings.
+ */
+
+#define APICID_BIT(x)		(DCA_TAG_MAP_VALID | (x))
+#define IOAT_TAG_MAP_LEN	8
+
+static u8 ioat_tag_map_BNB[IOAT_TAG_MAP_LEN] = {
+	1, APICID_BIT(1), APICID_BIT(2), APICID_BIT(2), };
+static u8 ioat_tag_map_SCNB[IOAT_TAG_MAP_LEN] = {
+	1, APICID_BIT(1), APICID_BIT(2), APICID_BIT(2), };
+static u8 ioat_tag_map_CNB[IOAT_TAG_MAP_LEN] = {
+	1, APICID_BIT(1), APICID_BIT(3), APICID_BIT(4), APICID_BIT(2), };
+static u8 ioat_tag_map_UNISYS[IOAT_TAG_MAP_LEN] = { 0 };
+
+/* pack PCI B/D/F into a u16 */
+static inline u16 dcaid_from_pcidev(struct pci_dev *pci)
+{
+	return (pci->bus->number << 8) | pci->devfn;
+}
+
+static int dca_enabled_in_bios(void)
+{
+	/* CPUID level 9 returns DCA configuration */
+	/* Bit 0 indicates DCA enabled by the BIOS */
+	unsigned long cpuid_level_9;
+	int res;
+
+	cpuid_level_9 = cpuid_eax(9);
+	res = test_bit(0, &cpuid_level_9);
+	if (!res)
+		printk(KERN_ERR "ioat dma: DCA is disabled in BIOS\n");
+
+	return res;
+}
+
+static int system_has_dca_enabled(void)
+{
+	if (boot_cpu_has(X86_FEATURE_DCA))
+		return dca_enabled_in_bios();
+
+	printk(KERN_ERR "ioat dma: boot cpu doesn't have X86_FEATURE_DCA\n");
+	return 0;
+}
+
+struct ioat_dca_slot {
+	struct pci_dev *pdev;	/* requester device */
+	u16 rid;		/* requester id, as used by IOAT */
+};
+
+#define IOAT_DCA_MAX_REQ 6
+
+struct ioat_dca_priv {
+	void __iomem		*iobase;
+	void			*dca_base;
+	int			 max_requesters;
+	int			 requester_count;
+	u8			 tag_map[IOAT_TAG_MAP_LEN];
+	struct ioat_dca_slot 	 req_slots[0];
+};
+
+/* 5000 series chipset DCA Port Requester ID Table Entry Format
+ * [15:8]	PCI-Express Bus Number
+ * [7:3]	PCI-Express Device Number
+ * [2:0]	PCI-Express Function Number
+ *
+ * 5000 series chipset DCA control register format
+ * [7:1]	Reserved (0)
+ * [0]		Ignore Function Number
+ */
+
+static int ioat_dca_add_requester(struct dca_provider *dca, struct device *dev)
+{
+	struct ioat_dca_priv *ioatdca = dca_priv(dca);
+	struct pci_dev *pdev;
+	int i;
+	u16 id;
+
+	/* This implementation only supports PCI-Express */
+	if (dev->bus != &pci_bus_type)
+		return -ENODEV;
+	pdev = to_pci_dev(dev);
+	id = dcaid_from_pcidev(pdev);
+
+	if (ioatdca->requester_count == ioatdca->max_requesters)
+		return -ENODEV;
+
+	for (i = 0; i < ioatdca->max_requesters; i++) {
+		if (ioatdca->req_slots[i].pdev == NULL) {
+			/* found an empty slot */
+			ioatdca->requester_count++;
+			ioatdca->req_slots[i].pdev = pdev;
+			ioatdca->req_slots[i].rid = id;
+			writew(id, ioatdca->dca_base + (i * 4));
+			/* make sure the ignore function bit is off */
+			writeb(0, ioatdca->dca_base + (i * 4) + 2);
+			return i;
+		}
+	}
+	/* Error, ioatdca->requester_count is out of whack */
+	return -EFAULT;
+}
+
+static int ioat_dca_remove_requester(struct dca_provider *dca,
+				     struct device *dev)
+{
+	struct ioat_dca_priv *ioatdca = dca_priv(dca);
+	struct pci_dev *pdev;
+	int i;
+
+	/* This implementation only supports PCI-Express */
+	if (dev->bus != &pci_bus_type)
+		return -ENODEV;
+	pdev = to_pci_dev(dev);
+
+	for (i = 0; i < ioatdca->max_requesters; i++) {
+		if (ioatdca->req_slots[i].pdev == pdev) {
+			writew(0, ioatdca->dca_base + (i * 4));
+			ioatdca->req_slots[i].pdev = NULL;
+			ioatdca->req_slots[i].rid = 0;
+			ioatdca->requester_count--;
+			return i;
+		}
+	}
+	return -ENODEV;
+}
+
+static u8 ioat_dca_get_tag(struct dca_provider *dca, int cpu)
+{
+	struct ioat_dca_priv *ioatdca = dca_priv(dca);
+	int i, apic_id, bit, value;
+	u8 entry, tag;
+
+	tag = 0;
+	apic_id = cpu_physical_id(cpu);
+
+	for (i = 0; i < IOAT_TAG_MAP_LEN; i++) {
+		entry = ioatdca->tag_map[i];
+		if (entry & DCA_TAG_MAP_VALID) {
+			bit = entry & ~DCA_TAG_MAP_VALID;
+			value = (apic_id & (1 << bit)) ? 1 : 0;
+		} else {
+			value = entry ? 1 : 0;
+		}
+		tag |= (value << i);
+	}
+	return tag;
+}
+
+static struct dca_ops ioat_dca_ops = {
+	.add_requester		= ioat_dca_add_requester,
+	.remove_requester	= ioat_dca_remove_requester,
+	.get_tag		= ioat_dca_get_tag,
+};
+
+
+struct dca_provider *ioat_dca_init(struct pci_dev *pdev, void __iomem *iobase)
+{
+	struct dca_provider *dca;
+	struct ioat_dca_priv *ioatdca;
+	u8 *tag_map = NULL;
+	int i;
+	int err;
+
+	if (!system_has_dca_enabled())
+		return NULL;
+
+	/* I/OAT v1 systems must have a known tag_map to support DCA */
+	switch (pdev->vendor) {
+	case PCI_VENDOR_ID_INTEL:
+		switch (pdev->device) {
+		case PCI_DEVICE_ID_INTEL_IOAT:
+			tag_map = ioat_tag_map_BNB;
+			break;
+		case PCI_DEVICE_ID_INTEL_IOAT_CNB:
+			tag_map = ioat_tag_map_CNB;
+			break;
+		case PCI_DEVICE_ID_INTEL_IOAT_SCNB:
+			tag_map = ioat_tag_map_SCNB;
+			break;
+		}
+		break;
+	case PCI_VENDOR_ID_UNISYS:
+		switch (pdev->device) {
+		case PCI_DEVICE_ID_UNISYS_DMA_DIRECTOR:
+			tag_map = ioat_tag_map_UNISYS;
+			break;
+		}
+		break;
+	}
+	if (tag_map == NULL)
+		return NULL;
+
+	dca = alloc_dca_provider(&ioat_dca_ops,
+			sizeof(*ioatdca) +
+			(sizeof(struct ioat_dca_slot) * IOAT_DCA_MAX_REQ));
+	if (!dca)
+		return NULL;
+
+	ioatdca = dca_priv(dca);
+	ioatdca->max_requesters = IOAT_DCA_MAX_REQ;
+
+	ioatdca->dca_base = iobase + 0x54;
+
+	/* copy over the APIC ID to DCA tag mapping */
+	for (i = 0; i < IOAT_TAG_MAP_LEN; i++)
+		ioatdca->tag_map[i] = tag_map[i];
+
+	err = register_dca_provider(dca, &pdev->dev);
+	if (err) {
+		free_dca_provider(dca);
+		return NULL;
+	}
+
+	return dca;
+}
+
diff --git a/drivers/dma/ioatdma.h b/drivers/dma/ioatdma.h
index 2b499d5..bab2a72 100644
--- a/drivers/dma/ioatdma.h
+++ b/drivers/dma/ioatdma.h
@@ -133,9 +133,11 @@ struct ioat_desc_sw {
 #if defined(CONFIG_INTEL_IOATDMA) || defined(CONFIG_INTEL_IOATDMA_MODULE)
 struct ioatdma_device *ioat_dma_probe(struct pci_dev *, void __iomem *);
 void ioat_dma_remove(struct ioatdma_device *device);
+struct dca_provider *ioat_dca_init(struct pci_dev *, void __iomem *);
 #else
 #define ioat_dma_probe(pdev, io)                NULL
 #define ioat_dma_remove(dev)            do { } while (0)
+#define ioat_dca_init(pdev, io)			NULL
 #endif
 
 #endif /* IOATDMA_H */

^ permalink raw reply related	[flat|nested] 33+ messages in thread
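
To make the tag-map scheme concrete, here is a worked pass through
ioat_dca_get_tag() using the BNB map above, under the assumption that
the CPU's physical APIC ID is 6 (binary 110):

	tag map entry 0 = 1             -> tag bit 0 = 1 (literal value)
	tag map entry 1 = APICID_BIT(1) -> tag bit 1 = APIC ID bit 1 = 1
	tag map entry 2 = APICID_BIT(2) -> tag bit 2 = APIC ID bit 2 = 1
	tag map entry 3 = APICID_BIT(2) -> tag bit 3 = APIC ID bit 2 = 1
	tag map entries 4-7 = 0         -> tag bits 4-7 = 0

giving a DCA tag of 0x0f for that CPU.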

* Re: [PATCH 1/7] I/OAT: New device ids
  2007-07-20  0:44 ` [PATCH 1/7] I/OAT: New device ids Shannon Nelson
@ 2007-07-20  0:49   ` David Miller
  0 siblings, 0 replies; 33+ messages in thread
From: David Miller @ 2007-07-20  0:49 UTC (permalink / raw)
  To: shannon.nelson
  Cc: akpm, linux-kernel, jeff, dan.j.williams, christopher.leech,
	peter.p.waskiewicz.jr

From: Shannon Nelson <shannon.nelson@intel.com>
Date: Thu, 19 Jul 2007 17:44:52 -0700

> Add device ids for new revs of the Intel I/OAT DMA engine
> 
> Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>

Acked-by: David S. Miller <davem@davemloft.net>

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH 2/7] I/OAT: Rename the source file
  2007-07-20  0:44 ` [PATCH 2/7] I/OAT: Rename the source file Shannon Nelson
@ 2007-07-20  0:49   ` David Miller
  0 siblings, 0 replies; 33+ messages in thread
From: David Miller @ 2007-07-20  0:49 UTC (permalink / raw)
  To: shannon.nelson
  Cc: akpm, linux-kernel, jeff, dan.j.williams, christopher.leech,
	peter.p.waskiewicz.jr

From: Shannon Nelson <shannon.nelson@intel.com>
Date: Thu, 19 Jul 2007 17:44:57 -0700

> Rename the ioatdma.c file in preparation for splitting into multiple files,
> which will allow for easier adding new functionality.
> 
> Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>

Acked-by: David S. Miller <davem@davemloft.net>

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH 3/7] I/OAT: code cleanup from checkpatch output
  2007-07-20  0:45 ` [PATCH 3/7] I/OAT: code cleanup from checkpatch output Shannon Nelson
@ 2007-07-20  0:49   ` David Miller
  0 siblings, 0 replies; 33+ messages in thread
From: David Miller @ 2007-07-20  0:49 UTC (permalink / raw)
  To: shannon.nelson
  Cc: akpm, linux-kernel, jeff, dan.j.williams, christopher.leech,
	peter.p.waskiewicz.jr

From: Shannon Nelson <shannon.nelson@intel.com>
Date: Thu, 19 Jul 2007 17:45:02 -0700

> Take care of a bunch of little code nits in ioatdma files
> 
> Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>

Acked-by: David S. Miller <davem@davemloft.net>

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH 4/7] I/OAT: Split PCI startup from DMA handling code
  2007-07-20  0:45 ` [PATCH 4/7] I/OAT: Split PCI startup from DMA handling code Shannon Nelson
@ 2007-07-20  0:50   ` David Miller
  2007-07-20 10:53   ` Andrey Panin
  1 sibling, 0 replies; 33+ messages in thread
From: David Miller @ 2007-07-20  0:50 UTC (permalink / raw)
  To: shannon.nelson
  Cc: akpm, linux-kernel, jeff, dan.j.williams, christopher.leech,
	peter.p.waskiewicz.jr

From: Shannon Nelson <shannon.nelson@intel.com>
Date: Thu, 19 Jul 2007 17:45:07 -0700

> Split the general PCI startup from the DMA handling code in order to
> prepare for adding support for DCA services and future versions of the
> ioatdma device.
> 
> Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>

Acked-by: David S. Miller <davem@davemloft.net>

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH 5/7] I/OAT: Add support for MSI and MSI-X
  2007-07-20  0:45 ` [PATCH 5/7] I/OAT: Add support for MSI and MSI-X Shannon Nelson
@ 2007-07-20  0:51   ` David Miller
  2007-07-20 17:43   ` Roland Dreier
  1 sibling, 0 replies; 33+ messages in thread
From: David Miller @ 2007-07-20  0:51 UTC (permalink / raw)
  To: shannon.nelson
  Cc: akpm, linux-kernel, jeff, dan.j.williams, christopher.leech,
	peter.p.waskiewicz.jr

From: Shannon Nelson <shannon.nelson@intel.com>
Date: Thu, 19 Jul 2007 17:45:12 -0700

> Add support for MSI and MSI-X interrupt handling, including the ability
> to choose the desired interrupt method.
> 
> Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>

Acked-by: David S. Miller <davem@davemloft.net>

But:

> +#define for_each_bit(bit, addr, size) \
> +	for ((bit) = find_first_bit((addr), (size)); \
> +	     (bit) < (size); \
> +	     (bit) = find_next_bit((addr), (size), (bit) + 1))

This or something like it is codified in a few spots now,
namely now here and cpumask.h, it would be nice to have
this in some standard place.

^ permalink raw reply	[flat|nested] 33+ messages in thread
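
For reference, a self-contained sketch of how the quoted helper gets
used; show_set_bits is an illustrative name, not code from the patch:

#include <linux/bitops.h>
#include <linux/kernel.h>

/* the helper from the patch, repeated so the sketch stands alone */
#define for_each_bit(bit, addr, size) \
	for ((bit) = find_first_bit((addr), (size)); \
	     (bit) < (size); \
	     (bit) = find_next_bit((addr), (size), (bit) + 1))

/* walk every set bit in a bitmap, lowest first */
static void show_set_bits(const unsigned long *map, int nbits)
{
	int bit;

	for_each_bit(bit, map, nbits)
		printk(KERN_DEBUG "bit %d is set\n", bit);
}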

* Re: [PATCH 6/7] DCA: Add Direct Cache Access driver
  2007-07-20  0:45 ` [PATCH 6/7] DCA: Add Direct Cache Access driver Shannon Nelson
@ 2007-07-20  0:52   ` David Miller
  2007-07-20 16:35     ` Nelson, Shannon
  0 siblings, 1 reply; 33+ messages in thread
From: David Miller @ 2007-07-20  0:52 UTC (permalink / raw)
  To: shannon.nelson
  Cc: akpm, linux-kernel, jeff, dan.j.williams, christopher.leech,
	peter.p.waskiewicz.jr

From: Shannon Nelson <shannon.nelson@intel.com>
Date: Thu, 19 Jul 2007 17:45:17 -0700

> +static spinlock_t dca_lock;
 ...
> +	spin_lock_init(&dca_lock);

It's easier to use DEFINE_SPINLOCK().

^ permalink raw reply	[flat|nested] 33+ messages in thread
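
For reference, the suggested change replaces the bare declaration plus
the spin_lock_init() call in dca_init() with one statically
initialized lock; a sketch:

#include <linux/spinlock.h>

/* statically initialized; the spin_lock_init() in dca_init() goes away */
static DEFINE_SPINLOCK(dca_lock);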

* Re: [PATCH 7/7] I/OAT: Add DCA services
  2007-07-20  0:45 ` [PATCH 7/7] I/OAT: Add DCA services Shannon Nelson
@ 2007-07-20  0:52   ` David Miller
  0 siblings, 0 replies; 33+ messages in thread
From: David Miller @ 2007-07-20  0:52 UTC (permalink / raw)
  To: shannon.nelson
  Cc: akpm, linux-kernel, jeff, dan.j.williams, christopher.leech,
	peter.p.waskiewicz.jr

From: Shannon Nelson <shannon.nelson@intel.com>
Date: Thu, 19 Jul 2007 17:45:22 -0700

> Add code to connect to the DCA driver and provide cpu tags for use by
> drivers that would like to use Direct Cache Access hints.
> 
> Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>

Acked-by: David S. Miller <davem@davemloft.net>

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH 4/7] I/OAT: Split PCI startup from DMA handling code
  2007-07-20  0:45 ` [PATCH 4/7] I/OAT: Split PCI startup from DMA handling code Shannon Nelson
  2007-07-20  0:50   ` David Miller
@ 2007-07-20 10:53   ` Andrey Panin
  2007-07-20 16:33     ` Nelson, Shannon
  1 sibling, 1 reply; 33+ messages in thread
From: Andrey Panin @ 2007-07-20 10:53 UTC (permalink / raw)
  To: Shannon Nelson
  Cc: akpm, linux-kernel, davem, jeff, dan.j.williams,
	christopher.leech, peter.p.waskiewicz.jr

On Thu, Jul 19, 2007 at 05:45:07PM -0700, Shannon Nelson wrote:
> Split the general PCI startup from the DMA handling code in order to
> prepare for adding support for DCA services and future versions of the
> ioatdma device.
> 
> Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
> ---
> 
>  drivers/dma/Makefile     |    2 
>  drivers/dma/ioat.c       |  186 ++++++++++++++++++++++++++++++++++++++++++++
>  drivers/dma/ioat_dma.c   |  196 +++++++++++-----------------------------------
>  drivers/dma/ioatdma.h    |   16 +++-
>  drivers/dma/ioatdma_hw.h |    2 
>  5 files changed, 245 insertions(+), 157 deletions(-)
> 
> diff --git a/drivers/dma/Makefile b/drivers/dma/Makefile
> index 77bee99..cec0c9c 100644
> --- a/drivers/dma/Makefile
> +++ b/drivers/dma/Makefile
> @@ -1,5 +1,5 @@
>  obj-$(CONFIG_DMA_ENGINE) += dmaengine.o
>  obj-$(CONFIG_NET_DMA) += iovlock.o
>  obj-$(CONFIG_INTEL_IOATDMA) += ioatdma.o
> -ioatdma-objs := ioat_dma.o
> +ioatdma-objs := ioat.o ioat_dma.o
>  obj-$(CONFIG_INTEL_IOP_ADMA) += iop-adma.o
> diff --git a/drivers/dma/ioat.c b/drivers/dma/ioat.c
> new file mode 100644
> index 0000000..9d9f672
> --- /dev/null
> +++ b/drivers/dma/ioat.c
> @@ -0,0 +1,186 @@
> +/*
> + * Intel I/OAT DMA Linux driver
> + * Copyright(c) 2004 - 2007 Intel Corporation.
> + *
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms and conditions of the GNU General Public License,
> + * version 2, as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful, but WITHOUT
> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
> + * more details.
> + *
> + * You should have received a copy of the GNU General Public License along with
> + * this program; if not, write to the Free Software Foundation, Inc.,
> + * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
> + *
> + * The full GNU General Public License is included in this distribution in
> + * the file called "COPYING".
> + *
> + */
> +
> +/*
> + * This driver supports an Intel I/OAT DMA engine, which does asynchronous
> + * copy operations.
> + */
> +
> +#include <linux/init.h>
> +#include <linux/module.h>
> +#include <linux/pci.h>
> +#include <linux/interrupt.h>
> +#include "ioatdma.h"
> +#include "ioatdma_registers.h"
> +#include "ioatdma_hw.h"
> +
> +MODULE_VERSION("1.24");
> +MODULE_LICENSE("GPL");
> +MODULE_AUTHOR("Intel Corporation");
> +
> +static struct pci_device_id ioat_pci_tbl[] = {
> +	{ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_IOAT) },
> +	{ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_IOAT_CNB)  },
> +	{ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_IOAT_SCNB) },
> +	{ PCI_DEVICE(PCI_VENDOR_ID_UNISYS, PCI_DEVICE_ID_UNISYS_DMA_DIRECTOR) },
> +	{ 0, }
> +};

Why does this driver lack MODULE_DEVICE_TABLE()?  Is it intentionally omitted?


-- 
Andrey Panin		| Linux and UNIX system administrator
pazke@donpac.ru		| PGP key: wwwkeys.pgp.net

^ permalink raw reply	[flat|nested] 33+ messages in thread

* RE: [PATCH 4/7] I/OAT: Split PCI startup from DMA handling code
  2007-07-20 10:53   ` Andrey Panin
@ 2007-07-20 16:33     ` Nelson, Shannon
  0 siblings, 0 replies; 33+ messages in thread
From: Nelson, Shannon @ 2007-07-20 16:33 UTC (permalink / raw)
  To: Andrey Panin
  Cc: akpm, linux-kernel, davem, jeff, Williams, Dan J, Leech,
	Christopher, Waskiewicz Jr, Peter P

Andrey Panin [mailto:pazke@donpac.ru] 
>
>Why does this driver lack MODULE_DEVICE_TABLE()?  Is it
>intentionally omitted?

Hmmm - good catch.  It has been missing for a long time in the code.
I'll double check with the original developer to see if it was left out
on purpose and probably add it in a later patch.

Thanks,
sln
--
======================================================================
Mr. Shannon Nelson                 LAN Access Division, Intel Corp.
Shannon.Nelson@intel.com                I don't speak for Intel
(503) 712-7659                    Parents can't afford to be squeamish.

^ permalink raw reply	[flat|nested] 33+ messages in thread
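
The fix under discussion is a one-liner after the PCI ID table; a
sketch against the table quoted earlier in the thread:

static struct pci_device_id ioat_pci_tbl[] = {
	{ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_IOAT) },
	{ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_IOAT_CNB)  },
	{ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_IOAT_SCNB) },
	{ PCI_DEVICE(PCI_VENDOR_ID_UNISYS, PCI_DEVICE_ID_UNISYS_DMA_DIRECTOR) },
	{ 0, }
};

/* export the IDs so hotplug/udev can autoload the module for these devices */
MODULE_DEVICE_TABLE(pci, ioat_pci_tbl);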

* RE: [PATCH 6/7] DCA: Add Direct Cache Access driver
  2007-07-20  0:52   ` David Miller
@ 2007-07-20 16:35     ` Nelson, Shannon
  0 siblings, 0 replies; 33+ messages in thread
From: Nelson, Shannon @ 2007-07-20 16:35 UTC (permalink / raw)
  To: David Miller
  Cc: akpm, linux-kernel, jeff, Williams, Dan J, Leech, Christopher,
	Waskiewicz Jr, Peter P

David Miller [mailto:davem@davemloft.net] 
>
>> +static spinlock_t dca_lock;
> ...
>> +	spin_lock_init(&dca_lock);
>
>It's easier to use DEFINE_SPINLOCK().
>

Thanks - I'll adjust that in a future patch.
sln
--
======================================================================
Mr. Shannon Nelson                 LAN Access Division, Intel Corp.
Shannon.Nelson@intel.com                I don't speak for Intel
(503) 712-7659                    Parents can't afford to be squeamish.

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH 5/7] I/OAT: Add support for MSI and MSI-X
  2007-07-20  0:45 ` [PATCH 5/7] I/OAT: Add support for MSI and MSI-X Shannon Nelson
  2007-07-20  0:51   ` David Miller
@ 2007-07-20 17:43   ` Roland Dreier
  2007-07-20 18:09     ` Waskiewicz Jr, Peter P
  1 sibling, 1 reply; 33+ messages in thread
From: Roland Dreier @ 2007-07-20 17:43 UTC (permalink / raw)
  To: Shannon Nelson
  Cc: akpm, linux-kernel, davem, jeff, dan.j.williams,
	christopher.leech, peter.p.waskiewicz.jr

are there any devices that support MSI but not MSI-X?  If not, is
there any point in having code to support both?

 - R.

^ permalink raw reply	[flat|nested] 33+ messages in thread

* RE: [PATCH 5/7] I/OAT: Add support for MSI and MSI-X
  2007-07-20 17:43   ` Roland Dreier
@ 2007-07-20 18:09     ` Waskiewicz Jr, Peter P
  2007-07-20 19:05       ` Roland Dreier
  0 siblings, 1 reply; 33+ messages in thread
From: Waskiewicz Jr, Peter P @ 2007-07-20 18:09 UTC (permalink / raw)
  To: Roland Dreier, Nelson, Shannon
  Cc: akpm, linux-kernel, davem, jeff, Williams, Dan J, Leech, Christopher

> -----Original Message-----
> From: Roland Dreier [mailto:rdreier@cisco.com] 
> Sent: Friday, July 20, 2007 10:43 AM
> To: Nelson, Shannon
> Cc: akpm@linux-foundation.org; linux-kernel@vger.kernel.org; 
> davem@davemloft.net; jeff@garzik.org; Williams, Dan J; Leech, 
> Christopher; Waskiewicz Jr, Peter P
> Subject: Re: [PATCH 5/7] I/OAT: Add support for MSI and MSI-X
> 
> are there any devices that support MSI but not MSI-X?  If 
> not, is there any point in having code to support both?
> 
>  - R.
> 

Both igb (recently posted) and ixgbe (also recently posted) support both
MSI and MSI-X.  Right now when we try to request MSI-X vectors, if we
fail to acquire what we've asked for, we fall back to MSI support.  If
MSI fails to initialize, we fall back to legacy interrupts.  So it needs
to be there in case MSI-X allocation fails for the NIC driver.

Thanks,
-PJ Waskiewicz

^ permalink raw reply	[flat|nested] 33+ messages in thread
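
A condensed sketch of the fallback ladder described above, written
against the 2.6.22-era PCI API; the vector count and function name are
illustrative:

#include <linux/pci.h>

#define MY_NUM_VECS 3	/* e.g. one vector per Rx queue */

static int my_setup_interrupts(struct pci_dev *pdev)
{
	struct msix_entry msix[MY_NUM_VECS];
	int i, err;

	for (i = 0; i < MY_NUM_VECS; i++)
		msix[i].entry = i;

	/* first choice: one MSI-X vector per queue */
	err = pci_enable_msix(pdev, msix, MY_NUM_VECS);
	if (!err)
		return 0;	/* request_irq() on each msix[i].vector */

	/* second choice: a single MSI interrupt */
	if (!pci_enable_msi(pdev))
		return 0;	/* request_irq() on pdev->irq */

	/* last resort: shared legacy INTx, also on pdev->irq */
	return 0;
}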

* Re: [PATCH 5/7] I/OAT: Add support for MSI and MSI-X
  2007-07-20 18:09     ` Waskiewicz Jr, Peter P
@ 2007-07-20 19:05       ` Roland Dreier
  2007-07-20 19:19         ` Waskiewicz Jr, Peter P
  0 siblings, 1 reply; 33+ messages in thread
From: Roland Dreier @ 2007-07-20 19:05 UTC (permalink / raw)
  To: Waskiewicz Jr, Peter P
  Cc: Nelson, Shannon, akpm, linux-kernel, davem, jeff, Williams,
	Dan J, Leech, Christopher

 > Both igb (recently posted) and ixgbe (also recently posted) support both
 > MSI and MSI-X.  Right now when we try to request MSI-X vectors, if we
 > fail to acquire what we've asked for, we fall back to MSI support.  If
 > MSI fails to initialize, we fall back to legacy interrupts.  So it needs
 > to be there in case MSI-X allocation fails for the NIC driver.

Hmm, I see I don't understand what this driver is doing.  What is a
"struct ioatdma_device"?  Is this driver requesting interrupts that
come from the NIC or the IOAT DMA engine?

Anyway, if the NICs support MSI-X, is there any chance of failing to
get a single MSI-X vector but then succeeding in getting MSI enabled?
How could that happen?  I don't see what falling back to MSI buys you
beyond more code.

 - R.

^ permalink raw reply	[flat|nested] 33+ messages in thread

* RE: [PATCH 5/7] I/OAT: Add support for MSI and MSI-X
  2007-07-20 19:05       ` Roland Dreier
@ 2007-07-20 19:19         ` Waskiewicz Jr, Peter P
  2007-07-20 19:49           ` Roland Dreier
  0 siblings, 1 reply; 33+ messages in thread
From: Waskiewicz Jr, Peter P @ 2007-07-20 19:19 UTC (permalink / raw)
  To: Roland Dreier
  Cc: Nelson, Shannon, akpm, linux-kernel, davem, jeff, Williams,
	Dan J, Leech, Christopher

> Hmm, I see I don't understand what this driver is doing.  
> What is a "struct ioatdma_device"?  Is this driver requesting 
> interrupts that come from the NIC or the IOAT DMA engine?

I might have caused some confusion.  You had asked if any drivers
support MSI but not MSI-X, so I threw 2 drivers out there that currently
support both, and explained why we support MSI for compatibility.

> Anyway, if the NICs support MSI-X, is there any chance of 
> failing to get a single MSI-X vector but then succeeding in 
> getting MSI enabled?
> How could that happen?  I don't see what falling back to MSI 
> buys you beyond more code.

MSI-X doesn't make much sense if you have 1 Rx queue on your NIC, since
1 vector essentially acts like MSI.  As for why MSI-X could fail: I
have had it fail when I misconfigured my driver and didn't ask for
enough vectors for what I was assigning, so the driver disabled the
multiple Rx queues and fell back to MSI.

I hope this helps.

-PJ

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH 5/7] I/OAT: Add support for MSI and MSI-X
  2007-07-20 19:19         ` Waskiewicz Jr, Peter P
@ 2007-07-20 19:49           ` Roland Dreier
  2007-07-20 21:10             ` Leech, Christopher
  2007-07-20 21:13             ` Nelson, Shannon
  0 siblings, 2 replies; 33+ messages in thread
From: Roland Dreier @ 2007-07-20 19:49 UTC (permalink / raw)
  To: Waskiewicz Jr, Peter P
  Cc: Nelson, Shannon, akpm, linux-kernel, davem, jeff, Williams,
	Dan J, Leech, Christopher

 > > Hmm, I see I don't understand what this driver is doing.  
 > > What is a "struct ioatdma_device"?  Is this driver requesting 
 > > interrupts that come from the NIC or the IOAT DMA engine?
 > 
 > I might have caused some confusion.  You had asked if any drivers
 > support MSI but not MSI-X, so I threw 2 drivers out there that currently
 > support both, and explained why we support MSI for compatibility.
 > 
 > > Anyway, if the NICs support MSI-X, is there any chance of 
 > > failing to get a single MSI-X vector but then succeeding in 
 > > getting MSI enabled?
 > > How could that happen?  I don't see what falling back to MSI 
 > > buys you beyond more code.
 > 
 > MSI-X doesn't make much sense if you have 1 Rx queue on your NIC, since
 > 1 vector essentially acts like MSI.  As for why MSI-X could fail: I
 > have had it fail when I misconfigured my driver and didn't ask for
 > enough vectors for what I was assigning, so the driver disabled the
 > multiple Rx queues and fell back to MSI.

OK, let's try to avoid going off into the weeds here.  In the context
of the specific patch that this thread started with, is there any
point in having both "msix-single-vector" and "msi" interrupt support?

 - R.

^ permalink raw reply	[flat|nested] 33+ messages in thread

* RE: [PATCH 5/7] I/OAT: Add support for MSI and MSI-X
  2007-07-20 19:49           ` Roland Dreier
@ 2007-07-20 21:10             ` Leech, Christopher
  2007-07-20 21:21               ` Roland Dreier
  2007-07-20 21:13             ` Nelson, Shannon
  1 sibling, 1 reply; 33+ messages in thread
From: Leech, Christopher @ 2007-07-20 21:10 UTC (permalink / raw)
  To: Roland Dreier, Waskiewicz Jr, Peter P
  Cc: Nelson, Shannon, akpm, linux-kernel, davem, jeff, Williams, Dan J

Roland Dreier wrote:
> OK, let's try to avoid going off into the weeds here.  In the context
> of the specific patch that this thread started with, is there any
> point in having both "msix-single-vector" and "msi" interrupt support?

This driver supports some chipsets that do MSI, and some that do MSI-X,
but none that can do both.

- Chris

^ permalink raw reply	[flat|nested] 33+ messages in thread

* RE: [PATCH 5/7] I/OAT: Add support for MSI and MSI-X
  2007-07-20 19:49           ` Roland Dreier
  2007-07-20 21:10             ` Leech, Christopher
@ 2007-07-20 21:13             ` Nelson, Shannon
  1 sibling, 0 replies; 33+ messages in thread
From: Nelson, Shannon @ 2007-07-20 21:13 UTC (permalink / raw)
  To: Roland Dreier, Waskiewicz Jr, Peter P
  Cc: akpm, linux-kernel, davem, jeff, Williams, Dan J, Leech, Christopher

Roland Dreier [mailto:rdreier@cisco.com] 
>
>OK, let's try to avoid going off into the weeds here.  In the context
>of the specific patch that this thread started with, is there any
>point in having both "msix-single-vector" and "msi" interrupt support?

Some versions of this hardware support MSI-X and not MSI.  If it can't
get all the MSI-X interrupts, it will try a single interrupt before
falling all the way down to legacy interrupts.  Basically, if we can
only get one interrupt, we'd still like to use the MSI-X mechanics if
possible.  If that's totally out of the question, then we'll use legacy.

sln
--
======================================================================
Mr. Shannon Nelson                 LAN Access Division, Intel Corp.
Shannon.Nelson@intel.com                I don't speak for Intel
(503) 712-7659                    Parents can't afford to be squeamish.

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH 5/7] I/OAT: Add support for MSI and MSI-X
  2007-07-20 21:10             ` Leech, Christopher
@ 2007-07-20 21:21               ` Roland Dreier
  2007-07-20 21:32                 ` Manu Abraham
  0 siblings, 1 reply; 33+ messages in thread
From: Roland Dreier @ 2007-07-20 21:21 UTC (permalink / raw)
  To: Leech, Christopher
  Cc: Waskiewicz Jr, Peter P, Nelson, Shannon, akpm, linux-kernel,
	davem, jeff, Williams, Dan J

 > This driver supports some chipsets that do MSI, and some that do MSI-X,
 > but none that can do both.

Thanks, that's the simple answer I was hoping for.  Obviously if some
chipsets only do MSI then you need the MSI code in addition to the
MSI-X code.

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH 5/7] I/OAT: Add support for MSI and MSI-X
  2007-07-20 21:21               ` Roland Dreier
@ 2007-07-20 21:32                 ` Manu Abraham
  2007-07-20 21:38                   ` Roland Dreier
  0 siblings, 1 reply; 33+ messages in thread
From: Manu Abraham @ 2007-07-20 21:32 UTC (permalink / raw)
  To: Roland Dreier
  Cc: Leech, Christopher, Waskiewicz Jr, Peter P, Nelson, Shannon,
	akpm, linux-kernel, davem, jeff, Williams, Dan J

On 7/21/07, Roland Dreier <rdreier@cisco.com> wrote:
>  > This driver supports some chipsets that do MSI, and some that do MSI-X,
>  > but none that can do both.
>
> Thanks, that's the simple answer I was hoping for.  Obviously if some
> chipsets only do MSI then you need the MSI code in addition to the
> MSI-X code.

Consider a device that supports MSI-X (multiple interrupts) but is in
MSI mode by default, i.e. some configuration change is needed on the
device.  In such a case, how would one choose between MSI-X and MSI?

i.e. the device initially doesn't support MSI-X

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH 5/7] I/OAT: Add support for MSI and MSI-X
  2007-07-20 21:32                 ` Manu Abraham
@ 2007-07-20 21:38                   ` Roland Dreier
  2007-07-20 21:49                     ` Manu Abraham
  0 siblings, 1 reply; 33+ messages in thread
From: Roland Dreier @ 2007-07-20 21:38 UTC (permalink / raw)
  To: Manu Abraham
  Cc: Leech, Christopher, Waskiewicz Jr, Peter P, Nelson, Shannon,
	akpm, linux-kernel, davem, jeff, Williams, Dan J

 > Consider a device that supports MSI-X (multiple interrupts) but is
 > in MSI mode by default, i.e. some configuration change is needed on
 > the device.  In such a case, how would one choose between MSI-X and
 > MSI?
 > 
 > i.e. the device initially doesn't support MSI-X

I don't understand the question really.  Does the PCI spec even allow
a device to be in MSI mode by default?  Surely the OS must initialize
the address/message before the device can generate an MSI?

What device do you have in mind?  I guess the interesting case is a
PCIe device that supports MSI and MSI-X but not legacy interrupts.
However I would assume such a device would come up with both MSI and
MSI-X disabled.

 - R.

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH 5/7] I/OAT: Add support for MSI and MSI-X
  2007-07-20 21:38                   ` Roland Dreier
@ 2007-07-20 21:49                     ` Manu Abraham
  2007-07-20 21:54                       ` Nelson, Shannon
  0 siblings, 1 reply; 33+ messages in thread
From: Manu Abraham @ 2007-07-20 21:49 UTC (permalink / raw)
  To: Roland Dreier
  Cc: Leech, Christopher, Waskiewicz Jr, Peter P, Nelson, Shannon,
	akpm, linux-kernel, davem, jeff, Williams, Dan J

On 7/21/07, Roland Dreier <rdreier@cisco.com> wrote:
>  > Consider a device that supports MSI-X (multiple interrupts) but is
>  > in MSI mode by default, i.e. some configuration change is needed
>  > on the device.  In such a case, how would one choose between MSI-X
>  > and MSI?
>  >
>  > i.e. the device initially doesn't support MSI-X
>
> I don't understand the question really.  Does the PCI spec even allow
> a device to be in MSI mode by default?  Surely the OS must initialize
> the address/message before the device can generate an MSI?
>

Sorry for not being clear.  What I was asking is this:

A device has legacy interrupts and MSI-X.  I was assuming that if
MSI-X failed, one should fall back to MSI mode (single message).

In such a case, I enable MSI-X mode on the device and request 2^n
messages (where the message count can be at most 32).  If the request
fails, does one fall back to single-message mode, i.e. MSI?


> What device do you have in mind?


The device that I have in mind is a SAA7160.


> I guess the interesting case is a
> PCIe device that supports MSI and MSI-X but not legacy interrupts.
> However I would assume such a device would come up with both MSI and
> MSI-X disabled.
>
>  - R.
>

^ permalink raw reply	[flat|nested] 33+ messages in thread

* RE: [PATCH 5/7] I/OAT: Add support for MSI and MSI-X
  2007-07-20 21:49                     ` Manu Abraham
@ 2007-07-20 21:54                       ` Nelson, Shannon
  2007-07-20 21:59                         ` Manu Abraham
  0 siblings, 1 reply; 33+ messages in thread
From: Nelson, Shannon @ 2007-07-20 21:54 UTC (permalink / raw)
  To: Manu Abraham, Roland Dreier
  Cc: Leech, Christopher, Waskiewicz Jr, Peter P, akpm, linux-kernel,
	davem, jeff, Williams, Dan J

Manu Abraham [mailto:abraham.manu@gmail.com] 
>Sorry for not being clear.  What I was asking is this:
>
>A device has legacy interrupts and MSI-X.  I was assuming that if
>MSI-X failed, one should fall back to MSI mode (single message).
>
>In such a case, I enable MSI-X mode on the device and request 2^n
>messages (where the message count can be at most 32).  If the request
>fails, does one fall back to single-message mode, i.e. MSI?
>
>
>> What device do you have in mind?
>
>
>The device that I have in mind is a SAA7160.

Notice our code looks at the return from pci_enable_msix() - it will
give you a hint whether MSI-X is not supported (returns < 0) or you
simply asked for too many vectors (returns > 0).  If the former, then
fall back to legacy; if the latter, try MSI-X with only one interrupt,
which essentially emulates MSI mode.

sln
--
======================================================================
Mr. Shannon Nelson                 LAN Access Division, Intel Corp.
Shannon.Nelson@intel.com                I don't speak for Intel
(503) 712-7659                    Parents can't afford to be squeamish.

^ permalink raw reply	[flat|nested] 33+ messages in thread
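
Sketched out, that return-value handling looks like the following;
my_choose_irq_scheme is a hypothetical helper, not code from the
patch:

#include <linux/pci.h>

static int my_choose_irq_scheme(struct pci_dev *pdev,
				struct msix_entry *msix, int nvecs)
{
	int err;

	err = pci_enable_msix(pdev, msix, nvecs);
	if (err > 0)
		/*
		 * Positive return: MSI-X exists but fewer than nvecs
		 * vectors are available.  Retry with a single vector,
		 * which essentially emulates MSI.
		 */
		err = pci_enable_msix(pdev, msix, 1);

	if (err < 0)
		/* negative return: no MSI-X at all, use legacy INTx */
		return pdev->irq ? 0 : err;

	return 0;
}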

* Re: [PATCH 5/7] I/OAT: Add support for MSI and MSI-X
  2007-07-20 21:54                       ` Nelson, Shannon
@ 2007-07-20 21:59                         ` Manu Abraham
  2007-07-20 22:00                           ` Nelson, Shannon
  2007-07-20 22:07                           ` Waskiewicz Jr, Peter P
  0 siblings, 2 replies; 33+ messages in thread
From: Manu Abraham @ 2007-07-20 21:59 UTC (permalink / raw)
  To: Nelson, Shannon
  Cc: Roland Dreier, Leech, Christopher, Waskiewicz Jr, Peter P, akpm,
	linux-kernel, davem, jeff, Williams, Dan J

On 7/21/07, Nelson, Shannon <shannon.nelson@intel.com> wrote:
> Manu Abraham [mailto:abraham.manu@gmail.com]
> >Sorry for not being clear.  What I was asking is this:
> >
> >A device has legacy interrupts and MSI-X.  I was assuming that if
> >MSI-X failed, one should fall back to MSI mode (single message).
> >
> >In such a case, I enable MSI-X mode on the device and request 2^n
> >messages (where the message count can be at most 32).  If the
> >request fails, does one fall back to single-message mode, i.e. MSI?
> >
> >
> >> What device do you have in mind?
> >
> >
> >The device that I have in mind is a SAA7160.
>
> Notice our code looks at the return from pci_enable_msix() - it will
> give you a hint whether MSI-X is not supported (returns < 0) or you
> simply asked for too many vectors (returns > 0).  If the former, then
> fall back to legacy; if the latter, try MSI-X with only one interrupt,
> which essentially emulates MSI mode.

OK, thanks for clearing it up.  So the idea would be that if
pci_enable_msix() for n messages fails, we settle for 1 message, which
is equivalent to MSI mode.

^ permalink raw reply	[flat|nested] 33+ messages in thread

* RE: [PATCH 5/7] I/OAT: Add support for MSI and MSI-X
  2007-07-20 21:59                         ` Manu Abraham
@ 2007-07-20 22:00                           ` Nelson, Shannon
  2007-07-20 22:07                           ` Waskiewicz Jr, Peter P
  1 sibling, 0 replies; 33+ messages in thread
From: Nelson, Shannon @ 2007-07-20 22:00 UTC (permalink / raw)
  To: Manu Abraham; +Cc: linux-kernel

Manu Abraham [mailto:abraham.manu@gmail.com] 
>
>Ok. Thanks for clearing it up. So the idea would be that if
>pci_enable_msix() for "n" number of messages failed, then settle down
>for 1 message, which is equivalent to MSI mode
>

yep

^ permalink raw reply	[flat|nested] 33+ messages in thread

* RE: [PATCH 5/7] I/OAT: Add support for MSI and MSI-X
  2007-07-20 21:59                         ` Manu Abraham
  2007-07-20 22:00                           ` Nelson, Shannon
@ 2007-07-20 22:07                           ` Waskiewicz Jr, Peter P
  1 sibling, 0 replies; 33+ messages in thread
From: Waskiewicz Jr, Peter P @ 2007-07-20 22:07 UTC (permalink / raw)
  To: Manu Abraham, Nelson, Shannon
  Cc: Roland Dreier, Leech, Christopher, akpm, linux-kernel, davem,
	jeff, Williams, Dan J


> OK, thanks for clearing it up.  So the idea would be that if
> pci_enable_msix() for n messages fails, we settle for 1 message,
> which is equivalent to MSI mode.

And if that fails to acquire the one vector for whatever reason (MSI
isn't enabled, or something is buggy in your chipset), fall back to
legacy interrupts.

Cheers,
-PJ Waskiewicz

^ permalink raw reply	[flat|nested] 33+ messages in thread

end of thread, other threads:[~2007-07-20 22:41 UTC | newest]

Thread overview: 33+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2007-07-20  0:44 [PATCH 0/7] I/OAT: Add support for DCA - Direct Cache Access Shannon Nelson
2007-07-20  0:44 ` [PATCH 1/7] I/OAT: New device ids Shannon Nelson
2007-07-20  0:49   ` David Miller
2007-07-20  0:44 ` [PATCH 2/7] I/OAT: Rename the source file Shannon Nelson
2007-07-20  0:49   ` David Miller
2007-07-20  0:45 ` [PATCH 3/7] I/OAT: code cleanup from checkpatch output Shannon Nelson
2007-07-20  0:49   ` David Miller
2007-07-20  0:45 ` [PATCH 4/7] I/OAT: Split PCI startup from DMA handling code Shannon Nelson
2007-07-20  0:50   ` David Miller
2007-07-20 10:53   ` Andrey Panin
2007-07-20 16:33     ` Nelson, Shannon
2007-07-20  0:45 ` [PATCH 5/7] I/OAT: Add support for MSI and MSI-X Shannon Nelson
2007-07-20  0:51   ` David Miller
2007-07-20 17:43   ` Roland Dreier
2007-07-20 18:09     ` Waskiewicz Jr, Peter P
2007-07-20 19:05       ` Roland Dreier
2007-07-20 19:19         ` Waskiewicz Jr, Peter P
2007-07-20 19:49           ` Roland Dreier
2007-07-20 21:10             ` Leech, Christopher
2007-07-20 21:21               ` Roland Dreier
2007-07-20 21:32                 ` Manu Abraham
2007-07-20 21:38                   ` Roland Dreier
2007-07-20 21:49                     ` Manu Abraham
2007-07-20 21:54                       ` Nelson, Shannon
2007-07-20 21:59                         ` Manu Abraham
2007-07-20 22:00                           ` Nelson, Shannon
2007-07-20 22:07                           ` Waskiewicz Jr, Peter P
2007-07-20 21:13             ` Nelson, Shannon
2007-07-20  0:45 ` [PATCH 6/7] DCA: Add Direct Cache Access driver Shannon Nelson
2007-07-20  0:52   ` David Miller
2007-07-20 16:35     ` Nelson, Shannon
2007-07-20  0:45 ` [PATCH 7/7] I/OAT: Add DCA services Shannon Nelson
2007-07-20  0:52   ` David Miller
