* [PATCH 00/33] SG table chaining support
@ 2007-07-16  9:47 Jens Axboe
  2007-07-16  9:47 ` [PATCH 01/33] crypto: don't pollute the global namespace with sg_next() Jens Axboe
                   ` (35 more replies)
  0 siblings, 36 replies; 73+ messages in thread
From: Jens Axboe @ 2007-07-16  9:47 UTC (permalink / raw)
  To: linux-kernel, linux-scsi

Hi,

A repost of this patchset, which adds support for chaining of sg tables.
This enables much larger IO commands, since we no longer have to allocate
one large, contiguous piece of memory to represent the sg table of a
huge command. Right now Linux is limited to somewhere between 128 and 256
segments, depending on the architecture, which translates into at most
512KiB-1MiB request sizes. With this patchset, I've successfully pushed
10MiB commands through the IO stack.
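
As a rough illustration of what chaining looks like from a user of the
new helpers (just a sketch, not code from the series; the sizes and the
kcalloc() allocation are made up, and sg_chain() is only added in patch
05):

	struct scatterlist *first, *second;

	first = kcalloc(128, sizeof(*first), GFP_KERNEL);
	second = kcalloc(128, sizeof(*second), GFP_KERNEL);

	/* last slot of 'first' becomes a link, leaving 127 + 128 entries */
	sg_chain(first, 128, second);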

This will potentially increase performance a lot on hardware that
requires larger IO commands to perform at its maximum.

Also see http://marc.info/?l=linux-kernel&m=117869783524152

To enable large IO commands for device sda, you would do:

# cd /sys/block/sda/queue
# echo 4096 > max_segments
# cat max_hw_sectors_kb  > max_sectors_kb

Then cat max_hw_sectors_kb to see what your largest IO size now is.

Changes since last post:

- Rebase to current -git. Lots of SCSI drivers have been converted
  to use the sg accessor helpers, which nicely shrinks this patchset
  from 70 to 33 patches. Great!

-- 
Jens Axboe





* [PATCH 01/33] crypto: don't pollute the global namespace with sg_next()
  2007-07-16  9:47 [PATCH 00/33] SG table chaining support Jens Axboe
@ 2007-07-16  9:47 ` Jens Axboe
  2007-07-16  9:47 ` [PATCH 02/33] Add sg helpers for iterating over a scatterlist table Jens Axboe
                   ` (34 subsequent siblings)
  35 siblings, 0 replies; 73+ messages in thread
From: Jens Axboe @ 2007-07-16  9:47 UTC (permalink / raw)
  To: linux-kernel, linux-scsi; +Cc: Jens Axboe, James Morris

It's a subsystem function, so prefix it as such and leave the generic
sg_next() name free.

Cc: James Morris <jmorris@namei.org>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
 crypto/digest.c      |    2 +-
 crypto/scatterwalk.c |    2 +-
 crypto/scatterwalk.h |    2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/crypto/digest.c b/crypto/digest.c
index 1bf7414..e56de67 100644
--- a/crypto/digest.c
+++ b/crypto/digest.c
@@ -77,7 +77,7 @@ static int update2(struct hash_desc *desc,
 
 		if (!nbytes)
 			break;
-		sg = sg_next(sg);
+		sg = scatterwalk_sg_next(sg);
 	}
 
 	return 0;
diff --git a/crypto/scatterwalk.c b/crypto/scatterwalk.c
index 81afd17..2e51f82 100644
--- a/crypto/scatterwalk.c
+++ b/crypto/scatterwalk.c
@@ -70,7 +70,7 @@ static void scatterwalk_pagedone(struct scatter_walk *walk, int out,
 		walk->offset += PAGE_SIZE - 1;
 		walk->offset &= PAGE_MASK;
 		if (walk->offset >= walk->sg->offset + walk->sg->length)
-			scatterwalk_start(walk, sg_next(walk->sg));
+			scatterwalk_start(walk, scatterwalk_sg_next(walk->sg));
 	}
 }
 
diff --git a/crypto/scatterwalk.h b/crypto/scatterwalk.h
index f1592cc..e049c62 100644
--- a/crypto/scatterwalk.h
+++ b/crypto/scatterwalk.h
@@ -20,7 +20,7 @@
 
 #include "internal.h"
 
-static inline struct scatterlist *sg_next(struct scatterlist *sg)
+static inline struct scatterlist *scatterwalk_sg_next(struct scatterlist *sg)
 {
 	return (++sg)->length ? sg : (void *)sg->page;
 }
-- 
1.5.3.rc0.90.gbaa79



* [PATCH 02/33] Add sg helpers for iterating over a scatterlist table
  2007-07-16  9:47 [PATCH 00/33] SG table chaining support Jens Axboe
  2007-07-16  9:47 ` [PATCH 01/33] crypto: don't pollute the global namespace with sg_next() Jens Axboe
@ 2007-07-16  9:47 ` Jens Axboe
  2007-07-16  9:47 ` [PATCH 03/33] block: convert to using sg helpers Jens Axboe
                   ` (33 subsequent siblings)
  35 siblings, 0 replies; 73+ messages in thread
From: Jens Axboe @ 2007-07-16  9:47 UTC (permalink / raw)
  To: linux-kernel, linux-scsi; +Cc: Jens Axboe

First step to being able to change the scatterlist setup without
having to modify drivers (a lot :-)
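
Purely as an illustration (not part of the patch; do_something() is a
made-up stand-in), a driver loop that currently reads

	for (i = 0; i < nents; i++)
		do_something(&sg[i]);

can now be written as

	struct scatterlist *s;
	int i;

	for_each_sg(sg, s, nents, i)
		do_something(s);

and will keep working unmodified once sg_next() learns to follow chain
links later in the series.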

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
 include/linux/scatterlist.h |    9 +++++++++
 1 files changed, 9 insertions(+), 0 deletions(-)

diff --git a/include/linux/scatterlist.h b/include/linux/scatterlist.h
index 4efbd9c..bed5ab4 100644
--- a/include/linux/scatterlist.h
+++ b/include/linux/scatterlist.h
@@ -20,4 +20,13 @@ static inline void sg_init_one(struct scatterlist *sg, const void *buf,
 	sg_set_buf(sg, buf, buflen);
 }
 
+#define sg_next(sg)		((sg) + 1)
+#define sg_last(sg, nents)	(&(sg[(nents) - 1]))
+
+/*
+ * Loop over each sg element, following the pointer to a new list if necessary
+ */
+#define for_each_sg(sglist, sg, nr, __i)	\
+	for (__i = 0, sg = (sglist); __i < (nr); __i++, sg = sg_next(sg))
+
 #endif /* _LINUX_SCATTERLIST_H */
-- 
1.5.3.rc0.90.gbaa79



* [PATCH 03/33] block: convert to using sg helpers
  2007-07-16  9:47 [PATCH 00/33] SG table chaining support Jens Axboe
  2007-07-16  9:47 ` [PATCH 01/33] crypto: don't pollute the global namespace with sg_next() Jens Axboe
  2007-07-16  9:47 ` [PATCH 02/33] Add sg helpers for iterating over a scatterlist table Jens Axboe
@ 2007-07-16  9:47 ` Jens Axboe
  2007-07-16 19:21   ` Bartlomiej Zolnierkiewicz
  2007-07-16  9:47 ` [PATCH 04/33] scsi: " Jens Axboe
                   ` (32 subsequent siblings)
  35 siblings, 1 reply; 73+ messages in thread
From: Jens Axboe @ 2007-07-16  9:47 UTC (permalink / raw)
  To: linux-kernel, linux-scsi; +Cc: Jens Axboe

Convert the main rq mapper (blk_rq_map_sg()) to the sg helper setup.

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
 block/ll_rw_blk.c |   19 ++++++++++++-------
 1 files changed, 12 insertions(+), 7 deletions(-)

diff --git a/block/ll_rw_blk.c b/block/ll_rw_blk.c
index ef42bb2..ab71087 100644
--- a/block/ll_rw_blk.c
+++ b/block/ll_rw_blk.c
@@ -30,6 +30,7 @@
 #include <linux/cpu.h>
 #include <linux/blktrace_api.h>
 #include <linux/fault-inject.h>
+#include <linux/scatterlist.h>
 
 /*
  * for max sense size
@@ -1307,9 +1308,11 @@ static int blk_hw_contig_segment(request_queue_t *q, struct bio *bio,
  * map a request to scatterlist, return number of sg entries setup. Caller
  * must make sure sg can hold rq->nr_phys_segments entries
  */
-int blk_rq_map_sg(request_queue_t *q, struct request *rq, struct scatterlist *sg)
+int blk_rq_map_sg(request_queue_t *q, struct request *rq,
+		  struct scatterlist *sglist)
 {
 	struct bio_vec *bvec, *bvprv;
+	struct scatterlist *next_sg, *sg;
 	struct bio *bio;
 	int nsegs, i, cluster;
 
@@ -1320,6 +1323,7 @@ int blk_rq_map_sg(request_queue_t *q, struct request *rq, struct scatterlist *sg
 	 * for each bio in rq
 	 */
 	bvprv = NULL;
+	sg = next_sg = &sglist[0];
 	rq_for_each_bio(bio, rq) {
 		/*
 		 * for each segment in bio
@@ -1328,7 +1332,7 @@ int blk_rq_map_sg(request_queue_t *q, struct request *rq, struct scatterlist *sg
 			int nbytes = bvec->bv_len;
 
 			if (bvprv && cluster) {
-				if (sg[nsegs - 1].length + nbytes > q->max_segment_size)
+				if (sg->length + nbytes > q->max_segment_size)
 					goto new_segment;
 
 				if (!BIOVEC_PHYS_MERGEABLE(bvprv, bvec))
@@ -1336,14 +1340,15 @@ int blk_rq_map_sg(request_queue_t *q, struct request *rq, struct scatterlist *sg
 				if (!BIOVEC_SEG_BOUNDARY(q, bvprv, bvec))
 					goto new_segment;
 
-				sg[nsegs - 1].length += nbytes;
+				sg->length += nbytes;
 			} else {
 new_segment:
-				memset(&sg[nsegs],0,sizeof(struct scatterlist));
-				sg[nsegs].page = bvec->bv_page;
-				sg[nsegs].length = nbytes;
-				sg[nsegs].offset = bvec->bv_offset;
+				sg = next_sg;
+				next_sg = sg_next(sg);
 
+				sg->page = bvec->bv_page;
+				sg->length = nbytes;
+				sg->offset = bvec->bv_offset;
 				nsegs++;
 			}
 			bvprv = bvec;
-- 
1.5.3.rc0.90.gbaa79



* [PATCH 04/33] scsi: convert to using sg helpers
  2007-07-16  9:47 [PATCH 00/33] SG table chaining support Jens Axboe
                   ` (2 preceding siblings ...)
  2007-07-16  9:47 ` [PATCH 03/33] block: convert to using sg helpers Jens Axboe
@ 2007-07-16  9:47 ` Jens Axboe
  2007-07-16  9:47 ` [PATCH 05/33] Add chained sg support to linux/scatterlist.h Jens Axboe
                   ` (31 subsequent siblings)
  35 siblings, 0 replies; 73+ messages in thread
From: Jens Axboe @ 2007-07-16  9:47 UTC (permalink / raw)
  To: linux-kernel, linux-scsi; +Cc: Jens Axboe, FUJITA Tomonori, James.Bottomley

This converts the SCSI mid layer to using the sg helpers for looking up
sg elements, instead of doing it manually.
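
For illustration only (not from this patch; hw and fill_hw_entry() are
made-up stand-ins for however a driver builds its hardware SG entries,
and scsi_sg_count() is assumed from the recently merged accessor
helpers), a LLD would now walk the command like this:

	struct scatterlist *sg;
	int i;

	scsi_for_each_sg(cmd, sg, scsi_sg_count(cmd), i)
		fill_hw_entry(hw, i, page_to_phys(sg->page) + sg->offset,
			      sg->length);

rather than indexing scsi_sglist(cmd) directly.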

Cc: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Cc: James.Bottomley@SteelEye.com
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
 drivers/scsi/scsi_lib.c  |   21 ++++++++++++---------
 include/scsi/scsi_cmnd.h |    2 +-
 2 files changed, 13 insertions(+), 10 deletions(-)

diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index 1f5a07b..9fe66f8 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -17,6 +17,7 @@
 #include <linux/pci.h>
 #include <linux/delay.h>
 #include <linux/hardirq.h>
+#include <linux/scatterlist.h>
 
 #include <scsi/scsi.h>
 #include <scsi/scsi_cmnd.h>
@@ -302,14 +303,15 @@ static int scsi_req_map_sg(struct request *rq, struct scatterlist *sgl,
 	struct request_queue *q = rq->q;
 	int nr_pages = (bufflen + sgl[0].offset + PAGE_SIZE - 1) >> PAGE_SHIFT;
 	unsigned int data_len = 0, len, bytes, off;
+	struct scatterlist *sg;
 	struct page *page;
 	struct bio *bio = NULL;
 	int i, err, nr_vecs = 0;
 
-	for (i = 0; i < nsegs; i++) {
-		page = sgl[i].page;
-		off = sgl[i].offset;
-		len = sgl[i].length;
+	for_each_sg(sgl, sg, nsegs, i) {
+		page = sg->page;
+		off = sg->offset;
+		len = sg->length;
 		data_len += len;
 
 		while (len > 0) {
@@ -2240,18 +2242,19 @@ EXPORT_SYMBOL_GPL(scsi_target_unblock);
  *
  * Returns virtual address of the start of the mapped page
  */
-void *scsi_kmap_atomic_sg(struct scatterlist *sg, int sg_count,
+void *scsi_kmap_atomic_sg(struct scatterlist *sgl, int sg_count,
 			  size_t *offset, size_t *len)
 {
 	int i;
 	size_t sg_len = 0, len_complete = 0;
+	struct scatterlist *sg;
 	struct page *page;
 
 	WARN_ON(!irqs_disabled());
 
-	for (i = 0; i < sg_count; i++) {
+	for_each_sg(sgl, sg, sg_count, i) {
 		len_complete = sg_len; /* Complete sg-entries */
-		sg_len += sg[i].length;
+		sg_len += sg->length;
 		if (sg_len > *offset)
 			break;
 	}
@@ -2265,10 +2268,10 @@ void *scsi_kmap_atomic_sg(struct scatterlist *sg, int sg_count,
 	}
 
 	/* Offset starting from the beginning of first page in this sg-entry */
-	*offset = *offset - len_complete + sg[i].offset;
+	*offset = *offset - len_complete + sg->offset;
 
 	/* Assumption: contiguous pages can be accessed as "page + i" */
-	page = nth_page(sg[i].page, (*offset >> PAGE_SHIFT));
+	page = nth_page(sg->page, (*offset >> PAGE_SHIFT));
 	*offset &= ~PAGE_MASK;
 
 	/* Bytes in this sg-entry from *offset to the end of the page */
diff --git a/include/scsi/scsi_cmnd.h b/include/scsi/scsi_cmnd.h
index 53e1705..a059f1b 100644
--- a/include/scsi/scsi_cmnd.h
+++ b/include/scsi/scsi_cmnd.h
@@ -153,6 +153,6 @@ static inline int scsi_get_resid(struct scsi_cmnd *cmd)
 }
 
 #define scsi_for_each_sg(cmd, sg, nseg, __i)			\
-	for (__i = 0, sg = scsi_sglist(cmd); __i < (nseg); __i++, (sg)++)
+	for_each_sg(scsi_sglist(cmd), sg, nseg, __i)
 
 #endif /* _SCSI_SCSI_CMND_H */
-- 
1.5.3.rc0.90.gbaa79



* [PATCH 05/33] Add chained sg support to linux/scatterlist.h
  2007-07-16  9:47 [PATCH 00/33] SG table chaining support Jens Axboe
                   ` (3 preceding siblings ...)
  2007-07-16  9:47 ` [PATCH 04/33] scsi: " Jens Axboe
@ 2007-07-16  9:47 ` Jens Axboe
  2007-07-16 19:21   ` Bartlomiej Zolnierkiewicz
  2007-07-16  9:47 ` [PATCH 06/33] i386 dma_map_sg: convert to using sg helpers Jens Axboe
                   ` (30 subsequent siblings)
  35 siblings, 1 reply; 73+ messages in thread
From: Jens Axboe @ 2007-07-16  9:47 UTC (permalink / raw)
  To: linux-kernel, linux-scsi; +Cc: Jens Axboe

This is the core of the series: allow the last sg element in a
scatterlist table to point to the start of a new table. We overload the
LSB of the page pointer to indicate whether this is a valid sg entry or
merely a link to the next list.
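
A quick usage sketch (illustration only, not part of the patch; 'first',
'second' and 'nents' are assumed to be set up elsewhere):

	/*
	 * 'first' and 'second' are two 128-entry scatterlist arrays.  The
	 * last slot of 'first' is turned into a link entry, so the chained
	 * table holds up to 127 + 128 data entries, which sg_next() and
	 * for_each_sg() walk transparently.
	 */
	struct scatterlist *sg;
	unsigned int total = 0;
	int i;

	sg_chain(first, 128, second);

	/* nents is the number of data entries filled in, link not counted */
	for_each_sg(first, sg, nents, i)
		total += sg->length;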

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
 include/linux/scatterlist.h |   79 +++++++++++++++++++++++++++++++++++++++++-
 1 files changed, 77 insertions(+), 2 deletions(-)

diff --git a/include/linux/scatterlist.h b/include/linux/scatterlist.h
index bed5ab4..10f6223 100644
--- a/include/linux/scatterlist.h
+++ b/include/linux/scatterlist.h
@@ -20,8 +20,36 @@ static inline void sg_init_one(struct scatterlist *sg, const void *buf,
 	sg_set_buf(sg, buf, buflen);
 }
 
-#define sg_next(sg)		((sg) + 1)
-#define sg_last(sg, nents)	(&(sg[(nents) - 1]))
+/*
+ * We overload the LSB of the page pointer to indicate whether it's
+ * a valid sg entry, or whether it points to the start of a new scatterlist.
+ * Those low bits are there for everyone! (thanks mason :-)
+ */
+#define sg_is_chain(sg)		((unsigned long) (sg)->page & 0x01)
+#define sg_chain_ptr(sg)	\
+	((struct scatterlist *) ((unsigned long) (sg)->page & ~0x01))
+
+/**
+ * sg_next - return the next scatterlist entry in a list
+ * @sg:		The current sg entry
+ *
+ * Usually the next entry will be @sg@ + 1, but if this sg element is part
+ * of a chained scatterlist, it could jump to the start of a new
+ * scatterlist array.
+ *
+ * Note that the caller must ensure that there are further entries after
+ * the current entry, this function will NOT return NULL for an end-of-list.
+ *
+ */
+static inline struct scatterlist *sg_next(struct scatterlist *sg)
+{
+	sg++;
+
+	if (unlikely(sg_is_chain(sg)))
+		sg = sg_chain_ptr(sg);
+
+	return sg;
+}
 
 /*
  * Loop over each sg element, following the pointer to a new list if necessary
@@ -29,4 +57,51 @@ static inline void sg_init_one(struct scatterlist *sg, const void *buf,
 #define for_each_sg(sglist, sg, nr, __i)	\
 	for (__i = 0, sg = (sglist); __i < (nr); __i++, sg = sg_next(sg))
 
+/**
+ * sg_last - return the last scatterlist entry in a list
+ * @sgl:	First entry in the scatterlist
+ * @nents:	Number of entries in the scatterlist
+ *
+ * Should only be used casually, it (currently) scans the entire list
+ * to get the last entry.
+ *
+ * Note that the @sgl@ pointer passed in need not be the first one,
+ * the important bit is that @nents@ denotes the number of entries that
+ * exist from @sgl@.
+ *
+ */
+static inline struct scatterlist *sg_last(struct scatterlist *sgl,
+					  unsigned int nents)
+{
+#ifdef ARCH_HAS_SG_CHAIN
+	struct scatterlist *ret = &sgl[nents - 1];
+#else
+	struct scatterlist *sg, *ret = NULL;
+	int i;
+
+	for_each_sg(sgl, sg, nents, i)
+		ret = sg;
+
+#endif
+	return ret;
+}
+
+/**
+ * sg_chain - Chain two sglists together
+ * @prv:	First scatterlist
+ * @prv_nents:	Number of entries in prv
+ * @sgl:	Second scatterlist
+ *
+ * Links @prv@ and @sgl@ together, to form a longer scatterlist.
+ *
+ */
+static inline void sg_chain(struct scatterlist *prv, unsigned int prv_nents,
+			    struct scatterlist *sgl)
+{
+#ifndef ARCH_HAS_SG_CHAIN
+	BUG();
+#endif
+	prv[prv_nents - 1].page = (struct page *) ((unsigned long) sgl | 0x01);
+}
+
 #endif /* _LINUX_SCATTERLIST_H */
-- 
1.5.3.rc0.90.gbaa79



* [PATCH 06/33] i386 dma_map_sg: convert to using sg helpers
  2007-07-16  9:47 [PATCH 00/33] SG table chaining support Jens Axboe
                   ` (4 preceding siblings ...)
  2007-07-16  9:47 ` [PATCH 05/33] Add chained sg support to linux/scatterlist.h Jens Axboe
@ 2007-07-16  9:47 ` Jens Axboe
  2007-07-16  9:47 ` [PATCH 07/33] i386: enable sg chaining Jens Axboe
                   ` (29 subsequent siblings)
  35 siblings, 0 replies; 73+ messages in thread
From: Jens Axboe @ 2007-07-16  9:47 UTC (permalink / raw)
  To: linux-kernel, linux-scsi; +Cc: Jens Axboe, ak

The dma mapping helpers need to be converted to using
sg helpers as well, so they will work with a chained
sglist setup.
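
To spell out why the open-coded loops have to go (illustration only):
with a chained table, sglist[i] stops being the i'th mapping once i
crosses a link entry, while sg_next() follows the link. So the mapper
ends up in the form

	struct scatterlist *sg;
	int i;

	for_each_sg(sglist, sg, nents, i)
		sg->dma_address = page_to_phys(sg->page) + sg->offset;

rather than walking sglist[i] directly, which is exactly what the hunk
below does.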

Cc: ak@suse.de
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
 include/asm-i386/dma-mapping.h |   13 +++++++------
 1 files changed, 7 insertions(+), 6 deletions(-)

diff --git a/include/asm-i386/dma-mapping.h b/include/asm-i386/dma-mapping.h
index f1d72d1..6a2d26c 100644
--- a/include/asm-i386/dma-mapping.h
+++ b/include/asm-i386/dma-mapping.h
@@ -2,10 +2,10 @@
 #define _ASM_I386_DMA_MAPPING_H
 
 #include <linux/mm.h>
+#include <linux/scatterlist.h>
 
 #include <asm/cache.h>
 #include <asm/io.h>
-#include <asm/scatterlist.h>
 #include <asm/bug.h>
 
 #define dma_alloc_noncoherent(d, s, h, f) dma_alloc_coherent(d, s, h, f)
@@ -35,18 +35,19 @@ dma_unmap_single(struct device *dev, dma_addr_t dma_addr, size_t size,
 }
 
 static inline int
-dma_map_sg(struct device *dev, struct scatterlist *sg, int nents,
+dma_map_sg(struct device *dev, struct scatterlist *sglist, int nents,
 	   enum dma_data_direction direction)
 {
+	struct scatterlist *sg;
 	int i;
 
 	BUG_ON(!valid_dma_direction(direction));
-	WARN_ON(nents == 0 || sg[0].length == 0);
+	WARN_ON(nents == 0 || sglist[0].length == 0);
 
-	for (i = 0; i < nents; i++ ) {
-		BUG_ON(!sg[i].page);
+	for_each_sg(sglist, sg, nents, i) {
+		BUG_ON(!sg->page);
 
-		sg[i].dma_address = page_to_phys(sg[i].page) + sg[i].offset;
+		sg->dma_address = page_to_phys(sg->page) + sg->offset;
 	}
 
 	flush_write_buffers();
-- 
1.5.3.rc0.90.gbaa79



* [PATCH 07/33] i386: enable sg chaining
  2007-07-16  9:47 [PATCH 00/33] SG table chaining support Jens Axboe
                   ` (5 preceding siblings ...)
  2007-07-16  9:47 ` [PATCH 06/33] i386 dma_map_sg: convert to using sg helpers Jens Axboe
@ 2007-07-16  9:47 ` Jens Axboe
  2007-07-16  9:47 ` [PATCH 08/33] swiotlb: sg chaining support Jens Axboe
                   ` (28 subsequent siblings)
  35 siblings, 0 replies; 73+ messages in thread
From: Jens Axboe @ 2007-07-16  9:47 UTC (permalink / raw)
  To: linux-kernel, linux-scsi; +Cc: Jens Axboe, ak

We don't need to do more on x86; there's no iommu to be worried about.

Cc: ak@suse.de
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
 include/asm-i386/scatterlist.h |    2 ++
 1 files changed, 2 insertions(+), 0 deletions(-)

diff --git a/include/asm-i386/scatterlist.h b/include/asm-i386/scatterlist.h
index d7e45a8..bd5164a 100644
--- a/include/asm-i386/scatterlist.h
+++ b/include/asm-i386/scatterlist.h
@@ -10,6 +10,8 @@ struct scatterlist {
     unsigned int	length;
 };
 
+#define ARCH_HAS_SG_CHAIN
+
 /* These macros should be used after a pci_map_sg call has been done
  * to get bus addresses of each of the SG entries and their lengths.
  * You should only work with the number of sg entries pci_map_sg
-- 
1.5.3.rc0.90.gbaa79



* [PATCH 08/33] swiotlb: sg chaining support
  2007-07-16  9:47 [PATCH 00/33] SG table chaining support Jens Axboe
                   ` (6 preceding siblings ...)
  2007-07-16  9:47 ` [PATCH 07/33] i386: enable sg chaining Jens Axboe
@ 2007-07-16  9:47 ` Jens Axboe
  2007-07-16  9:47 ` [PATCH 09/33] x86-64: update iommu/dma mapping functions to sg helpers Jens Axboe
                   ` (27 subsequent siblings)
  35 siblings, 0 replies; 73+ messages in thread
From: Jens Axboe @ 2007-07-16  9:47 UTC (permalink / raw)
  To: linux-kernel, linux-scsi; +Cc: Jens Axboe, ak

Cc: ak@suse.de
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
 lib/swiotlb.c |   19 ++++++++++++-------
 1 files changed, 12 insertions(+), 7 deletions(-)

diff --git a/lib/swiotlb.c b/lib/swiotlb.c
index 10c13ad..1f658e4 100644
--- a/lib/swiotlb.c
+++ b/lib/swiotlb.c
@@ -673,16 +673,17 @@ swiotlb_sync_single_range_for_device(struct device *hwdev, dma_addr_t dev_addr,
  * same here.
  */
 int
-swiotlb_map_sg(struct device *hwdev, struct scatterlist *sg, int nelems,
+swiotlb_map_sg(struct device *hwdev, struct scatterlist *sgl, int nelems,
 	       int dir)
 {
+	struct scatterlist *sg;
 	void *addr;
 	dma_addr_t dev_addr;
 	int i;
 
 	BUG_ON(dir == DMA_NONE);
 
-	for (i = 0; i < nelems; i++, sg++) {
+	for_each_sg(sgl, sg, nelems, i) {
 		addr = SG_ENT_VIRT_ADDRESS(sg);
 		dev_addr = virt_to_bus(addr);
 		if (swiotlb_force || address_needs_mapping(hwdev, dev_addr)) {
@@ -692,7 +693,7 @@ swiotlb_map_sg(struct device *hwdev, struct scatterlist *sg, int nelems,
 				   to do proper error handling. */
 				swiotlb_full(hwdev, sg->length, dir, 0);
 				swiotlb_unmap_sg(hwdev, sg - i, i, dir);
-				sg[0].dma_length = 0;
+				sgl[0].dma_length = 0;
 				return 0;
 			}
 			sg->dma_address = virt_to_bus(map);
@@ -708,19 +709,21 @@ swiotlb_map_sg(struct device *hwdev, struct scatterlist *sg, int nelems,
  * concerning calls here are the same as for swiotlb_unmap_single() above.
  */
 void
-swiotlb_unmap_sg(struct device *hwdev, struct scatterlist *sg, int nelems,
+swiotlb_unmap_sg(struct device *hwdev, struct scatterlist *sgl, int nelems,
 		 int dir)
 {
+	struct scatterlist *sg;
 	int i;
 
 	BUG_ON(dir == DMA_NONE);
 
-	for (i = 0; i < nelems; i++, sg++)
+	for_each_sg(sgl, sg, nelems, i) {
 		if (sg->dma_address != SG_ENT_PHYS_ADDRESS(sg))
 			unmap_single(hwdev, bus_to_virt(sg->dma_address),
 				     sg->dma_length, dir);
 		else if (dir == DMA_FROM_DEVICE)
 			dma_mark_clean(SG_ENT_VIRT_ADDRESS(sg), sg->dma_length);
+	}
 }
 
 /*
@@ -731,19 +734,21 @@ swiotlb_unmap_sg(struct device *hwdev, struct scatterlist *sg, int nelems,
  * and usage.
  */
 static void
-swiotlb_sync_sg(struct device *hwdev, struct scatterlist *sg,
+swiotlb_sync_sg(struct device *hwdev, struct scatterlist *sgl,
 		int nelems, int dir, int target)
 {
+	struct scatterlist *sg;
 	int i;
 
 	BUG_ON(dir == DMA_NONE);
 
-	for (i = 0; i < nelems; i++, sg++)
+	for_each_sg(sgl, sg, nelems, i) {
 		if (sg->dma_address != SG_ENT_PHYS_ADDRESS(sg))
 			sync_single(hwdev, bus_to_virt(sg->dma_address),
 				    sg->dma_length, dir, target);
 		else if (dir == DMA_FROM_DEVICE)
 			dma_mark_clean(SG_ENT_VIRT_ADDRESS(sg), sg->dma_length);
+	}
 }
 
 void
-- 
1.5.3.rc0.90.gbaa79



* [PATCH 09/33] x86-64: update iommu/dma mapping functions to sg helpers
  2007-07-16  9:47 [PATCH 00/33] SG table chaining support Jens Axboe
                   ` (7 preceding siblings ...)
  2007-07-16  9:47 ` [PATCH 08/33] swiotlb: sg chaining support Jens Axboe
@ 2007-07-16  9:47 ` Jens Axboe
  2007-07-16 20:06   ` Andrew Morton
  2007-07-17 11:03   ` Muli Ben-Yehuda
  2007-07-16  9:47 ` [PATCH 10/33] x86-64: enable sg chaining Jens Axboe
                   ` (26 subsequent siblings)
  35 siblings, 2 replies; 73+ messages in thread
From: Jens Axboe @ 2007-07-16  9:47 UTC (permalink / raw)
  To: linux-kernel, linux-scsi; +Cc: Jens Axboe, ak

This prepares x86-64 for sg chaining support.

Additional improvements/fixups for pci-gart from
Benny Halevy <bhalevy@panasas.com>

Cc: ak@suse.de
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
 arch/x86_64/kernel/pci-calgary.c |   25 ++++++++------
 arch/x86_64/kernel/pci-gart.c    |   63 ++++++++++++++++++++-----------------
 arch/x86_64/kernel/pci-nommu.c   |    5 ++-
 3 files changed, 51 insertions(+), 42 deletions(-)

diff --git a/arch/x86_64/kernel/pci-calgary.c b/arch/x86_64/kernel/pci-calgary.c
index 5bd20b5..c472b14 100644
--- a/arch/x86_64/kernel/pci-calgary.c
+++ b/arch/x86_64/kernel/pci-calgary.c
@@ -35,6 +35,7 @@
 #include <linux/pci_ids.h>
 #include <linux/pci.h>
 #include <linux/delay.h>
+#include <linux/scatterlist.h>
 #include <asm/proto.h>
 #include <asm/calgary.h>
 #include <asm/tce.h>
@@ -341,17 +342,19 @@ static void iommu_free(struct iommu_table *tbl, dma_addr_t dma_addr,
 static void __calgary_unmap_sg(struct iommu_table *tbl,
 	struct scatterlist *sglist, int nelems, int direction)
 {
-	while (nelems--) {
+	struct scatterlist *s;
+	int i;
+
+	for_each_sg(sglist, s, nelems, i) {
 		unsigned int npages;
-		dma_addr_t dma = sglist->dma_address;
-		unsigned int dmalen = sglist->dma_length;
+		dma_addr_t dma = s->dma_address;
+		unsigned int dmalen = s->dma_length;
 
 		if (dmalen == 0)
 			break;
 
 		npages = num_dma_pages(dma, dmalen);
 		__iommu_free(tbl, dma, npages);
-		sglist++;
 	}
 }
 
@@ -374,10 +377,10 @@ void calgary_unmap_sg(struct device *dev, struct scatterlist *sglist,
 static int calgary_nontranslate_map_sg(struct device* dev,
 	struct scatterlist *sg, int nelems, int direction)
 {
+	struct scatterlist *s;
 	int i;
 
- 	for (i = 0; i < nelems; i++ ) {
-		struct scatterlist *s = &sg[i];
+	for_each_sg(sg, s, nelems, i) {
 		BUG_ON(!s->page);
 		s->dma_address = virt_to_bus(page_address(s->page) +s->offset);
 		s->dma_length = s->length;
@@ -389,6 +392,7 @@ int calgary_map_sg(struct device *dev, struct scatterlist *sg,
 	int nelems, int direction)
 {
 	struct iommu_table *tbl = to_pci_dev(dev)->bus->self->sysdata;
+	struct scatterlist *s;
 	unsigned long flags;
 	unsigned long vaddr;
 	unsigned int npages;
@@ -400,8 +404,7 @@ int calgary_map_sg(struct device *dev, struct scatterlist *sg,
 
 	spin_lock_irqsave(&tbl->it_lock, flags);
 
-	for (i = 0; i < nelems; i++ ) {
-		struct scatterlist *s = &sg[i];
+	for_each_sg(sg, s, nelems, i) {
 		BUG_ON(!s->page);
 
 		vaddr = (unsigned long)page_address(s->page) + s->offset;
@@ -428,9 +431,9 @@ int calgary_map_sg(struct device *dev, struct scatterlist *sg,
 	return nelems;
 error:
 	__calgary_unmap_sg(tbl, sg, nelems, direction);
-	for (i = 0; i < nelems; i++) {
-		sg[i].dma_address = bad_dma_address;
-		sg[i].dma_length = 0;
+	for_each_sg(sg, s, nelems, i) {
+		s->dma_address = bad_dma_address;
+		s->dma_length = 0;
 	}
 	spin_unlock_irqrestore(&tbl->it_lock, flags);
 	return 0;
diff --git a/arch/x86_64/kernel/pci-gart.c b/arch/x86_64/kernel/pci-gart.c
index ae091cd..b16384f 100644
--- a/arch/x86_64/kernel/pci-gart.c
+++ b/arch/x86_64/kernel/pci-gart.c
@@ -23,6 +23,7 @@
 #include <linux/interrupt.h>
 #include <linux/bitops.h>
 #include <linux/kdebug.h>
+#include <linux/scatterlist.h>
 #include <asm/atomic.h>
 #include <asm/io.h>
 #include <asm/mtrr.h>
@@ -277,10 +278,10 @@ void gart_unmap_single(struct device *dev, dma_addr_t dma_addr,
  */
 void gart_unmap_sg(struct device *dev, struct scatterlist *sg, int nents, int dir)
 {
+	struct scatterlist *s;
 	int i;
 
-	for (i = 0; i < nents; i++) {
-		struct scatterlist *s = &sg[i];
+	for_each_sg(sg, s, nents, i) {
 		if (!s->dma_length || !s->length)
 			break;
 		gart_unmap_single(dev, s->dma_address, s->dma_length, dir);
@@ -291,14 +292,14 @@ void gart_unmap_sg(struct device *dev, struct scatterlist *sg, int nents, int di
 static int dma_map_sg_nonforce(struct device *dev, struct scatterlist *sg,
 			       int nents, int dir)
 {
+	struct scatterlist *s;
 	int i;
 
 #ifdef CONFIG_IOMMU_DEBUG
 	printk(KERN_DEBUG "dma_map_sg overflow\n");
 #endif
 
- 	for (i = 0; i < nents; i++ ) {
-		struct scatterlist *s = &sg[i];
+	for_each_sg(sg, s, nents, i) {
 		unsigned long addr = page_to_phys(s->page) + s->offset; 
 		if (nonforced_iommu(dev, addr, s->length)) { 
 			addr = dma_map_area(dev, addr, s->length, dir);
@@ -318,23 +319,23 @@ static int dma_map_sg_nonforce(struct device *dev, struct scatterlist *sg,
 }
 
 /* Map multiple scatterlist entries continuous into the first. */
-static int __dma_map_cont(struct scatterlist *sg, int start, int stopat,
+static int __dma_map_cont(struct scatterlist *start, int nelems,
 		      struct scatterlist *sout, unsigned long pages)
 {
 	unsigned long iommu_start = alloc_iommu(pages);
 	unsigned long iommu_page = iommu_start; 
+	struct scatterlist *s;
 	int i;
 
 	if (iommu_start == -1)
 		return -1;
-	
-	for (i = start; i < stopat; i++) {
-		struct scatterlist *s = &sg[i];
+
+	for_each_sg(start, s, nelems, i) {
 		unsigned long pages, addr;
 		unsigned long phys_addr = s->dma_address;
 		
-		BUG_ON(i > start && s->offset);
-		if (i == start) {
+		BUG_ON(s != start && s->offset);
+		if (s == start) {
 			*sout = *s; 
 			sout->dma_address = iommu_bus_base;
 			sout->dma_address += iommu_page*PAGE_SIZE + s->offset;
@@ -356,17 +357,17 @@ static int __dma_map_cont(struct scatterlist *sg, int start, int stopat,
 	return 0;
 }
 
-static inline int dma_map_cont(struct scatterlist *sg, int start, int stopat,
+static inline int dma_map_cont(struct scatterlist *start, int nelems,
 		      struct scatterlist *sout,
 		      unsigned long pages, int need)
 {
-	if (!need) { 
-		BUG_ON(stopat - start != 1);
-		*sout = sg[start]; 
-		sout->dma_length = sg[start].length; 
+	if (!need) {
+		BUG_ON(nelems != 1);
+		*sout = *start;
+		sout->dma_length = start->length;
 		return 0;
-	} 
-	return __dma_map_cont(sg, start, stopat, sout, pages);
+	}
+	return __dma_map_cont(start, nelems, sout, pages);
 }
 		
 /*
@@ -380,6 +381,7 @@ int gart_map_sg(struct device *dev, struct scatterlist *sg, int nents, int dir)
 	int start;
 	unsigned long pages = 0;
 	int need = 0, nextneed;
+	struct scatterlist *s, *ps, *start_sg, *sgmap;
 
 	if (nents == 0) 
 		return 0;
@@ -389,8 +391,9 @@ int gart_map_sg(struct device *dev, struct scatterlist *sg, int nents, int dir)
 
 	out = 0;
 	start = 0;
-	for (i = 0; i < nents; i++) {
-		struct scatterlist *s = &sg[i];
+	start_sg = sgmap = sg;
+	ps = NULL; /* shut up gcc */
+	for_each_sg(sg, s, nents, i) {
 		dma_addr_t addr = page_to_phys(s->page) + s->offset;
 		s->dma_address = addr;
 		BUG_ON(s->length == 0); 
@@ -399,29 +402,31 @@ int gart_map_sg(struct device *dev, struct scatterlist *sg, int nents, int dir)
 
 		/* Handle the previous not yet processed entries */
 		if (i > start) {
-			struct scatterlist *ps = &sg[i-1];
 			/* Can only merge when the last chunk ends on a page 
 			   boundary and the new one doesn't have an offset. */
 			if (!iommu_merge || !nextneed || !need || s->offset ||
-			    (ps->offset + ps->length) % PAGE_SIZE) { 
-				if (dma_map_cont(sg, start, i, sg+out, pages,
-						 need) < 0)
+			    (ps->offset + ps->length) % PAGE_SIZE) {
+				if (dma_map_cont(start_sg, i - start, sgmap,
+						  pages, need) < 0)
 					goto error;
 				out++;
+				sgmap = sg_next(sgmap);
 				pages = 0;
-				start = i;	
+				start = i;
+				start_sg = s;
 			}
 		}
 
 		need = nextneed;
 		pages += to_pages(s->offset, s->length);
+		ps = s;
 	}
-	if (dma_map_cont(sg, start, i, sg+out, pages, need) < 0)
+	if (dma_map_cont(start_sg, i - start, sgmap, pages, need) < 0)
 		goto error;
 	out++;
 	flush_gart();
-	if (out < nents) 
-		sg[out].dma_length = 0; 
+	if (out < nents)
+		ps->dma_length = 0;
 	return out;
 
 error:
@@ -436,8 +441,8 @@ error:
 	if (panic_on_overflow)
 		panic("dma_map_sg: overflow on %lu pages\n", pages);
 	iommu_full(dev, pages << PAGE_SHIFT, dir);
-	for (i = 0; i < nents; i++)
-		sg[i].dma_address = bad_dma_address;
+	for_each_sg(sg, s, nents, i)
+		s->dma_address = bad_dma_address;
 	return 0;
 } 
 
diff --git a/arch/x86_64/kernel/pci-nommu.c b/arch/x86_64/kernel/pci-nommu.c
index 6dade0c..24c9faf 100644
--- a/arch/x86_64/kernel/pci-nommu.c
+++ b/arch/x86_64/kernel/pci-nommu.c
@@ -5,6 +5,7 @@
 #include <linux/pci.h>
 #include <linux/string.h>
 #include <linux/dma-mapping.h>
+#include <linux/scatterlist.h>
 
 #include <asm/proto.h>
 #include <asm/processor.h>
@@ -57,10 +58,10 @@ void nommu_unmap_single(struct device *dev, dma_addr_t addr,size_t size,
 int nommu_map_sg(struct device *hwdev, struct scatterlist *sg,
 	       int nents, int direction)
 {
+	struct scatterlist *s;
 	int i;
 
- 	for (i = 0; i < nents; i++ ) {
-		struct scatterlist *s = &sg[i];
+	for_each_sg(sg, s, nents, i) {
 		BUG_ON(!s->page);
 		s->dma_address = virt_to_bus(page_address(s->page) +s->offset);
 		if (!check_addr("map_sg", hwdev, s->dma_address, s->length))
-- 
1.5.3.rc0.90.gbaa79



* [PATCH 10/33] x86-64: enable sg chaining
  2007-07-16  9:47 [PATCH 00/33] SG table chaining support Jens Axboe
                   ` (8 preceding siblings ...)
  2007-07-16  9:47 ` [PATCH 09/33] x86-64: update iommu/dma mapping functions to sg helpers Jens Axboe
@ 2007-07-16  9:47 ` Jens Axboe
  2007-07-16  9:47 ` [PATCH 11/33] IA64: sg chaining support Jens Axboe
                   ` (25 subsequent siblings)
  35 siblings, 0 replies; 73+ messages in thread
From: Jens Axboe @ 2007-07-16  9:47 UTC (permalink / raw)
  To: linux-kernel, linux-scsi; +Cc: Jens Axboe, ak

Cc: ak@suse.de
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
 include/asm-x86_64/dma-mapping.h |    3 +--
 include/asm-x86_64/scatterlist.h |    2 ++
 2 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/include/asm-x86_64/dma-mapping.h b/include/asm-x86_64/dma-mapping.h
index 6897e2a..ecd0f61 100644
--- a/include/asm-x86_64/dma-mapping.h
+++ b/include/asm-x86_64/dma-mapping.h
@@ -6,8 +6,7 @@
  * documentation.
  */
 
-
-#include <asm/scatterlist.h>
+#include <linux/scatterlist.h>
 #include <asm/io.h>
 #include <asm/swiotlb.h>
 
diff --git a/include/asm-x86_64/scatterlist.h b/include/asm-x86_64/scatterlist.h
index eaf7ada..ef3986b 100644
--- a/include/asm-x86_64/scatterlist.h
+++ b/include/asm-x86_64/scatterlist.h
@@ -11,6 +11,8 @@ struct scatterlist {
     unsigned int        dma_length;
 };
 
+#define ARCH_HAS_SG_CHAIN
+
 #define ISA_DMA_THRESHOLD (0x00ffffff)
 
 /* These macros should be used after a pci_map_sg call has been done
-- 
1.5.3.rc0.90.gbaa79



* [PATCH 11/33] IA64: sg chaining support
  2007-07-16  9:47 [PATCH 00/33] SG table chaining support Jens Axboe
                   ` (9 preceding siblings ...)
  2007-07-16  9:47 ` [PATCH 10/33] x86-64: enable sg chaining Jens Axboe
@ 2007-07-16  9:47 ` Jens Axboe
  2007-07-16  9:47 ` [PATCH 12/33] PPC: " Jens Axboe
                   ` (24 subsequent siblings)
  35 siblings, 0 replies; 73+ messages in thread
From: Jens Axboe @ 2007-07-16  9:47 UTC (permalink / raw)
  To: linux-kernel, linux-scsi; +Cc: Jens Axboe, tony.luck

This updates the ia64 iommu/pci dma mappers to sg chaining.

Cc: tony.luck@intel.com
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
 arch/ia64/hp/common/sba_iommu.c |   14 +++++++-------
 arch/ia64/sn/pci/pci_dma.c      |   11 ++++++-----
 include/asm-ia64/dma-mapping.h  |    1 +
 include/asm-ia64/scatterlist.h  |    2 ++
 4 files changed, 16 insertions(+), 12 deletions(-)

diff --git a/arch/ia64/hp/common/sba_iommu.c b/arch/ia64/hp/common/sba_iommu.c
index c1dca22..f6d15cf 100644
--- a/arch/ia64/hp/common/sba_iommu.c
+++ b/arch/ia64/hp/common/sba_iommu.c
@@ -393,7 +393,7 @@ sba_dump_sg( struct ioc *ioc, struct scatterlist *startsg, int nents)
 		printk(KERN_DEBUG " %d : DMA %08lx/%05x CPU %p\n", nents,
 		       startsg->dma_address, startsg->dma_length,
 		       sba_sg_address(startsg));
-		startsg++;
+		startsg = sg_next(startsg);
 	}
 }
 
@@ -406,7 +406,7 @@ sba_check_sg( struct ioc *ioc, struct scatterlist *startsg, int nents)
 	while (the_nents-- > 0) {
 		if (sba_sg_address(the_sg) == 0x0UL)
 			sba_dump_sg(NULL, startsg, nents);
-		the_sg++;
+		the_sg = sg_next(the_sg);
 	}
 }
 
@@ -1198,7 +1198,7 @@ sba_fill_pdir(
 			u32 pide = startsg->dma_address & ~PIDE_FLAG;
 			dma_offset = (unsigned long) pide & ~iovp_mask;
 			startsg->dma_address = 0;
-			dma_sg++;
+			dma_sg = sg_next(dma_sg);
 			dma_sg->dma_address = pide | ioc->ibase;
 			pdirp = &(ioc->pdir_base[pide >> iovp_shift]);
 			n_mappings++;
@@ -1225,7 +1225,7 @@ sba_fill_pdir(
 				pdirp++;
 			} while (cnt > 0);
 		}
-		startsg++;
+		startsg = sg_next(startsg);
 	}
 	/* force pdir update */
 	wmb();
@@ -1294,7 +1294,7 @@ sba_coalesce_chunks( struct ioc *ioc,
 		while (--nents > 0) {
 			unsigned long vaddr;	/* tmp */
 
-			startsg++;
+			startsg = sg_next(startsg);
 
 			/* PARANOID */
 			startsg->dma_address = startsg->dma_length = 0;
@@ -1404,7 +1404,7 @@ int sba_map_sg(struct device *dev, struct scatterlist *sglist, int nents, int di
 #ifdef ALLOW_IOV_BYPASS_SG
 	ASSERT(to_pci_dev(dev)->dma_mask);
 	if (likely((ioc->dma_mask & ~to_pci_dev(dev)->dma_mask) == 0)) {
-		for (sg = sglist ; filled < nents ; filled++, sg++){
+		for_each_sg(sglist, sg, nents, filled) {
 			sg->dma_length = sg->length;
 			sg->dma_address = virt_to_phys(sba_sg_address(sg));
 		}
@@ -1498,7 +1498,7 @@ void sba_unmap_sg (struct device *dev, struct scatterlist *sglist, int nents, in
 	while (nents && sglist->dma_length) {
 
 		sba_unmap_single(dev, sglist->dma_address, sglist->dma_length, dir);
-		sglist++;
+		sglist = sg_next(sglist);
 		nents--;
 	}
 
diff --git a/arch/ia64/sn/pci/pci_dma.c b/arch/ia64/sn/pci/pci_dma.c
index d79ddac..ecd8a52 100644
--- a/arch/ia64/sn/pci/pci_dma.c
+++ b/arch/ia64/sn/pci/pci_dma.c
@@ -218,16 +218,17 @@ EXPORT_SYMBOL(sn_dma_unmap_single);
  *
  * Unmap a set of streaming mode DMA translations.
  */
-void sn_dma_unmap_sg(struct device *dev, struct scatterlist *sg,
+void sn_dma_unmap_sg(struct device *dev, struct scatterlist *sgl,
 		     int nhwentries, int direction)
 {
 	int i;
 	struct pci_dev *pdev = to_pci_dev(dev);
 	struct sn_pcibus_provider *provider = SN_PCIDEV_BUSPROVIDER(pdev);
+	struct scatterlist *sg;
 
 	BUG_ON(dev->bus != &pci_bus_type);
 
-	for (i = 0; i < nhwentries; i++, sg++) {
+	for_each_sg(sgl, sg, nhwentries, i) {
 		provider->dma_unmap(pdev, sg->dma_address, direction);
 		sg->dma_address = (dma_addr_t) NULL;
 		sg->dma_length = 0;
@@ -244,11 +245,11 @@ EXPORT_SYMBOL(sn_dma_unmap_sg);
  *
  * Maps each entry of @sg for DMA.
  */
-int sn_dma_map_sg(struct device *dev, struct scatterlist *sg, int nhwentries,
+int sn_dma_map_sg(struct device *dev, struct scatterlist *sgl, int nhwentries,
 		  int direction)
 {
 	unsigned long phys_addr;
-	struct scatterlist *saved_sg = sg;
+	struct scatterlist *saved_sg = sgl, *sg;
 	struct pci_dev *pdev = to_pci_dev(dev);
 	struct sn_pcibus_provider *provider = SN_PCIDEV_BUSPROVIDER(pdev);
 	int i;
@@ -258,7 +259,7 @@ int sn_dma_map_sg(struct device *dev, struct scatterlist *sg, int nhwentries,
 	/*
 	 * Setup a DMA address for each entry in the scatterlist.
 	 */
-	for (i = 0; i < nhwentries; i++, sg++) {
+	for_each_sg(sgl, sg, nhwentries, i) {
 		phys_addr = SG_ENT_PHYS_ADDRESS(sg);
 		sg->dma_address = provider->dma_map(pdev,
 						    phys_addr, sg->length,
diff --git a/include/asm-ia64/dma-mapping.h b/include/asm-ia64/dma-mapping.h
index 6299b51..aeba8da 100644
--- a/include/asm-ia64/dma-mapping.h
+++ b/include/asm-ia64/dma-mapping.h
@@ -5,6 +5,7 @@
  * Copyright (C) 2003-2004 Hewlett-Packard Co
  *	David Mosberger-Tang <davidm@hpl.hp.com>
  */
+#include <linux/scatterlist.h>
 #include <asm/machvec.h>
 
 #define dma_alloc_coherent	platform_dma_alloc_coherent
diff --git a/include/asm-ia64/scatterlist.h b/include/asm-ia64/scatterlist.h
index a452ea2..7d5234d 100644
--- a/include/asm-ia64/scatterlist.h
+++ b/include/asm-ia64/scatterlist.h
@@ -30,4 +30,6 @@ struct scatterlist {
 #define sg_dma_len(sg)		((sg)->dma_length)
 #define sg_dma_address(sg)	((sg)->dma_address)
 
+#define	ARCH_HAS_SG_CHAIN
+
 #endif /* _ASM_IA64_SCATTERLIST_H */
-- 
1.5.3.rc0.90.gbaa79



* [PATCH 12/33] PPC: sg chaining support
  2007-07-16  9:47 [PATCH 00/33] SG table chaining support Jens Axboe
                   ` (10 preceding siblings ...)
  2007-07-16  9:47 ` [PATCH 11/33] IA64: sg chaining support Jens Axboe
@ 2007-07-16  9:47 ` Jens Axboe
  2007-07-16  9:47 ` [PATCH 13/33] SPARC: " Jens Axboe
                   ` (23 subsequent siblings)
  35 siblings, 0 replies; 73+ messages in thread
From: Jens Axboe @ 2007-07-16  9:47 UTC (permalink / raw)
  To: linux-kernel, linux-scsi; +Cc: Jens Axboe, paulus

This updates the ppc iommu/pci dma mappers to sg chaining.

Cc: paulus@samba.org
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
 arch/powerpc/kernel/dma_64.c            |    5 +++--
 arch/powerpc/kernel/ibmebus.c           |   11 ++++++-----
 arch/powerpc/kernel/iommu.c             |   18 +++++++++++-------
 arch/powerpc/platforms/ps3/system-bus.c |    5 +++--
 include/asm-powerpc/dma-mapping.h       |    2 +-
 include/asm-powerpc/scatterlist.h       |    2 ++
 6 files changed, 26 insertions(+), 17 deletions(-)

diff --git a/arch/powerpc/kernel/dma_64.c b/arch/powerpc/kernel/dma_64.c
index 7b0e754..9001104 100644
--- a/arch/powerpc/kernel/dma_64.c
+++ b/arch/powerpc/kernel/dma_64.c
@@ -154,12 +154,13 @@ static void dma_direct_unmap_single(struct device *dev, dma_addr_t dma_addr,
 {
 }
 
-static int dma_direct_map_sg(struct device *dev, struct scatterlist *sg,
+static int dma_direct_map_sg(struct device *dev, struct scatterlist *sgl,
 			     int nents, enum dma_data_direction direction)
 {
+	struct scatterlist *sg;
 	int i;
 
-	for (i = 0; i < nents; i++, sg++) {
+	for_each_sg(sgl, sg, nents, i) {
 		sg->dma_address = (page_to_phys(sg->page) + sg->offset) |
 			dma_direct_offset;
 		sg->dma_length = sg->length;
diff --git a/arch/powerpc/kernel/ibmebus.c b/arch/powerpc/kernel/ibmebus.c
index 9a8c9af..3131c61 100644
--- a/arch/powerpc/kernel/ibmebus.c
+++ b/arch/powerpc/kernel/ibmebus.c
@@ -87,15 +87,16 @@ static void ibmebus_unmap_single(struct device *dev,
 }
 
 static int ibmebus_map_sg(struct device *dev,
-			  struct scatterlist *sg,
+			  struct scatterlist *sgl,
 			  int nents, enum dma_data_direction direction)
 {
+	struct scatterlist *sg;
 	int i;
 
-	for (i = 0; i < nents; i++) {
-		sg[i].dma_address = (dma_addr_t)page_address(sg[i].page)
-			+ sg[i].offset;
-		sg[i].dma_length = sg[i].length;
+	for_each_sg(sgl, sg, nents, i) {
+		sg->dma_address = (dma_addr_t)page_address(sg->page)
+			+ sg->offset;
+		sg->dma_length = sg->length;
 	}
 
 	return nents;
diff --git a/arch/powerpc/kernel/iommu.c b/arch/powerpc/kernel/iommu.c
index c08ceca..a146856 100644
--- a/arch/powerpc/kernel/iommu.c
+++ b/arch/powerpc/kernel/iommu.c
@@ -278,7 +278,7 @@ int iommu_map_sg(struct iommu_table *tbl, struct scatterlist *sglist,
 	dma_addr_t dma_next = 0, dma_addr;
 	unsigned long flags;
 	struct scatterlist *s, *outs, *segstart;
-	int outcount, incount;
+	int outcount, incount, i;
 	unsigned long handle;
 
 	BUG_ON(direction == DMA_NONE);
@@ -298,7 +298,7 @@ int iommu_map_sg(struct iommu_table *tbl, struct scatterlist *sglist,
 
 	spin_lock_irqsave(&(tbl->it_lock), flags);
 
-	for (s = outs; nelems; nelems--, s++) {
+	for_each_sg(sglist, s, nelems, i) {
 		unsigned long vaddr, npages, entry, slen;
 
 		slen = s->length;
@@ -386,7 +386,7 @@ int iommu_map_sg(struct iommu_table *tbl, struct scatterlist *sglist,
 	return outcount;
 
  failure:
-	for (s = &sglist[0]; s <= outs; s++) {
+	for_each_sg(sglist, s, nelems, i) {
 		if (s->dma_length != 0) {
 			unsigned long vaddr, npages;
 
@@ -396,6 +396,8 @@ int iommu_map_sg(struct iommu_table *tbl, struct scatterlist *sglist,
 			s->dma_address = DMA_ERROR_CODE;
 			s->dma_length = 0;
 		}
+		if (s == outs)
+			break;
 	}
 	spin_unlock_irqrestore(&(tbl->it_lock), flags);
 	return 0;
@@ -405,6 +407,7 @@ int iommu_map_sg(struct iommu_table *tbl, struct scatterlist *sglist,
 void iommu_unmap_sg(struct iommu_table *tbl, struct scatterlist *sglist,
 		int nelems, enum dma_data_direction direction)
 {
+	struct scatterlist *sg;
 	unsigned long flags;
 
 	BUG_ON(direction == DMA_NONE);
@@ -414,15 +417,16 @@ void iommu_unmap_sg(struct iommu_table *tbl, struct scatterlist *sglist,
 
 	spin_lock_irqsave(&(tbl->it_lock), flags);
 
+	sg = sglist;
 	while (nelems--) {
 		unsigned int npages;
-		dma_addr_t dma_handle = sglist->dma_address;
+		dma_addr_t dma_handle = sg->dma_address;
 
-		if (sglist->dma_length == 0)
+		if (sg->dma_length == 0)
 			break;
-		npages = iommu_num_pages(dma_handle,sglist->dma_length);
+		npages = iommu_num_pages(dma_handle, sg->dma_length);
 		__iommu_free(tbl, dma_handle, npages);
-		sglist++;
+		sg = sg_next(sg);
 	}
 
 	/* Flush/invalidate TLBs if necessary. As for iommu_free(), we
diff --git a/arch/powerpc/platforms/ps3/system-bus.c b/arch/powerpc/platforms/ps3/system-bus.c
index 6bda510..22aedb6 100644
--- a/arch/powerpc/platforms/ps3/system-bus.c
+++ b/arch/powerpc/platforms/ps3/system-bus.c
@@ -271,7 +271,7 @@ static void ps3_unmap_single(struct device *_dev, dma_addr_t dma_addr,
 	}
 }
 
-static int ps3_map_sg(struct device *_dev, struct scatterlist *sg, int nents,
+static int ps3_map_sg(struct device *_dev, struct scatterlist *sgl, int nents,
 	enum dma_data_direction direction)
 {
 #if defined(CONFIG_PS3_DYNAMIC_DMA)
@@ -279,9 +279,10 @@ static int ps3_map_sg(struct device *_dev, struct scatterlist *sg, int nents,
 	return -EPERM;
 #else
 	struct ps3_system_bus_device *dev = to_ps3_system_bus_device(_dev);
+	struct scatterlist *sg;
 	int i;
 
-	for (i = 0; i < nents; i++, sg++) {
+	for_each_sg(sgl, sg, nents, i) {
 		int result = ps3_dma_map(dev->d_region,
 			page_to_phys(sg->page) + sg->offset, sg->length,
 			&sg->dma_address);
diff --git a/include/asm-powerpc/dma-mapping.h b/include/asm-powerpc/dma-mapping.h
index f6bd804..b712774 100644
--- a/include/asm-powerpc/dma-mapping.h
+++ b/include/asm-powerpc/dma-mapping.h
@@ -12,7 +12,7 @@
 #include <linux/cache.h>
 /* need struct page definitions */
 #include <linux/mm.h>
-#include <asm/scatterlist.h>
+#include <linux/scatterlist.h>
 #include <asm/io.h>
 
 #define DMA_ERROR_CODE		(~(dma_addr_t)0x0)
diff --git a/include/asm-powerpc/scatterlist.h b/include/asm-powerpc/scatterlist.h
index 8c992d1..b075f61 100644
--- a/include/asm-powerpc/scatterlist.h
+++ b/include/asm-powerpc/scatterlist.h
@@ -41,5 +41,7 @@ struct scatterlist {
 #define ISA_DMA_THRESHOLD	(~0UL)
 #endif
 
+#define ARCH_HAS_SG_CHAIN
+
 #endif /* __KERNEL__ */
 #endif /* _ASM_POWERPC_SCATTERLIST_H */
-- 
1.5.3.rc0.90.gbaa79



* [PATCH 13/33] SPARC: sg chaining support
  2007-07-16  9:47 [PATCH 00/33] SG table chaining support Jens Axboe
                   ` (11 preceding siblings ...)
  2007-07-16  9:47 ` [PATCH 12/33] PPC: " Jens Axboe
@ 2007-07-16  9:47 ` Jens Axboe
  2007-07-16 11:29   ` David Miller
  2007-07-16  9:47 ` [PATCH 14/33] SPARC64: " Jens Axboe
                   ` (22 subsequent siblings)
  35 siblings, 1 reply; 73+ messages in thread
From: Jens Axboe @ 2007-07-16  9:47 UTC (permalink / raw)
  To: linux-kernel, linux-scsi; +Cc: Jens Axboe, davem

This updates the sparc iommu/pci dma mappers to sg chaining.

Cc: davem@davemloft.net
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
 arch/sparc/kernel/ioport.c      |   25 +++++++++++++------------
 arch/sparc/mm/io-unit.c         |   12 +++++++-----
 arch/sparc/mm/iommu.c           |   10 +++++-----
 arch/sparc/mm/sun4c.c           |   10 ++++++----
 include/asm-sparc/scatterlist.h |    2 ++
 5 files changed, 33 insertions(+), 26 deletions(-)

diff --git a/arch/sparc/kernel/ioport.c b/arch/sparc/kernel/ioport.c
index 62182d2..9c3ed88 100644
--- a/arch/sparc/kernel/ioport.c
+++ b/arch/sparc/kernel/ioport.c
@@ -35,6 +35,7 @@
 #include <linux/slab.h>
 #include <linux/pci.h>		/* struct pci_dev */
 #include <linux/proc_fs.h>
+#include <linux/scatterlist.h>
 
 #include <asm/io.h>
 #include <asm/vaddrs.h>
@@ -717,19 +718,19 @@ void pci_unmap_page(struct pci_dev *hwdev,
  * Device ownership issues as mentioned above for pci_map_single are
  * the same here.
  */
-int pci_map_sg(struct pci_dev *hwdev, struct scatterlist *sg, int nents,
+int pci_map_sg(struct pci_dev *hwdev, struct scatterlist *sgl, int nents,
     int direction)
 {
+	struct scatterlist *sg;
 	int n;
 
 	BUG_ON(direction == PCI_DMA_NONE);
 	/* IIep is write-through, not flushing. */
-	for (n = 0; n < nents; n++) {
+	for_each_sg(sgl, sg, nents, n) {
 		BUG_ON(page_address(sg->page) == NULL);
 		sg->dvma_address =
 			virt_to_phys(page_address(sg->page)) + sg->offset;
 		sg->dvma_length = sg->length;
-		sg++;
 	}
 	return nents;
 }
@@ -738,19 +739,19 @@ int pci_map_sg(struct pci_dev *hwdev, struct scatterlist *sg, int nents,
  * Again, cpu read rules concerning calls here are the same as for
  * pci_unmap_single() above.
  */
-void pci_unmap_sg(struct pci_dev *hwdev, struct scatterlist *sg, int nents,
+void pci_unmap_sg(struct pci_dev *hwdev, struct scatterlist *sgl, int nents,
     int direction)
 {
+	struct scatterlist *sg;
 	int n;
 
 	BUG_ON(direction == PCI_DMA_NONE);
 	if (direction != PCI_DMA_TODEVICE) {
-		for (n = 0; n < nents; n++) {
+		for_each_sg(sgl, sg, nents, n) {
 			BUG_ON(page_address(sg->page) == NULL);
 			mmu_inval_dma_area(
 			    (unsigned long) page_address(sg->page),
 			    (sg->length + PAGE_SIZE-1) & PAGE_MASK);
-			sg++;
 		}
 	}
 }
@@ -789,34 +790,34 @@ void pci_dma_sync_single_for_device(struct pci_dev *hwdev, dma_addr_t ba, size_t
  * The same as pci_dma_sync_single_* but for a scatter-gather list,
  * same rules and usage.
  */
-void pci_dma_sync_sg_for_cpu(struct pci_dev *hwdev, struct scatterlist *sg, int nents, int direction)
+void pci_dma_sync_sg_for_cpu(struct pci_dev *hwdev, struct scatterlist *sgl, int nents, int direction)
 {
+	struct scatterlist *sg;
 	int n;
 
 	BUG_ON(direction == PCI_DMA_NONE);
 	if (direction != PCI_DMA_TODEVICE) {
-		for (n = 0; n < nents; n++) {
+		for_each_sg(sgl, sg, nents, n) {
 			BUG_ON(page_address(sg->page) == NULL);
 			mmu_inval_dma_area(
 			    (unsigned long) page_address(sg->page),
 			    (sg->length + PAGE_SIZE-1) & PAGE_MASK);
-			sg++;
 		}
 	}
 }
 
-void pci_dma_sync_sg_for_device(struct pci_dev *hwdev, struct scatterlist *sg, int nents, int direction)
+void pci_dma_sync_sg_for_device(struct pci_dev *hwdev, struct scatterlist *sgl, int nents, int direction)
 {
+	struct scatterlist *sg;
 	int n;
 
 	BUG_ON(direction == PCI_DMA_NONE);
 	if (direction != PCI_DMA_TODEVICE) {
-		for (n = 0; n < nents; n++) {
+		for_each_sg(sgl, sg, nents, n) {
 			BUG_ON(page_address(sg->page) == NULL);
 			mmu_inval_dma_area(
 			    (unsigned long) page_address(sg->page),
 			    (sg->length + PAGE_SIZE-1) & PAGE_MASK);
-			sg++;
 		}
 	}
 }
diff --git a/arch/sparc/mm/io-unit.c b/arch/sparc/mm/io-unit.c
index 4ccda77..71442fe 100644
--- a/arch/sparc/mm/io-unit.c
+++ b/arch/sparc/mm/io-unit.c
@@ -11,8 +11,8 @@
 #include <linux/mm.h>
 #include <linux/highmem.h>	/* pte_offset_map => kmap_atomic */
 #include <linux/bitops.h>
+#include <linux/scatterlist.h>
 
-#include <asm/scatterlist.h>
 #include <asm/pgalloc.h>
 #include <asm/pgtable.h>
 #include <asm/sbus.h>
@@ -144,8 +144,9 @@ static void iounit_get_scsi_sgl(struct scatterlist *sg, int sz, struct sbus_bus
 	spin_lock_irqsave(&iounit->lock, flags);
 	while (sz != 0) {
 		--sz;
-		sg[sz].dvma_address = iounit_get_area(iounit, (unsigned long)page_address(sg[sz].page) + sg[sz].offset, sg[sz].length);
-		sg[sz].dvma_length = sg[sz].length;
+		sg->dvma_address = iounit_get_area(iounit, (unsigned long)page_address(sg->page) + sg->offset, sg->length);
+		sg->dvma_length = sg->length;
+		sg = sg_next(sg);
 	}
 	spin_unlock_irqrestore(&iounit->lock, flags);
 }
@@ -173,11 +174,12 @@ static void iounit_release_scsi_sgl(struct scatterlist *sg, int sz, struct sbus_
 	spin_lock_irqsave(&iounit->lock, flags);
 	while (sz != 0) {
 		--sz;
-		len = ((sg[sz].dvma_address & ~PAGE_MASK) + sg[sz].length + (PAGE_SIZE-1)) >> PAGE_SHIFT;
-		vaddr = (sg[sz].dvma_address - IOUNIT_DMA_BASE) >> PAGE_SHIFT;
+		len = ((sg->dvma_address & ~PAGE_MASK) + sg->length + (PAGE_SIZE-1)) >> PAGE_SHIFT;
+		vaddr = (sg->dvma_address - IOUNIT_DMA_BASE) >> PAGE_SHIFT;
 		IOD(("iounit_release %08lx-%08lx\n", (long)vaddr, (long)len+vaddr));
 		for (len += vaddr; vaddr < len; vaddr++)
 			clear_bit(vaddr, iounit->bmap);
+		sg = sg_next(sg);
 	}
 	spin_unlock_irqrestore(&iounit->lock, flags);
 }
diff --git a/arch/sparc/mm/iommu.c b/arch/sparc/mm/iommu.c
index be042ef..0534cd0 100644
--- a/arch/sparc/mm/iommu.c
+++ b/arch/sparc/mm/iommu.c
@@ -12,8 +12,8 @@
 #include <linux/mm.h>
 #include <linux/slab.h>
 #include <linux/highmem.h>	/* pte_offset_map => kmap_atomic */
+#include <linux/scatterlist.h>
 
-#include <asm/scatterlist.h>
 #include <asm/pgalloc.h>
 #include <asm/pgtable.h>
 #include <asm/sbus.h>
@@ -240,7 +240,7 @@ static void iommu_get_scsi_sgl_noflush(struct scatterlist *sg, int sz, struct sb
 		n = (sg->length + sg->offset + PAGE_SIZE-1) >> PAGE_SHIFT;
 		sg->dvma_address = iommu_get_one(sg->page, n, sbus) + sg->offset;
 		sg->dvma_length = (__u32) sg->length;
-		sg++;
+		sg = sg_next(sg);
 	}
 }
 
@@ -254,7 +254,7 @@ static void iommu_get_scsi_sgl_gflush(struct scatterlist *sg, int sz, struct sbu
 		n = (sg->length + sg->offset + PAGE_SIZE-1) >> PAGE_SHIFT;
 		sg->dvma_address = iommu_get_one(sg->page, n, sbus) + sg->offset;
 		sg->dvma_length = (__u32) sg->length;
-		sg++;
+		sg = sg_next(sg);
 	}
 }
 
@@ -285,7 +285,7 @@ static void iommu_get_scsi_sgl_pflush(struct scatterlist *sg, int sz, struct sbu
 
 		sg->dvma_address = iommu_get_one(sg->page, n, sbus) + sg->offset;
 		sg->dvma_length = (__u32) sg->length;
-		sg++;
+		sg = sg_next(sg);
 	}
 }
 
@@ -325,7 +325,7 @@ static void iommu_release_scsi_sgl(struct scatterlist *sg, int sz, struct sbus_b
 		n = (sg->length + sg->offset + PAGE_SIZE-1) >> PAGE_SHIFT;
 		iommu_release_one(sg->dvma_address & PAGE_MASK, n, sbus);
 		sg->dvma_address = 0x21212121;
-		sg++;
+		sg = sg_next(sg);
 	}
 }
 
diff --git a/arch/sparc/mm/sun4c.c b/arch/sparc/mm/sun4c.c
index 436021c..e5c703b 100644
--- a/arch/sparc/mm/sun4c.c
+++ b/arch/sparc/mm/sun4c.c
@@ -17,8 +17,8 @@
 #include <linux/highmem.h>
 #include <linux/fs.h>
 #include <linux/seq_file.h>
+#include <linux/scatterlist.h>
 
-#include <asm/scatterlist.h>
 #include <asm/page.h>
 #include <asm/pgalloc.h>
 #include <asm/pgtable.h>
@@ -1229,8 +1229,9 @@ static void sun4c_get_scsi_sgl(struct scatterlist *sg, int sz, struct sbus_bus *
 {
 	while (sz != 0) {
 		--sz;
-		sg[sz].dvma_address = (__u32)sun4c_lockarea(page_address(sg[sz].page) + sg[sz].offset, sg[sz].length);
-		sg[sz].dvma_length = sg[sz].length;
+		sg->dvma_address = (__u32)sun4c_lockarea(page_address(sg->page) + sg->offset, sg->length);
+		sg->dvma_length = sg->length;
+		sg = sg_next(sg);
 	}
 }
 
@@ -1245,7 +1246,8 @@ static void sun4c_release_scsi_sgl(struct scatterlist *sg, int sz, struct sbus_b
 {
 	while (sz != 0) {
 		--sz;
-		sun4c_unlockarea((char *)sg[sz].dvma_address, sg[sz].length);
+		sun4c_unlockarea((char *)sg->dvma_address, sg->length);
+		sg = sg_next(sg);
 	}
 }
 
diff --git a/include/asm-sparc/scatterlist.h b/include/asm-sparc/scatterlist.h
index a4fcf9a..4055af9 100644
--- a/include/asm-sparc/scatterlist.h
+++ b/include/asm-sparc/scatterlist.h
@@ -19,4 +19,6 @@ struct scatterlist {
 
 #define ISA_DMA_THRESHOLD (~0UL)
 
+#define ARCH_HAS_SG_CHAIN
+
 #endif /* !(_SPARC_SCATTERLIST_H) */
-- 
1.5.3.rc0.90.gbaa79



* [PATCH 14/33] SPARC64: sg chaining support
  2007-07-16  9:47 [PATCH 00/33] SG table chaining support Jens Axboe
                   ` (12 preceding siblings ...)
  2007-07-16  9:47 ` [PATCH 13/33] SPARC: " Jens Axboe
@ 2007-07-16  9:47 ` Jens Axboe
  2007-07-16 11:29   ` David Miller
  2007-07-16  9:47 ` [PATCH 15/33] scsi: simplify scsi_free_sgtable() Jens Axboe
                   ` (21 subsequent siblings)
  35 siblings, 1 reply; 73+ messages in thread
From: Jens Axboe @ 2007-07-16  9:47 UTC (permalink / raw)
  To: linux-kernel, linux-scsi; +Cc: Jens Axboe, davem

This updates the sparc64 iommu/pci dma mappers to sg chaining.

Cc: davem@davemloft.net
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
 arch/sparc64/kernel/pci_iommu.c   |   39 +++++++++++++++++++++-----------
 arch/sparc64/kernel/pci_sun4v.c   |   32 ++++++++++++++++-----------
 arch/sparc64/kernel/sbus.c        |   44 ++++++++++++++++++++++--------------
 include/asm-sparc64/scatterlist.h |    2 +
 4 files changed, 73 insertions(+), 44 deletions(-)

diff --git a/arch/sparc64/kernel/pci_iommu.c b/arch/sparc64/kernel/pci_iommu.c
index 70d2364..2118a4d 100644
--- a/arch/sparc64/kernel/pci_iommu.c
+++ b/arch/sparc64/kernel/pci_iommu.c
@@ -9,6 +9,7 @@
 #include <linux/mm.h>
 #include <linux/delay.h>
 #include <linux/pci.h>
+#include <linux/scatterlist.h>
 
 #include <asm/oplib.h>
 
@@ -470,7 +471,7 @@ static inline void fill_sg(iopte_t *iopte, struct scatterlist *sg,
 			   int nused, int nelems, unsigned long iopte_protection)
 {
 	struct scatterlist *dma_sg = sg;
-	struct scatterlist *sg_end = sg + nelems;
+	struct scatterlist *sg_end = sg_last(sg, nelems);
 	int i;
 
 	for (i = 0; i < nused; i++) {
@@ -505,7 +506,7 @@ static inline void fill_sg(iopte_t *iopte, struct scatterlist *sg,
 					len -= (IO_PAGE_SIZE - (tmp & (IO_PAGE_SIZE - 1UL)));
 					break;
 				}
-				sg++;
+				sg = sg_next(sg);
 			}
 
 			pteval = iopte_protection | (pteval & IOPTE_PAGE);
@@ -518,7 +519,7 @@ static inline void fill_sg(iopte_t *iopte, struct scatterlist *sg,
 			}
 
 			pteval = (pteval & IOPTE_PAGE) + len;
-			sg++;
+			sg = sg_next(sg);
 
 			/* Skip over any tail mappings we've fully mapped,
 			 * adjusting pteval along the way.  Stop when we
@@ -530,12 +531,12 @@ static inline void fill_sg(iopte_t *iopte, struct scatterlist *sg,
 			       ((pteval ^
 				 (SG_ENT_PHYS_ADDRESS(sg) + sg->length - 1UL)) >> IO_PAGE_SHIFT) == 0UL) {
 				pteval += sg->length;
-				sg++;
+				sg = sg_next(sg);
 			}
 			if ((pteval << (64 - IO_PAGE_SHIFT)) == 0UL)
 				pteval = ~0UL;
 		} while (dma_npages != 0);
-		dma_sg++;
+		dma_sg = sg_next(dma_sg);
 	}
 }
 
@@ -599,7 +600,7 @@ static int pci_4u_map_sg(struct pci_dev *pdev, struct scatterlist *sglist, int n
 	sgtmp = sglist;
 	while (used && sgtmp->dma_length) {
 		sgtmp->dma_address += dma_base;
-		sgtmp++;
+		sgtmp = sg_next(sgtmp);
 		used--;
 	}
 	used = nelems - used;
@@ -635,6 +636,7 @@ static void pci_4u_unmap_sg(struct pci_dev *pdev, struct scatterlist *sglist, in
 	struct strbuf *strbuf;
 	iopte_t *base;
 	unsigned long flags, ctx, i, npages;
+	struct scatterlist *sg, *sgprv;
 	u32 bus_addr;
 
 	if (unlikely(direction == PCI_DMA_NONE)) {
@@ -647,11 +649,15 @@ static void pci_4u_unmap_sg(struct pci_dev *pdev, struct scatterlist *sglist, in
 	
 	bus_addr = sglist->dma_address & IO_PAGE_MASK;
 
-	for (i = 1; i < nelems; i++)
-		if (sglist[i].dma_length == 0)
+	sgprv = NULL;
+	for_each_sg(sglist, sg, nelems, i) {
+		if (sg->dma_length == 0)
 			break;
-	i--;
-	npages = (IO_PAGE_ALIGN(sglist[i].dma_address + sglist[i].dma_length) -
+
+		sgprv = sg;
+	}
+
+	npages = (IO_PAGE_ALIGN(sgprv->dma_address + sgprv->dma_length) -
 		  bus_addr) >> IO_PAGE_SHIFT;
 
 	base = iommu->page_table +
@@ -730,6 +736,7 @@ static void pci_4u_dma_sync_sg_for_cpu(struct pci_dev *pdev, struct scatterlist
 	struct iommu *iommu;
 	struct strbuf *strbuf;
 	unsigned long flags, ctx, npages, i;
+	struct scatterlist *sg, *sgprv;
 	u32 bus_addr;
 
 	iommu = pdev->dev.archdata.iommu;
@@ -753,11 +760,15 @@ static void pci_4u_dma_sync_sg_for_cpu(struct pci_dev *pdev, struct scatterlist
 
 	/* Step 2: Kick data out of streaming buffers. */
 	bus_addr = sglist[0].dma_address & IO_PAGE_MASK;
-	for(i = 1; i < nelems; i++)
-		if (!sglist[i].dma_length)
+	sgprv = NULL;
+	for_each_sg(sglist, sg, nelems, i) {
+		if (sg->dma_length == 0)
 			break;
-	i--;
-	npages = (IO_PAGE_ALIGN(sglist[i].dma_address + sglist[i].dma_length)
+
+		sgprv = sg;
+	}
+
+	npages = (IO_PAGE_ALIGN(sgprv->dma_address + sgprv->dma_length)
 		  - bus_addr) >> IO_PAGE_SHIFT;
 	pci_strbuf_flush(strbuf, iommu, bus_addr, ctx, npages, direction);
 
diff --git a/arch/sparc64/kernel/pci_sun4v.c b/arch/sparc64/kernel/pci_sun4v.c
index 6b3fe2c..ebbc567 100644
--- a/arch/sparc64/kernel/pci_sun4v.c
+++ b/arch/sparc64/kernel/pci_sun4v.c
@@ -13,6 +13,7 @@
 #include <linux/irq.h>
 #include <linux/msi.h>
 #include <linux/log2.h>
+#include <linux/scatterlist.h>
 
 #include <asm/iommu.h>
 #include <asm/irq.h>
@@ -368,7 +369,7 @@ static inline long fill_sg(long entry, struct pci_dev *pdev,
 			   int nused, int nelems, unsigned long prot)
 {
 	struct scatterlist *dma_sg = sg;
-	struct scatterlist *sg_end = sg + nelems;
+	struct scatterlist *sg_end = sg_last(sg, nelems);
 	unsigned long flags;
 	int i;
 
@@ -408,7 +409,7 @@ static inline long fill_sg(long entry, struct pci_dev *pdev,
 					len -= (IO_PAGE_SIZE - (tmp & (IO_PAGE_SIZE - 1UL)));
 					break;
 				}
-				sg++;
+				sg = sg_next(sg);
 			}
 
 			pteval = (pteval & IOPTE_PAGE);
@@ -426,24 +427,25 @@ static inline long fill_sg(long entry, struct pci_dev *pdev,
 			}
 
 			pteval = (pteval & IOPTE_PAGE) + len;
-			sg++;
+			sg = sg_next(sg);
 
 			/* Skip over any tail mappings we've fully mapped,
 			 * adjusting pteval along the way.  Stop when we
 			 * detect a page crossing event.
 			 */
-			while (sg < sg_end &&
-			       (pteval << (64 - IO_PAGE_SHIFT)) != 0UL &&
+			while ((pteval << (64 - IO_PAGE_SHIFT)) != 0UL &&
 			       (pteval == SG_ENT_PHYS_ADDRESS(sg)) &&
 			       ((pteval ^
 				 (SG_ENT_PHYS_ADDRESS(sg) + sg->length - 1UL)) >> IO_PAGE_SHIFT) == 0UL) {
 				pteval += sg->length;
-				sg++;
+				if (sg == sg_end)
+					break;
+				sg = sg_next(sg);
 			}
 			if ((pteval << (64 - IO_PAGE_SHIFT)) == 0UL)
 				pteval = ~0UL;
 		} while (dma_npages != 0);
-		dma_sg++;
+		dma_sg = sg_next(dma_sg);
 	}
 
 	if (unlikely(pci_iommu_batch_end() < 0L))
@@ -503,7 +505,7 @@ static int pci_4v_map_sg(struct pci_dev *pdev, struct scatterlist *sglist, int n
 	sgtmp = sglist;
 	while (used && sgtmp->dma_length) {
 		sgtmp->dma_address += dma_base;
-		sgtmp++;
+		sgtmp = sg_next(sgtmp);
 		used--;
 	}
 	used = nelems - used;
@@ -537,6 +539,7 @@ static void pci_4v_unmap_sg(struct pci_dev *pdev, struct scatterlist *sglist, in
 	struct pci_pbm_info *pbm;
 	struct iommu *iommu;
 	unsigned long flags, i, npages;
+	struct scatterlist *sg, *sgprv;
 	long entry;
 	u32 devhandle, bus_addr;
 
@@ -550,12 +553,15 @@ static void pci_4v_unmap_sg(struct pci_dev *pdev, struct scatterlist *sglist, in
 	devhandle = pbm->devhandle;
 	
 	bus_addr = sglist->dma_address & IO_PAGE_MASK;
-
-	for (i = 1; i < nelems; i++)
-		if (sglist[i].dma_length == 0)
+	sgprv = NULL;
+	for_each_sg(sglist, sg, nelems, i) {
+		if (sg->dma_length == 0)
 			break;
-	i--;
-	npages = (IO_PAGE_ALIGN(sglist[i].dma_address + sglist[i].dma_length) -
+
+		sgprv = sg;
+	}
+
+	npages = (IO_PAGE_ALIGN(sgprv->dma_address + sgprv->dma_length) -
 		  bus_addr) >> IO_PAGE_SHIFT;
 
 	entry = ((bus_addr - iommu->page_table_map_base) >> IO_PAGE_SHIFT);
diff --git a/arch/sparc64/kernel/sbus.c b/arch/sparc64/kernel/sbus.c
index a1fd9bc..4479f91 100644
--- a/arch/sparc64/kernel/sbus.c
+++ b/arch/sparc64/kernel/sbus.c
@@ -11,6 +11,7 @@
 #include <linux/slab.h>
 #include <linux/init.h>
 #include <linux/interrupt.h>
+#include <linux/scatterlist.h>
 
 #include <asm/page.h>
 #include <asm/sbus.h>
@@ -348,7 +349,7 @@ static inline void fill_sg(iopte_t *iopte, struct scatterlist *sg,
 			   int nused, int nelems, unsigned long iopte_protection)
 {
 	struct scatterlist *dma_sg = sg;
-	struct scatterlist *sg_end = sg + nelems;
+	struct scatterlist *sg_end = sg_last(sg, nelems);
 	int i;
 
 	for (i = 0; i < nused; i++) {
@@ -383,7 +384,7 @@ static inline void fill_sg(iopte_t *iopte, struct scatterlist *sg,
 					len -= (IO_PAGE_SIZE - (tmp & (IO_PAGE_SIZE - 1UL)));
 					break;
 				}
-				sg++;
+				sg = sg_next(sg);
 			}
 
 			pteval = iopte_protection | (pteval & IOPTE_PAGE);
@@ -396,24 +397,25 @@ static inline void fill_sg(iopte_t *iopte, struct scatterlist *sg,
 			}
 
 			pteval = (pteval & IOPTE_PAGE) + len;
-			sg++;
+			sg = sg_next(sg);
 
 			/* Skip over any tail mappings we've fully mapped,
 			 * adjusting pteval along the way.  Stop when we
 			 * detect a page crossing event.
 			 */
-			while (sg < sg_end &&
-			       (pteval << (64 - IO_PAGE_SHIFT)) != 0UL &&
+			while ((pteval << (64 - IO_PAGE_SHIFT)) != 0UL &&
 			       (pteval == SG_ENT_PHYS_ADDRESS(sg)) &&
 			       ((pteval ^
 				 (SG_ENT_PHYS_ADDRESS(sg) + sg->length - 1UL)) >> IO_PAGE_SHIFT) == 0UL) {
 				pteval += sg->length;
-				sg++;
+				if (sg == sg_end)
+					break;
+				sg = sg_next(sg);
 			}
 			if ((pteval << (64 - IO_PAGE_SHIFT)) == 0UL)
 				pteval = ~0UL;
 		} while (dma_npages != 0);
-		dma_sg++;
+		dma_sg = sg_next(dma_sg);
 	}
 }
 
@@ -461,7 +463,7 @@ int sbus_map_sg(struct sbus_dev *sdev, struct scatterlist *sglist, int nelems, i
 	sgtmp = sglist;
 	while (used && sgtmp->dma_length) {
 		sgtmp->dma_address += dma_base;
-		sgtmp++;
+		sgtmp = sg_next(sgtmp);
 		used--;
 	}
 	used = nelems - used;
@@ -486,6 +488,7 @@ void sbus_unmap_sg(struct sbus_dev *sdev, struct scatterlist *sglist, int nelems
 	struct strbuf *strbuf;
 	iopte_t *base;
 	unsigned long flags, i, npages;
+	struct scatterlist *sg, *sgprv;
 	u32 bus_addr;
 
 	if (unlikely(direction == SBUS_DMA_NONE))
@@ -496,12 +499,15 @@ void sbus_unmap_sg(struct sbus_dev *sdev, struct scatterlist *sglist, int nelems
 	strbuf = &info->strbuf;
 
 	bus_addr = sglist->dma_address & IO_PAGE_MASK;
-
-	for (i = 1; i < nelems; i++)
-		if (sglist[i].dma_length == 0)
+	sgprv = NULL;
+	for_each_sg(sglist, sg, nelems, i) {
+		if (sg->dma_length == 0)
 			break;
-	i--;
-	npages = (IO_PAGE_ALIGN(sglist[i].dma_address + sglist[i].dma_length) -
+
+		sgprv = sg;
+	}
+
+	npages = (IO_PAGE_ALIGN(sgprv->dma_address + sgprv->dma_length) -
 		  bus_addr) >> IO_PAGE_SHIFT;
 
 	base = iommu->page_table +
@@ -545,6 +551,7 @@ void sbus_dma_sync_sg_for_cpu(struct sbus_dev *sdev, struct scatterlist *sglist,
 	struct iommu *iommu;
 	struct strbuf *strbuf;
 	unsigned long flags, npages, i;
+	struct scatterlist *sg, *sgprv;
 	u32 bus_addr;
 
 	info = sdev->bus->iommu;
@@ -552,12 +559,15 @@ void sbus_dma_sync_sg_for_cpu(struct sbus_dev *sdev, struct scatterlist *sglist,
 	strbuf = &info->strbuf;
 
 	bus_addr = sglist[0].dma_address & IO_PAGE_MASK;
-	for (i = 0; i < nelems; i++) {
-		if (!sglist[i].dma_length)
+	sgprv = NULL;
+	for_each_sg(sglist, sg, nelems, i) {
+		if (sg->dma_length == 0)
 			break;
+
+		sgprv = sg;
 	}
-	i--;
-	npages = (IO_PAGE_ALIGN(sglist[i].dma_address + sglist[i].dma_length)
+
+	npages = (IO_PAGE_ALIGN(sgprv->dma_address + sgprv->dma_length)
 		  - bus_addr) >> IO_PAGE_SHIFT;
 
 	spin_lock_irqsave(&iommu->lock, flags);
diff --git a/include/asm-sparc64/scatterlist.h b/include/asm-sparc64/scatterlist.h
index 048fdb4..703c5bb 100644
--- a/include/asm-sparc64/scatterlist.h
+++ b/include/asm-sparc64/scatterlist.h
@@ -20,4 +20,6 @@ struct scatterlist {
 
 #define ISA_DMA_THRESHOLD	(~0UL)
 
+#define ARCH_HAS_SG_CHAIN
+
 #endif /* !(_SPARC64_SCATTERLIST_H) */
-- 
1.5.3.rc0.90.gbaa79


^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH 15/33] scsi: simplify scsi_free_sgtable()
  2007-07-16  9:47 [PATCH 00/33] SG table chaining support Jens Axboe
                   ` (13 preceding siblings ...)
  2007-07-16  9:47 ` [PATCH 14/33] SPARC64: " Jens Axboe
@ 2007-07-16  9:47 ` Jens Axboe
  2007-07-16  9:47 ` [PATCH 16/33] SCSI: support for allocating large scatterlists Jens Axboe
                   ` (20 subsequent siblings)
  35 siblings, 0 replies; 73+ messages in thread
From: Jens Axboe @ 2007-07-16  9:47 UTC (permalink / raw)
  To: linux-kernel, linux-scsi; +Cc: Jens Axboe, fujita.tomonori, James.Bottomley

Just pass in the command, no point in passing in the scatterlist
and scatterlist pool index separately.
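
The call sites then shrink to just passing the command; roughly, the
before/after shape (condensed from the hunks below):

	/* before */
	scsi_free_sgtable(cmd->request_buffer, cmd->sglist_len);

	/* after */
	scsi_free_sgtable(cmd);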

Cc: fujita.tomonori@lab.ntt.co.jp
Cc: James.Bottomley@SteelEye.com
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
 drivers/scsi/scsi_lib.c     |    9 +++++----
 drivers/scsi/scsi_tgt_lib.c |    4 ++--
 include/scsi/scsi_cmnd.h    |    2 +-
 3 files changed, 8 insertions(+), 7 deletions(-)

diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index 9fe66f8..0b4610c 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -746,13 +746,14 @@ struct scatterlist *scsi_alloc_sgtable(struct scsi_cmnd *cmd, gfp_t gfp_mask)
 
 EXPORT_SYMBOL(scsi_alloc_sgtable);
 
-void scsi_free_sgtable(struct scatterlist *sgl, int index)
+void scsi_free_sgtable(struct scsi_cmnd *cmd)
 {
+	struct scatterlist *sgl = cmd->request_buffer;
 	struct scsi_host_sg_pool *sgp;
 
-	BUG_ON(index >= SG_MEMPOOL_NR);
+	BUG_ON(cmd->sglist_len >= SG_MEMPOOL_NR);
 
-	sgp = scsi_sg_pools + index;
+	sgp = scsi_sg_pools + cmd->sglist_len;
 	mempool_free(sgl, sgp->pool);
 }
 
@@ -778,7 +779,7 @@ EXPORT_SYMBOL(scsi_free_sgtable);
 static void scsi_release_buffers(struct scsi_cmnd *cmd)
 {
 	if (cmd->use_sg)
-		scsi_free_sgtable(cmd->request_buffer, cmd->sglist_len);
+		scsi_free_sgtable(cmd);
 
 	/*
 	 * Zero these out.  They now point to freed memory, and it is
diff --git a/drivers/scsi/scsi_tgt_lib.c b/drivers/scsi/scsi_tgt_lib.c
index 2570f48..d6e58e5 100644
--- a/drivers/scsi/scsi_tgt_lib.c
+++ b/drivers/scsi/scsi_tgt_lib.c
@@ -329,7 +329,7 @@ static void scsi_tgt_cmd_done(struct scsi_cmnd *cmd)
 	scsi_tgt_uspace_send_status(cmd, tcmd->tag);
 
 	if (cmd->request_buffer)
-		scsi_free_sgtable(cmd->request_buffer, cmd->sglist_len);
+		scsi_free_sgtable(cmd);
 
 	queue_work(scsi_tgtd, &tcmd->work);
 }
@@ -370,7 +370,7 @@ static int scsi_tgt_init_cmd(struct scsi_cmnd *cmd, gfp_t gfp_mask)
 	}
 
 	eprintk("cmd %p cnt %d\n", cmd, cmd->use_sg);
-	scsi_free_sgtable(cmd->request_buffer, cmd->sglist_len);
+	scsi_free_sgtable(cmd);
 	return -EINVAL;
 }
 
diff --git a/include/scsi/scsi_cmnd.h b/include/scsi/scsi_cmnd.h
index a059f1b..760d4a5 100644
--- a/include/scsi/scsi_cmnd.h
+++ b/include/scsi/scsi_cmnd.h
@@ -133,7 +133,7 @@ extern void *scsi_kmap_atomic_sg(struct scatterlist *sg, int sg_count,
 extern void scsi_kunmap_atomic_sg(void *virt);
 
 extern struct scatterlist *scsi_alloc_sgtable(struct scsi_cmnd *, gfp_t);
-extern void scsi_free_sgtable(struct scatterlist *, int);
+extern void scsi_free_sgtable(struct scsi_cmnd *);
 
 extern int scsi_dma_map(struct scsi_cmnd *cmd);
 extern void scsi_dma_unmap(struct scsi_cmnd *cmd);
-- 
1.5.3.rc0.90.gbaa79


^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH 16/33] SCSI: support for allocating large scatterlists
  2007-07-16  9:47 [PATCH 00/33] SG table chaining support Jens Axboe
                   ` (14 preceding siblings ...)
  2007-07-16  9:47 ` [PATCH 15/33] scsi: simplify scsi_free_sgtable() Jens Axboe
@ 2007-07-16  9:47 ` Jens Axboe
  2007-07-16  9:47 ` [PATCH 17/33] ll_rw_blk: temporarily enable max_segments tweaking Jens Axboe
                   ` (19 subsequent siblings)
  35 siblings, 0 replies; 73+ messages in thread
From: Jens Axboe @ 2007-07-16  9:47 UTC (permalink / raw)
  To: linux-kernel, linux-scsi; +Cc: Jens Axboe, fujita.tomonori, James.Bottomley

This is what enables large commands. If we need to allocate an
sgtable that doesn't fit in a single page, we allocate several
SCSI_MAX_SG_SEGMENTS-sized tables and chain them together.

SCSI defaults to large chained sg tables, if the arch supports it.
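
Each pool-sized table donates its last slot as a link to the next table,
which is why the allocator below only counts SCSI_MAX_SG_SEGMENTS - 1
usable entries per chained piece. A minimal sketch of the chaining step;
big_pool stands in for the largest of the scsi_sg_pools and is an
illustrative name, not from the patch:

	struct scatterlist *first, *second;

	first  = mempool_alloc(big_pool, GFP_ATOMIC);	/* SCSI_MAX_SG_SEGMENTS entries */
	second = mempool_alloc(big_pool, GFP_ATOMIC);

	/* last entry of 'first' becomes a chain pointer to 'second' */
	sg_chain(first, SCSI_MAX_SG_SEGMENTS, second);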

Cc: fujita.tomonori@lab.ntt.co.jp
Cc: James.Bottomley@SteelEye.com
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
 drivers/scsi/scsi_lib.c  |  208 ++++++++++++++++++++++++++++++++++++----------
 include/scsi/scsi.h      |    7 --
 include/scsi/scsi_cmnd.h |    1 +
 3 files changed, 163 insertions(+), 53 deletions(-)

diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index 0b4610c..60bb557 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -36,33 +36,20 @@
 
 struct scsi_host_sg_pool {
 	size_t		size;
-	char		*name; 
+	char		*name;
 	struct kmem_cache	*slab;
 	mempool_t	*pool;
 };
 
-#if (SCSI_MAX_PHYS_SEGMENTS < 32)
-#error SCSI_MAX_PHYS_SEGMENTS is too small
-#endif
-
-#define SP(x) { x, "sgpool-" #x } 
+#define SP(x) { x, "sgpool-" #x }
 static struct scsi_host_sg_pool scsi_sg_pools[] = {
 	SP(8),
 	SP(16),
 	SP(32),
-#if (SCSI_MAX_PHYS_SEGMENTS > 32)
 	SP(64),
-#if (SCSI_MAX_PHYS_SEGMENTS > 64)
 	SP(128),
-#if (SCSI_MAX_PHYS_SEGMENTS > 128)
 	SP(256),
-#if (SCSI_MAX_PHYS_SEGMENTS > 256)
-#error SCSI_MAX_PHYS_SEGMENTS is too large
-#endif
-#endif
-#endif
-#endif
-}; 	
+};
 #undef SP
 
 static void scsi_run_queue(struct request_queue *q);
@@ -703,45 +690,126 @@ static struct scsi_cmnd *scsi_end_request(struct scsi_cmnd *cmd, int uptodate,
 	return NULL;
 }
 
-struct scatterlist *scsi_alloc_sgtable(struct scsi_cmnd *cmd, gfp_t gfp_mask)
-{
-	struct scsi_host_sg_pool *sgp;
-	struct scatterlist *sgl;
+/*
+ * The maximum number of SG segments that we will put inside a scatterlist
+ * (unless chaining is used). Should ideally fit inside a single page, to
+ * avoid a higher order allocation.
+ */
+#define SCSI_MAX_SG_SEGMENTS	128
 
-	BUG_ON(!cmd->use_sg);
+/*
+ * Like SCSI_MAX_SG_SEGMENTS, but for archs that have sg chaining. This limit
+ * is totally arbitrary, a setting of 2048 will get you at least 8mb ios.
+ */
+#define SCSI_MAX_SG_CHAIN_SEGMENTS	2048
 
-	switch (cmd->use_sg) {
+static inline unsigned int scsi_sgtable_index(unsigned short nents)
+{
+	unsigned int index;
+
+	switch (nents) {
 	case 1 ... 8:
-		cmd->sglist_len = 0;
+		index = 0;
 		break;
 	case 9 ... 16:
-		cmd->sglist_len = 1;
+		index = 1;
 		break;
 	case 17 ... 32:
-		cmd->sglist_len = 2;
+		index = 2;
 		break;
-#if (SCSI_MAX_PHYS_SEGMENTS > 32)
 	case 33 ... 64:
-		cmd->sglist_len = 3;
+		index = 3;
 		break;
-#if (SCSI_MAX_PHYS_SEGMENTS > 64)
-	case 65 ... 128:
-		cmd->sglist_len = 4;
+	case 65 ... SCSI_MAX_SG_SEGMENTS:
+		index = 4;
 		break;
-#if (SCSI_MAX_PHYS_SEGMENTS  > 128)
-	case 129 ... 256:
-		cmd->sglist_len = 5;
-		break;
-#endif
-#endif
-#endif
 	default:
-		return NULL;
+		printk(KERN_ERR "scsi: bad segment count=%d\n", nents);
+		BUG();
 	}
 
-	sgp = scsi_sg_pools + cmd->sglist_len;
-	sgl = mempool_alloc(sgp->pool, gfp_mask);
-	return sgl;
+	return index;
+}
+
+struct scatterlist *scsi_alloc_sgtable(struct scsi_cmnd *cmd, gfp_t gfp_mask)
+{
+	struct scsi_host_sg_pool *sgp;
+	struct scatterlist *sgl, *prev, *ret;
+	unsigned int index;
+	int this, left;
+
+	BUG_ON(!cmd->use_sg);
+
+	left = cmd->use_sg;
+	ret = prev = NULL;
+	do {
+		this = left;
+		if (this > SCSI_MAX_SG_SEGMENTS) {
+			this = SCSI_MAX_SG_SEGMENTS - 1;
+			index = SG_MEMPOOL_NR - 1;
+		} else
+			index = scsi_sgtable_index(this);
+
+		left -= this;
+
+		sgp = scsi_sg_pools + index;
+
+		sgl = mempool_alloc(sgp->pool, gfp_mask);
+		if (unlikely(!sgl))
+			goto enomem;
+
+		memset(sgl, 0, sizeof(*sgl) * sgp->size);
+
+		/*
+		 * first loop through, set initial index and return value
+		 */
+		if (!ret) {
+			cmd->sglist_len = index;
+			ret = sgl;
+		}
+
+		/*
+		 * chain previous sglist, if any. we know the previous
+		 * sglist must be the biggest one, or we would not have
+		 * ended up doing another loop.
+		 */
+		if (prev)
+			sg_chain(prev, SCSI_MAX_SG_SEGMENTS, sgl);
+
+		/*
+		 * don't allow subsequent mempool allocs to sleep, it would
+		 * violate the mempool principle.
+		 */
+		gfp_mask &= ~__GFP_WAIT;
+		gfp_mask |= __GFP_HIGH;
+		prev = sgl;
+	} while (left);
+
+	/*
+	 * ->use_sg may get modified after dma mapping has potentially
+	 * shrunk the number of segments, so keep a copy of it for free.
+	 */
+	cmd->__use_sg = cmd->use_sg;
+	return ret;
+enomem:
+	if (ret) {
+		/*
+		 * Free entries chained off ret. Since we were trying to
+		 * allocate another sglist, we know that all entries are of
+		 * the max size.
+		 */
+		sgp = scsi_sg_pools + SG_MEMPOOL_NR - 1;
+		prev = ret;
+		ret = &ret[SCSI_MAX_SG_SEGMENTS - 1];
+
+		while ((sgl = sg_chain_ptr(ret)) != NULL) {
+			ret = &sgl[SCSI_MAX_SG_SEGMENTS - 1];
+			mempool_free(sgl, sgp->pool);
+		}
+
+		mempool_free(prev, sgp->pool);
+	}
+	return NULL;
 }
 
 EXPORT_SYMBOL(scsi_alloc_sgtable);
@@ -753,6 +821,42 @@ void scsi_free_sgtable(struct scsi_cmnd *cmd)
 
 	BUG_ON(cmd->sglist_len >= SG_MEMPOOL_NR);
 
+	/*
+	 * if this is the biggest size sglist, check if we have
+	 * chained parts we need to free
+	 */
+	if (cmd->__use_sg > SCSI_MAX_SG_SEGMENTS) {
+		unsigned short this, left;
+		struct scatterlist *next;
+		unsigned int index;
+
+		left = cmd->__use_sg - (SCSI_MAX_SG_SEGMENTS - 1);
+		next = sg_chain_ptr(&sgl[SCSI_MAX_SG_SEGMENTS - 1]);
+		while (left && next) {
+			sgl = next;
+			this = left;
+			if (this > SCSI_MAX_SG_SEGMENTS) {
+				this = SCSI_MAX_SG_SEGMENTS - 1;
+				index = SG_MEMPOOL_NR - 1;
+			} else
+				index = scsi_sgtable_index(this);
+
+			left -= this;
+
+			sgp = scsi_sg_pools + index;
+
+			if (left)
+				next = sg_chain_ptr(&sgl[sgp->size - 1]);
+
+			mempool_free(sgl, sgp->pool);
+		}
+
+		/*
+		 * Restore original, will be freed below
+		 */
+		sgl = cmd->request_buffer;
+	}
+
 	sgp = scsi_sg_pools + cmd->sglist_len;
 	mempool_free(sgl, sgp->pool);
 }
@@ -994,7 +1098,6 @@ EXPORT_SYMBOL(scsi_io_completion);
 static int scsi_init_io(struct scsi_cmnd *cmd)
 {
 	struct request     *req = cmd->request;
-	struct scatterlist *sgpnt;
 	int		   count;
 
 	/*
@@ -1007,14 +1110,13 @@ static int scsi_init_io(struct scsi_cmnd *cmd)
 	/*
 	 * If sg table allocation fails, requeue request later.
 	 */
-	sgpnt = scsi_alloc_sgtable(cmd, GFP_ATOMIC);
-	if (unlikely(!sgpnt)) {
+	cmd->request_buffer = scsi_alloc_sgtable(cmd, GFP_ATOMIC);
+	if (unlikely(!cmd->request_buffer)) {
 		scsi_unprep_request(req);
 		return BLKPREP_DEFER;
 	}
 
 	req->buffer = NULL;
-	cmd->request_buffer = (char *) sgpnt;
 	if (blk_pc_request(req))
 		cmd->request_bufflen = req->data_len;
 	else
@@ -1578,8 +1680,22 @@ struct request_queue *__scsi_alloc_queue(struct Scsi_Host *shost,
 	if (!q)
 		return NULL;
 
+	/*
+	 * this limit is imposed by hardware restrictions
+	 */
 	blk_queue_max_hw_segments(q, shost->sg_tablesize);
-	blk_queue_max_phys_segments(q, SCSI_MAX_PHYS_SEGMENTS);
+
+	/*
+	 * In the future, sg chaining support will be mandatory and this
+	 * ifdef can then go away. Right now we don't have all archs
+	 * converted, so better keep it safe.
+	 */
+#ifdef ARCH_HAS_SG_CHAIN
+	blk_queue_max_phys_segments(q, SCSI_MAX_SG_CHAIN_SEGMENTS);
+#else
+	blk_queue_max_phys_segments(q, SCSI_MAX_SG_SEGMENTS);
+#endif
+
 	blk_queue_max_sectors(q, shost->max_sectors);
 	blk_queue_bounce_limit(q, scsi_calculate_bounce_limit(shost));
 	blk_queue_segment_boundary(q, shost->dma_boundary);
diff --git a/include/scsi/scsi.h b/include/scsi/scsi.h
index 9f8f80a..702fcfe 100644
--- a/include/scsi/scsi.h
+++ b/include/scsi/scsi.h
@@ -11,13 +11,6 @@
 #include <linux/types.h>
 
 /*
- *	The maximum sg list length SCSI can cope with
- *	(currently must be a power of 2 between 32 and 256)
- */
-#define SCSI_MAX_PHYS_SEGMENTS	MAX_PHYS_SEGMENTS
-
-
-/*
  *	SCSI command lengths
  */
 
diff --git a/include/scsi/scsi_cmnd.h b/include/scsi/scsi_cmnd.h
index 760d4a5..937b3c4 100644
--- a/include/scsi/scsi_cmnd.h
+++ b/include/scsi/scsi_cmnd.h
@@ -72,6 +72,7 @@ struct scsi_cmnd {
 	/* These elements define the operation we ultimately want to perform */
 	unsigned short use_sg;	/* Number of pieces of scatter-gather */
 	unsigned short sglist_len;	/* size of malloc'd scatter-gather list */
+	unsigned short __use_sg;
 
 	unsigned underflow;	/* Return error if less than
 				   this amount is transferred */
-- 
1.5.3.rc0.90.gbaa79


^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH 17/33] ll_rw_blk: temporarily enable max_segments tweaking
  2007-07-16  9:47 [PATCH 00/33] SG table chaining support Jens Axboe
                   ` (15 preceding siblings ...)
  2007-07-16  9:47 ` [PATCH 16/33] SCSI: support for allocating large scatterlists Jens Axboe
@ 2007-07-16  9:47 ` Jens Axboe
  2007-07-16  9:47 ` [PATCH 18/33] libata: convert to using sg helpers Jens Axboe
                   ` (18 subsequent siblings)
  35 siblings, 0 replies; 73+ messages in thread
From: Jens Axboe @ 2007-07-16  9:47 UTC (permalink / raw)
  To: linux-kernel, linux-scsi; +Cc: Jens Axboe

Expose this setting for now, so that users can experiment with enabling
large commands without turning them on globally by default. This is a
debug patch; it will be dropped for the final version.

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
 block/ll_rw_blk.c |   23 +++++++++++++++++++++++
 1 files changed, 23 insertions(+), 0 deletions(-)

diff --git a/block/ll_rw_blk.c b/block/ll_rw_blk.c
index ab71087..3b85461 100644
--- a/block/ll_rw_blk.c
+++ b/block/ll_rw_blk.c
@@ -3980,7 +3980,23 @@ static ssize_t queue_max_hw_sectors_show(struct request_queue *q, char *page)
 	return queue_var_show(max_hw_sectors_kb, (page));
 }
 
+static ssize_t queue_max_segments_show(struct request_queue *q, char *page)
+{
+	return queue_var_show(q->max_phys_segments, page);
+}
+
+static ssize_t queue_max_segments_store(struct request_queue *q,
+					const char *page, size_t count)
+{
+	unsigned long segments;
+	ssize_t ret = queue_var_store(&segments, page, count);
 
+	spin_lock_irq(q->queue_lock);
+	q->max_phys_segments = segments;
+	spin_unlock_irq(q->queue_lock);
+
+	return ret;
+}
 static struct queue_sysfs_entry queue_requests_entry = {
 	.attr = {.name = "nr_requests", .mode = S_IRUGO | S_IWUSR },
 	.show = queue_requests_show,
@@ -4004,6 +4020,12 @@ static struct queue_sysfs_entry queue_max_hw_sectors_entry = {
 	.show = queue_max_hw_sectors_show,
 };
 
+static struct queue_sysfs_entry queue_max_segments_entry = {
+	.attr = {.name = "max_segments", .mode = S_IRUGO | S_IWUSR },
+	.show = queue_max_segments_show,
+	.store = queue_max_segments_store,
+};
+
 static struct queue_sysfs_entry queue_iosched_entry = {
 	.attr = {.name = "scheduler", .mode = S_IRUGO | S_IWUSR },
 	.show = elv_iosched_show,
@@ -4015,6 +4037,7 @@ static struct attribute *default_attrs[] = {
 	&queue_ra_entry.attr,
 	&queue_max_hw_sectors_entry.attr,
 	&queue_max_sectors_entry.attr,
+	&queue_max_segments_entry.attr,
 	&queue_iosched_entry.attr,
 	NULL,
 };
-- 
1.5.3.rc0.90.gbaa79


^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH 18/33] libata: convert to using sg helpers
  2007-07-16  9:47 [PATCH 00/33] SG table chaining support Jens Axboe
                   ` (16 preceding siblings ...)
  2007-07-16  9:47 ` [PATCH 17/33] ll_rw_blk: temporarily enable max_segments tweaking Jens Axboe
@ 2007-07-16  9:47 ` Jens Axboe
  2007-07-16  9:47 ` [PATCH 19/33] scsi_debug: support sg chaining Jens Axboe
                   ` (17 subsequent siblings)
  35 siblings, 0 replies; 73+ messages in thread
From: Jens Axboe @ 2007-07-16  9:47 UTC (permalink / raw)
  To: linux-kernel, linux-scsi; +Cc: Jens Axboe, jgarzik, htejun

This converts libata to using the sg helpers for looking up sg
elements, instead of doing it manually.
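
The central change is that the per-command cursor (qc->cursg) becomes a
scatterlist pointer stepped with sg_next() instead of an array index; a
minimal sketch of the stepping logic, condensed from the hunks below:

	if (qc->cursg_ofs == qc->cursg->length) {
		qc->cursg = sg_next(qc->cursg);	/* follows chained tables too */
		qc->cursg_ofs = 0;
	}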

Cc: jgarzik@pobox.com
Cc: htejun@gmail.com
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
 drivers/ata/libata-core.c |   30 ++++++++++++++++--------------
 include/linux/libata.h    |   16 ++++++++++------
 2 files changed, 26 insertions(+), 20 deletions(-)

diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c
index 88e2dd0..788dc7b 100644
--- a/drivers/ata/libata-core.c
+++ b/drivers/ata/libata-core.c
@@ -1341,7 +1341,7 @@ static void ata_qc_complete_internal(struct ata_queued_cmd *qc)
  */
 unsigned ata_exec_internal_sg(struct ata_device *dev,
 			      struct ata_taskfile *tf, const u8 *cdb,
-			      int dma_dir, struct scatterlist *sg,
+			      int dma_dir, struct scatterlist *sgl,
 			      unsigned int n_elem)
 {
 	struct ata_port *ap = dev->ap;
@@ -1399,11 +1399,12 @@ unsigned ata_exec_internal_sg(struct ata_device *dev,
 	qc->dma_dir = dma_dir;
 	if (dma_dir != DMA_NONE) {
 		unsigned int i, buflen = 0;
+		struct scatterlist *sg;
 
-		for (i = 0; i < n_elem; i++)
-			buflen += sg[i].length;
+		for_each_sg(sgl, sg, n_elem, i)
+			buflen += sg->length;
 
-		ata_sg_init(qc, sg, n_elem);
+		ata_sg_init(qc, sgl, n_elem);
 		qc->nbytes = buflen;
 	}
 
@@ -4000,7 +4001,7 @@ void ata_sg_clean(struct ata_queued_cmd *qc)
 		if (qc->n_elem)
 			dma_unmap_sg(ap->dev, sg, qc->n_elem, dir);
 		/* restore last sg */
-		sg[qc->orig_n_elem - 1].length += qc->pad_len;
+		sg_last(sg, qc->orig_n_elem)->length += qc->pad_len;
 		if (pad_buf) {
 			struct scatterlist *psg = &qc->pad_sgent;
 			void *addr = kmap_atomic(psg->page, KM_IRQ0);
@@ -4225,6 +4226,7 @@ void ata_sg_init_one(struct ata_queued_cmd *qc, void *buf, unsigned int buflen)
 	qc->orig_n_elem = 1;
 	qc->buf_virt = buf;
 	qc->nbytes = buflen;
+	qc->cursg = qc->__sg;
 
 	sg_init_one(&qc->sgent, buf, buflen);
 }
@@ -4250,6 +4252,7 @@ void ata_sg_init(struct ata_queued_cmd *qc, struct scatterlist *sg,
 	qc->__sg = sg;
 	qc->n_elem = n_elem;
 	qc->orig_n_elem = n_elem;
+	qc->cursg = qc->__sg;
 }
 
 /**
@@ -4339,7 +4342,7 @@ static int ata_sg_setup(struct ata_queued_cmd *qc)
 {
 	struct ata_port *ap = qc->ap;
 	struct scatterlist *sg = qc->__sg;
-	struct scatterlist *lsg = &sg[qc->n_elem - 1];
+	struct scatterlist *lsg = sg_last(qc->__sg, qc->n_elem);
 	int n_elem, pre_n_elem, dir, trim_sg = 0;
 
 	VPRINTK("ENTER, ata%u\n", ap->print_id);
@@ -4503,7 +4506,6 @@ void ata_data_xfer_noirq(struct ata_device *adev, unsigned char *buf,
 static void ata_pio_sector(struct ata_queued_cmd *qc)
 {
 	int do_write = (qc->tf.flags & ATA_TFLAG_WRITE);
-	struct scatterlist *sg = qc->__sg;
 	struct ata_port *ap = qc->ap;
 	struct page *page;
 	unsigned int offset;
@@ -4512,8 +4514,8 @@ static void ata_pio_sector(struct ata_queued_cmd *qc)
 	if (qc->curbytes == qc->nbytes - qc->sect_size)
 		ap->hsm_task_state = HSM_ST_LAST;
 
-	page = sg[qc->cursg].page;
-	offset = sg[qc->cursg].offset + qc->cursg_ofs;
+	page = qc->cursg->page;
+	offset = qc->cursg->offset + qc->cursg_ofs;
 
 	/* get the current page and offset */
 	page = nth_page(page, (offset >> PAGE_SHIFT));
@@ -4541,8 +4543,8 @@ static void ata_pio_sector(struct ata_queued_cmd *qc)
 	qc->curbytes += qc->sect_size;
 	qc->cursg_ofs += qc->sect_size;
 
-	if (qc->cursg_ofs == (&sg[qc->cursg])->length) {
-		qc->cursg++;
+	if (qc->cursg_ofs == qc->cursg->length) {
+		qc->cursg = sg_next(qc->cursg);
 		qc->cursg_ofs = 0;
 	}
 }
@@ -4635,7 +4637,7 @@ static void __atapi_pio_bytes(struct ata_queued_cmd *qc, unsigned int bytes)
 		ap->hsm_task_state = HSM_ST_LAST;
 
 next_sg:
-	if (unlikely(qc->cursg >= qc->n_elem)) {
+	if (unlikely(qc->cursg == sg_last(qc->__sg, qc->n_elem))) {
 		/*
 		 * The end of qc->sg is reached and the device expects
 		 * more data to transfer. In order not to overrun qc->sg
@@ -4658,7 +4660,7 @@ next_sg:
 		return;
 	}
 
-	sg = &qc->__sg[qc->cursg];
+	sg = qc->cursg;
 
 	page = sg->page;
 	offset = sg->offset + qc->cursg_ofs;
@@ -4697,7 +4699,7 @@ next_sg:
 	qc->cursg_ofs += count;
 
 	if (qc->cursg_ofs == sg->length) {
-		qc->cursg++;
+		qc->cursg = sg_next(qc->cursg);
 		qc->cursg_ofs = 0;
 	}
 
diff --git a/include/linux/libata.h b/include/linux/libata.h
index 47cd2a1..005b159 100644
--- a/include/linux/libata.h
+++ b/include/linux/libata.h
@@ -30,7 +30,7 @@
 #include <linux/interrupt.h>
 #include <linux/pci.h>
 #include <linux/dma-mapping.h>
-#include <asm/scatterlist.h>
+#include <linux/scatterlist.h>
 #include <linux/io.h>
 #include <linux/ata.h>
 #include <linux/workqueue.h>
@@ -386,6 +386,7 @@ struct ata_queued_cmd {
 	unsigned long		flags;		/* ATA_QCFLAG_xxx */
 	unsigned int		tag;
 	unsigned int		n_elem;
+	unsigned int		n_iter;
 	unsigned int		orig_n_elem;
 
 	int			dma_dir;
@@ -396,7 +397,7 @@ struct ata_queued_cmd {
 	unsigned int		nbytes;
 	unsigned int		curbytes;
 
-	unsigned int		cursg;
+	struct scatterlist	*cursg;
 	unsigned int		cursg_ofs;
 
 	struct scatterlist	sgent;
@@ -943,7 +944,7 @@ ata_sg_is_last(struct scatterlist *sg, struct ata_queued_cmd *qc)
 		return 1;
 	if (qc->pad_len)
 		return 0;
-	if (((sg - qc->__sg) + 1) == qc->n_elem)
+	if (qc->n_iter == qc->n_elem)
 		return 1;
 	return 0;
 }
@@ -951,6 +952,7 @@ ata_sg_is_last(struct scatterlist *sg, struct ata_queued_cmd *qc)
 static inline struct scatterlist *
 ata_qc_first_sg(struct ata_queued_cmd *qc)
 {
+	qc->n_iter = 0;
 	if (qc->n_elem)
 		return qc->__sg;
 	if (qc->pad_len)
@@ -963,8 +965,8 @@ ata_qc_next_sg(struct scatterlist *sg, struct ata_queued_cmd *qc)
 {
 	if (sg == &qc->pad_sgent)
 		return NULL;
-	if (++sg - qc->__sg < qc->n_elem)
-		return sg;
+	if (++qc->n_iter < qc->n_elem)
+		return sg_next(sg);
 	if (qc->pad_len)
 		return &qc->pad_sgent;
 	return NULL;
@@ -1158,9 +1160,11 @@ static inline void ata_qc_reinit(struct ata_queued_cmd *qc)
 	qc->dma_dir = DMA_NONE;
 	qc->__sg = NULL;
 	qc->flags = 0;
-	qc->cursg = qc->cursg_ofs = 0;
+	qc->cursg = NULL;
+	qc->cursg_ofs = 0;
 	qc->nbytes = qc->curbytes = 0;
 	qc->n_elem = 0;
+	qc->n_iter = 0;
 	qc->err_mask = 0;
 	qc->pad_len = 0;
 	qc->sect_size = ATA_SECT_SIZE;
-- 
1.5.3.rc0.90.gbaa79


^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH 19/33] scsi_debug: support sg chaining
  2007-07-16  9:47 [PATCH 00/33] SG table chaining support Jens Axboe
                   ` (17 preceding siblings ...)
  2007-07-16  9:47 ` [PATCH 18/33] libata: convert to using sg helpers Jens Axboe
@ 2007-07-16  9:47 ` Jens Axboe
  2007-07-16  9:47 ` [PATCH 20/33] scsi generic: sg chaining support Jens Axboe
                   ` (16 subsequent siblings)
  35 siblings, 0 replies; 73+ messages in thread
From: Jens Axboe @ 2007-07-16  9:47 UTC (permalink / raw)
  To: linux-kernel, linux-scsi; +Cc: Jens Axboe, dgilbert

Cc: dgilbert@interlog.com
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
 drivers/scsi/scsi_debug.c |   30 ++++++++++++++++--------------
 1 files changed, 16 insertions(+), 14 deletions(-)

diff --git a/drivers/scsi/scsi_debug.c b/drivers/scsi/scsi_debug.c
index 4cd9c58..46a3e07 100644
--- a/drivers/scsi/scsi_debug.c
+++ b/drivers/scsi/scsi_debug.c
@@ -38,6 +38,7 @@
 #include <linux/proc_fs.h>
 #include <linux/vmalloc.h>
 #include <linux/moduleparam.h>
+#include <linux/scatterlist.h>
 
 #include <linux/blkdev.h>
 #include "scsi.h"
@@ -600,7 +601,7 @@ static int fill_from_dev_buffer(struct scsi_cmnd * scp, unsigned char * arr,
 	int k, req_len, act_len, len, active;
 	void * kaddr;
 	void * kaddr_off;
-	struct scatterlist * sgpnt;
+	struct scatterlist * sg;
 
 	if (0 == scp->request_bufflen)
 		return 0;
@@ -619,16 +620,16 @@ static int fill_from_dev_buffer(struct scsi_cmnd * scp, unsigned char * arr,
 			scp->resid = req_len - act_len;
 		return 0;
 	}
-	sgpnt = (struct scatterlist *)scp->request_buffer;
 	active = 1;
-	for (k = 0, req_len = 0, act_len = 0; k < scp->use_sg; ++k, ++sgpnt) {
+	req_len = act_len = 0;
+	scsi_for_each_sg(scp, sg, scp->use_sg, k) {
 		if (active) {
 			kaddr = (unsigned char *)
-				kmap_atomic(sgpnt->page, KM_USER0);
+				kmap_atomic(sg->page, KM_USER0);
 			if (NULL == kaddr)
 				return (DID_ERROR << 16);
-			kaddr_off = (unsigned char *)kaddr + sgpnt->offset;
-			len = sgpnt->length;
+			kaddr_off = (unsigned char *)kaddr + sg->offset;
+			len = sg->length;
 			if ((req_len + len) > arr_len) {
 				active = 0;
 				len = arr_len - req_len;
@@ -637,7 +638,7 @@ static int fill_from_dev_buffer(struct scsi_cmnd * scp, unsigned char * arr,
 			kunmap_atomic(kaddr, KM_USER0);
 			act_len += len;
 		}
-		req_len += sgpnt->length;
+		req_len += sg->length;
 	}
 	if (scp->resid)
 		scp->resid -= act_len;
@@ -653,7 +654,7 @@ static int fetch_to_dev_buffer(struct scsi_cmnd * scp, unsigned char * arr,
 	int k, req_len, len, fin;
 	void * kaddr;
 	void * kaddr_off;
-	struct scatterlist * sgpnt;
+	struct scatterlist * sg;
 
 	if (0 == scp->request_bufflen)
 		return 0;
@@ -668,13 +669,14 @@ static int fetch_to_dev_buffer(struct scsi_cmnd * scp, unsigned char * arr,
 		memcpy(arr, scp->request_buffer, len);
 		return len;
 	}
-	sgpnt = (struct scatterlist *)scp->request_buffer;
-	for (k = 0, req_len = 0, fin = 0; k < scp->use_sg; ++k, ++sgpnt) {
-		kaddr = (unsigned char *)kmap_atomic(sgpnt->page, KM_USER0);
+	sg = scsi_sglist(scp);
+	req_len = fin = 0;
+	for (k = 0; k < scp->use_sg; ++k, sg = sg_next(sg)) {
+		kaddr = (unsigned char *)kmap_atomic(sg->page, KM_USER0);
 		if (NULL == kaddr)
 			return -1;
-		kaddr_off = (unsigned char *)kaddr + sgpnt->offset;
-		len = sgpnt->length;
+		kaddr_off = (unsigned char *)kaddr + sg->offset;
+		len = sg->length;
 		if ((req_len + len) > max_arr_len) {
 			len = max_arr_len - req_len;
 			fin = 1;
@@ -683,7 +685,7 @@ static int fetch_to_dev_buffer(struct scsi_cmnd * scp, unsigned char * arr,
 		kunmap_atomic(kaddr, KM_USER0);
 		if (fin)
 			return req_len + len;
-		req_len += sgpnt->length;
+		req_len += sg->length;
 	}
 	return req_len;
 }
-- 
1.5.3.rc0.90.gbaa79


^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH 20/33] scsi generic: sg chaining support
  2007-07-16  9:47 [PATCH 00/33] SG table chaining support Jens Axboe
                   ` (18 preceding siblings ...)
  2007-07-16  9:47 ` [PATCH 19/33] scsi_debug: support sg chaining Jens Axboe
@ 2007-07-16  9:47 ` Jens Axboe
  2007-07-16  9:47 ` [PATCH 21/33] qla1280: " Jens Axboe
                   ` (15 subsequent siblings)
  35 siblings, 0 replies; 73+ messages in thread
From: Jens Axboe @ 2007-07-16  9:47 UTC (permalink / raw)
  To: linux-kernel, linux-scsi; +Cc: Jens Axboe, dgilbert

Cc: dgilbert@interlog.com
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
 drivers/scsi/sg.c |   16 ++++++++--------
 1 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/drivers/scsi/sg.c b/drivers/scsi/sg.c
index 85d3894..2fc24e7 100644
--- a/drivers/scsi/sg.c
+++ b/drivers/scsi/sg.c
@@ -1168,7 +1168,7 @@ sg_vma_nopage(struct vm_area_struct *vma, unsigned long addr, int *type)
 	sg = rsv_schp->buffer;
 	sa = vma->vm_start;
 	for (k = 0; (k < rsv_schp->k_use_sg) && (sa < vma->vm_end);
-	     ++k, ++sg) {
+	     ++k, sg = sg_next(sg)) {
 		len = vma->vm_end - sa;
 		len = (len < sg->length) ? len : sg->length;
 		if (offset < len) {
@@ -1212,7 +1212,7 @@ sg_mmap(struct file *filp, struct vm_area_struct *vma)
 	sa = vma->vm_start;
 	sg = rsv_schp->buffer;
 	for (k = 0; (k < rsv_schp->k_use_sg) && (sa < vma->vm_end);
-	     ++k, ++sg) {
+	     ++k, sg = sg_next(sg)) {
 		len = vma->vm_end - sa;
 		len = (len < sg->length) ? len : sg->length;
 		sa += len;
@@ -1866,7 +1866,7 @@ sg_build_indirect(Sg_scatter_hold * schp, Sg_fd * sfp, int buff_size)
 	}
 	for (k = 0, sg = schp->buffer, rem_sz = blk_size;
 	     (rem_sz > 0) && (k < mx_sc_elems);
-	     ++k, rem_sz -= ret_sz, ++sg) {
+	     ++k, rem_sz -= ret_sz, sg = sg_next(sg)) {
 		
 		num = (rem_sz > scatter_elem_sz_prev) ?
 		      scatter_elem_sz_prev : rem_sz;
@@ -1939,7 +1939,7 @@ sg_write_xfer(Sg_request * srp)
 		if (res)
 			return res;
 
-		for (; p; ++sg, ksglen = sg->length,
+		for (; p; sg = sg_next(sg), ksglen = sg->length,
 		     p = page_address(sg->page)) {
 			if (usglen <= 0)
 				break;
@@ -2018,7 +2018,7 @@ sg_remove_scat(Sg_scatter_hold * schp)
 			int k;
 
 			for (k = 0; (k < schp->k_use_sg) && sg->page;
-			     ++k, ++sg) {
+			     ++k, sg = sg_next(sg)) {
 				SCSI_LOG_TIMEOUT(5, printk(
 				    "sg_remove_scat: k=%d, pg=0x%p, len=%d\n",
 				    k, sg->page, sg->length));
@@ -2071,7 +2071,7 @@ sg_read_xfer(Sg_request * srp)
 		if (res)
 			return res;
 
-		for (; p; ++sg, ksglen = sg->length,
+		for (; p; sg = sg_next(sg), ksglen = sg->length,
 		     p = page_address(sg->page)) {
 			if (usglen <= 0)
 				break;
@@ -2118,7 +2118,7 @@ sg_read_oxfer(Sg_request * srp, char __user *outp, int num_read_xfer)
 	if ((!outp) || (num_read_xfer <= 0))
 		return 0;
 
-	for (k = 0; (k < schp->k_use_sg) && sg->page; ++k, ++sg) {
+	for (k = 0; (k < schp->k_use_sg) && sg->page; ++k, sg = sg_next(sg)) {
 		num = sg->length;
 		if (num > num_read_xfer) {
 			if (__copy_to_user(outp, page_address(sg->page),
@@ -2168,7 +2168,7 @@ sg_link_reserve(Sg_fd * sfp, Sg_request * srp, int size)
 	SCSI_LOG_TIMEOUT(4, printk("sg_link_reserve: size=%d\n", size));
 	rem = size;
 
-	for (k = 0; k < rsv_schp->k_use_sg; ++k, ++sg) {
+	for (k = 0; k < rsv_schp->k_use_sg; ++k, sg = sg_next(sg)) {
 		num = sg->length;
 		if (rem <= num) {
 			sfp->save_scat_len = num;
-- 
1.5.3.rc0.90.gbaa79


^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH 21/33] qla1280: sg chaining support
  2007-07-16  9:47 [PATCH 00/33] SG table chaining support Jens Axboe
                   ` (19 preceding siblings ...)
  2007-07-16  9:47 ` [PATCH 20/33] scsi generic: sg chaining support Jens Axboe
@ 2007-07-16  9:47 ` Jens Axboe
  2007-07-16  9:47 ` [PATCH 22/33] aic94xx: " Jens Axboe
                   ` (14 subsequent siblings)
  35 siblings, 0 replies; 73+ messages in thread
From: Jens Axboe @ 2007-07-16  9:47 UTC (permalink / raw)
  To: linux-kernel, linux-scsi; +Cc: Jens Axboe, hch

Interesting hardware setup...

Cc: hch@infradead.org
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
 drivers/scsi/qla1280.c |   66 +++++++++++++++++++++++++++---------------------
 1 files changed, 37 insertions(+), 29 deletions(-)

diff --git a/drivers/scsi/qla1280.c b/drivers/scsi/qla1280.c
index 54d8bdf..bd805ec 100644
--- a/drivers/scsi/qla1280.c
+++ b/drivers/scsi/qla1280.c
@@ -2775,7 +2775,7 @@ qla1280_64bit_start_scsi(struct scsi_qla_host *ha, struct srb * sp)
 	struct device_reg __iomem *reg = ha->iobase;
 	struct scsi_cmnd *cmd = sp->cmd;
 	cmd_a64_entry_t *pkt;
-	struct scatterlist *sg = NULL;
+	struct scatterlist *sg = NULL, *s;
 	__le32 *dword_ptr;
 	dma_addr_t dma_handle;
 	int status = 0;
@@ -2889,13 +2889,16 @@ qla1280_64bit_start_scsi(struct scsi_qla_host *ha, struct srb * sp)
 	 * Load data segments.
 	 */
 	if (seg_cnt) {	/* If data transfer. */
+		int remseg = seg_cnt;
 		/* Setup packet address segment pointer. */
 		dword_ptr = (u32 *)&pkt->dseg_0_address;
 
 		if (cmd->use_sg) {	/* If scatter gather */
 			/* Load command entry data segments. */
-			for (cnt = 0; cnt < 2 && seg_cnt; cnt++, seg_cnt--) {
-				dma_handle = sg_dma_address(sg);
+			for_each_sg(sg, s, seg_cnt, cnt) {
+				if (cnt == 2)
+					break;
+				dma_handle = sg_dma_address(s);
 #if defined(CONFIG_IA64_GENERIC) || defined(CONFIG_IA64_SGI_SN2)
 				if (ha->flags.use_pci_vchannel)
 					sn_pci_set_vchan(ha->pdev,
@@ -2906,12 +2909,12 @@ qla1280_64bit_start_scsi(struct scsi_qla_host *ha, struct srb * sp)
 					cpu_to_le32(pci_dma_lo32(dma_handle));
 				*dword_ptr++ =
 					cpu_to_le32(pci_dma_hi32(dma_handle));
-				*dword_ptr++ = cpu_to_le32(sg_dma_len(sg));
-				sg++;
+				*dword_ptr++ = cpu_to_le32(sg_dma_len(s));
 				dprintk(3, "S/G Segment phys_addr=%x %x, len=0x%x\n",
 					cpu_to_le32(pci_dma_hi32(dma_handle)),
 					cpu_to_le32(pci_dma_lo32(dma_handle)),
-					cpu_to_le32(sg_dma_len(sg)));
+					cpu_to_le32(sg_dma_len(s)));
+				remseg--;
 			}
 			dprintk(5, "qla1280_64bit_start_scsi: Scatter/gather "
 				"command packet data - b %i, t %i, l %i \n",
@@ -2926,7 +2929,9 @@ qla1280_64bit_start_scsi(struct scsi_qla_host *ha, struct srb * sp)
 			dprintk(3, "S/G Building Continuation...seg_cnt=0x%x "
 				"remains\n", seg_cnt);
 
-			while (seg_cnt > 0) {
+			while (remseg > 0) {
+				/* Update sg start */
+				sg = s;
 				/* Adjust ring index. */
 				ha->req_ring_index++;
 				if (ha->req_ring_index == REQUEST_ENTRY_CNT) {
@@ -2952,9 +2957,10 @@ qla1280_64bit_start_scsi(struct scsi_qla_host *ha, struct srb * sp)
 					(u32 *)&((struct cont_a64_entry *) pkt)->dseg_0_address;
 
 				/* Load continuation entry data segments. */
-				for (cnt = 0; cnt < 5 && seg_cnt;
-				     cnt++, seg_cnt--) {
-					dma_handle = sg_dma_address(sg);
+				for_each_sg(sg, s, remseg, cnt) {
+					if (cnt == 5)
+						break;
+					dma_handle = sg_dma_address(s);
 #if defined(CONFIG_IA64_GENERIC) || defined(CONFIG_IA64_SGI_SN2)
 				if (ha->flags.use_pci_vchannel)
 					sn_pci_set_vchan(ha->pdev, 
@@ -2966,12 +2972,12 @@ qla1280_64bit_start_scsi(struct scsi_qla_host *ha, struct srb * sp)
 					*dword_ptr++ =
 						cpu_to_le32(pci_dma_hi32(dma_handle));
 					*dword_ptr++ =
-						cpu_to_le32(sg_dma_len(sg));
+						cpu_to_le32(sg_dma_len(s));
 					dprintk(3, "S/G Segment Cont. phys_addr=%x %x, len=0x%x\n",
 						cpu_to_le32(pci_dma_hi32(dma_handle)),
 						cpu_to_le32(pci_dma_lo32(dma_handle)),
-						cpu_to_le32(sg_dma_len(sg)));
-					sg++;
+						cpu_to_le32(sg_dma_len(s)));
+					remseg--;
 				}
 				dprintk(5, "qla1280_64bit_start_scsi: "
 					"continuation packet data - b %i, t "
@@ -3062,7 +3068,7 @@ qla1280_32bit_start_scsi(struct scsi_qla_host *ha, struct srb * sp)
 	struct device_reg __iomem *reg = ha->iobase;
 	struct scsi_cmnd *cmd = sp->cmd;
 	struct cmd_entry *pkt;
-	struct scatterlist *sg = NULL;
+	struct scatterlist *sg = NULL, *s;
 	__le32 *dword_ptr;
 	int status = 0;
 	int cnt;
@@ -3188,6 +3194,7 @@ qla1280_32bit_start_scsi(struct scsi_qla_host *ha, struct srb * sp)
 	 * Load data segments.
 	 */
 	if (seg_cnt) {
+		int remseg = seg_cnt;
 		/* Setup packet address segment pointer. */
 		dword_ptr = &pkt->dseg_0_address;
 
@@ -3196,22 +3203,25 @@ qla1280_32bit_start_scsi(struct scsi_qla_host *ha, struct srb * sp)
 			qla1280_dump_buffer(1, (char *)sg, 4 * 16);
 
 			/* Load command entry data segments. */
-			for (cnt = 0; cnt < 4 && seg_cnt; cnt++, seg_cnt--) {
+			for_each_sg(sg, s, seg_cnt, cnt) {
+				if (cnt == 4)
+					break;
 				*dword_ptr++ =
-					cpu_to_le32(pci_dma_lo32(sg_dma_address(sg)));
-				*dword_ptr++ =
-					cpu_to_le32(sg_dma_len(sg));
+					cpu_to_le32(pci_dma_lo32(sg_dma_address(s)));
+				*dword_ptr++ = cpu_to_le32(sg_dma_len(s));
 				dprintk(3, "S/G Segment phys_addr=0x%lx, len=0x%x\n",
-					(pci_dma_lo32(sg_dma_address(sg))),
-					(sg_dma_len(sg)));
-				sg++;
+					(pci_dma_lo32(sg_dma_address(s))),
+					(sg_dma_len(s)));
+				remseg--;
 			}
 			/*
 			 * Build continuation packets.
 			 */
 			dprintk(3, "S/G Building Continuation"
 				"...seg_cnt=0x%x remains\n", seg_cnt);
-			while (seg_cnt > 0) {
+			while (remseg > 0) {
+				/* Continue from end point */
+				sg = s;
 				/* Adjust ring index. */
 				ha->req_ring_index++;
 				if (ha->req_ring_index == REQUEST_ENTRY_CNT) {
@@ -3239,18 +3249,16 @@ qla1280_32bit_start_scsi(struct scsi_qla_host *ha, struct srb * sp)
 					&((struct cont_entry *) pkt)->dseg_0_address;
 
 				/* Load continuation entry data segments. */
-				for (cnt = 0; cnt < 7 && seg_cnt;
-				     cnt++, seg_cnt--) {
+				for_each_sg(sg, s, remseg, cnt) {
 					*dword_ptr++ =
-						cpu_to_le32(pci_dma_lo32(sg_dma_address(sg)));
+						cpu_to_le32(pci_dma_lo32(sg_dma_address(s)));
 					*dword_ptr++ =
-						cpu_to_le32(sg_dma_len(sg));
+						cpu_to_le32(sg_dma_len(s));
 					dprintk(1,
 						"S/G Segment Cont. phys_addr=0x%x, "
 						"len=0x%x\n",
-						cpu_to_le32(pci_dma_lo32(sg_dma_address(sg))),
-						cpu_to_le32(sg_dma_len(sg)));
-					sg++;
+						cpu_to_le32(pci_dma_lo32(sg_dma_address(s))),
+						cpu_to_le32(sg_dma_len(s)));
 				}
 				dprintk(5, "qla1280_32bit_start_scsi: "
 					"continuation packet data - "
-- 
1.5.3.rc0.90.gbaa79


^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH 22/33] aic94xx: sg chaining support
  2007-07-16  9:47 [PATCH 00/33] SG table chaining support Jens Axboe
                   ` (20 preceding siblings ...)
  2007-07-16  9:47 ` [PATCH 21/33] qla1280: " Jens Axboe
@ 2007-07-16  9:47 ` Jens Axboe
  2007-07-16  9:47 ` [PATCH 23/33] qlogicpti: " Jens Axboe
                   ` (13 subsequent siblings)
  35 siblings, 0 replies; 73+ messages in thread
From: Jens Axboe @ 2007-07-16  9:47 UTC (permalink / raw)
  To: linux-kernel, linux-scsi; +Cc: Jens Axboe

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
 drivers/scsi/aic94xx/aic94xx_task.c |    6 +++---
 1 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/scsi/aic94xx/aic94xx_task.c b/drivers/scsi/aic94xx/aic94xx_task.c
index e2ad5be..1327281 100644
--- a/drivers/scsi/aic94xx/aic94xx_task.c
+++ b/drivers/scsi/aic94xx/aic94xx_task.c
@@ -89,7 +89,7 @@ static inline int asd_map_scatterlist(struct sas_task *task,
 			res = -ENOMEM;
 			goto err_unmap;
 		}
-		for (sc = task->scatter, i = 0; i < num_sg; i++, sc++) {
+		for_each_sg(task->scatter, sc, num_sg, i) {
 			struct sg_el *sg =
 				&((struct sg_el *)ascb->sg_arr->vaddr)[i];
 			sg->bus_addr = cpu_to_le64((u64)sg_dma_address(sc));
@@ -98,7 +98,7 @@ static inline int asd_map_scatterlist(struct sas_task *task,
 				sg->flags |= ASD_SG_EL_LIST_EOL;
 		}
 
-		for (sc = task->scatter, i = 0; i < 2; i++, sc++) {
+		for_each_sg(task->scatter, sc, 2, i) {
 			sg_arr[i].bus_addr =
 				cpu_to_le64((u64)sg_dma_address(sc));
 			sg_arr[i].size = cpu_to_le32((u32)sg_dma_len(sc));
@@ -110,7 +110,7 @@ static inline int asd_map_scatterlist(struct sas_task *task,
 		sg_arr[2].bus_addr=cpu_to_le64((u64)ascb->sg_arr->dma_handle);
 	} else {
 		int i;
-		for (sc = task->scatter, i = 0; i < num_sg; i++, sc++) {
+		for_each_sg(task->scatter, sc, num_sg, i) {
 			sg_arr[i].bus_addr =
 				cpu_to_le64((u64)sg_dma_address(sc));
 			sg_arr[i].size = cpu_to_le32((u32)sg_dma_len(sc));
-- 
1.5.3.rc0.90.gbaa79


^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH 23/33] qlogicpti: sg chaining support
  2007-07-16  9:47 [PATCH 00/33] SG table chaining support Jens Axboe
                   ` (21 preceding siblings ...)
  2007-07-16  9:47 ` [PATCH 22/33] aic94xx: " Jens Axboe
@ 2007-07-16  9:47 ` Jens Axboe
  2007-07-16  9:47 ` [PATCH 24/33] ide-scsi: " Jens Axboe
                   ` (12 subsequent siblings)
  35 siblings, 0 replies; 73+ messages in thread
From: Jens Axboe @ 2007-07-16  9:47 UTC (permalink / raw)
  To: linux-kernel, linux-scsi; +Cc: Jens Axboe, davem

Cc: davem@davemloft.net
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
 drivers/scsi/qlogicpti.c |   15 ++++++++-------
 1 files changed, 8 insertions(+), 7 deletions(-)

diff --git a/drivers/scsi/qlogicpti.c b/drivers/scsi/qlogicpti.c
index c4195ea..e36e6cd 100644
--- a/drivers/scsi/qlogicpti.c
+++ b/drivers/scsi/qlogicpti.c
@@ -868,7 +868,7 @@ static inline int load_cmd(struct scsi_cmnd *Cmnd, struct Command_Entry *cmd,
 			   struct qlogicpti *qpti, u_int in_ptr, u_int out_ptr)
 {
 	struct dataseg *ds;
-	struct scatterlist *sg;
+	struct scatterlist *sg, *s;
 	int i, n;
 
 	if (Cmnd->use_sg) {
@@ -884,11 +884,12 @@ static inline int load_cmd(struct scsi_cmnd *Cmnd, struct Command_Entry *cmd,
 		n = sg_count;
 		if (n > 4)
 			n = 4;
-		for (i = 0; i < n; i++, sg++) {
-			ds[i].d_base = sg_dma_address(sg);
-			ds[i].d_count = sg_dma_len(sg);
+		for_each_sg(sg, s, n, i) {
+			ds[i].d_base = sg_dma_address(s);
+			ds[i].d_count = sg_dma_len(s);
 		}
 		sg_count -= 4;
+		sg = s;
 		while (sg_count > 0) {
 			struct Continuation_Entry *cont;
 
@@ -907,9 +908,9 @@ static inline int load_cmd(struct scsi_cmnd *Cmnd, struct Command_Entry *cmd,
 			n = sg_count;
 			if (n > 7)
 				n = 7;
-			for (i = 0; i < n; i++, sg++) {
-				ds[i].d_base = sg_dma_address(sg);
-				ds[i].d_count = sg_dma_len(sg);
+			for_each_sg(sg, s, n, i) {
+				ds[i].d_base = sg_dma_address(s);
+				ds[i].d_count = sg_dma_len(s);
 			}
 			sg_count -= n;
 		}
-- 
1.5.3.rc0.90.gbaa79


^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH 24/33] ide-scsi: sg chaining support
  2007-07-16  9:47 [PATCH 00/33] SG table chaining support Jens Axboe
                   ` (22 preceding siblings ...)
  2007-07-16  9:47 ` [PATCH 23/33] qlogicpti: " Jens Axboe
@ 2007-07-16  9:47 ` Jens Axboe
  2007-07-18 21:03   ` Bartlomiej Zolnierkiewicz
  2007-07-16  9:47 ` [PATCH 25/33] gdth: " Jens Axboe
                   ` (11 subsequent siblings)
  35 siblings, 1 reply; 73+ messages in thread
From: Jens Axboe @ 2007-07-16  9:47 UTC (permalink / raw)
  To: linux-kernel, linux-scsi; +Cc: Jens Axboe, bzolnier

Cc: bzolnier@gmail.com
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
 drivers/scsi/ide-scsi.c |   31 +++++++++++++++++++------------
 1 files changed, 19 insertions(+), 12 deletions(-)

diff --git a/drivers/scsi/ide-scsi.c b/drivers/scsi/ide-scsi.c
index bb90df8..61bcc31 100644
--- a/drivers/scsi/ide-scsi.c
+++ b/drivers/scsi/ide-scsi.c
@@ -70,6 +70,7 @@ typedef struct idescsi_pc_s {
 	u8 *buffer;				/* Data buffer */
 	u8 *current_position;			/* Pointer into the above buffer */
 	struct scatterlist *sg;			/* Scatter gather table */
+	struct scatterlist *last_sg;		/* Last sg element */
 	int b_count;				/* Bytes transferred from current entry */
 	struct scsi_cmnd *scsi_cmd;		/* SCSI command */
 	void (*done)(struct scsi_cmnd *);	/* Scsi completion routine */
@@ -175,11 +176,6 @@ static void idescsi_input_buffers (ide_drive_t *drive, idescsi_pc_t *pc, unsigne
 	char *buf;
 
 	while (bcount) {
-		if (pc->sg - (struct scatterlist *) pc->scsi_cmd->request_buffer > pc->scsi_cmd->use_sg) {
-			printk (KERN_ERR "ide-scsi: scatter gather table too small, discarding data\n");
-			idescsi_discard_data (drive, bcount);
-			return;
-		}
 		count = min(pc->sg->length - pc->b_count, bcount);
 		if (PageHighMem(pc->sg->page)) {
 			unsigned long flags;
@@ -198,10 +194,17 @@ static void idescsi_input_buffers (ide_drive_t *drive, idescsi_pc_t *pc, unsigne
 		}
 		bcount -= count; pc->b_count += count;
 		if (pc->b_count == pc->sg->length) {
-			pc->sg++;
+			if (pc->sg == pc->last_sg)
+				break;
+			pc->sg = sg_next(pc->sg);
 			pc->b_count = 0;
 		}
 	}
+
+	if (bcount) {
+		printk (KERN_ERR "ide-scsi: scatter gather table too small, discarding data\n");
+		idescsi_discard_data (drive, bcount);
+	}
 }
 
 static void idescsi_output_buffers (ide_drive_t *drive, idescsi_pc_t *pc, unsigned int bcount)
@@ -210,11 +213,6 @@ static void idescsi_output_buffers (ide_drive_t *drive, idescsi_pc_t *pc, unsign
 	char *buf;
 
 	while (bcount) {
-		if (pc->sg - (struct scatterlist *) pc->scsi_cmd->request_buffer > pc->scsi_cmd->use_sg) {
-			printk (KERN_ERR "ide-scsi: scatter gather table too small, padding with zeros\n");
-			idescsi_output_zeros (drive, bcount);
-			return;
-		}
 		count = min(pc->sg->length - pc->b_count, bcount);
 		if (PageHighMem(pc->sg->page)) {
 			unsigned long flags;
@@ -233,10 +231,17 @@ static void idescsi_output_buffers (ide_drive_t *drive, idescsi_pc_t *pc, unsign
 		}
 		bcount -= count; pc->b_count += count;
 		if (pc->b_count == pc->sg->length) {
-			pc->sg++;
+			if (pc->sg == pc->last_sg)
+				break;
+			pc->sg = sg_next(pc->sg);
 			pc->b_count = 0;
 		}
 	}
+
+	if (bcount) {
+		printk (KERN_ERR "ide-scsi: scatter gather table too small, padding with zeros\n");
+		idescsi_output_zeros (drive, bcount);
+	}
 }
 
 /*
@@ -910,9 +915,11 @@ static int idescsi_queue (struct scsi_cmnd *cmd,
 	if (cmd->use_sg) {
 		pc->buffer = NULL;
 		pc->sg = cmd->request_buffer;
+		pc->last_sg = sg_last(pc->sg, cmd->use_sg);
 	} else {
 		pc->buffer = cmd->request_buffer;
 		pc->sg = NULL;
+		pc->last_sg = NULL;
 	}
 	pc->b_count = 0;
 	pc->request_transfer = pc->buffer_size = cmd->request_bufflen;
-- 
1.5.3.rc0.90.gbaa79


^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH 25/33] gdth: sg chaining support
  2007-07-16  9:47 [PATCH 00/33] SG table chaining support Jens Axboe
                   ` (23 preceding siblings ...)
  2007-07-16  9:47 ` [PATCH 24/33] ide-scsi: " Jens Axboe
@ 2007-07-16  9:47 ` Jens Axboe
  2007-07-16  9:47 ` [PATCH 26/33] aha1542: convert to use the data buffer accessors Jens Axboe
                   ` (10 subsequent siblings)
  35 siblings, 0 replies; 73+ messages in thread
From: Jens Axboe @ 2007-07-16  9:47 UTC (permalink / raw)
  To: linux-kernel, linux-scsi; +Cc: Jens Axboe, achim_leubner

Cc: achim_leubner@adaptec.com
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
 drivers/scsi/gdth.c |   45 +++++++++++++++++++++++----------------------
 1 files changed, 23 insertions(+), 22 deletions(-)

diff --git a/drivers/scsi/gdth.c b/drivers/scsi/gdth.c
index d0b95ce..7225c95 100644
--- a/drivers/scsi/gdth.c
+++ b/drivers/scsi/gdth.c
@@ -2656,7 +2656,7 @@ static void gdth_copy_internal_data(int hanum,Scsi_Cmnd *scp,
 {
     ushort cpcount,i;
     ushort cpsum,cpnow;
-    struct scatterlist *sl;
+    struct scatterlist *sl, *sg;
     gdth_ha_str *ha;
     char *address;
 
@@ -2665,29 +2665,30 @@ static void gdth_copy_internal_data(int hanum,Scsi_Cmnd *scp,
 
     if (scp->use_sg) {
         sl = (struct scatterlist *)scp->request_buffer;
-        for (i=0,cpsum=0; i<scp->use_sg; ++i,++sl) {
+	cpsum = 0;
+	for_each_sg(sl, sg, scp->use_sg, i) {
             unsigned long flags;
-            cpnow = (ushort)sl->length;
+            cpnow = (ushort)sg->length;
             TRACE(("copy_internal() now %d sum %d count %d %d\n",
                           cpnow,cpsum,cpcount,(ushort)scp->bufflen));
             if (cpsum+cpnow > cpcount) 
                 cpnow = cpcount - cpsum;
             cpsum += cpnow;
-            if (!sl->page) {
+            if (!sg->page) {
                 printk("GDT-HA %d: invalid sc/gt element in gdth_copy_internal_data()\n",
                        hanum);
                 return;
             }
             local_irq_save(flags);
 #if LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,0)
-            address = kmap_atomic(sl->page, KM_BIO_SRC_IRQ) + sl->offset;
+            address = kmap_atomic(sg->page, KM_BIO_SRC_IRQ) + sg->offset;
             memcpy(address,buffer,cpnow);
-            flush_dcache_page(sl->page);
+            flush_dcache_page(sg->page);
             kunmap_atomic(address, KM_BIO_SRC_IRQ);
 #else
-            address = kmap_atomic(sl->page, KM_BH_IRQ) + sl->offset;
+            address = kmap_atomic(sg->page, KM_BH_IRQ) + sg->offset;
             memcpy(address,buffer,cpnow);
-            flush_dcache_page(sl->page);
+            flush_dcache_page(sg->page);
             kunmap_atomic(address, KM_BH_IRQ);
 #endif
             local_irq_restore(flags);
@@ -2807,7 +2808,7 @@ static int gdth_fill_cache_cmd(int hanum,Scsi_Cmnd *scp,ushort hdrive)
 {
     register gdth_ha_str *ha;
     register gdth_cmd_str *cmdp;
-    struct scatterlist *sl;
+    struct scatterlist *sl, *sg;
     ulong32 cnt, blockcnt;
     ulong64 no, blockno;
     dma_addr_t phys_addr;
@@ -2913,25 +2914,25 @@ static int gdth_fill_cache_cmd(int hanum,Scsi_Cmnd *scp,ushort hdrive)
             if (mode64) {
                 cmdp->u.cache64.DestAddr= (ulong64)-1;
                 cmdp->u.cache64.sg_canz = sgcnt;
-                for (i=0; i<sgcnt; ++i,++sl) {
-                    cmdp->u.cache64.sg_lst[i].sg_ptr = sg_dma_address(sl);
+		for_each_sg(sl, sg, sgcnt, i) {
+                    cmdp->u.cache64.sg_lst[i].sg_ptr = sg_dma_address(sg);
 #ifdef GDTH_DMA_STATISTICS
                     if (cmdp->u.cache64.sg_lst[i].sg_ptr > (ulong64)0xffffffff)
                         ha->dma64_cnt++;
                     else
                         ha->dma32_cnt++;
 #endif
-                    cmdp->u.cache64.sg_lst[i].sg_len = sg_dma_len(sl);
+                    cmdp->u.cache64.sg_lst[i].sg_len = sg_dma_len(sg);
                 }
             } else {
                 cmdp->u.cache.DestAddr= 0xffffffff;
                 cmdp->u.cache.sg_canz = sgcnt;
-                for (i=0; i<sgcnt; ++i,++sl) {
-                    cmdp->u.cache.sg_lst[i].sg_ptr = sg_dma_address(sl);
+		for_each_sg(sl, sg, sgcnt, i) {
+                    cmdp->u.cache.sg_lst[i].sg_ptr = sg_dma_address(sg);
 #ifdef GDTH_DMA_STATISTICS
                     ha->dma32_cnt++;
 #endif
-                    cmdp->u.cache.sg_lst[i].sg_len = sg_dma_len(sl);
+                    cmdp->u.cache.sg_lst[i].sg_len = sg_dma_len(sg);
                 }
             }
 
@@ -3017,7 +3018,7 @@ static int gdth_fill_raw_cmd(int hanum,Scsi_Cmnd *scp,unchar b)
 {
     register gdth_ha_str *ha;
     register gdth_cmd_str *cmdp;
-    struct scatterlist *sl;
+    struct scatterlist *sl, *sg;
     ushort i;
     dma_addr_t phys_addr, sense_paddr;
     int cmd_index, sgcnt, mode64;
@@ -3120,25 +3121,25 @@ static int gdth_fill_raw_cmd(int hanum,Scsi_Cmnd *scp,unchar b)
             if (mode64) {
                 cmdp->u.raw64.sdata = (ulong64)-1;
                 cmdp->u.raw64.sg_ranz = sgcnt;
-                for (i=0; i<sgcnt; ++i,++sl) {
-                    cmdp->u.raw64.sg_lst[i].sg_ptr = sg_dma_address(sl);
+		for_each_sg(sl, sg, sgcnt, i) {
+                    cmdp->u.raw64.sg_lst[i].sg_ptr = sg_dma_address(sg);
 #ifdef GDTH_DMA_STATISTICS
                     if (cmdp->u.raw64.sg_lst[i].sg_ptr > (ulong64)0xffffffff)
                         ha->dma64_cnt++;
                     else
                         ha->dma32_cnt++;
 #endif
-                    cmdp->u.raw64.sg_lst[i].sg_len = sg_dma_len(sl);
+                    cmdp->u.raw64.sg_lst[i].sg_len = sg_dma_len(sg);
                 }
             } else {
                 cmdp->u.raw.sdata = 0xffffffff;
                 cmdp->u.raw.sg_ranz = sgcnt;
-                for (i=0; i<sgcnt; ++i,++sl) {
-                    cmdp->u.raw.sg_lst[i].sg_ptr = sg_dma_address(sl);
+		for_each_sg(sl, sg, sgcnt, i) {
+                    cmdp->u.raw.sg_lst[i].sg_ptr = sg_dma_address(sg);
 #ifdef GDTH_DMA_STATISTICS
                     ha->dma32_cnt++;
 #endif
-                    cmdp->u.raw.sg_lst[i].sg_len = sg_dma_len(sl);
+                    cmdp->u.raw.sg_lst[i].sg_len = sg_dma_len(sg);
                 }
             }
 
-- 
1.5.3.rc0.90.gbaa79


^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH 26/33] aha1542: convert to use the data buffer accessors
  2007-07-16  9:47 [PATCH 00/33] SG table chaining support Jens Axboe
                   ` (24 preceding siblings ...)
  2007-07-16  9:47 ` [PATCH 25/33] gdth: " Jens Axboe
@ 2007-07-16  9:47 ` Jens Axboe
  2007-07-16  9:47 ` [PATCH 27/33] advansys: " Jens Axboe
                   ` (9 subsequent siblings)
  35 siblings, 0 replies; 73+ messages in thread
From: Jens Axboe @ 2007-07-16  9:47 UTC (permalink / raw)
  To: linux-kernel, linux-scsi; +Cc: Jens Axboe

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
 drivers/scsi/aha1542.c |   32 +++++++++++++++-----------------
 1 files changed, 15 insertions(+), 17 deletions(-)

diff --git a/drivers/scsi/aha1542.c b/drivers/scsi/aha1542.c
index cbbfbc9..961a188 100644
--- a/drivers/scsi/aha1542.c
+++ b/drivers/scsi/aha1542.c
@@ -61,15 +61,15 @@ static void BAD_DMA(void *address, unsigned int length)
 }
 
 static void BAD_SG_DMA(Scsi_Cmnd * SCpnt,
-		       struct scatterlist *sgpnt,
+		       struct scatterlist *sgp,
 		       int nseg,
 		       int badseg)
 {
 	printk(KERN_CRIT "sgpnt[%d:%d] page %p/0x%llx length %u\n",
 	       badseg, nseg,
-	       page_address(sgpnt[badseg].page) + sgpnt[badseg].offset,
-	       (unsigned long long)SCSI_SG_PA(&sgpnt[badseg]),
-	       sgpnt[badseg].length);
+	       page_address(sgp->page) + sgp->offset,
+	       (unsigned long long)SCSI_SG_PA(sgp),
+	       sgp->length);
 
 	/*
 	 * Not safe to continue.
@@ -691,7 +691,7 @@ static int aha1542_queuecommand(Scsi_Cmnd * SCpnt, void (*done) (Scsi_Cmnd *))
 	memcpy(ccb[mbo].cdb, cmd, ccb[mbo].cdblen);
 
 	if (SCpnt->use_sg) {
-		struct scatterlist *sgpnt;
+		struct scatterlist *sg;
 		struct chain *cptr;
 #ifdef DEBUG
 		unsigned char *ptr;
@@ -699,23 +699,21 @@ static int aha1542_queuecommand(Scsi_Cmnd * SCpnt, void (*done) (Scsi_Cmnd *))
 		int i;
 		ccb[mbo].op = 2;	/* SCSI Initiator Command  w/scatter-gather */
 		SCpnt->host_scribble = kmalloc(512, GFP_KERNEL | GFP_DMA);
-		sgpnt = (struct scatterlist *) SCpnt->request_buffer;
 		cptr = (struct chain *) SCpnt->host_scribble;
 		if (cptr == NULL) {
 			/* free the claimed mailbox slot */
 			HOSTDATA(SCpnt->device->host)->SCint[mbo] = NULL;
 			return SCSI_MLQUEUE_HOST_BUSY;
 		}
-		for (i = 0; i < SCpnt->use_sg; i++) {
-			if (sgpnt[i].length == 0 || SCpnt->use_sg > 16 ||
-			    (((int) sgpnt[i].offset) & 1) || (sgpnt[i].length & 1)) {
+		scsi_for_each_sg(SCpnt, sg, SCpnt->use_sg, i) {
+			if (sg->length == 0 || SCpnt->use_sg > 16 ||
+			    (((int) sg->offset) & 1) || (sg->length & 1)) {
 				unsigned char *ptr;
 				printk(KERN_CRIT "Bad segment list supplied to aha1542.c (%d, %d)\n", SCpnt->use_sg, i);
-				for (i = 0; i < SCpnt->use_sg; i++) {
+				scsi_for_each_sg(SCpnt, sg, SCpnt->use_sg, i) {
 					printk(KERN_CRIT "%d: %p %d\n", i,
-					       (page_address(sgpnt[i].page) +
-						sgpnt[i].offset),
-					       sgpnt[i].length);
+					       (page_address(sg->page) +
+						sg->offset), sg->length);
 				};
 				printk(KERN_CRIT "cptr %x: ", (unsigned int) cptr);
 				ptr = (unsigned char *) &cptr[i];
@@ -723,10 +721,10 @@ static int aha1542_queuecommand(Scsi_Cmnd * SCpnt, void (*done) (Scsi_Cmnd *))
 					printk("%02x ", ptr[i]);
 				panic("Foooooooood fight!");
 			};
-			any2scsi(cptr[i].dataptr, SCSI_SG_PA(&sgpnt[i]));
-			if (SCSI_SG_PA(&sgpnt[i]) + sgpnt[i].length - 1 > ISA_DMA_THRESHOLD)
-				BAD_SG_DMA(SCpnt, sgpnt, SCpnt->use_sg, i);
-			any2scsi(cptr[i].datalen, sgpnt[i].length);
+			any2scsi(cptr[i].dataptr, SCSI_SG_PA(sg));
+			if (SCSI_SG_PA(sg) + sg->length - 1 > ISA_DMA_THRESHOLD)
+				BAD_SG_DMA(SCpnt, sg, SCpnt->use_sg, i);
+			any2scsi(cptr[i].datalen, sg->length);
 		};
 		any2scsi(ccb[mbo].datalen, SCpnt->use_sg * sizeof(struct chain));
 		any2scsi(ccb[mbo].dataptr, SCSI_BUF_PA(cptr));
-- 
1.5.3.rc0.90.gbaa79


^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH 27/33] advansys: convert to use the data buffer accessors
  2007-07-16  9:47 [PATCH 00/33] SG table chaining support Jens Axboe
                   ` (25 preceding siblings ...)
  2007-07-16  9:47 ` [PATCH 26/33] aha1542: convert to use the data buffer accessors Jens Axboe
@ 2007-07-16  9:47 ` Jens Axboe
  2007-07-16  9:47 ` [PATCH 28/33] ia64 simscsi: convert to use " Jens Axboe
                   ` (8 subsequent siblings)
  35 siblings, 0 replies; 73+ messages in thread
From: Jens Axboe @ 2007-07-16  9:47 UTC (permalink / raw)
  To: linux-kernel, linux-scsi; +Cc: Jens Axboe

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
 drivers/scsi/advansys.c |   21 ++++++++++-----------
 1 files changed, 10 insertions(+), 11 deletions(-)

diff --git a/drivers/scsi/advansys.c b/drivers/scsi/advansys.c
index 2b66897..f61a0ee 100644
--- a/drivers/scsi/advansys.c
+++ b/drivers/scsi/advansys.c
@@ -6383,7 +6383,7 @@ asc_build_req(asc_board_t *boardp, struct scsi_cmnd *scp)
          */
         int                     sgcnt;
 	int			use_sg;
-        struct scatterlist      *slp;
+        struct scatterlist      *slp, *sg;
 
 	slp = (struct scatterlist *)scp->request_buffer;
 	use_sg = dma_map_sg(dev, slp, scp->use_sg, scp->sc_data_direction);
@@ -6417,10 +6417,10 @@ asc_build_req(asc_board_t *boardp, struct scsi_cmnd *scp)
         /*
          * Convert scatter-gather list into ASC_SG_HEAD list.
          */
-	for (sgcnt = 0; sgcnt < use_sg; sgcnt++, slp++) {
-	    asc_sg_head.sg_list[sgcnt].addr = cpu_to_le32(sg_dma_address(slp));
-	    asc_sg_head.sg_list[sgcnt].bytes = cpu_to_le32(sg_dma_len(slp));
-	    ASC_STATS_ADD(scp->device->host, sg_xfer, ASC_CEILING(sg_dma_len(slp), 512));
+	scsi_for_each_sg(scp, sg, use_sg, sgcnt) {
+	    asc_sg_head.sg_list[sgcnt].addr = cpu_to_le32(sg_dma_address(sg));
+	    asc_sg_head.sg_list[sgcnt].bytes = cpu_to_le32(sg_dma_len(sg));
+	    ASC_STATS_ADD(scp->device->host, sg_xfer, ASC_CEILING(sg_dma_len(sg), 512));
         }
     }
 
@@ -6615,7 +6615,7 @@ adv_get_sglist(asc_board_t *boardp, adv_req_t *reqp, struct scsi_cmnd *scp, int
 {
     adv_sgblk_t         *sgblkp;
     ADV_SCSI_REQ_Q      *scsiqp;
-    struct scatterlist  *slp;
+    struct scatterlist  *slp, *sg;
     int                 sg_elem_cnt;
     ADV_SG_BLOCK        *sg_block, *prev_sg_block;
     ADV_PADDR           sg_block_paddr;
@@ -6693,11 +6693,11 @@ adv_get_sglist(asc_board_t *boardp, adv_req_t *reqp, struct scsi_cmnd *scp, int
             }
         }
 
-        for (i = 0; i < NO_OF_SG_PER_BLOCK; i++)
+	scsi_for_each_sg(scp, sg, NO_OF_SG_PER_BLOCK, i)
         {
-	    sg_block->sg_list[i].sg_addr = cpu_to_le32(sg_dma_address(slp));
-	    sg_block->sg_list[i].sg_count = cpu_to_le32(sg_dma_len(slp));
-	    ASC_STATS_ADD(scp->device->host, sg_xfer, ASC_CEILING(sg_dma_len(slp), 512));
+	    sg_block->sg_list[i].sg_addr = cpu_to_le32(sg_dma_address(sg));
+	    sg_block->sg_list[i].sg_count = cpu_to_le32(sg_dma_len(sg));
+	    ASC_STATS_ADD(scp->device->host, sg_xfer, ASC_CEILING(sg_dma_len(sg), 512));
 
             if (--sg_elem_cnt == 0)
             {   /* Last ADV_SG_BLOCK and scatter-gather entry. */
@@ -6705,7 +6705,6 @@ adv_get_sglist(asc_board_t *boardp, adv_req_t *reqp, struct scsi_cmnd *scp, int
                 sg_block->sg_ptr = 0L;    /* Last ADV_SG_BLOCK in list. */
                 return ADV_SUCCESS;
             }
-            slp++;
         }
         sg_block->sg_cnt = NO_OF_SG_PER_BLOCK;
         prev_sg_block = sg_block;
-- 
1.5.3.rc0.90.gbaa79


^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH 28/33] ia64 simscsi: convert to use data buffer accessors
  2007-07-16  9:47 [PATCH 00/33] SG table chaining support Jens Axboe
                   ` (26 preceding siblings ...)
  2007-07-16  9:47 ` [PATCH 27/33] advansys: " Jens Axboe
@ 2007-07-16  9:47 ` Jens Axboe
  2007-07-16  9:47 ` [PATCH 29/33] infiniband: sg chaining support Jens Axboe
                   ` (7 subsequent siblings)
  35 siblings, 0 replies; 73+ messages in thread
From: Jens Axboe @ 2007-07-16  9:47 UTC (permalink / raw)
  To: linux-kernel, linux-scsi; +Cc: Jens Axboe, tony.luck

Cc: tony.luck@intel.com
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
 arch/ia64/hp/sim/simscsi.c |   22 +++++++++++++---------
 1 files changed, 13 insertions(+), 9 deletions(-)

diff --git a/arch/ia64/hp/sim/simscsi.c b/arch/ia64/hp/sim/simscsi.c
index bb87682..aedce13 100644
--- a/arch/ia64/hp/sim/simscsi.c
+++ b/arch/ia64/hp/sim/simscsi.c
@@ -173,7 +173,7 @@ simscsi_sg_readwrite (struct scsi_cmnd *sc, int mode, unsigned long offset)
 			return;
 		}
 		offset +=  sl->length;
-		sl++;
+		sl = sg_next(sl);
 		list_len--;
 	}
 	sc->result = GOOD;
@@ -239,18 +239,22 @@ simscsi_readwrite10 (struct scsi_cmnd *sc, int mode)
 static void simscsi_fillresult(struct scsi_cmnd *sc, char *buf, unsigned len)
 {
 
-	int scatterlen = sc->use_sg;
-	struct scatterlist *slp;
+	int scatterlen = sc->use_sg, i;
+	struct scatterlist *sg;
 
 	if (scatterlen == 0)
 		memcpy(sc->request_buffer, buf, len);
-	else for (slp = (struct scatterlist *)sc->request_buffer;
-		  scatterlen-- > 0 && len > 0; slp++) {
-		unsigned thislen = min(len, slp->length);
+	else {
+		scsi_for_each_sg(sc, sg, scatterlen, i) {
+			unsigned thislen;
 
-		memcpy(page_address(slp->page) + slp->offset, buf, thislen);
-		slp++;
-		len -= thislen;
+			if (len <= 0)
+				break;
+
+			thislen = min(len, sg->length);
+			memcpy(page_address(sg->page) + sg->offset, buf, thislen);
+			len -= thislen;
+		}
 	}
 }
 
-- 
1.5.3.rc0.90.gbaa79


^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH 29/33] infiniband: sg chaining support
  2007-07-16  9:47 [PATCH 00/33] SG table chaining support Jens Axboe
                   ` (27 preceding siblings ...)
  2007-07-16  9:47 ` [PATCH 28/33] ia64 simscsi: convert to use " Jens Axboe
@ 2007-07-16  9:47 ` Jens Axboe
  2007-07-16 13:21   ` FUJITA Tomonori
  2007-07-16  9:47 ` [PATCH 30/33] USB storage: " Jens Axboe
                   ` (6 subsequent siblings)
  35 siblings, 1 reply; 73+ messages in thread
From: Jens Axboe @ 2007-07-16  9:47 UTC (permalink / raw)
  To: linux-kernel, linux-scsi; +Cc: Jens Axboe, rolandd

Cc: rolandd@cisco.com
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
 drivers/infiniband/hw/ipath/ipath_dma.c   |    9 ++--
 drivers/infiniband/ulp/iser/iser_memory.c |   75 +++++++++++++++-------------
 2 files changed, 45 insertions(+), 39 deletions(-)

diff --git a/drivers/infiniband/hw/ipath/ipath_dma.c b/drivers/infiniband/hw/ipath/ipath_dma.c
index f87f003..62c87e6 100644
--- a/drivers/infiniband/hw/ipath/ipath_dma.c
+++ b/drivers/infiniband/hw/ipath/ipath_dma.c
@@ -96,17 +96,18 @@ static void ipath_dma_unmap_page(struct ib_device *dev,
 	BUG_ON(!valid_dma_direction(direction));
 }
 
-static int ipath_map_sg(struct ib_device *dev, struct scatterlist *sg, int nents,
-			enum dma_data_direction direction)
+static int ipath_map_sg(struct ib_device *dev, struct scatterlist *sgl,
+			int nents, enum dma_data_direction direction)
 {
+	struct scatterlist *sg;
 	u64 addr;
 	int i;
 	int ret = nents;
 
 	BUG_ON(!valid_dma_direction(direction));
 
-	for (i = 0; i < nents; i++) {
-		addr = (u64) page_address(sg[i].page);
+	for_each_sg(sgl, sg, nents, i) {
+		addr = (u64) page_address(sg->page);
 		/* TODO: handle highmem pages */
 		if (!addr) {
 			ret = 0;
diff --git a/drivers/infiniband/ulp/iser/iser_memory.c b/drivers/infiniband/ulp/iser/iser_memory.c
index fc9f1fd..ff0c701 100644
--- a/drivers/infiniband/ulp/iser/iser_memory.c
+++ b/drivers/infiniband/ulp/iser/iser_memory.c
@@ -37,7 +37,6 @@
 #include <linux/mm.h>
 #include <linux/highmem.h>
 #include <asm/io.h>
-#include <asm/scatterlist.h>
 #include <linux/scatterlist.h>
 
 #include "iscsi_iser.h"
@@ -126,17 +125,19 @@ int iser_start_rdma_unaligned_sg(struct iscsi_iser_cmd_task  *iser_ctask,
 
 	if (cmd_dir == ISER_DIR_OUT) {
 		/* copy the unaligned sg the buffer which is used for RDMA */
-		struct scatterlist *sg = (struct scatterlist *)data->buf;
+		struct scatterlist *sgl = (struct scatterlist *)data->buf;
+		struct scatterlist *sg;
 		int i;
 		char *p, *from;
 
-		for (p = mem, i = 0; i < data->size; i++) {
-			from = kmap_atomic(sg[i].page, KM_USER0);
+		p = mem;
+		for_each_sg(sgl, sg, data->size, i) {
+			from = kmap_atomic(sg->page, KM_USER0);
 			memcpy(p,
-			       from + sg[i].offset,
-			       sg[i].length);
+			       from + sg->offset,
+			       sg->length);
 			kunmap_atomic(from, KM_USER0);
-			p += sg[i].length;
+			p += sg->length;
 		}
 	}
 
@@ -178,7 +179,7 @@ void iser_finalize_rdma_unaligned_sg(struct iscsi_iser_cmd_task *iser_ctask,
 
 	if (cmd_dir == ISER_DIR_IN) {
 		char *mem;
-		struct scatterlist *sg;
+		struct scatterlist *sgl, *sg;
 		unsigned char *p, *to;
 		unsigned int sg_size;
 		int i;
@@ -186,16 +187,17 @@ void iser_finalize_rdma_unaligned_sg(struct iscsi_iser_cmd_task *iser_ctask,
 		/* copy back read RDMA to unaligned sg */
 		mem	= mem_copy->copy_buf;
 
-		sg	= (struct scatterlist *)iser_ctask->data[ISER_DIR_IN].buf;
+		sgl	= (struct scatterlist *)iser_ctask->data[ISER_DIR_IN].buf;
 		sg_size = iser_ctask->data[ISER_DIR_IN].size;
 
-		for (p = mem, i = 0; i < sg_size; i++){
-			to = kmap_atomic(sg[i].page, KM_SOFTIRQ0);
-			memcpy(to + sg[i].offset,
+		p = mem;
+		for_each_sg(sgl, sg, sg_size, i) {
+			to = kmap_atomic(sg->page, KM_SOFTIRQ0);
+			memcpy(to + sg->offset,
 			       p,
-			       sg[i].length);
+			       sg->length);
 			kunmap_atomic(to, KM_SOFTIRQ0);
-			p += sg[i].length;
+			p += sg->length;
 		}
 	}
 
@@ -226,7 +228,8 @@ static int iser_sg_to_page_vec(struct iser_data_buf *data,
 			       struct iser_page_vec *page_vec,
 			       struct ib_device *ibdev)
 {
-	struct scatterlist *sg = (struct scatterlist *)data->buf;
+	struct scatterlist *sgl = (struct scatterlist *)data->buf;
+	struct scatterlist *sg;
 	u64 first_addr, last_addr, page;
 	int end_aligned;
 	unsigned int cur_page = 0;
@@ -234,14 +237,14 @@ static int iser_sg_to_page_vec(struct iser_data_buf *data,
 	int i;
 
 	/* compute the offset of first element */
-	page_vec->offset = (u64) sg[0].offset & ~MASK_4K;
+	page_vec->offset = (u64) sgl[0].offset & ~MASK_4K;
 
-	for (i = 0; i < data->dma_nents; i++) {
-		unsigned int dma_len = ib_sg_dma_len(ibdev, &sg[i]);
+	for_each_sg(sgl, sg, data->dma_nents, i) {
+		unsigned int dma_len = ib_sg_dma_len(ibdev, sg);
 
 		total_sz += dma_len;
 
-		first_addr = ib_sg_dma_address(ibdev, &sg[i]);
+		first_addr = ib_sg_dma_address(ibdev, sg);
 		last_addr  = first_addr + dma_len;
 
 		end_aligned   = !(last_addr  & ~MASK_4K);
@@ -249,9 +252,9 @@ static int iser_sg_to_page_vec(struct iser_data_buf *data,
 		/* continue to collect page fragments till aligned or SG ends */
 		while (!end_aligned && (i + 1 < data->dma_nents)) {
 			i++;
-			dma_len = ib_sg_dma_len(ibdev, &sg[i]);
+			dma_len = ib_sg_dma_len(ibdev, sg);
 			total_sz += dma_len;
-			last_addr = ib_sg_dma_address(ibdev, &sg[i]) + dma_len;
+			last_addr = ib_sg_dma_address(ibdev, sg) + dma_len;
 			end_aligned = !(last_addr  & ~MASK_4K);
 		}
 
@@ -286,25 +289,26 @@ static int iser_sg_to_page_vec(struct iser_data_buf *data,
 static unsigned int iser_data_buf_aligned_len(struct iser_data_buf *data,
 					      struct ib_device *ibdev)
 {
-	struct scatterlist *sg;
+	struct scatterlist *sgl, *sg;
 	u64 end_addr, next_addr;
 	int i, cnt;
 	unsigned int ret_len = 0;
 
-	sg = (struct scatterlist *)data->buf;
+	sgl = (struct scatterlist *)data->buf;
 
-	for (cnt = 0, i = 0; i < data->dma_nents; i++, cnt++) {
+	cnt = 0;
+	for_each_sg(sgl, sg, data->dma_nents, i) {
 		/* iser_dbg("Checking sg iobuf [%d]: phys=0x%08lX "
 		   "offset: %ld sz: %ld\n", i,
-		   (unsigned long)page_to_phys(sg[i].page),
-		   (unsigned long)sg[i].offset,
-		   (unsigned long)sg[i].length); */
-		end_addr = ib_sg_dma_address(ibdev, &sg[i]) +
-			   ib_sg_dma_len(ibdev, &sg[i]);
+		   (unsigned long)page_to_phys(sg->page),
+		   (unsigned long)sg->offset,
+		   (unsigned long)sg->length); */
+		end_addr = ib_sg_dma_address(ibdev, sg) +
+			   ib_sg_dma_len(ibdev, sg);
 		/* iser_dbg("Checking sg iobuf end address "
 		       "0x%08lX\n", end_addr); */
 		if (i + 1 < data->dma_nents) {
-			next_addr = ib_sg_dma_address(ibdev, &sg[i+1]);
+			next_addr = ib_sg_dma_address(ibdev, sg_next(sg));
 			/* are i, i+1 fragments of the same page? */
 			if (end_addr == next_addr)
 				continue;
@@ -324,15 +328,16 @@ static unsigned int iser_data_buf_aligned_len(struct iser_data_buf *data,
 static void iser_data_buf_dump(struct iser_data_buf *data,
 			       struct ib_device *ibdev)
 {
-	struct scatterlist *sg = (struct scatterlist *)data->buf;
+	struct scatterlist *sgl = (struct scatterlist *)data->buf;
+	struct scatterlist *sg;
 	int i;
 
-	for (i = 0; i < data->dma_nents; i++)
+	for_each_sg(sgl, sg, data->dma_nents, i)
 		iser_err("sg[%d] dma_addr:0x%lX page:0x%p "
 			 "off:0x%x sz:0x%x dma_len:0x%x\n",
-			 i, (unsigned long)ib_sg_dma_address(ibdev, &sg[i]),
-			 sg[i].page, sg[i].offset,
-			 sg[i].length, ib_sg_dma_len(ibdev, &sg[i]));
+			 i, (unsigned long)ib_sg_dma_address(ibdev, sg),
+			 sg->page, sg->offset,
+			 sg->length, ib_sg_dma_len(ibdev, sg));
 }
 
 static void iser_dump_page_vec(struct iser_page_vec *page_vec)
-- 
1.5.3.rc0.90.gbaa79


^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH 30/33] USB storage: sg chaining support
  2007-07-16  9:47 [PATCH 00/33] SG table chaining support Jens Axboe
                   ` (28 preceding siblings ...)
  2007-07-16  9:47 ` [PATCH 29/33] infiniband: sg chaining support Jens Axboe
@ 2007-07-16  9:47 ` Jens Axboe
  2007-07-17  6:12   ` Greg KH
  2007-07-16  9:47 ` [PATCH 31/33] Fusion: " Jens Axboe
                   ` (5 subsequent siblings)
  35 siblings, 1 reply; 73+ messages in thread
From: Jens Axboe @ 2007-07-16  9:47 UTC (permalink / raw)
  To: linux-kernel, linux-scsi; +Cc: Jens Axboe, greg

[PATCH] USB storage: sg chaining support

Modify usb_stor_access_xfer_buf() to take a pointer to an sg
entry pointer, so we can keep track of the current sg entry directly
instead of passing around an integer index (which we can't use when
dealing with multiple, chained scatterlist arrays).
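
For reference, a converted caller starts with a NULL scatterlist pointer
and a zero offset and passes their addresses; the helper advances both, so
the next call resumes where the previous one stopped. A minimal sketch of
the convention (buffer, len and us stand in for the driver's usual locals,
the real conversions are in the hunks below):

	struct scatterlist *sg = NULL;	/* NULL means: start at the head of srb's sg list */
	unsigned int offset = 0;

	/* ... fill 'buffer' with 'len' bytes from the device ... */

	/* copy into the command's transfer buffer, resuming at (sg, offset) */
	usb_stor_access_xfer_buf(buffer, len, us->srb,
			&sg, &offset, TO_XFER_BUF);

	/* a later call with the same &sg/&offset continues from here */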

Cc: greg@kroah.com
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
 drivers/usb/storage/alauda.c        |   16 ++++++++++------
 drivers/usb/storage/datafab.c       |   10 ++++++----
 drivers/usb/storage/jumpshot.c      |   10 ++++++----
 drivers/usb/storage/protocol.c      |   20 +++++++++++---------
 drivers/usb/storage/protocol.h      |    2 +-
 drivers/usb/storage/sddr09.c        |   16 ++++++++++------
 drivers/usb/storage/sddr55.c        |   16 ++++++++++------
 drivers/usb/storage/shuttle_usbat.c |   17 ++++++++---------
 8 files changed, 62 insertions(+), 45 deletions(-)

diff --git a/drivers/usb/storage/alauda.c b/drivers/usb/storage/alauda.c
index 4d3cbb1..8d3711a 100644
--- a/drivers/usb/storage/alauda.c
+++ b/drivers/usb/storage/alauda.c
@@ -798,12 +798,13 @@ static int alauda_read_data(struct us_data *us, unsigned long address,
 {
 	unsigned char *buffer;
 	u16 lba, max_lba;
-	unsigned int page, len, index, offset;
+	unsigned int page, len, offset;
 	unsigned int blockshift = MEDIA_INFO(us).blockshift;
 	unsigned int pageshift = MEDIA_INFO(us).pageshift;
 	unsigned int blocksize = MEDIA_INFO(us).blocksize;
 	unsigned int pagesize = MEDIA_INFO(us).pagesize;
 	unsigned int uzonesize = MEDIA_INFO(us).uzonesize;
+	struct scatterlist *sg;
 	int result;
 
 	/*
@@ -827,7 +828,8 @@ static int alauda_read_data(struct us_data *us, unsigned long address,
 	max_lba = MEDIA_INFO(us).capacity >> (blockshift + pageshift);
 
 	result = USB_STOR_TRANSPORT_GOOD;
-	index = offset = 0;
+	offset = 0;
+	sg = NULL;
 
 	while (sectors > 0) {
 		unsigned int zone = lba / uzonesize; /* integer division */
@@ -873,7 +875,7 @@ static int alauda_read_data(struct us_data *us, unsigned long address,
 
 		/* Store the data in the transfer buffer */
 		usb_stor_access_xfer_buf(buffer, len, us->srb,
-				&index, &offset, TO_XFER_BUF);
+				&sg, &offset, TO_XFER_BUF);
 
 		page = 0;
 		lba++;
@@ -891,11 +893,12 @@ static int alauda_write_data(struct us_data *us, unsigned long address,
 		unsigned int sectors)
 {
 	unsigned char *buffer, *blockbuffer;
-	unsigned int page, len, index, offset;
+	unsigned int page, len, offset;
 	unsigned int blockshift = MEDIA_INFO(us).blockshift;
 	unsigned int pageshift = MEDIA_INFO(us).pageshift;
 	unsigned int blocksize = MEDIA_INFO(us).blocksize;
 	unsigned int pagesize = MEDIA_INFO(us).pagesize;
+	struct scatterlist *sg;
 	u16 lba, max_lba;
 	int result;
 
@@ -929,7 +932,8 @@ static int alauda_write_data(struct us_data *us, unsigned long address,
 	max_lba = MEDIA_INFO(us).capacity >> (pageshift + blockshift);
 
 	result = USB_STOR_TRANSPORT_GOOD;
-	index = offset = 0;
+	offset = 0;
+	sg = NULL;
 
 	while (sectors > 0) {
 		/* Write as many sectors as possible in this block */
@@ -946,7 +950,7 @@ static int alauda_write_data(struct us_data *us, unsigned long address,
 
 		/* Get the data from the transfer buffer */
 		usb_stor_access_xfer_buf(buffer, len, us->srb,
-				&index, &offset, FROM_XFER_BUF);
+				&sg, &offset, FROM_XFER_BUF);
 
 		result = alauda_write_lba(us, lba, page, pages, buffer,
 			blockbuffer);
diff --git a/drivers/usb/storage/datafab.c b/drivers/usb/storage/datafab.c
index c87ad1b..579e9f5 100644
--- a/drivers/usb/storage/datafab.c
+++ b/drivers/usb/storage/datafab.c
@@ -98,7 +98,8 @@ static int datafab_read_data(struct us_data *us,
 	unsigned char  thistime;
 	unsigned int totallen, alloclen;
 	int len, result;
-	unsigned int sg_idx = 0, sg_offset = 0;
+	unsigned int sg_offset = 0;
+	struct scatterlist *sg = NULL;
 
 	// we're working in LBA mode.  according to the ATA spec, 
 	// we can support up to 28-bit addressing.  I don't know if Datafab
@@ -155,7 +156,7 @@ static int datafab_read_data(struct us_data *us,
 
 		// Store the data in the transfer buffer
 		usb_stor_access_xfer_buf(buffer, len, us->srb,
-				 &sg_idx, &sg_offset, TO_XFER_BUF);
+				 &sg, &sg_offset, TO_XFER_BUF);
 
 		sector += thistime;
 		totallen -= len;
@@ -181,7 +182,8 @@ static int datafab_write_data(struct us_data *us,
 	unsigned char thistime;
 	unsigned int totallen, alloclen;
 	int len, result;
-	unsigned int sg_idx = 0, sg_offset = 0;
+	unsigned int sg_offset = 0;
+	struct scatterlist *sg = NULL;
 
 	// we're working in LBA mode.  according to the ATA spec, 
 	// we can support up to 28-bit addressing.  I don't know if Datafab
@@ -217,7 +219,7 @@ static int datafab_write_data(struct us_data *us,
 
 		// Get the data from the transfer buffer
 		usb_stor_access_xfer_buf(buffer, len, us->srb,
-				&sg_idx, &sg_offset, FROM_XFER_BUF);
+				&sg, &sg_offset, FROM_XFER_BUF);
 
 		command[0] = 0;
 		command[1] = thistime;
diff --git a/drivers/usb/storage/jumpshot.c b/drivers/usb/storage/jumpshot.c
index 003fcf5..61097cb 100644
--- a/drivers/usb/storage/jumpshot.c
+++ b/drivers/usb/storage/jumpshot.c
@@ -119,7 +119,8 @@ static int jumpshot_read_data(struct us_data *us,
 	unsigned char  thistime;
 	unsigned int totallen, alloclen;
 	int len, result;
-	unsigned int sg_idx = 0, sg_offset = 0;
+	unsigned int sg_offset = 0;
+	struct scatterlist *sg = NULL;
 
 	// we're working in LBA mode.  according to the ATA spec, 
 	// we can support up to 28-bit addressing.  I don't know if Jumpshot
@@ -170,7 +171,7 @@ static int jumpshot_read_data(struct us_data *us,
 
 		// Store the data in the transfer buffer
 		usb_stor_access_xfer_buf(buffer, len, us->srb,
-				 &sg_idx, &sg_offset, TO_XFER_BUF);
+				 &sg, &sg_offset, TO_XFER_BUF);
 
 		sector += thistime;
 		totallen -= len;
@@ -195,7 +196,8 @@ static int jumpshot_write_data(struct us_data *us,
 	unsigned char  thistime;
 	unsigned int totallen, alloclen;
 	int len, result, waitcount;
-	unsigned int sg_idx = 0, sg_offset = 0;
+	unsigned int sg_offset = 0;
+	struct scatterlist *sg = NULL;
 
 	// we're working in LBA mode.  according to the ATA spec, 
 	// we can support up to 28-bit addressing.  I don't know if Jumpshot
@@ -225,7 +227,7 @@ static int jumpshot_write_data(struct us_data *us,
 
 		// Get the data from the transfer buffer
 		usb_stor_access_xfer_buf(buffer, len, us->srb,
-				&sg_idx, &sg_offset, FROM_XFER_BUF);
+				&sg, &sg_offset, FROM_XFER_BUF);
 
 		command[0] = 0;
 		command[1] = thistime;
diff --git a/drivers/usb/storage/protocol.c b/drivers/usb/storage/protocol.c
index 9ad3042..cc8f7c5 100644
--- a/drivers/usb/storage/protocol.c
+++ b/drivers/usb/storage/protocol.c
@@ -157,7 +157,7 @@ void usb_stor_transparent_scsi_command(struct scsi_cmnd *srb,
  * pick up from where this one left off. */
 
 unsigned int usb_stor_access_xfer_buf(unsigned char *buffer,
-	unsigned int buflen, struct scsi_cmnd *srb, unsigned int *index,
+	unsigned int buflen, struct scsi_cmnd *srb, struct scatterlist **sgptr,
 	unsigned int *offset, enum xfer_buf_dir dir)
 {
 	unsigned int cnt;
@@ -184,16 +184,17 @@ unsigned int usb_stor_access_xfer_buf(unsigned char *buffer,
 	 * located in high memory -- then kmap() will map it to a temporary
 	 * position in the kernel's virtual address space. */
 	} else {
-		struct scatterlist *sg =
-				(struct scatterlist *) srb->request_buffer
-				+ *index;
+		struct scatterlist *sg = *sgptr;
+
+		if (!sg)
+			sg = (struct scatterlist *) srb->request_buffer;
 
 		/* This loop handles a single s-g list entry, which may
 		 * include multiple pages.  Find the initial page structure
 		 * and the starting offset within the page, and update
 		 * the *offset and *index values for the next loop. */
 		cnt = 0;
-		while (cnt < buflen && *index < srb->use_sg) {
+		while (cnt < buflen) {
 			struct page *page = sg->page +
 					((sg->offset + *offset) >> PAGE_SHIFT);
 			unsigned int poff =
@@ -209,8 +210,7 @@ unsigned int usb_stor_access_xfer_buf(unsigned char *buffer,
 
 				/* Transfer continues to next s-g entry */
 				*offset = 0;
-				++*index;
-				++sg;
+				sg = sg_next(sg);
 			}
 
 			/* Transfer the data for all the pages in this
@@ -234,6 +234,7 @@ unsigned int usb_stor_access_xfer_buf(unsigned char *buffer,
 				sglen -= plen;
 			}
 		}
+		*sgptr = sg;
 	}
 
 	/* Return the amount actually transferred */
@@ -245,9 +246,10 @@ unsigned int usb_stor_access_xfer_buf(unsigned char *buffer,
 void usb_stor_set_xfer_buf(unsigned char *buffer,
 	unsigned int buflen, struct scsi_cmnd *srb)
 {
-	unsigned int index = 0, offset = 0;
+	unsigned int offset = 0;
+	struct scatterlist *sg = NULL;
 
-	usb_stor_access_xfer_buf(buffer, buflen, srb, &index, &offset,
+	usb_stor_access_xfer_buf(buffer, buflen, srb, &sg, &offset,
 			TO_XFER_BUF);
 	if (buflen < srb->request_bufflen)
 		srb->resid = srb->request_bufflen - buflen;
diff --git a/drivers/usb/storage/protocol.h b/drivers/usb/storage/protocol.h
index 845bed4..8737a36 100644
--- a/drivers/usb/storage/protocol.h
+++ b/drivers/usb/storage/protocol.h
@@ -52,7 +52,7 @@ extern void usb_stor_transparent_scsi_command(struct scsi_cmnd*,
 enum xfer_buf_dir	{TO_XFER_BUF, FROM_XFER_BUF};
 
 extern unsigned int usb_stor_access_xfer_buf(unsigned char *buffer,
-	unsigned int buflen, struct scsi_cmnd *srb, unsigned int *index,
+	unsigned int buflen, struct scsi_cmnd *srb, struct scatterlist **,
 	unsigned int *offset, enum xfer_buf_dir dir);
 
 extern void usb_stor_set_xfer_buf(unsigned char *buffer,
diff --git a/drivers/usb/storage/sddr09.c b/drivers/usb/storage/sddr09.c
index b2ed2a3..b12202c 100644
--- a/drivers/usb/storage/sddr09.c
+++ b/drivers/usb/storage/sddr09.c
@@ -705,7 +705,8 @@ sddr09_read_data(struct us_data *us,
 	unsigned char *buffer;
 	unsigned int lba, maxlba, pba;
 	unsigned int page, pages;
-	unsigned int len, index, offset;
+	unsigned int len, offset;
+	struct scatterlist *sg;
 	int result;
 
 	// Figure out the initial LBA and page
@@ -730,7 +731,8 @@ sddr09_read_data(struct us_data *us,
 	// contiguous LBA's. Another exercise left to the student.
 
 	result = 0;
-	index = offset = 0;
+	offset = 0;
+	sg = NULL;
 
 	while (sectors > 0) {
 
@@ -777,7 +779,7 @@ sddr09_read_data(struct us_data *us,
 
 		// Store the data in the transfer buffer
 		usb_stor_access_xfer_buf(buffer, len, us->srb,
-				&index, &offset, TO_XFER_BUF);
+				&sg, &offset, TO_XFER_BUF);
 
 		page = 0;
 		lba++;
@@ -931,7 +933,8 @@ sddr09_write_data(struct us_data *us,
 	unsigned int pagelen, blocklen;
 	unsigned char *blockbuffer;
 	unsigned char *buffer;
-	unsigned int len, index, offset;
+	unsigned int len, offset;
+	struct scatterlist *sg;
 	int result;
 
 	// Figure out the initial LBA and page
@@ -968,7 +971,8 @@ sddr09_write_data(struct us_data *us,
 	}
 
 	result = 0;
-	index = offset = 0;
+	offset = 0;
+	sg = NULL;
 
 	while (sectors > 0) {
 
@@ -987,7 +991,7 @@ sddr09_write_data(struct us_data *us,
 
 		// Get the data from the transfer buffer
 		usb_stor_access_xfer_buf(buffer, len, us->srb,
-				&index, &offset, FROM_XFER_BUF);
+				&sg, &offset, FROM_XFER_BUF);
 
 		result = sddr09_write_lba(us, lba, page, pages,
 				buffer, blockbuffer);
diff --git a/drivers/usb/storage/sddr55.c b/drivers/usb/storage/sddr55.c
index 0b1b5b5..d43a341 100644
--- a/drivers/usb/storage/sddr55.c
+++ b/drivers/usb/storage/sddr55.c
@@ -167,7 +167,8 @@ static int sddr55_read_data(struct us_data *us,
 	unsigned long address;
 
 	unsigned short pages;
-	unsigned int len, index, offset;
+	unsigned int len, offset;
+	struct scatterlist *sg;
 
 	// Since we only read in one block at a time, we have to create
 	// a bounce buffer and move the data a piece at a time between the
@@ -178,7 +179,8 @@ static int sddr55_read_data(struct us_data *us,
 	buffer = kmalloc(len, GFP_NOIO);
 	if (buffer == NULL)
 		return USB_STOR_TRANSPORT_ERROR; /* out of memory */
-	index = offset = 0;
+	offset = 0;
+	sg = NULL;
 
 	while (sectors>0) {
 
@@ -255,7 +257,7 @@ static int sddr55_read_data(struct us_data *us,
 
 		// Store the data in the transfer buffer
 		usb_stor_access_xfer_buf(buffer, len, us->srb,
-				&index, &offset, TO_XFER_BUF);
+				&sg, &offset, TO_XFER_BUF);
 
 		page = 0;
 		lba++;
@@ -287,7 +289,8 @@ static int sddr55_write_data(struct us_data *us,
 
 	unsigned short pages;
 	int i;
-	unsigned int len, index, offset;
+	unsigned int len, offset;
+	struct scatterlist *sg;
 
 	/* check if we are allowed to write */
 	if (info->read_only || info->force_read_only) {
@@ -304,7 +307,8 @@ static int sddr55_write_data(struct us_data *us,
 	buffer = kmalloc(len, GFP_NOIO);
 	if (buffer == NULL)
 		return USB_STOR_TRANSPORT_ERROR;
-	index = offset = 0;
+	offset = 0;
+	sg = NULL;
 
 	while (sectors > 0) {
 
@@ -322,7 +326,7 @@ static int sddr55_write_data(struct us_data *us,
 
 		// Get the data from the transfer buffer
 		usb_stor_access_xfer_buf(buffer, len, us->srb,
-				&index, &offset, FROM_XFER_BUF);
+				&sg, &offset, FROM_XFER_BUF);
 
 		US_DEBUGP("Write %02X pages, to PBA %04X"
 			" (LBA %04X) page %02X\n",
diff --git a/drivers/usb/storage/shuttle_usbat.c b/drivers/usb/storage/shuttle_usbat.c
index 5e27297..a0ff394 100644
--- a/drivers/usb/storage/shuttle_usbat.c
+++ b/drivers/usb/storage/shuttle_usbat.c
@@ -996,7 +996,8 @@ static int usbat_flash_read_data(struct us_data *us,
 	unsigned char  thistime;
 	unsigned int totallen, alloclen;
 	int len, result;
-	unsigned int sg_idx = 0, sg_offset = 0;
+	unsigned int sg_offset = 0;
+	struct scatterlist *sg = NULL;
 
 	result = usbat_flash_check_media(us, info);
 	if (result != USB_STOR_TRANSPORT_GOOD)
@@ -1050,7 +1051,7 @@ static int usbat_flash_read_data(struct us_data *us,
 	
 		/* Store the data in the transfer buffer */
 		usb_stor_access_xfer_buf(buffer, len, us->srb,
-					 &sg_idx, &sg_offset, TO_XFER_BUF);
+					 &sg, &sg_offset, TO_XFER_BUF);
 
 		sector += thistime;
 		totallen -= len;
@@ -1086,7 +1087,8 @@ static int usbat_flash_write_data(struct us_data *us,
 	unsigned char  thistime;
 	unsigned int totallen, alloclen;
 	int len, result;
-	unsigned int sg_idx = 0, sg_offset = 0;
+	unsigned int sg_offset = 0;
+	struct scatterlist *sg = NULL;
 
 	result = usbat_flash_check_media(us, info);
 	if (result != USB_STOR_TRANSPORT_GOOD)
@@ -1125,7 +1127,7 @@ static int usbat_flash_write_data(struct us_data *us,
 
 		/* Get the data from the transfer buffer */
 		usb_stor_access_xfer_buf(buffer, len, us->srb,
-					 &sg_idx, &sg_offset, FROM_XFER_BUF);
+					 &sg, &sg_offset, FROM_XFER_BUF);
 
 		/* ATA command 0x30 (WRITE SECTORS) */
 		usbat_pack_ata_sector_cmd(command, thistime, sector, 0x30);
@@ -1165,8 +1167,8 @@ static int usbat_hp8200e_handle_read10(struct us_data *us,
 	unsigned char *buffer;
 	unsigned int len;
 	unsigned int sector;
-	unsigned int sg_segment = 0;
 	unsigned int sg_offset = 0;
+	struct scatterlist *sg = NULL;
 
 	US_DEBUGP("handle_read10: transfersize %d\n",
 		srb->transfersize);
@@ -1223,9 +1225,6 @@ static int usbat_hp8200e_handle_read10(struct us_data *us,
 	sector |= short_pack(data[7+5], data[7+4]);
 	transferred = 0;
 
-	sg_segment = 0; /* for keeping track of where we are in */
-	sg_offset = 0;  /* the scatter/gather list */
-
 	while (transferred != srb->request_bufflen) {
 
 		if (len > srb->request_bufflen - transferred)
@@ -1258,7 +1257,7 @@ static int usbat_hp8200e_handle_read10(struct us_data *us,
 
 		/* Store the data in the transfer buffer */
 		usb_stor_access_xfer_buf(buffer, len, srb,
-				 &sg_segment, &sg_offset, TO_XFER_BUF);
+				 &sg, &sg_offset, TO_XFER_BUF);
 
 		/* Update the amount transferred and the sector number */
 
-- 
1.5.3.rc0.90.gbaa79


^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH 31/33] Fusion: sg chaining support
  2007-07-16  9:47 [PATCH 00/33] SG table chaining support Jens Axboe
                   ` (29 preceding siblings ...)
  2007-07-16  9:47 ` [PATCH 30/33] USB storage: " Jens Axboe
@ 2007-07-16  9:47 ` Jens Axboe
  2007-07-16 13:20   ` FUJITA Tomonori
  2007-07-16  9:47 ` [PATCH 32/33] i2o: " Jens Axboe
                   ` (4 subsequent siblings)
  35 siblings, 1 reply; 73+ messages in thread
From: Jens Axboe @ 2007-07-16  9:47 UTC (permalink / raw)
  To: linux-kernel, linux-scsi; +Cc: Jens Axboe, Eric.Moore

Cc: Eric.Moore@lsi.com
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
 drivers/message/fusion/mptscsih.c |    4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/message/fusion/mptscsih.c b/drivers/message/fusion/mptscsih.c
index d356173..f087249 100644
--- a/drivers/message/fusion/mptscsih.c
+++ b/drivers/message/fusion/mptscsih.c
@@ -297,7 +297,7 @@ nextSGEset:
 		v2 = sg_dma_address(sg);
 		mptscsih_add_sge(psge, sgflags | thisxfer, v2);
 
-		sg++;		/* Get next SG element from the OS */
+		sg = sg_next(sg);	/* Get next SG element from the OS */
 		psge += (sizeof(u32) + sizeof(dma_addr_t));
 		sgeOffset += (sizeof(u32) + sizeof(dma_addr_t));
 		sg_done++;
@@ -318,7 +318,7 @@ nextSGEset:
 		v2 = sg_dma_address(sg);
 		mptscsih_add_sge(psge, sgflags | thisxfer, v2);
 		/*
-		sg++;
+		sg = sg_next(sg);
 		psge += (sizeof(u32) + sizeof(dma_addr_t));
 		*/
 		sgeOffset += (sizeof(u32) + sizeof(dma_addr_t));
-- 
1.5.3.rc0.90.gbaa79


^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH 32/33] i2o: sg chaining support
  2007-07-16  9:47 [PATCH 00/33] SG table chaining support Jens Axboe
                   ` (30 preceding siblings ...)
  2007-07-16  9:47 ` [PATCH 31/33] Fusion: " Jens Axboe
@ 2007-07-16  9:47 ` Jens Axboe
  2007-07-16  9:47 ` [PATCH 33/33] IDE: " Jens Axboe
                   ` (3 subsequent siblings)
  35 siblings, 0 replies; 73+ messages in thread
From: Jens Axboe @ 2007-07-16  9:47 UTC (permalink / raw)
  To: linux-kernel, linux-scsi; +Cc: Jens Axboe, alan

Cc: alan@lxorguk.ukuu.org.uk
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
 include/linux/i2o.h |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/include/linux/i2o.h b/include/linux/i2o.h
index 52f53e2..0ae1dcf 100644
--- a/include/linux/i2o.h
+++ b/include/linux/i2o.h
@@ -836,7 +836,7 @@ static inline int i2o_dma_map_sg(struct i2o_controller *c,
 		if ((sizeof(dma_addr_t) > 4) && c->pae_support)
 			*mptr++ = cpu_to_le32(i2o_dma_high(sg_dma_address(sg)));
 #endif
-		sg++;
+		sg = sg_next(sg);
 	}
 	*sg_ptr = mptr;
 
-- 
1.5.3.rc0.90.gbaa79


^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH 33/33] IDE: sg chaining support
  2007-07-16  9:47 [PATCH 00/33] SG table chaining support Jens Axboe
                   ` (31 preceding siblings ...)
  2007-07-16  9:47 ` [PATCH 32/33] i2o: " Jens Axboe
@ 2007-07-16  9:47 ` Jens Axboe
  2007-07-18 20:56   ` Bartlomiej Zolnierkiewicz
  2007-07-16 13:19 ` [PATCH 00/33] SG table " FUJITA Tomonori
                   ` (2 subsequent siblings)
  35 siblings, 1 reply; 73+ messages in thread
From: Jens Axboe @ 2007-07-16  9:47 UTC (permalink / raw)
  To: linux-kernel, linux-scsi; +Cc: Jens Axboe, bzolnier

Cc: bzolnier@gmail.com
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
 drivers/ide/cris/ide-cris.c   |    3 ++-
 drivers/ide/ide-dma.c         |    2 +-
 drivers/ide/ide-io.c          |    3 ++-
 drivers/ide/ide-probe.c       |    2 +-
 drivers/ide/ide-taskfile.c    |   17 +++++++++++++----
 drivers/ide/mips/au1xxx-ide.c |    2 +-
 drivers/ide/pci/sgiioc4.c     |    2 +-
 drivers/ide/ppc/pmac.c        |    2 +-
 include/linux/ide.h           |    2 +-
 9 files changed, 23 insertions(+), 12 deletions(-)

diff --git a/drivers/ide/cris/ide-cris.c b/drivers/ide/cris/ide-cris.c
index 886091b..e1b2d01 100644
--- a/drivers/ide/cris/ide-cris.c
+++ b/drivers/ide/cris/ide-cris.c
@@ -951,7 +951,8 @@ static int cris_ide_build_dmatable (ide_drive_t *drive)
 		/* group sequential buffers into one large buffer */
 		addr = page_to_phys(sg->page) + sg->offset;
 		size = sg_dma_len(sg);
-		while (sg++, --i) {
+		while (--i) {
+			sg = sg_next(sg);
 			if ((addr + size) != page_to_phys(sg->page) + sg->offset)
 				break;
 			size += sg_dma_len(sg);
diff --git a/drivers/ide/ide-dma.c b/drivers/ide/ide-dma.c
index 5fe1d72..a9a18a1 100644
--- a/drivers/ide/ide-dma.c
+++ b/drivers/ide/ide-dma.c
@@ -294,7 +294,7 @@ int ide_build_dmatable (ide_drive_t *drive, struct request *rq)
 			}
 		}
 
-		sg++;
+		sg = sg_next(sg);
 		i--;
 	}
 
diff --git a/drivers/ide/ide-io.c b/drivers/ide/ide-io.c
index c5b5011..2465c24 100644
--- a/drivers/ide/ide-io.c
+++ b/drivers/ide/ide-io.c
@@ -836,7 +836,8 @@ void ide_init_sg_cmd(ide_drive_t *drive, struct request *rq)
 	ide_hwif_t *hwif = drive->hwif;
 
 	hwif->nsect = hwif->nleft = rq->nr_sectors;
-	hwif->cursg = hwif->cursg_ofs = 0;
+	hwif->cursg_ofs = 0;
+	hwif->cursg = NULL;
 }
 
 EXPORT_SYMBOL_GPL(ide_init_sg_cmd);
diff --git a/drivers/ide/ide-probe.c b/drivers/ide/ide-probe.c
index cc58013..111ec02 100644
--- a/drivers/ide/ide-probe.c
+++ b/drivers/ide/ide-probe.c
@@ -1352,7 +1352,7 @@ static int hwif_init(ide_hwif_t *hwif)
 	if (!hwif->sg_max_nents)
 		hwif->sg_max_nents = PRD_ENTRIES;
 
-	hwif->sg_table = kmalloc(sizeof(struct scatterlist)*hwif->sg_max_nents,
+	hwif->sg_table = kzalloc(sizeof(struct scatterlist)*hwif->sg_max_nents,
 				 GFP_KERNEL);
 	if (!hwif->sg_table) {
 		printk(KERN_ERR "%s: unable to allocate SG table.\n", hwif->name);
diff --git a/drivers/ide/ide-taskfile.c b/drivers/ide/ide-taskfile.c
index aa06daf..3c92790 100644
--- a/drivers/ide/ide-taskfile.c
+++ b/drivers/ide/ide-taskfile.c
@@ -263,6 +263,7 @@ static void ide_pio_sector(ide_drive_t *drive, unsigned int write)
 {
 	ide_hwif_t *hwif = drive->hwif;
 	struct scatterlist *sg = hwif->sg_table;
+	struct scatterlist *cursg = hwif->cursg;
 	struct page *page;
 #ifdef CONFIG_HIGHMEM
 	unsigned long flags;
@@ -270,8 +271,14 @@ static void ide_pio_sector(ide_drive_t *drive, unsigned int write)
 	unsigned int offset;
 	u8 *buf;
 
-	page = sg[hwif->cursg].page;
-	offset = sg[hwif->cursg].offset + hwif->cursg_ofs * SECTOR_SIZE;
+	cursg = hwif->cursg;
+	if (!cursg) {
+		cursg = sg;
+		hwif->cursg = sg;
+	}
+
+	page = cursg->page;
+	offset = cursg->offset + hwif->cursg_ofs * SECTOR_SIZE;
 
 	/* get the current page and offset */
 	page = nth_page(page, (offset >> PAGE_SHIFT));
@@ -285,8 +292,8 @@ static void ide_pio_sector(ide_drive_t *drive, unsigned int write)
 	hwif->nleft--;
 	hwif->cursg_ofs++;
 
-	if ((hwif->cursg_ofs * SECTOR_SIZE) == sg[hwif->cursg].length) {
-		hwif->cursg++;
+	if ((hwif->cursg_ofs * SECTOR_SIZE) == cursg->length) {
+		hwif->cursg = sg_next(hwif->cursg);
 		hwif->cursg_ofs = 0;
 	}
 
@@ -367,6 +374,8 @@ static ide_startstop_t task_error(ide_drive_t *drive, struct request *rq,
 
 static void task_end_request(ide_drive_t *drive, struct request *rq, u8 stat)
 {
+	HWIF(drive)->cursg = NULL;
+
 	if (rq->cmd_type == REQ_TYPE_ATA_TASKFILE) {
 		ide_task_t *task = rq->special;
 
diff --git a/drivers/ide/mips/au1xxx-ide.c b/drivers/ide/mips/au1xxx-ide.c
index 2e7013a..48cfb62 100644
--- a/drivers/ide/mips/au1xxx-ide.c
+++ b/drivers/ide/mips/au1xxx-ide.c
@@ -324,7 +324,7 @@ static int auide_build_dmatable(ide_drive_t *drive)
 			cur_addr += tc;
 			cur_len -= tc;
 		}
-		sg++;
+		sg = sg_next(sg);
 		i--;
 	}
 
diff --git a/drivers/ide/pci/sgiioc4.c b/drivers/ide/pci/sgiioc4.c
index d396b29..9e36b29 100644
--- a/drivers/ide/pci/sgiioc4.c
+++ b/drivers/ide/pci/sgiioc4.c
@@ -531,7 +531,7 @@ sgiioc4_build_dma_table(ide_drive_t * drive, struct request *rq, int ddir)
 			}
 		}
 
-		sg++;
+		sg = sg_next(sg);
 		i--;
 	}
 
diff --git a/drivers/ide/ppc/pmac.c b/drivers/ide/ppc/pmac.c
index e46f472..9f78c6a 100644
--- a/drivers/ide/ppc/pmac.c
+++ b/drivers/ide/ppc/pmac.c
@@ -1648,7 +1648,7 @@ pmac_ide_build_dmatable(ide_drive_t *drive, struct request *rq)
 			cur_len -= tc;
 			++table;
 		}
-		sg++;
+		sg = sg_next(sg);
 		i--;
 	}
 
diff --git a/include/linux/ide.h b/include/linux/ide.h
index 19ab258..1e6639e 100644
--- a/include/linux/ide.h
+++ b/include/linux/ide.h
@@ -767,7 +767,7 @@ typedef struct hwif_s {
 
 	unsigned int nsect;
 	unsigned int nleft;
-	unsigned int cursg;
+	struct scatterlist *cursg;
 	unsigned int cursg_ofs;
 
 	int		rqsize;		/* max sectors per request */
-- 
1.5.3.rc0.90.gbaa79


^ permalink raw reply related	[flat|nested] 73+ messages in thread

* Re: [PATCH 13/33] SPARC: sg chaining support
  2007-07-16  9:47 ` [PATCH 13/33] SPARC: " Jens Axboe
@ 2007-07-16 11:29   ` David Miller
  0 siblings, 0 replies; 73+ messages in thread
From: David Miller @ 2007-07-16 11:29 UTC (permalink / raw)
  To: jens.axboe; +Cc: linux-kernel, linux-scsi

From: Jens Axboe <jens.axboe@oracle.com>
Date: Mon, 16 Jul 2007 11:47:27 +0200

> This updates the sparc iommu/pci dma mappers to sg chaining.
> 
> Cc: davem@davemloft.net
> Signed-off-by: Jens Axboe <jens.axboe@oracle.com>

Acked-by: David S. Miller <davem@davemloft.net>

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH 14/33] SPARC64: sg chaining support
  2007-07-16  9:47 ` [PATCH 14/33] SPARC64: " Jens Axboe
@ 2007-07-16 11:29   ` David Miller
  0 siblings, 0 replies; 73+ messages in thread
From: David Miller @ 2007-07-16 11:29 UTC (permalink / raw)
  To: jens.axboe; +Cc: linux-kernel, linux-scsi

From: Jens Axboe <jens.axboe@oracle.com>
Date: Mon, 16 Jul 2007 11:47:28 +0200

> This updates the sparc64 iommu/pci dma mappers to sg chaining.
> 
> Cc: davem@davemloft.net
> Signed-off-by: Jens Axboe <jens.axboe@oracle.com>

Acked-by: David S. Miller <davem@davemloft.net>

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH 00/33] SG table chaining support
  2007-07-16  9:47 [PATCH 00/33] SG table chaining support Jens Axboe
                   ` (32 preceding siblings ...)
  2007-07-16  9:47 ` [PATCH 33/33] IDE: " Jens Axboe
@ 2007-07-16 13:19 ` FUJITA Tomonori
  2007-07-16 13:23   ` Jens Axboe
  2007-07-16 13:19 ` FUJITA Tomonori
  2007-07-16 14:05 ` John Stoffel
  35 siblings, 1 reply; 73+ messages in thread
From: FUJITA Tomonori @ 2007-07-16 13:19 UTC (permalink / raw)
  To: mark_salyzyn, jens.axboe; +Cc: linux-kernel, linux-scsi, fujita.tomonori

From: Jens Axboe <jens.axboe@oracle.com>
Subject: [PATCH 00/33] SG table chaining support
Date: Mon, 16 Jul 2007 11:47:14 +0200

> Changes since last post:
> 
> - Rebase to current -git. Lots of SCSI drivers have been converted
>   to use the sg accessor helpers, which nicely shrinks this patchset
>   from 70 to 33 patches. Great!

---
From: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Subject: [PATCH] ips: sg chaining support

ips properly uses scsi_for_each_sg for the normal I/O path; the
breakup path, however, still indexes the scatterlist as a flat array.
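
With chained sg tables, &sg[i] is only valid within a single array
segment, so the breakup path has to walk the list explicitly. A minimal
sketch of the difference (addr, i and k are illustrative, the actual
conversion is in the diff below):

	/* old style: only valid when the scatterlist is one flat array */
	addr = sg_dma_address(&sg[i]);

	/* chain-safe: step element by element */
	for (k = 0; k < i; k++)
		sg = sg_next(sg);
	addr = sg_dma_address(sg);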

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
---
 drivers/scsi/ips.c |   14 ++++++++------
 1 files changed, 8 insertions(+), 6 deletions(-)

diff --git a/drivers/scsi/ips.c b/drivers/scsi/ips.c
index 9f8ed6b..cc5ac4f 100644
--- a/drivers/scsi/ips.c
+++ b/drivers/scsi/ips.c
@@ -3251,7 +3251,7 @@ ips_done(ips_ha_t * ha, ips_scb_t * scb)
 		 */
 		if ((scb->breakup) || (scb->sg_break)) {
                         struct scatterlist *sg;
-                        int sg_dma_index, ips_sg_index = 0;
+                        int i, sg_dma_index, ips_sg_index = 0;
 
 			/* we had a data breakup */
 			scb->data_len = 0;
@@ -3260,20 +3260,22 @@ ips_done(ips_ha_t * ha, ips_scb_t * scb)
 
                         /* Spin forward to last dma chunk */
                         sg_dma_index = scb->breakup;
+                        for (i = 0; i < scb->breakup; i++)
+                                sg = sg_next(sg);
 
 			/* Take care of possible partial on last chunk */
                         ips_fill_scb_sg_single(ha,
-                                               sg_dma_address(&sg[sg_dma_index]),
+                                               sg_dma_address(sg),
                                                scb, ips_sg_index++,
-                                               sg_dma_len(&sg[sg_dma_index]));
+                                               sg_dma_len(sg));
 
                         for (; sg_dma_index < scsi_sg_count(scb->scsi_cmd);
-                             sg_dma_index++) {
+                             sg_dma_index++, sg = sg_next(sg)) {
                                 if (ips_fill_scb_sg_single
                                     (ha,
-                                     sg_dma_address(&sg[sg_dma_index]),
+                                     sg_dma_address(sg),
                                      scb, ips_sg_index++,
-                                     sg_dma_len(&sg[sg_dma_index])) < 0)
+                                     sg_dma_len(sg)) < 0)
                                         break;
                         }
 
-- 
1.4.3.2


^ permalink raw reply related	[flat|nested] 73+ messages in thread

* Re: [PATCH 00/33] SG table chaining support
  2007-07-16  9:47 [PATCH 00/33] SG table chaining support Jens Axboe
                   ` (33 preceding siblings ...)
  2007-07-16 13:19 ` [PATCH 00/33] SG table " FUJITA Tomonori
@ 2007-07-16 13:19 ` FUJITA Tomonori
  2007-07-16 13:24   ` Jens Axboe
  2007-07-16 14:05 ` John Stoffel
  35 siblings, 1 reply; 73+ messages in thread
From: FUJITA Tomonori @ 2007-07-16 13:19 UTC (permalink / raw)
  To: michaelc, jens.axboe; +Cc: linux-kernel, linux-scsi, fujita.tomonori

From: Jens Axboe <jens.axboe@oracle.com>
Subject: [PATCH 00/33] SG table chaining support
Date: Mon, 16 Jul 2007 11:47:14 +0200

> Changes since last post:
> 
> - Rebase to current -git. Lots of SCSI drivers have been converted
>   to use the sg accessor helpers, which nicely shrinks this patchset
>   from 70 to 33 patches. Great!

It's against Jens' sglist branch, though there are lots of changes to
the data path in Mike's iscsi tree.

---
From: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Subject: [PATCH] iscsi_tcp: sg chaining support

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
---
 drivers/scsi/iscsi_tcp.c |   41 ++++++++++++++++++++++-------------------
 1 files changed, 22 insertions(+), 19 deletions(-)

diff --git a/drivers/scsi/iscsi_tcp.c b/drivers/scsi/iscsi_tcp.c
index aebcd5f..4598231 100644
--- a/drivers/scsi/iscsi_tcp.c
+++ b/drivers/scsi/iscsi_tcp.c
@@ -316,7 +316,7 @@ iscsi_solicit_data_init(struct iscsi_con
 
 	sg = scsi_sglist(sc);
 	r2t->sg = NULL;
-	for (i = 0; i < scsi_sg_count(sc); i++, sg += 1) {
+	for (i = 0; i < scsi_sg_count(sc); i++, sg = sg_next(sg)) {
 		/* FIXME: prefetch ? */
 		if (sg_count + sg->length > r2t->data_offset) {
 			int page_offset;
@@ -332,7 +332,7 @@ iscsi_solicit_data_init(struct iscsi_con
 			r2t->sendbuf.sg.length -= page_offset;
 
 			/* xmit logic will continue with next one */
-			r2t->sg = sg + 1;
+			r2t->sg = sg_next(sg);
 			break;
 		}
 		sg_count += sg->length;
@@ -707,18 +707,21 @@ static int iscsi_scsi_data_in(struct isc
 	sg = scsi_sglist(sc);
 
 	if (tcp_ctask->data_offset)
-		for (i = 0; i < tcp_ctask->sg_count; i++)
-			offset -= sg[i].length;
+		for (i = 0; i < tcp_ctask->sg_count; i++, sg = sg_next(sg))
+			offset -= sg->length;
 	/* we've passed through partial sg*/
 	if (offset < 0)
 		offset = 0;
 
-	for (i = tcp_ctask->sg_count; i < scsi_sg_count(sc); i++) {
+	for (i = 0, sg = scsi_sglist(sc); i < tcp_ctask->sg_count; i++)
+		sg = sg_next(sg);
+
+	for (i = tcp_ctask->sg_count; i < scsi_sg_count(sc); i++, sg = sg_next(sg)) {
 		char *dest;
 
-		dest = kmap_atomic(sg[i].page, KM_SOFTIRQ0);
-		rc = iscsi_ctask_copy(tcp_conn, ctask, dest + sg[i].offset,
-				      sg[i].length, offset);
+		dest = kmap_atomic(sg->page, KM_SOFTIRQ0);
+		rc = iscsi_ctask_copy(tcp_conn, ctask, dest + sg->offset,
+				      sg->length, offset);
 		kunmap_atomic(dest, KM_SOFTIRQ0);
 		if (rc == -EAGAIN)
 			/* continue with the next SKB/PDU */
@@ -728,13 +731,13 @@ static int iscsi_scsi_data_in(struct isc
 				if (!offset)
 					crypto_hash_update(
 							&tcp_conn->rx_hash,
-							&sg[i], sg[i].length);
+							sg, sg->length);
 				else
 					partial_sg_digest_update(
 							&tcp_conn->rx_hash,
-							&sg[i],
-							sg[i].offset + offset,
-							sg[i].length - offset);
+							sg,
+							sg->offset + offset,
+							sg->length - offset);
 			}
 			offset = 0;
 			tcp_ctask->sg_count++;
@@ -746,9 +749,9 @@ static int iscsi_scsi_data_in(struct isc
 				 * data-in is complete, but buffer not...
 				 */
 				partial_sg_digest_update(&tcp_conn->rx_hash,
-							 &sg[i],
-							 sg[i].offset,
-							 sg[i].length-rc);
+							 sg,
+							 sg->offset,
+							 sg->length - rc);
 			rc = 0;
 			break;
 		}
@@ -1246,7 +1249,7 @@ iscsi_solicit_data_cont(struct iscsi_con
 		return;
 
 	iscsi_buf_init_sg(&r2t->sendbuf, r2t->sg);
-	r2t->sg += 1;
+	r2t->sg = sg_next(r2t->sg);
 }
 
 static void iscsi_set_padding(struct iscsi_tcp_cmd_task *tcp_ctask,
@@ -1378,8 +1381,8 @@ iscsi_send_cmd_hdr(struct iscsi_conn *co
 			struct scatterlist *sg = scsi_sglist(sc);
 
 			iscsi_buf_init_sg(&tcp_ctask->sendbuf, sg);
-			tcp_ctask->sg = sg + 1;
-			tcp_ctask->bad_sg = sg + scsi_sg_count(sc);
+			tcp_ctask->sg = sg_next(sg);
+			tcp_ctask->bad_sg = sg_last(sg, scsi_sg_count(sc));
 
 			debug_scsi("cmd [itt 0x%x total %d imm_data %d "
 				   "unsol count %d, unsol offset %d]\n",
@@ -1509,7 +1512,7 @@ iscsi_send_data(struct iscsi_cmd_task *c
 				buf_sent);
 		if (!iscsi_buf_left(sendbuf) && *sg != tcp_ctask->bad_sg) {
 			iscsi_buf_init_sg(sendbuf, *sg);
-			*sg = *sg + 1;
+			*sg = sg_next(*sg);
 		}
 
 		if (rc)
-- 
1.4.3.2


^ permalink raw reply related	[flat|nested] 73+ messages in thread

* Re: [PATCH 31/33] Fusion: sg chaining support
  2007-07-16  9:47 ` [PATCH 31/33] Fusion: " Jens Axboe
@ 2007-07-16 13:20   ` FUJITA Tomonori
  2007-07-16 13:25     ` Jens Axboe
  0 siblings, 1 reply; 73+ messages in thread
From: FUJITA Tomonori @ 2007-07-16 13:20 UTC (permalink / raw)
  To: jens.axboe; +Cc: linux-kernel, linux-scsi, Eric.Moore, fujita.tomonori

From: Jens Axboe <jens.axboe@oracle.com>
Subject: [PATCH 31/33] Fusion: sg chaining support
Date: Mon, 16 Jul 2007 11:47:45 +0200

> Cc: Eric.Moore@lsi.com
> Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
> ---
>  drivers/message/fusion/mptscsih.c |    4 ++--
>  1 files changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/message/fusion/mptscsih.c b/drivers/message/fusion/mptscsih.c
> index d356173..f087249 100644
> --- a/drivers/message/fusion/mptscsih.c
> +++ b/drivers/message/fusion/mptscsih.c
> @@ -297,7 +297,7 @@ nextSGEset:
>  		v2 = sg_dma_address(sg);
>  		mptscsih_add_sge(psge, sgflags | thisxfer, v2);
>  
> -		sg++;		/* Get next SG element from the OS */
> +		sg = sg_next(sg);	/* Get next SG element from the OS */
>  		psge += (sizeof(u32) + sizeof(dma_addr_t));
>  		sgeOffset += (sizeof(u32) + sizeof(dma_addr_t));
>  		sg_done++;
> @@ -318,7 +318,7 @@ nextSGEset:
>  		v2 = sg_dma_address(sg);
>  		mptscsih_add_sge(psge, sgflags | thisxfer, v2);
>  		/*
> -		sg++;
> +		sg = sg_next(sg);
>  		psge += (sizeof(u32) + sizeof(dma_addr_t));
>  		*/
>  		sgeOffset += (sizeof(u32) + sizeof(dma_addr_t));
> -- 

Do we also need this change?

diff --git a/drivers/message/fusion/mptscsih.c b/drivers/message/fusion/mptscsih.c
index f087249..a3e6170 100644
--- a/drivers/message/fusion/mptscsih.c
+++ b/drivers/message/fusion/mptscsih.c
@@ -289,7 +289,7 @@ nextSGEset:
 	for (ii=0; ii < (numSgeThisFrame-1); ii++) {
 		thisxfer = sg_dma_len(sg);
 		if (thisxfer == 0) {
-			sg ++; /* Get next SG element from the OS */
+			sg = sg_next(sg); /* Get next SG element from the OS */
 			sg_done++;
 			continue;
 		}
-- 
1.4.3.2


^ permalink raw reply related	[flat|nested] 73+ messages in thread

* Re: [PATCH 29/33] infiniband: sg chaining support
  2007-07-16  9:47 ` [PATCH 29/33] infiniband: sg chaining support Jens Axboe
@ 2007-07-16 13:21   ` FUJITA Tomonori
  2007-07-16 13:26     ` Jens Axboe
  0 siblings, 1 reply; 73+ messages in thread
From: FUJITA Tomonori @ 2007-07-16 13:21 UTC (permalink / raw)
  To: jens.axboe; +Cc: linux-kernel, linux-scsi, rolandd, fujita.tomonori

From: Jens Axboe <jens.axboe@oracle.com>
Subject: [PATCH 29/33] infiniband: sg chaining support
Date: Mon, 16 Jul 2007 11:47:43 +0200

> @@ -226,7 +228,8 @@ static int iser_sg_to_page_vec(struct iser_data_buf *data,
>  			       struct iser_page_vec *page_vec,
>  			       struct ib_device *ibdev)
>  {
> -	struct scatterlist *sg = (struct scatterlist *)data->buf;
> +	struct scatterlist *sgl = (struct scatterlist *)data->buf;
> +	struct scatterlist *sg;
>  	u64 first_addr, last_addr, page;
>  	int end_aligned;
>  	unsigned int cur_page = 0;
> @@ -234,14 +237,14 @@ static int iser_sg_to_page_vec(struct iser_data_buf *data,
>  	int i;
>  
>  	/* compute the offset of first element */
> -	page_vec->offset = (u64) sg[0].offset & ~MASK_4K;
> +	page_vec->offset = (u64) sgl[0].offset & ~MASK_4K;
>  
> -	for (i = 0; i < data->dma_nents; i++) {
> -		unsigned int dma_len = ib_sg_dma_len(ibdev, &sg[i]);
> +	for_each_sg(sgl, sg, data->dma_nents, i) {
> +		unsigned int dma_len = ib_sg_dma_len(ibdev, sg);
>  
>  		total_sz += dma_len;
>  
> -		first_addr = ib_sg_dma_address(ibdev, &sg[i]);
> +		first_addr = ib_sg_dma_address(ibdev, sg);
>  		last_addr  = first_addr + dma_len;
>  
>  		end_aligned   = !(last_addr  & ~MASK_4K);
> @@ -249,9 +252,9 @@ static int iser_sg_to_page_vec(struct iser_data_buf *data,
>  		/* continue to collect page fragments till aligned or SG ends */
>  		while (!end_aligned && (i + 1 < data->dma_nents)) {
>  			i++;

Do we need to replace i++ with sg = sg_next(sg) here?


> -			dma_len = ib_sg_dma_len(ibdev, &sg[i]);
> +			dma_len = ib_sg_dma_len(ibdev, sg);
>  			total_sz += dma_len;
> -			last_addr = ib_sg_dma_address(ibdev, &sg[i]) + dma_len;
> +			last_addr = ib_sg_dma_address(ibdev, sg) + dma_len;
>  			end_aligned = !(last_addr  & ~MASK_4K);
>  		}
>  
> @@ -286,25 +289,26 @@ static int iser_sg_to_page_vec(struct iser_data_buf *data,
>  static unsigned int iser_data_buf_aligned_len(struct iser_data_buf *data,
>  					      struct ib_device *ibdev)
>  {
> -	struct scatterlist *sg;
> +	struct scatterlist *sgl, *sg;
>  	u64 end_addr, next_addr;
>  	int i, cnt;
>  	unsigned int ret_len = 0;
>  
> -	sg = (struct scatterlist *)data->buf;
> +	sgl = (struct scatterlist *)data->buf;
>  
> -	for (cnt = 0, i = 0; i < data->dma_nents; i++, cnt++) {
> +	cnt = 0;
> +	for_each_sg(sgl, sg, data->dma_nents, i) {
>  		/* iser_dbg("Checking sg iobuf [%d]: phys=0x%08lX "
>  		   "offset: %ld sz: %ld\n", i,
> -		   (unsigned long)page_to_phys(sg[i].page),
> -		   (unsigned long)sg[i].offset,
> -		   (unsigned long)sg[i].length); */
> -		end_addr = ib_sg_dma_address(ibdev, &sg[i]) +
> -			   ib_sg_dma_len(ibdev, &sg[i]);
> +		   (unsigned long)page_to_phys(sg->page),
> +		   (unsigned long)sg->offset,
> +		   (unsigned long)sg->length); */
> +		end_addr = ib_sg_dma_address(ibdev, sg) +
> +			   ib_sg_dma_len(ibdev, sg);
>  		/* iser_dbg("Checking sg iobuf end address "
>  		       "0x%08lX\n", end_addr); */
>  		if (i + 1 < data->dma_nents) {
> -			next_addr = ib_sg_dma_address(ibdev, &sg[i+1]);
> +			next_addr = ib_sg_dma_address(ibdev, sg_next(sg));
>  			/* are i, i+1 fragments of the same page? */
>  			if (end_addr == next_addr)
>  				continue;
> @@ -324,15 +328,16 @@ static unsigned int iser_data_buf_aligned_len(struct iser_data_buf *data,
>  static void iser_data_buf_dump(struct iser_data_buf *data,
>  			       struct ib_device *ibdev)
>  {
> -	struct scatterlist *sg = (struct scatterlist *)data->buf;
> +	struct scatterlist *sgl = (struct scatterlist *)data->buf;
> +	struct scatterlist *sg;
>  	int i;
>  
> -	for (i = 0; i < data->dma_nents; i++)
> +	for_each_sg(sgl, sg, data->dma_nents, i)
>  		iser_err("sg[%d] dma_addr:0x%lX page:0x%p "
>  			 "off:0x%x sz:0x%x dma_len:0x%x\n",
> -			 i, (unsigned long)ib_sg_dma_address(ibdev, &sg[i]),
> -			 sg[i].page, sg[i].offset,
> -			 sg[i].length, ib_sg_dma_len(ibdev, &sg[i]));
> +			 i, (unsigned long)ib_sg_dma_address(ibdev, sg),
> +			 sg->page, sg->offset,
> +			 sg->length, ib_sg_dma_len(ibdev, sg));
>  }
>  
>  static void iser_dump_page_vec(struct iser_page_vec *page_vec)
> -- 
> 1.5.3.rc0.90.gbaa79
> 
> -
> To unsubscribe from this list: send the line "unsubscribe linux-scsi" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH 00/33] SG table chaining support
  2007-07-16 13:19 ` [PATCH 00/33] SG table " FUJITA Tomonori
@ 2007-07-16 13:23   ` Jens Axboe
  0 siblings, 0 replies; 73+ messages in thread
From: Jens Axboe @ 2007-07-16 13:23 UTC (permalink / raw)
  To: FUJITA Tomonori; +Cc: mark_salyzyn, linux-kernel, linux-scsi, fujita.tomonori

On Mon, Jul 16 2007, FUJITA Tomonori wrote:
> From: Jens Axboe <jens.axboe@oracle.com>
> Subject: [PATCH 00/33] SG table chaining support
> Date: Mon, 16 Jul 2007 11:47:14 +0200
> 
> > Changes since last post:
> > 
> > - Rebase to current -git. Lots of SCSI drivers have been converted
> >   to use the sg accessor helpers, which nicely shrinks this patchset
> >   from 70 to 33 patches. Great!
> 
> ---
> From: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
> Subject: [PATCH] ips: sg chaining support
> 
> ips properly uses scsi_for_each_sg for the normal I/O path; however,
> the breakup path doesn't.
> 
> Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>

Thanks, applied!

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH 00/33] SG table chaining support
  2007-07-16 13:19 ` FUJITA Tomonori
@ 2007-07-16 13:24   ` Jens Axboe
  0 siblings, 0 replies; 73+ messages in thread
From: Jens Axboe @ 2007-07-16 13:24 UTC (permalink / raw)
  To: FUJITA Tomonori; +Cc: michaelc, linux-kernel, linux-scsi, fujita.tomonori

On Mon, Jul 16 2007, FUJITA Tomonori wrote:
> From: Jens Axboe <jens.axboe@oracle.com>
> Subject: [PATCH 00/33] SG table chaining support
> Date: Mon, 16 Jul 2007 11:47:14 +0200
> 
> > Changes since last post:
> > 
> > - Rebase to current -git. Lots of SCSI drivers have been converted
> >   to use the sg accessor helpers, which nicely shrinks this patchset
> >   from 70 to 33 patches. Great!
> 
> It's against Jens' sglist branch though there are lots of changes to
> the data path in Mike's iscsi tree.
> 
> ---
> From: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
> Subject: [PATCH] iscsi_tcp: sg chaining support

Thanks, applied!

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH 31/33] Fusion: sg chaining support
  2007-07-16 13:20   ` FUJITA Tomonori
@ 2007-07-16 13:25     ` Jens Axboe
  0 siblings, 0 replies; 73+ messages in thread
From: Jens Axboe @ 2007-07-16 13:25 UTC (permalink / raw)
  To: FUJITA Tomonori; +Cc: linux-kernel, linux-scsi, Eric.Moore, fujita.tomonori

On Mon, Jul 16 2007, FUJITA Tomonori wrote:
> From: Jens Axboe <jens.axboe@oracle.com>
> Subject: [PATCH 31/33] Fusion: sg chaining support
> Date: Mon, 16 Jul 2007 11:47:45 +0200
> 
> > Cc: Eric.Moore@lsi.com
> > Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
> > ---
> >  drivers/message/fusion/mptscsih.c |    4 ++--
> >  1 files changed, 2 insertions(+), 2 deletions(-)
> > 
> > diff --git a/drivers/message/fusion/mptscsih.c b/drivers/message/fusion/mptscsih.c
> > index d356173..f087249 100644
> > --- a/drivers/message/fusion/mptscsih.c
> > +++ b/drivers/message/fusion/mptscsih.c
> > @@ -297,7 +297,7 @@ nextSGEset:
> >  		v2 = sg_dma_address(sg);
> >  		mptscsih_add_sge(psge, sgflags | thisxfer, v2);
> >  
> > -		sg++;		/* Get next SG element from the OS */
> > +		sg = sg_next(sg);	/* Get next SG element from the OS */
> >  		psge += (sizeof(u32) + sizeof(dma_addr_t));
> >  		sgeOffset += (sizeof(u32) + sizeof(dma_addr_t));
> >  		sg_done++;
> > @@ -318,7 +318,7 @@ nextSGEset:
> >  		v2 = sg_dma_address(sg);
> >  		mptscsih_add_sge(psge, sgflags | thisxfer, v2);
> >  		/*
> > -		sg++;
> > +		sg = sg_next(sg);
> >  		psge += (sizeof(u32) + sizeof(dma_addr_t));
> >  		*/
> >  		sgeOffset += (sizeof(u32) + sizeof(dma_addr_t));
> > -- 
> 
> We also need this change?
> 
> diff --git a/drivers/message/fusion/mptscsih.c b/drivers/message/fusion/mptscsih.c
> index f087249..a3e6170 100644
> --- a/drivers/message/fusion/mptscsih.c
> +++ b/drivers/message/fusion/mptscsih.c
> @@ -289,7 +289,7 @@ nextSGEset:
>  	for (ii=0; ii < (numSgeThisFrame-1); ii++) {
>  		thisxfer = sg_dma_len(sg);
>  		if (thisxfer == 0) {
> -			sg ++; /* Get next SG element from the OS */
> +			sg = sg_next(sg); /* Get next SG element from the OS */
>  			sg_done++;
>  			continue;
>  		}

Indeed we do, thanks for spotting that. Applied.

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH 29/33] infiniband: sg chaining support
  2007-07-16 13:21   ` FUJITA Tomonori
@ 2007-07-16 13:26     ` Jens Axboe
  0 siblings, 0 replies; 73+ messages in thread
From: Jens Axboe @ 2007-07-16 13:26 UTC (permalink / raw)
  To: FUJITA Tomonori; +Cc: linux-kernel, linux-scsi, rolandd, fujita.tomonori

On Mon, Jul 16 2007, FUJITA Tomonori wrote:
> From: Jens Axboe <jens.axboe@oracle.com>
> Subject: [PATCH 29/33] infiniband: sg chaining support
> Date: Mon, 16 Jul 2007 11:47:43 +0200
> 
> > @@ -226,7 +228,8 @@ static int iser_sg_to_page_vec(struct iser_data_buf *data,
> >  			       struct iser_page_vec *page_vec,
> >  			       struct ib_device *ibdev)
> >  {
> > -	struct scatterlist *sg = (struct scatterlist *)data->buf;
> > +	struct scatterlist *sgl = (struct scatterlist *)data->buf;
> > +	struct scatterlist *sg;
> >  	u64 first_addr, last_addr, page;
> >  	int end_aligned;
> >  	unsigned int cur_page = 0;
> > @@ -234,14 +237,14 @@ static int iser_sg_to_page_vec(struct iser_data_buf *data,
> >  	int i;
> >  
> >  	/* compute the offset of first element */
> > -	page_vec->offset = (u64) sg[0].offset & ~MASK_4K;
> > +	page_vec->offset = (u64) sgl[0].offset & ~MASK_4K;
> >  
> > -	for (i = 0; i < data->dma_nents; i++) {
> > -		unsigned int dma_len = ib_sg_dma_len(ibdev, &sg[i]);
> > +	for_each_sg(sgl, sg, data->dma_nents, i) {
> > +		unsigned int dma_len = ib_sg_dma_len(ibdev, sg);
> >  
> >  		total_sz += dma_len;
> >  
> > -		first_addr = ib_sg_dma_address(ibdev, &sg[i]);
> > +		first_addr = ib_sg_dma_address(ibdev, sg);
> >  		last_addr  = first_addr + dma_len;
> >  
> >  		end_aligned   = !(last_addr  & ~MASK_4K);
> > @@ -249,9 +252,9 @@ static int iser_sg_to_page_vec(struct iser_data_buf *data,
> >  		/* continue to collect page fragments till aligned or SG ends */
> >  		while (!end_aligned && (i + 1 < data->dma_nents)) {
> >  			i++;
> 
> Do we need to replace i++ with sg = sg_next(sg) here?

We do, updating the patch. Thanks for your review!
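
For reference, the updated inner loop would then presumably advance both
the index and the element pointer, along these lines (just a sketch of
the fix being discussed here, not the reposted patch):

		/* continue to collect page fragments till aligned or SG ends */
		while (!end_aligned && (i + 1 < data->dma_nents)) {
			/* step the element along with the count, since the
			 * entry can no longer be reached by indexing */
			sg = sg_next(sg);
			i++;
			dma_len = ib_sg_dma_len(ibdev, sg);
			total_sz += dma_len;
			last_addr = ib_sg_dma_address(ibdev, sg) + dma_len;
			end_aligned = !(last_addr & ~MASK_4K);
		}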

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH 00/33] SG table chaining support
  2007-07-16  9:47 [PATCH 00/33] SG table chaining support Jens Axboe
                   ` (34 preceding siblings ...)
  2007-07-16 13:19 ` FUJITA Tomonori
@ 2007-07-16 14:05 ` John Stoffel
  2007-07-16 14:23   ` Martin K. Petersen
  35 siblings, 1 reply; 73+ messages in thread
From: John Stoffel @ 2007-07-16 14:05 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-kernel, linux-scsi


Jens> A repost of this patchset, which adds support for chaining of sg
Jens> tables.  This enables much larger IO commands, since we don't
Jens> have to allocate large consecutive pieces of memory to represent
Jens> the sgtable of a huge command. Right now Linux is limited to
Jens> somewhere between 128 and 256 segments, depending on the
Jens> architecture. This translates into at most 512k-1mb request
Jens> sizes. With this patchset, I've successfully pushed 10MiB
Jens> commands through the IO stack.

Will this help out tape drive performance at all?  I looked through
the patches quickly, esp the AIC7xxx stuff since that's what I use,
but nothing jumped out at me...

John

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH 00/33] SG table chaining support
  2007-07-16 14:05 ` John Stoffel
@ 2007-07-16 14:23   ` Martin K. Petersen
  2007-07-16 14:43     ` Jens Axboe
  2007-07-16 16:02     ` Kai Makisara
  0 siblings, 2 replies; 73+ messages in thread
From: Martin K. Petersen @ 2007-07-16 14:23 UTC (permalink / raw)
  To: John Stoffel; +Cc: Jens Axboe, linux-kernel, linux-scsi

>>>>> "John" == John Stoffel <john@stoffel.org> writes:

John> Will this help out tape drive performance at all?  I looked
John> through the patches quickly, esp the AIC7xxx stuff since that's
John> what I use, but nothing jumped out at me...

Yes.  Most modern tape drives want a block size of 1MB or higher.
With the old stack we'd be stuck at 512KB because the sg limitations
caused us to come just short of 1MB...
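(Assuming 4 kB pages, the arithmetic is: a flat sg table of 128 entries
tops out at 128 * 4 kB = 512 kB, and even 256 entries barely reaches 1 MB.)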

-- 
Martin K. Petersen	Oracle Linux Engineering


^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH 00/33] SG table chaining support
  2007-07-16 14:23   ` Martin K. Petersen
@ 2007-07-16 14:43     ` Jens Axboe
  2007-07-16 16:02     ` Kai Makisara
  1 sibling, 0 replies; 73+ messages in thread
From: Jens Axboe @ 2007-07-16 14:43 UTC (permalink / raw)
  To: Martin K. Petersen; +Cc: John Stoffel, linux-kernel, linux-scsi

On Mon, Jul 16 2007, Martin K. Petersen wrote:
> >>>>> "John" == John Stoffel <john@stoffel.org> writes:
> 
> John> Will this help out tape drive performance at all?  I looked
> John> through the patches quickly, esp the AIC7xxx stuff since that's
> John> what I use, but nothing jumped out at me...
> 
> Yes.  Most modern tape drives want a block size of 1MB or higher.
> With the old stack we'd be stuck at 512KB because the sg limitations
> caused us to come just short of 1MB...

Indeed. John, note that the driver changes aren't related to enabling
some hardware feature. Drivers just need to be converted to use the sg
walker helpers instead of doing it manually; then they'll also gain
larger IO support. The SCSI drivers are currently being transitioned to
that separately; my patchset just contains patches for the remaining drivers
(which include non-SCSI ones as well).
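
The conversion itself is mechanical in most of the drivers in this series:
drop the pointer arithmetic / array indexing on the scatterlist and go
through the accessor helpers instead. Roughly (an illustrative sketch,
where handle_segment() and the variable names are just stand-ins):

	struct scatterlist *sg;
	int i;

	/* old style: assumes one flat, contiguous sg array */
	for (i = 0; i < nents; i++)
		handle_segment(&sglist[i]);

	/* converted: for_each_sg()/sg_next() follow chain links transparently */
	for_each_sg(sglist, sg, nents, i)
		handle_segment(sg);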

The hardware has to support a big number of segments of course, looking
at aic7xxx it seems to be limited at 128. From the comment that looks
like it can be increased though, see AHD_NSEG in
drivers/scsi/aic7xxx/aic79xx_osm.h:

/*
 * Number of SG segments we require.  So long as the S/G segments for
 * a particular transaction are allocated in a physically contiguous
 * manner and are allocated below 4GB, the number of S/G segments is
 * unrestricted.
 */
#define AHD_NSEG 128

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH 00/33] SG table chaining support
  2007-07-16 14:23   ` Martin K. Petersen
  2007-07-16 14:43     ` Jens Axboe
@ 2007-07-16 16:02     ` Kai Makisara
  2007-07-16 16:43       ` Jens Axboe
  1 sibling, 1 reply; 73+ messages in thread
From: Kai Makisara @ 2007-07-16 16:02 UTC (permalink / raw)
  To: Martin K. Petersen; +Cc: John Stoffel, Jens Axboe, linux-kernel, linux-scsi

On Mon, 16 Jul 2007, Martin K. Petersen wrote:

> >>>>> "John" == John Stoffel <john@stoffel.org> writes:
> 
> John> Will this help out tape drive performance at all?  I looked
> John> through the patches quickly, esp the AIC7xxx stuff since that's
> John> what I use, but nothing jumped out at me...
> 
> Yes.  Most modern tape drives want a block size of 1MB or higher.
> With the old stack we'd be stuck at 512KB because the sg limitations
> caused us to come just short of 1MB...
> 
Tape block sizes up to 16 MB have been possible for a very long time, but
this has required tuning of the block/scsi parameters. Very few people
seem to have done this, and the common (mis)belief seems to be that the
tape block size limit has been 512 kB. It is good if this tuning is not
needed in the future.

-- 
Kai

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH 00/33] SG table chaining support
  2007-07-16 16:02     ` Kai Makisara
@ 2007-07-16 16:43       ` Jens Axboe
  0 siblings, 0 replies; 73+ messages in thread
From: Jens Axboe @ 2007-07-16 16:43 UTC (permalink / raw)
  To: Kai Makisara; +Cc: Martin K. Petersen, John Stoffel, linux-kernel, linux-scsi

On Mon, Jul 16 2007, Kai Makisara wrote:
> On Mon, 16 Jul 2007, Martin K. Petersen wrote:
> 
> > >>>>> "John" == John Stoffel <john@stoffel.org> writes:
> > 
> > John> Will this help out tape drive performance at all?  I looked
> > John> through the patches quickly, esp the AIC7xxx stuff since that's
> > John> what I use, but nothing jumped out at me...
> > 
> > Yes.  Most modern tape drives want a block size of 1MB or higher.
> > With the old stack we'd be stuck at 512KB because the sg limitations
> > caused us to come just short of 1MB...
> > 
> Tape block sizes up to 16 MB have been possible for a very long time but 
> this has required tuning of the block/scsi parameters. Very few people 
> seem to have done this and the common (mis)belief seems to be that the 
> tape block size limit has been 512 kB. It is good if this tuning is not
> needed in future.

The main difference is that now you get to do it without hacks and in a
clean way, so it works through the normal IO path and not through some
on-the-side thing (or st only).

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH 03/33] block: convert to using sg helpers
  2007-07-16 19:21   ` Bartlomiej Zolnierkiewicz
@ 2007-07-16 19:14     ` Jens Axboe
  0 siblings, 0 replies; 73+ messages in thread
From: Jens Axboe @ 2007-07-16 19:14 UTC (permalink / raw)
  To: Bartlomiej Zolnierkiewicz; +Cc: linux-kernel, linux-scsi

On Mon, Jul 16 2007, Bartlomiej Zolnierkiewicz wrote:
> 
> Hi,
> 
> On Monday 16 July 2007, Jens Axboe wrote:
> > Convert the main rq mapper (blk_rq_map_sg()) to the sg helper setup.
> > 
> > Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
> > ---
> >  block/ll_rw_blk.c |   19 ++++++++++++-------
> >  1 files changed, 12 insertions(+), 7 deletions(-)
> > 
> > diff --git a/block/ll_rw_blk.c b/block/ll_rw_blk.c
> > index ef42bb2..ab71087 100644
> > --- a/block/ll_rw_blk.c
> > +++ b/block/ll_rw_blk.c
> > @@ -30,6 +30,7 @@
> >  #include <linux/cpu.h>
> >  #include <linux/blktrace_api.h>
> >  #include <linux/fault-inject.h>
> > +#include <linux/scatterlist.h>
> >  
> >  /*
> >   * for max sense size
> > @@ -1307,9 +1308,11 @@ static int blk_hw_contig_segment(request_queue_t *q, struct bio *bio,
> >   * map a request to scatterlist, return number of sg entries setup. Caller
> >   * must make sure sg can hold rq->nr_phys_segments entries
> >   */
> > -int blk_rq_map_sg(request_queue_t *q, struct request *rq, struct scatterlist *sg)
> > +int blk_rq_map_sg(request_queue_t *q, struct request *rq,
> > +		  struct scatterlist *sglist)
> >  {
> >  	struct bio_vec *bvec, *bvprv;
> > +	struct scatterlist *next_sg, *sg;
> >  	struct bio *bio;
> >  	int nsegs, i, cluster;
> >  
> > @@ -1320,6 +1323,7 @@ int blk_rq_map_sg(request_queue_t *q, struct request *rq, struct scatterlist *sg
> >  	 * for each bio in rq
> >  	 */
> >  	bvprv = NULL;
> > +	sg = next_sg = &sglist[0];
> >  	rq_for_each_bio(bio, rq) {
> >  		/*
> >  		 * for each segment in bio
> > @@ -1328,7 +1332,7 @@ int blk_rq_map_sg(request_queue_t *q, struct request *rq, struct scatterlist *sg
> >  			int nbytes = bvec->bv_len;
> >  
> >  			if (bvprv && cluster) {
> > -				if (sg[nsegs - 1].length + nbytes > q->max_segment_size)
> > +				if (sg->length + nbytes > q->max_segment_size)
> >  					goto new_segment;
> >  
> >  				if (!BIOVEC_PHYS_MERGEABLE(bvprv, bvec))
> > @@ -1336,14 +1340,15 @@ int blk_rq_map_sg(request_queue_t *q, struct request *rq, struct scatterlist *sg
> >  				if (!BIOVEC_SEG_BOUNDARY(q, bvprv, bvec))
> >  					goto new_segment;
> >  
> > -				sg[nsegs - 1].length += nbytes;
> > +				sg->length += nbytes;
> >  			} else {
> >  new_segment:
> > -				memset(&sg[nsegs],0,sizeof(struct scatterlist));
> 
> Is this intended?  If so why this is no longer needed?

It is; actually, it's even required. All input fields should be set, and
the mapping/iommu helpers should set all output fields.
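
Spelled out, the convention looks roughly like this (an illustrative
sketch; use_mapping() and the surrounding variables are stand-ins, not
code from the patches):

	/* the caller (e.g. blk_rq_map_sg()) writes only the input fields */
	sg->page   = bvec->bv_page;
	sg->offset = bvec->bv_offset;
	sg->length = nbytes;

	/* dma_map_sg() and the iommu code then fill in the output fields,
	 * so a memset() of the entry beforehand buys nothing */
	nmapped = dma_map_sg(dev, sglist, nsegs, direction);
	for_each_sg(sglist, sg, nmapped, i)
		use_mapping(sg_dma_address(sg), sg_dma_len(sg));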

But I understand your concerns; it's definitely something to keep an eye
on, and I have so far. I even listed it as the primary concern in an
email (off list) to Tomo today.

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH 05/33] Add chained sg support to linux/scatterlist.h
  2007-07-16 19:21   ` Bartlomiej Zolnierkiewicz
@ 2007-07-16 19:15     ` Jens Axboe
  0 siblings, 0 replies; 73+ messages in thread
From: Jens Axboe @ 2007-07-16 19:15 UTC (permalink / raw)
  To: Bartlomiej Zolnierkiewicz; +Cc: linux-kernel, linux-scsi

On Mon, Jul 16 2007, Bartlomiej Zolnierkiewicz wrote:
> On Monday 16 July 2007, Jens Axboe wrote:
> > The core of the patch - allow the last sg element in a scatterlist
> > table to point to the start of a new table. We overload the LSB of
> > the page pointer to indicate whether this is a valid sg entry, or
> > merely a link to the next list.
> > 
> > Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
> > ---
> >  include/linux/scatterlist.h |   79 +++++++++++++++++++++++++++++++++++++++++-
> >  1 files changed, 77 insertions(+), 2 deletions(-)
> > 
> > diff --git a/include/linux/scatterlist.h b/include/linux/scatterlist.h
> > index bed5ab4..10f6223 100644
> > --- a/include/linux/scatterlist.h
> > +++ b/include/linux/scatterlist.h
> > @@ -20,8 +20,36 @@ static inline void sg_init_one(struct scatterlist *sg, const void *buf,
> >  	sg_set_buf(sg, buf, buflen);
> >  }
> >  
> > -#define sg_next(sg)		((sg) + 1)
> > -#define sg_last(sg, nents)	(&(sg[(nents) - 1]))
> > +/*
> > + * We overload the LSB of the page pointer to indicate whether it's
> > + * a valid sg entry, or whether it points to the start of a new scatterlist.
> > + * Those low bits are there for everyone! (thanks mason :-)
> > + */
> > +#define sg_is_chain(sg)		((unsigned long) (sg)->page & 0x01)
> > +#define sg_chain_ptr(sg)	\
> > +	((struct scatterlist *) ((unsigned long) (sg)->page & ~0x01))
> > +
> > +/**
> > + * sg_next - return the next scatterlist entry in a list
> > + * @sg:		The current sg entry
> > + *
> > + * Usually the next entry will be @sg@ + 1, but if this sg element is part
> > + * of a chained scatterlist, it could jump to the start of a new
> > + * scatterlist array.
> > + *
> > + * Note that the caller must ensure that there are further entries after
> > + * the current entry, this function will NOT return NULL for an end-of-list.
> > + *
> > + */
> > +static inline struct scatterlist *sg_next(struct scatterlist *sg)
> > +{
> > +	sg++;
> > +
> > +	if (unlikely(sg_is_chain(sg)))
> > +		sg = sg_chain_ptr(sg);
> > +
> > +	return sg;
> > +}
> >  
> >  /*
> >   * Loop over each sg element, following the pointer to a new list if necessary
> > @@ -29,4 +57,51 @@ static inline void sg_init_one(struct scatterlist *sg, const void *buf,
> >  #define for_each_sg(sglist, sg, nr, __i)	\
> >  	for (__i = 0, sg = (sglist); __i < (nr); __i++, sg = sg_next(sg))
> >  
> > +/**
> > + * sg_last - return the last scatterlist entry in a list
> > + * @sgl:	First entry in the scatterlist
> > + * @nents:	Number of entries in the scatterlist
> > + *
> > + * Should only be used casually, it (currently) scans the entire list
> > + * to get the last entry.
> > + *
> > + * Note that the @sgl@ pointer passed in need not be the first one,
> > + * the important bit is that @nents@ denotes the number of entries that
> > + * exist from @sgl@.
> > + *
> > + */
> > +static inline struct scatterlist *sg_last(struct scatterlist *sgl,
> > +					  unsigned int nents)
> > +{
> > +#ifdef ARCH_HAS_SG_CHAIN
> 
> Shouldn't this be #ifndef?

Good spotting! This function is only used for libata atapi, which
explains why my testing today didn't find any errors.

Thanks Bart, I'll rebase with that fix.
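
For anyone following along, the corrected helper would presumably end up
looking like this (a sketch with the conditional flipped, not the reposted
patch itself):

static inline struct scatterlist *sg_last(struct scatterlist *sgl,
					  unsigned int nents)
{
#ifndef ARCH_HAS_SG_CHAIN
	/* no chaining possible, so the last entry is simply index nents - 1 */
	struct scatterlist *ret = &sgl[nents - 1];
#else
	/* the table may be chained, walk it to find the real last entry */
	struct scatterlist *sg, *ret = NULL;
	int i;

	for_each_sg(sgl, sg, nents, i)
		ret = sg;
#endif
	return ret;
}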

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH 03/33] block: convert to using sg helpers
  2007-07-16  9:47 ` [PATCH 03/33] block: convert to using sg helpers Jens Axboe
@ 2007-07-16 19:21   ` Bartlomiej Zolnierkiewicz
  2007-07-16 19:14     ` Jens Axboe
  0 siblings, 1 reply; 73+ messages in thread
From: Bartlomiej Zolnierkiewicz @ 2007-07-16 19:21 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-kernel, linux-scsi


Hi,

On Monday 16 July 2007, Jens Axboe wrote:
> Convert the main rq mapper (blk_rq_map_sg()) to the sg helper setup.
> 
> Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
> ---
>  block/ll_rw_blk.c |   19 ++++++++++++-------
>  1 files changed, 12 insertions(+), 7 deletions(-)
> 
> diff --git a/block/ll_rw_blk.c b/block/ll_rw_blk.c
> index ef42bb2..ab71087 100644
> --- a/block/ll_rw_blk.c
> +++ b/block/ll_rw_blk.c
> @@ -30,6 +30,7 @@
>  #include <linux/cpu.h>
>  #include <linux/blktrace_api.h>
>  #include <linux/fault-inject.h>
> +#include <linux/scatterlist.h>
>  
>  /*
>   * for max sense size
> @@ -1307,9 +1308,11 @@ static int blk_hw_contig_segment(request_queue_t *q, struct bio *bio,
>   * map a request to scatterlist, return number of sg entries setup. Caller
>   * must make sure sg can hold rq->nr_phys_segments entries
>   */
> -int blk_rq_map_sg(request_queue_t *q, struct request *rq, struct scatterlist *sg)
> +int blk_rq_map_sg(request_queue_t *q, struct request *rq,
> +		  struct scatterlist *sglist)
>  {
>  	struct bio_vec *bvec, *bvprv;
> +	struct scatterlist *next_sg, *sg;
>  	struct bio *bio;
>  	int nsegs, i, cluster;
>  
> @@ -1320,6 +1323,7 @@ int blk_rq_map_sg(request_queue_t *q, struct request *rq, struct scatterlist *sg
>  	 * for each bio in rq
>  	 */
>  	bvprv = NULL;
> +	sg = next_sg = &sglist[0];
>  	rq_for_each_bio(bio, rq) {
>  		/*
>  		 * for each segment in bio
> @@ -1328,7 +1332,7 @@ int blk_rq_map_sg(request_queue_t *q, struct request *rq, struct scatterlist *sg
>  			int nbytes = bvec->bv_len;
>  
>  			if (bvprv && cluster) {
> -				if (sg[nsegs - 1].length + nbytes > q->max_segment_size)
> +				if (sg->length + nbytes > q->max_segment_size)
>  					goto new_segment;
>  
>  				if (!BIOVEC_PHYS_MERGEABLE(bvprv, bvec))
> @@ -1336,14 +1340,15 @@ int blk_rq_map_sg(request_queue_t *q, struct request *rq, struct scatterlist *sg
>  				if (!BIOVEC_SEG_BOUNDARY(q, bvprv, bvec))
>  					goto new_segment;
>  
> -				sg[nsegs - 1].length += nbytes;
> +				sg->length += nbytes;
>  			} else {
>  new_segment:
> -				memset(&sg[nsegs],0,sizeof(struct scatterlist));

Is this intended?  If so why this is no longer needed?

> -				sg[nsegs].page = bvec->bv_page;
> -				sg[nsegs].length = nbytes;
> -				sg[nsegs].offset = bvec->bv_offset;
> +				sg = next_sg;
> +				next_sg = sg_next(sg);
>  
> +				sg->page = bvec->bv_page;
> +				sg->length = nbytes;
> +				sg->offset = bvec->bv_offset;
>  				nsegs++;
>  			}
>  			bvprv = bvec;

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH 05/33] Add chained sg support to linux/scatterlist.h
  2007-07-16  9:47 ` [PATCH 05/33] Add chained sg support to linux/scatterlist.h Jens Axboe
@ 2007-07-16 19:21   ` Bartlomiej Zolnierkiewicz
  2007-07-16 19:15     ` Jens Axboe
  0 siblings, 1 reply; 73+ messages in thread
From: Bartlomiej Zolnierkiewicz @ 2007-07-16 19:21 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-kernel, linux-scsi

On Monday 16 July 2007, Jens Axboe wrote:
> The core of the patch - allow the last sg element in a scatterlist
> table to point to the start of a new table. We overload the LSB of
> the page pointer to indicate whether this is a valid sg entry, or
> merely a link to the next list.
> 
> Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
> ---
>  include/linux/scatterlist.h |   79 +++++++++++++++++++++++++++++++++++++++++-
>  1 files changed, 77 insertions(+), 2 deletions(-)
> 
> diff --git a/include/linux/scatterlist.h b/include/linux/scatterlist.h
> index bed5ab4..10f6223 100644
> --- a/include/linux/scatterlist.h
> +++ b/include/linux/scatterlist.h
> @@ -20,8 +20,36 @@ static inline void sg_init_one(struct scatterlist *sg, const void *buf,
>  	sg_set_buf(sg, buf, buflen);
>  }
>  
> -#define sg_next(sg)		((sg) + 1)
> -#define sg_last(sg, nents)	(&(sg[(nents) - 1]))
> +/*
> + * We overload the LSB of the page pointer to indicate whether it's
> + * a valid sg entry, or whether it points to the start of a new scatterlist.
> + * Those low bits are there for everyone! (thanks mason :-)
> + */
> +#define sg_is_chain(sg)		((unsigned long) (sg)->page & 0x01)
> +#define sg_chain_ptr(sg)	\
> +	((struct scatterlist *) ((unsigned long) (sg)->page & ~0x01))
> +
> +/**
> + * sg_next - return the next scatterlist entry in a list
> + * @sg:		The current sg entry
> + *
> + * Usually the next entry will be @sg@ + 1, but if this sg element is part
> + * of a chained scatterlist, it could jump to the start of a new
> + * scatterlist array.
> + *
> + * Note that the caller must ensure that there are further entries after
> + * the current entry, this function will NOT return NULL for an end-of-list.
> + *
> + */
> +static inline struct scatterlist *sg_next(struct scatterlist *sg)
> +{
> +	sg++;
> +
> +	if (unlikely(sg_is_chain(sg)))
> +		sg = sg_chain_ptr(sg);
> +
> +	return sg;
> +}
>  
>  /*
>   * Loop over each sg element, following the pointer to a new list if necessary
> @@ -29,4 +57,51 @@ static inline void sg_init_one(struct scatterlist *sg, const void *buf,
>  #define for_each_sg(sglist, sg, nr, __i)	\
>  	for (__i = 0, sg = (sglist); __i < (nr); __i++, sg = sg_next(sg))
>  
> +/**
> + * sg_last - return the last scatterlist entry in a list
> + * @sgl:	First entry in the scatterlist
> + * @nents:	Number of entries in the scatterlist
> + *
> + * Should only be used casually, it (currently) scans the entire list
> + * to get the last entry.
> + *
> + * Note that the @sgl@ pointer passed in need not be the first one,
> + * the important bit is that @nents@ denotes the number of entries that
> + * exist from @sgl@.
> + *
> + */
> +static inline struct scatterlist *sg_last(struct scatterlist *sgl,
> +					  unsigned int nents)
> +{
> +#ifdef ARCH_HAS_SG_CHAIN

Shouldn't this be #ifndef?

> +	struct scatterlist *ret = &sgl[nents - 1];
> +#else
> +	struct scatterlist *sg, *ret = NULL;
> +	int i;
> +
> +	for_each_sg(sgl, sg, nents, i)
> +		ret = sg;
> +
> +#endif
> +	return ret;
> +}
> +
> +/**
> + * sg_chain - Chain two sglists together
> + * @prv:	First scatterlist
> + * @prv_nents:	Number of entries in prv
> + * @sgl:	Second scatterlist
> + *
> + * Links @prv@ and @sgl@ together, to form a longer scatterlist.
> + *
> + */
> +static inline void sg_chain(struct scatterlist *prv, unsigned int prv_nents,
> +			    struct scatterlist *sgl)
> +{
> +#ifndef ARCH_HAS_SG_CHAIN
> +	BUG();
> +#endif
> +	prv[prv_nents - 1].page = (struct page *) ((unsigned long) sgl | 0x01);
> +}
> +
>  #endif /* _LINUX_SCATTERLIST_H */

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH 09/33] x86-64: update iommu/dma mapping functions to sg helpers
  2007-07-16  9:47 ` [PATCH 09/33] x86-64: update iommu/dma mapping functions to sg helpers Jens Axboe
@ 2007-07-16 20:06   ` Andrew Morton
  2007-07-16 20:10     ` Jens Axboe
  2007-07-17 11:03   ` Muli Ben-Yehuda
  1 sibling, 1 reply; 73+ messages in thread
From: Andrew Morton @ 2007-07-16 20:06 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-kernel, linux-scsi, ak

On Mon, 16 Jul 2007 11:47:23 +0200
Jens Axboe <jens.axboe@oracle.com> wrote:

> This prepares x86-64 for sg chaining support.
> 
> Additional improvements/fixups for pci-gart from
> Benny Halevy <bhalevy@panasas.com>
> 
> Cc: ak@suse.de
> Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
> ---
>  arch/x86_64/kernel/pci-calgary.c |   25 ++++++++------
>  arch/x86_64/kernel/pci-gart.c    |   63 ++++++++++++++++++++-----------------
>  arch/x86_64/kernel/pci-nommu.c   |    5 ++-
>  3 files changed, 51 insertions(+), 42 deletions(-)

This causes fairly extensive destruction of 2.6.23 changes which are
pending in Andi's tree.

I shall drop the block tree until a) Andi has merged the pending Calgary
changes and b) the block tree has been fixed up to account for those (and
other) changes.

Really, this was the perfectly worst time possible to go tromping all over
everyone else's trees.


^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH 09/33] x86-64: update iommu/dma mapping functions to sg helpers
  2007-07-16 20:06   ` Andrew Morton
@ 2007-07-16 20:10     ` Jens Axboe
  2007-07-16 20:34       ` Andrew Morton
  0 siblings, 1 reply; 73+ messages in thread
From: Jens Axboe @ 2007-07-16 20:10 UTC (permalink / raw)
  To: Andrew Morton; +Cc: linux-kernel, linux-scsi, ak

On Mon, Jul 16 2007, Andrew Morton wrote:
> On Mon, 16 Jul 2007 11:47:23 +0200
> Jens Axboe <jens.axboe@oracle.com> wrote:
> 
> > This prepares x86-64 for sg chaining support.
> > 
> > Additional improvements/fixups for pci-gart from
> > Benny Halevy <bhalevy@panasas.com>
> > 
> > Cc: ak@suse.de
> > Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
> > ---
> >  arch/x86_64/kernel/pci-calgary.c |   25 ++++++++------
> >  arch/x86_64/kernel/pci-gart.c    |   63 ++++++++++++++++++++-----------------
> >  arch/x86_64/kernel/pci-nommu.c   |    5 ++-
> >  3 files changed, 51 insertions(+), 42 deletions(-)
> 
> This causes fairly extensive destruction of 2.6.23 changes which are
> pending in Andi's tree.
> 
> I shall drop the block tree until a) Andi has merged the pending Calgary
> changes and b) the block tree has been fixed up to account for those (and
> other) changes.
> 
> Really, this was the perfectly worst time possible to go tromping all over
> everyone else's trees.

Well, that's a bit hard for me to know, all I can do is push my stuff to
for-akpm so that it gets -mm exposure. If I don't, then you are yelling
at me as well :-)

I'll keep rebasing sglist and the other branches I pull into for-akpm,
so you can just re-enable the for-akpm pull when the a) is true.

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH 09/33] x86-64: update iommu/dma mapping functions to sg helpers
  2007-07-16 20:10     ` Jens Axboe
@ 2007-07-16 20:34       ` Andrew Morton
  2007-07-16 22:08         ` Muli Ben-Yehuda
  2007-07-17  7:02         ` Jens Axboe
  0 siblings, 2 replies; 73+ messages in thread
From: Andrew Morton @ 2007-07-16 20:34 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-kernel, linux-scsi, ak

On Mon, 16 Jul 2007 22:10:03 +0200
Jens Axboe <jens.axboe@oracle.com> wrote:

> On Mon, Jul 16 2007, Andrew Morton wrote:
> > On Mon, 16 Jul 2007 11:47:23 +0200
> > Jens Axboe <jens.axboe@oracle.com> wrote:
> > 
> > > This prepares x86-64 for sg chaining support.
> > > 
> > > Additional improvements/fixups for pci-gart from
> > > Benny Halevy <bhalevy@panasas.com>
> > > 
> > > Cc: ak@suse.de
> > > Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
> > > ---
> > >  arch/x86_64/kernel/pci-calgary.c |   25 ++++++++------
> > >  arch/x86_64/kernel/pci-gart.c    |   63 ++++++++++++++++++++-----------------
> > >  arch/x86_64/kernel/pci-nommu.c   |    5 ++-
> > >  3 files changed, 51 insertions(+), 42 deletions(-)
> > 
> > This causes fairly extensive destruction of 2.6.23 changes which are
> > pending in Andi's tree.
> > 
> > I shall drop the block tree until a) Andi has merged the pending Calgary
> > changes and b) the block tree has been fixed up to account for those (and
> > other) changes.
> > 
> > Really, this was the perfectly worst time possible to go tromping all over
> > everyone else's trees.
> 
> Well, that's a bit hard for me to know, all I can do is push my stuff to
> for-akpm so that it gets -mm exposure. If I don't, then you are yelling
> at me as well :-)

It is to be expected that there will be conflicts when making large-scale
changes like this while there is 25MB of unmerged diff sitting out in 50
different trees.  Doing it around the -rc1 time would be better.

Even better would be not doing it at all: add the necessary stuff into the
block core and then send the followup work to the relevant maintainers. ie:
standard practice.

> I'll keep rebasing sglist and the other branches I pull into for-akpm,
> so you can just re-enable the for-akpm pull when the a) is true.

Andi's tree is very out of date and is about to be damaged (to what extent
I don't yet know) by a xen merge.  I don't know if a) is going to happen.


Things are more than usually messy this time around.

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH 09/33] x86-64: update iommu/dma mapping functions to sg helpers
  2007-07-16 20:34       ` Andrew Morton
@ 2007-07-16 22:08         ` Muli Ben-Yehuda
  2007-07-17  7:38           ` Jens Axboe
  2007-07-17  7:02         ` Jens Axboe
  1 sibling, 1 reply; 73+ messages in thread
From: Muli Ben-Yehuda @ 2007-07-16 22:08 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Jens Axboe, linux-kernel, linux-scsi, ak

On Mon, Jul 16, 2007 at 01:34:27PM -0700, Andrew Morton wrote:

> > I'll keep rebasing sglist and the other branches I pull into for-akpm,
> > so you can just re-enable the for-akpm pull when the a) is true.
> 
> Andi's tree is very out of date and is about to be damaged (to what
> extent I don't yet know) by a xen merge.  I don't know if a) is
> going to happen.

The Xen and Calgary bits are mutually exclusive, so hopefully (a) will
not be held up on account of the Xen merge (or for any other
reason... CalIOC2 machines which are out there won't boot without it
if CONFIG_CALGARY_IOMMU is enabled).

Cheers,
Muli

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH 30/33] USB storage: sg chaining support
  2007-07-16  9:47 ` [PATCH 30/33] USB storage: " Jens Axboe
@ 2007-07-17  6:12   ` Greg KH
  2007-07-17  7:01     ` Jens Axboe
  0 siblings, 1 reply; 73+ messages in thread
From: Greg KH @ 2007-07-17  6:12 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-kernel, linux-scsi

On Mon, Jul 16, 2007 at 11:47:44AM +0200, Jens Axboe wrote:
> [PATCH] USB storage: sg chaining support
> 
> Modify usb_stor_access_xfer_buf() to take a pointer to an sg
> entry pointer, so we can keep track of that instead of passing
> around an integer index (which we can't use when dealing with
> multiple scatterlist arrays).
> 
> Cc: greg@kroah.com
> Signed-off-by: Jens Axboe <jens.axboe@oracle.com>

No objection from me, I'm guessing this needs to go through the scsi
tree?

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH 30/33] USB storage: sg chaining support
  2007-07-17  6:12   ` Greg KH
@ 2007-07-17  7:01     ` Jens Axboe
  2007-07-17  7:05       ` Greg KH
  0 siblings, 1 reply; 73+ messages in thread
From: Jens Axboe @ 2007-07-17  7:01 UTC (permalink / raw)
  To: Greg KH; +Cc: linux-kernel, linux-scsi

On Mon, Jul 16 2007, Greg KH wrote:
> On Mon, Jul 16, 2007 at 11:47:44AM +0200, Jens Axboe wrote:
> > [PATCH] USB storage: sg chaining support
> > 
> > Modify usb_stor_access_xfer_buf() to take a pointer to an sg
> > entry pointer, so we can keep track of that instead of passing
> > around an integer index (which we can't use when dealing with
> > multiple scatterlist arrays).
> > 
> > Cc: greg@kroah.com
> > Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
> 
> No objection from me, I'm guessing this needs to go through the scsi
> tree?

Yes it will, I just want to make sure that the relevant people have seen
(and preferably acked them :) the changes before they go in eventually.

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH 09/33] x86-64: update iommu/dma mapping functions to sg helpers
  2007-07-16 20:34       ` Andrew Morton
  2007-07-16 22:08         ` Muli Ben-Yehuda
@ 2007-07-17  7:02         ` Jens Axboe
  2007-07-17  7:56           ` Jens Axboe
  1 sibling, 1 reply; 73+ messages in thread
From: Jens Axboe @ 2007-07-17  7:02 UTC (permalink / raw)
  To: Andrew Morton; +Cc: linux-kernel, linux-scsi, ak

On Mon, Jul 16 2007, Andrew Morton wrote:
> On Mon, 16 Jul 2007 22:10:03 +0200
> Jens Axboe <jens.axboe@oracle.com> wrote:
> 
> > On Mon, Jul 16 2007, Andrew Morton wrote:
> > > On Mon, 16 Jul 2007 11:47:23 +0200
> > > Jens Axboe <jens.axboe@oracle.com> wrote:
> > > 
> > > > This prepares x86-64 for sg chaining support.
> > > > 
> > > > Additional improvements/fixups for pci-gart from
> > > > Benny Halevy <bhalevy@panasas.com>
> > > > 
> > > > Cc: ak@suse.de
> > > > Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
> > > > ---
> > > >  arch/x86_64/kernel/pci-calgary.c |   25 ++++++++------
> > > >  arch/x86_64/kernel/pci-gart.c    |   63 ++++++++++++++++++++-----------------
> > > >  arch/x86_64/kernel/pci-nommu.c   |    5 ++-
> > > >  3 files changed, 51 insertions(+), 42 deletions(-)
> > > 
> > > This causes fairly extensive destruction of 2.6.23 changes which are
> > > pending in Andi's tree.
> > > 
> > > I shall drop the block tree until a) Andi has merged the pending Calgary
> > > changes and b) the block tree has been fixed up to account for those (and
> > > other) changes.
> > > 
> > > Really, this was the perfectly worst time possible to go tromping all over
> > > everyone else's trees.
> > 
> > Well, that's a bit hard for me to know, all I can do is push my stuff to
> > for-akpm so that it gets -mm exposure. If I don't, then you are yelling
> > at me as well :-)
> 
> It is to be expected that there will be conflicts when making large-scale
> changes like this while there is 25MB of unmerged diff sitting out in 50
> different trees.  Doing it around the -rc1 time would be better.
> 
> Even better would be not doing it at all: add the necessary stuff into the
> block core and then send the followup work to the relevant maintainers. ie:
> standard practice.

I'll do that. Most of the stuff is trivial, but the x86-64 iommu changes
are probably the most tricky and invasive of them all.

> > I'll keep rebasing sglist and the other branches I pull into for-akpm,
> > so you can just re-enable the for-akpm pull when the a) is true.
> 
> Andi's tree is very out of date and is about to be damaged (to what extent
> I don't yet know) by a xen merge.  I don't know if a) is going to happen.
> 
> 
> Things are more than usually messy this time around.

Seems so :)

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH 30/33] USB storage: sg chaining support
  2007-07-17  7:01     ` Jens Axboe
@ 2007-07-17  7:05       ` Greg KH
  2007-07-17  7:07         ` Jens Axboe
  0 siblings, 1 reply; 73+ messages in thread
From: Greg KH @ 2007-07-17  7:05 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-kernel, linux-scsi

On Tue, Jul 17, 2007 at 09:01:02AM +0200, Jens Axboe wrote:
> On Mon, Jul 16 2007, Greg KH wrote:
> > On Mon, Jul 16, 2007 at 11:47:44AM +0200, Jens Axboe wrote:
> > > [PATCH] USB storage: sg chaining support
> > > 
> > > Modify usb_stor_access_xfer_buf() to take a pointer to an sg
> > > entry pointer, so we can keep track of that instead of passing
> > > around an integer index (which we can't use when dealing with
> > > multiple scatterlist arrays).
> > > 
> > > Cc: greg@kroah.com
> > > Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
> > 
> > No objection from me, I'm guessing this needs to go through the scsi
> > tree?
> 
> Yes it will, I just want to make sure that the relevant people have seen
> (and preferably acked them :) the changes before they go in eventually.

You might want to pass this one by the usb-storage developer to make
sure.

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH 30/33] USB storage: sg chaining support
  2007-07-17  7:05       ` Greg KH
@ 2007-07-17  7:07         ` Jens Axboe
  0 siblings, 0 replies; 73+ messages in thread
From: Jens Axboe @ 2007-07-17  7:07 UTC (permalink / raw)
  To: Greg KH; +Cc: linux-kernel, linux-scsi

On Tue, Jul 17 2007, Greg KH wrote:
> On Tue, Jul 17, 2007 at 09:01:02AM +0200, Jens Axboe wrote:
> > On Mon, Jul 16 2007, Greg KH wrote:
> > > On Mon, Jul 16, 2007 at 11:47:44AM +0200, Jens Axboe wrote:
> > > > [PATCH] USB storage: sg chaining support
> > > > 
> > > > Modify usb_stor_access_xfer_buf() to take a pointer to an sg
> > > > entry pointer, so we can keep track of that instead of passing
> > > > around an integer index (which we can't use when dealing with
> > > > multiple scatterlist arrays).
> > > > 
> > > > Cc: greg@kroah.com
> > > > Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
> > > 
> > > No objection from me, I'm guessing this needs to go through the scsi
> > > tree?
> > 
> > Yes it will, I just want to make sure that the relevant people have seen
> > (and preferably acked them :) the changes before they go in eventually.
> 
> You might want to pass this one by the usb-storage developer to make
> sure.

OK, will do.

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH 09/33] x86-64: update iommu/dma mapping functions to sg helpers
  2007-07-16 22:08         ` Muli Ben-Yehuda
@ 2007-07-17  7:38           ` Jens Axboe
  2007-07-17  8:49             ` Muli Ben-Yehuda
  0 siblings, 1 reply; 73+ messages in thread
From: Jens Axboe @ 2007-07-17  7:38 UTC (permalink / raw)
  To: Muli Ben-Yehuda; +Cc: Andrew Morton, linux-kernel, linux-scsi, ak

On Tue, Jul 17 2007, Muli Ben-Yehuda wrote:
> On Mon, Jul 16, 2007 at 01:34:27PM -0700, Andrew Morton wrote:
> 
> > > I'll keep rebasing sglist and the other branches I pull into for-akpm,
> > > so you can just re-enable the for-akpm pull when the a) is true.
> > 
> > Andi's tree is very out of date and is about to be damaged (to what
> > extent I don't yet know) by a xen merge.  I don't know if a) is
> > going to happen.
> 
> The Xen and Calgary bits are mutually exclusive, so hopefully (a) will
> not be held up on account of the Xen merge (or for any other
> reason... CalIOC2 machines which are out there won't boot without it
> if CONFIG_CALGARY_IOMMU is enabled).

Do you have any comments for the patch to calgary (the mail you are
replying to)?

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH 09/33] x86-64: update iommu/dma mapping functions to sg helpers
  2007-07-17  7:02         ` Jens Axboe
@ 2007-07-17  7:56           ` Jens Axboe
  0 siblings, 0 replies; 73+ messages in thread
From: Jens Axboe @ 2007-07-17  7:56 UTC (permalink / raw)
  To: Andrew Morton; +Cc: linux-kernel, linux-scsi, ak

On Tue, Jul 17 2007, Jens Axboe wrote:
> On Mon, Jul 16 2007, Andrew Morton wrote:
> > On Mon, 16 Jul 2007 22:10:03 +0200
> > Jens Axboe <jens.axboe@oracle.com> wrote:
> > 
> > > On Mon, Jul 16 2007, Andrew Morton wrote:
> > > > On Mon, 16 Jul 2007 11:47:23 +0200
> > > > Jens Axboe <jens.axboe@oracle.com> wrote:
> > > > 
> > > > > This prepares x86-64 for sg chaining support.
> > > > > 
> > > > > Additional improvements/fixups for pci-gart from
> > > > > Benny Halevy <bhalevy@panasas.com>
> > > > > 
> > > > > Cc: ak@suse.de
> > > > > Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
> > > > > ---
> > > > >  arch/x86_64/kernel/pci-calgary.c |   25 ++++++++------
> > > > >  arch/x86_64/kernel/pci-gart.c    |   63 ++++++++++++++++++++-----------------
> > > > >  arch/x86_64/kernel/pci-nommu.c   |    5 ++-
> > > > >  3 files changed, 51 insertions(+), 42 deletions(-)
> > > > 
> > > > This causes fairly extensive destruction of 2.6.23 changes which are
> > > > pending in Andi's tree.
> > > > 
> > > > I shall drop the block tree until a) Andi has merged the pending Calgary
> > > > changes and b) the block tree has been fixed up to account for those (and
> > > > other) changes.
> > > > 
> > > > Really, this was the perfectly worst time possible to go tromping all over
> > > > everyone else's trees.
> > > 
> > > Well, that's a bit hard for me to know, all I can do is push my stuff to
> > > for-akpm so that it gets -mm exposure. If I don't, then you are yelling
> > > at me as well :-)
> > 
> > It is to be expected that there will be conflicts when making large-scale
> > changes like this while there is 25MB of unmerged diff sitting out in 50
> > different trees.  Doing it around the -rc1 time would be better.
> > 
> > Even better would be not doing it at all: add the necessary stuff into the
> > block core and then send the followup work to the relevant maintainers. ie:
> > standard practice.
> 
> I'll do that. Most of the stuff is trivial, but the x86-64 iommu changes
> are probably the most tricky and invasive of them all.

OK, this is what I did:

Split the set into 3, in the following order:

CORE -> DRIVERS -> ARCH

So that's 3 branches, core is based off master, drivers off core, and
arch off drivers. I'll push core+drivers into #for-akpm for testing,
then the arch bits will stay undisturbed until acked by maintainers.

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH 09/33] x86-64: update iommu/dma mapping functions to sg helpers
  2007-07-17  7:38           ` Jens Axboe
@ 2007-07-17  8:49             ` Muli Ben-Yehuda
  2007-07-17  8:51               ` Jens Axboe
  0 siblings, 1 reply; 73+ messages in thread
From: Muli Ben-Yehuda @ 2007-07-17  8:49 UTC (permalink / raw)
  To: Jens Axboe; +Cc: Andrew Morton, linux-kernel, linux-scsi, ak

On Tue, Jul 17, 2007 at 09:38:59AM +0200, Jens Axboe wrote:

> On Tue, Jul 17 2007, Muli Ben-Yehuda wrote:

> > The Xen and Calgary bits are mutually exclusive, so hopefully (a)
> > will not be held up on account of the Xen merge (or for any other
> > reason... CalIOC2 machines which are out there won't boot without
> > it if CONFIG_CALGARY_IOMMU is enabled).
> 
> Do you have any comments for the patch to calgary (the mail you are
> replying to)?

It looks ok, I just didn't want to ACK it before testing (I'm paranoid
that way).

Cheers,
Muli

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH 09/33] x86-64: update iommu/dma mapping functions to sg helpers
  2007-07-17  8:49             ` Muli Ben-Yehuda
@ 2007-07-17  8:51               ` Jens Axboe
  2007-07-17  9:00                 ` Muli Ben-Yehuda
  0 siblings, 1 reply; 73+ messages in thread
From: Jens Axboe @ 2007-07-17  8:51 UTC (permalink / raw)
  To: Muli Ben-Yehuda; +Cc: Andrew Morton, linux-kernel, linux-scsi, ak

On Tue, Jul 17 2007, Muli Ben-Yehuda wrote:
> On Tue, Jul 17, 2007 at 09:38:59AM +0200, Jens Axboe wrote:
> 
> > On Tue, Jul 17 2007, Muli Ben-Yehuda wrote:
> 
> > > The Xen and Calgary bits are mutually exclusive, so hopefully (a)
> > > will not be held up on account of the Xen merge (or for any other
> > > reason... CalIOC2 machines which are out there won't boot without
> > > it if CONFIG_CALGARY_IOMMU is enabled).
> > 
> > Do you have any comments for the patch to calgary (the mail you are
> > replying to)?
> 
> It looks ok, I just didn't want to ACK it before testing (I'm paranoid
> that way).

That's certainly understandable! Can you mail me an ack once you've had
a chance to test it?

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH 09/33] x86-64: update iommu/dma mapping functions to sg helpers
  2007-07-17  8:51               ` Jens Axboe
@ 2007-07-17  9:00                 ` Muli Ben-Yehuda
  0 siblings, 0 replies; 73+ messages in thread
From: Muli Ben-Yehuda @ 2007-07-17  9:00 UTC (permalink / raw)
  To: Jens Axboe; +Cc: Andrew Morton, linux-kernel, linux-scsi, ak

On Tue, Jul 17, 2007 at 10:51:23AM +0200, Jens Axboe wrote:
> On Tue, Jul 17 2007, Muli Ben-Yehuda wrote:
> > On Tue, Jul 17, 2007 at 09:38:59AM +0200, Jens Axboe wrote:
> > 
> > > On Tue, Jul 17 2007, Muli Ben-Yehuda wrote:
> > 
> > > > The Xen and Calgary bits are mutually exclusive, so hopefully (a)
> > > > will not be held up on account of the Xen merge (or for any other
> > > > reason... CalIOC2 machines which are out there won't boot without
> > > > it if CONFIG_CALGARY_IOMMU is enabled).
> > > 
> > > Do you have any comments for the patch to calgary (the mail you are
> > > replying to)?
> > 
> > It looks ok, I just didn't want to ACK it before testing (I'm paranoid
> > that way).
> 
> That's certainly understandable! Can you mail me an ack once you've had
> a chance to test it?

Yep, taking it for a spin now.

Cheers,
Muli

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH 09/33] x86-64: update iommu/dma mapping functions to sg helpers
  2007-07-16  9:47 ` [PATCH 09/33] x86-64: update iommu/dma mapping functions to sg helpers Jens Axboe
  2007-07-16 20:06   ` Andrew Morton
@ 2007-07-17 11:03   ` Muli Ben-Yehuda
  2007-07-17 11:05     ` Jens Axboe
  1 sibling, 1 reply; 73+ messages in thread
From: Muli Ben-Yehuda @ 2007-07-17 11:03 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-kernel, linux-scsi, ak

On Mon, Jul 16, 2007 at 11:47:23AM +0200, Jens Axboe wrote:

> This prepares x86-64 for sg chaining support.
> 
> Additional improvements/fixups for pci-gart from
> Benny Halevy <bhalevy@panasas.com>
> 
> Cc: ak@suse.de
> Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
> ---
>  arch/x86_64/kernel/pci-calgary.c |   25 ++++++++------
>  arch/x86_64/kernel/pci-gart.c    |   63 ++++++++++++++++++++-----------------
>  arch/x86_64/kernel/pci-nommu.c   |    5 ++-

Calgary and nommu bits are:

Acked-by: Muli Ben-Yehuda <muli@il.ibm.com>

FYI, here's the Calgary diff on top of the outstanding Calgary
changes.

diff -r 8642809f9e33 arch/x86_64/kernel/pci-calgary.c
--- a/arch/x86_64/kernel/pci-calgary.c	Mon Jul 16 09:38:14 2007 +0300
+++ b/arch/x86_64/kernel/pci-calgary.c	Tue Jul 17 13:33:24 2007 +0300
@@ -35,6 +35,7 @@
 #include <linux/pci_ids.h>
 #include <linux/pci.h>
 #include <linux/delay.h>
+#include <linux/scatterlist.h>
 #include <asm/proto.h>
 #include <asm/calgary.h>
 #include <asm/tce.h>
@@ -387,31 +388,32 @@ static void calgary_unmap_sg(struct devi
 	struct scatterlist *sglist, int nelems, int direction)
 {
 	struct iommu_table *tbl = find_iommu_table(dev);
+	struct scatterlist *s;
+	int i;
 
 	if (!translate_phb(to_pci_dev(dev)))
 		return;
 
-	while (nelems--) {
+	for_each_sg(sglist, s, nelems, i) {
 		unsigned int npages;
-		dma_addr_t dma = sglist->dma_address;
-		unsigned int dmalen = sglist->dma_length;
+		dma_addr_t dma = s->dma_address;
+		unsigned int dmalen = s->dma_length;
 
 		if (dmalen == 0)
 			break;
 
 		npages = num_dma_pages(dma, dmalen);
 		iommu_free(tbl, dma, npages);
-		sglist++;
 	}
 }
 
 static int calgary_nontranslate_map_sg(struct device* dev,
 	struct scatterlist *sg, int nelems, int direction)
 {
+	struct scatterlist *s;
 	int i;
 
-	for (i = 0; i < nelems; i++ ) {
-		struct scatterlist *s = &sg[i];
+	for_each_sg(sg, s, nelems, i) {
 		BUG_ON(!s->page);
 		s->dma_address = virt_to_bus(page_address(s->page) +s->offset);
 		s->dma_length = s->length;
@@ -423,6 +425,7 @@ static int calgary_map_sg(struct device 
 	int nelems, int direction)
 {
 	struct iommu_table *tbl = find_iommu_table(dev);
+	struct scatterlist *s;
 	unsigned long vaddr;
 	unsigned int npages;
 	unsigned long entry;
@@ -431,8 +434,7 @@ static int calgary_map_sg(struct device 
 	if (!translate_phb(to_pci_dev(dev)))
 		return calgary_nontranslate_map_sg(dev, sg, nelems, direction);
 
-	for (i = 0; i < nelems; i++ ) {
-		struct scatterlist *s = &sg[i];
+ 	for_each_sg(sg, s, nelems, i) {
 		BUG_ON(!s->page);
 
 		vaddr = (unsigned long)page_address(s->page) + s->offset;
@@ -457,9 +459,9 @@ static int calgary_map_sg(struct device 
 	return nelems;
 error:
 	calgary_unmap_sg(dev, sg, nelems, direction);
-	for (i = 0; i < nelems; i++) {
-		sg[i].dma_address = bad_dma_address;
-		sg[i].dma_length = 0;
+	for_each_sg(sg, s, nelems, i) {
+		s->dma_address = bad_dma_address;
+		s->dma_length = 0;
 	}
 	return 0;
 }

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH 09/33] x86-64: update iommu/dma mapping functions to sg helpers
  2007-07-17 11:03   ` Muli Ben-Yehuda
@ 2007-07-17 11:05     ` Jens Axboe
  2007-07-17 11:10       ` Muli Ben-Yehuda
  0 siblings, 1 reply; 73+ messages in thread
From: Jens Axboe @ 2007-07-17 11:05 UTC (permalink / raw)
  To: Muli Ben-Yehuda; +Cc: linux-kernel, linux-scsi, ak

On Tue, Jul 17 2007, Muli Ben-Yehuda wrote:
> On Mon, Jul 16, 2007 at 11:47:23AM +0200, Jens Axboe wrote:
> 
> > This prepares x86-64 for sg chaining support.
> > 
> > Additional improvements/fixups for pci-gart from
> > Benny Halevy <bhalevy@panasas.com>
> > 
> > Cc: ak@suse.de
> > Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
> > ---
> >  arch/x86_64/kernel/pci-calgary.c |   25 ++++++++------
> >  arch/x86_64/kernel/pci-gart.c    |   63 ++++++++++++++++++++-----------------
> >  arch/x86_64/kernel/pci-nommu.c   |    5 ++-
> 
> Calgary and nommu bits are:
> 
> Acked-by: Muli Ben-Yehuda <muli@il.ibm.com>

Great, thanks!

> FYI, here's the Calgary diff on top of the outstanding Calgary
> changes.

When are these bits being merged into mainline? I'll hang on to this
version to help me rebase the arch bits of chained sglists once that
happens.

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH 09/33] x86-64: update iommu/dma mapping functions to sg helpers
  2007-07-17 11:05     ` Jens Axboe
@ 2007-07-17 11:10       ` Muli Ben-Yehuda
  0 siblings, 0 replies; 73+ messages in thread
From: Muli Ben-Yehuda @ 2007-07-17 11:10 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-kernel, linux-scsi, ak

On Tue, Jul 17, 2007 at 01:05:03PM +0200, Jens Axboe wrote:

> > FYI, here's the Calgary diff on top of the outstanding Calgary
> > changes.
> 
> When are these bits being merged into mainline? I'll hang on to this
> version to help me rebase the arch bits of chained sglists once
> that happens.

I sincerely hope they'll make 2.6.23 but Andi appears to have fallen
into a black hole.

Cheers,
Muli

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH 33/33] IDE: sg chaining support
  2007-07-16  9:47 ` [PATCH 33/33] IDE: " Jens Axboe
@ 2007-07-18 20:56   ` Bartlomiej Zolnierkiewicz
  2007-07-19  6:19     ` Jens Axboe
  0 siblings, 1 reply; 73+ messages in thread
From: Bartlomiej Zolnierkiewicz @ 2007-07-18 20:56 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-kernel, linux-scsi

On Monday 16 July 2007, Jens Axboe wrote:
> Cc: bzolnier@gmail.com
> Signed-off-by: Jens Axboe <jens.axboe@oracle.com>

Acked-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH 24/33] ide-scsi: sg chaining support
  2007-07-16  9:47 ` [PATCH 24/33] ide-scsi: " Jens Axboe
@ 2007-07-18 21:03   ` Bartlomiej Zolnierkiewicz
  0 siblings, 0 replies; 73+ messages in thread
From: Bartlomiej Zolnierkiewicz @ 2007-07-18 21:03 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-kernel, linux-scsi

On Monday 16 July 2007, Jens Axboe wrote:
> Cc: bzolnier@gmail.com
> Signed-off-by: Jens Axboe <jens.axboe@oracle.com>

Acked-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH 33/33] IDE: sg chaining support
  2007-07-18 20:56   ` Bartlomiej Zolnierkiewicz
@ 2007-07-19  6:19     ` Jens Axboe
  0 siblings, 0 replies; 73+ messages in thread
From: Jens Axboe @ 2007-07-19  6:19 UTC (permalink / raw)
  To: Bartlomiej Zolnierkiewicz; +Cc: linux-kernel, linux-scsi

On Wed, Jul 18 2007, Bartlomiej Zolnierkiewicz wrote:
> On Monday 16 July 2007, Jens Axboe wrote:
> > Cc: bzolnier@gmail.com
> > Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
> 
> Acked-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>

(for both acks) Thanks for reviewing and acking!

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 73+ messages in thread

end of thread, other threads:[~2007-07-19  6:19 UTC | newest]

Thread overview: 73+ messages
-- links below jump to the message on this page --
2007-07-16  9:47 [PATCH 00/33] SG table chaining support Jens Axboe
2007-07-16  9:47 ` [PATCH 01/33] crypto: don't pollute the global namespace with sg_next() Jens Axboe
2007-07-16  9:47 ` [PATCH 02/33] Add sg helpers for iterating over a scatterlist table Jens Axboe
2007-07-16  9:47 ` [PATCH 03/33] block: convert to using sg helpers Jens Axboe
2007-07-16 19:21   ` Bartlomiej Zolnierkiewicz
2007-07-16 19:14     ` Jens Axboe
2007-07-16  9:47 ` [PATCH 04/33] scsi: " Jens Axboe
2007-07-16  9:47 ` [PATCH 05/33] Add chained sg support to linux/scatterlist.h Jens Axboe
2007-07-16 19:21   ` Bartlomiej Zolnierkiewicz
2007-07-16 19:15     ` Jens Axboe
2007-07-16  9:47 ` [PATCH 06/33] i386 dma_map_sg: convert to using sg helpers Jens Axboe
2007-07-16  9:47 ` [PATCH 07/33] i386: enable sg chaining Jens Axboe
2007-07-16  9:47 ` [PATCH 08/33] swiotlb: sg chaining support Jens Axboe
2007-07-16  9:47 ` [PATCH 09/33] x86-64: update iommu/dma mapping functions to sg helpers Jens Axboe
2007-07-16 20:06   ` Andrew Morton
2007-07-16 20:10     ` Jens Axboe
2007-07-16 20:34       ` Andrew Morton
2007-07-16 22:08         ` Muli Ben-Yehuda
2007-07-17  7:38           ` Jens Axboe
2007-07-17  8:49             ` Muli Ben-Yehuda
2007-07-17  8:51               ` Jens Axboe
2007-07-17  9:00                 ` Muli Ben-Yehuda
2007-07-17  7:02         ` Jens Axboe
2007-07-17  7:56           ` Jens Axboe
2007-07-17 11:03   ` Muli Ben-Yehuda
2007-07-17 11:05     ` Jens Axboe
2007-07-17 11:10       ` Muli Ben-Yehuda
2007-07-16  9:47 ` [PATCH 10/33] x86-64: enable sg chaining Jens Axboe
2007-07-16  9:47 ` [PATCH 11/33] IA64: sg chaining support Jens Axboe
2007-07-16  9:47 ` [PATCH 12/33] PPC: " Jens Axboe
2007-07-16  9:47 ` [PATCH 13/33] SPARC: " Jens Axboe
2007-07-16 11:29   ` David Miller
2007-07-16  9:47 ` [PATCH 14/33] SPARC64: " Jens Axboe
2007-07-16 11:29   ` David Miller
2007-07-16  9:47 ` [PATCH 15/33] scsi: simplify scsi_free_sgtable() Jens Axboe
2007-07-16  9:47 ` [PATCH 16/33] SCSI: support for allocating large scatterlists Jens Axboe
2007-07-16  9:47 ` [PATCH 17/33] ll_rw_blk: temporarily enable max_segments tweaking Jens Axboe
2007-07-16  9:47 ` [PATCH 18/33] libata: convert to using sg helpers Jens Axboe
2007-07-16  9:47 ` [PATCH 19/33] scsi_debug: support sg chaining Jens Axboe
2007-07-16  9:47 ` [PATCH 20/33] scsi generic: sg chaining support Jens Axboe
2007-07-16  9:47 ` [PATCH 21/33] qla1280: " Jens Axboe
2007-07-16  9:47 ` [PATCH 22/33] aic94xx: " Jens Axboe
2007-07-16  9:47 ` [PATCH 23/33] qlogicpti: " Jens Axboe
2007-07-16  9:47 ` [PATCH 24/33] ide-scsi: " Jens Axboe
2007-07-18 21:03   ` Bartlomiej Zolnierkiewicz
2007-07-16  9:47 ` [PATCH 25/33] gdth: " Jens Axboe
2007-07-16  9:47 ` [PATCH 26/33] aha1542: convert to use the data buffer accessors Jens Axboe
2007-07-16  9:47 ` [PATCH 27/33] advansys: " Jens Axboe
2007-07-16  9:47 ` [PATCH 28/33] ia64 simscsi: convert to use " Jens Axboe
2007-07-16  9:47 ` [PATCH 29/33] infiniband: sg chaining support Jens Axboe
2007-07-16 13:21   ` FUJITA Tomonori
2007-07-16 13:26     ` Jens Axboe
2007-07-16  9:47 ` [PATCH 30/33] USB storage: " Jens Axboe
2007-07-17  6:12   ` Greg KH
2007-07-17  7:01     ` Jens Axboe
2007-07-17  7:05       ` Greg KH
2007-07-17  7:07         ` Jens Axboe
2007-07-16  9:47 ` [PATCH 31/33] Fusion: " Jens Axboe
2007-07-16 13:20   ` FUJITA Tomonori
2007-07-16 13:25     ` Jens Axboe
2007-07-16  9:47 ` [PATCH 32/33] i2o: " Jens Axboe
2007-07-16  9:47 ` [PATCH 33/33] IDE: " Jens Axboe
2007-07-18 20:56   ` Bartlomiej Zolnierkiewicz
2007-07-19  6:19     ` Jens Axboe
2007-07-16 13:19 ` [PATCH 00/33] SG table " FUJITA Tomonori
2007-07-16 13:23   ` Jens Axboe
2007-07-16 13:19 ` FUJITA Tomonori
2007-07-16 13:24   ` Jens Axboe
2007-07-16 14:05 ` John Stoffel
2007-07-16 14:23   ` Martin K. Petersen
2007-07-16 14:43     ` Jens Axboe
2007-07-16 16:02     ` Kai Makisara
2007-07-16 16:43       ` Jens Axboe
