* [PATCH] [0/7] Block layer rework for mask allocator
@ 2008-03-07  9:13 Andi Kleen
  2008-03-07  9:13 ` [PATCH] [1/7] Convert a few direct bounce_gfp users over to the blk_* wrappers Andi Kleen
                   ` (6 more replies)
  0 siblings, 7 replies; 10+ messages in thread
From: Andi Kleen @ 2008-03-07  9:13 UTC (permalink / raw)
  To: axboe, linux-kernel


This reworks the block layer bouncing to use the mask allocator.

Instead of using GFP_DMA, use device masks derived from the bounce gfp
and pass them to the mask allocator. On architectures not converted
to the mask allocator the code will transparently fall back to GFP_DMA.
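
As a rough illustration (my sketch, not part of the patches), the change
for a bouncing page allocation looks like this, using the blk_q_mask()
and alloc_pages_mask() helpers that show up later in the series:

	/* old: the addressing limit is a zone hint on the queue */
	page = alloc_pages(q->bounce_gfp | GFP_NOIO, 0);

	/* new: the limit is an address mask derived from bounce_pfn;
	 * note alloc_pages_mask() takes a byte size, not an order */
	page = alloc_pages_mask(GFP_NOIO, PAGE_SIZE, blk_q_mask(q));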

Requires the earlier mask allocator patchkit I posted.

It is still not 100% finished -- in particular the bouncer still
does not know how to use masks > ISA mask, but that is not very difficult
to add now. I first wanted to lay the groundwork.

One patch ("Convert the blk allocator functions over to the mask allocator")
depends on the "blk allocator" from the earlier SCSI DMA rework
patchkit I've been posting. That patch can be dropped without trouble
if that patchkit is not applied, but it is needed if it is.

-Andi

* [PATCH] [1/7] Convert a few direct bounce_gfp users over to the blk_* wrappers
  2008-03-07  9:13 [PATCH] [0/7] Block layer rework for mask allocator Andi Kleen
@ 2008-03-07  9:13 ` Andi Kleen
  2008-03-07  9:13 ` [PATCH] [2/7] Convert open coded reference in libata to q->bounce_gfp to blk_kmalloc Andi Kleen
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 10+ messages in thread
From: Andi Kleen @ 2008-03-07  9:13 UTC (permalink / raw)
  To: axboe, linux-kernel


Signed-off-by: Andi Kleen <ak@suse.de>

---
 block/scsi_ioctl.c |    6 ++++--
 fs/bio.c           |    2 +-
 2 files changed, 5 insertions(+), 3 deletions(-)

Index: linux/block/scsi_ioctl.c
===================================================================
--- linux.orig/block/scsi_ioctl.c
+++ linux/block/scsi_ioctl.c
@@ -433,10 +433,12 @@ int sg_scsi_ioctl(struct file *file, str
 
 	bytes = max(in_len, out_len);
 	if (bytes) {
-		buffer = kzalloc(bytes, q->bounce_gfp | GFP_USER| __GFP_NOWARN);
+		/* RED-PEN GFP_NOIO really needed? */
+		buffer = blk_kmalloc(q, bytes,
+				GFP_NOIO | GFP_USER | __GFP_NOWARN);
 		if (!buffer)
 			return -ENOMEM;
-
+		memset(buffer, 0, bytes);
 	}
 
 	rq = blk_get_request(q, in_len ? WRITE : READ, __GFP_WAIT);
Index: linux/fs/bio.c
===================================================================
--- linux.orig/fs/bio.c
+++ linux/fs/bio.c
@@ -545,7 +545,7 @@ struct bio *bio_copy_user(struct request
 		if (bytes > len)
 			bytes = len;
 
-		page = alloc_page(q->bounce_gfp | GFP_KERNEL);
+		page = blk_alloc_pages(q, GFP_KERNEL, PAGE_SIZE);
 		if (!page) {
 			ret = -ENOMEM;
 			break;

* [PATCH] [2/7] Convert open coded reference in libata to q->bounce_gfp to blk_kmalloc
  2008-03-07  9:13 [PATCH] [0/7] Block layer rework for mask allocator Andi Kleen
  2008-03-07  9:13 ` [PATCH] [1/7] Convert a few direct bounce_gfp users over to the blk_* wrappers Andi Kleen
@ 2008-03-07  9:13 ` Andi Kleen
  2008-03-07 21:06   ` Jeff Garzik
  2008-03-07  9:13 ` [PATCH] [3/7] Add mempool support for page allocation through the mask allocator Andi Kleen
                   ` (4 subsequent siblings)
  6 siblings, 1 reply; 10+ messages in thread
From: Andi Kleen @ 2008-03-07  9:13 UTC (permalink / raw)
  To: jgarzik, axboe, linux-kernel


The only difference in behaviour is that GFP_NOIO is not passed here now,
but I think that is ok in this case because this is not in a writeout path.

Cc: jgarzik@pobox.com

Signed-off-by: Andi Kleen <ak@suse.de>

---
 drivers/ata/libata-scsi.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

Index: linux/drivers/ata/libata-scsi.c
===================================================================
--- linux.orig/drivers/ata/libata-scsi.c
+++ linux/drivers/ata/libata-scsi.c
@@ -868,7 +868,7 @@ static int ata_scsi_dev_config(struct sc
 		blk_queue_dma_pad(sdev->request_queue, ATA_DMA_PAD_SZ - 1);
 
 		/* configure draining */
-		buf = kmalloc(ATAPI_MAX_DRAIN, q->bounce_gfp | GFP_KERNEL);
+		buf = blk_kmalloc(q, ATAPI_MAX_DRAIN, GFP_KERNEL);
 		if (!buf) {
 			ata_dev_printk(dev, KERN_ERR,
 				       "drain buffer allocation failed\n");

* [PATCH] [3/7] Add mempool support for page allocation through the mask allocator
  2008-03-07  9:13 [PATCH] [0/7] Block layer rework for mask allocator Andi Kleen
  2008-03-07  9:13 ` [PATCH] [1/7] Convert a few direct bounce_gfp users over to the blk_* wrappers Andi Kleen
  2008-03-07  9:13 ` [PATCH] [2/7] Convert open coded reference in libata to q->bounce_gfp to blk_kmalloc Andi Kleen
@ 2008-03-07  9:13 ` Andi Kleen
  2008-03-07  9:13 ` [PATCH] [4/7] Add blk_q_mask Andi Kleen
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 10+ messages in thread
From: Andi Kleen @ 2008-03-07  9:13 UTC (permalink / raw)
  To: axboe, linux-kernel


Right now this only handles struct page * allocations, because that is
what the block bounce code needs.

I chose to add a small scratch area to the mempool structure instead
of allocating separately.
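
A minimal usage sketch (this is how patch 6/7 converts the ISA bounce
pool; the exact call is in that patch):

	/* create a pool of single pages allocatable below the 16MB
	 * ISA limit, backed by the mask allocator */
	pool = mempool_create_pool_pmask(ISA_POOL_SIZE, PAGE_SIZE,
					 ISA_DMA_THRESHOLD);

	/* elements come and go like with any other mempool */
	page = mempool_alloc(pool, GFP_NOIO);
	mempool_free(page, pool);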

Signed-off-by: Andi Kleen <ak@suse.de>

---
 include/linux/mempool.h |    3 +++
 mm/mempool.c            |   31 +++++++++++++++++++++++++++++++
 2 files changed, 34 insertions(+)

Index: linux/mm/mempool.c
===================================================================
--- linux.orig/mm/mempool.c
+++ linux/mm/mempool.c
@@ -338,3 +338,34 @@ void mempool_free_pages(void *element, v
 	__free_pages(element, order);
 }
 EXPORT_SYMBOL(mempool_free_pages);
+
+struct mempool_apm_data {
+	u64 mask;
+	unsigned size;
+};
+
+static void *mempool_alloc_pages_mask(gfp_t gfp_mask, void *pool_data)
+{
+	struct mempool_apm_data *apm = (struct mempool_apm_data *)pool_data;
+	return alloc_pages_mask(gfp_mask, apm->size, apm->mask);
+}
+
+static void mempool_free_pages_mask(void *element, void *pool_data)
+{
+	struct mempool_apm_data *apm = (struct mempool_apm_data *)pool_data;
+	__free_pages_mask(element, apm->size);
+}
+
+mempool_t *mempool_create_pool_pmask(int min_nr, int size, u64 mask)
+{
+	struct mempool_apm_data apm = { .size = size, .mask = mask };
+	mempool_t *m = mempool_create(min_nr, mempool_alloc_pages_mask,
+				      mempool_free_pages_mask, &apm);
+	if (m) {
+		BUILD_BUG_ON(sizeof(m->private) < sizeof(apm));
+		memcpy(m->private, &apm, sizeof(struct mempool_apm_data));
+		m->pool_data = (struct mempool_apm_data *)&m->private;
+	}
+	return m;
+}
+EXPORT_SYMBOL(mempool_create_pool_pmask);
Index: linux/include/linux/mempool.h
===================================================================
--- linux.orig/include/linux/mempool.h
+++ linux/include/linux/mempool.h
@@ -21,6 +21,7 @@ typedef struct mempool_s {
 	mempool_alloc_t *alloc;
 	mempool_free_t *free;
 	wait_queue_head_t wait;
+	char private[16];
 } mempool_t;
 
 extern mempool_t *mempool_create(int min_nr, mempool_alloc_t *alloc_fn,
@@ -76,4 +77,6 @@ static inline mempool_t *mempool_create_
 			      (void *)(long)order);
 }
 
+mempool_t *mempool_create_pool_pmask(int min_nr, int size, u64 mask);
+
 #endif /* _LINUX_MEMPOOL_H */

* [PATCH] [4/7] Add blk_q_mask
  2008-03-07  9:13 [PATCH] [0/7] Block layer rework for mask allocator Andi Kleen
                   ` (2 preceding siblings ...)
  2008-03-07  9:13 ` [PATCH] [3/7] Add mempool support for page allocation through the mask allocator Andi Kleen
@ 2008-03-07  9:13 ` Andi Kleen
  2008-03-07  9:13 ` [PATCH] [5/7] Convert the blk allocator functions over to the mask allocator Andi Kleen
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 10+ messages in thread
From: Andi Kleen @ 2008-03-07  9:13 UTC (permalink / raw)
  To: axboe, linux-kernel


Converts the queue bounce_pfn to a DMA mask suitable for the mask allocator.
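
A worked example (my numbers, assuming PAGE_SHIFT == 12): an ISA-limited
queue has bounce_pfn == 0xfff, so

	blk_q_mask(q) == ~(-1LL << (12 + fls64(0xfff)))
	              == ~(-1LL << 24)
	              == 0x00ffffff	/* the 16MB ISA limit */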

Signed-off-by: Andi Kleen <ak@suse.de>

---
 include/linux/blkdev.h |    5 +++++
 1 file changed, 5 insertions(+)

Index: linux/include/linux/blkdev.h
===================================================================
--- linux.orig/include/linux/blkdev.h
+++ linux/include/linux/blkdev.h
@@ -814,6 +814,11 @@ static inline unsigned int block_size(st
 	return bdev->bd_block_size;
 }
 
+static inline u64 blk_q_mask(struct request_queue *q)
+{
+	return ~(-1LL << (PAGE_SHIFT + fls64(q->bounce_pfn)));
+}
+
 typedef struct {struct page *v;} Sector;
 
 unsigned char *read_dev_sector(struct block_device *, sector_t, Sector *);

* [PATCH] [5/7] Convert the blk allocator functions over to the mask allocator
  2008-03-07  9:13 [PATCH] [0/7] Block layer rework for mask allocator Andi Kleen
                   ` (3 preceding siblings ...)
  2008-03-07  9:13 ` [PATCH] [4/7] Add blk_q_mask Andi Kleen
@ 2008-03-07  9:13 ` Andi Kleen
  2008-03-07  9:13 ` [PATCH] [6/7] Remove bounce_gfp Andi Kleen
  2008-03-07  9:13 ` [PATCH] [7/7] Allow swiotlb to move block data bouncing to the block layer Andi Kleen
  6 siblings, 0 replies; 10+ messages in thread
From: Andi Kleen @ 2008-03-07  9:13 UTC (permalink / raw)
  To: axboe, linux-kernel


This is a separate patch because the block allocator functions are
only introduced by my earlier SCSI DMA cleanup patchkit, so this
patch alone depends on them. Without that patchkit applied first it
can be safely dropped.

Right now blk_kmalloc rounds up to pages. There are only a few callers
so adding a sub allocator for that doesn't seem worthwhile.
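
A caller then pairs the two like this (sketch only); note that
blk_kfree() now needs the allocation size because the memory comes
from the page level mask allocator:

	buf = blk_kmalloc(q, ATAPI_MAX_DRAIN, GFP_KERNEL);
	if (!buf)
		return -ENOMEM;
	/* ... use buf as a drain/bounce buffer ... */
	blk_kfree(buf, ATAPI_MAX_DRAIN);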

Signed-off-by: Andi Kleen <ak@suse.de>

---
 block/blk-settings.c   |    7 +++++--
 include/linux/blkdev.h |    4 ++--
 2 files changed, 7 insertions(+), 4 deletions(-)

Index: linux/include/linux/blkdev.h
===================================================================
--- linux.orig/include/linux/blkdev.h
+++ linux/include/linux/blkdev.h
@@ -560,12 +560,12 @@ extern struct page *blk_alloc_pages(stru
 extern void *blk_kmalloc(struct request_queue *q, unsigned size, gfp_t gfp);
 static inline void blk_free_pages(struct page *p, int size)
 {
-	__free_pages(p, get_order(size));
+	__free_pages_mask(p, size);
 }
 
 static inline void blk_kfree(void *p, int size)
 {
-	kfree(p);
+	free_pages_mask(p, size);
 }
 
 struct req_iterator {
Index: linux/block/blk-settings.c
===================================================================
--- linux.orig/block/blk-settings.c
+++ linux/block/blk-settings.c
@@ -159,13 +159,16 @@ EXPORT_SYMBOL(blk_queue_bounce_limit);
 struct page *blk_alloc_pages(struct request_queue *q,
 				   gfp_t gfp, int size)
 {
-	return alloc_pages((q->bounce_gfp & ~GFP_NOIO) | gfp, get_order(size));
+	return alloc_pages_mask(gfp, size, blk_q_mask(q));
 }
 EXPORT_SYMBOL(blk_alloc_pages);
 
 void *blk_kmalloc(struct request_queue *q, unsigned size, gfp_t gfp)
 {
-	return kmalloc(size, gfp | (q->bounce_gfp & ~GFP_NOIO));
+	/* Right now gets pages. That is ok for the few callers. Later
+	 * might need to use a sub allocator.
+	 */
+	return get_pages_mask(gfp, size, blk_q_mask(q));
 }
 EXPORT_SYMBOL(blk_kmalloc);
 

* [PATCH] [6/7] Remove bounce_gfp
  2008-03-07  9:13 [PATCH] [0/7] Block layer rework for mask allocator Andi Kleen
                   ` (4 preceding siblings ...)
  2008-03-07  9:13 ` [PATCH] [5/7] Convert the blk allocator functions over to the mask allocator Andi Kleen
@ 2008-03-07  9:13 ` Andi Kleen
  2008-03-07  9:13 ` [PATCH] [7/7] Allow swiotlb to move block data bouncing to the block layer Andi Kleen
  6 siblings, 0 replies; 10+ messages in thread
From: Andi Kleen @ 2008-03-07  9:13 UTC (permalink / raw)
  To: axboe, linux-kernel


Convert mm/bounce.c to use a mask allocator mempool for ISA DMA 
memory and then remove bounce_gfp. All bouncing is controlled
by bounce_pfn now.
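
The pool selection in blk_queue_bounce() is then driven purely by
bounce_pfn; roughly (simplified sketch of the resulting logic):

	if (q->bounce_pfn >= BLK_BOUNCE_ISA) {
		/* ordinary highmem bouncing; nothing to do if the
		 * queue can already reach every page in the system */
		if (q->bounce_pfn >= blk_max_pfn)
			return;
		pool = page_pool;
	} else {
		/* ISA-limited queue: use the mask allocator backed pool */
		BUG_ON(!isa_page_pool);
		pool = isa_page_pool;
	}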

Signed-off-by: Andi Kleen <ak@suse.de>

---
 block/blk-settings.c   |    2 --
 include/linux/blkdev.h |    1 -
 mm/bounce.c            |   16 ++++------------
 3 files changed, 4 insertions(+), 15 deletions(-)

Index: linux/include/linux/blkdev.h
===================================================================
--- linux.orig/include/linux/blkdev.h
+++ linux/include/linux/blkdev.h
@@ -324,7 +324,6 @@ struct request_queue
 	 * queue needs bounce pages for pages above this limit
 	 */
 	unsigned long		bounce_pfn;
-	gfp_t			bounce_gfp;
 
 	/*
 	 * various queue flags, see QUEUE_* below
Index: linux/block/blk-settings.c
===================================================================
--- linux.orig/block/blk-settings.c
+++ linux/block/blk-settings.c
@@ -135,7 +135,6 @@ void blk_queue_bounce_limit(struct reque
 	unsigned long b_pfn = dma_addr >> PAGE_SHIFT;
 	int dma = 0;
 
-	q->bounce_gfp = GFP_NOIO;
 #if BITS_PER_LONG == 64
 	/* Assume anything <= 4GB can be handled by IOMMU.
 	   Actually some IOMMUs can handle everything, but I don't
@@ -150,7 +149,6 @@ void blk_queue_bounce_limit(struct reque
 #endif
 	if (dma) {
 		init_emergency_isa_pool();
-		q->bounce_gfp = GFP_NOIO | GFP_DMA;
 		q->bounce_pfn = b_pfn;
 	}
 }
Index: linux/mm/bounce.c
===================================================================
--- linux.orig/mm/bounce.c
+++ linux/mm/bounce.c
@@ -63,14 +63,6 @@ static void bounce_copy_vec(struct bio_v
 #endif /* CONFIG_HIGHMEM */
 
 /*
- * allocate pages in the DMA region for the ISA pool
- */
-static void *mempool_alloc_pages_isa(gfp_t gfp_mask, void *data)
-{
-	return mempool_alloc_pages(gfp_mask | GFP_DMA, data);
-}
-
-/*
  * gets called "every" time someone init's a queue with BLK_BOUNCE_ISA
  * as the max address, so check if the pool has already been created.
  */
@@ -79,8 +71,8 @@ int init_emergency_isa_pool(void)
 	if (isa_page_pool)
 		return 0;
 
-	isa_page_pool = mempool_create(ISA_POOL_SIZE, mempool_alloc_pages_isa,
-				       mempool_free_pages, (void *) 0);
+	isa_page_pool = mempool_create_pool_pmask(ISA_POOL_SIZE, PAGE_SIZE,
+					ISA_DMA_THRESHOLD);
 	BUG_ON(!isa_page_pool);
 
 	printk("isa bounce pool size: %d pages\n", ISA_POOL_SIZE);
@@ -200,7 +192,7 @@ static void __blk_queue_bounce(struct re
 
 		to = bio->bi_io_vec + i;
 
-		to->bv_page = mempool_alloc(pool, q->bounce_gfp);
+		to->bv_page = mempool_alloc(pool, GFP_NOIO);
 		to->bv_len = from->bv_len;
 		to->bv_offset = from->bv_offset;
 		inc_zone_page_state(to->bv_page, NR_BOUNCE);
@@ -275,7 +267,7 @@ void blk_queue_bounce(struct request_que
 	 * to or bigger than the highest pfn in the system -- in that case,
 	 * don't waste time iterating over bio segments
 	 */
-	if (!(q->bounce_gfp & GFP_DMA)) {
+	if (q->bounce_pfn >= BLK_BOUNCE_ISA) {
 		if (q->bounce_pfn >= blk_max_pfn)
 			return;
 		pool = page_pool;

* [PATCH] [7/7] Allow swiotlb to move block data bouncing to the block layer
  2008-03-07  9:13 [PATCH] [0/7] Block layer rework for mask allocator Andi Kleen
                   ` (5 preceding siblings ...)
  2008-03-07  9:13 ` [PATCH] [6/7] Remove bounce_gfp Andi Kleen
@ 2008-03-07  9:13 ` Andi Kleen
  2008-03-08 12:08   ` Andi Kleen
  6 siblings, 1 reply; 10+ messages in thread
From: Andi Kleen @ 2008-03-07  9:13 UTC (permalink / raw)
  To: axboe, linux-kernel


The block layer is generally in a better position to bounce
than swiotlb because it is allowed to sleep at
the right places. swiotlb cannot do that and is thus more prone
to panicking or failing on overflow.

With the mask allocator, block layer bouncing and swiotlb
allocate from the same pool, so it's quite possible to shift
some of the bounce burden to the block layer.

This patch adds a new variable for the bounce threshold
on 64bit hosts and adjusts it when swiotlb is in use. This has
the effect of always doing the data bounces in the block layer
for swiotlb, assuming the block driver sets the correct
bounce pfn. If it forgets that and only sets its DMA mask,
swiotlb will still take the burden.
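
For a driver that wants the block layer rather than swiotlb to do the
bouncing, that just means setting the queue limit in addition to the
DMA mask -- a sketch, not part of this patch:

	/* advertise the device's addressing limit to the block layer
	 * as well as to the DMA API; with this patch the block layer
	 * then bounces above it instead of leaving it to swiotlb */
	if (!pci_set_dma_mask(pdev, DMA_32BIT_MASK))
		blk_queue_bounce_limit(q, DMA_32BIT_MASK);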

Signed-off-by: Andi Kleen <ak@suse.de>

---
 arch/x86/kernel/pci-swiotlb_64.c |    3 +++
 block/blk-settings.c             |    4 +++-
 include/linux/blkdev.h           |    2 +-
 3 files changed, 7 insertions(+), 2 deletions(-)

Index: linux/block/blk-settings.c
===================================================================
--- linux.orig/block/blk-settings.c
+++ linux/block/blk-settings.c
@@ -16,6 +16,8 @@ EXPORT_SYMBOL(blk_max_low_pfn);
 unsigned long blk_max_pfn;
 EXPORT_SYMBOL(blk_max_pfn);
 
+unsigned long blk_min_iommu = 0xffffffff;
+
 /**
  * blk_queue_prep_rq - set a prepare_request function for queue
  * @q:		queue
@@ -139,7 +141,7 @@ void blk_queue_bounce_limit(struct reque
 	/* Assume anything <= 4GB can be handled by IOMMU.
 	   Actually some IOMMUs can handle everything, but I don't
 	   know of a way to test this here. */
-	if (b_pfn <= (min_t(u64, 0xffffffff, BLK_BOUNCE_HIGH) >> PAGE_SHIFT))
+	if (b_pfn <= (min_t(u64, blk_min_iommu, BLK_BOUNCE_HIGH) >> PAGE_SHIFT))
 		dma = 1;
 	q->bounce_pfn = max_low_pfn;
 #else
Index: linux/arch/x86/kernel/pci-swiotlb_64.c
===================================================================
--- linux.orig/arch/x86/kernel/pci-swiotlb_64.c
+++ linux/arch/x86/kernel/pci-swiotlb_64.c
@@ -15,6 +15,7 @@
 #include <linux/ctype.h>
 #include <linux/bootmem.h>
 #include <linux/hardirq.h>
+#include <linux/blkdev.h>
 
 #include <asm/gart.h>
 #include <asm/swiotlb.h>
@@ -411,6 +412,8 @@ void __init pci_swiotlb_init(void)
 
 		dma_ops = &dmatlb_dma_ops;
 	}
+
+	blk_min_iommu = BLK_BOUNCE_HIGH;
 }
 
 #define COMPAT_IO_TLB_SHIFT 11
Index: linux/include/linux/blkdev.h
===================================================================
--- linux.orig/include/linux/blkdev.h
+++ linux/include/linux/blkdev.h
@@ -523,7 +523,7 @@ static inline void blk_clear_queue_full(
 #define BLKPREP_KILL		1	/* fatal error, kill */
 #define BLKPREP_DEFER		2	/* leave on queue */
 
-extern unsigned long blk_max_low_pfn, blk_max_pfn;
+extern unsigned long blk_max_low_pfn, blk_max_pfn, blk_min_iommu;
 
 /*
  * standard bounce addresses:

* Re: [PATCH] [2/7] Convert open coded reference in libata to q->bounce_gfp to blk_kmalloc
  2008-03-07  9:13 ` [PATCH] [2/7] Convert open coded reference in libata to q->bounce_gfp to blk_kmalloc Andi Kleen
@ 2008-03-07 21:06   ` Jeff Garzik
  0 siblings, 0 replies; 10+ messages in thread
From: Jeff Garzik @ 2008-03-07 21:06 UTC (permalink / raw)
  To: Andi Kleen; +Cc: axboe, linux-kernel

Andi Kleen wrote:
> Only difference in behaviour is that GFP_NOIO is not passed here now,
> but I think that is ok in this case because this is not in a write out path.
> 
> Cc: jgarzik@pobox.com
> 
> Signed-off-by: Andi Kleen <ak@suse.de>
> 
> ---
>  drivers/ata/libata-scsi.c |    2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> Index: linux/drivers/ata/libata-scsi.c
> ===================================================================
> --- linux.orig/drivers/ata/libata-scsi.c
> +++ linux/drivers/ata/libata-scsi.c
> @@ -868,7 +868,7 @@ static int ata_scsi_dev_config(struct sc
>  		blk_queue_dma_pad(sdev->request_queue, ATA_DMA_PAD_SZ - 1);
>  
>  		/* configure draining */
> -		buf = kmalloc(ATAPI_MAX_DRAIN, q->bounce_gfp | GFP_KERNEL);
> +		buf = blk_kmalloc(q, ATAPI_MAX_DRAIN, GFP_KERNEL);

I think that's a fair assessment... ACK



* Re: [PATCH] [7/7] Allow swiotlb to move block data bouncing to the block layer
  2008-03-07  9:13 ` [PATCH] [7/7] Allow swiotlb to move block data bouncing to the block layer Andi Kleen
@ 2008-03-08 12:08   ` Andi Kleen
  0 siblings, 0 replies; 10+ messages in thread
From: Andi Kleen @ 2008-03-08 12:08 UTC (permalink / raw)
  To: axboe; +Cc: linux-kernel

Andi Kleen <andi@firstfloor.org> writes:

> The block layer is generally in a better position to bounce
> than swiotlb because it is allowed to sleep at
> the right places. swiotlb cannot do that and is thus more prone
> to panic and failing on overflow.

[...]

The patch in this form right now is not a good idea. It really
depends on another patch which I had dropped from the public
posting. In the current form it would lead to all block swiotlb
bounces going through the low 16MB zone, because the block layer
doesn't know yet how to allocate bounce buffers above it and
would force them through its 16MB limited isa mempool.

I fixed that now properly for the next spin, with proper mask bouncing.

Anyway, if anybody wants to test, either don't apply this patch
or use the latest patchkit from ftp://firstfloor.org/pub/ak/mask/patches/
which bounces properly (and which also has some other improvements).

-Andi
