* [PATCH 0/12] Per-bdi writeback flusher threads #5
@ 2009-05-25  7:30 Jens Axboe
  2009-05-25  7:30 ` [PATCH 01/13] libata: get rid of ATA_MAX_QUEUE loop in ata_qc_complete_multiple() Jens Axboe
                   ` (25 more replies)
  0 siblings, 26 replies; 61+ messages in thread
From: Jens Axboe @ 2009-05-25  7:30 UTC (permalink / raw)
  To: linux-kernel, linux-fsdevel
  Cc: chris.mason, david, hch, akpm, jack, yanmin_zhang

Hi,

Here's the 5th version of the writeback patches. Changes since v4:

- A missing memory barrier before wake_up_bit() could cause weird stalls;
  now fixed (see the ordering sketch after this list).
- Use dynamic bdi_work allocation in bdi_start_writeback(). We still
  fall back to the stack allocation if this should fail. With the
  dynamic allocation we don't have to wait for the wb threads to have
  noticed the work, so it avoids that (small) serialization point.
- Pass down wbc->sync_mode so queued work doesn't always use
  WB_SYNC_NONE in __wb_writeback() (Thanks Jan Kara).
- Don't check background threshold for WB_SYNC_ALL in __wb_writeback.
  This would sometimes leave dirty data around when the system became
  idle.
- Make bdi_writeback_all() and the write path from
  generic_sync_sb_inodes() write out in-line instead of punting to the
  wb threads. This retains the behaviour we have in the kernel now and
  also fixes the oops reported by Yanmin Zhang.
- Replace the rcu/spin_lock_bh protected bdi_list and bdi_pending_list
  with a simple mutex. This both simplified the code (and made the above
  fix easy) and made the locking there simpler. It doesn't hurt the fast
  path, since that path is generally only taken for a full system sync.
- Let bdi_forker_task() wake up at dirty_writeback_interval like the wb
  threads, so that potential dirty data on the default_backing_dev_info
  gets flushed at the same intervals.
- When bdi_forker_task() wakes up, let it scan the bdi_list for bdi's
  with dirty data. If it finds one that doesn't have an associated
  writeback thread, start one. Otherwise we could have to hit memory
  pressure before some threads get started, meaning that dirty data for
  those almost idle devices sits around for a long time.
- Call try_to_freeze() in bdi_forker_task(). It's defined as freezable,
  so if we don't freeze then we get hangs on suspend.
- Pull out the ntfs sb_has_dirty_io() part and add it at the front as a
  preparatory patch. Ditto the btrfs bdi register patch.
- Shuffle some patches around for a cleaner series. Made sure it's all
  bisectable.
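
As a reference for the wake_up_bit() item above, the required ordering is
roughly the following sketch; the bit name is purely illustrative and not
necessarily the one the fix touches:

	/*
	 * Clear the bit and make that visible before the waitqueue is
	 * checked, otherwise a waiter can miss the wakeup and stall.
	 */
	clear_bit(BDI_pdflush, &bdi->state);
	smp_mb__after_clear_bit();
	wake_up_bit(&bdi->state, BDI_pdflush);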

I ran the performance testing again and compared to v4, and as expected
the results are the same. The changes are mostly in the sync(1) and
umount writeback paths, so general writeback behaves like it did in v4.

This should be pretty much final and mergeable. So please run your
favorite performance benchmarks that exercise buffered writeout and
report any problems and/or performance differences (good as well as bad,
please). Thanks!

-- 
Jens Axboe



* [PATCH 01/13] libata: get rid of ATA_MAX_QUEUE loop in ata_qc_complete_multiple()
  2009-05-25  7:30 [PATCH 0/12] Per-bdi writeback flusher threads #5 Jens Axboe
@ 2009-05-25  7:30 ` Jens Axboe
  2009-05-25  7:30 ` [PATCH 01/12] ntfs: remove old debug check for dirty data in ntfs_put_super() Jens Axboe
                   ` (24 subsequent siblings)
  25 siblings, 0 replies; 61+ messages in thread
From: Jens Axboe @ 2009-05-25  7:30 UTC (permalink / raw)
  To: linux-kernel, linux-fsdevel
  Cc: chris.mason, david, hch, akpm, jack, yanmin_zhang, Jens Axboe

We very rarely (if ever) complete more than one command in the
sactive mask at a time, even at extremely high IO rates. So
looping over the entire range of possible tags is pointless;
instead use __ffs() to find the completed tags directly.
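
For illustration only, here is a minimal user-space sketch of the same
walk-the-lowest-set-bit idiom, with ffs(3) standing in for the kernel's
__ffs() (ffs() is 1-based, hence the -1):

	#include <stdio.h>
	#include <strings.h>

	int main(void)
	{
		/* pretend tags 1, 4 and 10 completed */
		unsigned int done_mask = 0x412;

		while (done_mask) {
			unsigned int tag = ffs(done_mask) - 1;

			printf("completing tag %u\n", tag);
			done_mask &= ~(1U << tag);
		}
		return 0;
	}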

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
 drivers/ata/libata-core.c |   11 +++++------
 1 files changed, 5 insertions(+), 6 deletions(-)

diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c
index c924230..ca4d208 100644
--- a/drivers/ata/libata-core.c
+++ b/drivers/ata/libata-core.c
@@ -5031,7 +5031,6 @@ int ata_qc_complete_multiple(struct ata_port *ap, u32 qc_active)
 {
 	int nr_done = 0;
 	u32 done_mask;
-	int i;
 
 	done_mask = ap->qc_active ^ qc_active;
 
@@ -5041,16 +5040,16 @@ int ata_qc_complete_multiple(struct ata_port *ap, u32 qc_active)
 		return -EINVAL;
 	}
 
-	for (i = 0; i < ATA_MAX_QUEUE; i++) {
+	while (done_mask) {
 		struct ata_queued_cmd *qc;
+		unsigned int tag = __ffs(done_mask);
 
-		if (!(done_mask & (1 << i)))
-			continue;
-
-		if ((qc = ata_qc_from_tag(ap, i))) {
+		qc = ata_qc_from_tag(ap, tag);
+		if (qc) {
 			ata_qc_complete(qc);
 			nr_done++;
 		}
+		done_mask &= ~(1 << tag);
 	}
 
 	return nr_done;
-- 
1.6.3.rc0.1.gf800



* [PATCH 01/12] ntfs: remove old debug check for dirty data in ntfs_put_super()
  2009-05-25  7:30 [PATCH 0/12] Per-bdi writeback flusher threads #5 Jens Axboe
  2009-05-25  7:30 ` [PATCH 01/13] libata: get rid of ATA_MAX_QUEUE loop in ata_qc_complete_multiple() Jens Axboe
@ 2009-05-25  7:30 ` Jens Axboe
  2009-05-25  7:30 ` [PATCH 02/13] block: add static rq allocation cache Jens Axboe
                   ` (23 subsequent siblings)
  25 siblings, 0 replies; 61+ messages in thread
From: Jens Axboe @ 2009-05-25  7:30 UTC (permalink / raw)
  To: linux-kernel, linux-fsdevel
  Cc: chris.mason, david, hch, akpm, jack, yanmin_zhang, Jens Axboe

This should not trigger anymore, so kill it.

Acked-by: Anton Altaparmakov <aia21@cam.ac.uk>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
 fs/ntfs/super.c |   33 +++------------------------------
 1 files changed, 3 insertions(+), 30 deletions(-)

diff --git a/fs/ntfs/super.c b/fs/ntfs/super.c
index f76951d..3fc03bd 100644
--- a/fs/ntfs/super.c
+++ b/fs/ntfs/super.c
@@ -2373,39 +2373,12 @@ static void ntfs_put_super(struct super_block *sb)
 		vol->mftmirr_ino = NULL;
 	}
 	/*
-	 * If any dirty inodes are left, throw away all mft data page cache
-	 * pages to allow a clean umount.  This should never happen any more
-	 * due to mft.c::ntfs_mft_writepage() cleaning all the dirty pages as
-	 * the underlying mft records are written out and cleaned.  If it does,
-	 * happen anyway, we want to know...
+	 * We should have no dirty inodes left, due to
+	 * mft.c::ntfs_mft_writepage() cleaning all the dirty pages as
+	 * the underlying mft records are written out and cleaned.
 	 */
 	ntfs_commit_inode(vol->mft_ino);
 	write_inode_now(vol->mft_ino, 1);
-	if (sb_has_dirty_inodes(sb)) {
-		const char *s1, *s2;
-
-		mutex_lock(&vol->mft_ino->i_mutex);
-		truncate_inode_pages(vol->mft_ino->i_mapping, 0);
-		mutex_unlock(&vol->mft_ino->i_mutex);
-		write_inode_now(vol->mft_ino, 1);
-		if (sb_has_dirty_inodes(sb)) {
-			static const char *_s1 = "inodes";
-			static const char *_s2 = "";
-			s1 = _s1;
-			s2 = _s2;
-		} else {
-			static const char *_s1 = "mft pages";
-			static const char *_s2 = "They have been thrown "
-					"away.  ";
-			s1 = _s1;
-			s2 = _s2;
-		}
-		ntfs_error(sb, "Dirty %s found at umount time.  %sYou should "
-				"run chkdsk.  Please email "
-				"linux-ntfs-dev@lists.sourceforge.net and say "
-				"that you saw this message.  Thank you.", s1,
-				s2);
-	}
 #endif /* NTFS_RW */
 
 	iput(vol->mft_ino);
-- 
1.6.3.rc0.1.gf800



* [PATCH 02/13] block: add static rq allocation cache
  2009-05-25  7:30 [PATCH 0/12] Per-bdi writeback flusher threads #5 Jens Axboe
  2009-05-25  7:30 ` [PATCH 01/13] libata: get rid of ATA_MAX_QUEUE loop in ata_qc_complete_multiple() Jens Axboe
  2009-05-25  7:30 ` [PATCH 01/12] ntfs: remove old debug check for dirty data in ntfs_put_super() Jens Axboe
@ 2009-05-25  7:30 ` Jens Axboe
  2009-05-25  7:30 ` [PATCH 02/12] btrfs: properly register fs backing device Jens Axboe
                   ` (22 subsequent siblings)
  25 siblings, 0 replies; 61+ messages in thread
From: Jens Axboe @ 2009-05-25  7:30 UTC (permalink / raw)
  To: linux-kernel, linux-fsdevel
  Cc: chris.mason, david, hch, akpm, jack, yanmin_zhang, Jens Axboe

Normally a request is allocated through a mempool, which means that
we do a slab allocation for each request. To check whether this
slows us down at high IOPS rates, add a sysfs file that allows the
user to set up a preallocated request cache and avoid going into
slab for each request.

Typically, you'd set up a cache for the full depth of the device.
That defaults to 128, so by doing:

	echo 128 > /sys/block/sda/queue/rq_cache

you would turn this feature on for sda. Writing "0" to the file
will turn it back off.
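
For illustration, here's a single-threaded user-space sketch of the
allocation shape used below (a fixed array of objects, a "used" bitmap
and a last-freed hint); the real code uses the kernel's atomic bitops
(test_and_set_bit_lock() and friends), this just shows the idea:

	#include <stdbool.h>
	#include <stdio.h>

	#define CACHE_SZ 128

	static bool used[CACHE_SZ];
	static int last_freed = -1;

	static int cache_alloc(void)
	{
		int tag;

		if (last_freed != -1) {
			tag = last_freed;
			last_freed = -1;
		} else {
			for (tag = 0; tag < CACHE_SZ; tag++)
				if (!used[tag])
					break;
			if (tag == CACHE_SZ)
				return -1;	/* exhausted, fall back to mempool */
		}
		used[tag] = true;
		return tag;
	}

	static void cache_free(int tag)
	{
		used[tag] = false;
		last_freed = tag;
	}

	int main(void)
	{
		int a = cache_alloc(), b = cache_alloc();

		printf("got tags %d and %d\n", a, b);
		cache_free(a);
		printf("reused tag %d\n", cache_alloc());	/* the hint hands back 'a' */
		return b < 0;
	}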

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
 block/blk-core.c       |   43 ++++++++++++++++++++++++++-
 block/blk-sysfs.c      |   74 ++++++++++++++++++++++++++++++++++++++++++++++++
 include/linux/blkdev.h |    5 +++
 3 files changed, 120 insertions(+), 2 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index c89883b..fe1eca4 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -635,17 +635,56 @@ int blk_get_queue(struct request_queue *q)
 	return 1;
 }
 
+static struct request *blk_rq_cache_alloc(struct request_queue *q)
+{
+	int tag;
+
+	do {
+		if (q->rq_cache_last != -1) {
+			tag = q->rq_cache_last;
+			q->rq_cache_last = -1;
+		} else {
+			tag = find_first_zero_bit(q->rq_cache_map,
+							q->rq_cache_sz);
+		}
+		if (tag >= q->rq_cache_sz)
+			return NULL;
+	} while (test_and_set_bit_lock(tag, q->rq_cache_map));
+
+	return &q->rq_cache[tag];
+}
+
+static int blk_rq_cache_free(struct request_queue *q, struct request *rq)
+{
+	if (!q->rq_cache)
+		return 1;
+	if (rq >= &q->rq_cache[0] && rq <= &q->rq_cache[q->rq_cache_sz - 1]) {
+		unsigned long idx = rq - q->rq_cache;
+
+		clear_bit(idx, q->rq_cache_map);
+		q->rq_cache_last = idx;
+		return 0;
+	}
+
+	return 1;
+}
+
 static inline void blk_free_request(struct request_queue *q, struct request *rq)
 {
 	if (rq->cmd_flags & REQ_ELVPRIV)
 		elv_put_request(q, rq);
-	mempool_free(rq, q->rq.rq_pool);
+	if (blk_rq_cache_free(q, rq))
+		mempool_free(rq, q->rq.rq_pool);
 }
 
 static struct request *
 blk_alloc_request(struct request_queue *q, int flags, int priv, gfp_t gfp_mask)
 {
-	struct request *rq = mempool_alloc(q->rq.rq_pool, gfp_mask);
+	struct request *rq;
+
+	rq = blk_rq_cache_alloc(q);
+	if (!rq)
+		rq = mempool_alloc(q->rq.rq_pool, gfp_mask);
 
 	if (!rq)
 		return NULL;
diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
index 3ff9bba..c2d8a71 100644
--- a/block/blk-sysfs.c
+++ b/block/blk-sysfs.c
@@ -218,6 +218,68 @@ static ssize_t queue_iostats_store(struct request_queue *q, const char *page,
 	return ret;
 }
 
+static ssize_t queue_rq_cache_show(struct request_queue *q, char *page)
+{
+	return queue_var_show(q->rq_cache_sz, page);
+}
+
+static ssize_t
+queue_rq_cache_store(struct request_queue *q, const char *page, size_t count)
+{
+	unsigned long *rq_cache_map = NULL;
+	struct request *rq_cache = NULL;
+	unsigned long val;
+	ssize_t ret;
+
+	/*
+	 * alloc cache up front
+	 */
+	ret = queue_var_store(&val, page, count);
+	if (val) {
+		unsigned int map_sz;
+
+		if (val > q->nr_requests)
+			val = q->nr_requests;
+
+		rq_cache = kcalloc(val, sizeof(*rq_cache), GFP_KERNEL);
+		if (!rq_cache)
+			return -ENOMEM;
+
+		map_sz = (val + BITS_PER_LONG - 1) / BITS_PER_LONG;
+		rq_cache_map = kzalloc(map_sz, GFP_KERNEL);
+		if (!rq_cache_map) {
+			kfree(rq_cache);
+			return -ENOMEM;
+		}
+	}
+
+	spin_lock_irq(q->queue_lock);
+	elv_quiesce_start(q);
+
+	/*
+	 * free existing rqcache
+	 */
+	if (q->rq_cache_sz) {
+		kfree(q->rq_cache);
+		kfree(q->rq_cache_map);
+		q->rq_cache = NULL;
+		q->rq_cache_map = NULL;
+		q->rq_cache_sz = 0;
+	}
+
+	if (val) {
+		memset(rq_cache, 0, val * sizeof(struct request));
+		q->rq_cache = rq_cache;
+		q->rq_cache_map = rq_cache_map;
+		q->rq_cache_sz = val;
+		q->rq_cache_last = -1;
+	}
+
+	elv_quiesce_end(q);
+	spin_unlock_irq(q->queue_lock);
+	return ret;
+}
+
 static struct queue_sysfs_entry queue_requests_entry = {
 	.attr = {.name = "nr_requests", .mode = S_IRUGO | S_IWUSR },
 	.show = queue_requests_show,
@@ -276,6 +338,12 @@ static struct queue_sysfs_entry queue_iostats_entry = {
 	.store = queue_iostats_store,
 };
 
+static struct queue_sysfs_entry queue_rqcache_entry = {
+	.attr = {.name = "rq_cache", .mode = S_IRUGO | S_IWUSR },
+	.show = queue_rq_cache_show,
+	.store = queue_rq_cache_store,
+};
+
 static struct attribute *default_attrs[] = {
 	&queue_requests_entry.attr,
 	&queue_ra_entry.attr,
@@ -287,6 +355,7 @@ static struct attribute *default_attrs[] = {
 	&queue_nomerges_entry.attr,
 	&queue_rq_affinity_entry.attr,
 	&queue_iostats_entry.attr,
+	&queue_rqcache_entry.attr,
 	NULL,
 };
 
@@ -363,6 +432,11 @@ static void blk_release_queue(struct kobject *kobj)
 	if (q->queue_tags)
 		__blk_queue_free_tags(q);
 
+	if (q->rq_cache) {
+		kfree(q->rq_cache);
+		kfree(q->rq_cache_map);
+	}
+
 	blk_trace_shutdown(q);
 
 	bdi_destroy(&q->backing_dev_info);
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index b4f71f1..c00f050 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -444,6 +444,11 @@ struct request_queue
 	struct bsg_class_device bsg_dev;
 #endif
 	struct blk_cmd_filter cmd_filter;
+
+	struct request *rq_cache;
+	unsigned int rq_cache_sz;
+	unsigned int rq_cache_last;
+	unsigned long *rq_cache_map;
 };
 
 #define QUEUE_FLAG_CLUSTER	0	/* cluster several segments into 1 */
-- 
1.6.3.rc0.1.gf800



* [PATCH 02/12] btrfs: properly register fs backing device
  2009-05-25  7:30 [PATCH 0/12] Per-bdi writeback flusher threads #5 Jens Axboe
                   ` (2 preceding siblings ...)
  2009-05-25  7:30 ` [PATCH 02/13] block: add static rq allocation cache Jens Axboe
@ 2009-05-25  7:30 ` Jens Axboe
  2009-05-25  7:30 ` [PATCH 03/13] scsi: unify allocation of scsi command and sense buffer Jens Axboe
                   ` (21 subsequent siblings)
  25 siblings, 0 replies; 61+ messages in thread
From: Jens Axboe @ 2009-05-25  7:30 UTC (permalink / raw)
  To: linux-kernel, linux-fsdevel
  Cc: chris.mason, david, hch, akpm, jack, yanmin_zhang, Jens Axboe

btrfs assigns this bdi to all inodes on that file system, so make
sure it's registered. This isn't really important now, but will be
when we put dirty inodes there. Even now, we miss the bdi stats when
the bdi isn't visible.

Also fix the missing check of the bdi_init() return value, and the bad
inheritance of the ->capabilities flags from the default bdi.

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
 fs/btrfs/disk-io.c |   23 ++++++++++++++++++-----
 1 files changed, 18 insertions(+), 5 deletions(-)

diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
index 4b0ea0b..2dc19c9 100644
--- a/fs/btrfs/disk-io.c
+++ b/fs/btrfs/disk-io.c
@@ -1345,12 +1345,24 @@ static void btrfs_unplug_io_fn(struct backing_dev_info *bdi, struct page *page)
 	free_extent_map(em);
 }
 
+/*
+ * If this fails, caller must call bdi_destroy() to get rid of the
+ * bdi again.
+ */
 static int setup_bdi(struct btrfs_fs_info *info, struct backing_dev_info *bdi)
 {
-	bdi_init(bdi);
+	int err;
+
+	bdi->capabilities = BDI_CAP_MAP_COPY;
+	err = bdi_init(bdi);
+	if (err)
+		return err;
+
+	err = bdi_register(bdi, NULL, "btrfs");
+	if (err)
+		return err;
+
 	bdi->ra_pages	= default_backing_dev_info.ra_pages;
-	bdi->state		= 0;
-	bdi->capabilities	= default_backing_dev_info.capabilities;
 	bdi->unplug_io_fn	= btrfs_unplug_io_fn;
 	bdi->unplug_io_data	= info;
 	bdi->congested_fn	= btrfs_congested_fn;
@@ -1574,7 +1586,8 @@ struct btrfs_root *open_ctree(struct super_block *sb,
 	fs_info->sb = sb;
 	fs_info->max_extent = (u64)-1;
 	fs_info->max_inline = 8192 * 1024;
-	setup_bdi(fs_info, &fs_info->bdi);
+	if (setup_bdi(fs_info, &fs_info->bdi))
+		goto fail_bdi;
 	fs_info->btree_inode = new_inode(sb);
 	fs_info->btree_inode->i_ino = 1;
 	fs_info->btree_inode->i_nlink = 1;
@@ -1931,8 +1944,8 @@ fail_iput:
 
 	btrfs_close_devices(fs_info->fs_devices);
 	btrfs_mapping_tree_free(&fs_info->mapping_tree);
+fail_bdi:
 	bdi_destroy(&fs_info->bdi);
-
 fail:
 	kfree(extent_root);
 	kfree(tree_root);
-- 
1.6.3.rc0.1.gf800



* [PATCH 03/13] scsi: unify allocation of scsi command and sense buffer
  2009-05-25  7:30 [PATCH 0/12] Per-bdi writeback flusher threads #5 Jens Axboe
                   ` (3 preceding siblings ...)
  2009-05-25  7:30 ` [PATCH 02/12] btrfs: properly register fs backing device Jens Axboe
@ 2009-05-25  7:30 ` Jens Axboe
  2009-05-25  7:41   ` Christoph Hellwig
                     ` (2 more replies)
  2009-05-25  7:30 ` [PATCH 03/12] writeback: move dirty inodes from super_block to backing_dev_info Jens Axboe
                   ` (20 subsequent siblings)
  25 siblings, 3 replies; 61+ messages in thread
From: Jens Axboe @ 2009-05-25  7:30 UTC (permalink / raw)
  To: linux-kernel, linux-fsdevel
  Cc: chris.mason, david, hch, akpm, jack, yanmin_zhang, Jens Axboe

Fold the sense buffer into the command, thereby eliminating a slab
allocation and free per command.
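
A minimal user-space sketch of the layout this switches to, with plain
calloc() standing in for the kmem_cache and a C99 flexible array member
where the patch uses the kernel's zero-length array:

	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>

	#define SENSE_BUFFERSIZE 96

	struct cmd {
		int result;
		unsigned char sense_buffer[];	/* storage follows the struct */
	};

	int main(void)
	{
		/* one allocation now covers the command and its sense data */
		struct cmd *c = calloc(1, sizeof(*c) + SENSE_BUFFERSIZE);

		if (!c)
			return 1;
		memset(c->sense_buffer, 0x70, SENSE_BUFFERSIZE);
		printf("%zu bytes in a single allocation\n",
		       sizeof(*c) + SENSE_BUFFERSIZE);
		free(c);	/* and a single free */
		return 0;
	}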

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
 drivers/scsi/scsi.c      |   44 ++++++++++----------------------------------
 include/scsi/scsi_cmnd.h |   12 ++++++------
 2 files changed, 16 insertions(+), 40 deletions(-)

diff --git a/drivers/scsi/scsi.c b/drivers/scsi/scsi.c
index 166417a..6a993af 100644
--- a/drivers/scsi/scsi.c
+++ b/drivers/scsi/scsi.c
@@ -133,7 +133,6 @@ EXPORT_SYMBOL(scsi_device_type);
 
 struct scsi_host_cmd_pool {
 	struct kmem_cache	*cmd_slab;
-	struct kmem_cache	*sense_slab;
 	unsigned int		users;
 	char			*cmd_name;
 	char			*sense_name;
@@ -167,20 +166,9 @@ static DEFINE_MUTEX(host_cmd_pool_mutex);
 static struct scsi_cmnd *
 scsi_pool_alloc_command(struct scsi_host_cmd_pool *pool, gfp_t gfp_mask)
 {
-	struct scsi_cmnd *cmd;
-
-	cmd = kmem_cache_zalloc(pool->cmd_slab, gfp_mask | pool->gfp_mask);
-	if (!cmd)
-		return NULL;
+	gfp_t gfp = gfp_mask | pool->gfp_mask;
 
-	cmd->sense_buffer = kmem_cache_alloc(pool->sense_slab,
-					     gfp_mask | pool->gfp_mask);
-	if (!cmd->sense_buffer) {
-		kmem_cache_free(pool->cmd_slab, cmd);
-		return NULL;
-	}
-
-	return cmd;
+	return kmem_cache_zalloc(pool->cmd_slab, gfp);
 }
 
 /**
@@ -198,7 +186,6 @@ scsi_pool_free_command(struct scsi_host_cmd_pool *pool,
 	if (cmd->prot_sdb)
 		kmem_cache_free(scsi_sdb_cache, cmd->prot_sdb);
 
-	kmem_cache_free(pool->sense_slab, cmd->sense_buffer);
 	kmem_cache_free(pool->cmd_slab, cmd);
 }
 
@@ -242,7 +229,6 @@ scsi_host_alloc_command(struct Scsi_Host *shost, gfp_t gfp_mask)
 struct scsi_cmnd *__scsi_get_command(struct Scsi_Host *shost, gfp_t gfp_mask)
 {
 	struct scsi_cmnd *cmd;
-	unsigned char *buf;
 
 	cmd = scsi_host_alloc_command(shost, gfp_mask);
 
@@ -257,11 +243,8 @@ struct scsi_cmnd *__scsi_get_command(struct Scsi_Host *shost, gfp_t gfp_mask)
 		}
 		spin_unlock_irqrestore(&shost->free_list_lock, flags);
 
-		if (cmd) {
-			buf = cmd->sense_buffer;
+		if (cmd)
 			memset(cmd, 0, sizeof(*cmd));
-			cmd->sense_buffer = buf;
-		}
 	}
 
 	return cmd;
@@ -361,19 +344,13 @@ static struct scsi_host_cmd_pool *scsi_get_host_cmd_pool(gfp_t gfp_mask)
 	pool = (gfp_mask & __GFP_DMA) ? &scsi_cmd_dma_pool :
 		&scsi_cmd_pool;
 	if (!pool->users) {
-		pool->cmd_slab = kmem_cache_create(pool->cmd_name,
-						   sizeof(struct scsi_cmnd), 0,
-						   pool->slab_flags, NULL);
-		if (!pool->cmd_slab)
-			goto fail;
+		unsigned int slab_size;
 
-		pool->sense_slab = kmem_cache_create(pool->sense_name,
-						     SCSI_SENSE_BUFFERSIZE, 0,
-						     pool->slab_flags, NULL);
-		if (!pool->sense_slab) {
-			kmem_cache_destroy(pool->cmd_slab);
+		slab_size = sizeof(struct scsi_cmnd) + SCSI_SENSE_BUFFERSIZE;
+		pool->cmd_slab = kmem_cache_create(pool->cmd_name, slab_size,
+						   0, pool->slab_flags, NULL);
+		if (!pool->cmd_slab)
 			goto fail;
-		}
 	}
 
 	pool->users++;
@@ -397,10 +374,9 @@ static void scsi_put_host_cmd_pool(gfp_t gfp_mask)
 	 */
 	BUG_ON(pool->users == 0);
 
-	if (!--pool->users) {
+	if (!--pool->users)
 		kmem_cache_destroy(pool->cmd_slab);
-		kmem_cache_destroy(pool->sense_slab);
-	}
+
 	mutex_unlock(&host_cmd_pool_mutex);
 }
 
diff --git a/include/scsi/scsi_cmnd.h b/include/scsi/scsi_cmnd.h
index 43b50d3..649ad36 100644
--- a/include/scsi/scsi_cmnd.h
+++ b/include/scsi/scsi_cmnd.h
@@ -102,12 +102,6 @@ struct scsi_cmnd {
 	struct request *request;	/* The command we are
 				   	   working on */
 
-#define SCSI_SENSE_BUFFERSIZE 	96
-	unsigned char *sense_buffer;
-				/* obtained by REQUEST SENSE when
-				 * CHECK CONDITION is received on original
-				 * command (auto-sense) */
-
 	/* Low-level done function - can be used by low-level driver to point
 	 *        to completion function.  Not used by mid/upper level code. */
 	void (*scsi_done) (struct scsi_cmnd *);
@@ -129,6 +123,12 @@ struct scsi_cmnd {
 	int result;		/* Status code from lower level driver */
 
 	unsigned char tag;	/* SCSI-II queued command tag */
+
+#define SCSI_SENSE_BUFFERSIZE 	96
+	unsigned char sense_buffer[0];
+				/* obtained by REQUEST SENSE when
+				 * CHECK CONDITION is received on original
+				 * command (auto-sense) */
 };
 
 extern struct scsi_cmnd *scsi_get_command(struct scsi_device *, gfp_t);
-- 
1.6.3.rc0.1.gf800



* [PATCH 03/12] writeback: move dirty inodes from super_block to backing_dev_info
  2009-05-25  7:30 [PATCH 0/12] Per-bdi writeback flusher threads #5 Jens Axboe
                   ` (4 preceding siblings ...)
  2009-05-25  7:30 ` [PATCH 03/13] scsi: unify allocation of scsi command and sense buffer Jens Axboe
@ 2009-05-25  7:30 ` Jens Axboe
  2009-05-25  7:30 ` [PATCH 04/13] scsi: get rid of lock in __scsi_put_command() Jens Axboe
                   ` (19 subsequent siblings)
  25 siblings, 0 replies; 61+ messages in thread
From: Jens Axboe @ 2009-05-25  7:30 UTC (permalink / raw)
  To: linux-kernel, linux-fsdevel
  Cc: chris.mason, david, hch, akpm, jack, yanmin_zhang, Jens Axboe

This is a first step towards introducing per-bdi flusher threads. There
should be no change in behaviour, although sb_has_dirty_inodes() is now
ridiculously expensive, as there's no longer an easy way to answer that
question. Not a huge problem, since it'll be deleted in subsequent
patches.

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
 fs/fs-writeback.c           |  196 +++++++++++++++++++++++++++---------------
 fs/super.c                  |    3 -
 include/linux/backing-dev.h |    9 ++
 include/linux/fs.h          |    5 +-
 mm/backing-dev.c            |   30 +++++++
 mm/page-writeback.c         |   11 +--
 6 files changed, 170 insertions(+), 84 deletions(-)

diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index 91013ff..1137408 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -25,6 +25,7 @@
 #include <linux/buffer_head.h>
 #include "internal.h"
 
+#define inode_to_bdi(inode)	((inode)->i_mapping->backing_dev_info)
 
 /**
  * writeback_acquire - attempt to get exclusive writeback access to a device
@@ -158,12 +159,13 @@ void __mark_inode_dirty(struct inode *inode, int flags)
 			goto out;
 
 		/*
-		 * If the inode was already on s_dirty/s_io/s_more_io, don't
-		 * reposition it (that would break s_dirty time-ordering).
+		 * If the inode was already on b_dirty/b_io/b_more_io, don't
+		 * reposition it (that would break b_dirty time-ordering).
 		 */
 		if (!was_dirty) {
 			inode->dirtied_when = jiffies;
-			list_move(&inode->i_list, &sb->s_dirty);
+			list_move(&inode->i_list,
+					&inode_to_bdi(inode)->b_dirty);
 		}
 	}
 out:
@@ -184,31 +186,30 @@ static int write_inode(struct inode *inode, int sync)
  * furthest end of its superblock's dirty-inode list.
  *
  * Before stamping the inode's ->dirtied_when, we check to see whether it is
- * already the most-recently-dirtied inode on the s_dirty list.  If that is
+ * already the most-recently-dirtied inode on the b_dirty list.  If that is
  * the case then the inode must have been redirtied while it was being written
  * out and we don't reset its dirtied_when.
  */
 static void redirty_tail(struct inode *inode)
 {
-	struct super_block *sb = inode->i_sb;
+	struct backing_dev_info *bdi = inode_to_bdi(inode);
 
-	if (!list_empty(&sb->s_dirty)) {
-		struct inode *tail_inode;
+	if (!list_empty(&bdi->b_dirty)) {
+		struct inode *tail;
 
-		tail_inode = list_entry(sb->s_dirty.next, struct inode, i_list);
-		if (time_before(inode->dirtied_when,
-				tail_inode->dirtied_when))
+		tail = list_entry(bdi->b_dirty.next, struct inode, i_list);
+		if (time_before(inode->dirtied_when, tail->dirtied_when))
 			inode->dirtied_when = jiffies;
 	}
-	list_move(&inode->i_list, &sb->s_dirty);
+	list_move(&inode->i_list, &bdi->b_dirty);
 }
 
 /*
- * requeue inode for re-scanning after sb->s_io list is exhausted.
+ * requeue inode for re-scanning after bdi->b_io list is exhausted.
  */
 static void requeue_io(struct inode *inode)
 {
-	list_move(&inode->i_list, &inode->i_sb->s_more_io);
+	list_move(&inode->i_list, &inode_to_bdi(inode)->b_more_io);
 }
 
 static void inode_sync_complete(struct inode *inode)
@@ -255,18 +256,50 @@ static void move_expired_inodes(struct list_head *delaying_queue,
 /*
  * Queue all expired dirty inodes for io, eldest first.
  */
-static void queue_io(struct super_block *sb,
-				unsigned long *older_than_this)
+static void queue_io(struct backing_dev_info *bdi,
+		     unsigned long *older_than_this)
+{
+	list_splice_init(&bdi->b_more_io, bdi->b_io.prev);
+	move_expired_inodes(&bdi->b_dirty, &bdi->b_io, older_than_this);
+}
+
+static int sb_on_inode_list(struct super_block *sb, struct list_head *list)
 {
-	list_splice_init(&sb->s_more_io, sb->s_io.prev);
-	move_expired_inodes(&sb->s_dirty, &sb->s_io, older_than_this);
+	struct inode *inode;
+	int ret = 0;
+
+	spin_lock(&inode_lock);
+	list_for_each_entry(inode, list, i_list) {
+		if (inode->i_sb == sb) {
+			ret = 1;
+			break;
+		}
+	}
+	spin_unlock(&inode_lock);
+	return ret;
 }
 
 int sb_has_dirty_inodes(struct super_block *sb)
 {
-	return !list_empty(&sb->s_dirty) ||
-	       !list_empty(&sb->s_io) ||
-	       !list_empty(&sb->s_more_io);
+	struct backing_dev_info *bdi;
+	int ret = 0;
+
+	/*
+	 * This is REALLY expensive right now, but it'll go away
+	 * when the bdi writeback is introduced
+	 */
+	mutex_lock(&bdi_lock);
+	list_for_each_entry(bdi, &bdi_list, bdi_list) {
+		if (sb_on_inode_list(sb, &bdi->b_dirty) ||
+		    sb_on_inode_list(sb, &bdi->b_io) ||
+		    sb_on_inode_list(sb, &bdi->b_more_io)) {
+			ret = 1;
+			break;
+		}
+	}
+	mutex_unlock(&bdi_lock);
+
+	return ret;
 }
 EXPORT_SYMBOL(sb_has_dirty_inodes);
 
@@ -322,11 +355,11 @@ __sync_single_inode(struct inode *inode, struct writeback_control *wbc)
 			/*
 			 * We didn't write back all the pages.  nfs_writepages()
 			 * sometimes bales out without doing anything. Redirty
-			 * the inode; Move it from s_io onto s_more_io/s_dirty.
+			 * the inode; Move it from b_io onto b_more_io/b_dirty.
 			 */
 			/*
 			 * akpm: if the caller was the kupdate function we put
-			 * this inode at the head of s_dirty so it gets first
+			 * this inode at the head of b_dirty so it gets first
 			 * consideration.  Otherwise, move it to the tail, for
 			 * the reasons described there.  I'm not really sure
 			 * how much sense this makes.  Presumably I had a good
@@ -336,7 +369,7 @@ __sync_single_inode(struct inode *inode, struct writeback_control *wbc)
 			if (wbc->for_kupdate) {
 				/*
 				 * For the kupdate function we move the inode
-				 * to s_more_io so it will get more writeout as
+				 * to b_more_io so it will get more writeout as
 				 * soon as the queue becomes uncongested.
 				 */
 				inode->i_state |= I_DIRTY_PAGES;
@@ -402,10 +435,10 @@ __writeback_single_inode(struct inode *inode, struct writeback_control *wbc)
 	if ((wbc->sync_mode != WB_SYNC_ALL) && (inode->i_state & I_SYNC)) {
 		/*
 		 * We're skipping this inode because it's locked, and we're not
-		 * doing writeback-for-data-integrity.  Move it to s_more_io so
-		 * that writeback can proceed with the other inodes on s_io.
+		 * doing writeback-for-data-integrity.  Move it to b_more_io so
+		 * that writeback can proceed with the other inodes on b_io.
 		 * We'll have another go at writing back this inode when we
-		 * completed a full scan of s_io.
+		 * completed a full scan of b_io.
 		 */
 		requeue_io(inode);
 		return 0;
@@ -428,51 +461,34 @@ __writeback_single_inode(struct inode *inode, struct writeback_control *wbc)
 	return __sync_single_inode(inode, wbc);
 }
 
-/*
- * Write out a superblock's list of dirty inodes.  A wait will be performed
- * upon no inodes, all inodes or the final one, depending upon sync_mode.
- *
- * If older_than_this is non-NULL, then only write out inodes which
- * had their first dirtying at a time earlier than *older_than_this.
- *
- * If we're a pdflush thread, then implement pdflush collision avoidance
- * against the entire list.
- *
- * If `bdi' is non-zero then we're being asked to writeback a specific queue.
- * This function assumes that the blockdev superblock's inodes are backed by
- * a variety of queues, so all inodes are searched.  For other superblocks,
- * assume that all inodes are backed by the same queue.
- *
- * FIXME: this linear search could get expensive with many fileystems.  But
- * how to fix?  We need to go from an address_space to all inodes which share
- * a queue with that address_space.  (Easy: have a global "dirty superblocks"
- * list).
- *
- * The inodes to be written are parked on sb->s_io.  They are moved back onto
- * sb->s_dirty as they are selected for writing.  This way, none can be missed
- * on the writer throttling path, and we get decent balancing between many
- * throttled threads: we don't want them all piling up on inode_sync_wait.
- */
-void generic_sync_sb_inodes(struct super_block *sb,
-				struct writeback_control *wbc)
+static void generic_sync_bdi_inodes(struct backing_dev_info *bdi,
+				    struct writeback_control *wbc,
+				    struct super_block *sb,
+				    int is_blkdev_sb)
 {
 	const unsigned long start = jiffies;	/* livelock avoidance */
-	int sync = wbc->sync_mode == WB_SYNC_ALL;
 
 	spin_lock(&inode_lock);
-	if (!wbc->for_kupdate || list_empty(&sb->s_io))
-		queue_io(sb, wbc->older_than_this);
 
-	while (!list_empty(&sb->s_io)) {
-		struct inode *inode = list_entry(sb->s_io.prev,
+	if (!wbc->for_kupdate || list_empty(&bdi->b_io))
+		queue_io(bdi, wbc->older_than_this);
+
+	while (!list_empty(&bdi->b_io)) {
+		struct inode *inode = list_entry(bdi->b_io.prev,
 						struct inode, i_list);
-		struct address_space *mapping = inode->i_mapping;
-		struct backing_dev_info *bdi = mapping->backing_dev_info;
 		long pages_skipped;
 
+		/*
+		 * super block given and doesn't match, skip this inode
+		 */
+		if (sb && sb != inode->i_sb) {
+			redirty_tail(inode);
+			continue;
+		}
+
 		if (!bdi_cap_writeback_dirty(bdi)) {
 			redirty_tail(inode);
-			if (sb_is_blkdev_sb(sb)) {
+			if (is_blkdev_sb) {
 				/*
 				 * Dirty memory-backed blockdev: the ramdisk
 				 * driver does this.  Skip just this inode
@@ -494,14 +510,14 @@ void generic_sync_sb_inodes(struct super_block *sb,
 
 		if (wbc->nonblocking && bdi_write_congested(bdi)) {
 			wbc->encountered_congestion = 1;
-			if (!sb_is_blkdev_sb(sb))
+			if (!is_blkdev_sb)
 				break;		/* Skip a congested fs */
 			requeue_io(inode);
 			continue;		/* Skip a congested blockdev */
 		}
 
 		if (wbc->bdi && bdi != wbc->bdi) {
-			if (!sb_is_blkdev_sb(sb))
+			if (!is_blkdev_sb)
 				break;		/* fs has the wrong queue */
 			requeue_io(inode);
 			continue;		/* blockdev has wrong queue */
@@ -539,13 +555,55 @@ void generic_sync_sb_inodes(struct super_block *sb,
 			wbc->more_io = 1;
 			break;
 		}
-		if (!list_empty(&sb->s_more_io))
+		if (!list_empty(&bdi->b_more_io))
 			wbc->more_io = 1;
 	}
 
-	if (sync) {
+	spin_unlock(&inode_lock);
+	/* Leave any unwritten inodes on b_io */
+}
+
+/*
+ * Write out a superblock's list of dirty inodes.  A wait will be performed
+ * upon no inodes, all inodes or the final one, depending upon sync_mode.
+ *
+ * If older_than_this is non-NULL, then only write out inodes which
+ * had their first dirtying at a time earlier than *older_than_this.
+ *
+ * If we're a pdflush thread, then implement pdflush collision avoidance
+ * against the entire list.
+ *
+ * If `bdi' is non-zero then we're being asked to writeback a specific queue.
+ * This function assumes that the blockdev superblock's inodes are backed by
+ * a variety of queues, so all inodes are searched.  For other superblocks,
+ * assume that all inodes are backed by the same queue.
+ *
+ * FIXME: this linear search could get expensive with many fileystems.  But
+ * how to fix?  We need to go from an address_space to all inodes which share
+ * a queue with that address_space.  (Easy: have a global "dirty superblocks"
+ * list).
+ *
+ * The inodes to be written are parked on bdi->b_io.  They are moved back onto
+ * bdi->b_dirty as they are selected for writing.  This way, none can be missed
+ * on the writer throttling path, and we get decent balancing between many
+ * throttled threads: we don't want them all piling up on inode_sync_wait.
+ */
+void generic_sync_sb_inodes(struct super_block *sb,
+				struct writeback_control *wbc)
+{
+	const int is_blkdev_sb = sb_is_blkdev_sb(sb);
+	struct backing_dev_info *bdi;
+
+	mutex_lock(&bdi_lock);
+	list_for_each_entry(bdi, &bdi_list, bdi_list)
+		generic_sync_bdi_inodes(bdi, wbc, sb, is_blkdev_sb);
+	mutex_unlock(&bdi_lock);
+
+	if (wbc->sync_mode == WB_SYNC_ALL) {
 		struct inode *inode, *old_inode = NULL;
 
+		spin_lock(&inode_lock);
+
 		/*
 		 * Data integrity sync. Must wait for all pages under writeback,
 		 * because there may have been pages dirtied before our sync
@@ -583,10 +641,8 @@ void generic_sync_sb_inodes(struct super_block *sb,
 		}
 		spin_unlock(&inode_lock);
 		iput(old_inode);
-	} else
-		spin_unlock(&inode_lock);
+	}
 
-	return;		/* Leave any unwritten inodes on s_io */
 }
 EXPORT_SYMBOL_GPL(generic_sync_sb_inodes);
 
@@ -601,8 +657,8 @@ static void sync_sb_inodes(struct super_block *sb,
  *
  * Note:
  * We don't need to grab a reference to superblock here. If it has non-empty
- * ->s_dirty it's hadn't been killed yet and kill_super() won't proceed
- * past sync_inodes_sb() until the ->s_dirty/s_io/s_more_io lists are all
+ * ->b_dirty it's hadn't been killed yet and kill_super() won't proceed
+ * past sync_inodes_sb() until the ->b_dirty/b_io/b_more_io lists are all
  * empty. Since __sync_single_inode() regains inode_lock before it finally moves
  * inode from superblock lists we are OK.
  *
diff --git a/fs/super.c b/fs/super.c
index 1943fdf..76dd5b2 100644
--- a/fs/super.c
+++ b/fs/super.c
@@ -64,9 +64,6 @@ static struct super_block *alloc_super(struct file_system_type *type)
 			s = NULL;
 			goto out;
 		}
-		INIT_LIST_HEAD(&s->s_dirty);
-		INIT_LIST_HEAD(&s->s_io);
-		INIT_LIST_HEAD(&s->s_more_io);
 		INIT_LIST_HEAD(&s->s_files);
 		INIT_LIST_HEAD(&s->s_instances);
 		INIT_HLIST_HEAD(&s->s_anon);
diff --git a/include/linux/backing-dev.h b/include/linux/backing-dev.h
index 0ec2c59..8719c87 100644
--- a/include/linux/backing-dev.h
+++ b/include/linux/backing-dev.h
@@ -40,6 +40,8 @@ enum bdi_stat_item {
 #define BDI_STAT_BATCH (8*(1+ilog2(nr_cpu_ids)))
 
 struct backing_dev_info {
+	struct list_head bdi_list;
+
 	unsigned long ra_pages;	/* max readahead in PAGE_CACHE_SIZE units */
 	unsigned long state;	/* Always use atomic bitops on this */
 	unsigned int capabilities; /* Device capabilities */
@@ -58,6 +60,10 @@ struct backing_dev_info {
 
 	struct device *dev;
 
+	struct list_head	b_dirty;	/* dirty inodes */
+	struct list_head	b_io;		/* parked for writeback */
+	struct list_head	b_more_io;	/* parked for more writeback */
+
 #ifdef CONFIG_DEBUG_FS
 	struct dentry *debug_dir;
 	struct dentry *debug_stats;
@@ -72,6 +78,9 @@ int bdi_register(struct backing_dev_info *bdi, struct device *parent,
 int bdi_register_dev(struct backing_dev_info *bdi, dev_t dev);
 void bdi_unregister(struct backing_dev_info *bdi);
 
+extern struct mutex bdi_lock;
+extern struct list_head bdi_list;
+
 static inline void __add_bdi_stat(struct backing_dev_info *bdi,
 		enum bdi_stat_item item, s64 amount)
 {
diff --git a/include/linux/fs.h b/include/linux/fs.h
index 3b534e5..6b475d4 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -712,7 +712,7 @@ static inline int mapping_writably_mapped(struct address_space *mapping)
 
 struct inode {
 	struct hlist_node	i_hash;
-	struct list_head	i_list;
+	struct list_head	i_list;		/* backing dev IO list */
 	struct list_head	i_sb_list;
 	struct list_head	i_dentry;
 	unsigned long		i_ino;
@@ -1329,9 +1329,6 @@ struct super_block {
 	struct xattr_handler	**s_xattr;
 
 	struct list_head	s_inodes;	/* all inodes */
-	struct list_head	s_dirty;	/* dirty inodes */
-	struct list_head	s_io;		/* parked for writeback */
-	struct list_head	s_more_io;	/* parked for more writeback */
 	struct hlist_head	s_anon;		/* anonymous dentries for (nfs) exporting */
 	struct list_head	s_files;
 	/* s_dentry_lru and s_nr_dentry_unused are protected by dcache_lock */
diff --git a/mm/backing-dev.c b/mm/backing-dev.c
index 493b468..186fdce 100644
--- a/mm/backing-dev.c
+++ b/mm/backing-dev.c
@@ -22,6 +22,8 @@ struct backing_dev_info default_backing_dev_info = {
 EXPORT_SYMBOL_GPL(default_backing_dev_info);
 
 static struct class *bdi_class;
+DEFINE_MUTEX(bdi_lock);
+LIST_HEAD(bdi_list);
 
 #ifdef CONFIG_DEBUG_FS
 #include <linux/debugfs.h>
@@ -211,6 +213,10 @@ int bdi_register(struct backing_dev_info *bdi, struct device *parent,
 		goto exit;
 	}
 
+	mutex_lock(&bdi_lock);
+	list_add_tail_rcu(&bdi->bdi_list, &bdi_list);
+	mutex_unlock(&bdi_lock);
+
 	bdi->dev = dev;
 	bdi_debug_register(bdi, dev_name(dev));
 
@@ -225,9 +231,23 @@ int bdi_register_dev(struct backing_dev_info *bdi, dev_t dev)
 }
 EXPORT_SYMBOL(bdi_register_dev);
 
+static void bdi_remove_from_list(struct backing_dev_info *bdi)
+{
+	mutex_lock(&bdi_lock);
+	list_del_rcu(&bdi->bdi_list);
+	mutex_unlock(&bdi_lock);
+
+	/*
+	 * In case the bdi is freed right after unregister, we need to
+	 * make sure any RCU sections have exited
+	 */
+	synchronize_rcu();
+}
+
 void bdi_unregister(struct backing_dev_info *bdi)
 {
 	if (bdi->dev) {
+		bdi_remove_from_list(bdi);
 		bdi_debug_unregister(bdi);
 		device_unregister(bdi->dev);
 		bdi->dev = NULL;
@@ -245,6 +265,10 @@ int bdi_init(struct backing_dev_info *bdi)
 	bdi->min_ratio = 0;
 	bdi->max_ratio = 100;
 	bdi->max_prop_frac = PROP_FRAC_BASE;
+	INIT_LIST_HEAD(&bdi->bdi_list);
+	INIT_LIST_HEAD(&bdi->b_io);
+	INIT_LIST_HEAD(&bdi->b_dirty);
+	INIT_LIST_HEAD(&bdi->b_more_io);
 
 	for (i = 0; i < NR_BDI_STAT_ITEMS; i++) {
 		err = percpu_counter_init(&bdi->bdi_stat[i], 0);
@@ -259,6 +283,8 @@ int bdi_init(struct backing_dev_info *bdi)
 err:
 		while (i--)
 			percpu_counter_destroy(&bdi->bdi_stat[i]);
+
+		bdi_remove_from_list(bdi);
 	}
 
 	return err;
@@ -269,6 +295,10 @@ void bdi_destroy(struct backing_dev_info *bdi)
 {
 	int i;
 
+	WARN_ON(!list_empty(&bdi->b_dirty));
+	WARN_ON(!list_empty(&bdi->b_io));
+	WARN_ON(!list_empty(&bdi->b_more_io));
+
 	bdi_unregister(bdi);
 
 	for (i = 0; i < NR_BDI_STAT_ITEMS; i++)
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index bb553c3..7c44314 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -319,15 +319,13 @@ static void task_dirty_limit(struct task_struct *tsk, long *pdirty)
 /*
  *
  */
-static DEFINE_SPINLOCK(bdi_lock);
 static unsigned int bdi_min_ratio;
 
 int bdi_set_min_ratio(struct backing_dev_info *bdi, unsigned int min_ratio)
 {
 	int ret = 0;
-	unsigned long flags;
 
-	spin_lock_irqsave(&bdi_lock, flags);
+	mutex_lock(&bdi_lock);
 	if (min_ratio > bdi->max_ratio) {
 		ret = -EINVAL;
 	} else {
@@ -339,27 +337,26 @@ int bdi_set_min_ratio(struct backing_dev_info *bdi, unsigned int min_ratio)
 			ret = -EINVAL;
 		}
 	}
-	spin_unlock_irqrestore(&bdi_lock, flags);
+	mutex_unlock(&bdi_lock);
 
 	return ret;
 }
 
 int bdi_set_max_ratio(struct backing_dev_info *bdi, unsigned max_ratio)
 {
-	unsigned long flags;
 	int ret = 0;
 
 	if (max_ratio > 100)
 		return -EINVAL;
 
-	spin_lock_irqsave(&bdi_lock, flags);
+	mutex_lock(&bdi_lock);
 	if (bdi->min_ratio > max_ratio) {
 		ret = -EINVAL;
 	} else {
 		bdi->max_ratio = max_ratio;
 		bdi->max_prop_frac = (PROP_FRAC_BASE * max_ratio) / 100;
 	}
-	spin_unlock_irqrestore(&bdi_lock, flags);
+	mutex_unlock(&bdi_lock);
 
 	return ret;
 }
-- 
1.6.3.rc0.1.gf800



* [PATCH 04/13] scsi: get rid of lock in __scsi_put_command()
  2009-05-25  7:30 [PATCH 0/12] Per-bdi writeback flusher threads #5 Jens Axboe
                   ` (5 preceding siblings ...)
  2009-05-25  7:30 ` [PATCH 03/12] writeback: move dirty inodes from super_block to backing_dev_info Jens Axboe
@ 2009-05-25  7:30 ` Jens Axboe
  2009-05-25  7:30 ` [PATCH 04/12] writeback: switch to per-bdi threads for flushing data Jens Axboe
                   ` (18 subsequent siblings)
  25 siblings, 0 replies; 61+ messages in thread
From: Jens Axboe @ 2009-05-25  7:30 UTC (permalink / raw)
  To: linux-kernel, linux-fsdevel
  Cc: chris.mason, david, hch, akpm, jack, yanmin_zhang, Jens Axboe

A memory barrier is enough to check whether the list is empty or
not; then we only have to grab the lock if we stole the reserved
command (which is very unlikely).

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
 drivers/scsi/scsi.c |   10 +++++-----
 1 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/drivers/scsi/scsi.c b/drivers/scsi/scsi.c
index 6a993af..da33b7a 100644
--- a/drivers/scsi/scsi.c
+++ b/drivers/scsi/scsi.c
@@ -293,15 +293,15 @@ EXPORT_SYMBOL(scsi_get_command);
 void __scsi_put_command(struct Scsi_Host *shost, struct scsi_cmnd *cmd,
 			struct device *dev)
 {
-	unsigned long flags;
-
-	/* changing locks here, don't need to restore the irq state */
-	spin_lock_irqsave(&shost->free_list_lock, flags);
+	smp_mb();
 	if (unlikely(list_empty(&shost->free_list))) {
+		unsigned long flags;
+
+		spin_lock_irqsave(&shost->free_list_lock, flags);
 		list_add(&cmd->list, &shost->free_list);
+		spin_unlock_irqrestore(&shost->free_list_lock, flags);
 		cmd = NULL;
 	}
-	spin_unlock_irqrestore(&shost->free_list_lock, flags);
 
 	if (likely(cmd != NULL))
 		scsi_pool_free_command(shost->cmd_pool, cmd);
-- 
1.6.3.rc0.1.gf800



* [PATCH 04/12] writeback: switch to per-bdi threads for flushing data
  2009-05-25  7:30 [PATCH 0/12] Per-bdi writeback flusher threads #5 Jens Axboe
                   ` (6 preceding siblings ...)
  2009-05-25  7:30 ` [PATCH 04/13] scsi: get rid of lock in __scsi_put_command() Jens Axboe
@ 2009-05-25  7:30 ` Jens Axboe
  2009-05-25  7:30 ` [PATCH 05/13] aio: mostly crap Jens Axboe
                   ` (17 subsequent siblings)
  25 siblings, 0 replies; 61+ messages in thread
From: Jens Axboe @ 2009-05-25  7:30 UTC (permalink / raw)
  To: linux-kernel, linux-fsdevel
  Cc: chris.mason, david, hch, akpm, jack, yanmin_zhang, Jens Axboe

This gets rid of pdflush for bdi writeout and kupdated-style cleaning.
This is an experiment to see if we get better writeout behaviour with
per-bdi flushing. Some initial tests look pretty encouraging. A sample
ffsb workload that does random writes to files is about 8% faster here
on a simple SATA drive during the benchmark phase. File layout also looks
a LOT smoother in vmstat:

 r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy id wa
 0  1      0 608848   2652 375372    0    0     0 71024  604    24  1 10 48 42
 0  1      0 549644   2712 433736    0    0     0 60692  505    27  1  8 48 44
 1  0      0 476928   2784 505192    0    0     4 29540  553    24  0  9 53 37
 0  1      0 457972   2808 524008    0    0     0 54876  331    16  0  4 38 58
 0  1      0 366128   2928 614284    0    0     4 92168  710    58  0 13 53 34
 0  1      0 295092   3000 684140    0    0     0 62924  572    23  0  9 53 37
 0  1      0 236592   3064 741704    0    0     4 58256  523    17  0  8 48 44
 0  1      0 165608   3132 811464    0    0     0 57460  560    21  0  8 54 38
 0  1      0 102952   3200 873164    0    0     4 74748  540    29  1 10 48 41
 0  1      0  48604   3252 926472    0    0     0 53248  469    29  0  7 47 45

where vanilla tends to fluctuate a lot in the creation phase:

 r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy id wa
 1  1      0 678716   5792 303380    0    0     0 74064  565    50  1 11 52 36
 1  0      0 662488   5864 319396    0    0     4   352  302   329  0  2 47 51
 0  1      0 599312   5924 381468    0    0     0 78164  516    55  0  9 51 40
 0  1      0 519952   6008 459516    0    0     4 78156  622    56  1 11 52 37
 1  1      0 436640   6092 541632    0    0     0 82244  622    54  0 11 48 41
 0  1      0 436640   6092 541660    0    0     0     8  152    39  0  0 51 49
 0  1      0 332224   6200 644252    0    0     4 102800  728    46  1 13 49 36
 1  0      0 274492   6260 701056    0    0     4 12328  459    49  0  7 50 43
 0  1      0 211220   6324 763356    0    0     0 106940  515    37  1 10 51 39
 1  0      0 160412   6376 813468    0    0     0  8224  415    43  0  6 49 45
 1  1      0  85980   6452 886556    0    0     4 113516  575    39  1 11 54 34
 0  2      0  85968   6452 886620    0    0     0  1640  158   211  0  0 46 54

So apart from seemingly behaving better for buffered writeout, this also
allows us to potentially have more than one bdi thread flushing out data.
That may be useful for NUMA-type setups.

A 10-disk test with btrfs performs 26% faster with per-bdi flushing. Other
tests are pending.

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
 fs/buffer.c                 |    2 +-
 fs/fs-writeback.c           |  316 ++++++++++++++++++++++++++-----------------
 fs/sync.c                   |    2 +-
 include/linux/backing-dev.h |   30 ++++
 include/linux/fs.h          |    3 +-
 include/linux/writeback.h   |    2 +-
 mm/backing-dev.c            |  181 +++++++++++++++++++++++--
 mm/page-writeback.c         |  141 +------------------
 mm/vmscan.c                 |    2 +-
 9 files changed, 402 insertions(+), 277 deletions(-)

diff --git a/fs/buffer.c b/fs/buffer.c
index aed2977..14f0802 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -281,7 +281,7 @@ static void free_more_memory(void)
 	struct zone *zone;
 	int nid;
 
-	wakeup_pdflush(1024);
+	wakeup_flusher_threads(1024);
 	yield();
 
 	for_each_online_node(nid) {
diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index 1137408..7cb4d02 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -19,6 +19,8 @@
 #include <linux/sched.h>
 #include <linux/fs.h>
 #include <linux/mm.h>
+#include <linux/kthread.h>
+#include <linux/freezer.h>
 #include <linux/writeback.h>
 #include <linux/blkdev.h>
 #include <linux/backing-dev.h>
@@ -61,10 +63,193 @@ int writeback_in_progress(struct backing_dev_info *bdi)
  */
 static void writeback_release(struct backing_dev_info *bdi)
 {
-	BUG_ON(!writeback_in_progress(bdi));
+	WARN_ON_ONCE(!writeback_in_progress(bdi));
+	bdi->wb_arg.nr_pages = 0;
+	bdi->wb_arg.sb = NULL;
 	clear_bit(BDI_pdflush, &bdi->state);
 }
 
+int bdi_start_writeback(struct backing_dev_info *bdi, struct super_block *sb,
+			 long nr_pages, enum writeback_sync_modes sync_mode)
+{
+	/*
+	 * This only happens the first time someone kicks this bdi, so put
+	 * it out-of-line.
+	 */
+	if (unlikely(!bdi->task)) {
+		bdi_add_default_flusher_task(bdi);
+		return 1;
+	}
+
+	if (writeback_acquire(bdi)) {
+		bdi->wb_arg.nr_pages = nr_pages;
+		bdi->wb_arg.sb = sb;
+		bdi->wb_arg.sync_mode = sync_mode;
+		/*
+		 * make above store seen before the task is woken
+		 */
+		smp_mb();
+		wake_up(&bdi->wait);
+	}
+
+	return 0;
+}
+
+/*
+ * The maximum number of pages to writeout in a single bdi flush/kupdate
+ * operation.  We do this so we don't hold I_SYNC against an inode for
+ * enormous amounts of time, which would block a userspace task which has
+ * been forced to throttle against that inode.  Also, the code reevaluates
+ * the dirty each time it has written this many pages.
+ */
+#define MAX_WRITEBACK_PAGES     1024
+
+/*
+ * Periodic writeback of "old" data.
+ *
+ * Define "old": the first time one of an inode's pages is dirtied, we mark the
+ * dirtying-time in the inode's address_space.  So this periodic writeback code
+ * just walks the superblock inode list, writing back any inodes which are
+ * older than a specific point in time.
+ *
+ * Try to run once per dirty_writeback_interval.  But if a writeback event
+ * takes longer than a dirty_writeback_interval interval, then leave a
+ * one-second gap.
+ *
+ * older_than_this takes precedence over nr_to_write.  So we'll only write back
+ * all dirty pages if they are all attached to "old" mappings.
+ */
+static void bdi_kupdated(struct backing_dev_info *bdi)
+{
+	unsigned long oldest_jif;
+	long nr_to_write;
+	struct writeback_control wbc = {
+		.bdi			= bdi,
+		.sync_mode		= WB_SYNC_NONE,
+		.older_than_this	= &oldest_jif,
+		.nr_to_write		= 0,
+		.for_kupdate		= 1,
+		.range_cyclic		= 1,
+	};
+
+	sync_supers();
+
+	oldest_jif = jiffies - msecs_to_jiffies(dirty_expire_interval * 10);
+
+	nr_to_write = global_page_state(NR_FILE_DIRTY) +
+			global_page_state(NR_UNSTABLE_NFS) +
+			(inodes_stat.nr_inodes - inodes_stat.nr_unused);
+
+	while (nr_to_write > 0) {
+		wbc.more_io = 0;
+		wbc.encountered_congestion = 0;
+		wbc.nr_to_write = MAX_WRITEBACK_PAGES;
+		generic_sync_bdi_inodes(NULL, &wbc);
+		if (wbc.nr_to_write > 0)
+			break;	/* All the old data is written */
+		nr_to_write -= MAX_WRITEBACK_PAGES;
+	}
+}
+
+static inline bool over_bground_thresh(void)
+{
+	unsigned long background_thresh, dirty_thresh;
+
+	get_dirty_limits(&background_thresh, &dirty_thresh, NULL, NULL);
+
+	return (global_page_state(NR_FILE_DIRTY) +
+		global_page_state(NR_UNSTABLE_NFS) >= background_thresh);
+}
+
+static void bdi_pdflush(struct backing_dev_info *bdi)
+{
+	struct writeback_control wbc = {
+		.bdi			= bdi,
+		.sync_mode		= bdi->wb_arg.sync_mode,
+		.older_than_this	= NULL,
+		.range_cyclic		= 1,
+	};
+	long nr_pages = bdi->wb_arg.nr_pages;
+
+	for (;;) {
+		if (wbc.sync_mode == WB_SYNC_NONE && nr_pages <= 0 &&
+		    !over_bground_thresh())
+			break;
+
+		wbc.more_io = 0;
+		wbc.encountered_congestion = 0;
+		wbc.nr_to_write = MAX_WRITEBACK_PAGES;
+		wbc.pages_skipped = 0;
+		generic_sync_bdi_inodes(bdi->wb_arg.sb, &wbc);
+		nr_pages -= MAX_WRITEBACK_PAGES - wbc.nr_to_write;
+		/*
+		 * If we ran out of stuff to write, bail unless more_io got set
+		 */
+		if (wbc.nr_to_write > 0 || wbc.pages_skipped > 0) {
+			if (wbc.more_io)
+				continue;
+			break;
+		}
+	}
+}
+
+/*
+ * Handle writeback of dirty data for the device backed by this bdi. Also
+ * wakes up periodically and does kupdated style flushing.
+ */
+int bdi_writeback_task(struct backing_dev_info *bdi)
+{
+	while (!kthread_should_stop()) {
+		unsigned long wait_jiffies;
+		DEFINE_WAIT(wait);
+
+		prepare_to_wait(&bdi->wait, &wait, TASK_INTERRUPTIBLE);
+		wait_jiffies = msecs_to_jiffies(dirty_writeback_interval * 10);
+		schedule_timeout(wait_jiffies);
+		try_to_freeze();
+
+		/*
+		 * We get here in two cases:
+		 *
+		 *  schedule_timeout() returned because the dirty writeback
+		 *  interval has elapsed. If that happens, we will be able
+		 *  to acquire the writeback lock and will proceed to do
+		 *  kupdated style writeout.
+		 *
+		 *  Someone called bdi_start_writeback(), which will acquire
+		 *  the writeback lock. This means our writeback_acquire()
+		 *  below will fail and we call into bdi_pdflush() for
+		 *  pdflush style writeout.
+		 *
+		 */
+		if (writeback_acquire(bdi))
+			bdi_kupdated(bdi);
+		else
+			bdi_pdflush(bdi);
+
+		writeback_release(bdi);
+		finish_wait(&bdi->wait, &wait);
+	}
+
+	return 0;
+}
+
+void bdi_writeback_all(struct super_block *sb, long nr_pages,
+		       enum writeback_sync_modes sync_mode)
+{
+	struct backing_dev_info *bdi, *tmp;
+
+	mutex_lock(&bdi_lock);
+
+	list_for_each_entry_safe(bdi, tmp, &bdi_list, bdi_list) {
+		if (!bdi_has_dirty_io(bdi))
+			continue;
+		bdi_start_writeback(bdi, sb, nr_pages, sync_mode);
+	}
+
+	mutex_unlock(&bdi_lock);
+}
+
 /**
  *	__mark_inode_dirty -	internal function
  *	@inode: inode to mark
@@ -263,46 +448,6 @@ static void queue_io(struct backing_dev_info *bdi,
 	move_expired_inodes(&bdi->b_dirty, &bdi->b_io, older_than_this);
 }
 
-static int sb_on_inode_list(struct super_block *sb, struct list_head *list)
-{
-	struct inode *inode;
-	int ret = 0;
-
-	spin_lock(&inode_lock);
-	list_for_each_entry(inode, list, i_list) {
-		if (inode->i_sb == sb) {
-			ret = 1;
-			break;
-		}
-	}
-	spin_unlock(&inode_lock);
-	return ret;
-}
-
-int sb_has_dirty_inodes(struct super_block *sb)
-{
-	struct backing_dev_info *bdi;
-	int ret = 0;
-
-	/*
-	 * This is REALLY expensive right now, but it'll go away
-	 * when the bdi writeback is introduced
-	 */
-	mutex_lock(&bdi_lock);
-	list_for_each_entry(bdi, &bdi_list, bdi_list) {
-		if (sb_on_inode_list(sb, &bdi->b_dirty) ||
-		    sb_on_inode_list(sb, &bdi->b_io) ||
-		    sb_on_inode_list(sb, &bdi->b_more_io)) {
-			ret = 1;
-			break;
-		}
-	}
-	mutex_unlock(&bdi_lock);
-
-	return ret;
-}
-EXPORT_SYMBOL(sb_has_dirty_inodes);
-
 /*
  * Write a single inode's dirty pages and inode data out to disk.
  * If `wait' is set, wait on the writeout.
@@ -461,11 +606,11 @@ __writeback_single_inode(struct inode *inode, struct writeback_control *wbc)
 	return __sync_single_inode(inode, wbc);
 }
 
-static void generic_sync_bdi_inodes(struct backing_dev_info *bdi,
-				    struct writeback_control *wbc,
-				    struct super_block *sb,
-				    int is_blkdev_sb)
+void generic_sync_bdi_inodes(struct super_block *sb,
+			     struct writeback_control *wbc)
 {
+	const int is_blkdev_sb = sb_is_blkdev_sb(sb);
+	struct backing_dev_info *bdi = wbc->bdi;
 	const unsigned long start = jiffies;	/* livelock avoidance */
 
 	spin_lock(&inode_lock);
@@ -516,13 +661,6 @@ static void generic_sync_bdi_inodes(struct backing_dev_info *bdi,
 			continue;		/* Skip a congested blockdev */
 		}
 
-		if (wbc->bdi && bdi != wbc->bdi) {
-			if (!is_blkdev_sb)
-				break;		/* fs has the wrong queue */
-			requeue_io(inode);
-			continue;		/* blockdev has wrong queue */
-		}
-
 		/*
 		 * Was this inode dirtied after sync_sb_inodes was called?
 		 * This keeps sync from extra jobs and livelock.
@@ -530,16 +668,10 @@ static void generic_sync_bdi_inodes(struct backing_dev_info *bdi,
 		if (inode_dirtied_after(inode, start))
 			break;
 
-		/* Is another pdflush already flushing this queue? */
-		if (current_is_pdflush() && !writeback_acquire(bdi))
-			break;
-
 		BUG_ON(inode->i_state & I_FREEING);
 		__iget(inode);
 		pages_skipped = wbc->pages_skipped;
 		__writeback_single_inode(inode, wbc);
-		if (current_is_pdflush())
-			writeback_release(bdi);
 		if (wbc->pages_skipped != pages_skipped) {
 			/*
 			 * writeback is not making progress due to locked
@@ -578,11 +710,6 @@ static void generic_sync_bdi_inodes(struct backing_dev_info *bdi,
  * a variety of queues, so all inodes are searched.  For other superblocks,
  * assume that all inodes are backed by the same queue.
  *
- * FIXME: this linear search could get expensive with many fileystems.  But
- * how to fix?  We need to go from an address_space to all inodes which share
- * a queue with that address_space.  (Easy: have a global "dirty superblocks"
- * list).
- *
  * The inodes to be written are parked on bdi->b_io.  They are moved back onto
  * bdi->b_dirty as they are selected for writing.  This way, none can be missed
  * on the writer throttling path, and we get decent balancing between many
@@ -591,13 +718,10 @@ static void generic_sync_bdi_inodes(struct backing_dev_info *bdi,
 void generic_sync_sb_inodes(struct super_block *sb,
 				struct writeback_control *wbc)
 {
-	const int is_blkdev_sb = sb_is_blkdev_sb(sb);
-	struct backing_dev_info *bdi;
-
-	mutex_lock(&bdi_lock);
-	list_for_each_entry(bdi, &bdi_list, bdi_list)
-		generic_sync_bdi_inodes(bdi, wbc, sb, is_blkdev_sb);
-	mutex_unlock(&bdi_lock);
+	if (wbc->bdi)
+		generic_sync_bdi_inodes(sb, wbc);
+	else
+		bdi_writeback_all(sb, wbc->nr_to_write, wbc->sync_mode);
 
 	if (wbc->sync_mode == WB_SYNC_ALL) {
 		struct inode *inode, *old_inode = NULL;
@@ -653,58 +777,6 @@ static void sync_sb_inodes(struct super_block *sb,
 }
 
 /*
- * Start writeback of dirty pagecache data against all unlocked inodes.
- *
- * Note:
- * We don't need to grab a reference to superblock here. If it has non-empty
- * ->b_dirty it's hadn't been killed yet and kill_super() won't proceed
- * past sync_inodes_sb() until the ->b_dirty/b_io/b_more_io lists are all
- * empty. Since __sync_single_inode() regains inode_lock before it finally moves
- * inode from superblock lists we are OK.
- *
- * If `older_than_this' is non-zero then only flush inodes which have a
- * flushtime older than *older_than_this.
- *
- * If `bdi' is non-zero then we will scan the first inode against each
- * superblock until we find the matching ones.  One group will be the dirty
- * inodes against a filesystem.  Then when we hit the dummy blockdev superblock,
- * sync_sb_inodes will seekout the blockdev which matches `bdi'.  Maybe not
- * super-efficient but we're about to do a ton of I/O...
- */
-void
-writeback_inodes(struct writeback_control *wbc)
-{
-	struct super_block *sb;
-
-	might_sleep();
-	spin_lock(&sb_lock);
-restart:
-	list_for_each_entry_reverse(sb, &super_blocks, s_list) {
-		if (sb_has_dirty_inodes(sb)) {
-			/* we're making our own get_super here */
-			sb->s_count++;
-			spin_unlock(&sb_lock);
-			/*
-			 * If we can't get the readlock, there's no sense in
-			 * waiting around, most of the time the FS is going to
-			 * be unmounted by the time it is released.
-			 */
-			if (down_read_trylock(&sb->s_umount)) {
-				if (sb->s_root)
-					sync_sb_inodes(sb, wbc);
-				up_read(&sb->s_umount);
-			}
-			spin_lock(&sb_lock);
-			if (__put_super_and_need_restart(sb))
-				goto restart;
-		}
-		if (wbc->nr_to_write <= 0)
-			break;
-	}
-	spin_unlock(&sb_lock);
-}
-
-/*
  * writeback and wait upon the filesystem's dirty inodes.  The caller will
  * do this in two passes - one to write, and one to wait.
  *
diff --git a/fs/sync.c b/fs/sync.c
index 7abc65f..3887f10 100644
--- a/fs/sync.c
+++ b/fs/sync.c
@@ -23,7 +23,7 @@
  */
 static void do_sync(unsigned long wait)
 {
-	wakeup_pdflush(0);
+	wakeup_flusher_threads(0);
 	sync_inodes(0);		/* All mappings, inodes and their blockdevs */
 	vfs_dq_sync(NULL);
 	sync_supers();		/* Write the superblocks */
diff --git a/include/linux/backing-dev.h b/include/linux/backing-dev.h
index 8719c87..f164925 100644
--- a/include/linux/backing-dev.h
+++ b/include/linux/backing-dev.h
@@ -13,6 +13,7 @@
 #include <linux/proportions.h>
 #include <linux/kernel.h>
 #include <linux/fs.h>
+#include <linux/writeback.h>
 #include <asm/atomic.h>
 
 struct page;
@@ -24,6 +25,7 @@ struct dentry;
  */
 enum bdi_state {
 	BDI_pdflush,		/* A pdflush thread is working this device */
+	BDI_pending,		/* On its way to being activated */
 	BDI_async_congested,	/* The async (write) queue is getting full */
 	BDI_sync_congested,	/* The sync queue is getting full */
 	BDI_unused,		/* Available bits start here */
@@ -39,6 +41,12 @@ enum bdi_stat_item {
 
 #define BDI_STAT_BATCH (8*(1+ilog2(nr_cpu_ids)))
 
+struct bdi_writeback_arg {
+	unsigned long nr_pages;
+	struct super_block *sb;
+	enum writeback_sync_modes sync_mode;
+};
+
 struct backing_dev_info {
 	struct list_head bdi_list;
 
@@ -60,6 +68,9 @@ struct backing_dev_info {
 
 	struct device *dev;
 
+	struct task_struct	*task;		/* writeback task */
+	wait_queue_head_t	wait;
+	struct bdi_writeback_arg wb_arg;	/* protected by BDI_pdflush */
 	struct list_head	b_dirty;	/* dirty inodes */
 	struct list_head	b_io;		/* parked for writeback */
 	struct list_head	b_more_io;	/* parked for more writeback */
@@ -77,10 +88,23 @@ int bdi_register(struct backing_dev_info *bdi, struct device *parent,
 		const char *fmt, ...);
 int bdi_register_dev(struct backing_dev_info *bdi, dev_t dev);
 void bdi_unregister(struct backing_dev_info *bdi);
+int bdi_start_writeback(struct backing_dev_info *bdi, struct super_block *sb,
+			 long nr_pages, enum writeback_sync_modes sync_mode);
+int bdi_writeback_task(struct backing_dev_info *bdi);
+void bdi_writeback_all(struct super_block *sb, long nr_pages,
+			enum writeback_sync_modes sync_mode);
+void bdi_add_default_flusher_task(struct backing_dev_info *bdi);
 
 extern struct mutex bdi_lock;
 extern struct list_head bdi_list;
 
+static inline int bdi_has_dirty_io(struct backing_dev_info *bdi)
+{
+	return !list_empty(&bdi->b_dirty) ||
+	       !list_empty(&bdi->b_io) ||
+	       !list_empty(&bdi->b_more_io);
+}
+
 static inline void __add_bdi_stat(struct backing_dev_info *bdi,
 		enum bdi_stat_item item, s64 amount)
 {
@@ -196,6 +220,7 @@ int bdi_set_max_ratio(struct backing_dev_info *bdi, unsigned int max_ratio);
 #define BDI_CAP_EXEC_MAP	0x00000040
 #define BDI_CAP_NO_ACCT_WB	0x00000080
 #define BDI_CAP_SWAP_BACKED	0x00000100
+#define BDI_CAP_FLUSH_FORKER	0x00000200
 
 #define BDI_CAP_VMFLAGS \
 	(BDI_CAP_READ_MAP | BDI_CAP_WRITE_MAP | BDI_CAP_EXEC_MAP)
@@ -265,6 +290,11 @@ static inline bool bdi_cap_swap_backed(struct backing_dev_info *bdi)
 	return bdi->capabilities & BDI_CAP_SWAP_BACKED;
 }
 
+static inline bool bdi_cap_flush_forker(struct backing_dev_info *bdi)
+{
+	return bdi->capabilities & BDI_CAP_FLUSH_FORKER;
+}
+
 static inline bool mapping_cap_writeback_dirty(struct address_space *mapping)
 {
 	return bdi_cap_writeback_dirty(mapping->backing_dev_info);
diff --git a/include/linux/fs.h b/include/linux/fs.h
index 6b475d4..ecdc544 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -2063,6 +2063,8 @@ extern int invalidate_inode_pages2_range(struct address_space *mapping,
 					 pgoff_t start, pgoff_t end);
 extern void generic_sync_sb_inodes(struct super_block *sb,
 				struct writeback_control *wbc);
+extern void generic_sync_bdi_inodes(struct super_block *sb,
+				struct writeback_control *);
 extern int write_inode_now(struct inode *, int);
 extern int filemap_fdatawrite(struct address_space *);
 extern int filemap_flush(struct address_space *);
@@ -2180,7 +2182,6 @@ extern int bdev_read_only(struct block_device *);
 extern int set_blocksize(struct block_device *, int);
 extern int sb_set_blocksize(struct super_block *, int);
 extern int sb_min_blocksize(struct super_block *, int);
-extern int sb_has_dirty_inodes(struct super_block *);
 
 extern int generic_file_mmap(struct file *, struct vm_area_struct *);
 extern int generic_file_readonly_mmap(struct file *, struct vm_area_struct *);
diff --git a/include/linux/writeback.h b/include/linux/writeback.h
index 9344547..a8e9f78 100644
--- a/include/linux/writeback.h
+++ b/include/linux/writeback.h
@@ -99,7 +99,7 @@ static inline void inode_sync_wait(struct inode *inode)
 /*
  * mm/page-writeback.c
  */
-int wakeup_pdflush(long nr_pages);
+void wakeup_flusher_threads(long nr_pages);
 void laptop_io_completion(void);
 void laptop_sync_completion(void);
 void throttle_vm_writeout(gfp_t gfp_mask);
diff --git a/mm/backing-dev.c b/mm/backing-dev.c
index 186fdce..57c8487 100644
--- a/mm/backing-dev.c
+++ b/mm/backing-dev.c
@@ -1,8 +1,11 @@
 
 #include <linux/wait.h>
 #include <linux/backing-dev.h>
+#include <linux/kthread.h>
+#include <linux/freezer.h>
 #include <linux/fs.h>
 #include <linux/pagemap.h>
+#include <linux/mm.h>
 #include <linux/sched.h>
 #include <linux/module.h>
 #include <linux/writeback.h>
@@ -16,7 +19,7 @@ EXPORT_SYMBOL(default_unplug_io_fn);
 struct backing_dev_info default_backing_dev_info = {
 	.ra_pages	= VM_MAX_READAHEAD * 1024 / PAGE_CACHE_SIZE,
 	.state		= 0,
-	.capabilities	= BDI_CAP_MAP_COPY,
+	.capabilities	= BDI_CAP_MAP_COPY | BDI_CAP_FLUSH_FORKER,
 	.unplug_io_fn	= default_unplug_io_fn,
 };
 EXPORT_SYMBOL_GPL(default_backing_dev_info);
@@ -24,6 +27,7 @@ EXPORT_SYMBOL_GPL(default_backing_dev_info);
 static struct class *bdi_class;
 DEFINE_MUTEX(bdi_lock);
 LIST_HEAD(bdi_list);
+LIST_HEAD(bdi_pending_list);
 
 #ifdef CONFIG_DEBUG_FS
 #include <linux/debugfs.h>
@@ -195,6 +199,127 @@ static int __init default_bdi_init(void)
 }
 subsys_initcall(default_bdi_init);
 
+static int bdi_start_fn(void *ptr)
+{
+	struct backing_dev_info *bdi = ptr;
+	struct task_struct *tsk = current;
+
+	/*
+	 * Add us to the active bdi_list
+	 */
+	mutex_lock(&bdi_lock);
+	list_add(&bdi->bdi_list, &bdi_list);
+	mutex_unlock(&bdi_lock);
+
+	tsk->flags |= PF_FLUSHER | PF_SWAPWRITE;
+	set_freezable();
+
+	/*
+	 * Our parent may run at a different priority, just set us to normal
+	 */
+	set_user_nice(tsk, 0);
+
+	/*
+	 * Clear pending bit and wakeup anybody waiting to tear us down
+	 */
+	clear_bit(BDI_pending, &bdi->state);
+	smp_mb__after_clear_bit();
+	wake_up_bit(&bdi->state, BDI_pending);
+
+	return bdi_writeback_task(bdi);
+}
+
+static int bdi_forker_task(void *ptr)
+{
+	struct backing_dev_info *me = ptr;
+	DEFINE_WAIT(wait);
+
+	for (;;) {
+		struct backing_dev_info *bdi, *tmp;
+
+		/*
+		 * Should never trigger on the default bdi
+		 */
+		WARN_ON(bdi_has_dirty_io(me));
+
+		prepare_to_wait(&me->wait, &wait, TASK_INTERRUPTIBLE);
+
+		mutex_lock(&bdi_lock);
+
+		/*
+		 * Check if any existing bdi's have dirty data without
+		 * a thread registered. If so, set that up.
+		 */
+		list_for_each_entry_safe(bdi, tmp, &bdi_list, bdi_list) {
+			if (bdi->task || !bdi_has_dirty_io(bdi))
+				continue;
+
+			bdi_add_default_flusher_task(bdi);
+		}
+
+		if (list_empty(&bdi_pending_list)) {
+			unsigned long wait;
+
+			mutex_unlock(&bdi_lock);
+			wait = msecs_to_jiffies(dirty_writeback_interval * 10);
+			schedule_timeout(wait);
+			try_to_freeze();
+			continue;
+		}
+
+		bdi = list_entry(bdi_pending_list.next, struct backing_dev_info,
+				 bdi_list);
+		list_del_init(&bdi->bdi_list);
+		mutex_unlock(&bdi_lock);
+
+		BUG_ON(bdi->task);
+
+		bdi->task = kthread_run(bdi_start_fn, bdi, "bdi-%s",
+					dev_name(bdi->dev));
+		/*
+		 * If task creation fails, then re-add the bdi to
+		 * the pending list and force writeout of the bdi
+		 * from this forker thread. That will free some memory
+		 * and we can try again.
+		 */
+		if (!bdi->task) {
+			struct writeback_control wbc = {
+				.bdi			= bdi,
+				.sync_mode		= WB_SYNC_NONE,
+				.older_than_this	= NULL,
+				.range_cyclic		= 1,
+			};
+
+			/*
+			 * Add this 'bdi' to the back, so we get
+			 * a chance to flush other bdi's to free
+			 * memory.
+			 */
+			mutex_lock(&bdi_lock);
+			list_add_tail(&bdi->bdi_list, &bdi_pending_list);
+			mutex_unlock(&bdi_lock);
+
+			wbc.nr_to_write = 1024;
+			generic_sync_bdi_inodes(NULL, &wbc);
+		}
+	}
+
+	finish_wait(&me->wait, &wait);
+	return 0;
+}
+
+void bdi_add_default_flusher_task(struct backing_dev_info *bdi)
+{
+	if (test_and_set_bit(BDI_pending, &bdi->state))
+		return;
+
+	mutex_lock(&bdi_lock);
+	list_move_tail(&bdi->bdi_list, &bdi_pending_list);
+	mutex_unlock(&bdi_lock);
+
+	wake_up(&default_backing_dev_info.wait);
+}
+
 int bdi_register(struct backing_dev_info *bdi, struct device *parent,
 		const char *fmt, ...)
 {
@@ -214,12 +339,29 @@ int bdi_register(struct backing_dev_info *bdi, struct device *parent,
 	}
 
 	mutex_lock(&bdi_lock);
-	list_add_tail_rcu(&bdi->bdi_list, &bdi_list);
+	list_add_tail(&bdi->bdi_list, &bdi_list);
 	mutex_unlock(&bdi_lock);
 
 	bdi->dev = dev;
-	bdi_debug_register(bdi, dev_name(dev));
 
+	/*
+	 * Just start the forker thread for our default backing_dev_info,
+	 * and add other bdi's to the list. They will get a thread created
+	 * on-demand when they need it.
+	 */
+	if (bdi_cap_flush_forker(bdi)) {
+		bdi->task = kthread_run(bdi_forker_task, bdi, "bdi-%s",
+						dev_name(dev));
+		if (!bdi->task) {
+			mutex_lock(&bdi_lock);
+			list_del(&bdi->bdi_list);
+			mutex_unlock(&bdi_lock);
+			ret = -ENOMEM;
+			goto exit;
+		}
+	}
+
+	bdi_debug_register(bdi, dev_name(dev));
 exit:
 	return ret;
 }
@@ -231,23 +373,34 @@ int bdi_register_dev(struct backing_dev_info *bdi, dev_t dev)
 }
 EXPORT_SYMBOL(bdi_register_dev);
 
-static void bdi_remove_from_list(struct backing_dev_info *bdi)
+static int sched_wait(void *word)
 {
-	mutex_lock(&bdi_lock);
-	list_del_rcu(&bdi->bdi_list);
-	mutex_unlock(&bdi_lock);
+	schedule();
+	return 0;
+}
 
+static void bdi_wb_shutdown(struct backing_dev_info *bdi)
+{
 	/*
-	 * In case the bdi is freed right after unregister, we need to
-	 * make sure any RCU sections have exited
+	 * If setup is pending, wait for that to complete first
 	 */
-	synchronize_rcu();
+	wait_on_bit(&bdi->state, BDI_pending, sched_wait, TASK_UNINTERRUPTIBLE);
+
+	mutex_lock(&bdi_lock);
+	list_del(&bdi->bdi_list);
+	mutex_unlock(&bdi_lock);
 }
 
 void bdi_unregister(struct backing_dev_info *bdi)
 {
 	if (bdi->dev) {
-		bdi_remove_from_list(bdi);
+		if (!bdi_cap_flush_forker(bdi)) {
+			bdi_wb_shutdown(bdi);
+			if (bdi->task) {
+				kthread_stop(bdi->task);
+				bdi->task = NULL;
+			}
+		}
 		bdi_debug_unregister(bdi);
 		device_unregister(bdi->dev);
 		bdi->dev = NULL;
@@ -257,14 +410,14 @@ EXPORT_SYMBOL(bdi_unregister);
 
 int bdi_init(struct backing_dev_info *bdi)
 {
-	int i;
-	int err;
+	int i, err;
 
 	bdi->dev = NULL;
 
 	bdi->min_ratio = 0;
 	bdi->max_ratio = 100;
 	bdi->max_prop_frac = PROP_FRAC_BASE;
+	init_waitqueue_head(&bdi->wait);
 	INIT_LIST_HEAD(&bdi->bdi_list);
 	INIT_LIST_HEAD(&bdi->b_io);
 	INIT_LIST_HEAD(&bdi->b_dirty);
@@ -283,8 +436,6 @@ int bdi_init(struct backing_dev_info *bdi)
 err:
 		while (i--)
 			percpu_counter_destroy(&bdi->bdi_stat[i]);
-
-		bdi_remove_from_list(bdi);
 	}
 
 	return err;
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 7c44314..54a4a65 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -36,15 +36,6 @@
 #include <linux/pagevec.h>
 
 /*
- * The maximum number of pages to writeout in a single bdflush/kupdate
- * operation.  We do this so we don't hold I_SYNC against an inode for
- * enormous amounts of time, which would block a userspace task which has
- * been forced to throttle against that inode.  Also, the code reevaluates
- * the dirty each time it has written this many pages.
- */
-#define MAX_WRITEBACK_PAGES	1024
-
-/*
  * After a CPU has dirtied this many pages, balance_dirty_pages_ratelimited
  * will look to see if it needs to force writeback or throttling.
  */
@@ -117,8 +108,6 @@ EXPORT_SYMBOL(laptop_mode);
 /* End of sysctl-exported parameters */
 
 
-static void background_writeout(unsigned long _min_pages);
-
 /*
  * Scale the writeback cache size proportional to the relative writeout speeds.
  *
@@ -539,7 +528,7 @@ static void balance_dirty_pages(struct address_space *mapping)
 		 * been flushed to permanent storage.
 		 */
 		if (bdi_nr_reclaimable) {
-			writeback_inodes(&wbc);
+			generic_sync_bdi_inodes(NULL, &wbc);
 			pages_written += write_chunk - wbc.nr_to_write;
 			get_dirty_limits(&background_thresh, &dirty_thresh,
 				       &bdi_thresh, bdi);
@@ -590,7 +579,7 @@ static void balance_dirty_pages(struct address_space *mapping)
 			(!laptop_mode && (global_page_state(NR_FILE_DIRTY)
 					  + global_page_state(NR_UNSTABLE_NFS)
 					  > background_thresh)))
-		pdflush_operation(background_writeout, 0);
+		bdi_start_writeback(bdi, NULL, 0, WB_SYNC_NONE);
 }
 
 void set_page_dirty_balance(struct page *page, int page_mkwrite)
@@ -675,152 +664,36 @@ void throttle_vm_writeout(gfp_t gfp_mask)
 }
 
 /*
- * writeback at least _min_pages, and keep writing until the amount of dirty
- * memory is less than the background threshold, or until we're all clean.
- */
-static void background_writeout(unsigned long _min_pages)
-{
-	long min_pages = _min_pages;
-	struct writeback_control wbc = {
-		.bdi		= NULL,
-		.sync_mode	= WB_SYNC_NONE,
-		.older_than_this = NULL,
-		.nr_to_write	= 0,
-		.nonblocking	= 1,
-		.range_cyclic	= 1,
-	};
-
-	for ( ; ; ) {
-		unsigned long background_thresh;
-		unsigned long dirty_thresh;
-
-		get_dirty_limits(&background_thresh, &dirty_thresh, NULL, NULL);
-		if (global_page_state(NR_FILE_DIRTY) +
-			global_page_state(NR_UNSTABLE_NFS) < background_thresh
-				&& min_pages <= 0)
-			break;
-		wbc.more_io = 0;
-		wbc.encountered_congestion = 0;
-		wbc.nr_to_write = MAX_WRITEBACK_PAGES;
-		wbc.pages_skipped = 0;
-		writeback_inodes(&wbc);
-		min_pages -= MAX_WRITEBACK_PAGES - wbc.nr_to_write;
-		if (wbc.nr_to_write > 0 || wbc.pages_skipped > 0) {
-			/* Wrote less than expected */
-			if (wbc.encountered_congestion || wbc.more_io)
-				congestion_wait(WRITE, HZ/10);
-			else
-				break;
-		}
-	}
-}
-
-/*
  * Start writeback of `nr_pages' pages.  If `nr_pages' is zero, write back
  * the whole world.  Returns 0 if a pdflush thread was dispatched.  Returns
  * -1 if all pdflush threads were busy.
  */
-int wakeup_pdflush(long nr_pages)
+void wakeup_flusher_threads(long nr_pages)
 {
 	if (nr_pages == 0)
 		nr_pages = global_page_state(NR_FILE_DIRTY) +
 				global_page_state(NR_UNSTABLE_NFS);
-	return pdflush_operation(background_writeout, nr_pages);
+	bdi_writeback_all(NULL, nr_pages, WB_SYNC_NONE);
+	return;
 }
 
-static void wb_timer_fn(unsigned long unused);
 static void laptop_timer_fn(unsigned long unused);
 
-static DEFINE_TIMER(wb_timer, wb_timer_fn, 0, 0);
 static DEFINE_TIMER(laptop_mode_wb_timer, laptop_timer_fn, 0, 0);
 
 /*
- * Periodic writeback of "old" data.
- *
- * Define "old": the first time one of an inode's pages is dirtied, we mark the
- * dirtying-time in the inode's address_space.  So this periodic writeback code
- * just walks the superblock inode list, writing back any inodes which are
- * older than a specific point in time.
- *
- * Try to run once per dirty_writeback_interval.  But if a writeback event
- * takes longer than a dirty_writeback_interval interval, then leave a
- * one-second gap.
- *
- * older_than_this takes precedence over nr_to_write.  So we'll only write back
- * all dirty pages if they are all attached to "old" mappings.
- */
-static void wb_kupdate(unsigned long arg)
-{
-	unsigned long oldest_jif;
-	unsigned long start_jif;
-	unsigned long next_jif;
-	long nr_to_write;
-	struct writeback_control wbc = {
-		.bdi		= NULL,
-		.sync_mode	= WB_SYNC_NONE,
-		.older_than_this = &oldest_jif,
-		.nr_to_write	= 0,
-		.nonblocking	= 1,
-		.for_kupdate	= 1,
-		.range_cyclic	= 1,
-	};
-
-	sync_supers();
-
-	oldest_jif = jiffies - msecs_to_jiffies(dirty_expire_interval * 10);
-	start_jif = jiffies;
-	next_jif = start_jif + msecs_to_jiffies(dirty_writeback_interval * 10);
-	nr_to_write = global_page_state(NR_FILE_DIRTY) +
-			global_page_state(NR_UNSTABLE_NFS) +
-			(inodes_stat.nr_inodes - inodes_stat.nr_unused);
-	while (nr_to_write > 0) {
-		wbc.more_io = 0;
-		wbc.encountered_congestion = 0;
-		wbc.nr_to_write = MAX_WRITEBACK_PAGES;
-		writeback_inodes(&wbc);
-		if (wbc.nr_to_write > 0) {
-			if (wbc.encountered_congestion || wbc.more_io)
-				congestion_wait(WRITE, HZ/10);
-			else
-				break;	/* All the old data is written */
-		}
-		nr_to_write -= MAX_WRITEBACK_PAGES - wbc.nr_to_write;
-	}
-	if (time_before(next_jif, jiffies + HZ))
-		next_jif = jiffies + HZ;
-	if (dirty_writeback_interval)
-		mod_timer(&wb_timer, next_jif);
-}
-
-/*
  * sysctl handler for /proc/sys/vm/dirty_writeback_centisecs
  */
 int dirty_writeback_centisecs_handler(ctl_table *table, int write,
 	struct file *file, void __user *buffer, size_t *length, loff_t *ppos)
 {
 	proc_dointvec(table, write, file, buffer, length, ppos);
-	if (dirty_writeback_interval)
-		mod_timer(&wb_timer, jiffies +
-			msecs_to_jiffies(dirty_writeback_interval * 10));
-	else
-		del_timer(&wb_timer);
 	return 0;
 }
 
-static void wb_timer_fn(unsigned long unused)
-{
-	if (pdflush_operation(wb_kupdate, 0) < 0)
-		mod_timer(&wb_timer, jiffies + HZ); /* delay 1 second */
-}
-
-static void laptop_flush(unsigned long unused)
-{
-	sys_sync();
-}
-
 static void laptop_timer_fn(unsigned long unused)
 {
-	pdflush_operation(laptop_flush, 0);
+	wakeup_flusher_threads(0);
 }
 
 /*
@@ -903,8 +776,6 @@ void __init page_writeback_init(void)
 {
 	int shift;
 
-	mod_timer(&wb_timer,
-		  jiffies + msecs_to_jiffies(dirty_writeback_interval * 10));
 	writeback_set_ratelimit();
 	register_cpu_notifier(&ratelimit_nb);
 
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 5fa3eda..e37fd38 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1654,7 +1654,7 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
 		 */
 		if (total_scanned > sc->swap_cluster_max +
 					sc->swap_cluster_max / 2) {
-			wakeup_pdflush(laptop_mode ? 0 : total_scanned);
+			wakeup_flusher_threads(laptop_mode ? 0 : total_scanned);
 			sc->may_writepage = 1;
 		}
 
-- 
1.6.3.rc0.1.gf800
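
A minimal sketch, not part of the patch, of how a caller might kick
writeback with the per-bdi interface added above instead of the old
pdflush_operation(background_writeout, nr) path. The helper
kick_bdi_writeback() is made up for illustration; bdi_has_dirty_io(),
bdi_start_writeback() and wakeup_flusher_threads() are the functions this
patch declares.

	#include <linux/backing-dev.h>
	#include <linux/writeback.h>

	static void kick_bdi_writeback(struct backing_dev_info *bdi,
				       long nr_pages)
	{
		/* nothing queued on this bdi: flush everything instead */
		if (!bdi_has_dirty_io(bdi)) {
			wakeup_flusher_threads(nr_pages);
			return;
		}

		/*
		 * Hand the work to this bdi's flusher thread. If the thread
		 * doesn't exist yet, bdi_start_writeback() punts the bdi to
		 * the forker task, which creates it on demand.
		 */
		bdi_start_writeback(bdi, NULL, nr_pages, WB_SYNC_NONE);
	}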


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [PATCH 05/13] aio: mostly crap
  2009-05-25  7:30 [PATCH 0/12] Per-bdi writeback flusher threads #5 Jens Axboe
                   ` (7 preceding siblings ...)
  2009-05-25  7:30 ` [PATCH 04/12] writeback: switch to per-bdi threads for flushing data Jens Axboe
@ 2009-05-25  7:30 ` Jens Axboe
  2009-05-25  9:09   ` Jan Kara
  2009-05-25  7:30 ` [PATCH 05/12] writeback: get rid of pdflush completely Jens Axboe
                   ` (16 subsequent siblings)
  25 siblings, 1 reply; 61+ messages in thread
From: Jens Axboe @ 2009-05-25  7:30 UTC (permalink / raw)
  To: linux-kernel, linux-fsdevel
  Cc: chris.mason, david, hch, akpm, jack, yanmin_zhang, Jens Axboe

First attempts at getting rid of some locking in aio: make the completion
ring per-CPU, round its size up to the next power of two, and replace the
ring_lock serialization in the event reader with an atomic head that is
advanced via cmpxchg.

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
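Not part of the change itself: a self-contained user-space sketch of the
claim-one-event-from-the-head pattern that the new __aio_read_evt() below
uses, written with C11 atomics in place of atomic_cmpxchg(). All names in
it are illustrative.

	#include <stdatomic.h>
	#include <stdio.h>

	#define RING_SIZE 8	/* power of two, like the rounded-up ring */

	struct ring {
		_Atomic unsigned head;	/* consumer index, advanced with CAS */
		unsigned tail;		/* producer index */
		int events[RING_SIZE];
	};

	/* Returns 1 and stores one event in *out if available, else 0. */
	static int pop_event(struct ring *r, int *out)
	{
		unsigned head;

		do {
			head = atomic_load(&r->head);
			if (head == r->tail)
				return 0;	/* ring empty */
			*out = r->events[head & (RING_SIZE - 1)];
			/* raced with another reader? reload and retry */
		} while (!atomic_compare_exchange_weak(&r->head, &head, head + 1));

		return 1;
	}

	int main(void)
	{
		struct ring r = { .tail = 3, .events = { 10, 20, 30 } };
		int ev;

		while (pop_event(&r, &ev))
			printf("consumed %d\n", ev);
		return 0;
	}
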
 fs/aio.c            |  151 +++++++++++++++++++++++++++++++++------------------
 include/linux/aio.h |   11 ++--
 2 files changed, 103 insertions(+), 59 deletions(-)

diff --git a/fs/aio.c b/fs/aio.c
index 76da125..98c82f2 100644
--- a/fs/aio.c
+++ b/fs/aio.c
@@ -79,9 +79,8 @@ static int __init aio_setup(void)
 	return 0;
 }
 
-static void aio_free_ring(struct kioctx *ctx)
+static void __aio_free_ring(struct kioctx *ctx, struct aio_ring_info *info)
 {
-	struct aio_ring_info *info = &ctx->ring_info;
 	long i;
 
 	for (i=0; i<info->nr_pages; i++)
@@ -99,16 +98,28 @@ static void aio_free_ring(struct kioctx *ctx)
 	info->nr = 0;
 }
 
-static int aio_setup_ring(struct kioctx *ctx)
+static void aio_free_ring(struct kioctx *ctx)
+{
+	unsigned int i;
+
+	for_each_possible_cpu(i) {
+		struct aio_ring_info *info = per_cpu_ptr(ctx->ring_info, i);
+
+		 __aio_free_ring(ctx, info);
+	}
+	free_percpu(ctx->ring_info);
+	ctx->ring_info = NULL;
+}
+
+static int __aio_setup_ring(struct kioctx *ctx, struct aio_ring_info *info)
 {
 	struct aio_ring *ring;
-	struct aio_ring_info *info = &ctx->ring_info;
 	unsigned nr_events = ctx->max_reqs;
 	unsigned long size;
 	int nr_pages;
 
-	/* Compensate for the ring buffer's head/tail overlap entry */
-	nr_events += 2;	/* 1 is required, 2 for good luck */
+	/* round nr_events up to the next power of 2 */
+	nr_events = roundup_pow_of_two(nr_events);
 
 	size = sizeof(struct aio_ring);
 	size += sizeof(struct io_event) * nr_events;
@@ -117,8 +128,6 @@ static int aio_setup_ring(struct kioctx *ctx)
 	if (nr_pages < 0)
 		return -EINVAL;
 
-	nr_events = (PAGE_SIZE * nr_pages - sizeof(struct aio_ring)) / sizeof(struct io_event);
-
 	info->nr = 0;
 	info->ring_pages = info->internal_pages;
 	if (nr_pages > AIO_RING_PAGES) {
@@ -158,7 +167,8 @@ static int aio_setup_ring(struct kioctx *ctx)
 	ring = kmap_atomic(info->ring_pages[0], KM_USER0);
 	ring->nr = nr_events;	/* user copy */
 	ring->id = ctx->user_id;
-	ring->head = ring->tail = 0;
+	atomic_set(&ring->head, 0);
+	ring->tail = 0;
 	ring->magic = AIO_RING_MAGIC;
 	ring->compat_features = AIO_RING_COMPAT_FEATURES;
 	ring->incompat_features = AIO_RING_INCOMPAT_FEATURES;
@@ -168,6 +178,27 @@ static int aio_setup_ring(struct kioctx *ctx)
 	return 0;
 }
 
+static int aio_setup_ring(struct kioctx *ctx)
+{
+	unsigned int i;
+	int ret;
+
+	ctx->ring_info = alloc_percpu(struct aio_ring_info);
+	if (!ctx->ring_info)
+		return -ENOMEM;
+
+	ret = 0;
+	for_each_possible_cpu(i) {
+		struct aio_ring_info *info = per_cpu_ptr(ctx->ring_info, i);
+		int err;
+
+		err = __aio_setup_ring(ctx, info);
+		if (err && !ret)
+			ret = err;
+	}
+
+	return ret;
+}
 
 /* aio_ring_event: returns a pointer to the event at the given index from
  * kmap_atomic(, km).  Release the pointer with put_aio_ring_event();
@@ -176,8 +207,8 @@ static int aio_setup_ring(struct kioctx *ctx)
 #define AIO_EVENTS_FIRST_PAGE	((PAGE_SIZE - sizeof(struct aio_ring)) / sizeof(struct io_event))
 #define AIO_EVENTS_OFFSET	(AIO_EVENTS_PER_PAGE - AIO_EVENTS_FIRST_PAGE)
 
-#define aio_ring_event(info, nr, km) ({					\
-	unsigned pos = (nr) + AIO_EVENTS_OFFSET;			\
+#define aio_ring_event(info, __nr, km) ({				\
+	unsigned pos = ((__nr) & ((info)->nr - 1)) + AIO_EVENTS_OFFSET;	\
 	struct io_event *__event;					\
 	__event = kmap_atomic(						\
 			(info)->ring_pages[pos / AIO_EVENTS_PER_PAGE], km); \
@@ -262,7 +293,6 @@ static struct kioctx *ioctx_alloc(unsigned nr_events)
 
 	atomic_set(&ctx->users, 1);
 	spin_lock_init(&ctx->ctx_lock);
-	spin_lock_init(&ctx->ring_info.ring_lock);
 	init_waitqueue_head(&ctx->wait);
 
 	INIT_LIST_HEAD(&ctx->active_reqs);
@@ -426,6 +456,7 @@ void exit_aio(struct mm_struct *mm)
 static struct kiocb *__aio_get_req(struct kioctx *ctx)
 {
 	struct kiocb *req = NULL;
+	struct aio_ring_info *info;
 	struct aio_ring *ring;
 	int okay = 0;
 
@@ -448,15 +479,18 @@ static struct kiocb *__aio_get_req(struct kioctx *ctx)
 	/* Check if the completion queue has enough free space to
 	 * accept an event from this io.
 	 */
-	spin_lock_irq(&ctx->ctx_lock);
-	ring = kmap_atomic(ctx->ring_info.ring_pages[0], KM_USER0);
-	if (ctx->reqs_active < aio_ring_avail(&ctx->ring_info, ring)) {
+	local_irq_disable();
+	info = per_cpu_ptr(ctx->ring_info, smp_processor_id());
+	ring = kmap_atomic(info->ring_pages[0], KM_IRQ0);
+	if (ctx->reqs_active < aio_ring_avail(info, ring)) {
+		spin_lock(&ctx->ctx_lock);
 		list_add(&req->ki_list, &ctx->active_reqs);
 		ctx->reqs_active++;
+		spin_unlock(&ctx->ctx_lock);
 		okay = 1;
 	}
-	kunmap_atomic(ring, KM_USER0);
-	spin_unlock_irq(&ctx->ctx_lock);
+	kunmap_atomic(ring, KM_IRQ0);
+	local_irq_enable();
 
 	if (!okay) {
 		kmem_cache_free(kiocb_cachep, req);
@@ -578,9 +612,11 @@ int aio_put_req(struct kiocb *req)
 {
 	struct kioctx *ctx = req->ki_ctx;
 	int ret;
+
 	spin_lock_irq(&ctx->ctx_lock);
 	ret = __aio_put_req(ctx, req);
 	spin_unlock_irq(&ctx->ctx_lock);
+
 	return ret;
 }
 
@@ -954,7 +990,7 @@ int aio_complete(struct kiocb *iocb, long res, long res2)
 	struct aio_ring	*ring;
 	struct io_event	*event;
 	unsigned long	flags;
-	unsigned long	tail;
+	unsigned	tail;
 	int		ret;
 
 	/*
@@ -972,15 +1008,14 @@ int aio_complete(struct kiocb *iocb, long res, long res2)
 		return 1;
 	}
 
-	info = &ctx->ring_info;
-
 	/* add a completion event to the ring buffer.
 	 * must be done holding ctx->ctx_lock to prevent
 	 * other code from messing with the tail
 	 * pointer since we might be called from irq
 	 * context.
 	 */
-	spin_lock_irqsave(&ctx->ctx_lock, flags);
+	local_irq_save(flags);
+	info = per_cpu_ptr(ctx->ring_info, smp_processor_id());
 
 	if (iocb->ki_run_list.prev && !list_empty(&iocb->ki_run_list))
 		list_del_init(&iocb->ki_run_list);
@@ -996,8 +1031,6 @@ int aio_complete(struct kiocb *iocb, long res, long res2)
 
 	tail = info->tail;
 	event = aio_ring_event(info, tail, KM_IRQ0);
-	if (++tail >= info->nr)
-		tail = 0;
 
 	event->obj = (u64)(unsigned long)iocb->ki_obj.user;
 	event->data = iocb->ki_user_data;
@@ -1013,13 +1046,14 @@ int aio_complete(struct kiocb *iocb, long res, long res2)
 	 */
 	smp_wmb();	/* make event visible before updating tail */
 
+	tail++;
 	info->tail = tail;
 	ring->tail = tail;
 
 	put_aio_ring_event(event, KM_IRQ0);
 	kunmap_atomic(ring, KM_IRQ1);
 
-	pr_debug("added to ring %p at [%lu]\n", iocb, tail);
+	pr_debug("added to ring %p at [%u]\n", iocb, tail);
 
 	/*
 	 * Check if the user asked us to deliver the result through an
@@ -1031,7 +1065,9 @@ int aio_complete(struct kiocb *iocb, long res, long res2)
 
 put_rq:
 	/* everything turned out well, dispose of the aiocb. */
+	spin_lock(&ctx->ctx_lock);
 	ret = __aio_put_req(ctx, iocb);
+	spin_unlock(&ctx->ctx_lock);
 
 	/*
 	 * We have to order our ring_info tail store above and test
@@ -1044,49 +1080,58 @@ put_rq:
 	if (waitqueue_active(&ctx->wait))
 		wake_up(&ctx->wait);
 
-	spin_unlock_irqrestore(&ctx->ctx_lock, flags);
+	local_irq_restore(flags);
+	return ret;
+}
+
+static int __aio_read_evt(struct aio_ring_info *info, struct aio_ring *ring,
+			  struct io_event *ent)
+{
+	struct io_event *evp;
+	unsigned head;
+	int ret = 0;
+
+	do {
+		head = atomic_read(&ring->head);
+		if (head == ring->tail)
+			break;
+		evp = aio_ring_event(info, head, KM_USER1);
+		*ent = *evp;
+		smp_mb(); /* finish reading the event before updating the head */
+		++ret;
+		put_aio_ring_event(evp, KM_USER1);
+	} while (head != atomic_cmpxchg(&ring->head, head, head + 1));
+
 	return ret;
 }
 
 /* aio_read_evt
  *	Pull an event off of the ioctx's event ring.  Returns the number of 
  *	events fetched (0 or 1 ;-)
- *	FIXME: make this use cmpxchg.
- *	TODO: make the ringbuffer user mmap()able (requires FIXME).
+ *	TODO: make the ringbuffer user mmap()able
  */
 static int aio_read_evt(struct kioctx *ioctx, struct io_event *ent)
 {
-	struct aio_ring_info *info = &ioctx->ring_info;
-	struct aio_ring *ring;
-	unsigned long head;
-	int ret = 0;
+	int i, ret = 0;
 
-	ring = kmap_atomic(info->ring_pages[0], KM_USER0);
-	dprintk("in aio_read_evt h%lu t%lu m%lu\n",
-		 (unsigned long)ring->head, (unsigned long)ring->tail,
-		 (unsigned long)ring->nr);
+	for_each_possible_cpu(i) {
+		struct aio_ring_info *info;
+		struct aio_ring *ring;
 
-	if (ring->head == ring->tail)
-		goto out;
+		info = per_cpu_ptr(ioctx->ring_info, i);
+		ring = kmap_atomic(info->ring_pages[0], KM_USER0);
+		dprintk("in aio_read_evt h%u t%u m%u\n",
+			 atomic_read(&ring->head), ring->tail, ring->nr);
 
-	spin_lock(&info->ring_lock);
-
-	head = ring->head % info->nr;
-	if (head != ring->tail) {
-		struct io_event *evp = aio_ring_event(info, head, KM_USER1);
-		*ent = *evp;
-		head = (head + 1) % info->nr;
-		smp_mb(); /* finish reading the event before updatng the head */
-		ring->head = head;
-		ret = 1;
-		put_aio_ring_event(evp, KM_USER1);
+		ret = __aio_read_evt(info, ring, ent);
+		kunmap_atomic(ring, KM_USER0);
+		if (ret)
+			break;
 	}
-	spin_unlock(&info->ring_lock);
 
-out:
-	kunmap_atomic(ring, KM_USER0);
-	dprintk("leaving aio_read_evt: %d  h%lu t%lu\n", ret,
-		 (unsigned long)ring->head, (unsigned long)ring->tail);
+	dprintk("leaving aio_read_evt: %d  h%u t%u\n", ret,
+		 atomic_read(&ring->head), ring->tail);
+
 	return ret;
 }
 
diff --git a/include/linux/aio.h b/include/linux/aio.h
index b16a957..9a7acb4 100644
--- a/include/linux/aio.h
+++ b/include/linux/aio.h
@@ -149,7 +149,7 @@ struct kiocb {
 struct aio_ring {
 	unsigned	id;	/* kernel internal index number */
 	unsigned	nr;	/* number of io_events */
-	unsigned	head;
+	atomic_t	head;
 	unsigned	tail;
 
 	unsigned	magic;
@@ -157,11 +157,11 @@ struct aio_ring {
 	unsigned	incompat_features;
 	unsigned	header_length;	/* size of aio_ring */
 
-
-	struct io_event		io_events[0];
+	struct io_event	io_events[0];
 }; /* 128 bytes + ring size */
 
-#define aio_ring_avail(info, ring)	(((ring)->head + (info)->nr - 1 - (ring)->tail) % (info)->nr)
+#define aio_ring_avail(info, ring)					\
+	((info)->nr + (unsigned) atomic_read(&(ring)->head) - (ring)->tail)
 
 #define AIO_RING_PAGES	8
 struct aio_ring_info {
@@ -169,7 +169,6 @@ struct aio_ring_info {
 	unsigned long		mmap_size;
 
 	struct page		**ring_pages;
-	spinlock_t		ring_lock;
 	long			nr_pages;
 
 	unsigned		nr, tail;
@@ -197,7 +196,7 @@ struct kioctx {
 	/* sys_io_setup currently limits this to an unsigned int */
 	unsigned		max_reqs;
 
-	struct aio_ring_info	ring_info;
+	struct aio_ring_info	*ring_info;
 
 	struct delayed_work	wq;
 
-- 
1.6.3.rc0.1.gf800


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [PATCH 05/12] writeback: get rid of pdflush completely
  2009-05-25  7:30 [PATCH 0/12] Per-bdi writeback flusher threads #5 Jens Axboe
                   ` (8 preceding siblings ...)
  2009-05-25  7:30 ` [PATCH 05/13] aio: mostly crap Jens Axboe
@ 2009-05-25  7:30 ` Jens Axboe
  2009-05-25  7:30 ` [PATCH 06/13] block: move elevator ops into the queue Jens Axboe
                   ` (15 subsequent siblings)
  25 siblings, 0 replies; 61+ messages in thread
From: Jens Axboe @ 2009-05-25  7:30 UTC (permalink / raw)
  To: linux-kernel, linux-fsdevel
  Cc: chris.mason, david, hch, akpm, jack, yanmin_zhang, Jens Axboe

It is now unused, so kill it off.

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
 fs/fs-writeback.c         |    5 +
 include/linux/writeback.h |   12 --
 mm/Makefile               |    2 +-
 mm/pdflush.c              |  269 ---------------------------------------------
 4 files changed, 6 insertions(+), 282 deletions(-)
 delete mode 100644 mm/pdflush.c

diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index 7cb4d02..ca4d9da 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -29,6 +29,11 @@
 
 #define inode_to_bdi(inode)	((inode)->i_mapping->backing_dev_info)
 
+/*
+ * We don't actually have pdflush, but this one is exported through /proc...
+ */
+int nr_pdflush_threads;
+
 /**
  * writeback_acquire - attempt to get exclusive writeback access to a device
  * @bdi: the device's backing_dev_info structure
diff --git a/include/linux/writeback.h b/include/linux/writeback.h
index a8e9f78..baf04a9 100644
--- a/include/linux/writeback.h
+++ b/include/linux/writeback.h
@@ -14,17 +14,6 @@ extern struct list_head inode_in_use;
 extern struct list_head inode_unused;
 
 /*
- * Yes, writeback.h requires sched.h
- * No, sched.h is not included from here.
- */
-static inline int task_is_pdflush(struct task_struct *task)
-{
-	return task->flags & PF_FLUSHER;
-}
-
-#define current_is_pdflush()	task_is_pdflush(current)
-
-/*
  * fs/fs-writeback.c
  */
 enum writeback_sync_modes {
@@ -151,7 +140,6 @@ balance_dirty_pages_ratelimited(struct address_space *mapping)
 typedef int (*writepage_t)(struct page *page, struct writeback_control *wbc,
 				void *data);
 
-int pdflush_operation(void (*fn)(unsigned long), unsigned long arg0);
 int generic_writepages(struct address_space *mapping,
 		       struct writeback_control *wbc);
 int write_cache_pages(struct address_space *mapping,
diff --git a/mm/Makefile b/mm/Makefile
index ec73c68..2adb811 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -8,7 +8,7 @@ mmu-$(CONFIG_MMU)	:= fremap.o highmem.o madvise.o memory.o mincore.o \
 			   vmalloc.o
 
 obj-y			:= bootmem.o filemap.o mempool.o oom_kill.o fadvise.o \
-			   maccess.o page_alloc.o page-writeback.o pdflush.o \
+			   maccess.o page_alloc.o page-writeback.o \
 			   readahead.o swap.o truncate.o vmscan.o shmem.o \
 			   prio_tree.o util.o mmzone.o vmstat.o backing-dev.o \
 			   page_isolation.o mm_init.o $(mmu-y)
diff --git a/mm/pdflush.c b/mm/pdflush.c
deleted file mode 100644
index 235ac44..0000000
--- a/mm/pdflush.c
+++ /dev/null
@@ -1,269 +0,0 @@
-/*
- * mm/pdflush.c - worker threads for writing back filesystem data
- *
- * Copyright (C) 2002, Linus Torvalds.
- *
- * 09Apr2002	Andrew Morton
- *		Initial version
- * 29Feb2004	kaos@sgi.com
- *		Move worker thread creation to kthread to avoid chewing
- *		up stack space with nested calls to kernel_thread.
- */
-
-#include <linux/sched.h>
-#include <linux/list.h>
-#include <linux/signal.h>
-#include <linux/spinlock.h>
-#include <linux/gfp.h>
-#include <linux/init.h>
-#include <linux/module.h>
-#include <linux/fs.h>		/* Needed by writeback.h	  */
-#include <linux/writeback.h>	/* Prototypes pdflush_operation() */
-#include <linux/kthread.h>
-#include <linux/cpuset.h>
-#include <linux/freezer.h>
-
-
-/*
- * Minimum and maximum number of pdflush instances
- */
-#define MIN_PDFLUSH_THREADS	2
-#define MAX_PDFLUSH_THREADS	8
-
-static void start_one_pdflush_thread(void);
-
-
-/*
- * The pdflush threads are worker threads for writing back dirty data.
- * Ideally, we'd like one thread per active disk spindle.  But the disk
- * topology is very hard to divine at this level.   Instead, we take
- * care in various places to prevent more than one pdflush thread from
- * performing writeback against a single filesystem.  pdflush threads
- * have the PF_FLUSHER flag set in current->flags to aid in this.
- */
-
-/*
- * All the pdflush threads.  Protected by pdflush_lock
- */
-static LIST_HEAD(pdflush_list);
-static DEFINE_SPINLOCK(pdflush_lock);
-
-/*
- * The count of currently-running pdflush threads.  Protected
- * by pdflush_lock.
- *
- * Readable by sysctl, but not writable.  Published to userspace at
- * /proc/sys/vm/nr_pdflush_threads.
- */
-int nr_pdflush_threads = 0;
-
-/*
- * The time at which the pdflush thread pool last went empty
- */
-static unsigned long last_empty_jifs;
-
-/*
- * The pdflush thread.
- *
- * Thread pool management algorithm:
- * 
- * - The minimum and maximum number of pdflush instances are bound
- *   by MIN_PDFLUSH_THREADS and MAX_PDFLUSH_THREADS.
- * 
- * - If there have been no idle pdflush instances for 1 second, create
- *   a new one.
- * 
- * - If the least-recently-went-to-sleep pdflush thread has been asleep
- *   for more than one second, terminate a thread.
- */
-
-/*
- * A structure for passing work to a pdflush thread.  Also for passing
- * state information between pdflush threads.  Protected by pdflush_lock.
- */
-struct pdflush_work {
-	struct task_struct *who;	/* The thread */
-	void (*fn)(unsigned long);	/* A callback function */
-	unsigned long arg0;		/* An argument to the callback */
-	struct list_head list;		/* On pdflush_list, when idle */
-	unsigned long when_i_went_to_sleep;
-};
-
-static int __pdflush(struct pdflush_work *my_work)
-{
-	current->flags |= PF_FLUSHER | PF_SWAPWRITE;
-	set_freezable();
-	my_work->fn = NULL;
-	my_work->who = current;
-	INIT_LIST_HEAD(&my_work->list);
-
-	spin_lock_irq(&pdflush_lock);
-	for ( ; ; ) {
-		struct pdflush_work *pdf;
-
-		set_current_state(TASK_INTERRUPTIBLE);
-		list_move(&my_work->list, &pdflush_list);
-		my_work->when_i_went_to_sleep = jiffies;
-		spin_unlock_irq(&pdflush_lock);
-		schedule();
-		try_to_freeze();
-		spin_lock_irq(&pdflush_lock);
-		if (!list_empty(&my_work->list)) {
-			/*
-			 * Someone woke us up, but without removing our control
-			 * structure from the global list.  swsusp will do this
-			 * in try_to_freeze()->refrigerator().  Handle it.
-			 */
-			my_work->fn = NULL;
-			continue;
-		}
-		if (my_work->fn == NULL) {
-			printk("pdflush: bogus wakeup\n");
-			continue;
-		}
-		spin_unlock_irq(&pdflush_lock);
-
-		(*my_work->fn)(my_work->arg0);
-
-		spin_lock_irq(&pdflush_lock);
-
-		/*
-		 * Thread creation: For how long have there been zero
-		 * available threads?
-		 *
-		 * To throttle creation, we reset last_empty_jifs.
-		 */
-		if (time_after(jiffies, last_empty_jifs + 1 * HZ)) {
-			if (list_empty(&pdflush_list)) {
-				if (nr_pdflush_threads < MAX_PDFLUSH_THREADS) {
-					last_empty_jifs = jiffies;
-					nr_pdflush_threads++;
-					spin_unlock_irq(&pdflush_lock);
-					start_one_pdflush_thread();
-					spin_lock_irq(&pdflush_lock);
-				}
-			}
-		}
-
-		my_work->fn = NULL;
-
-		/*
-		 * Thread destruction: For how long has the sleepiest
-		 * thread slept?
-		 */
-		if (list_empty(&pdflush_list))
-			continue;
-		if (nr_pdflush_threads <= MIN_PDFLUSH_THREADS)
-			continue;
-		pdf = list_entry(pdflush_list.prev, struct pdflush_work, list);
-		if (time_after(jiffies, pdf->when_i_went_to_sleep + 1 * HZ)) {
-			/* Limit exit rate */
-			pdf->when_i_went_to_sleep = jiffies;
-			break;					/* exeunt */
-		}
-	}
-	nr_pdflush_threads--;
-	spin_unlock_irq(&pdflush_lock);
-	return 0;
-}
-
-/*
- * Of course, my_work wants to be just a local in __pdflush().  It is
- * separated out in this manner to hopefully prevent the compiler from
- * performing unfortunate optimisations against the auto variables.  Because
- * these are visible to other tasks and CPUs.  (No problem has actually
- * been observed.  This is just paranoia).
- */
-static int pdflush(void *dummy)
-{
-	struct pdflush_work my_work;
-	cpumask_var_t cpus_allowed;
-
-	/*
-	 * Since the caller doesn't even check kthread_run() worked, let's not
-	 * freak out too much if this fails.
-	 */
-	if (!alloc_cpumask_var(&cpus_allowed, GFP_KERNEL)) {
-		printk(KERN_WARNING "pdflush failed to allocate cpumask\n");
-		return 0;
-	}
-
-	/*
-	 * pdflush can spend a lot of time doing encryption via dm-crypt.  We
-	 * don't want to do that at keventd's priority.
-	 */
-	set_user_nice(current, 0);
-
-	/*
-	 * Some configs put our parent kthread in a limited cpuset,
-	 * which kthread() overrides, forcing cpus_allowed == cpu_all_mask.
-	 * Our needs are more modest - cut back to our cpusets cpus_allowed.
-	 * This is needed as pdflush's are dynamically created and destroyed.
-	 * The boottime pdflush's are easily placed w/o these 2 lines.
-	 */
-	cpuset_cpus_allowed(current, cpus_allowed);
-	set_cpus_allowed_ptr(current, cpus_allowed);
-	free_cpumask_var(cpus_allowed);
-
-	return __pdflush(&my_work);
-}
-
-/*
- * Attempt to wake up a pdflush thread, and get it to do some work for you.
- * Returns zero if it indeed managed to find a worker thread, and passed your
- * payload to it.
- */
-int pdflush_operation(void (*fn)(unsigned long), unsigned long arg0)
-{
-	unsigned long flags;
-	int ret = 0;
-
-	BUG_ON(fn == NULL);	/* Hard to diagnose if it's deferred */
-
-	spin_lock_irqsave(&pdflush_lock, flags);
-	if (list_empty(&pdflush_list)) {
-		ret = -1;
-	} else {
-		struct pdflush_work *pdf;
-
-		pdf = list_entry(pdflush_list.next, struct pdflush_work, list);
-		list_del_init(&pdf->list);
-		if (list_empty(&pdflush_list))
-			last_empty_jifs = jiffies;
-		pdf->fn = fn;
-		pdf->arg0 = arg0;
-		wake_up_process(pdf->who);
-	}
-	spin_unlock_irqrestore(&pdflush_lock, flags);
-
-	return ret;
-}
-
-static void start_one_pdflush_thread(void)
-{
-	struct task_struct *k;
-
-	k = kthread_run(pdflush, NULL, "pdflush");
-	if (unlikely(IS_ERR(k))) {
-		spin_lock_irq(&pdflush_lock);
-		nr_pdflush_threads--;
-		spin_unlock_irq(&pdflush_lock);
-	}
-}
-
-static int __init pdflush_init(void)
-{
-	int i;
-
-	/*
-	 * Pre-set nr_pdflush_threads...  If we fail to create,
-	 * the count will be decremented.
-	 */
-	nr_pdflush_threads = MIN_PDFLUSH_THREADS;
-
-	for (i = 0; i < MIN_PDFLUSH_THREADS; i++)
-		start_one_pdflush_thread();
-	return 0;
-}
-
-module_init(pdflush_init);
-- 
1.6.3.rc0.1.gf800


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [PATCH 06/13] block: move elevator ops into the queue
  2009-05-25  7:30 [PATCH 0/12] Per-bdi writeback flusher threads #5 Jens Axboe
                   ` (9 preceding siblings ...)
  2009-05-25  7:30 ` [PATCH 05/12] writeback: get rid of pdflush completely Jens Axboe
@ 2009-05-25  7:30 ` Jens Axboe
  2009-05-25  7:30 ` [PATCH 06/12] writeback: separate the flushing state/task from the bdi Jens Axboe
                   ` (14 subsequent siblings)
  25 siblings, 0 replies; 61+ messages in thread
From: Jens Axboe @ 2009-05-25  7:30 UTC (permalink / raw)
  To: linux-kernel, linux-fsdevel
  Cc: chris.mason, david, hch, akpm, jack, yanmin_zhang, Jens Axboe

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
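A rough, stand-alone illustration of the pattern the diff below applies,
with made-up names: the elevator ops table is copied by value into the
queue when the elevator is attached, so callers read q->elv_ops.fn with a
single load instead of chasing q->elevator->ops->fn.

	#include <stdio.h>

	struct ops {
		int (*dispatch)(int force);
	};

	static int noop_dispatch(int force)
	{
		return force ? 1 : 0;
	}

	struct elevator {			/* stands in for elevator_queue */
		const struct ops *ops;
	};

	struct queue {				/* stands in for request_queue */
		struct elevator *elevator;
		struct ops ops;			/* cached by value, like q->elv_ops */
	};

	static void attach(struct queue *q, struct elevator *e)
	{
		q->elevator = e;
		q->ops = *e->ops;		/* one-time copy at attach time */
	}

	int main(void)
	{
		static const struct ops noop_ops = { .dispatch = noop_dispatch };
		struct elevator e = { .ops = &noop_ops };
		struct queue q;

		attach(&q, &e);
		/* old style: q.elevator->ops->dispatch(1); new style: */
		printf("dispatched: %d\n", q.ops.dispatch(1));
		return 0;
	}
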
 block/elevator.c       |   83 ++++++++++++++++++-----------------------------
 include/linux/blkdev.h |    1 +
 2 files changed, 33 insertions(+), 51 deletions(-)

diff --git a/block/elevator.c b/block/elevator.c
index 7073a90..fdb0675 100644
--- a/block/elevator.c
+++ b/block/elevator.c
@@ -65,10 +65,9 @@ DEFINE_TRACE(block_rq_issue);
 static int elv_iosched_allow_merge(struct request *rq, struct bio *bio)
 {
 	struct request_queue *q = rq->q;
-	struct elevator_queue *e = q->elevator;
 
-	if (e->ops->elevator_allow_merge_fn)
-		return e->ops->elevator_allow_merge_fn(q, rq, bio);
+	if (q->elv_ops.elevator_allow_merge_fn)
+		return q->elv_ops.elevator_allow_merge_fn(q, rq, bio);
 
 	return 1;
 }
@@ -185,6 +184,7 @@ static void *elevator_init_queue(struct request_queue *q,
 static void elevator_attach(struct request_queue *q, struct elevator_queue *eq,
 			   void *data)
 {
+	q->elv_ops = *eq->ops;
 	q->elevator = eq;
 	eq->elevator_data = data;
 }
@@ -312,18 +312,14 @@ EXPORT_SYMBOL(elevator_exit);
 
 static void elv_activate_rq(struct request_queue *q, struct request *rq)
 {
-	struct elevator_queue *e = q->elevator;
-
-	if (e->ops->elevator_activate_req_fn)
-		e->ops->elevator_activate_req_fn(q, rq);
+	if (q->elv_ops.elevator_activate_req_fn)
+		q->elv_ops.elevator_activate_req_fn(q, rq);
 }
 
 static void elv_deactivate_rq(struct request_queue *q, struct request *rq)
 {
-	struct elevator_queue *e = q->elevator;
-
-	if (e->ops->elevator_deactivate_req_fn)
-		e->ops->elevator_deactivate_req_fn(q, rq);
+	if (q->elv_ops.elevator_deactivate_req_fn)
+		q->elv_ops.elevator_deactivate_req_fn(q, rq);
 }
 
 static inline void __elv_rqhash_del(struct request *rq)
@@ -495,7 +491,6 @@ EXPORT_SYMBOL(elv_dispatch_add_tail);
 
 int elv_merge(struct request_queue *q, struct request **req, struct bio *bio)
 {
-	struct elevator_queue *e = q->elevator;
 	struct request *__rq;
 	int ret;
 
@@ -522,18 +517,16 @@ int elv_merge(struct request_queue *q, struct request **req, struct bio *bio)
 		return ELEVATOR_BACK_MERGE;
 	}
 
-	if (e->ops->elevator_merge_fn)
-		return e->ops->elevator_merge_fn(q, req, bio);
+	if (q->elv_ops.elevator_merge_fn)
+		return q->elv_ops.elevator_merge_fn(q, req, bio);
 
 	return ELEVATOR_NO_MERGE;
 }
 
 void elv_merged_request(struct request_queue *q, struct request *rq, int type)
 {
-	struct elevator_queue *e = q->elevator;
-
-	if (e->ops->elevator_merged_fn)
-		e->ops->elevator_merged_fn(q, rq, type);
+	if (q->elv_ops.elevator_merged_fn)
+		q->elv_ops.elevator_merged_fn(q, rq, type);
 
 	if (type == ELEVATOR_BACK_MERGE)
 		elv_rqhash_reposition(q, rq);
@@ -544,10 +537,8 @@ void elv_merged_request(struct request_queue *q, struct request *rq, int type)
 void elv_merge_requests(struct request_queue *q, struct request *rq,
 			     struct request *next)
 {
-	struct elevator_queue *e = q->elevator;
-
-	if (e->ops->elevator_merge_req_fn)
-		e->ops->elevator_merge_req_fn(q, rq, next);
+	if (q->elv_ops.elevator_merge_req_fn)
+		q->elv_ops.elevator_merge_req_fn(q, rq, next);
 
 	elv_rqhash_reposition(q, rq);
 	elv_rqhash_del(q, next);
@@ -576,8 +567,10 @@ void elv_requeue_request(struct request_queue *q, struct request *rq)
 void elv_drain_elevator(struct request_queue *q)
 {
 	static int printed;
-	while (q->elevator->ops->elevator_dispatch_fn(q, 1))
+
+	while (q->elv_ops.elevator_dispatch_fn(q, 1))
 		;
+
 	if (q->nr_sorted == 0)
 		return;
 	if (printed++ < 10) {
@@ -662,7 +655,7 @@ void elv_insert(struct request_queue *q, struct request *rq, int where)
 		 * rq cannot be accessed after calling
 		 * elevator_add_req_fn.
 		 */
-		q->elevator->ops->elevator_add_req_fn(q, rq);
+		q->elv_ops.elevator_add_req_fn(q, rq);
 		break;
 
 	case ELEVATOR_INSERT_REQUEUE:
@@ -770,7 +763,7 @@ static inline struct request *__elv_next_request(struct request_queue *q)
 				return rq;
 		}
 
-		if (!q->elevator->ops->elevator_dispatch_fn(q, 0))
+		if (!q->elv_ops.elevator_dispatch_fn(q, 0))
 			return NULL;
 	}
 }
@@ -872,13 +865,11 @@ void elv_dequeue_request(struct request_queue *q, struct request *rq)
 
 int elv_queue_empty(struct request_queue *q)
 {
-	struct elevator_queue *e = q->elevator;
-
 	if (!list_empty(&q->queue_head))
 		return 0;
 
-	if (e->ops->elevator_queue_empty_fn)
-		return e->ops->elevator_queue_empty_fn(q);
+	if (q->elv_ops.elevator_queue_empty_fn)
+		return q->elv_ops.elevator_queue_empty_fn(q);
 
 	return 1;
 }
@@ -886,28 +877,24 @@ EXPORT_SYMBOL(elv_queue_empty);
 
 struct request *elv_latter_request(struct request_queue *q, struct request *rq)
 {
-	struct elevator_queue *e = q->elevator;
+	if (q->elv_ops.elevator_latter_req_fn)
+		return q->elv_ops.elevator_latter_req_fn(q, rq);
 
-	if (e->ops->elevator_latter_req_fn)
-		return e->ops->elevator_latter_req_fn(q, rq);
 	return NULL;
 }
 
 struct request *elv_former_request(struct request_queue *q, struct request *rq)
 {
-	struct elevator_queue *e = q->elevator;
+	if (q->elv_ops.elevator_former_req_fn)
+		return q->elv_ops.elevator_former_req_fn(q, rq);
 
-	if (e->ops->elevator_former_req_fn)
-		return e->ops->elevator_former_req_fn(q, rq);
 	return NULL;
 }
 
 int elv_set_request(struct request_queue *q, struct request *rq, gfp_t gfp_mask)
 {
-	struct elevator_queue *e = q->elevator;
-
-	if (e->ops->elevator_set_req_fn)
-		return e->ops->elevator_set_req_fn(q, rq, gfp_mask);
+	if (q->elv_ops.elevator_set_req_fn)
+		return q->elv_ops.elevator_set_req_fn(q, rq, gfp_mask);
 
 	rq->elevator_private = NULL;
 	return 0;
@@ -915,18 +902,14 @@ int elv_set_request(struct request_queue *q, struct request *rq, gfp_t gfp_mask)
 
 void elv_put_request(struct request_queue *q, struct request *rq)
 {
-	struct elevator_queue *e = q->elevator;
-
-	if (e->ops->elevator_put_req_fn)
-		e->ops->elevator_put_req_fn(rq);
+	if (q->elv_ops.elevator_put_req_fn)
+		q->elv_ops.elevator_put_req_fn(rq);
 }
 
 int elv_may_queue(struct request_queue *q, int rw)
 {
-	struct elevator_queue *e = q->elevator;
-
-	if (e->ops->elevator_may_queue_fn)
-		return e->ops->elevator_may_queue_fn(q, rw);
+	if (q->elv_ops.elevator_may_queue_fn)
+		return q->elv_ops.elevator_may_queue_fn(q, rw);
 
 	return ELV_MQUEUE_MAY;
 }
@@ -946,15 +929,13 @@ EXPORT_SYMBOL(elv_abort_queue);
 
 void elv_completed_request(struct request_queue *q, struct request *rq)
 {
-	struct elevator_queue *e = q->elevator;
-
 	/*
 	 * request is released from the driver, io must be done
 	 */
 	if (blk_account_rq(rq)) {
 		q->in_flight--;
-		if (blk_sorted_rq(rq) && e->ops->elevator_completed_req_fn)
-			e->ops->elevator_completed_req_fn(q, rq);
+		if (blk_sorted_rq(rq) && q->elv_ops.elevator_completed_req_fn)
+			q->elv_ops.elevator_completed_req_fn(q, rq);
 	}
 
 	/*
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index c00f050..4d6db9f 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -326,6 +326,7 @@ struct request_queue
 	struct list_head	queue_head;
 	struct request		*last_merge;
 	struct elevator_queue	*elevator;
+	struct elevator_ops	elv_ops;
 
 	/*
 	 * the queue request freelist, one for reads and one for writes
-- 
1.6.3.rc0.1.gf800


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [PATCH 06/12] writeback: separate the flushing state/task from the bdi
  2009-05-25  7:30 [PATCH 0/12] Per-bdi writeback flusher threads #5 Jens Axboe
                   ` (10 preceding siblings ...)
  2009-05-25  7:30 ` [PATCH 06/13] block: move elevator ops into the queue Jens Axboe
@ 2009-05-25  7:30 ` Jens Axboe
  2009-05-25  7:30 ` [PATCH 07/13] block: avoid indirect calls to enter cfq io scheduler Jens Axboe
                   ` (13 subsequent siblings)
  25 siblings, 0 replies; 61+ messages in thread
From: Jens Axboe @ 2009-05-25  7:30 UTC (permalink / raw)
  To: linux-kernel, linux-fsdevel
  Cc: chris.mason, david, hch, akpm, jack, yanmin_zhang, Jens Axboe

Add a struct bdi_writeback for tracking and handling dirty IO. This
is in preparation for adding > 1 flusher task per bdi.

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
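A minimal sketch, not code from the patch, of the indirection this change
sets up: dirty-inode handling goes through one lookup helper
(inode_get_wb() in fs/fs-writeback.c below) that today returns the single
bdi_writeback embedded in the bdi and could later pick one of several. The
multi-flusher variant in the trailing comment is purely hypothetical.

	static inline struct bdi_writeback *inode_get_wb(struct inode *inode)
	{
		struct backing_dev_info *bdi = inode_to_bdi(inode);

		/* one flusher per bdi for now */
		return &bdi->wb;

		/*
		 * With N flushers per bdi this could become something like
		 * (hypothetical):
		 *
		 *	return &bdi->wb[hash_ptr(inode, bdi->wb_shift)];
		 */
	}
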
 fs/fs-writeback.c           |  145 ++++++++++++++++++++++++---------------
 include/linux/backing-dev.h |   40 ++++++-----
 mm/backing-dev.c            |  161 ++++++++++++++++++++++++++++++++-----------
 3 files changed, 233 insertions(+), 113 deletions(-)

diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index ca4d9da..7a9f0b0 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -46,9 +46,11 @@ int nr_pdflush_threads;
  * unless they implement their own.  Which is somewhat inefficient, as this
  * may prevent concurrent writeback against multiple devices.
  */
-static int writeback_acquire(struct backing_dev_info *bdi)
+static int writeback_acquire(struct bdi_writeback *wb)
 {
-	return !test_and_set_bit(BDI_pdflush, &bdi->state);
+	struct backing_dev_info *bdi = wb->bdi;
+
+	return !test_and_set_bit(wb->nr, &bdi->wb_active);
 }
 
 /**
@@ -59,19 +61,40 @@ static int writeback_acquire(struct backing_dev_info *bdi)
  */
 int writeback_in_progress(struct backing_dev_info *bdi)
 {
-	return test_bit(BDI_pdflush, &bdi->state);
+	return bdi->wb_active != 0;
 }
 
 /**
  * writeback_release - relinquish exclusive writeback access against a device.
  * @bdi: the device's backing_dev_info structure
  */
-static void writeback_release(struct backing_dev_info *bdi)
+static void writeback_release(struct bdi_writeback *wb)
 {
-	WARN_ON_ONCE(!writeback_in_progress(bdi));
-	bdi->wb_arg.nr_pages = 0;
-	bdi->wb_arg.sb = NULL;
-	clear_bit(BDI_pdflush, &bdi->state);
+	struct backing_dev_info *bdi = wb->bdi;
+
+	wb->nr_pages = 0;
+	wb->sb = NULL;
+	clear_bit(wb->nr, &bdi->wb_active);
+}
+
+static void wb_start_writeback(struct bdi_writeback *wb, struct super_block *sb,
+			       long nr_pages,
+			       enum writeback_sync_modes sync_mode)
+{
+	if (!wb_has_dirty_io(wb))
+		return;
+
+	if (writeback_acquire(wb)) {
+		wb->nr_pages = nr_pages;
+		wb->sb = sb;
+		wb->sync_mode = sync_mode;
+
+		/*
+		 * make above store seen before the task is woken
+		 */
+		smp_mb();
+		wake_up(&wb->wait);
+	}
 }
 
 int bdi_start_writeback(struct backing_dev_info *bdi, struct super_block *sb,
@@ -81,22 +104,12 @@ int bdi_start_writeback(struct backing_dev_info *bdi, struct super_block *sb,
 	 * This only happens the first time someone kicks this bdi, so put
 	 * it out-of-line.
 	 */
-	if (unlikely(!bdi->task)) {
+	if (unlikely(!bdi->wb.task)) {
 		bdi_add_default_flusher_task(bdi);
 		return 1;
 	}
 
-	if (writeback_acquire(bdi)) {
-		bdi->wb_arg.nr_pages = nr_pages;
-		bdi->wb_arg.sb = sb;
-		bdi->wb_arg.sync_mode = sync_mode;
-		/*
-		 * make above store seen before the task is woken
-		 */
-		smp_mb();
-		wake_up(&bdi->wait);
-	}
-
+	wb_start_writeback(&bdi->wb, sb, nr_pages, sync_mode);
 	return 0;
 }
 
@@ -124,12 +137,12 @@ int bdi_start_writeback(struct backing_dev_info *bdi, struct super_block *sb,
  * older_than_this takes precedence over nr_to_write.  So we'll only write back
  * all dirty pages if they are all attached to "old" mappings.
  */
-static void bdi_kupdated(struct backing_dev_info *bdi)
+static void wb_kupdated(struct bdi_writeback *wb)
 {
 	unsigned long oldest_jif;
 	long nr_to_write;
 	struct writeback_control wbc = {
-		.bdi			= bdi,
+		.bdi			= wb->bdi,
 		.sync_mode		= WB_SYNC_NONE,
 		.older_than_this	= &oldest_jif,
 		.nr_to_write		= 0,
@@ -166,15 +179,19 @@ static inline bool over_bground_thresh(void)
 		global_page_state(NR_UNSTABLE_NFS) >= background_thresh);
 }
 
-static void bdi_pdflush(struct backing_dev_info *bdi)
+static void generic_sync_wb_inodes(struct bdi_writeback *wb,
+				   struct super_block *sb,
+				   struct writeback_control *wbc);
+
+static void wb_writeback(struct bdi_writeback *wb)
 {
 	struct writeback_control wbc = {
-		.bdi			= bdi,
-		.sync_mode		= bdi->wb_arg.sync_mode,
+		.bdi			= wb->bdi,
+		.sync_mode		= wb->sync_mode,
 		.older_than_this	= NULL,
 		.range_cyclic		= 1,
 	};
-	long nr_pages = bdi->wb_arg.nr_pages;
+	long nr_pages = wb->nr_pages;
 
 	for (;;) {
 		if (wbc.sync_mode == WB_SYNC_NONE && nr_pages <= 0 &&
@@ -185,7 +202,7 @@ static void bdi_pdflush(struct backing_dev_info *bdi)
 		wbc.encountered_congestion = 0;
 		wbc.nr_to_write = MAX_WRITEBACK_PAGES;
 		wbc.pages_skipped = 0;
-		generic_sync_bdi_inodes(bdi->wb_arg.sb, &wbc);
+		generic_sync_wb_inodes(wb, wb->sb, &wbc);
 		nr_pages -= MAX_WRITEBACK_PAGES - wbc.nr_to_write;
 		/*
 		 * If we ran out of stuff to write, bail unless more_io got set
@@ -202,13 +219,13 @@ static void bdi_pdflush(struct backing_dev_info *bdi)
  * Handle writeback of dirty data for the device backed by this bdi. Also
  * wakes up periodically and does kupdated style flushing.
  */
-int bdi_writeback_task(struct backing_dev_info *bdi)
+int bdi_writeback_task(struct bdi_writeback *wb)
 {
 	while (!kthread_should_stop()) {
 		unsigned long wait_jiffies;
 		DEFINE_WAIT(wait);
 
-		prepare_to_wait(&bdi->wait, &wait, TASK_INTERRUPTIBLE);
+		prepare_to_wait(&wb->wait, &wait, TASK_INTERRUPTIBLE);
 		wait_jiffies = msecs_to_jiffies(dirty_writeback_interval * 10);
 		schedule_timeout(wait_jiffies);
 		try_to_freeze();
@@ -227,13 +244,13 @@ int bdi_writeback_task(struct backing_dev_info *bdi)
 		 *  pdflush style writeout.
 		 *
 		 */
-		if (writeback_acquire(bdi))
-			bdi_kupdated(bdi);
+		if (writeback_acquire(wb))
+			wb_kupdated(wb);
 		else
-			bdi_pdflush(bdi);
+			wb_writeback(wb);
 
-		writeback_release(bdi);
-		finish_wait(&bdi->wait, &wait);
+		writeback_release(wb);
+		finish_wait(&wb->wait, &wait);
 	}
 
 	return 0;
@@ -255,6 +272,14 @@ void bdi_writeback_all(struct super_block *sb, long nr_pages,
 	mutex_unlock(&bdi_lock);
 }
 
+/*
+ * We have only a single wb per bdi, so just return that.
+ */
+static inline struct bdi_writeback *inode_get_wb(struct inode *inode)
+{
+	return &inode_to_bdi(inode)->wb;
+}
+
 /**
  *	__mark_inode_dirty -	internal function
  *	@inode: inode to mark
@@ -353,9 +378,10 @@ void __mark_inode_dirty(struct inode *inode, int flags)
 		 * reposition it (that would break b_dirty time-ordering).
 		 */
 		if (!was_dirty) {
+			struct bdi_writeback *wb = inode_get_wb(inode);
+
 			inode->dirtied_when = jiffies;
-			list_move(&inode->i_list,
-					&inode_to_bdi(inode)->b_dirty);
+			list_move(&inode->i_list, &wb->b_dirty);
 		}
 	}
 out:
@@ -382,16 +408,16 @@ static int write_inode(struct inode *inode, int sync)
  */
 static void redirty_tail(struct inode *inode)
 {
-	struct backing_dev_info *bdi = inode_to_bdi(inode);
+	struct bdi_writeback *wb = inode_get_wb(inode);
 
-	if (!list_empty(&bdi->b_dirty)) {
+	if (!list_empty(&wb->b_dirty)) {
 		struct inode *tail;
 
-		tail = list_entry(bdi->b_dirty.next, struct inode, i_list);
+		tail = list_entry(wb->b_dirty.next, struct inode, i_list);
 		if (time_before(inode->dirtied_when, tail->dirtied_when))
 			inode->dirtied_when = jiffies;
 	}
-	list_move(&inode->i_list, &bdi->b_dirty);
+	list_move(&inode->i_list, &wb->b_dirty);
 }
 
 /*
@@ -399,7 +425,9 @@ static void redirty_tail(struct inode *inode)
  */
 static void requeue_io(struct inode *inode)
 {
-	list_move(&inode->i_list, &inode_to_bdi(inode)->b_more_io);
+	struct bdi_writeback *wb = inode_get_wb(inode);
+
+	list_move(&inode->i_list, &wb->b_more_io);
 }
 
 static void inode_sync_complete(struct inode *inode)
@@ -446,11 +474,10 @@ static void move_expired_inodes(struct list_head *delaying_queue,
 /*
  * Queue all expired dirty inodes for io, eldest first.
  */
-static void queue_io(struct backing_dev_info *bdi,
-		     unsigned long *older_than_this)
+static void queue_io(struct bdi_writeback *wb, unsigned long *older_than_this)
 {
-	list_splice_init(&bdi->b_more_io, bdi->b_io.prev);
-	move_expired_inodes(&bdi->b_dirty, &bdi->b_io, older_than_this);
+	list_splice_init(&wb->b_more_io, wb->b_io.prev);
+	move_expired_inodes(&wb->b_dirty, &wb->b_io, older_than_this);
 }
 
 /*
@@ -611,20 +638,20 @@ __writeback_single_inode(struct inode *inode, struct writeback_control *wbc)
 	return __sync_single_inode(inode, wbc);
 }
 
-void generic_sync_bdi_inodes(struct super_block *sb,
-			     struct writeback_control *wbc)
+static void generic_sync_wb_inodes(struct bdi_writeback *wb,
+				   struct super_block *sb,
+				   struct writeback_control *wbc)
 {
 	const int is_blkdev_sb = sb_is_blkdev_sb(sb);
-	struct backing_dev_info *bdi = wbc->bdi;
 	const unsigned long start = jiffies;	/* livelock avoidance */
 
 	spin_lock(&inode_lock);
 
-	if (!wbc->for_kupdate || list_empty(&bdi->b_io))
-		queue_io(bdi, wbc->older_than_this);
+	if (!wbc->for_kupdate || list_empty(&wb->b_io))
+		queue_io(wb, wbc->older_than_this);
 
-	while (!list_empty(&bdi->b_io)) {
-		struct inode *inode = list_entry(bdi->b_io.prev,
+	while (!list_empty(&wb->b_io)) {
+		struct inode *inode = list_entry(wb->b_io.prev,
 						struct inode, i_list);
 		long pages_skipped;
 
@@ -636,7 +663,7 @@ void generic_sync_bdi_inodes(struct super_block *sb,
 			continue;
 		}
 
-		if (!bdi_cap_writeback_dirty(bdi)) {
+		if (!bdi_cap_writeback_dirty(wb->bdi)) {
 			redirty_tail(inode);
 			if (is_blkdev_sb) {
 				/*
@@ -658,7 +685,7 @@ void generic_sync_bdi_inodes(struct super_block *sb,
 			continue;
 		}
 
-		if (wbc->nonblocking && bdi_write_congested(bdi)) {
+		if (wbc->nonblocking && bdi_write_congested(wb->bdi)) {
 			wbc->encountered_congestion = 1;
 			if (!is_blkdev_sb)
 				break;		/* Skip a congested fs */
@@ -692,7 +719,7 @@ void generic_sync_bdi_inodes(struct super_block *sb,
 			wbc->more_io = 1;
 			break;
 		}
-		if (!list_empty(&bdi->b_more_io))
+		if (!list_empty(&wb->b_more_io))
 			wbc->more_io = 1;
 	}
 
@@ -700,6 +727,14 @@ void generic_sync_bdi_inodes(struct super_block *sb,
 	/* Leave any unwritten inodes on b_io */
 }
 
+void generic_sync_bdi_inodes(struct super_block *sb,
+			     struct writeback_control *wbc)
+{
+	struct backing_dev_info *bdi = wbc->bdi;
+
+	generic_sync_wb_inodes(&bdi->wb, sb, wbc);
+}
+
 /*
  * Write out a superblock's list of dirty inodes.  A wait will be performed
  * upon no inodes, all inodes or the final one, depending upon sync_mode.
diff --git a/include/linux/backing-dev.h b/include/linux/backing-dev.h
index f164925..77dc62c 100644
--- a/include/linux/backing-dev.h
+++ b/include/linux/backing-dev.h
@@ -24,8 +24,8 @@ struct dentry;
  * Bits in backing_dev_info.state
  */
 enum bdi_state {
-	BDI_pdflush,		/* A pdflush thread is working this device */
 	BDI_pending,		/* On its way to being activated */
+	BDI_wb_alloc,		/* Default embedded wb allocated */
 	BDI_async_congested,	/* The async (write) queue is getting full */
 	BDI_sync_congested,	/* The sync queue is getting full */
 	BDI_unused,		/* Available bits start here */
@@ -41,15 +41,23 @@ enum bdi_stat_item {
 
 #define BDI_STAT_BATCH (8*(1+ilog2(nr_cpu_ids)))
 
-struct bdi_writeback_arg {
-	unsigned long nr_pages;
-	struct super_block *sb;
+struct bdi_writeback {
+	struct backing_dev_info *bdi;		/* our parent bdi */
+	unsigned int nr;
+
+	struct task_struct	*task;		/* writeback task */
+	wait_queue_head_t	wait;
+	struct list_head	b_dirty;	/* dirty inodes */
+	struct list_head	b_io;		/* parked for writeback */
+	struct list_head	b_more_io;	/* parked for more writeback */
+
+	unsigned long		nr_pages;
+	struct super_block	*sb;
 	enum writeback_sync_modes sync_mode;
 };
 
 struct backing_dev_info {
 	struct list_head bdi_list;
-
 	unsigned long ra_pages;	/* max readahead in PAGE_CACHE_SIZE units */
 	unsigned long state;	/* Always use atomic bitops on this */
 	unsigned int capabilities; /* Device capabilities */
@@ -66,14 +74,11 @@ struct backing_dev_info {
 	unsigned int min_ratio;
 	unsigned int max_ratio, max_prop_frac;
 
-	struct device *dev;
+	struct bdi_writeback wb;  /* default writeback info for this bdi */
+	unsigned long wb_active;  /* bitmap of active tasks */
+	unsigned long wb_mask;	  /* number of registered tasks */
 
-	struct task_struct	*task;		/* writeback task */
-	wait_queue_head_t	wait;
-	struct bdi_writeback_arg wb_arg;	/* protected by BDI_pdflush */
-	struct list_head	b_dirty;	/* dirty inodes */
-	struct list_head	b_io;		/* parked for writeback */
-	struct list_head	b_more_io;	/* parked for more writeback */
+	struct device *dev;
 
 #ifdef CONFIG_DEBUG_FS
 	struct dentry *debug_dir;
@@ -90,19 +95,20 @@ int bdi_register_dev(struct backing_dev_info *bdi, dev_t dev);
 void bdi_unregister(struct backing_dev_info *bdi);
 int bdi_start_writeback(struct backing_dev_info *bdi, struct super_block *sb,
 			 long nr_pages, enum writeback_sync_modes sync_mode);
-int bdi_writeback_task(struct backing_dev_info *bdi);
+int bdi_writeback_task(struct bdi_writeback *wb);
 void bdi_writeback_all(struct super_block *sb, long nr_pages,
 			enum writeback_sync_modes sync_mode);
 void bdi_add_default_flusher_task(struct backing_dev_info *bdi);
+int bdi_has_dirty_io(struct backing_dev_info *bdi);
 
 extern struct mutex bdi_lock;
 extern struct list_head bdi_list;
 
-static inline int bdi_has_dirty_io(struct backing_dev_info *bdi)
+static inline int wb_has_dirty_io(struct bdi_writeback *wb)
 {
-	return !list_empty(&bdi->b_dirty) ||
-	       !list_empty(&bdi->b_io) ||
-	       !list_empty(&bdi->b_more_io);
+	return !list_empty(&wb->b_dirty) ||
+	       !list_empty(&wb->b_io) ||
+	       !list_empty(&wb->b_more_io);
 }
 
 static inline void __add_bdi_stat(struct backing_dev_info *bdi,
diff --git a/mm/backing-dev.c b/mm/backing-dev.c
index 57c8487..df90b0e 100644
--- a/mm/backing-dev.c
+++ b/mm/backing-dev.c
@@ -199,17 +199,59 @@ static int __init default_bdi_init(void)
 }
 subsys_initcall(default_bdi_init);
 
+static void bdi_wb_init(struct bdi_writeback *wb, struct backing_dev_info *bdi)
+{
+	memset(wb, 0, sizeof(*wb));
+
+	wb->bdi = bdi;
+	init_waitqueue_head(&wb->wait);
+	INIT_LIST_HEAD(&wb->b_dirty);
+	INIT_LIST_HEAD(&wb->b_io);
+	INIT_LIST_HEAD(&wb->b_more_io);
+}
+
+static void bdi_flush_io(struct backing_dev_info *bdi)
+{
+	struct writeback_control wbc = {
+		.bdi			= bdi,
+		.sync_mode		= WB_SYNC_NONE,
+		.older_than_this	= NULL,
+		.range_cyclic		= 1,
+		.nr_to_write		= 1024,
+	};
+
+	generic_sync_bdi_inodes(NULL, &wbc);
+}
+
+static int wb_assign_nr(struct backing_dev_info *bdi, struct bdi_writeback *wb)
+{
+	set_bit(0, &bdi->wb_mask);
+	wb->nr = 0;
+	return 0;
+}
+
+static void bdi_put_wb(struct backing_dev_info *bdi, struct bdi_writeback *wb)
+{
+	clear_bit(wb->nr, &bdi->wb_mask);
+	clear_bit(BDI_wb_alloc, &bdi->state);
+}
+
+static struct bdi_writeback *bdi_new_wb(struct backing_dev_info *bdi)
+{
+	struct bdi_writeback *wb;
+
+	set_bit(BDI_wb_alloc, &bdi->state);
+	wb = &bdi->wb;
+	wb_assign_nr(bdi, wb);
+	return wb;
+}
+
 static int bdi_start_fn(void *ptr)
 {
-	struct backing_dev_info *bdi = ptr;
+	struct bdi_writeback *wb = ptr;
+	struct backing_dev_info *bdi = wb->bdi;
 	struct task_struct *tsk = current;
-
-	/*
-	 * Add us to the active bdi_list
-	 */
-	mutex_lock(&bdi_lock);
-	list_add(&bdi->bdi_list, &bdi_list);
-	mutex_unlock(&bdi_lock);
+	int ret;
 
 	tsk->flags |= PF_FLUSHER | PF_SWAPWRITE;
 	set_freezable();
@@ -226,21 +268,33 @@ static int bdi_start_fn(void *ptr)
 	smp_mb__after_clear_bit();
 	wake_up_bit(&bdi->state, BDI_pending);
 
-	return bdi_writeback_task(bdi);
+	ret = bdi_writeback_task(wb);
+
+	bdi_put_wb(bdi, wb);
+	return ret;
+}
+
+int bdi_has_dirty_io(struct backing_dev_info *bdi)
+{
+	return wb_has_dirty_io(&bdi->wb);
 }
 
 static int bdi_forker_task(void *ptr)
 {
-	struct backing_dev_info *me = ptr;
+	struct bdi_writeback *me = ptr;
 	DEFINE_WAIT(wait);
 
 	for (;;) {
 		struct backing_dev_info *bdi, *tmp;
+		struct bdi_writeback *wb;
 
 		/*
 		 * Should never trigger on the default bdi
 		 */
-		WARN_ON(bdi_has_dirty_io(me));
+		if (wb_has_dirty_io(me)) {
+			bdi_flush_io(me->bdi);
+			WARN_ON(1);
+		}
 
 		prepare_to_wait(&me->wait, &wait, TASK_INTERRUPTIBLE);
 
@@ -251,7 +305,7 @@ static int bdi_forker_task(void *ptr)
 		 * a thread registered. If so, set that up.
 		 */
 		list_for_each_entry_safe(bdi, tmp, &bdi_list, bdi_list) {
-			if (bdi->task || !bdi_has_dirty_io(bdi))
+			if (bdi->wb.task || !bdi_has_dirty_io(bdi))
 				continue;
 
 			bdi_add_default_flusher_task(bdi);
@@ -272,24 +326,22 @@ static int bdi_forker_task(void *ptr)
 		list_del_init(&bdi->bdi_list);
 		mutex_unlock(&bdi_lock);
 
-		BUG_ON(bdi->task);
+		wb = bdi_new_wb(bdi);
+		if (!wb)
+			goto readd_flush;
 
-		bdi->task = kthread_run(bdi_start_fn, bdi, "bdi-%s",
+		wb->task = kthread_run(bdi_start_fn, wb, "bdi-%s",
 					dev_name(bdi->dev));
+
 		/*
 		 * If task creation fails, then readd the bdi to
 		 * the pending list and force writeout of the bdi
 		 * from this forker thread. That will free some memory
 		 * and we can try again.
 		 */
-		if (!bdi->task) {
-			struct writeback_control wbc = {
-				.bdi			= bdi,
-				.sync_mode		= WB_SYNC_NONE,
-				.older_than_this	= NULL,
-				.range_cyclic		= 1,
-			};
-
+		if (!wb->task) {
+			bdi_put_wb(bdi, wb);
+readd_flush:
 			/*
 			 * Add this 'bdi' to the back, so we get
 			 * a chance to flush other bdi's to free
@@ -299,8 +351,7 @@ static int bdi_forker_task(void *ptr)
 			list_add_tail(&bdi->bdi_list, &bdi_pending_list);
 			mutex_unlock(&bdi_lock);
 
-			wbc.nr_to_write = 1024;
-			generic_sync_bdi_inodes(NULL, &wbc);
+			bdi_flush_io(bdi);
 		}
 	}
 
@@ -308,8 +359,18 @@ static int bdi_forker_task(void *ptr)
 	return 0;
 }
 
+/*
+ * Add a new flusher task that gets created for any bdi
+ * that has dirty data pending writeout
+ */
 void bdi_add_default_flusher_task(struct backing_dev_info *bdi)
 {
+	if (!bdi_cap_writeback_dirty(bdi))
+		return;
+
+	/*
+	 * Someone already marked this pending for task creation
+	 */
 	if (test_and_set_bit(BDI_pending, &bdi->state))
 		return;
 
@@ -317,7 +378,7 @@ void bdi_add_default_flusher_task(struct backing_dev_info *bdi)
 	list_move_tail(&bdi->bdi_list, &bdi_pending_list);
 	mutex_unlock(&bdi_lock);
 
-	wake_up(&default_backing_dev_info.wait);
+	wake_up(&default_backing_dev_info.wb.wait);
 }
 
 int bdi_register(struct backing_dev_info *bdi, struct device *parent,
@@ -350,13 +411,23 @@ int bdi_register(struct backing_dev_info *bdi, struct device *parent,
 	 * on-demand when they need it.
 	 */
 	if (bdi_cap_flush_forker(bdi)) {
-		bdi->task = kthread_run(bdi_forker_task, bdi, "bdi-%s",
+		struct bdi_writeback *wb;
+
+		wb = bdi_new_wb(bdi);
+		if (!wb) {
+			ret = -ENOMEM;
+			goto remove_err;
+		}
+
+		wb->task = kthread_run(bdi_forker_task, wb, "bdi-%s",
 						dev_name(dev));
-		if (!bdi->task) {
+		if (!wb->task) {
+			bdi_put_wb(bdi, wb);
+			ret = -ENOMEM;
+remove_err:
 			mutex_lock(&bdi_lock);
 			list_del(&bdi->bdi_list);
 			mutex_unlock(&bdi_lock);
-			ret = -ENOMEM;
 			goto exit;
 		}
 	}
@@ -379,28 +450,39 @@ static int sched_wait(void *word)
 	return 0;
 }
 
+/*
+ * Remove bdi from global list and shutdown any threads we have running
+ */
 static void bdi_wb_shutdown(struct backing_dev_info *bdi)
 {
+	if (!bdi_cap_writeback_dirty(bdi))
+		return;
+
 	/*
 	 * If setup is pending, wait for that to complete first
+	 * Make sure nobody finds us on the bdi_list anymore
 	 */
 	wait_on_bit(&bdi->state, BDI_pending, sched_wait, TASK_UNINTERRUPTIBLE);
 
+	/*
+	 * Make sure nobody finds us on the bdi_list anymore
+	 */
 	mutex_lock(&bdi_lock);
 	list_del(&bdi->bdi_list);
 	mutex_unlock(&bdi_lock);
+
+	/*
+	 * Finally, kill the kernel thread
+	 */
+	kthread_stop(bdi->wb.task);
 }
 
 void bdi_unregister(struct backing_dev_info *bdi)
 {
 	if (bdi->dev) {
-		if (!bdi_cap_flush_forker(bdi)) {
+		if (!bdi_cap_flush_forker(bdi))
 			bdi_wb_shutdown(bdi);
-			if (bdi->task) {
-				kthread_stop(bdi->task);
-				bdi->task = NULL;
-			}
-		}
+
 		bdi_debug_unregister(bdi);
 		device_unregister(bdi->dev);
 		bdi->dev = NULL;
@@ -417,11 +499,10 @@ int bdi_init(struct backing_dev_info *bdi)
 	bdi->min_ratio = 0;
 	bdi->max_ratio = 100;
 	bdi->max_prop_frac = PROP_FRAC_BASE;
-	init_waitqueue_head(&bdi->wait);
 	INIT_LIST_HEAD(&bdi->bdi_list);
-	INIT_LIST_HEAD(&bdi->b_io);
-	INIT_LIST_HEAD(&bdi->b_dirty);
-	INIT_LIST_HEAD(&bdi->b_more_io);
+	bdi->wb_mask = bdi->wb_active = 0;
+
+	bdi_wb_init(&bdi->wb, bdi);
 
 	for (i = 0; i < NR_BDI_STAT_ITEMS; i++) {
 		err = percpu_counter_init(&bdi->bdi_stat[i], 0);
@@ -446,9 +527,7 @@ void bdi_destroy(struct backing_dev_info *bdi)
 {
 	int i;
 
-	WARN_ON(!list_empty(&bdi->b_dirty));
-	WARN_ON(!list_empty(&bdi->b_io));
-	WARN_ON(!list_empty(&bdi->b_more_io));
+	WARN_ON(bdi_has_dirty_io(bdi));
 
 	bdi_unregister(bdi);
 
-- 
1.6.3.rc0.1.gf800


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [PATCH 07/13] block: avoid indirect calls to enter cfq io scheduler
  2009-05-25  7:30 [PATCH 0/12] Per-bdi writeback flusher threads #5 Jens Axboe
                   ` (11 preceding siblings ...)
  2009-05-25  7:30 ` [PATCH 06/12] writeback: separate the flushing state/task from the bdi Jens Axboe
@ 2009-05-25  7:30 ` Jens Axboe
  2009-05-26  9:02     ` Nikanth K
  2009-05-25  7:30 ` [PATCH 07/12] writeback: support > 1 flusher thread per bdi Jens Axboe
                   ` (12 subsequent siblings)
  25 siblings, 1 reply; 61+ messages in thread
From: Jens Axboe @ 2009-05-25  7:30 UTC (permalink / raw)
  To: linux-kernel, linux-fsdevel
  Cc: chris.mason, david, hch, akpm, jack, yanmin_zhang, Jens Axboe

Indirect calls can be expensive, since CPUs generally do not branch-predict
them well.
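
To illustrate the pattern the new elv_call_*() wrappers rely on (compare the
stored function pointer against the known built-in implementation and, on a
match, call it directly so the compiler can emit a direct, well-predicted
call), here is a minimal stand-alone sketch. The ops structure and the names
in it are invented for the example; it is not kernel code:

#include <stdio.h>

struct ops {
	int (*dispatch)(int force);
};

/* The "built-in" implementation we special-case, analogous to cfq. */
static int builtin_dispatch(int force)
{
	return force ? 1 : 0;
}

/*
 * Turn the common indirect call into a direct call when the pointer
 * matches the built-in implementation. The conditional branch on a
 * known target predicts far better than an indirect call.
 */
static inline int call_dispatch(struct ops *o, int force)
{
	if (o->dispatch == builtin_dispatch)
		return builtin_dispatch(force);	/* direct call */

	return o->dispatch(force);		/* generic indirect call */
}

int main(void)
{
	struct ops o = { .dispatch = builtin_dispatch };

	printf("%d\n", call_dispatch(&o, 1));
	return 0;
}

The wrappers in elevator.h below apply the same idea to each elevator_*_fn,
guarded by CONFIG_IOSCHED_CFQ_BUILTIN so the comparison compiles away when
cfq is modular or disabled.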

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
 block/Kconfig.iosched |    4 +
 block/cfq-iosched.c   |   36 +++++------
 block/cfq-iosched.h   |   23 +++++++
 block/elevator.c      |   33 +++++-----
 block/elevator.h      |  162 +++++++++++++++++++++++++++++++++++++++++++++++++
 5 files changed, 223 insertions(+), 35 deletions(-)
 create mode 100644 block/cfq-iosched.h
 create mode 100644 block/elevator.h

diff --git a/block/Kconfig.iosched b/block/Kconfig.iosched
index 7e803fc..9abb717 100644
--- a/block/Kconfig.iosched
+++ b/block/Kconfig.iosched
@@ -40,6 +40,10 @@ config IOSCHED_CFQ
 	  working environment, suitable for desktop systems.
 	  This is the default I/O scheduler.
 
+config IOSCHED_CFQ_BUILTIN
+	bool
+	default y if IOSCHED_CFQ=y
+
 choice
 	prompt "Default I/O scheduler"
 	default DEFAULT_CFQ
diff --git a/block/cfq-iosched.c b/block/cfq-iosched.c
index a55a9bd..faa006a 100644
--- a/block/cfq-iosched.c
+++ b/block/cfq-iosched.c
@@ -13,6 +13,8 @@
 #include <linux/ioprio.h>
 #include <linux/blktrace_api.h>
 
+#include "cfq-iosched.h"
+
 /*
  * tunables
  */
@@ -271,7 +273,7 @@ static inline void cfq_schedule_dispatch(struct cfq_data *cfqd)
 	}
 }
 
-static int cfq_queue_empty(struct request_queue *q)
+int cfq_queue_empty(struct request_queue *q)
 {
 	struct cfq_data *cfqd = q->elevator->elevator_data;
 
@@ -752,7 +754,7 @@ cfq_find_rq_fmerge(struct cfq_data *cfqd, struct bio *bio)
 	return NULL;
 }
 
-static void cfq_activate_request(struct request_queue *q, struct request *rq)
+void cfq_activate_request(struct request_queue *q, struct request *rq)
 {
 	struct cfq_data *cfqd = q->elevator->elevator_data;
 
@@ -763,7 +765,7 @@ static void cfq_activate_request(struct request_queue *q, struct request *rq)
 	cfqd->last_position = rq->hard_sector + rq->hard_nr_sectors;
 }
 
-static void cfq_deactivate_request(struct request_queue *q, struct request *rq)
+void cfq_deactivate_request(struct request_queue *q, struct request *rq)
 {
 	struct cfq_data *cfqd = q->elevator->elevator_data;
 
@@ -790,8 +792,7 @@ static void cfq_remove_request(struct request *rq)
 	}
 }
 
-static int cfq_merge(struct request_queue *q, struct request **req,
-		     struct bio *bio)
+int cfq_merge(struct request_queue *q, struct request **req, struct bio *bio)
 {
 	struct cfq_data *cfqd = q->elevator->elevator_data;
 	struct request *__rq;
@@ -805,8 +806,7 @@ static int cfq_merge(struct request_queue *q, struct request **req,
 	return ELEVATOR_NO_MERGE;
 }
 
-static void cfq_merged_request(struct request_queue *q, struct request *req,
-			       int type)
+void cfq_merged_request(struct request_queue *q, struct request *req, int type)
 {
 	if (type == ELEVATOR_FRONT_MERGE) {
 		struct cfq_queue *cfqq = RQ_CFQQ(req);
@@ -815,9 +815,8 @@ static void cfq_merged_request(struct request_queue *q, struct request *req,
 	}
 }
 
-static void
-cfq_merged_requests(struct request_queue *q, struct request *rq,
-		    struct request *next)
+void cfq_merged_requests(struct request_queue *q, struct request *rq,
+			 struct request *next)
 {
 	/*
 	 * reposition in fifo if next is older than rq
@@ -829,8 +828,8 @@ cfq_merged_requests(struct request_queue *q, struct request *rq,
 	cfq_remove_request(next);
 }
 
-static int cfq_allow_merge(struct request_queue *q, struct request *rq,
-			   struct bio *bio)
+int cfq_allow_merge(struct request_queue *q, struct request *rq,
+		    struct bio *bio)
 {
 	struct cfq_data *cfqd = q->elevator->elevator_data;
 	struct cfq_io_context *cic;
@@ -1291,7 +1290,7 @@ static void cfq_dispatch_request(struct cfq_data *cfqd, struct cfq_queue *cfqq)
  * Find the cfqq that we need to service and move a request from that to the
  * dispatch list
  */
-static int cfq_dispatch_requests(struct request_queue *q, int force)
+int cfq_dispatch_requests(struct request_queue *q, int force)
 {
 	struct cfq_data *cfqd = q->elevator->elevator_data;
 	struct cfq_queue *cfqq;
@@ -2104,7 +2103,7 @@ cfq_rq_enqueued(struct cfq_data *cfqd, struct cfq_queue *cfqq,
 	}
 }
 
-static void cfq_insert_request(struct request_queue *q, struct request *rq)
+void cfq_insert_request(struct request_queue *q, struct request *rq)
 {
 	struct cfq_data *cfqd = q->elevator->elevator_data;
 	struct cfq_queue *cfqq = RQ_CFQQ(rq);
@@ -2144,7 +2143,7 @@ static void cfq_update_hw_tag(struct cfq_data *cfqd)
 	cfqd->rq_in_driver_peak = 0;
 }
 
-static void cfq_completed_request(struct request_queue *q, struct request *rq)
+void cfq_completed_request(struct request_queue *q, struct request *rq)
 {
 	struct cfq_queue *cfqq = RQ_CFQQ(rq);
 	struct cfq_data *cfqd = cfqq->cfqd;
@@ -2236,7 +2235,7 @@ static inline int __cfq_may_queue(struct cfq_queue *cfqq)
 	return ELV_MQUEUE_MAY;
 }
 
-static int cfq_may_queue(struct request_queue *q, int rw)
+int cfq_may_queue(struct request_queue *q, int rw)
 {
 	struct cfq_data *cfqd = q->elevator->elevator_data;
 	struct task_struct *tsk = current;
@@ -2267,7 +2266,7 @@ static int cfq_may_queue(struct request_queue *q, int rw)
 /*
  * queue lock held here
  */
-static void cfq_put_request(struct request *rq)
+void cfq_put_request(struct request *rq)
 {
 	struct cfq_queue *cfqq = RQ_CFQQ(rq);
 
@@ -2289,8 +2288,7 @@ static void cfq_put_request(struct request *rq)
 /*
  * Allocate cfq data structures associated with this request.
  */
-static int
-cfq_set_request(struct request_queue *q, struct request *rq, gfp_t gfp_mask)
+int cfq_set_request(struct request_queue *q, struct request *rq, gfp_t gfp_mask)
 {
 	struct cfq_data *cfqd = q->elevator->elevator_data;
 	struct cfq_io_context *cic;
diff --git a/block/cfq-iosched.h b/block/cfq-iosched.h
new file mode 100644
index 0000000..71fcd21
--- /dev/null
+++ b/block/cfq-iosched.h
@@ -0,0 +1,23 @@
+#ifndef CFQ_IOSCHED_H
+#define CFQ_IOSCHED_H
+
+struct request_queue;
+struct bio;
+struct request;
+
+int cfq_merge(struct request_queue *, struct request **, struct bio *);
+void cfq_merged_request(struct request_queue *, struct request *, int);
+void cfq_merged_requests(struct request_queue *, struct request *,
+			 struct request *);
+int cfq_allow_merge(struct request_queue *, struct request *, struct bio *);
+int cfq_dispatch_requests(struct request_queue *, int);
+void cfq_insert_request(struct request_queue *, struct request *);
+void cfq_activate_request(struct request_queue *, struct request *);
+void cfq_deactivate_request(struct request_queue *, struct request *);
+int cfq_queue_empty(struct request_queue *);
+void cfq_completed_request(struct request_queue *, struct request *);
+int cfq_set_request(struct request_queue *, struct request *, gfp_t);
+void cfq_put_request(struct request *);
+int cfq_may_queue(struct request_queue *, int);
+
+#endif
diff --git a/block/elevator.c b/block/elevator.c
index fdb0675..c7143fb 100644
--- a/block/elevator.c
+++ b/block/elevator.c
@@ -38,6 +38,7 @@
 #include <linux/uaccess.h>
 
 #include "blk.h"
+#include "elevator.h"
 
 static DEFINE_SPINLOCK(elv_list_lock);
 static LIST_HEAD(elv_list);
@@ -67,7 +68,7 @@ static int elv_iosched_allow_merge(struct request *rq, struct bio *bio)
 	struct request_queue *q = rq->q;
 
 	if (q->elv_ops.elevator_allow_merge_fn)
-		return q->elv_ops.elevator_allow_merge_fn(q, rq, bio);
+		return elv_call_allow_merge_fn(q, rq, bio);
 
 	return 1;
 }
@@ -313,13 +314,13 @@ EXPORT_SYMBOL(elevator_exit);
 static void elv_activate_rq(struct request_queue *q, struct request *rq)
 {
 	if (q->elv_ops.elevator_activate_req_fn)
-		q->elv_ops.elevator_activate_req_fn(q, rq);
+		elv_call_activate_req_fn(q, rq);
 }
 
 static void elv_deactivate_rq(struct request_queue *q, struct request *rq)
 {
 	if (q->elv_ops.elevator_deactivate_req_fn)
-		q->elv_ops.elevator_deactivate_req_fn(q, rq);
+		elv_call_deactivate_req_fn(q, rq);
 }
 
 static inline void __elv_rqhash_del(struct request *rq)
@@ -518,7 +519,7 @@ int elv_merge(struct request_queue *q, struct request **req, struct bio *bio)
 	}
 
 	if (q->elv_ops.elevator_merge_fn)
-		return q->elv_ops.elevator_merge_fn(q, req, bio);
+		return elv_call_merge_fn(q, req, bio);
 
 	return ELEVATOR_NO_MERGE;
 }
@@ -526,7 +527,7 @@ int elv_merge(struct request_queue *q, struct request **req, struct bio *bio)
 void elv_merged_request(struct request_queue *q, struct request *rq, int type)
 {
 	if (q->elv_ops.elevator_merged_fn)
-		q->elv_ops.elevator_merged_fn(q, rq, type);
+		elv_call_merged_fn(q, rq, type);
 
 	if (type == ELEVATOR_BACK_MERGE)
 		elv_rqhash_reposition(q, rq);
@@ -538,7 +539,7 @@ void elv_merge_requests(struct request_queue *q, struct request *rq,
 			     struct request *next)
 {
 	if (q->elv_ops.elevator_merge_req_fn)
-		q->elv_ops.elevator_merge_req_fn(q, rq, next);
+		elv_call_merge_req_fn(q, rq, next);
 
 	elv_rqhash_reposition(q, rq);
 	elv_rqhash_del(q, next);
@@ -568,7 +569,7 @@ void elv_drain_elevator(struct request_queue *q)
 {
 	static int printed;
 
-	while (q->elv_ops.elevator_dispatch_fn(q, 1))
+	while (elv_call_dispatch_fn(q, 1))
 		;
 
 	if (q->nr_sorted == 0)
@@ -655,7 +656,7 @@ void elv_insert(struct request_queue *q, struct request *rq, int where)
 		 * rq cannot be accessed after calling
 		 * elevator_add_req_fn.
 		 */
-		q->elv_ops.elevator_add_req_fn(q, rq);
+		elv_call_add_req_fn(q, rq);
 		break;
 
 	case ELEVATOR_INSERT_REQUEUE:
@@ -763,7 +764,7 @@ static inline struct request *__elv_next_request(struct request_queue *q)
 				return rq;
 		}
 
-		if (!q->elv_ops.elevator_dispatch_fn(q, 0))
+		if (!elv_call_dispatch_fn(q, 0))
 			return NULL;
 	}
 }
@@ -869,7 +870,7 @@ int elv_queue_empty(struct request_queue *q)
 		return 0;
 
 	if (q->elv_ops.elevator_queue_empty_fn)
-		return q->elv_ops.elevator_queue_empty_fn(q);
+		return elv_call_queue_empty_fn(q);
 
 	return 1;
 }
@@ -878,7 +879,7 @@ EXPORT_SYMBOL(elv_queue_empty);
 struct request *elv_latter_request(struct request_queue *q, struct request *rq)
 {
 	if (q->elv_ops.elevator_latter_req_fn)
-		return q->elv_ops.elevator_latter_req_fn(q, rq);
+		return elv_call_latter_req_fn(q, rq);
 
 	return NULL;
 }
@@ -886,7 +887,7 @@ struct request *elv_latter_request(struct request_queue *q, struct request *rq)
 struct request *elv_former_request(struct request_queue *q, struct request *rq)
 {
 	if (q->elv_ops.elevator_former_req_fn)
-		return q->elv_ops.elevator_former_req_fn(q, rq);
+		return elv_call_former_req_fn(q, rq);
 
 	return NULL;
 }
@@ -894,7 +895,7 @@ struct request *elv_former_request(struct request_queue *q, struct request *rq)
 int elv_set_request(struct request_queue *q, struct request *rq, gfp_t gfp_mask)
 {
 	if (q->elv_ops.elevator_set_req_fn)
-		return q->elv_ops.elevator_set_req_fn(q, rq, gfp_mask);
+		return elv_call_set_req_fn(q, rq, gfp_mask);
 
 	rq->elevator_private = NULL;
 	return 0;
@@ -903,13 +904,13 @@ int elv_set_request(struct request_queue *q, struct request *rq, gfp_t gfp_mask)
 void elv_put_request(struct request_queue *q, struct request *rq)
 {
 	if (q->elv_ops.elevator_put_req_fn)
-		q->elv_ops.elevator_put_req_fn(rq);
+		elv_call_put_req_fn(q, rq);
 }
 
 int elv_may_queue(struct request_queue *q, int rw)
 {
 	if (q->elv_ops.elevator_may_queue_fn)
-		return q->elv_ops.elevator_may_queue_fn(q, rw);
+		return elv_call_may_queue_fn(q, rw);
 
 	return ELV_MQUEUE_MAY;
 }
@@ -935,7 +936,7 @@ void elv_completed_request(struct request_queue *q, struct request *rq)
 	if (blk_account_rq(rq)) {
 		q->in_flight--;
 		if (blk_sorted_rq(rq) && q->elv_ops.elevator_completed_req_fn)
-			q->elv_ops.elevator_completed_req_fn(q, rq);
+			elv_call_completed_req_fn(q, rq);
 	}
 
 	/*
diff --git a/block/elevator.h b/block/elevator.h
new file mode 100644
index 0000000..d8b5f0c
--- /dev/null
+++ b/block/elevator.h
@@ -0,0 +1,162 @@
+#ifndef ELV_INTERN_H
+#define ELV_INTERN_H
+
+#include <linux/blkdev.h>
+#include <linux/elevator.h>
+
+#include "cfq-iosched.h"
+
+static inline int elv_call_allow_merge_fn(struct request_queue *q,
+					  struct request *rq, struct bio *bio)
+{
+#if defined(CONFIG_IOSCHED_CFQ_BUILTIN)
+	if (q->elv_ops.elevator_allow_merge_fn == cfq_allow_merge)
+		return cfq_allow_merge(q, rq, bio);
+#endif
+	return q->elv_ops.elevator_allow_merge_fn(q, rq, bio);
+}
+
+static inline void elv_call_activate_req_fn(struct request_queue *q,
+					    struct request *rq)
+{
+#if defined(CONFIG_IOSCHED_CFQ_BUILTIN)
+	if (q->elv_ops.elevator_activate_req_fn == cfq_activate_request)
+		cfq_activate_request(q, rq);
+	else
+#endif
+		q->elv_ops.elevator_activate_req_fn(q, rq);
+}
+
+static inline void elv_call_deactivate_req_fn(struct request_queue *q,
+					      struct request *rq)
+{
+#if defined(CONFIG_IOSCHED_CFQ_BUILTIN)
+	if (q->elv_ops.elevator_deactivate_req_fn == cfq_deactivate_request)
+		cfq_deactivate_request(q, rq);
+	else
+#endif
+	q->elv_ops.elevator_deactivate_req_fn(q, rq);
+}
+
+static inline int elv_call_merge_fn(struct request_queue *q,
+				    struct request **rq, struct bio *bio)
+{
+#if defined(CONFIG_IOSCHED_CFQ_BUILTIN)
+	if (q->elv_ops.elevator_merge_fn == cfq_merge)
+		return cfq_merge(q, rq, bio);
+#endif
+	return q->elv_ops.elevator_merge_fn(q, rq, bio);
+}
+
+static inline void elv_call_merged_fn(struct request_queue *q,
+				      struct request *rq, int type)
+{
+#if defined(CONFIG_IOSCHED_CFQ_BUILTIN)
+	if (q->elv_ops.elevator_merged_fn == cfq_merged_request)
+		cfq_merged_request(q, rq, type);
+	else
+#endif
+		q->elv_ops.elevator_merged_fn(q, rq, type);
+}
+
+static inline void elv_call_merge_req_fn(struct request_queue *q,
+					 struct request *rq,
+					 struct request *next)
+{
+#if defined(CONFIG_IOSCHED_CFQ_BUILTIN)
+	if (q->elv_ops.elevator_merge_req_fn == cfq_merged_requests)
+		cfq_merged_requests(q, rq, next);
+	else
+#endif
+		q->elv_ops.elevator_merge_req_fn(q, rq, next);
+}
+
+static inline int elv_call_dispatch_fn(struct request_queue *q, int force)
+{
+#if defined(CONFIG_IOSCHED_CFQ_BUILTIN)
+	if (q->elv_ops.elevator_dispatch_fn == cfq_dispatch_requests)
+		return cfq_dispatch_requests(q, force);
+#endif
+	return q->elv_ops.elevator_dispatch_fn(q, force);
+
+}
+
+static inline void elv_call_add_req_fn(struct request_queue *q,
+				       struct request *rq)
+{
+#if defined(CONFIG_IOSCHED_CFQ_BUILTIN)
+	if (q->elv_ops.elevator_add_req_fn == cfq_insert_request)
+		cfq_insert_request(q, rq);
+	else
+#endif
+		q->elv_ops.elevator_add_req_fn(q, rq);
+}
+
+static inline int elv_call_queue_empty_fn(struct request_queue *q)
+{
+#if defined(CONFIG_IOSCHED_CFQ_BUILTIN)
+	if (q->elv_ops.elevator_queue_empty_fn == cfq_queue_empty)
+		return cfq_queue_empty(q);
+#endif
+	return q->elv_ops.elevator_queue_empty_fn(q);
+}
+
+static inline struct request *
+elv_call_former_req_fn(struct request_queue *q, struct request *rq)
+{
+	if (q->elv_ops.elevator_former_req_fn == elv_rb_former_request)
+		return elv_rb_former_request(q, rq);
+
+	return q->elv_ops.elevator_former_req_fn(q, rq);
+}
+
+static inline struct request *
+elv_call_latter_req_fn(struct request_queue *q, struct request *rq)
+{
+	if (q->elv_ops.elevator_latter_req_fn == elv_rb_latter_request)
+		return elv_rb_latter_request(q, rq);
+
+	return q->elv_ops.elevator_latter_req_fn(q, rq);
+}
+
+static int
+elv_call_set_req_fn(struct request_queue *q, struct request *rq, gfp_t gfp_mask)
+{
+#if defined(CONFIG_IOSCHED_CFQ_BUILTIN)
+	if (q->elv_ops.elevator_set_req_fn == cfq_set_request)
+		return cfq_set_request(q, rq, gfp_mask);
+#endif
+	return q->elv_ops.elevator_set_req_fn(q, rq, gfp_mask);
+}
+
+static void elv_call_put_req_fn(struct request_queue *q, struct request *rq)
+{
+#if defined(CONFIG_IOSCHED_CFQ_BUILTIN)
+	if (q->elv_ops.elevator_put_req_fn == cfq_put_request)
+		cfq_put_request(rq);
+	else
+#endif
+		q->elv_ops.elevator_put_req_fn(rq);
+}
+
+static int elv_call_may_queue_fn(struct request_queue *q, int rw)
+{
+#if defined(CONFIG_IOSCHED_CFQ_BUILTIN)
+	if (q->elv_ops.elevator_may_queue_fn == cfq_may_queue)
+		return cfq_may_queue(q, rw);
+#endif
+	return q->elv_ops.elevator_may_queue_fn(q, rw);
+}
+
+static void
+elv_call_completed_req_fn(struct request_queue *q, struct request *rq)
+{
+#if defined(CONFIG_IOSCHED_CFQ_BUILTIN)
+	if (q->elv_ops.elevator_completed_req_fn == cfq_completed_request)
+		cfq_completed_request(q, rq);
+	else
+#endif
+		q->elv_ops.elevator_completed_req_fn(q, rq);
+}
+
+#endif
-- 
1.6.3.rc0.1.gf800


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [PATCH 07/12] writeback: support > 1 flusher thread per bdi
  2009-05-25  7:30 [PATCH 0/12] Per-bdi writeback flusher threads #5 Jens Axboe
                   ` (12 preceding siblings ...)
  2009-05-25  7:30 ` [PATCH 07/13] block: avoid indirect calls to enter cfq io scheduler Jens Axboe
@ 2009-05-25  7:30 ` Jens Axboe
  2009-05-25  7:30 ` [PATCH 08/13] block: change the tag sync vs async restriction logic Jens Axboe
                   ` (11 subsequent siblings)
  25 siblings, 0 replies; 61+ messages in thread
From: Jens Axboe @ 2009-05-25  7:30 UTC (permalink / raw)
  To: linux-kernel, linux-fsdevel
  Cc: chris.mason, david, hch, akpm, jack, yanmin_zhang, Jens Axboe

Build on the bdi_writeback support by allowing registration of more than one
flusher thread per bdi. File systems can call bdi_add_flusher_task(bdi) to
add further flusher threads to the device. If they do so, they must also
provide a super_operations hook that returns the bdi_writeback struct to use
for any given inode.
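
As a rough sketch of how a filesystem might opt in (the foo_* names, the
per-sb and per-inode containers and the setup details are invented for
illustration; only the ->inode_get_wb hook and bdi_add_flusher_task() come
from this patch, and the code assumes the usual kernel build context):

#include <linux/kernel.h>
#include <linux/fs.h>
#include <linux/backing-dev.h>

/* Invented per-sb and per-inode containers for the example */
struct foo_sb_info {
	struct backing_dev_info bdi;	/* assumed bdi_init()ed/registered elsewhere */
};

struct foo_inode_info {
	struct bdi_writeback *wb;	/* flusher picked when the inode was set up */
	struct inode vfs_inode;
};

static inline struct foo_inode_info *FOO_I(struct inode *inode)
{
	return container_of(inode, struct foo_inode_info, vfs_inode);
}

/* ->inode_get_wb: tell writeback which flusher thread services this inode */
static struct bdi_writeback *foo_inode_get_wb(struct inode *inode)
{
	return FOO_I(inode)->wb;
}

static const struct super_operations foo_super_ops = {
	/* the usual alloc_inode/write_inode/etc. would go here */
	.inode_get_wb	= foo_inode_get_wb,
};

static int foo_fill_super(struct super_block *sb, void *data, int silent)
{
	struct foo_sb_info *sbi = sb->s_fs_info;	/* assumed set up by the caller */

	sb->s_op = &foo_super_ops;

	/* ask for one extra flusher thread on top of the default embedded one */
	bdi_add_flusher_task(&sbi->bdi);
	return 0;
}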

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
 fs/fs-writeback.c           |  346 +++++++++++++++++++++++++++++++++----------
 include/linux/backing-dev.h |   32 ++++-
 include/linux/fs.h          |    3 +
 mm/backing-dev.c            |  244 ++++++++++++++++++++++++------
 mm/page-writeback.c         |    4 +-
 5 files changed, 495 insertions(+), 134 deletions(-)

diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index 7a9f0b0..563860c 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -34,86 +34,196 @@
  */
 int nr_pdflush_threads;
 
-/**
- * writeback_acquire - attempt to get exclusive writeback access to a device
- * @bdi: the device's backing_dev_info structure
- *
- * It is a waste of resources to have more than one pdflush thread blocked on
- * a single request queue.  Exclusion at the request_queue level is obtained
- * via a flag in the request_queue's backing_dev_info.state.
- *
- * Non-request_queue-backed address_spaces will share default_backing_dev_info,
- * unless they implement their own.  Which is somewhat inefficient, as this
- * may prevent concurrent writeback against multiple devices.
+static void generic_sync_wb_inodes(struct bdi_writeback *wb,
+				   struct super_block *sb,
+				   struct writeback_control *wbc);
+
+/*
+ * Work items for the bdi_writeback threads
  */
-static int writeback_acquire(struct bdi_writeback *wb)
+struct bdi_work {
+	struct list_head list;
+	struct rcu_head rcu_head;
+
+	unsigned long seen;
+	atomic_t pending;
+
+	unsigned long sb_data;
+	unsigned long nr_pages;
+	enum writeback_sync_modes sync_mode;
+
+	unsigned long state;
+};
+
+static struct super_block *bdi_work_sb(struct bdi_work *work)
 {
-	struct backing_dev_info *bdi = wb->bdi;
+	return (struct super_block *) (work->sb_data & ~1UL);
+}
+
+static inline bool bdi_work_on_stack(struct bdi_work *work)
+{
+	return work->sb_data & 1UL;
+}
+
+static inline void bdi_work_init(struct bdi_work *work, struct super_block *sb,
+				 unsigned long nr_pages,
+				 enum writeback_sync_modes sync_mode)
+{
+	INIT_RCU_HEAD(&work->rcu_head);
+	work->sb_data = (unsigned long) sb;
+	work->nr_pages = nr_pages;
+	work->sync_mode = sync_mode;
+	work->state = 0;
+}
 
-	return !test_and_set_bit(wb->nr, &bdi->wb_active);
+static inline void bdi_work_init_on_stack(struct bdi_work *work,
+					  struct super_block *sb,
+					  unsigned long nr_pages,
+					  enum writeback_sync_modes sync_mode)
+{
+	bdi_work_init(work, sb, nr_pages, sync_mode);
+	set_bit(0, &work->state);
+	work->sb_data |= 1UL;
 }
 
 /**
  * writeback_in_progress - determine whether there is writeback in progress
  * @bdi: the device's backing_dev_info structure.
  *
- * Determine whether there is writeback in progress against a backing device.
+ * Determine whether there is writeback waiting to be handled against a
+ * backing device.
  */
 int writeback_in_progress(struct backing_dev_info *bdi)
 {
-	return bdi->wb_active != 0;
+	return !list_empty(&bdi->work_list);
 }
 
-/**
- * writeback_release - relinquish exclusive writeback access against a device.
- * @bdi: the device's backing_dev_info structure
- */
-static void writeback_release(struct bdi_writeback *wb)
+static void bdi_work_free(struct rcu_head *head)
 {
-	struct backing_dev_info *bdi = wb->bdi;
+	struct bdi_work *work = container_of(head, struct bdi_work, rcu_head);
+
+	if (!bdi_work_on_stack(work))
+		kfree(work);
+	else {
+		clear_bit(0, &work->state);
+		smp_mb__after_clear_bit();
+		wake_up_bit(&work->state, 0);
+	}
+}
+
+static void wb_clear_pending(struct bdi_writeback *wb, struct bdi_work *work)
+{
+	/*
+	 * The caller has retrieved the work arguments from this work,
+	 * drop our reference. If this is the last ref, delete and free it
+	 */
+	if (atomic_dec_and_test(&work->pending)) {
+		struct backing_dev_info *bdi = wb->bdi;
 
-	wb->nr_pages = 0;
-	wb->sb = NULL;
-	clear_bit(wb->nr, &bdi->wb_active);
+		spin_lock(&bdi->wb_lock);
+		list_del_rcu(&work->list);
+		spin_unlock(&bdi->wb_lock);
+
+		call_rcu(&work->rcu_head, bdi_work_free);
+	}
 }
 
-static void wb_start_writeback(struct bdi_writeback *wb, struct super_block *sb,
-			       long nr_pages,
-			       enum writeback_sync_modes sync_mode)
+static void wb_start_writeback(struct bdi_writeback *wb, struct bdi_work *work)
 {
-	if (!wb_has_dirty_io(wb))
-		return;
+	/*
+	 * If we failed to allocate the bdi work item, always wake up the wb
+	 * thread. As a safety precaution, it will then flush out everything.
+	 */
+	if (!wb_has_dirty_io(wb) && work)
+		wb_clear_pending(wb, work);
+	else
+		wake_up(&wb->wait);
+}
 
-	if (writeback_acquire(wb)) {
-		wb->nr_pages = nr_pages;
-		wb->sb = sb;
-		wb->sync_mode = sync_mode;
+/*
+ * Add work to bdi work list.
+ */
+static int bdi_queue_writeback(struct backing_dev_info *bdi,
+			       struct bdi_work *work)
+{
+	if (work) {
+		work->seen = bdi->wb_mask;
+		atomic_set(&work->pending, bdi->wb_cnt);
 
 		/*
-		 * make above store seen before the task is woken
+		 * Make sure stores are seen before it appears on the list
 		 */
 		smp_mb();
-		wake_up(&wb->wait);
+
+		spin_lock(&bdi->wb_lock);
+		list_add_tail_rcu(&work->list, &bdi->work_list);
+		spin_unlock(&bdi->wb_lock);
 	}
-}
 
-int bdi_start_writeback(struct backing_dev_info *bdi, struct super_block *sb,
-			 long nr_pages, enum writeback_sync_modes sync_mode)
-{
 	/*
 	 * This only happens the first time someone kicks this bdi, so put
 	 * it out-of-line.
 	 */
-	if (unlikely(!bdi->wb.task)) {
+	if (unlikely(list_empty_careful(&bdi->wb_list))) {
+		mutex_lock(&bdi_lock);
 		bdi_add_default_flusher_task(bdi);
+		mutex_unlock(&bdi_lock);
 		return 1;
 	}
 
-	wb_start_writeback(&bdi->wb, sb, nr_pages, sync_mode);
+	if (!bdi_wblist_needs_lock(bdi))
+		wb_start_writeback(&bdi->wb, work);
+	else {
+		struct bdi_writeback *wb;
+		int idx;
+
+		idx = srcu_read_lock(&bdi->srcu);
+
+		list_for_each_entry_rcu(wb, &bdi->wb_list, list)
+			wb_start_writeback(wb, work);
+
+		srcu_read_unlock(&bdi->srcu, idx);
+	}
+
 	return 0;
 }
 
 /*
+ * Used for on-stack allocated work items. The caller needs to wait until
+ * the wb threads have acked the work before it's safe to continue.
+ */
+static void bdi_wait_on_work_start(struct bdi_work *work)
+{
+	wait_on_bit(&work->state, 0, bdi_sched_wait, TASK_UNINTERRUPTIBLE);
+}
+
+int bdi_start_writeback(struct backing_dev_info *bdi, struct super_block *sb,
+			 long nr_pages, enum writeback_sync_modes sync_mode)
+{
+	struct bdi_work work_stack, *work;
+	int ret;
+
+	work = kmalloc(sizeof(*work), GFP_ATOMIC);
+	if (work)
+		bdi_work_init(work, sb, nr_pages, sync_mode);
+	else {
+		work = &work_stack;
+		bdi_work_init_on_stack(work, sb, nr_pages, sync_mode);
+	}
+
+	ret = bdi_queue_writeback(bdi, work);
+
+	/*
+	 * If this came from our stack, we need to wait until the wb threads
+	 * have noticed this work before we return (and invalidate the stack)
+	 */
+	if (work == &work_stack)
+		bdi_wait_on_work_start(work);
+
+	return ret;
+}
+
+/*
  * The maximum number of pages to writeout in a single bdi flush/kupdate
  * operation.  We do this so we don't hold I_SYNC against an inode for
  * enormous amounts of time, which would block a userspace task which has
@@ -162,7 +272,7 @@ static void wb_kupdated(struct bdi_writeback *wb)
 		wbc.more_io = 0;
 		wbc.encountered_congestion = 0;
 		wbc.nr_to_write = MAX_WRITEBACK_PAGES;
-		generic_sync_bdi_inodes(NULL, &wbc);
+		generic_sync_wb_inodes(wb, NULL, &wbc);
 		if (wbc.nr_to_write > 0)
 			break;	/* All the old data is written */
 		nr_to_write -= MAX_WRITEBACK_PAGES;
@@ -179,22 +289,19 @@ static inline bool over_bground_thresh(void)
 		global_page_state(NR_UNSTABLE_NFS) >= background_thresh);
 }
 
-static void generic_sync_wb_inodes(struct bdi_writeback *wb,
-				   struct super_block *sb,
-				   struct writeback_control *wbc);
-
-static void wb_writeback(struct bdi_writeback *wb)
+static void __wb_writeback(struct bdi_writeback *wb, long nr_pages,
+			   struct super_block *sb,
+			   enum writeback_sync_modes sync_mode)
 {
 	struct writeback_control wbc = {
 		.bdi			= wb->bdi,
-		.sync_mode		= wb->sync_mode,
+		.sync_mode		= sync_mode,
 		.older_than_this	= NULL,
 		.range_cyclic		= 1,
 	};
-	long nr_pages = wb->nr_pages;
 
 	for (;;) {
-		if (wbc.sync_mode == WB_SYNC_NONE && nr_pages <= 0 &&
+		if (sync_mode == WB_SYNC_NONE && nr_pages <= 0 &&
 		    !over_bground_thresh())
 			break;
 
@@ -202,7 +309,7 @@ static void wb_writeback(struct bdi_writeback *wb)
 		wbc.encountered_congestion = 0;
 		wbc.nr_to_write = MAX_WRITEBACK_PAGES;
 		wbc.pages_skipped = 0;
-		generic_sync_wb_inodes(wb, wb->sb, &wbc);
+		generic_sync_wb_inodes(wb, sb, &wbc);
 		nr_pages -= MAX_WRITEBACK_PAGES - wbc.nr_to_write;
 		/*
 		 * If we ran out of stuff to write, bail unless more_io got set
@@ -216,46 +323,91 @@ static void wb_writeback(struct bdi_writeback *wb)
 }
 
 /*
+ * Return the next bdi_work struct that hasn't been processed by this
+ * wb thread yet
+ */
+static struct bdi_work *get_next_work_item(struct backing_dev_info *bdi,
+					   struct bdi_writeback *wb)
+{
+	struct bdi_work *work, *ret = NULL;
+
+	rcu_read_lock();
+
+	list_for_each_entry_rcu(work, &bdi->work_list, list) {
+		if (!test_and_clear_bit(wb->nr, &work->seen))
+			continue;
+
+		ret = work;
+		break;
+	}
+
+	rcu_read_unlock();
+	return ret;
+}
+
+static void wb_writeback(struct bdi_writeback *wb)
+{
+	struct backing_dev_info *bdi = wb->bdi;
+	struct bdi_work *work;
+
+	while ((work = get_next_work_item(bdi, wb)) != NULL) {
+		struct super_block *sb = bdi_work_sb(work);
+		long nr_pages = work->nr_pages;
+		enum writeback_sync_modes sync_mode = work->sync_mode;
+
+		wb_clear_pending(wb, work);
+		__wb_writeback(wb, nr_pages, sb, sync_mode);
+	}
+}
+
+/*
+ * This will be inlined in bdi_writeback_task() once we get rid of any
+ * dirty inodes on the default_backing_dev_info
+ */
+static void wb_do_writeback(struct bdi_writeback *wb)
+{
+	/*
+	 * We get here in two cases:
+	 *
+	 *  schedule_timeout() returned because the dirty writeback
+	 *  interval has elapsed. If that happens, the work item list
+	 *  will be empty and we will proceed to do kupdated style writeout.
+	 *
+	 *  Someone called bdi_start_writeback(), which put one/more work
+	 *  items on the work_list. Process those.
+	 */
+	if (list_empty(&wb->bdi->work_list))
+		wb_kupdated(wb);
+	else
+		wb_writeback(wb);
+}
+
+/*
  * Handle writeback of dirty data for the device backed by this bdi. Also
  * wakes up periodically and does kupdated style flushing.
  */
 int bdi_writeback_task(struct bdi_writeback *wb)
 {
+	DEFINE_WAIT(wait);
+
 	while (!kthread_should_stop()) {
 		unsigned long wait_jiffies;
-		DEFINE_WAIT(wait);
+
+		wb_do_writeback(wb);
 
 		prepare_to_wait(&wb->wait, &wait, TASK_INTERRUPTIBLE);
 		wait_jiffies = msecs_to_jiffies(dirty_writeback_interval * 10);
 		schedule_timeout(wait_jiffies);
 		try_to_freeze();
-
-		/*
-		 * We get here in two cases:
-		 *
-		 *  schedule_timeout() returned because the dirty writeback
-		 *  interval has elapsed. If that happens, we will be able
-		 *  to acquire the writeback lock and will proceed to do
-		 *  kupdated style writeout.
-		 *
-		 *  Someone called bdi_start_writeback(), which will acquire
-		 *  the writeback lock. This means our writeback_acquire()
-		 *  below will fail and we call into bdi_pdflush() for
-		 *  pdflush style writeout.
-		 *
-		 */
-		if (writeback_acquire(wb))
-			wb_kupdated(wb);
-		else
-			wb_writeback(wb);
-
-		writeback_release(wb);
-		finish_wait(&wb->wait, &wait);
 	}
 
+	finish_wait(&wb->wait, &wait);
 	return 0;
 }
 
+/*
+ * Do in-line writeback for all backing devices. Expensive!
+ */
 void bdi_writeback_all(struct super_block *sb, long nr_pages,
 		       enum writeback_sync_modes sync_mode)
 {
@@ -266,18 +418,38 @@ void bdi_writeback_all(struct super_block *sb, long nr_pages,
 	list_for_each_entry_safe(bdi, tmp, &bdi_list, bdi_list) {
 		if (!bdi_has_dirty_io(bdi))
 			continue;
-		bdi_start_writeback(bdi, sb, nr_pages, sync_mode);
+
+		if (!bdi_wblist_needs_lock(bdi))
+			__wb_writeback(&bdi->wb, 0, sb, sync_mode);
+		else {
+			struct bdi_writeback *wb;
+			int idx;
+
+			idx = srcu_read_lock(&bdi->srcu);
+
+			list_for_each_entry_rcu(wb, &bdi->wb_list, list)
+				__wb_writeback(wb, 0, sb, sync_mode);
+
+			srcu_read_unlock(&bdi->srcu, idx);
+		}
 	}
 
 	mutex_unlock(&bdi_lock);
 }
 
 /*
- * We have only a single wb per bdi, so just return that.
+ * If the filesystem didn't provide a way to map an inode to a dedicated
+ * flusher thread, it doesn't support more than 1 thread. So we know it's
+ * the default thread, return that.
  */
 static inline struct bdi_writeback *inode_get_wb(struct inode *inode)
 {
-	return &inode_to_bdi(inode)->wb;
+	const struct super_operations *sop = inode->i_sb->s_op;
+
+	if (!sop->inode_get_wb)
+		return &inode_to_bdi(inode)->wb;
+
+	return sop->inode_get_wb(inode);
 }
 
 /**
@@ -731,8 +903,24 @@ void generic_sync_bdi_inodes(struct super_block *sb,
 			     struct writeback_control *wbc)
 {
 	struct backing_dev_info *bdi = wbc->bdi;
+	struct bdi_writeback *wb;
+
+	/*
+	 * Common case is just a single wb thread and that is embedded in
+	 * the bdi, so it doesn't need locking
+	 */
+	if (!bdi_wblist_needs_lock(bdi))
+		generic_sync_wb_inodes(&bdi->wb, sb, wbc);
+	else {
+		int idx;
 
-	generic_sync_wb_inodes(&bdi->wb, sb, wbc);
+		idx = srcu_read_lock(&bdi->srcu);
+
+		list_for_each_entry_rcu(wb, &bdi->wb_list, list)
+			generic_sync_wb_inodes(wb, sb, wbc);
+
+		srcu_read_unlock(&bdi->srcu, idx);
+	}
 }
 
 /*
diff --git a/include/linux/backing-dev.h b/include/linux/backing-dev.h
index 77dc62c..72b4797 100644
--- a/include/linux/backing-dev.h
+++ b/include/linux/backing-dev.h
@@ -13,6 +13,8 @@
 #include <linux/proportions.h>
 #include <linux/kernel.h>
 #include <linux/fs.h>
+#include <linux/sched.h>
+#include <linux/srcu.h>
 #include <linux/writeback.h>
 #include <asm/atomic.h>
 
@@ -26,6 +28,7 @@ struct dentry;
 enum bdi_state {
 	BDI_pending,		/* On its way to being activated */
 	BDI_wb_alloc,		/* Default embedded wb allocated */
+	BDI_wblist_lock,	/* bdi->wb_list now needs locking */
 	BDI_async_congested,	/* The async (write) queue is getting full */
 	BDI_sync_congested,	/* The sync queue is getting full */
 	BDI_unused,		/* Available bits start here */
@@ -42,6 +45,8 @@ enum bdi_stat_item {
 #define BDI_STAT_BATCH (8*(1+ilog2(nr_cpu_ids)))
 
 struct bdi_writeback {
+	struct list_head list;			/* hangs off the bdi */
+
 	struct backing_dev_info *bdi;		/* our parent bdi */
 	unsigned int nr;
 
@@ -50,13 +55,12 @@ struct bdi_writeback {
 	struct list_head	b_dirty;	/* dirty inodes */
 	struct list_head	b_io;		/* parked for writeback */
 	struct list_head	b_more_io;	/* parked for more writeback */
-
-	unsigned long		nr_pages;
-	struct super_block	*sb;
-	enum writeback_sync_modes sync_mode;
 };
 
+#define BDI_MAX_FLUSHERS	32
+
 struct backing_dev_info {
+	struct srcu_struct srcu; /* for wb_list read side protection */
 	struct list_head bdi_list;
 	unsigned long ra_pages;	/* max readahead in PAGE_CACHE_SIZE units */
 	unsigned long state;	/* Always use atomic bitops on this */
@@ -75,8 +79,12 @@ struct backing_dev_info {
 	unsigned int max_ratio, max_prop_frac;
 
 	struct bdi_writeback wb;  /* default writeback info for this bdi */
-	unsigned long wb_active;  /* bitmap of active tasks */
-	unsigned long wb_mask;	  /* number of registered tasks */
+	spinlock_t wb_lock;	  /* protects update side of wb_list */
+	struct list_head wb_list; /* the flusher threads hanging off this bdi */
+	unsigned long wb_mask;	  /* bitmask of registered tasks */
+	unsigned int wb_cnt;	  /* number of registered tasks */
+
+	struct list_head work_list;
 
 	struct device *dev;
 
@@ -99,11 +107,17 @@ int bdi_writeback_task(struct bdi_writeback *wb);
 void bdi_writeback_all(struct super_block *sb, long nr_pages,
 			enum writeback_sync_modes sync_mode);
 void bdi_add_default_flusher_task(struct backing_dev_info *bdi);
+void bdi_add_flusher_task(struct backing_dev_info *bdi);
 int bdi_has_dirty_io(struct backing_dev_info *bdi);
 
 extern struct mutex bdi_lock;
 extern struct list_head bdi_list;
 
+static inline int bdi_wblist_needs_lock(struct backing_dev_info *bdi)
+{
+	return test_bit(BDI_wblist_lock, &bdi->state);
+}
+
 static inline int wb_has_dirty_io(struct bdi_writeback *wb)
 {
 	return !list_empty(&wb->b_dirty) ||
@@ -316,4 +330,10 @@ static inline bool mapping_cap_swap_backed(struct address_space *mapping)
 	return bdi_cap_swap_backed(mapping->backing_dev_info);
 }
 
+static inline int bdi_sched_wait(void *word)
+{
+	schedule();
+	return 0;
+}
+
 #endif		/* _LINUX_BACKING_DEV_H */
diff --git a/include/linux/fs.h b/include/linux/fs.h
index ecdc544..d3bda5d 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -1550,11 +1550,14 @@ extern ssize_t vfs_readv(struct file *, const struct iovec __user *,
 extern ssize_t vfs_writev(struct file *, const struct iovec __user *,
 		unsigned long, loff_t *);
 
+struct bdi_writeback;
+
 struct super_operations {
    	struct inode *(*alloc_inode)(struct super_block *sb);
 	void (*destroy_inode)(struct inode *);
 
    	void (*dirty_inode) (struct inode *);
+	struct bdi_writeback *(*inode_get_wb) (struct inode *);
 	int (*write_inode) (struct inode *, int);
 	void (*drop_inode) (struct inode *);
 	void (*delete_inode) (struct inode *);
diff --git a/mm/backing-dev.c b/mm/backing-dev.c
index df90b0e..3e74041 100644
--- a/mm/backing-dev.c
+++ b/mm/backing-dev.c
@@ -199,7 +199,42 @@ static int __init default_bdi_init(void)
 }
 subsys_initcall(default_bdi_init);
 
-static void bdi_wb_init(struct bdi_writeback *wb, struct backing_dev_info *bdi)
+static int wb_assign_nr(struct backing_dev_info *bdi, struct bdi_writeback *wb)
+{
+	unsigned long mask = BDI_MAX_FLUSHERS - 1;
+	unsigned int nr;
+
+	do {
+		if ((bdi->wb_mask & mask) == mask)
+			return 1;
+
+		nr = find_first_zero_bit(&bdi->wb_mask, BDI_MAX_FLUSHERS);
+	} while (test_and_set_bit(nr, &bdi->wb_mask));
+
+	wb->nr = nr;
+
+	spin_lock(&bdi->wb_lock);
+	bdi->wb_cnt++;
+	spin_unlock(&bdi->wb_lock);
+
+	return 0;
+}
+
+static void bdi_put_wb(struct backing_dev_info *bdi, struct bdi_writeback *wb)
+{
+	clear_bit(wb->nr, &bdi->wb_mask);
+
+	if (wb == &bdi->wb)
+		clear_bit(BDI_wb_alloc, &bdi->state);
+	else
+		kfree(wb);
+
+	spin_lock(&bdi->wb_lock);
+	bdi->wb_cnt--;
+	spin_unlock(&bdi->wb_lock);
+}
+
+static int bdi_wb_init(struct bdi_writeback *wb, struct backing_dev_info *bdi)
 {
 	memset(wb, 0, sizeof(*wb));
 
@@ -208,6 +243,30 @@ static void bdi_wb_init(struct bdi_writeback *wb, struct backing_dev_info *bdi)
 	INIT_LIST_HEAD(&wb->b_dirty);
 	INIT_LIST_HEAD(&wb->b_io);
 	INIT_LIST_HEAD(&wb->b_more_io);
+
+	return wb_assign_nr(bdi, wb);
+}
+
+static struct bdi_writeback *bdi_new_wb(struct backing_dev_info *bdi)
+{
+	struct bdi_writeback *wb;
+
+	/*
+	 * Default bdi->wb is already assigned, so just return it
+	 */
+	if (!test_and_set_bit(BDI_wb_alloc, &bdi->state))
+		wb = &bdi->wb;
+	else {
+		wb = kmalloc(sizeof(struct bdi_writeback), GFP_KERNEL);
+		if (wb) {
+			if (bdi_wb_init(wb, bdi)) {
+				kfree(wb);
+				wb = NULL;
+			}
+		}
+	}
+
+	return wb;
 }
 
 static void bdi_flush_io(struct backing_dev_info *bdi)
@@ -223,35 +282,26 @@ static void bdi_flush_io(struct backing_dev_info *bdi)
 	generic_sync_bdi_inodes(NULL, &wbc);
 }
 
-static int wb_assign_nr(struct backing_dev_info *bdi, struct bdi_writeback *wb)
+static void bdi_task_init(struct backing_dev_info *bdi,
+			  struct bdi_writeback *wb)
 {
-	set_bit(0, &bdi->wb_mask);
-	wb->nr = 0;
-	return 0;
-}
-
-static void bdi_put_wb(struct backing_dev_info *bdi, struct bdi_writeback *wb)
-{
-	clear_bit(wb->nr, &bdi->wb_mask);
-	clear_bit(BDI_wb_alloc, &bdi->state);
-}
+	struct task_struct *tsk = current;
+	int was_empty;
 
-static struct bdi_writeback *bdi_new_wb(struct backing_dev_info *bdi)
-{
-	struct bdi_writeback *wb;
+	/*
+	 * Add us to the active bdi_list. If we are adding threads beyond
+	 * the default embedded bdi_writeback, then we need to start using
+	 * proper locking. Check the list for empty first, then set the
+	 * BDI_wblist_lock flag if there's > 1 entry on the list now
+	 */
+	spin_lock(&bdi->wb_lock);
 
-	set_bit(BDI_wb_alloc, &bdi->state);
-	wb = &bdi->wb;
-	wb_assign_nr(bdi, wb);
-	return wb;
-}
+	was_empty = list_empty(&bdi->wb_list);
+	list_add_tail_rcu(&wb->list, &bdi->wb_list);
+	if (!was_empty)
+		set_bit(BDI_wblist_lock, &bdi->state);
 
-static int bdi_start_fn(void *ptr)
-{
-	struct bdi_writeback *wb = ptr;
-	struct backing_dev_info *bdi = wb->bdi;
-	struct task_struct *tsk = current;
-	int ret;
+	spin_unlock(&bdi->wb_lock);
 
 	tsk->flags |= PF_FLUSHER | PF_SWAPWRITE;
 	set_freezable();
@@ -260,6 +310,15 @@ static int bdi_start_fn(void *ptr)
 	 * Our parent may run at a different priority, just set us to normal
 	 */
 	set_user_nice(tsk, 0);
+}
+
+static int bdi_start_fn(void *ptr)
+{
+	struct bdi_writeback *wb = ptr;
+	struct backing_dev_info *bdi = wb->bdi;
+	int ret;
+
+	bdi_task_init(bdi, wb);
 
 	/*
 	 * Clear pending bit and wakeup anybody waiting to tear us down
@@ -268,15 +327,53 @@ static int bdi_start_fn(void *ptr)
 	smp_mb__after_clear_bit();
 	wake_up_bit(&bdi->state, BDI_pending);
 
+	/*
+	 * Make us discoverable on the bdi_list again
+	 */
+	mutex_lock(&bdi_lock);
+	list_add_tail(&bdi->bdi_list, &bdi_list);
+	mutex_unlock(&bdi_lock);
+
 	ret = bdi_writeback_task(wb);
 
+	/*
+	 * Remove us from the list
+	 */
+	spin_lock(&bdi->wb_lock);
+	list_del_rcu(&wb->list);
+	spin_unlock(&bdi->wb_lock);
+
+	/*
+	 * wait for rcu grace period to end, so we can free wb
+	 */
+	synchronize_srcu(&bdi->srcu);
+
 	bdi_put_wb(bdi, wb);
 	return ret;
 }
 
 int bdi_has_dirty_io(struct backing_dev_info *bdi)
 {
-	return wb_has_dirty_io(&bdi->wb);
+	struct bdi_writeback *wb;
+	int ret = 0;
+
+	if (!bdi_wblist_needs_lock(bdi))
+		ret = wb_has_dirty_io(&bdi->wb);
+	else {
+		int idx;
+
+		idx = srcu_read_lock(&bdi->srcu);
+
+		list_for_each_entry_rcu(wb, &bdi->wb_list, list) {
+			ret = wb_has_dirty_io(wb);
+			if (ret)
+				break;
+		}
+
+		srcu_read_unlock(&bdi->srcu, idx);
+	}
+
+	return ret;
 }
 
 static int bdi_forker_task(void *ptr)
@@ -284,6 +381,8 @@ static int bdi_forker_task(void *ptr)
 	struct bdi_writeback *me = ptr;
 	DEFINE_WAIT(wait);
 
+	bdi_task_init(me->bdi, me);
+
 	for (;;) {
 		struct backing_dev_info *bdi, *tmp;
 		struct bdi_writeback *wb;
@@ -360,26 +459,69 @@ readd_flush:
 }
 
 /*
- * Add a new flusher task that gets created for any bdi
- * that has dirty data pending writeout
+ * bdi_lock held on entry
  */
-void bdi_add_default_flusher_task(struct backing_dev_info *bdi)
+static void bdi_add_one_flusher_task(struct backing_dev_info *bdi,
+				     int(*func)(struct backing_dev_info *))
 {
 	if (!bdi_cap_writeback_dirty(bdi))
 		return;
 
 	/*
-	 * Someone already marked this pending for task creation
+	 * Check with the helper whether to proceed adding a task. Will only
+	 * abort if two or more simultaneous calls to
+	 * bdi_add_default_flusher_task() occurred; further additions will
+	 * block waiting for previous additions to finish.
 	 */
-	if (test_and_set_bit(BDI_pending, &bdi->state))
-		return;
+	if (!func(bdi)) {
+		list_move_tail(&bdi->bdi_list, &bdi_pending_list);
 
-	mutex_lock(&bdi_lock);
-	list_move_tail(&bdi->bdi_list, &bdi_pending_list);
+		/*
+		 * We are now on the pending list, wake up bdi_forker_task()
+		 * to finish the job and add us back to the active bdi_list
+		 */
+		wake_up(&default_backing_dev_info.wb.wait);
+	}
+}
+
+static int flusher_add_helper_block(struct backing_dev_info *bdi)
+{
 	mutex_unlock(&bdi_lock);
+	wait_on_bit_lock(&bdi->state, BDI_pending, bdi_sched_wait,
+				TASK_UNINTERRUPTIBLE);
+	mutex_lock(&bdi_lock);
+	return 0;
+}
+
+static int flusher_add_helper_test(struct backing_dev_info *bdi)
+{
+	return test_and_set_bit(BDI_pending, &bdi->state);
+}
+
+/*
+ * Add the default flusher task that gets created for any bdi
+ * that has dirty data pending writeout
+ */
+void bdi_add_default_flusher_task(struct backing_dev_info *bdi)
+{
+	bdi_add_one_flusher_task(bdi, flusher_add_helper_test);
+}
 
-	wake_up(&default_backing_dev_info.wb.wait);
+/**
+ * bdi_add_flusher_task - add one more flusher task to this @bdi
+ *  @bdi:	the bdi
+ *
+ * Add an additional flusher task to this @bdi. Will block waiting on
+ * previous additions, if any.
+ *
+ */
+void bdi_add_flusher_task(struct backing_dev_info *bdi)
+{
+	mutex_lock(&bdi_lock);
+	bdi_add_one_flusher_task(bdi, flusher_add_helper_block);
+	mutex_unlock(&bdi_lock);
 }
+EXPORT_SYMBOL(bdi_add_flusher_task);
 
 int bdi_register(struct backing_dev_info *bdi, struct device *parent,
 		const char *fmt, ...)
@@ -444,17 +586,13 @@ int bdi_register_dev(struct backing_dev_info *bdi, dev_t dev)
 }
 EXPORT_SYMBOL(bdi_register_dev);
 
-static int sched_wait(void *word)
-{
-	schedule();
-	return 0;
-}
-
 /*
  * Remove bdi from global list and shutdown any threads we have running
  */
 static void bdi_wb_shutdown(struct backing_dev_info *bdi)
 {
+	struct bdi_writeback *wb;
+
 	if (!bdi_cap_writeback_dirty(bdi))
 		return;
 
@@ -462,7 +600,8 @@ static void bdi_wb_shutdown(struct backing_dev_info *bdi)
 	 * If setup is pending, wait for that to complete first
 	 * Make sure nobody finds us on the bdi_list anymore
 	 */
-	wait_on_bit(&bdi->state, BDI_pending, sched_wait, TASK_UNINTERRUPTIBLE);
+	wait_on_bit(&bdi->state, BDI_pending, bdi_sched_wait,
+			TASK_UNINTERRUPTIBLE);
 
 	/*
 	 * Make sure nobody finds us on the bdi_list anymore
@@ -472,9 +611,11 @@ static void bdi_wb_shutdown(struct backing_dev_info *bdi)
 	mutex_unlock(&bdi_lock);
 
 	/*
-	 * Finally, kill the kernel thread
+	 * Finally, kill the kernel threads. We don't need to be RCU
+	 * safe anymore, since the bdi is gone from visibility.
 	 */
-	kthread_stop(bdi->wb.task);
+	list_for_each_entry(wb, &bdi->wb_list, list)
+		kthread_stop(wb->task);
 }
 
 void bdi_unregister(struct backing_dev_info *bdi)
@@ -499,8 +640,12 @@ int bdi_init(struct backing_dev_info *bdi)
 	bdi->min_ratio = 0;
 	bdi->max_ratio = 100;
 	bdi->max_prop_frac = PROP_FRAC_BASE;
+	spin_lock_init(&bdi->wb_lock);
+	bdi->wb_mask = 0;
+	bdi->wb_cnt = 0;
 	INIT_LIST_HEAD(&bdi->bdi_list);
-	bdi->wb_mask = bdi->wb_active = 0;
+	INIT_LIST_HEAD(&bdi->wb_list);
+	INIT_LIST_HEAD(&bdi->work_list);
 
 	bdi_wb_init(&bdi->wb, bdi);
 
@@ -510,10 +655,15 @@ int bdi_init(struct backing_dev_info *bdi)
 			goto err;
 	}
 
+	err = init_srcu_struct(&bdi->srcu);
+	if (err)
+		goto err;
+
 	bdi->dirty_exceeded = 0;
 	err = prop_local_init_percpu(&bdi->completions);
 
 	if (err) {
+		cleanup_srcu_struct(&bdi->srcu);
 err:
 		while (i--)
 			percpu_counter_destroy(&bdi->bdi_stat[i]);
@@ -531,6 +681,8 @@ void bdi_destroy(struct backing_dev_info *bdi)
 
 	bdi_unregister(bdi);
 
+	cleanup_srcu_struct(&bdi->srcu);
+
 	for (i = 0; i < NR_BDI_STAT_ITEMS; i++)
 		percpu_counter_destroy(&bdi->bdi_stat[i]);
 
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 54a4a65..7dd7de7 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -665,8 +665,7 @@ void throttle_vm_writeout(gfp_t gfp_mask)
 
 /*
  * Start writeback of `nr_pages' pages.  If `nr_pages' is zero, write back
- * the whole world.  Returns 0 if a pdflush thread was dispatched.  Returns
- * -1 if all pdflush threads were busy.
+ * the whole world.
  */
 void wakeup_flusher_threads(long nr_pages)
 {
@@ -674,7 +673,6 @@ void wakeup_flusher_threads(long nr_pages)
 		nr_pages = global_page_state(NR_FILE_DIRTY) +
 				global_page_state(NR_UNSTABLE_NFS);
 	bdi_writeback_all(NULL, nr_pages, WB_SYNC_NONE);
-	return;
 }
 
 static void laptop_timer_fn(unsigned long unused);
-- 
1.6.3.rc0.1.gf800


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [PATCH 08/13] block: change the tag sync vs async restriction logic
  2009-05-25  7:30 [PATCH 0/12] Per-bdi writeback flusher threads #5 Jens Axboe
                   ` (13 preceding siblings ...)
  2009-05-25  7:30 ` [PATCH 07/12] writeback: support > 1 flusher thread per bdi Jens Axboe
@ 2009-05-25  7:30 ` Jens Axboe
  2009-05-25  7:30 ` [PATCH 08/12] writeback: include default_backing_dev_info in writeback Jens Axboe
                   ` (10 subsequent siblings)
  25 siblings, 0 replies; 61+ messages in thread
From: Jens Axboe @ 2009-05-25  7:30 UTC (permalink / raw)
  To: linux-kernel, linux-fsdevel
  Cc: chris.mason, david, hch, akpm, jack, yanmin_zhang, Jens Axboe

Make sync and async requests fully share the tag space, but disallow async
requests from using the last two slots.
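
As a minimal standalone sketch of the new clamp (the helper name and its
arguments are made up for illustration and are not part of the patch), the
check applied to async requests in blk_queue_start_tag() boils down to:

static int async_may_queue(unsigned int max_depth, unsigned int async_in_flight)
{
	if (max_depth > 1) {
		max_depth -= 2;			/* keep the last two tags for sync IO */
		if (!max_depth)
			max_depth = 1;
		if (async_in_flight > max_depth)
			return 0;		/* defer this async request */
	}
	return 1;
}

With a tag depth of 8, for example, async requests start being deferred once
more than 6 of them are in flight, while sync requests can still use the full
tag space.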

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
 block/blk-barrier.c    |    2 +-
 block/blk-tag.c        |   15 +++++++++------
 block/elevator.c       |   10 +++++-----
 include/linux/blkdev.h |    7 ++++++-
 4 files changed, 21 insertions(+), 13 deletions(-)

diff --git a/block/blk-barrier.c b/block/blk-barrier.c
index 20b4111..3716ba5 100644
--- a/block/blk-barrier.c
+++ b/block/blk-barrier.c
@@ -221,7 +221,7 @@ static inline bool start_ordered(struct request_queue *q, struct request **rqp)
 	} else
 		skip |= QUEUE_ORDSEQ_PREFLUSH;
 
-	if ((q->ordered & QUEUE_ORDERED_BY_DRAIN) && q->in_flight)
+	if ((q->ordered & QUEUE_ORDERED_BY_DRAIN) && queue_in_flight(q))
 		rq = NULL;
 	else
 		skip |= QUEUE_ORDSEQ_DRAIN;
diff --git a/block/blk-tag.c b/block/blk-tag.c
index 3c518e3..e9a7501 100644
--- a/block/blk-tag.c
+++ b/block/blk-tag.c
@@ -336,7 +336,7 @@ EXPORT_SYMBOL(blk_queue_end_tag);
 int blk_queue_start_tag(struct request_queue *q, struct request *rq)
 {
 	struct blk_queue_tag *bqt = q->queue_tags;
-	unsigned max_depth, offset;
+	unsigned max_depth;
 	int tag;
 
 	if (unlikely((rq->cmd_flags & REQ_QUEUED))) {
@@ -355,13 +355,16 @@ int blk_queue_start_tag(struct request_queue *q, struct request *rq)
 	 * to starve sync IO on behalf of flooding async IO.
 	 */
 	max_depth = bqt->max_depth;
-	if (rq_is_sync(rq))
-		offset = 0;
-	else
-		offset = max_depth >> 2;
+	if (!rq_is_sync(rq) && max_depth > 1) {
+		max_depth -= 2;
+		if (!max_depth)
+			max_depth = 1;
+		if (q->in_flight[0] > max_depth)
+			return 1;
+	}
 
 	do {
-		tag = find_next_zero_bit(bqt->tag_map, max_depth, offset);
+		tag = find_first_zero_bit(bqt->tag_map, max_depth);
 		if (tag >= max_depth)
 			return 1;
 
diff --git a/block/elevator.c b/block/elevator.c
index c7143fb..6261b24 100644
--- a/block/elevator.c
+++ b/block/elevator.c
@@ -555,7 +555,7 @@ void elv_requeue_request(struct request_queue *q, struct request *rq)
 	 * in_flight count again
 	 */
 	if (blk_account_rq(rq)) {
-		q->in_flight--;
+		q->in_flight[rq_is_sync(rq)]--;
 		if (blk_sorted_rq(rq))
 			elv_deactivate_rq(q, rq);
 	}
@@ -697,7 +697,7 @@ void elv_insert(struct request_queue *q, struct request *rq, int where)
 
 	if (unplug_it && blk_queue_plugged(q)) {
 		int nrq = q->rq.count[BLK_RW_SYNC] + q->rq.count[BLK_RW_ASYNC]
-			- q->in_flight;
+			- queue_in_flight(q);
 
 		if (nrq >= q->unplug_thresh)
 			__generic_unplug_device(q);
@@ -861,7 +861,7 @@ void elv_dequeue_request(struct request_queue *q, struct request *rq)
 	 * the driver side.
 	 */
 	if (blk_account_rq(rq))
-		q->in_flight++;
+		q->in_flight[rq_is_sync(rq)]++;
 }
 
 int elv_queue_empty(struct request_queue *q)
@@ -934,7 +934,7 @@ void elv_completed_request(struct request_queue *q, struct request *rq)
 	 * request is released from the driver, io must be done
 	 */
 	if (blk_account_rq(rq)) {
-		q->in_flight--;
+		q->in_flight[rq_is_sync(rq)]--;
 		if (blk_sorted_rq(rq) && q->elv_ops.elevator_completed_req_fn)
 			elv_call_completed_req_fn(q, rq);
 	}
@@ -949,7 +949,7 @@ void elv_completed_request(struct request_queue *q, struct request *rq)
 		if (!list_empty(&q->queue_head))
 			next = list_entry_rq(q->queue_head.next);
 
-		if (!q->in_flight &&
+		if (!queue_in_flight(q) &&
 		    blk_ordered_cur_seq(q) == QUEUE_ORDSEQ_DRAIN &&
 		    (!next || blk_ordered_req_seq(next) > QUEUE_ORDSEQ_DRAIN)) {
 			blk_ordered_complete_seq(q, QUEUE_ORDSEQ_DRAIN, 0);
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 4d6db9f..ca322da 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -416,7 +416,7 @@ struct request_queue
 	struct list_head	tag_busy_list;
 
 	unsigned int		nr_sorted;
-	unsigned int		in_flight;
+	unsigned int		in_flight[2];
 
 	unsigned int		rq_timeout;
 	struct timer_list	timeout;
@@ -528,6 +528,11 @@ static inline void queue_flag_clear_unlocked(unsigned int flag,
 	__clear_bit(flag, &q->queue_flags);
 }
 
+static inline int queue_in_flight(struct request_queue *q)
+{
+	return q->in_flight[0] + q->in_flight[1];
+}
+
 static inline void queue_flag_clear(unsigned int flag, struct request_queue *q)
 {
 	WARN_ON_ONCE(!queue_is_locked(q));
-- 
1.6.3.rc0.1.gf800


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [PATCH 08/12] writeback: include default_backing_dev_info in writeback
  2009-05-25  7:30 [PATCH 0/12] Per-bdi writeback flusher threads #5 Jens Axboe
                   ` (14 preceding siblings ...)
  2009-05-25  7:30 ` [PATCH 08/13] block: change the tag sync vs async restriction logic Jens Axboe
@ 2009-05-25  7:30 ` Jens Axboe
  2009-05-25  7:31 ` [PATCH 09/13] libata: switch to using block layer tagging support Jens Axboe
                   ` (9 subsequent siblings)
  25 siblings, 0 replies; 61+ messages in thread
From: Jens Axboe @ 2009-05-25  7:30 UTC (permalink / raw)
  To: linux-kernel, linux-fsdevel
  Cc: chris.mason, david, hch, akpm, jack, yanmin_zhang, Jens Axboe

We see dirty inodes there occasionally, so better be safe and write them
out.

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
 fs/fs-writeback.c         |    2 +-
 include/linux/writeback.h |    1 +
 mm/backing-dev.c          |   16 +++++++++++-----
 3 files changed, 13 insertions(+), 6 deletions(-)

diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index 563860c..47f5ace 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -364,7 +364,7 @@ static void wb_writeback(struct bdi_writeback *wb)
  * This will be inlined in bdi_writeback_task() once we get rid of any
  * dirty inodes on the default_backing_dev_info
  */
-static void wb_do_writeback(struct bdi_writeback *wb)
+void wb_do_writeback(struct bdi_writeback *wb)
 {
 	/*
 	 * We get here in two cases:
diff --git a/include/linux/writeback.h b/include/linux/writeback.h
index baf04a9..e414702 100644
--- a/include/linux/writeback.h
+++ b/include/linux/writeback.h
@@ -69,6 +69,7 @@ void writeback_inodes(struct writeback_control *wbc);
 int inode_wait(void *);
 void sync_inodes_sb(struct super_block *, int wait);
 void sync_inodes(int wait);
+void wb_do_writeback(struct bdi_writeback *wb);
 
 /* writeback.h requires fs.h; it, too, is not included from here. */
 static inline void wait_on_inode(struct inode *inode)
diff --git a/mm/backing-dev.c b/mm/backing-dev.c
index 3e74041..3a032be 100644
--- a/mm/backing-dev.c
+++ b/mm/backing-dev.c
@@ -388,12 +388,14 @@ static int bdi_forker_task(void *ptr)
 		struct bdi_writeback *wb;
 
 		/*
-		 * Should never trigger on the default bdi
+		 * Ideally we'd like not to see any dirty inodes on the
+		 * default_backing_dev_info. Until these are tracked down,
+		 * perform the same writeback here that bdi_writeback_task
+		 * does. For logic, see comment in
+		 * fs/fs-writeback.c:bdi_writeback_task()
 		 */
-		if (wb_has_dirty_io(me)) {
-			bdi_flush_io(me->bdi);
-			WARN_ON(1);
-		}
+		if (wb_has_dirty_io(me) || !list_empty(&me->bdi->work_list))
+			wb_do_writeback(me);
 
 		prepare_to_wait(&me->wait, &wait, TASK_INTERRUPTIBLE);
 
@@ -420,6 +422,10 @@ static int bdi_forker_task(void *ptr)
 			continue;
 		}
 
+		/*
+		 * This is our real job - check for pending entries in
+		 * bdi_pending_list, and create the tasks that got added
+		 */
 		bdi = list_entry(bdi_pending_list.next, struct backing_dev_info,
 				 bdi_list);
 		list_del_init(&bdi->bdi_list);
-- 
1.6.3.rc0.1.gf800


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [PATCH 09/13] libata: switch to using block layer tagging support
  2009-05-25  7:30 [PATCH 0/12] Per-bdi writeback flusher threads #5 Jens Axboe
                   ` (15 preceding siblings ...)
  2009-05-25  7:30 ` [PATCH 08/12] writeback: include default_backing_dev_info in writeback Jens Axboe
@ 2009-05-25  7:31 ` Jens Axboe
  2009-05-25  7:31 ` [PATCH 09/12] writeback: allow sleepy exit of default writeback task Jens Axboe
                   ` (8 subsequent siblings)
  25 siblings, 0 replies; 61+ messages in thread
From: Jens Axboe @ 2009-05-25  7:31 UTC (permalink / raw)
  To: linux-kernel, linux-fsdevel
  Cc: chris.mason, david, hch, akpm, jack, yanmin_zhang, Jens Axboe

libata currently has a pretty dumb ATA_MAX_QUEUE loop for finding
a free tag to use. Instead of fixing that up, convert libata to
use block layer tagging; this gets rid of code in libata and is also
much faster.
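
Condensed from the libata-scsi.c hunk below, the consumer side of the
conversion is simply that the tag handed out by the block layer (or tag 0 for
untagged commands) now selects the qc slot directly, replacing the old
per-port qc_allocated bitmap scan:

	if (cmd->request->tag != -1)
		qc = ata_qc_new_init(dev, cmd->request->tag);
	else
		qc = ata_qc_new_init(dev, 0);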

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
 drivers/ata/libata-core.c |   65 ++++----------------------------------------
 drivers/ata/libata-scsi.c |   23 ++++++++++++++-
 drivers/ata/libata.h      |   19 ++++++++++++-
 include/linux/libata.h    |    1 -
 4 files changed, 44 insertions(+), 64 deletions(-)

diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c
index ca4d208..74c9055 100644
--- a/drivers/ata/libata-core.c
+++ b/drivers/ata/libata-core.c
@@ -1789,8 +1789,6 @@ unsigned ata_exec_internal_sg(struct ata_device *dev,
 	else
 		tag = 0;
 
-	if (test_and_set_bit(tag, &ap->qc_allocated))
-		BUG();
 	qc = __ata_qc_from_tag(ap, tag);
 
 	qc->tag = tag;
@@ -4785,36 +4783,6 @@ void swap_buf_le16(u16 *buf, unsigned int buf_words)
 }
 
 /**
- *	ata_qc_new - Request an available ATA command, for queueing
- *	@ap: target port
- *
- *	LOCKING:
- *	None.
- */
-
-static struct ata_queued_cmd *ata_qc_new(struct ata_port *ap)
-{
-	struct ata_queued_cmd *qc = NULL;
-	unsigned int i;
-
-	/* no command while frozen */
-	if (unlikely(ap->pflags & ATA_PFLAG_FROZEN))
-		return NULL;
-
-	/* the last tag is reserved for internal command. */
-	for (i = 0; i < ATA_MAX_QUEUE - 1; i++)
-		if (!test_and_set_bit(i, &ap->qc_allocated)) {
-			qc = __ata_qc_from_tag(ap, i);
-			break;
-		}
-
-	if (qc)
-		qc->tag = i;
-
-	return qc;
-}
-
-/**
  *	ata_qc_new_init - Request an available ATA command, and initialize it
  *	@dev: Device from whom we request an available command structure
  *
@@ -4822,16 +4790,20 @@ static struct ata_queued_cmd *ata_qc_new(struct ata_port *ap)
  *	None.
  */
 
-struct ata_queued_cmd *ata_qc_new_init(struct ata_device *dev)
+struct ata_queued_cmd *ata_qc_new_init(struct ata_device *dev, int tag)
 {
 	struct ata_port *ap = dev->link->ap;
 	struct ata_queued_cmd *qc;
 
-	qc = ata_qc_new(ap);
+	if (unlikely(ap->pflags & ATA_PFLAG_FROZEN))
+		return NULL;
+
+	qc = __ata_qc_from_tag(ap, tag);
 	if (qc) {
 		qc->scsicmd = NULL;
 		qc->ap = ap;
 		qc->dev = dev;
+		qc->tag = tag;
 
 		ata_qc_reinit(qc);
 	}
@@ -4839,31 +4811,6 @@ struct ata_queued_cmd *ata_qc_new_init(struct ata_device *dev)
 	return qc;
 }
 
-/**
- *	ata_qc_free - free unused ata_queued_cmd
- *	@qc: Command to complete
- *
- *	Designed to free unused ata_queued_cmd object
- *	in case something prevents using it.
- *
- *	LOCKING:
- *	spin_lock_irqsave(host lock)
- */
-void ata_qc_free(struct ata_queued_cmd *qc)
-{
-	struct ata_port *ap = qc->ap;
-	unsigned int tag;
-
-	WARN_ON_ONCE(qc == NULL); /* ata_qc_from_tag _might_ return NULL */
-
-	qc->flags = 0;
-	tag = qc->tag;
-	if (likely(ata_tag_valid(tag))) {
-		qc->tag = ATA_TAG_POISON;
-		clear_bit(tag, &ap->qc_allocated);
-	}
-}
-
 void __ata_qc_complete(struct ata_queued_cmd *qc)
 {
 	struct ata_port *ap = qc->ap;
diff --git a/drivers/ata/libata-scsi.c b/drivers/ata/libata-scsi.c
index 3423160..b0179c1 100644
--- a/drivers/ata/libata-scsi.c
+++ b/drivers/ata/libata-scsi.c
@@ -742,7 +742,11 @@ static struct ata_queued_cmd *ata_scsi_qc_new(struct ata_device *dev,
 {
 	struct ata_queued_cmd *qc;
 
-	qc = ata_qc_new_init(dev);
+	if (cmd->request->tag != -1)
+		qc = ata_qc_new_init(dev, cmd->request->tag);
+	else
+		qc = ata_qc_new_init(dev, 0);
+
 	if (qc) {
 		qc->scsicmd = cmd;
 		qc->scsidone = done;
@@ -1137,7 +1141,17 @@ static int ata_scsi_dev_config(struct scsi_device *sdev,
 
 		depth = min(sdev->host->can_queue, ata_id_queue_depth(dev->id));
 		depth = min(ATA_MAX_QUEUE - 1, depth);
-		scsi_adjust_queue_depth(sdev, MSG_SIMPLE_TAG, depth);
+
+		/*
+		 * If this device is behind a port multiplier, we have
+		 * to share the tag map between all devices on that PMP.
+		 * Set up the shared tag map here and sharing is handled automatically.
+		 */
+		if (dev->link->ap->pmp_link)
+			scsi_init_shared_tag_map(sdev->host, ATA_MAX_QUEUE - 1);
+
+		scsi_set_tag_type(sdev, MSG_SIMPLE_TAG);
+		scsi_activate_tcq(sdev, depth);
 	}
 
 	return 0;
@@ -1990,6 +2004,11 @@ static unsigned int ata_scsiop_inq_std(struct ata_scsi_args *args, u8 *rbuf)
 		hdr[1] |= (1 << 7);
 
 	memcpy(rbuf, hdr, sizeof(hdr));
+
+	/* if ncq, set tags supported */
+	if (ata_id_has_ncq(args->id))
+		rbuf[7] |= (1 << 1);
+
 	memcpy(&rbuf[8], "ATA     ", 8);
 	ata_id_string(args->id, &rbuf[16], ATA_ID_PROD, 16);
 	ata_id_string(args->id, &rbuf[32], ATA_ID_FW_REV, 4);
diff --git a/drivers/ata/libata.h b/drivers/ata/libata.h
index 89a1e00..bad444b 100644
--- a/drivers/ata/libata.h
+++ b/drivers/ata/libata.h
@@ -74,7 +74,7 @@ extern struct ata_link *ata_dev_phys_link(struct ata_device *dev);
 extern void ata_force_cbl(struct ata_port *ap);
 extern u64 ata_tf_to_lba(const struct ata_taskfile *tf);
 extern u64 ata_tf_to_lba48(const struct ata_taskfile *tf);
-extern struct ata_queued_cmd *ata_qc_new_init(struct ata_device *dev);
+extern struct ata_queued_cmd *ata_qc_new_init(struct ata_device *dev, int tag);
 extern int ata_build_rw_tf(struct ata_taskfile *tf, struct ata_device *dev,
 			   u64 block, u32 n_block, unsigned int tf_flags,
 			   unsigned int tag);
@@ -100,7 +100,6 @@ extern int ata_dev_configure(struct ata_device *dev);
 extern int sata_down_spd_limit(struct ata_link *link, u32 spd_limit);
 extern int ata_down_xfermask_limit(struct ata_device *dev, unsigned int sel);
 extern void ata_sg_clean(struct ata_queued_cmd *qc);
-extern void ata_qc_free(struct ata_queued_cmd *qc);
 extern void ata_qc_issue(struct ata_queued_cmd *qc);
 extern void __ata_qc_complete(struct ata_queued_cmd *qc);
 extern int atapi_check_dma(struct ata_queued_cmd *qc);
@@ -116,6 +115,22 @@ extern struct ata_port *ata_port_alloc(struct ata_host *host);
 extern void ata_dev_enable_pm(struct ata_device *dev, enum link_pm policy);
 extern void ata_lpm_schedule(struct ata_port *ap, enum link_pm);
 
+/**
+ *	ata_qc_free - free unused ata_queued_cmd
+ *	@qc: Command to complete
+ *
+ *	Designed to free unused ata_queued_cmd object
+ *	in case something prevents using it.
+ *
+ *	LOCKING:
+ *	spin_lock_irqsave(host lock)
+ */
+static inline void ata_qc_free(struct ata_queued_cmd *qc)
+{
+	qc->flags = 0;
+	qc->tag = ATA_TAG_POISON;
+}
+
 /* libata-acpi.c */
 #ifdef CONFIG_ATA_ACPI
 extern void ata_acpi_associate_sata_port(struct ata_port *ap);
diff --git a/include/linux/libata.h b/include/linux/libata.h
index 3d501db..cf1e54e 100644
--- a/include/linux/libata.h
+++ b/include/linux/libata.h
@@ -716,7 +716,6 @@ struct ata_port {
 	unsigned int		cbl;	/* cable type; ATA_CBL_xxx */
 
 	struct ata_queued_cmd	qcmd[ATA_MAX_QUEUE];
-	unsigned long		qc_allocated;
 	unsigned int		qc_active;
 	int			nr_active_links; /* #links with active qcs */
 
-- 
1.6.3.rc0.1.gf800


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [PATCH 09/12] writeback: allow sleepy exit of default writeback task
  2009-05-25  7:30 [PATCH 0/12] Per-bdi writeback flusher threads #5 Jens Axboe
                   ` (16 preceding siblings ...)
  2009-05-25  7:31 ` [PATCH 09/13] libata: switch to using block layer tagging support Jens Axboe
@ 2009-05-25  7:31 ` Jens Axboe
  2009-05-25  7:31 ` [PATCH 10/13] block: add function for waiting for a specific free tag Jens Axboe
                   ` (7 subsequent siblings)
  25 siblings, 0 replies; 61+ messages in thread
From: Jens Axboe @ 2009-05-25  7:31 UTC (permalink / raw)
  To: linux-kernel, linux-fsdevel
  Cc: chris.mason, david, hch, akpm, jack, yanmin_zhang, Jens Axboe

Since the default writeback task for a bdi is created lazily, we can
allow a sleepy exit once it has been completely idle for 5 minutes.
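
A condensed sketch of the exit check this adds to bdi_writeback_task(); the
helper below is hypothetical and only meant to show the shape of the test:

static int wb_should_exit(struct bdi_writeback *wb, long pages_written,
			  unsigned long last_active, unsigned long wait_jiffies)
{
	unsigned long max_idle;

	if (pages_written || wait_jiffies == -1UL)
		return 0;

	/* tolerate at least one full writeback interval of idle time */
	max_idle = max(5UL * 60 * HZ, wait_jiffies);
	return time_after(jiffies, max_idle + last_active) &&
		wb_is_default_task(wb);
}

Only the lazily created default task for a bdi exits this way; if dirty data
shows up again later, the forker task simply recreates it.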

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
 fs/fs-writeback.c           |   52 ++++++++++++++++++++++++++++++++++--------
 include/linux/backing-dev.h |    5 ++++
 include/linux/writeback.h   |    2 +-
 3 files changed, 48 insertions(+), 11 deletions(-)

diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index 47f5ace..1292a88 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -247,10 +247,10 @@ int bdi_start_writeback(struct backing_dev_info *bdi, struct super_block *sb,
  * older_than_this takes precedence over nr_to_write.  So we'll only write back
  * all dirty pages if they are all attached to "old" mappings.
  */
-static void wb_kupdated(struct bdi_writeback *wb)
+static long wb_kupdated(struct bdi_writeback *wb)
 {
 	unsigned long oldest_jif;
-	long nr_to_write;
+	long nr_to_write, wrote = 0;
 	struct writeback_control wbc = {
 		.bdi			= wb->bdi,
 		.sync_mode		= WB_SYNC_NONE,
@@ -273,10 +273,13 @@ static void wb_kupdated(struct bdi_writeback *wb)
 		wbc.encountered_congestion = 0;
 		wbc.nr_to_write = MAX_WRITEBACK_PAGES;
 		generic_sync_wb_inodes(wb, NULL, &wbc);
+		wrote += MAX_WRITEBACK_PAGES - wbc.nr_to_write;
 		if (wbc.nr_to_write > 0)
 			break;	/* All the old data is written */
 		nr_to_write -= MAX_WRITEBACK_PAGES;
 	}
+
+	return wrote;
 }
 
 static inline bool over_bground_thresh(void)
@@ -289,7 +292,7 @@ static inline bool over_bground_thresh(void)
 		global_page_state(NR_UNSTABLE_NFS) >= background_thresh);
 }
 
-static void __wb_writeback(struct bdi_writeback *wb, long nr_pages,
+static long __wb_writeback(struct bdi_writeback *wb, long nr_pages,
 			   struct super_block *sb,
 			   enum writeback_sync_modes sync_mode)
 {
@@ -299,6 +302,7 @@ static void __wb_writeback(struct bdi_writeback *wb, long nr_pages,
 		.older_than_this	= NULL,
 		.range_cyclic		= 1,
 	};
+	long wrote = 0;
 
 	for (;;) {
 		if (sync_mode == WB_SYNC_NONE && nr_pages <= 0 &&
@@ -311,6 +315,7 @@ static void __wb_writeback(struct bdi_writeback *wb, long nr_pages,
 		wbc.pages_skipped = 0;
 		generic_sync_wb_inodes(wb, sb, &wbc);
 		nr_pages -= MAX_WRITEBACK_PAGES - wbc.nr_to_write;
+		wrote += MAX_WRITEBACK_PAGES - wbc.nr_to_write;
 		/*
 		 * If we ran out of stuff to write, bail unless more_io got set
 		 */
@@ -320,6 +325,8 @@ static void __wb_writeback(struct bdi_writeback *wb, long nr_pages,
 			break;
 		}
 	}
+
+	return wrote;
 }
 
 /*
@@ -345,10 +352,11 @@ static struct bdi_work *get_next_work_item(struct backing_dev_info *bdi,
 	return ret;
 }
 
-static void wb_writeback(struct bdi_writeback *wb)
+static long wb_writeback(struct bdi_writeback *wb)
 {
 	struct backing_dev_info *bdi = wb->bdi;
 	struct bdi_work *work;
+	long wrote = 0;
 
 	while ((work = get_next_work_item(bdi, wb)) != NULL) {
 		struct super_block *sb = bdi_work_sb(work);
@@ -356,16 +364,20 @@ static void wb_writeback(struct bdi_writeback *wb)
 		enum writeback_sync_modes sync_mode = work->sync_mode;
 
 		wb_clear_pending(wb, work);
-		__wb_writeback(wb, nr_pages, sb, sync_mode);
+		wrote += __wb_writeback(wb, nr_pages, sb, sync_mode);
 	}
+
+	return wrote;
 }
 
 /*
  * This will be inlined in bdi_writeback_task() once we get rid of any
  * dirty inodes on the default_backing_dev_info
  */
-void wb_do_writeback(struct bdi_writeback *wb)
+long wb_do_writeback(struct bdi_writeback *wb)
 {
+	long wrote;
+
 	/*
 	 * We get here in two cases:
 	 *
@@ -377,9 +389,11 @@ void wb_do_writeback(struct bdi_writeback *wb)
 	 *  items on the work_list. Process those.
 	 */
 	if (list_empty(&wb->bdi->work_list))
-		wb_kupdated(wb);
+		wrote = wb_kupdated(wb);
 	else
-		wb_writeback(wb);
+		wrote = wb_writeback(wb);
+
+	return wrote;
 }
 
 /*
@@ -388,12 +402,30 @@ void wb_do_writeback(struct bdi_writeback *wb)
  */
 int bdi_writeback_task(struct bdi_writeback *wb)
 {
+	unsigned long last_active = jiffies;
+	unsigned long wait_jiffies = -1UL;
+	long pages_written;
 	DEFINE_WAIT(wait);
 
 	while (!kthread_should_stop()) {
-		unsigned long wait_jiffies;
 
-		wb_do_writeback(wb);
+		pages_written = wb_do_writeback(wb);
+
+		if (pages_written)
+			last_active = jiffies;
+		else if (wait_jiffies != -1UL) {
+			unsigned long max_idle;
+
+			/*
+			 * Longest period of inactivity that we tolerate. If we
+			 * see dirty data again later, the task will get
+			 * recreated automatically.
+			 */
+			max_idle = max(5UL * 60 * HZ, wait_jiffies);
+			if (time_after(jiffies, max_idle + last_active) &&
+			    wb_is_default_task(wb))
+				break;
+		}
 
 		prepare_to_wait(&wb->wait, &wait, TASK_INTERRUPTIBLE);
 		wait_jiffies = msecs_to_jiffies(dirty_writeback_interval * 10);
diff --git a/include/linux/backing-dev.h b/include/linux/backing-dev.h
index 72b4797..53e6c8d 100644
--- a/include/linux/backing-dev.h
+++ b/include/linux/backing-dev.h
@@ -113,6 +113,11 @@ int bdi_has_dirty_io(struct backing_dev_info *bdi);
 extern struct mutex bdi_lock;
 extern struct list_head bdi_list;
 
+static inline int wb_is_default_task(struct bdi_writeback *wb)
+{
+	return wb == &wb->bdi->wb;
+}
+
 static inline int bdi_wblist_needs_lock(struct backing_dev_info *bdi)
 {
 	return test_bit(BDI_wblist_lock, &bdi->state);
diff --git a/include/linux/writeback.h b/include/linux/writeback.h
index e414702..30e318b 100644
--- a/include/linux/writeback.h
+++ b/include/linux/writeback.h
@@ -69,7 +69,7 @@ void writeback_inodes(struct writeback_control *wbc);
 int inode_wait(void *);
 void sync_inodes_sb(struct super_block *, int wait);
 void sync_inodes(int wait);
-void wb_do_writeback(struct bdi_writeback *wb);
+long wb_do_writeback(struct bdi_writeback *wb);
 
 /* writeback.h requires fs.h; it, too, is not included from here. */
 static inline void wait_on_inode(struct inode *inode)
-- 
1.6.3.rc0.1.gf800


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [PATCH 10/13] block: add function for waiting for a specific free tag
  2009-05-25  7:30 [PATCH 0/12] Per-bdi writeback flusher threads #5 Jens Axboe
                   ` (17 preceding siblings ...)
  2009-05-25  7:31 ` [PATCH 09/12] writeback: allow sleepy exit of default writeback task Jens Axboe
@ 2009-05-25  7:31 ` Jens Axboe
  2009-05-25  7:31 ` [PATCH 10/12] writeback: add some debug inode list counters to bdi stats Jens Axboe
                   ` (6 subsequent siblings)
  25 siblings, 0 replies; 61+ messages in thread
From: Jens Axboe @ 2009-05-25  7:31 UTC (permalink / raw)
  To: linux-kernel, linux-fsdevel
  Cc: chris.mason, david, hch, akpm, jack, yanmin_zhang, Jens Axboe

We need this in libata to ensure that we don't race between internal
tag usage and the block layer's tag usage.
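
The intended usage, condensed from the libata hunk below: before reusing a
tag for an internal command, reserve it from the block layer, then hand it
back once the command has completed.

	if (dev->sdev && dev->sdev->request_queue)
		blk_queue_acquire_tag(dev->sdev->request_queue, tag);

	/* ... build, issue and wait for the internal command ... */

	if (dev->sdev && dev->sdev->request_queue)
		blk_queue_release_tag(dev->sdev->request_queue, tag);

Note that blk_queue_acquire_tag() may sleep, dropping and retaking
q->queue_lock while it waits for the tag to become free.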

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
 block/blk-tag.c           |   99 ++++++++++++++++++++++++++++++++-------------
 drivers/ata/libata-core.c |   29 +++++++++----
 include/linux/blkdev.h    |    3 +
 3 files changed, 95 insertions(+), 36 deletions(-)

diff --git a/block/blk-tag.c b/block/blk-tag.c
index e9a7501..208468b 100644
--- a/block/blk-tag.c
+++ b/block/blk-tag.c
@@ -149,6 +149,7 @@ static struct blk_queue_tag *__blk_queue_init_tags(struct request_queue *q,
 		goto fail;
 
 	atomic_set(&tags->refcnt, 1);
+	init_waitqueue_head(&tags->waitq);
 	return tags;
 fail:
 	kfree(tags);
@@ -264,6 +265,65 @@ int blk_queue_resize_tags(struct request_queue *q, int new_depth)
 }
 EXPORT_SYMBOL(blk_queue_resize_tags);
 
+void blk_queue_acquire_tag(struct request_queue *q, int tag)
+{
+	struct blk_queue_tag *bqt;
+
+	if (!blk_queue_tagged(q) || !q->queue_tags)
+		return;
+
+	bqt = q->queue_tags;
+	do {
+		DEFINE_WAIT(wait);
+
+		if (!test_and_set_bit_lock(tag, bqt->tag_map))
+			break;
+
+		prepare_to_wait(&bqt->waitq, &wait, TASK_UNINTERRUPTIBLE);
+		if (test_and_set_bit_lock(tag, bqt->tag_map)) {
+			spin_unlock_irq(q->queue_lock);
+			schedule();
+			spin_lock_irq(q->queue_lock);
+		}
+		finish_wait(&bqt->waitq, &wait);
+	} while (1);
+}
+
+void blk_queue_release_tag(struct request_queue *q, int tag)
+{
+	struct blk_queue_tag *bqt = q->queue_tags;
+
+	if (!blk_queue_tagged(q))
+		return;
+
+	/*
+	 * Normally we store a request pointer in the tag index, but for
+	 * blk_queue_acquire_tag() usage, we may not have something specific
+	 * assigned to the tag slot. In any case, be safe and clear it.
+	 */
+	bqt->tag_index[tag] = NULL;
+
+	if (unlikely(!test_bit(tag, bqt->tag_map))) {
+		printk(KERN_ERR "%s: attempt to clear non-busy tag (%d)\n",
+		       __func__, tag);
+		return;
+	}
+	/*
+	 * The tag_map bit acts as a lock for tag_index[bit], so we need
+	 * unlock memory barrier semantics.
+	 */
+	clear_bit_unlock(tag, bqt->tag_map);
+
+	/*
+	 * We don't need a memory barrier here, since we have the bit lock
+	 * ordering above. Otherwise it would need an smp_mb();
+	 */
+	if (waitqueue_active(&bqt->waitq))
+		wake_up(&bqt->waitq);
+
+}
+EXPORT_SYMBOL(blk_queue_release_tag);
+
 /**
  * blk_queue_end_tag - end tag operations for a request
  * @q:  the request queue for the device
@@ -285,33 +345,17 @@ void blk_queue_end_tag(struct request_queue *q, struct request *rq)
 
 	BUG_ON(tag == -1);
 
-	if (unlikely(tag >= bqt->real_max_depth))
-		/*
-		 * This can happen after tag depth has been reduced.
-		 * FIXME: how about a warning or info message here?
-		 */
-		return;
-
-	list_del_init(&rq->queuelist);
-	rq->cmd_flags &= ~REQ_QUEUED;
-	rq->tag = -1;
-
-	if (unlikely(bqt->tag_index[tag] == NULL))
-		printk(KERN_ERR "%s: tag %d is missing\n",
-		       __func__, tag);
-
-	bqt->tag_index[tag] = NULL;
-
-	if (unlikely(!test_bit(tag, bqt->tag_map))) {
-		printk(KERN_ERR "%s: attempt to clear non-busy tag (%d)\n",
-		       __func__, tag);
-		return;
-	}
 	/*
-	 * The tag_map bit acts as a lock for tag_index[bit], so we need
-	 * unlock memory barrier semantics.
+	 * When the tag depth is being reduced, we don't wait for higher tags
+	 * to finish. So we could see this triggering without it being an error.
 	 */
-	clear_bit_unlock(tag, bqt->tag_map);
+	if (tag < bqt->real_max_depth) {
+		list_del_init(&rq->queuelist);
+		rq->cmd_flags &= ~REQ_QUEUED;
+		rq->tag = -1;
+
+		blk_queue_release_tag(q, tag);
+	}
 }
 EXPORT_SYMBOL(blk_queue_end_tag);
 
@@ -336,8 +380,7 @@ EXPORT_SYMBOL(blk_queue_end_tag);
 int blk_queue_start_tag(struct request_queue *q, struct request *rq)
 {
 	struct blk_queue_tag *bqt = q->queue_tags;
-	unsigned max_depth;
-	int tag;
+	int max_depth, tag;
 
 	if (unlikely((rq->cmd_flags & REQ_QUEUED))) {
 		printk(KERN_ERR
@@ -371,7 +414,7 @@ int blk_queue_start_tag(struct request_queue *q, struct request *rq)
 	} while (test_and_set_bit_lock(tag, bqt->tag_map));
 	/*
 	 * We need lock ordering semantics given by test_and_set_bit_lock.
-	 * See blk_queue_end_tag for details.
+	 * See blk_queue_release_tag() for details.
 	 */
 
 	rq->cmd_flags |= REQ_QUEUED;
diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c
index 74c9055..5e43f2b 100644
--- a/drivers/ata/libata-core.c
+++ b/drivers/ata/libata-core.c
@@ -61,6 +61,7 @@
 #include <scsi/scsi.h>
 #include <scsi/scsi_cmnd.h>
 #include <scsi/scsi_host.h>
+#include <scsi/scsi_device.h>
 #include <linux/libata.h>
 #include <asm/byteorder.h>
 #include <linux/cdrom.h>
@@ -1765,15 +1766,14 @@ unsigned ata_exec_internal_sg(struct ata_device *dev,
 	u32 preempted_sactive, preempted_qc_active;
 	int preempted_nr_active_links;
 	DECLARE_COMPLETION_ONSTACK(wait);
-	unsigned long flags;
 	unsigned int err_mask;
 	int rc;
 
-	spin_lock_irqsave(ap->lock, flags);
+	spin_lock_irq(ap->lock);
 
 	/* no internal command while frozen */
 	if (ap->pflags & ATA_PFLAG_FROZEN) {
-		spin_unlock_irqrestore(ap->lock, flags);
+		spin_unlock_irq(ap->lock);
 		return AC_ERR_SYSTEM;
 	}
 
@@ -1789,6 +1789,16 @@ unsigned ata_exec_internal_sg(struct ata_device *dev,
 	else
 		tag = 0;
 
+	/*
+	 * We could be racing with the tag freeing in the block layer, so
+	 * we need to ensure that our tag is free.
+	 */
+	if (dev->sdev && dev->sdev->request_queue)
+		blk_queue_acquire_tag(dev->sdev->request_queue, tag);
+
+	/*
+	 * The tag is now ours
+	 */
 	qc = __ata_qc_from_tag(ap, tag);
 
 	qc->tag = tag;
@@ -1828,7 +1838,7 @@ unsigned ata_exec_internal_sg(struct ata_device *dev,
 
 	ata_qc_issue(qc);
 
-	spin_unlock_irqrestore(ap->lock, flags);
+	spin_unlock_irq(ap->lock);
 
 	if (!timeout) {
 		if (ata_probe_timeout)
@@ -1844,7 +1854,7 @@ unsigned ata_exec_internal_sg(struct ata_device *dev,
 	ata_port_flush_task(ap);
 
 	if (!rc) {
-		spin_lock_irqsave(ap->lock, flags);
+		spin_lock_irq(ap->lock);
 
 		/* We're racing with irq here.  If we lose, the
 		 * following test prevents us from completing the qc
@@ -1864,7 +1874,7 @@ unsigned ata_exec_internal_sg(struct ata_device *dev,
 					"qc timeout (cmd 0x%x)\n", command);
 		}
 
-		spin_unlock_irqrestore(ap->lock, flags);
+		spin_unlock_irq(ap->lock);
 	}
 
 	/* do post_internal_cmd */
@@ -1884,11 +1894,14 @@ unsigned ata_exec_internal_sg(struct ata_device *dev,
 	}
 
 	/* finish up */
-	spin_lock_irqsave(ap->lock, flags);
+	spin_lock_irq(ap->lock);
 
 	*tf = qc->result_tf;
 	err_mask = qc->err_mask;
 
+	if (dev->sdev && dev->sdev->request_queue)
+		blk_queue_release_tag(dev->sdev->request_queue, tag);
+
 	ata_qc_free(qc);
 	link->active_tag = preempted_tag;
 	link->sactive = preempted_sactive;
@@ -1911,7 +1924,7 @@ unsigned ata_exec_internal_sg(struct ata_device *dev,
 		ata_port_probe(ap);
 	}
 
-	spin_unlock_irqrestore(ap->lock, flags);
+	spin_unlock_irq(ap->lock);
 
 	if ((err_mask & AC_ERR_TIMEOUT) && auto_timeout)
 		ata_internal_cmd_timed_out(dev, command);
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index ca322da..f2b6b92 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -307,6 +307,7 @@ struct blk_queue_tag {
 	int max_depth;			/* what we will send to device */
 	int real_max_depth;		/* what the array can hold */
 	atomic_t refcnt;		/* map can be shared */
+	wait_queue_head_t waitq;	/* for waiting on tags */
 };
 
 #define BLK_SCSI_MAX_CMDS	(256)
@@ -929,6 +930,8 @@ extern void blk_put_queue(struct request_queue *);
  * tag stuff
  */
 #define blk_rq_tagged(rq)		((rq)->cmd_flags & REQ_QUEUED)
+extern void blk_queue_acquire_tag(struct request_queue *, int);
+extern void blk_queue_release_tag(struct request_queue *, int);
 extern int blk_queue_start_tag(struct request_queue *, struct request *);
 extern struct request *blk_queue_find_tag(struct request_queue *, int);
 extern void blk_queue_end_tag(struct request_queue *, struct request *);
-- 
1.6.3.rc0.1.gf800


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [PATCH 10/12] writeback: add some debug inode list counters to bdi stats
  2009-05-25  7:30 [PATCH 0/12] Per-bdi writeback flusher threads #5 Jens Axboe
                   ` (18 preceding siblings ...)
  2009-05-25  7:31 ` [PATCH 10/13] block: add function for waiting for a specific free tag Jens Axboe
@ 2009-05-25  7:31 ` Jens Axboe
  2009-05-25  7:31 ` [PATCH 11/13] block: disallow merging of read-ahead bits into normal request Jens Axboe
                   ` (5 subsequent siblings)
  25 siblings, 0 replies; 61+ messages in thread
From: Jens Axboe @ 2009-05-25  7:31 UTC (permalink / raw)
  To: linux-kernel, linux-fsdevel
  Cc: chris.mason, david, hch, akpm, jack, yanmin_zhang, Jens Axboe

Add some debug entries to make it possible to inspect the internal
writeback state.

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
 mm/backing-dev.c |   38 ++++++++++++++++++++++++++++++++++----
 1 files changed, 34 insertions(+), 4 deletions(-)

diff --git a/mm/backing-dev.c b/mm/backing-dev.c
index 3a032be..fcc0b2a 100644
--- a/mm/backing-dev.c
+++ b/mm/backing-dev.c
@@ -43,9 +43,29 @@ static void bdi_debug_init(void)
 static int bdi_debug_stats_show(struct seq_file *m, void *v)
 {
 	struct backing_dev_info *bdi = m->private;
+	struct bdi_writeback *wb;
 	unsigned long background_thresh;
 	unsigned long dirty_thresh;
 	unsigned long bdi_thresh;
+	unsigned long nr_dirty, nr_io, nr_more_io, nr_wb;
+	struct inode *inode;
+
+	/*
+	 * inode lock is enough here, the bdi->wb_list is protected by
+	 * RCU on the reader side
+	 */
+	nr_wb = nr_dirty = nr_io = nr_more_io = 0;
+	spin_lock(&inode_lock);
+	list_for_each_entry(wb, &bdi->wb_list, list) {
+		nr_wb++;
+		list_for_each_entry(inode, &wb->b_dirty, i_list)
+			nr_dirty++;
+		list_for_each_entry(inode, &wb->b_io, i_list)
+			nr_io++;
+		list_for_each_entry(inode, &wb->b_more_io, i_list)
+			nr_more_io++;
+	}
+	spin_unlock(&inode_lock);
 
 	get_dirty_limits(&background_thresh, &dirty_thresh, &bdi_thresh, bdi);
 
@@ -55,12 +75,22 @@ static int bdi_debug_stats_show(struct seq_file *m, void *v)
 		   "BdiReclaimable:   %8lu kB\n"
 		   "BdiDirtyThresh:   %8lu kB\n"
 		   "DirtyThresh:      %8lu kB\n"
-		   "BackgroundThresh: %8lu kB\n",
+		   "BackgroundThresh: %8lu kB\n"
+		   "WriteBack threads:%8lu\n"
+		   "b_dirty:          %8lu\n"
+		   "b_io:             %8lu\n"
+		   "b_more_io:        %8lu\n"
+		   "bdi_list:         %8u\n"
+		   "state:            %8lx\n"
+		   "wb_mask:          %8lx\n"
+		   "wb_list:          %8u\n"
+		   "wb_cnt:           %8u\n",
 		   (unsigned long) K(bdi_stat(bdi, BDI_WRITEBACK)),
 		   (unsigned long) K(bdi_stat(bdi, BDI_RECLAIMABLE)),
-		   K(bdi_thresh),
-		   K(dirty_thresh),
-		   K(background_thresh));
+		   K(bdi_thresh), K(dirty_thresh),
+		   K(background_thresh), nr_wb, nr_dirty, nr_io, nr_more_io,
+		   !list_empty(&bdi->bdi_list), bdi->state, bdi->wb_mask,
+		   !list_empty(&bdi->wb_list), bdi->wb_cnt);
 #undef K
 
 	return 0;
-- 
1.6.3.rc0.1.gf800


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [PATCH 11/13] block: disallow merging of read-ahead bits into normal request
  2009-05-25  7:30 [PATCH 0/12] Per-bdi writeback flusher threads #5 Jens Axboe
                   ` (19 preceding siblings ...)
  2009-05-25  7:31 ` [PATCH 10/12] writeback: add some debug inode list counters to bdi stats Jens Axboe
@ 2009-05-25  7:31 ` Jens Axboe
  2009-05-25  7:31 ` [PATCH 11/12] writeback: add name to backing_dev_info Jens Axboe
                   ` (4 subsequent siblings)
  25 siblings, 0 replies; 61+ messages in thread
From: Jens Axboe @ 2009-05-25  7:31 UTC (permalink / raw)
  To: linux-kernel, linux-fsdevel
  Cc: chris.mason, david, hch, akpm, jack, yanmin_zhang, Jens Axboe

For SSD type devices, the request latency is really low. So for those
types of devices, we may not want to merge the read part of a request into
the read-ahead request that it also generates.

Add code to mpage.c to properly propagate read vs reada information to
the block layer and let the elevator core check and prevent such merges.
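
Condensed from the elevator.c hunk below, the merge check itself is small:
a read-ahead bio is refused a merge into a request that is not itself marked
failfast/read-ahead, and only on non-rotational (SSD) queues:

	if (blk_queue_nonrot(q) &&
	    bio_rw_ahead(bio) && !(rq->cmd_flags & REQ_FAILFAST_DEV))
		return 0;	/* disallow the merge */

The mpage.c side starts the range as READ and switches to READA once it sees
the read-ahead marker page, so the interesting (synchronous) part of the IO
is submitted as its own bio.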

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
 block/elevator.c |    7 +++++++
 fs/mpage.c       |   30 ++++++++++++++++++++++++------
 2 files changed, 31 insertions(+), 6 deletions(-)

diff --git a/block/elevator.c b/block/elevator.c
index 6261b24..17cfaa2 100644
--- a/block/elevator.c
+++ b/block/elevator.c
@@ -67,6 +67,13 @@ static int elv_iosched_allow_merge(struct request *rq, struct bio *bio)
 {
 	struct request_queue *q = rq->q;
 
+	/*
+	 * Disallow merge of a read-ahead bio into a normal request for SSD
+	 */
+	if (blk_queue_nonrot(q) &&
+	    bio_rw_ahead(bio) && !(rq->cmd_flags & REQ_FAILFAST_DEV))
+		return 0;
+
 	if (q->elv_ops.elevator_allow_merge_fn)
 		return elv_call_allow_merge_fn(q, rq, bio);
 
diff --git a/fs/mpage.c b/fs/mpage.c
index 680ba60..d02cf51 100644
--- a/fs/mpage.c
+++ b/fs/mpage.c
@@ -180,11 +180,18 @@ do_mpage_readpage(struct bio *bio, struct page *page, unsigned nr_pages,
 	unsigned page_block;
 	unsigned first_hole = blocks_per_page;
 	struct block_device *bdev = NULL;
-	int length;
+	int length, rw;
 	int fully_mapped = 1;
 	unsigned nblocks;
 	unsigned relative_block;
 
+	/*
+	 * If there's some read-ahead in this range, be sure to tell
+	 * the block layer about it. We start off as a READ, then switch
+	 * to READA if we spot the read-ahead marker on the page.
+	 */
+	rw = READ;
+
 	if (page_has_buffers(page))
 		goto confused;
 
@@ -289,7 +296,7 @@ do_mpage_readpage(struct bio *bio, struct page *page, unsigned nr_pages,
 	 * This page will go to BIO.  Do we need to send this BIO off first?
 	 */
 	if (bio && (*last_block_in_bio != blocks[0] - 1))
-		bio = mpage_bio_submit(READ, bio);
+		bio = mpage_bio_submit(rw, bio);
 
 alloc_new:
 	if (bio == NULL) {
@@ -301,8 +308,19 @@ alloc_new:
 	}
 
 	length = first_hole << blkbits;
-	if (bio_add_page(bio, page, length, 0) < length) {
-		bio = mpage_bio_submit(READ, bio);
+
+	/*
+	 * If this is an SSD, don't merge the read-ahead part of the IO
+	 * with the actual request. We want the interesting part to complete
+	 * as quickly as possible.
+	 */
+	if (blk_queue_nonrot(bdev_get_queue(bdev)) &&
+	    bio->bi_size && PageReadahead(page)) {
+		bio = mpage_bio_submit(rw, bio);
+		rw = READA;
+		goto alloc_new;
+	} else if (bio_add_page(bio, page, length, 0) < length) {
+		bio = mpage_bio_submit(rw, bio);
 		goto alloc_new;
 	}
 
@@ -310,7 +328,7 @@ alloc_new:
 	nblocks = map_bh->b_size >> blkbits;
 	if ((buffer_boundary(map_bh) && relative_block == nblocks) ||
 	    (first_hole != blocks_per_page))
-		bio = mpage_bio_submit(READ, bio);
+		bio = mpage_bio_submit(rw, bio);
 	else
 		*last_block_in_bio = blocks[blocks_per_page - 1];
 out:
@@ -318,7 +336,7 @@ out:
 
 confused:
 	if (bio)
-		bio = mpage_bio_submit(READ, bio);
+		bio = mpage_bio_submit(rw, bio);
 	if (!PageUptodate(page))
 	        block_read_full_page(page, get_block);
 	else
-- 
1.6.3.rc0.1.gf800


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [PATCH 11/12] writeback: add name to backing_dev_info
  2009-05-25  7:30 [PATCH 0/12] Per-bdi writeback flusher threads #5 Jens Axboe
                   ` (20 preceding siblings ...)
  2009-05-25  7:31 ` [PATCH 11/13] block: disallow merging of read-ahead bits into normal request Jens Axboe
@ 2009-05-25  7:31 ` Jens Axboe
  2009-05-25  7:31 ` [PATCH 12/13] block: first cut at implementing a NAPI approach for block devices Jens Axboe
                   ` (3 subsequent siblings)
  25 siblings, 0 replies; 61+ messages in thread
From: Jens Axboe @ 2009-05-25  7:31 UTC (permalink / raw)
  To: linux-kernel, linux-fsdevel
  Cc: chris.mason, david, hch, akpm, jack, yanmin_zhang, Jens Axboe

This enables us to track who does what and print info. Its main use
is catching dirty inodes on the default_backing_dev_info, so we can
fix that up.

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
 block/blk-core.c            |    1 +
 drivers/block/aoe/aoeblk.c  |    1 +
 drivers/char/mem.c          |    1 +
 fs/btrfs/disk-io.c          |    1 +
 fs/char_dev.c               |    1 +
 fs/configfs/inode.c         |    1 +
 fs/fuse/inode.c             |    1 +
 fs/hugetlbfs/inode.c        |    1 +
 fs/nfs/client.c             |    1 +
 fs/ocfs2/dlm/dlmfs.c        |    1 +
 fs/ramfs/inode.c            |    1 +
 fs/sysfs/inode.c            |    1 +
 fs/ubifs/super.c            |    1 +
 include/linux/backing-dev.h |    2 ++
 kernel/cgroup.c             |    1 +
 mm/backing-dev.c            |    1 +
 mm/swap_state.c             |    1 +
 17 files changed, 18 insertions(+), 0 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index c89883b..d3f18b5 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -517,6 +517,7 @@ struct request_queue *blk_alloc_queue_node(gfp_t gfp_mask, int node_id)
 
 	q->backing_dev_info.unplug_io_fn = blk_backing_dev_unplug;
 	q->backing_dev_info.unplug_io_data = q;
+	q->backing_dev_info.name = "block";
 	err = bdi_init(&q->backing_dev_info);
 	if (err) {
 		kmem_cache_free(blk_requestq_cachep, q);
diff --git a/drivers/block/aoe/aoeblk.c b/drivers/block/aoe/aoeblk.c
index 2307a27..0efb8fc 100644
--- a/drivers/block/aoe/aoeblk.c
+++ b/drivers/block/aoe/aoeblk.c
@@ -265,6 +265,7 @@ aoeblk_gdalloc(void *vp)
 	}
 
 	blk_queue_make_request(&d->blkq, aoeblk_make_request);
+	d->blkq.backing_dev_info.name = "aoe";
 	if (bdi_init(&d->blkq.backing_dev_info))
 		goto err_mempool;
 	spin_lock_irqsave(&d->lock, flags);
diff --git a/drivers/char/mem.c b/drivers/char/mem.c
index 8f05c38..3b38093 100644
--- a/drivers/char/mem.c
+++ b/drivers/char/mem.c
@@ -820,6 +820,7 @@ static const struct file_operations zero_fops = {
  * - permits private mappings, "copies" are taken of the source of zeros
  */
 static struct backing_dev_info zero_bdi = {
+	.name		= "char/mem",
 	.capabilities	= BDI_CAP_MAP_COPY,
 };
 
diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
index 2dc19c9..eff2a82 100644
--- a/fs/btrfs/disk-io.c
+++ b/fs/btrfs/disk-io.c
@@ -1353,6 +1353,7 @@ static int setup_bdi(struct btrfs_fs_info *info, struct backing_dev_info *bdi)
 {
 	int err;
 
+	bdi->name = "btrfs";
 	bdi->capabilities = BDI_CAP_MAP_COPY;
 	err = bdi_init(bdi);
 	if (err)
diff --git a/fs/char_dev.c b/fs/char_dev.c
index 38f7122..350ef9c 100644
--- a/fs/char_dev.c
+++ b/fs/char_dev.c
@@ -32,6 +32,7 @@
  * - no readahead or I/O queue unplugging required
  */
 struct backing_dev_info directly_mappable_cdev_bdi = {
+	.name = "char",
 	.capabilities	= (
 #ifdef CONFIG_MMU
 		/* permit private copies of the data to be taken */
diff --git a/fs/configfs/inode.c b/fs/configfs/inode.c
index 5d349d3..9a266cd 100644
--- a/fs/configfs/inode.c
+++ b/fs/configfs/inode.c
@@ -46,6 +46,7 @@ static const struct address_space_operations configfs_aops = {
 };
 
 static struct backing_dev_info configfs_backing_dev_info = {
+	.name		= "configfs",
 	.ra_pages	= 0,	/* No readahead */
 	.capabilities	= BDI_CAP_NO_ACCT_AND_WRITEBACK,
 };
diff --git a/fs/fuse/inode.c b/fs/fuse/inode.c
index 91f7c85..e5e8b03 100644
--- a/fs/fuse/inode.c
+++ b/fs/fuse/inode.c
@@ -484,6 +484,7 @@ int fuse_conn_init(struct fuse_conn *fc, struct super_block *sb)
 	INIT_LIST_HEAD(&fc->bg_queue);
 	INIT_LIST_HEAD(&fc->entry);
 	atomic_set(&fc->num_waiting, 0);
+	fc->bdi.name = "fuse";
 	fc->bdi.ra_pages = (VM_MAX_READAHEAD * 1024) / PAGE_CACHE_SIZE;
 	fc->bdi.unplug_io_fn = default_unplug_io_fn;
 	/* fuse does it's own writeback accounting */
diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index c1462d4..db1e537 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -43,6 +43,7 @@ static const struct inode_operations hugetlbfs_dir_inode_operations;
 static const struct inode_operations hugetlbfs_inode_operations;
 
 static struct backing_dev_info hugetlbfs_backing_dev_info = {
+	.name		= "hugetlbfs",
 	.ra_pages	= 0,	/* No readahead */
 	.capabilities	= BDI_CAP_NO_ACCT_AND_WRITEBACK,
 };
diff --git a/fs/nfs/client.c b/fs/nfs/client.c
index 75c9cd2..3a26d06 100644
--- a/fs/nfs/client.c
+++ b/fs/nfs/client.c
@@ -836,6 +836,7 @@ static void nfs_server_set_fsinfo(struct nfs_server *server, struct nfs_fsinfo *
 		server->rsize = NFS_MAX_FILE_IO_SIZE;
 	server->rpages = (server->rsize + PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT;
 
+	server->backing_dev_info.name = "nfs";
 	server->backing_dev_info.ra_pages = server->rpages * NFS_MAX_READAHEAD;
 
 	if (server->wsize > max_rpc_payload)
diff --git a/fs/ocfs2/dlm/dlmfs.c b/fs/ocfs2/dlm/dlmfs.c
index 1c9efb4..02bf178 100644
--- a/fs/ocfs2/dlm/dlmfs.c
+++ b/fs/ocfs2/dlm/dlmfs.c
@@ -325,6 +325,7 @@ clear_fields:
 }
 
 static struct backing_dev_info dlmfs_backing_dev_info = {
+	.name		= "ocfs2-dlmfs",
 	.ra_pages	= 0,	/* No readahead */
 	.capabilities	= BDI_CAP_NO_ACCT_AND_WRITEBACK,
 };
diff --git a/fs/ramfs/inode.c b/fs/ramfs/inode.c
index 3a6b193..5a24199 100644
--- a/fs/ramfs/inode.c
+++ b/fs/ramfs/inode.c
@@ -46,6 +46,7 @@ static const struct super_operations ramfs_ops;
 static const struct inode_operations ramfs_dir_inode_operations;
 
 static struct backing_dev_info ramfs_backing_dev_info = {
+	.name		= "ramfs",
 	.ra_pages	= 0,	/* No readahead */
 	.capabilities	= BDI_CAP_NO_ACCT_AND_WRITEBACK |
 			  BDI_CAP_MAP_DIRECT | BDI_CAP_MAP_COPY |
diff --git a/fs/sysfs/inode.c b/fs/sysfs/inode.c
index 555f0ff..e57f98e 100644
--- a/fs/sysfs/inode.c
+++ b/fs/sysfs/inode.c
@@ -29,6 +29,7 @@ static const struct address_space_operations sysfs_aops = {
 };
 
 static struct backing_dev_info sysfs_backing_dev_info = {
+	.name		= "sysfs",
 	.ra_pages	= 0,	/* No readahead */
 	.capabilities	= BDI_CAP_NO_ACCT_AND_WRITEBACK,
 };
diff --git a/fs/ubifs/super.c b/fs/ubifs/super.c
index e9f7a75..2349e2c 100644
--- a/fs/ubifs/super.c
+++ b/fs/ubifs/super.c
@@ -1923,6 +1923,7 @@ static int ubifs_fill_super(struct super_block *sb, void *data, int silent)
 	 *
 	 * Read-ahead will be disabled because @c->bdi.ra_pages is 0.
 	 */
+	c->bdi.name = "ubifs";
 	c->bdi.capabilities = BDI_CAP_MAP_COPY;
 	c->bdi.unplug_io_fn = default_unplug_io_fn;
 	err  = bdi_init(&c->bdi);
diff --git a/include/linux/backing-dev.h b/include/linux/backing-dev.h
index 53e6c8d..4507569 100644
--- a/include/linux/backing-dev.h
+++ b/include/linux/backing-dev.h
@@ -70,6 +70,8 @@ struct backing_dev_info {
 	void (*unplug_io_fn)(struct backing_dev_info *, struct page *);
 	void *unplug_io_data;
 
+	char *name;
+
 	struct percpu_counter bdi_stat[NR_BDI_STAT_ITEMS];
 
 	struct prop_local_percpu completions;
diff --git a/kernel/cgroup.c b/kernel/cgroup.c
index a7267bf..0863c5f 100644
--- a/kernel/cgroup.c
+++ b/kernel/cgroup.c
@@ -598,6 +598,7 @@ static struct inode_operations cgroup_dir_inode_operations;
 static struct file_operations proc_cgroupstats_operations;
 
 static struct backing_dev_info cgroup_backing_dev_info = {
+	.name		= "cgroup",
 	.capabilities	= BDI_CAP_NO_ACCT_AND_WRITEBACK,
 };
 
diff --git a/mm/backing-dev.c b/mm/backing-dev.c
index fcc0b2a..0834ff9 100644
--- a/mm/backing-dev.c
+++ b/mm/backing-dev.c
@@ -17,6 +17,7 @@ void default_unplug_io_fn(struct backing_dev_info *bdi, struct page *page)
 EXPORT_SYMBOL(default_unplug_io_fn);
 
 struct backing_dev_info default_backing_dev_info = {
+	.name		= "default",
 	.ra_pages	= VM_MAX_READAHEAD * 1024 / PAGE_CACHE_SIZE,
 	.state		= 0,
 	.capabilities	= BDI_CAP_MAP_COPY | BDI_CAP_FLUSH_FORKER,
diff --git a/mm/swap_state.c b/mm/swap_state.c
index 3ecea98..323da00 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -34,6 +34,7 @@ static const struct address_space_operations swap_aops = {
 };
 
 static struct backing_dev_info swap_backing_dev_info = {
+	.name		= "swap",
 	.capabilities	= BDI_CAP_NO_ACCT_AND_WRITEBACK | BDI_CAP_SWAP_BACKED,
 	.unplug_io_fn	= swap_unplug_io_fn,
 };
-- 
1.6.3.rc0.1.gf800


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [PATCH 12/13] block: first cut at implementing a NAPI approach for block devices
  2009-05-25  7:30 [PATCH 0/12] Per-bdi writeback flusher threads #5 Jens Axboe
                   ` (21 preceding siblings ...)
  2009-05-25  7:31 ` [PATCH 11/12] writeback: add name to backing_dev_info Jens Axboe
@ 2009-05-25  7:31 ` Jens Axboe
  2009-05-25  7:31 ` [PATCH 12/12] writeback: check for registered bdi in flusher add and inode dirty Jens Axboe
                   ` (2 subsequent siblings)
  25 siblings, 0 replies; 61+ messages in thread
From: Jens Axboe @ 2009-05-25  7:31 UTC (permalink / raw)
  To: linux-kernel, linux-fsdevel
  Cc: chris.mason, david, hch, akpm, jack, yanmin_zhang, Jens Axboe

Adds the generic blk_ipoll infrastructure, with driver support for AHCI only.
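
A sketch of the driver-side pattern, condensed from the ahci hunks below. All
my_* names are hypothetical, and struct my_port is assumed to embed a
struct blk_ipoll member named ipoll:

static int my_ipoll(struct blk_ipoll *ipoll, int budget)
{
	struct my_port *port = container_of(ipoll, struct my_port, ipoll);
	int done;

	/* process up to 'budget' completions (hypothetical helper) */
	done = my_port_do_completions(port, budget);

	if (done < budget) {
		/* all caught up: stop polling and unmask the interrupt */
		blk_ipoll_complete(ipoll);
		my_port_irq_enable(port);	/* hypothetical */
	}

	return done;
}

static irqreturn_t my_irq_handler(int irq, void *data)
{
	struct my_port *port = data;

	/* mask the interrupt and switch this port to polled completion */
	if (blk_ipoll_sched_prep(&port->ipoll)) {
		my_port_irq_disable(port);	/* hypothetical */
		blk_ipoll_sched(&port->ipoll);
	}

	return IRQ_HANDLED;
}

The poll handler is registered once during port setup with
blk_ipoll_init(&port->ipoll, 32, my_ipoll), mirroring what ahci_port_start()
does below.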

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
 block/Makefile            |    2 +-
 block/blk-ipoll.c         |  160 +++++++++++++++++++++++++++++++++++++++++++++
 drivers/ata/ahci.c        |   53 ++++++++++++++-
 include/linux/blk-ipoll.h |   38 +++++++++++
 include/linux/interrupt.h |    1 +
 include/linux/libata.h    |    2 +
 6 files changed, 252 insertions(+), 4 deletions(-)
 create mode 100644 block/blk-ipoll.c
 create mode 100644 include/linux/blk-ipoll.h

diff --git a/block/Makefile b/block/Makefile
index e9fa4dd..537e88a 100644
--- a/block/Makefile
+++ b/block/Makefile
@@ -5,7 +5,7 @@
 obj-$(CONFIG_BLOCK) := elevator.o blk-core.o blk-tag.o blk-sysfs.o \
 			blk-barrier.o blk-settings.o blk-ioc.o blk-map.o \
 			blk-exec.o blk-merge.o blk-softirq.o blk-timeout.o \
-			ioctl.o genhd.o scsi_ioctl.o cmd-filter.o
+			blk-ipoll.o ioctl.o genhd.o scsi_ioctl.o cmd-filter.o
 
 obj-$(CONFIG_BLK_DEV_BSG)	+= bsg.o
 obj-$(CONFIG_IOSCHED_NOOP)	+= noop-iosched.o
diff --git a/block/blk-ipoll.c b/block/blk-ipoll.c
new file mode 100644
index 0000000..700b74d
--- /dev/null
+++ b/block/blk-ipoll.c
@@ -0,0 +1,160 @@
+/*
+ * Functions related to interrupt-poll handling in the block layer. This
+ * is similar to NAPI for network devices.
+ */
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/bio.h>
+#include <linux/blkdev.h>
+#include <linux/interrupt.h>
+#include <linux/cpu.h>
+#include <linux/blk-ipoll.h>
+
+#include "blk.h"
+
+static DEFINE_PER_CPU(struct list_head, blk_cpu_ipoll);
+
+void blk_ipoll_sched(struct blk_ipoll *ipoll)
+{
+	unsigned long flags;
+
+	local_irq_save(flags);
+	list_add_tail(&ipoll->list, &__get_cpu_var(blk_cpu_ipoll));
+	__raise_softirq_irqoff(BLOCK_IPOLL_SOFTIRQ);
+	local_irq_restore(flags);
+}
+EXPORT_SYMBOL(blk_ipoll_sched);
+
+void __blk_ipoll_complete(struct blk_ipoll *ipoll)
+{
+	list_del(&ipoll->list);
+	smp_mb__before_clear_bit();
+	clear_bit(IPOLL_F_SCHED, &ipoll->state);
+}
+
+void blk_ipoll_complete(struct blk_ipoll *ipoll)
+{
+	unsigned long flags;
+
+	local_irq_save(flags);
+	__blk_ipoll_complete(ipoll);
+	local_irq_restore(flags);
+}
+
+static void blk_ipoll_softirq(struct softirq_action *h)
+{
+	struct list_head *list = &__get_cpu_var(blk_cpu_ipoll);
+	unsigned long start_time = jiffies;
+	int rearm = 0, budget = 64;
+
+	local_irq_disable();
+
+	while (!list_empty(list)) {
+		struct blk_ipoll *ipoll;
+		int work, weight;
+
+		/*
+		 * If softirq window is exhausted then punt.
+		 */
+		if (budget <= 0 || jiffies != start_time) {
+			rearm = 1;
+			break;
+		}
+
+		local_irq_enable();
+
+		/* Even though interrupts have been re-enabled, this
+		 * access is safe because interrupts can only add new
+		 * entries to the tail of this list, and only ->ipoll()
+		 * calls can remove this head entry from the list.
+		 */
+		ipoll = list_entry(list->next, struct blk_ipoll, list);
+
+		weight = ipoll->weight;
+		work = ipoll->ipoll(ipoll, weight);
+		budget -= work;
+
+		local_irq_disable();
+
+		/* Drivers must not modify the ipoll state if they
+		 * consume the entire weight.  In such cases this code
+		 * still "owns" the ipoll instance and therefore can
+		 * move the instance around on the list at will.
+		 */
+		if (work >= weight) {
+			if (blk_ipoll_disable_pending(ipoll))
+				__blk_ipoll_complete(ipoll);
+			else
+				list_move_tail(&ipoll->list, list);
+		}
+	}
+
+	if (rearm)
+		__raise_softirq_irqoff(BLOCK_IPOLL_SOFTIRQ);
+
+	local_irq_enable();
+}
+
+void blk_ipoll_disable(struct blk_ipoll *ipoll)
+{
+	set_bit(IPOLL_F_DISABLE, &ipoll->state);
+	while (test_and_set_bit(IPOLL_F_SCHED, &ipoll->state))
+		msleep(1);
+	clear_bit(IPOLL_F_DISABLE, &ipoll->state);
+}
+EXPORT_SYMBOL(blk_ipoll_disable);
+
+void blk_ipoll_enable(struct blk_ipoll *ipoll)
+{
+	BUG_ON(!test_bit(IPOLL_F_SCHED, &ipoll->state));
+	smp_mb__before_clear_bit();
+	clear_bit(IPOLL_F_SCHED, &ipoll->state);
+}
+EXPORT_SYMBOL(blk_ipoll_enable);
+
+void blk_ipoll_init(struct blk_ipoll *ipoll, int weight, blk_ipoll_fn *poll_fn)
+{
+	memset(ipoll, 0, sizeof(*ipoll));
+	INIT_LIST_HEAD(&ipoll->list);
+	ipoll->weight = weight;
+	ipoll->ipoll = poll_fn;
+}
+EXPORT_SYMBOL(blk_ipoll_init);
+
+static int __cpuinit blk_ipoll_cpu_notify(struct notifier_block *self,
+					  unsigned long action, void *hcpu)
+{
+	/*
+	 * If a CPU goes away, splice its entries to the current CPU
+	 * and trigger a run of the softirq
+	 */
+	if (action == CPU_DEAD || action == CPU_DEAD_FROZEN) {
+		int cpu = (unsigned long) hcpu;
+
+		local_irq_disable();
+		list_splice_init(&per_cpu(blk_cpu_ipoll, cpu),
+				 &__get_cpu_var(blk_cpu_ipoll));
+		raise_softirq_irqoff(BLOCK_IPOLL_SOFTIRQ);
+		local_irq_enable();
+	}
+
+	return NOTIFY_OK;
+}
+
+static struct notifier_block __cpuinitdata blk_ipoll_cpu_notifier = {
+	.notifier_call	= blk_ipoll_cpu_notify,
+};
+
+static __init int blk_ipoll_setup(void)
+{
+	int i;
+
+	for_each_possible_cpu(i)
+		INIT_LIST_HEAD(&per_cpu(blk_cpu_ipoll, i));
+
+	open_softirq(BLOCK_IPOLL_SOFTIRQ, blk_ipoll_softirq);
+	register_hotcpu_notifier(&blk_ipoll_cpu_notifier);
+	return 0;
+}
+subsys_initcall(blk_ipoll_setup);
diff --git a/drivers/ata/ahci.c b/drivers/ata/ahci.c
index 08186ec..9701f93 100644
--- a/drivers/ata/ahci.c
+++ b/drivers/ata/ahci.c
@@ -45,6 +45,7 @@
 #include <scsi/scsi_host.h>
 #include <scsi/scsi_cmnd.h>
 #include <linux/libata.h>
+#include <linux/blk-ipoll.h>
 
 #define DRV_NAME	"ahci"
 #define DRV_VERSION	"3.0"
@@ -2047,7 +2048,7 @@ static void ahci_error_intr(struct ata_port *ap, u32 irq_stat)
 		ata_port_abort(ap);
 }
 
-static void ahci_port_intr(struct ata_port *ap)
+static int ahci_port_intr(struct ata_port *ap)
 {
 	void __iomem *port_mmio = ahci_port_base(ap);
 	struct ata_eh_info *ehi = &ap->link.eh_info;
@@ -2077,7 +2078,7 @@ static void ahci_port_intr(struct ata_port *ap)
 
 	if (unlikely(status & PORT_IRQ_ERROR)) {
 		ahci_error_intr(ap, status);
-		return;
+		return 0;
 	}
 
 	if (status & PORT_IRQ_SDB_FIS) {
@@ -2118,7 +2119,48 @@ static void ahci_port_intr(struct ata_port *ap)
 		ehi->err_mask |= AC_ERR_HSM;
 		ehi->action |= ATA_EH_RESET;
 		ata_port_freeze(ap);
+		rc = 0;
+	}
+
+	return rc;
+}
+
+static void ap_irq_disable(struct ata_port *ap)
+{
+	void __iomem *port_mmio = ahci_port_base(ap);
+
+	writel(0, port_mmio + PORT_IRQ_MASK);
+}
+
+static void ap_irq_enable(struct ata_port *ap)
+{
+	void __iomem *port_mmio = ahci_port_base(ap);
+	struct ahci_port_priv *pp = ap->private_data;
+
+	writel(pp->intr_mask, port_mmio + PORT_IRQ_MASK);
+}
+
+static int ahci_ipoll(struct blk_ipoll *ipoll, int budget)
+{
+	struct ata_port *ap = container_of(ipoll, struct ata_port, ipoll);
+	unsigned long flags;
+	int ret;
+
+	spin_lock_irqsave(&ap->host->lock, flags);
+	ret = ahci_port_intr(ap);
+	spin_unlock_irqrestore(&ap->host->lock, flags);
+
+	if (ret > ipoll->max) {
+		printk("new ipoll max of %d\n", ret);
+		ipoll->max = ret;
+	}
+
+	if (ret < budget) {
+		blk_ipoll_complete(ipoll);
+		ap_irq_enable(ap);
 	}
+
+	return ret;
 }
 
 static irqreturn_t ahci_interrupt(int irq, void *dev_instance)
@@ -2151,7 +2193,10 @@ static irqreturn_t ahci_interrupt(int irq, void *dev_instance)
 
 		ap = host->ports[i];
 		if (ap) {
-			ahci_port_intr(ap);
+			if (blk_ipoll_sched_prep(&ap->ipoll)) {
+				ap_irq_disable(ap);
+				blk_ipoll_sched(&ap->ipoll);
+			}
 			VPRINTK("port %u\n", i);
 		} else {
 			VPRINTK("port %u (no irq)\n", i);
@@ -2407,6 +2452,8 @@ static int ahci_port_start(struct ata_port *ap)
 
 	ap->private_data = pp;
 
+	blk_ipoll_init(&ap->ipoll, 32, ahci_ipoll);
+
 	/* engage engines, captain */
 	return ahci_port_resume(ap);
 }
diff --git a/include/linux/blk-ipoll.h b/include/linux/blk-ipoll.h
new file mode 100644
index 0000000..dcc638f
--- /dev/null
+++ b/include/linux/blk-ipoll.h
@@ -0,0 +1,38 @@
+#ifndef BLK_IPOLL_H
+#define BLK_IPOLL_H
+
+struct blk_ipoll;
+typedef int (blk_ipoll_fn)(struct blk_ipoll *, int);
+
+struct blk_ipoll {
+	struct list_head list;
+	unsigned long state;
+	int weight;
+	int max;
+	blk_ipoll_fn *ipoll;
+};
+
+enum {
+	IPOLL_F_SCHED		= 0,
+	IPOLL_F_DISABLE		= 1,
+};
+
+static inline int blk_ipoll_sched_prep(struct blk_ipoll *ipoll)
+{
+	return !test_bit(IPOLL_F_DISABLE, &ipoll->state) &&
+		!test_and_set_bit(IPOLL_F_SCHED, &ipoll->state);
+}
+
+static inline int blk_ipoll_disable_pending(struct blk_ipoll *ipoll)
+{
+	return test_bit(IPOLL_F_DISABLE, &ipoll->state);
+}
+
+extern void blk_ipoll_sched(struct blk_ipoll *);
+extern void blk_ipoll_init(struct blk_ipoll *, int, blk_ipoll_fn *);
+extern void blk_ipoll_complete(struct blk_ipoll *);
+extern void __blk_ipoll_complete(struct blk_ipoll *);
+extern void blk_ipoll_enable(struct blk_ipoll *);
+extern void blk_ipoll_disable(struct blk_ipoll *);
+
+#endif
diff --git a/include/linux/interrupt.h b/include/linux/interrupt.h
index 91bb76f..514cd75 100644
--- a/include/linux/interrupt.h
+++ b/include/linux/interrupt.h
@@ -335,6 +335,7 @@ enum
 	NET_TX_SOFTIRQ,
 	NET_RX_SOFTIRQ,
 	BLOCK_SOFTIRQ,
+	BLOCK_IPOLL_SOFTIRQ,
 	TASKLET_SOFTIRQ,
 	SCHED_SOFTIRQ,
 	HRTIMER_SOFTIRQ,
diff --git a/include/linux/libata.h b/include/linux/libata.h
index cf1e54e..9f9df5e 100644
--- a/include/linux/libata.h
+++ b/include/linux/libata.h
@@ -37,6 +37,7 @@
 #include <scsi/scsi_host.h>
 #include <linux/acpi.h>
 #include <linux/cdrom.h>
+#include <linux/blk-ipoll.h>
 
 /*
  * Define if arch has non-standard setup.  This is a _PCI_ standard
@@ -759,6 +760,7 @@ struct ata_port {
 #endif
 	/* owned by EH */
 	u8			sector_buf[ATA_SECT_SIZE] ____cacheline_aligned;
+	struct blk_ipoll	ipoll;
 };
 
 /* The following initializer overrides a method to NULL whether one of
-- 
1.6.3.rc0.1.gf800


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [PATCH 12/12] writeback: check for registered bdi in flusher add and inode dirty
  2009-05-25  7:30 [PATCH 0/12] Per-bdi writeback flusher threads #5 Jens Axboe
                   ` (22 preceding siblings ...)
  2009-05-25  7:31 ` [PATCH 12/13] block: first cut at implementing a NAPI approach for block devices Jens Axboe
@ 2009-05-25  7:31 ` Jens Axboe
  2009-05-25  7:31 ` [PATCH 13/13] block: unlocked completion test patch Jens Axboe
  2009-05-25  7:33 ` [PATCH 0/12] Per-bdi writeback flusher threads #5 Jens Axboe
  25 siblings, 0 replies; 61+ messages in thread
From: Jens Axboe @ 2009-05-25  7:31 UTC (permalink / raw)
  To: linux-kernel, linux-fsdevel
  Cc: chris.mason, david, hch, akpm, jack, yanmin_zhang, Jens Axboe

Also a debugging aid. We want to catch dirty inodes being added to
backing devices that don't do writeback.

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
 fs/fs-writeback.c           |    7 +++++++
 include/linux/backing-dev.h |    1 +
 mm/backing-dev.c            |    6 ++++++
 3 files changed, 14 insertions(+), 0 deletions(-)

diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index 1292a88..bf8e0d5 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -583,6 +583,13 @@ void __mark_inode_dirty(struct inode *inode, int flags)
 		 */
 		if (!was_dirty) {
 			struct bdi_writeback *wb = inode_get_wb(inode);
+			struct backing_dev_info *bdi = wb->bdi;
+
+			if (bdi_cap_writeback_dirty(bdi) &&
+			    !test_bit(BDI_registered, &bdi->state)) {
+				WARN_ON(1);
+				printk("bdi-%s not registered\n", bdi->name);
+			}
 
 			inode->dirtied_when = jiffies;
 			list_move(&inode->i_list, &wb->b_dirty);
diff --git a/include/linux/backing-dev.h b/include/linux/backing-dev.h
index 4507569..0b20d4b 100644
--- a/include/linux/backing-dev.h
+++ b/include/linux/backing-dev.h
@@ -31,6 +31,7 @@ enum bdi_state {
 	BDI_wblist_lock,	/* bdi->wb_list now needs locking */
 	BDI_async_congested,	/* The async (write) queue is getting full */
 	BDI_sync_congested,	/* The sync queue is getting full */
+	BDI_registered,		/* bdi_register() was done */
 	BDI_unused,		/* Available bits start here */
 };
 
diff --git a/mm/backing-dev.c b/mm/backing-dev.c
index 0834ff9..ed66081 100644
--- a/mm/backing-dev.c
+++ b/mm/backing-dev.c
@@ -504,6 +504,11 @@ static void bdi_add_one_flusher_task(struct backing_dev_info *bdi,
 	if (!bdi_cap_writeback_dirty(bdi))
 		return;
 
+	if (WARN_ON(!test_bit(BDI_registered, &bdi->state))) {
+		printk("bdi %p/%s is not registered!\n", bdi, bdi->name);
+		return;
+	}
+
 	/*
 	 * Check with the helper whether to proceed adding a task. Will only
 	 * abort if we two or more simultanous calls to
@@ -612,6 +617,7 @@ remove_err:
 	}
 
 	bdi_debug_register(bdi, dev_name(dev));
+	set_bit(BDI_registered, &bdi->state);
 exit:
 	return ret;
 }
-- 
1.6.3.rc0.1.gf800


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [PATCH 13/13] block: unlocked completion test patch
  2009-05-25  7:30 [PATCH 0/12] Per-bdi writeback flusher threads #5 Jens Axboe
                   ` (23 preceding siblings ...)
  2009-05-25  7:31 ` [PATCH 12/12] writeback: check for registered bdi in flusher add and inode dirty Jens Axboe
@ 2009-05-25  7:31 ` Jens Axboe
  2009-05-25  7:33 ` [PATCH 0/12] Per-bdi writeback flusher threads #5 Jens Axboe
  25 siblings, 0 replies; 61+ messages in thread
From: Jens Axboe @ 2009-05-25  7:31 UTC (permalink / raw)
  To: linux-kernel, linux-fsdevel
  Cc: chris.mason, david, hch, akpm, jack, yanmin_zhang, Jens Axboe

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
 block/blk-softirq.c       |   24 ++++++++++++++++++++----
 block/blk-timeout.c       |    2 +-
 block/blk.h               |    2 ++
 drivers/ata/libata-scsi.c |    1 +
 drivers/scsi/scsi.c       |    5 ++++-
 include/linux/blkdev.h    |    2 +-
 include/scsi/scsi_cmnd.h  |    1 +
 7 files changed, 30 insertions(+), 7 deletions(-)

diff --git a/block/blk-softirq.c b/block/blk-softirq.c
index ee9c216..ebe3e1c 100644
--- a/block/blk-softirq.c
+++ b/block/blk-softirq.c
@@ -101,7 +101,7 @@ static struct notifier_block __cpuinitdata blk_cpu_notifier = {
 	.notifier_call	= blk_cpu_notify,
 };
 
-void __blk_complete_request(struct request *req)
+void __blk_complete_request(struct request *req, int locked)
 {
 	struct request_queue *q = req->q;
 	unsigned long flags;
@@ -133,8 +133,15 @@ do_local:
 		 * entries there, someone already raised the irq but it
 		 * hasn't run yet.
 		 */
-		if (list->next == &req->csd.list)
-			raise_softirq_irqoff(BLOCK_SOFTIRQ);
+		if (list->next == &req->csd.list) {
+			if (locked)
+				raise_softirq_irqoff(BLOCK_SOFTIRQ);
+			else {
+				local_irq_restore(flags);
+				q->softirq_done_fn(req);
+				return;
+			}
+		}
 	} else if (raise_blk_irq(ccpu, req))
 		goto do_local;
 
@@ -157,10 +164,19 @@ void blk_complete_request(struct request *req)
 	if (unlikely(blk_should_fake_timeout(req->q)))
 		return;
 	if (!blk_mark_rq_complete(req))
-		__blk_complete_request(req);
+		__blk_complete_request(req, 1);
 }
 EXPORT_SYMBOL(blk_complete_request);
 
+void blk_complete_request_nolock(struct request *req)
+{
+	if (unlikely(blk_should_fake_timeout(req->q)))
+		return;
+	if (!blk_mark_rq_complete(req))
+		__blk_complete_request(req, 0);
+}
+EXPORT_SYMBOL(blk_complete_request_nolock);
+
 static __init int blk_softirq_init(void)
 {
 	int i;
diff --git a/block/blk-timeout.c b/block/blk-timeout.c
index 1ec0d50..1744d87 100644
--- a/block/blk-timeout.c
+++ b/block/blk-timeout.c
@@ -84,7 +84,7 @@ static void blk_rq_timed_out(struct request *req)
 	ret = q->rq_timed_out_fn(req);
 	switch (ret) {
 	case BLK_EH_HANDLED:
-		__blk_complete_request(req);
+		__blk_complete_request(req, 0);
 		break;
 	case BLK_EH_RESET_TIMER:
 		blk_clear_rq_complete(req);
diff --git a/block/blk.h b/block/blk.h
index 79c85f7..41f2f70 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -43,6 +43,8 @@ static inline void blk_clear_rq_complete(struct request *rq)
 	clear_bit(REQ_ATOM_COMPLETE, &rq->atomic_flags);
 }
 
+extern void __blk_complete_request(struct request *, int);
+
 #ifdef CONFIG_FAIL_IO_TIMEOUT
 int blk_should_fake_timeout(struct request_queue *);
 ssize_t part_timeout_show(struct device *, struct device_attribute *, char *);
diff --git a/drivers/ata/libata-scsi.c b/drivers/ata/libata-scsi.c
index b0179c1..de185b0 100644
--- a/drivers/ata/libata-scsi.c
+++ b/drivers/ata/libata-scsi.c
@@ -2621,6 +2621,7 @@ static void atapi_qc_complete(struct ata_queued_cmd *qc)
 		cmd->result = SAM_STAT_GOOD;
 	}
 
+	cmd->unlocked = 1;
 	qc->scsidone(cmd);
 	ata_qc_free(qc);
 }
diff --git a/drivers/scsi/scsi.c b/drivers/scsi/scsi.c
index da33b7a..d0d2afe 100644
--- a/drivers/scsi/scsi.c
+++ b/drivers/scsi/scsi.c
@@ -754,7 +754,10 @@ int scsi_dispatch_cmd(struct scsi_cmnd *cmd)
  */
 static void scsi_done(struct scsi_cmnd *cmd)
 {
-	blk_complete_request(cmd->request);
+	if (cmd->unlocked)
+		blk_complete_request_nolock(cmd->request);
+	else
+		blk_complete_request(cmd->request);
 }
 
 /* Move this to a header if it becomes more generally useful */
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index f2b6b92..9aac81e 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -865,7 +865,7 @@ extern int blk_end_request_callback(struct request *rq, int error,
 				unsigned int nr_bytes,
 				int (drv_callback)(struct request *));
 extern void blk_complete_request(struct request *);
-extern void __blk_complete_request(struct request *);
+extern void blk_complete_request_nolock(struct request *);
 extern void blk_abort_request(struct request *);
 extern void blk_abort_queue(struct request_queue *);
 extern void blk_update_request(struct request *rq, int error,
diff --git a/include/scsi/scsi_cmnd.h b/include/scsi/scsi_cmnd.h
index 649ad36..c0f06a3 100644
--- a/include/scsi/scsi_cmnd.h
+++ b/include/scsi/scsi_cmnd.h
@@ -75,6 +75,7 @@ struct scsi_cmnd {
 
 	int retries;
 	int allowed;
+	int unlocked;
 
 	unsigned char prot_op;
 	unsigned char prot_type;
-- 
1.6.3.rc0.1.gf800


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* Re: [PATCH 0/12] Per-bdi writeback flusher threads #5
  2009-05-25  7:30 [PATCH 0/12] Per-bdi writeback flusher threads #5 Jens Axboe
                   ` (24 preceding siblings ...)
  2009-05-25  7:31 ` [PATCH 13/13] block: unlocked completion test patch Jens Axboe
@ 2009-05-25  7:33 ` Jens Axboe
  25 siblings, 0 replies; 61+ messages in thread
From: Jens Axboe @ 2009-05-25  7:33 UTC (permalink / raw)
  To: linux-kernel, linux-fsdevel
  Cc: chris.mason, david, hch, akpm, jack, yanmin_zhang

On Mon, May 25 2009, Jens Axboe wrote:
> Hi,
> 
> Here's the 5th version of the writeback patches. Changes since v4:

Yikes, please disregard this series. Apparently the patch directory had
some old/mixed experimental patches from another branch, so it's all
messed up.

I'll post a correct v6 very shortly.

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 03/13] scsi: unify allocation of scsi command and sense buffer
  2009-05-25  7:30 ` [PATCH 03/13] scsi: unify allocation of scsi command and sense buffer Jens Axboe
@ 2009-05-25  7:41   ` Christoph Hellwig
  2009-05-25  7:46     ` Jens Axboe
  2009-05-25  8:15     ` Pekka Enberg
  2009-05-25  9:28   ` Boaz Harrosh
  2 siblings, 1 reply; 61+ messages in thread
From: Christoph Hellwig @ 2009-05-25  7:41 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-kernel, linux-fsdevel, chris.mason, david, hch, akpm, jack,
	yanmin_zhang, linux-scsi

On Mon, May 25, 2009 at 09:30:48AM +0200, Jens Axboe wrote:
> Fold the sense buffer into the command, thereby eliminating a slab
> allocation and free per command.

Might help to send it to linux-scsi to get people to review and apply it
:)

But that patch looks good to me, avoiding one allocation for each
command and simplifying the code.  I try to remember why these were
two slabs to start with but can't find any reason.

Btw, we might just want to declare the sense buffer directly as a sized
array in the scsi command as there really doesn't seem to be a reason
not to allocate it.
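
As a rough sketch of what that embedded layout would look like (a reduced,
userspace-compilable illustration only; the struct and function names below
are made up, not the real scsi_cmnd definition):

	#include <stdlib.h>

	#define SCSI_SENSE_BUFFERSIZE	96

	struct cmd_split {			/* current layout: two allocations */
		int result;
		unsigned char *sense_buffer;	/* separate slab object */
	};

	struct cmd_embedded {			/* suggested layout: one allocation */
		int result;
		unsigned char sense_buffer[SCSI_SENSE_BUFFERSIZE];
	};

	static struct cmd_embedded *alloc_cmd_embedded(void)
	{
		/* command and sense data come from a single allocation */
		return calloc(1, sizeof(struct cmd_embedded));
	}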


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 03/13] scsi: unify allocation of scsi command and sense buffer
  2009-05-25  7:41   ` Christoph Hellwig
@ 2009-05-25  7:46     ` Jens Axboe
  2009-05-25  7:50       ` Christoph Hellwig
  0 siblings, 1 reply; 61+ messages in thread
From: Jens Axboe @ 2009-05-25  7:46 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: linux-kernel, linux-fsdevel, chris.mason, david, akpm, jack,
	yanmin_zhang, linux-scsi

On Mon, May 25 2009, Christoph Hellwig wrote:
> On Mon, May 25, 2009 at 09:30:48AM +0200, Jens Axboe wrote:
> > Fold the sense buffer into the command, thereby eliminating a slab
> > allocation and free per command.
> 
> Might help to send it to linux-scsi to get people to review and apply it
> :)

yeah, as I later posted, this wasn't meant to be sent out as part of
the writeback series :-)

> But that patch looks good to me, avoiding one allocation for each
> command and simplifying the code.  I try to remember why these were
> two slabs to start with but can't find any reason.
> 
> Btw, we might just want to declare the sense buffer directly as a sized
> array in the scsi command as there really doesn't seem to be a reason
> not to allocate it.

That is also a workable solution. I've been trying to cut down on the
number of allocations required per-IO, and there's definitely still some
low hanging fruit there. Some of it is already included, like the inline
io_vecs in the bio.
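
(The inline io_vecs trick mentioned above follows the same pattern; a small
sketch with made-up names, not the real struct bio, just to show the shape
of it:)

	struct bio_vec_sketch {
		void		*bv_page;
		unsigned int	bv_len;
		unsigned int	bv_offset;
	};

	struct bio_sketch {
		unsigned short		bi_vcnt;
		struct bio_vec_sketch	*bi_io_vec;	/* points at bi_inline_vecs for
							 * small I/O, at an external
							 * biovec array otherwise */
		struct bio_vec_sketch	bi_inline_vecs[4];	/* allocated with the bio */
	};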

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 03/13] scsi: unify allocation of scsi command and sense buffer
  2009-05-25  7:46     ` Jens Axboe
@ 2009-05-25  7:50       ` Christoph Hellwig
  2009-05-25  7:54         ` Jens Axboe
                           ` (2 more replies)
  0 siblings, 3 replies; 61+ messages in thread
From: Christoph Hellwig @ 2009-05-25  7:50 UTC (permalink / raw)
  To: Jens Axboe
  Cc: Christoph Hellwig, linux-kernel, linux-fsdevel, chris.mason,
	david, akpm, jack, yanmin_zhang, linux-scsi

On Mon, May 25, 2009 at 09:46:47AM +0200, Jens Axboe wrote:
> > But that patch looks good to me, avoiding one allocation for each
> > command and simplifying the code.  I try to remember why these were
> > two slabs to start with but can't find any reason.
> > 
> > Btw, we might just want to declare the sense buffer directly as a sized
> > array in the scsi command as there really doesn't seem to be a reason
> > not to allocate it.
> 
> That is also a workable solution. I've been trying to cut down on the
> number of allocations required per-IO, and there's definitely still some
> low hanging fruit there. Some of it is already included, like the inline
> io_vecs in the bio.

Btw, one thing I have wanted to do for years is to add ->alloc_cmnd and
->destroy_cmnd methods to the host template which optionally move the
command allocation to the LLDD.  That way we can embed the scsi_cmnd
into the driver's per-command structure and eliminate another memory
allocation.  Also this would naturally extend the keep-one-cmnd pool
to drivers without requiring additional code.  As a second step it
would also allow killing the scsi_host_cmd_pool by just having
a set of library routines that drivers which need SLAB_CACHE_DMA can
use.
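
A self-contained sketch of the shape of that interface (every name below is
an illustrative assumption, not an agreed API):

	#include <stddef.h>
	#include <stdlib.h>

	struct scsi_cmnd_sketch {
		int result;
	};

	struct host_template_sketch {
		/* optional hooks; the midlayer falls back to global pools if NULL */
		struct scsi_cmnd_sketch	*(*alloc_cmnd)(void);
		void			(*destroy_cmnd)(struct scsi_cmnd_sketch *);
	};

	struct lld_cmd {			/* the driver's per-command structure */
		struct scsi_cmnd_sketch	cmd;	/* generic part embedded, no extra allocation */
		unsigned int		lld_state;
	};

	static struct scsi_cmnd_sketch *lld_alloc_cmnd(void)
	{
		struct lld_cmd *lc = calloc(1, sizeof(*lc));

		return lc ? &lc->cmd : NULL;
	}

	static void lld_destroy_cmnd(struct scsi_cmnd_sketch *cmd)
	{
		/* container_of() in the kernel; plain offset math in this sketch */
		free((char *)cmd - offsetof(struct lld_cmd, cmd));
	}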


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 03/13] scsi: unify allocation of scsi command and sense buffer
  2009-05-25  7:50       ` Christoph Hellwig
@ 2009-05-25  7:54         ` Jens Axboe
  2009-05-25 10:33         ` Boaz Harrosh
  2009-05-26  4:36         ` FUJITA Tomonori
  2 siblings, 0 replies; 61+ messages in thread
From: Jens Axboe @ 2009-05-25  7:54 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: linux-kernel, linux-fsdevel, chris.mason, david, akpm, jack,
	yanmin_zhang, linux-scsi

On Mon, May 25 2009, Christoph Hellwig wrote:
> On Mon, May 25, 2009 at 09:46:47AM +0200, Jens Axboe wrote:
> > > But that patch looks good to me, avoiding one allocation for each
> > > command and simplifying the code.  I try to remember why these were
> > > two slabs to start with but can't find any reason.
> > > 
> > > Btw, we might just want to declare the sense buffer directly as a sized
> > > array in the scsi command as there really doesn't seem to be a reason
> > > not to allocate it.
> > 
> > That is also a workable solution. I've been trying to cut down on the
> > number of allocations required per-IO, and there's definitely still some
> > low hanging fruit there. Some of it is already included, like the inline
> > io_vecs in the bio.
> 
> Btw, one thing I wanted to do for years is to add ->alloc_cmnd and
> ->destroy_cmnd method to the host template which optionally move the
> command allocation to the LLDD.  That way we can embedd the scsi_cmnd
> into the drivers per-commad structure and eliminate another memory
> allocation.  Also this would naturally extend the keep one cmnd pool
> to drivers without requiring additional code.  As a second step it
> would also allow killing the scsi_host_cmd_pool byt just having
> a set of library routines that drivers which need SLAB_CACHE_DMA can
> use.

That's a good idea and could kill one more alloc/free per IO. I'll add
that to the mix!

And in case anyone is interested, the patches that got mixed up with the
writeback patches are from the 'ssd' branch. It's basically a mix of
experimental patches for improving performance. Some are crap, some are
worth continuing with. There's been a steady influx of patches from
there to mainline, so it's a continually changing branch. Well, not so
much lately, since I've spent most of my time in the writeback branch.

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 03/13] scsi: unify allocation of scsi command and sense  buffer
  2009-05-25  7:30 ` [PATCH 03/13] scsi: unify allocation of scsi command and sense buffer Jens Axboe
@ 2009-05-25  8:15     ` Pekka Enberg
  2009-05-25  9:28   ` Boaz Harrosh
  2 siblings, 0 replies; 61+ messages in thread
From: Pekka Enberg @ 2009-05-25  8:15 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-kernel, linux-fsdevel, chris.mason, david, hch, akpm, jack,
	yanmin_zhang, Christoph Lameter, Matthew Wilcox, Nick Piggin

On Mon, May 25, 2009 at 10:30 AM, Jens Axboe <jens.axboe@oracle.com> wrote:
> Fold the sense buffer into the command, thereby eliminating a slab
> allocation and free per command.
>
> Signed-off-by: Jens Axboe <jens.axboe@oracle.com>

Interesting. I wonder how this affects the SLAB vs. SLUB regression
people are seeing on high end machines in OLTP benchmarks.

                                      Pekka

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 05/13] aio: mostly crap
  2009-05-25  7:30 ` [PATCH 05/13] aio: mostly crap Jens Axboe
@ 2009-05-25  9:09   ` Jan Kara
  0 siblings, 0 replies; 61+ messages in thread
From: Jan Kara @ 2009-05-25  9:09 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-kernel, linux-fsdevel, chris.mason, david, hch, akpm, jack,
	yanmin_zhang

On Mon 25-05-09 09:30:52, Jens Axboe wrote:
> First attempts at getting rid of some locking in aio
  I suppose this shouldn't be in the series ;).

								Honza

> Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
> ---
>  fs/aio.c            |  151 +++++++++++++++++++++++++++++++++------------------
>  include/linux/aio.h |   11 ++--
>  2 files changed, 103 insertions(+), 59 deletions(-)
> 
> diff --git a/fs/aio.c b/fs/aio.c
> index 76da125..98c82f2 100644
> --- a/fs/aio.c
> +++ b/fs/aio.c
> @@ -79,9 +79,8 @@ static int __init aio_setup(void)
>  	return 0;
>  }
>  
> -static void aio_free_ring(struct kioctx *ctx)
> +static void __aio_free_ring(struct kioctx *ctx, struct aio_ring_info *info)
>  {
> -	struct aio_ring_info *info = &ctx->ring_info;
>  	long i;
>  
>  	for (i=0; i<info->nr_pages; i++)
> @@ -99,16 +98,28 @@ static void aio_free_ring(struct kioctx *ctx)
>  	info->nr = 0;
>  }
>  
> -static int aio_setup_ring(struct kioctx *ctx)
> +static void aio_free_ring(struct kioctx *ctx)
> +{
> +	unsigned int i;
> +
> +	for_each_possible_cpu(i) {
> +		struct aio_ring_info *info = per_cpu_ptr(ctx->ring_info, i);
> +
> +		 __aio_free_ring(ctx, info);
> +	}
> +	free_percpu(ctx->ring_info);
> +	ctx->ring_info = NULL;
> +}
> +
> +static int __aio_setup_ring(struct kioctx *ctx, struct aio_ring_info *info)
>  {
>  	struct aio_ring *ring;
> -	struct aio_ring_info *info = &ctx->ring_info;
>  	unsigned nr_events = ctx->max_reqs;
>  	unsigned long size;
>  	int nr_pages;
>  
> -	/* Compensate for the ring buffer's head/tail overlap entry */
> -	nr_events += 2;	/* 1 is required, 2 for good luck */
> +	/* round nr_event to next power of 2 */
> +	nr_events = roundup_pow_of_two(nr_events);
>  
>  	size = sizeof(struct aio_ring);
>  	size += sizeof(struct io_event) * nr_events;
> @@ -117,8 +128,6 @@ static int aio_setup_ring(struct kioctx *ctx)
>  	if (nr_pages < 0)
>  		return -EINVAL;
>  
> -	nr_events = (PAGE_SIZE * nr_pages - sizeof(struct aio_ring)) / sizeof(struct io_event);
> -
>  	info->nr = 0;
>  	info->ring_pages = info->internal_pages;
>  	if (nr_pages > AIO_RING_PAGES) {
> @@ -158,7 +167,8 @@ static int aio_setup_ring(struct kioctx *ctx)
>  	ring = kmap_atomic(info->ring_pages[0], KM_USER0);
>  	ring->nr = nr_events;	/* user copy */
>  	ring->id = ctx->user_id;
> -	ring->head = ring->tail = 0;
> +	atomic_set(&ring->head, 0);
> +	ring->tail = 0;
>  	ring->magic = AIO_RING_MAGIC;
>  	ring->compat_features = AIO_RING_COMPAT_FEATURES;
>  	ring->incompat_features = AIO_RING_INCOMPAT_FEATURES;
> @@ -168,6 +178,27 @@ static int aio_setup_ring(struct kioctx *ctx)
>  	return 0;
>  }
>  
> +static int aio_setup_ring(struct kioctx *ctx)
> +{
> +	unsigned int i;
> +	int ret;
> +
> +	ctx->ring_info = alloc_percpu(struct aio_ring_info);
> +	if (!ctx->ring_info)
> +		return -ENOMEM;
> +
> +	ret = 0;
> +	for_each_possible_cpu(i) {
> +		struct aio_ring_info *info = per_cpu_ptr(ctx->ring_info, i);
> +		int err;
> +
> +		err = __aio_setup_ring(ctx, info);
> +		if (err && !ret)
> +			ret = err;
> +	}
> +
> +	return ret;
> +}
>  
>  /* aio_ring_event: returns a pointer to the event at the given index from
>   * kmap_atomic(, km).  Release the pointer with put_aio_ring_event();
> @@ -176,8 +207,8 @@ static int aio_setup_ring(struct kioctx *ctx)
>  #define AIO_EVENTS_FIRST_PAGE	((PAGE_SIZE - sizeof(struct aio_ring)) / sizeof(struct io_event))
>  #define AIO_EVENTS_OFFSET	(AIO_EVENTS_PER_PAGE - AIO_EVENTS_FIRST_PAGE)
>  
> -#define aio_ring_event(info, nr, km) ({					\
> -	unsigned pos = (nr) + AIO_EVENTS_OFFSET;			\
> +#define aio_ring_event(info, __nr, km) ({				\
> +	unsigned pos = ((__nr) & ((info)->nr - 1)) + AIO_EVENTS_OFFSET;	\
>  	struct io_event *__event;					\
>  	__event = kmap_atomic(						\
>  			(info)->ring_pages[pos / AIO_EVENTS_PER_PAGE], km); \
> @@ -262,7 +293,6 @@ static struct kioctx *ioctx_alloc(unsigned nr_events)
>  
>  	atomic_set(&ctx->users, 1);
>  	spin_lock_init(&ctx->ctx_lock);
> -	spin_lock_init(&ctx->ring_info.ring_lock);
>  	init_waitqueue_head(&ctx->wait);
>  
>  	INIT_LIST_HEAD(&ctx->active_reqs);
> @@ -426,6 +456,7 @@ void exit_aio(struct mm_struct *mm)
>  static struct kiocb *__aio_get_req(struct kioctx *ctx)
>  {
>  	struct kiocb *req = NULL;
> +	struct aio_ring_info *info;
>  	struct aio_ring *ring;
>  	int okay = 0;
>  
> @@ -448,15 +479,18 @@ static struct kiocb *__aio_get_req(struct kioctx *ctx)
>  	/* Check if the completion queue has enough free space to
>  	 * accept an event from this io.
>  	 */
> -	spin_lock_irq(&ctx->ctx_lock);
> -	ring = kmap_atomic(ctx->ring_info.ring_pages[0], KM_USER0);
> -	if (ctx->reqs_active < aio_ring_avail(&ctx->ring_info, ring)) {
> +	local_irq_disable();
> +	info = per_cpu_ptr(ctx->ring_info, smp_processor_id());
> +	ring = kmap_atomic(info->ring_pages[0], KM_IRQ0);
> +	if (ctx->reqs_active < aio_ring_avail(info, ring)) {
> +		spin_lock(&ctx->ctx_lock);
>  		list_add(&req->ki_list, &ctx->active_reqs);
>  		ctx->reqs_active++;
> +		spin_unlock(&ctx->ctx_lock);
>  		okay = 1;
>  	}
> -	kunmap_atomic(ring, KM_USER0);
> -	spin_unlock_irq(&ctx->ctx_lock);
> +	kunmap_atomic(ring, KM_IRQ0);
> +	local_irq_enable();
>  
>  	if (!okay) {
>  		kmem_cache_free(kiocb_cachep, req);
> @@ -578,9 +612,11 @@ int aio_put_req(struct kiocb *req)
>  {
>  	struct kioctx *ctx = req->ki_ctx;
>  	int ret;
> +
>  	spin_lock_irq(&ctx->ctx_lock);
>  	ret = __aio_put_req(ctx, req);
>  	spin_unlock_irq(&ctx->ctx_lock);
> +
>  	return ret;
>  }
>  
> @@ -954,7 +990,7 @@ int aio_complete(struct kiocb *iocb, long res, long res2)
>  	struct aio_ring	*ring;
>  	struct io_event	*event;
>  	unsigned long	flags;
> -	unsigned long	tail;
> +	unsigned	tail;
>  	int		ret;
>  
>  	/*
> @@ -972,15 +1008,14 @@ int aio_complete(struct kiocb *iocb, long res, long res2)
>  		return 1;
>  	}
>  
> -	info = &ctx->ring_info;
> -
>  	/* add a completion event to the ring buffer.
>  	 * must be done holding ctx->ctx_lock to prevent
>  	 * other code from messing with the tail
>  	 * pointer since we might be called from irq
>  	 * context.
>  	 */
> -	spin_lock_irqsave(&ctx->ctx_lock, flags);
> +	local_irq_save(flags);
> +	info = per_cpu_ptr(ctx->ring_info, smp_processor_id());
>  
>  	if (iocb->ki_run_list.prev && !list_empty(&iocb->ki_run_list))
>  		list_del_init(&iocb->ki_run_list);
> @@ -996,8 +1031,6 @@ int aio_complete(struct kiocb *iocb, long res, long res2)
>  
>  	tail = info->tail;
>  	event = aio_ring_event(info, tail, KM_IRQ0);
> -	if (++tail >= info->nr)
> -		tail = 0;
>  
>  	event->obj = (u64)(unsigned long)iocb->ki_obj.user;
>  	event->data = iocb->ki_user_data;
> @@ -1013,13 +1046,14 @@ int aio_complete(struct kiocb *iocb, long res, long res2)
>  	 */
>  	smp_wmb();	/* make event visible before updating tail */
>  
> +	tail++;
>  	info->tail = tail;
>  	ring->tail = tail;
>  
>  	put_aio_ring_event(event, KM_IRQ0);
>  	kunmap_atomic(ring, KM_IRQ1);
>  
> -	pr_debug("added to ring %p at [%lu]\n", iocb, tail);
> +	pr_debug("added to ring %p at [%u]\n", iocb, tail);
>  
>  	/*
>  	 * Check if the user asked us to deliver the result through an
> @@ -1031,7 +1065,9 @@ int aio_complete(struct kiocb *iocb, long res, long res2)
>  
>  put_rq:
>  	/* everything turned out well, dispose of the aiocb. */
> +	spin_lock(&ctx->ctx_lock);
>  	ret = __aio_put_req(ctx, iocb);
> +	spin_unlock(&ctx->ctx_lock);
>  
>  	/*
>  	 * We have to order our ring_info tail store above and test
> @@ -1044,49 +1080,58 @@ put_rq:
>  	if (waitqueue_active(&ctx->wait))
>  		wake_up(&ctx->wait);
>  
> -	spin_unlock_irqrestore(&ctx->ctx_lock, flags);
> +	local_irq_restore(flags);
> +	return ret;
> +}
> +
> +static int __aio_read_evt(struct aio_ring_info *info, struct aio_ring *ring,
> +			  struct io_event *ent)
> +{
> +	struct io_event *evp;
> +	unsigned head;
> +	int ret = 0;
> +
> +	do {
> +		head = atomic_read(&ring->head);
> +		if (head == ring->tail)
> +			break;
> +		evp = aio_ring_event(info, head, KM_USER1);
> +		*ent = *evp;
> +		smp_mb(); /* finish reading the event before updatng the head */
> +		++ret;
> +		put_aio_ring_event(evp, KM_USER1);
> +	} while (head != atomic_cmpxchg(&ring->head, head, head + 1));
> +
>  	return ret;
>  }
>  
>  /* aio_read_evt
>   *	Pull an event off of the ioctx's event ring.  Returns the number of 
>   *	events fetched (0 or 1 ;-)
> - *	FIXME: make this use cmpxchg.
> - *	TODO: make the ringbuffer user mmap()able (requires FIXME).
> + *	TODO: make the ringbuffer user mmap()able
>   */
>  static int aio_read_evt(struct kioctx *ioctx, struct io_event *ent)
>  {
> -	struct aio_ring_info *info = &ioctx->ring_info;
> -	struct aio_ring *ring;
> -	unsigned long head;
> -	int ret = 0;
> +	int i, ret = 0;
>  
> -	ring = kmap_atomic(info->ring_pages[0], KM_USER0);
> -	dprintk("in aio_read_evt h%lu t%lu m%lu\n",
> -		 (unsigned long)ring->head, (unsigned long)ring->tail,
> -		 (unsigned long)ring->nr);
> +	for_each_possible_cpu(i) {
> +		struct aio_ring_info *info;
> +		struct aio_ring *ring;
>  
> -	if (ring->head == ring->tail)
> -		goto out;
> +		info = per_cpu_ptr(ioctx->ring_info, i);
> +		ring = kmap_atomic(info->ring_pages[0], KM_USER0);
> +		dprintk("in aio_read_evt h%u t%u m%u\n",
> +			 atomic_read(&ring->head), ring->tail, ring->nr);
>  
> -	spin_lock(&info->ring_lock);
> -
> -	head = ring->head % info->nr;
> -	if (head != ring->tail) {
> -		struct io_event *evp = aio_ring_event(info, head, KM_USER1);
> -		*ent = *evp;
> -		head = (head + 1) % info->nr;
> -		smp_mb(); /* finish reading the event before updatng the head */
> -		ring->head = head;
> -		ret = 1;
> -		put_aio_ring_event(evp, KM_USER1);
> +		ret = __aio_read_evt(info, ring, ent);
> +		kunmap_atomic(ring, KM_USER0);
> +		if (ret)
> +			break;
>  	}
> -	spin_unlock(&info->ring_lock);
>  
> -out:
> -	kunmap_atomic(ring, KM_USER0);
> -	dprintk("leaving aio_read_evt: %d  h%lu t%lu\n", ret,
> -		 (unsigned long)ring->head, (unsigned long)ring->tail);
> +	dprintk("leaving aio_read_evt: %d  h%u t%u\n", ret,
> +		 atomic_read(&ring->head), ring->tail);
> +
>  	return ret;
>  }
>  
> diff --git a/include/linux/aio.h b/include/linux/aio.h
> index b16a957..9a7acb4 100644
> --- a/include/linux/aio.h
> +++ b/include/linux/aio.h
> @@ -149,7 +149,7 @@ struct kiocb {
>  struct aio_ring {
>  	unsigned	id;	/* kernel internal index number */
>  	unsigned	nr;	/* number of io_events */
> -	unsigned	head;
> +	atomic_t	head;
>  	unsigned	tail;
>  
>  	unsigned	magic;
> @@ -157,11 +157,11 @@ struct aio_ring {
>  	unsigned	incompat_features;
>  	unsigned	header_length;	/* size of aio_ring */
>  
> -
> -	struct io_event		io_events[0];
> +	struct io_event	io_events[0];
>  }; /* 128 bytes + ring size */
>  
> -#define aio_ring_avail(info, ring)	(((ring)->head + (info)->nr - 1 - (ring)->tail) % (info)->nr)
> +#define aio_ring_avail(info, ring)					\
> +	((info)->nr + (unsigned) atomic_read(&(ring)->head) - (ring)->tail)
>  
>  #define AIO_RING_PAGES	8
>  struct aio_ring_info {
> @@ -169,7 +169,6 @@ struct aio_ring_info {
>  	unsigned long		mmap_size;
>  
>  	struct page		**ring_pages;
> -	spinlock_t		ring_lock;
>  	long			nr_pages;
>  
>  	unsigned		nr, tail;
> @@ -197,7 +196,7 @@ struct kioctx {
>  	/* sys_io_setup currently limits this to an unsigned int */
>  	unsigned		max_reqs;
>  
> -	struct aio_ring_info	ring_info;
> +	struct aio_ring_info	*ring_info;
>  
>  	struct delayed_work	wq;
>  
> -- 
> 1.6.3.rc0.1.gf800
> 

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 03/13] scsi: unify allocation of scsi command and sense buffer
  2009-05-25  7:30 ` [PATCH 03/13] scsi: unify allocation of scsi command and sense buffer Jens Axboe
  2009-05-25  7:41   ` Christoph Hellwig
  2009-05-25  8:15     ` Pekka Enberg
@ 2009-05-25  9:28   ` Boaz Harrosh
  2009-05-26  1:45     ` Roland Dreier
  2009-05-26  5:23     ` FUJITA Tomonori
  2 siblings, 2 replies; 61+ messages in thread
From: Boaz Harrosh @ 2009-05-25  9:28 UTC (permalink / raw)
  To: Jens Axboe, FUJITA Tomonori
  Cc: linux-kernel, linux-fsdevel, chris.mason, david, hch, akpm, jack,
	yanmin_zhang

On 05/25/2009 10:30 AM, Jens Axboe wrote:
> Fold the sense buffer into the command, thereby eliminating a slab
> allocation and free per command.
> 
> Signed-off-by: Jens Axboe <jens.axboe@oracle.com>

Jens Hi.

I'm "TO:" this to Tomo.

This is the way it used to be for a long time. It was only recently changed by
Tomo because of a bug on non-cache-coherent arches that need to DMA into the
sense_buffer while the CPU also modifies other scsi_cmnd members.

In my opinion all you need is an __aligned(SMP_CACHE_BYTES) declaration at
sense_buffer[] and let there be a hole at the end before the array. But Tomo
did not like that, so he separated the two.

Ideally there should be a MACRO that is defined to WORD_SIZE on cache-coherent
ARCHs and to SMP_CACHE_BYTES on non-cache-coherent systems, and that size would
be used in the __aligned() attribute. (So only the stupid ARCHES get hurt.)
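
Roughly along these lines, as a sketch; the config symbol, the fallback and
the SCSI_SENSE_ALIGN name are made up purely for illustration, not existing
kernel symbols:

	#ifndef SMP_CACHE_BYTES
	#define SMP_CACHE_BYTES	64		/* fallback so the sketch compiles */
	#endif

	#ifdef CONFIG_ARCH_NON_COHERENT_DMA	/* placeholder config symbol */
	#define SCSI_SENSE_ALIGN	SMP_CACHE_BYTES	/* keep the DMA'd buffer in its own lines */
	#else
	#define SCSI_SENSE_ALIGN	sizeof(long)	/* coherent arches pay nothing */
	#endif

	#define SCSI_SENSE_BUFFERSIZE	96

	struct scsi_cmnd_sketch {
		int result;	/* CPU-written fields stay clear of the DMA'd buffer */
		unsigned char sense_buffer[SCSI_SENSE_BUFFERSIZE]
				__attribute__((aligned(SCSI_SENSE_ALIGN)));
	};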

(see below)
> ---
>  drivers/scsi/scsi.c      |   44 ++++++++++----------------------------------
>  include/scsi/scsi_cmnd.h |   12 ++++++------
>  2 files changed, 16 insertions(+), 40 deletions(-)
> 
> diff --git a/drivers/scsi/scsi.c b/drivers/scsi/scsi.c
> index 166417a..6a993af 100644
> --- a/drivers/scsi/scsi.c
> +++ b/drivers/scsi/scsi.c
> @@ -133,7 +133,6 @@ EXPORT_SYMBOL(scsi_device_type);
>  
>  struct scsi_host_cmd_pool {
>  	struct kmem_cache	*cmd_slab;
> -	struct kmem_cache	*sense_slab;
>  	unsigned int		users;
>  	char			*cmd_name;
>  	char			*sense_name;
> @@ -167,20 +166,9 @@ static DEFINE_MUTEX(host_cmd_pool_mutex);
>  static struct scsi_cmnd *
>  scsi_pool_alloc_command(struct scsi_host_cmd_pool *pool, gfp_t gfp_mask)
>  {
> -	struct scsi_cmnd *cmd;
> -
> -	cmd = kmem_cache_zalloc(pool->cmd_slab, gfp_mask | pool->gfp_mask);
> -	if (!cmd)
> -		return NULL;
> +	gfp_t gfp = gfp_mask | pool->gfp_mask;
>  
> -	cmd->sense_buffer = kmem_cache_alloc(pool->sense_slab,
> -					     gfp_mask | pool->gfp_mask);
> -	if (!cmd->sense_buffer) {
> -		kmem_cache_free(pool->cmd_slab, cmd);
> -		return NULL;
> -	}
> -
> -	return cmd;
> +	return kmem_cache_zalloc(pool->cmd_slab, gfp);
>  }
>  
>  /**
> @@ -198,7 +186,6 @@ scsi_pool_free_command(struct scsi_host_cmd_pool *pool,
>  	if (cmd->prot_sdb)
>  		kmem_cache_free(scsi_sdb_cache, cmd->prot_sdb);
>  
> -	kmem_cache_free(pool->sense_slab, cmd->sense_buffer);
>  	kmem_cache_free(pool->cmd_slab, cmd);
>  }
>  
> @@ -242,7 +229,6 @@ scsi_host_alloc_command(struct Scsi_Host *shost, gfp_t gfp_mask)
>  struct scsi_cmnd *__scsi_get_command(struct Scsi_Host *shost, gfp_t gfp_mask)
>  {
>  	struct scsi_cmnd *cmd;
> -	unsigned char *buf;
>  
>  	cmd = scsi_host_alloc_command(shost, gfp_mask);
>  
> @@ -257,11 +243,8 @@ struct scsi_cmnd *__scsi_get_command(struct Scsi_Host *shost, gfp_t gfp_mask)
>  		}
>  		spin_unlock_irqrestore(&shost->free_list_lock, flags);
>  
> -		if (cmd) {
> -			buf = cmd->sense_buffer;
> +		if (cmd)
>  			memset(cmd, 0, sizeof(*cmd));
> -			cmd->sense_buffer = buf;
> -		}
>  	}
>  
>  	return cmd;
> @@ -361,19 +344,13 @@ static struct scsi_host_cmd_pool *scsi_get_host_cmd_pool(gfp_t gfp_mask)
>  	pool = (gfp_mask & __GFP_DMA) ? &scsi_cmd_dma_pool :
>  		&scsi_cmd_pool;
>  	if (!pool->users) {
> -		pool->cmd_slab = kmem_cache_create(pool->cmd_name,
> -						   sizeof(struct scsi_cmnd), 0,
> -						   pool->slab_flags, NULL);
> -		if (!pool->cmd_slab)
> -			goto fail;
> +		unsigned int slab_size;
>  
> -		pool->sense_slab = kmem_cache_create(pool->sense_name,
> -						     SCSI_SENSE_BUFFERSIZE, 0,
> -						     pool->slab_flags, NULL);
> -		if (!pool->sense_slab) {
> -			kmem_cache_destroy(pool->cmd_slab);
> +		slab_size = sizeof(struct scsi_cmnd) + SCSI_SENSE_BUFFERSIZE;

You might as well just define the sense array as unsigned char sense_buffer[SCSI_SENSE_BUFFERSIZE]
and save the manual calculation.

> +		pool->cmd_slab = kmem_cache_create(pool->cmd_name, slab_size,
> +						   0, pool->slab_flags, NULL);
> +		if (!pool->cmd_slab)
>  			goto fail;
> -		}
>  	}
>  
>  	pool->users++;
> @@ -397,10 +374,9 @@ static void scsi_put_host_cmd_pool(gfp_t gfp_mask)
>  	 */
>  	BUG_ON(pool->users == 0);
>  
> -	if (!--pool->users) {
> +	if (!--pool->users)
>  		kmem_cache_destroy(pool->cmd_slab);
> -		kmem_cache_destroy(pool->sense_slab);
> -	}
> +
>  	mutex_unlock(&host_cmd_pool_mutex);
>  }
>  
> diff --git a/include/scsi/scsi_cmnd.h b/include/scsi/scsi_cmnd.h
> index 43b50d3..649ad36 100644
> --- a/include/scsi/scsi_cmnd.h
> +++ b/include/scsi/scsi_cmnd.h
> @@ -102,12 +102,6 @@ struct scsi_cmnd {
>  	struct request *request;	/* The command we are
>  				   	   working on */
>  
> -#define SCSI_SENSE_BUFFERSIZE 	96
> -	unsigned char *sense_buffer;
> -				/* obtained by REQUEST SENSE when
> -				 * CHECK CONDITION is received on original
> -				 * command (auto-sense) */
> -
>  	/* Low-level done function - can be used by low-level driver to point
>  	 *        to completion function.  Not used by mid/upper level code. */
>  	void (*scsi_done) (struct scsi_cmnd *);
> @@ -129,6 +123,12 @@ struct scsi_cmnd {
>  	int result;		/* Status code from lower level driver */
>  
>  	unsigned char tag;	/* SCSI-II queued command tag */
> +
> +#define SCSI_SENSE_BUFFERSIZE 	96
> +	unsigned char sense_buffer[0];

+	unsigned char sense_buffer[BUFFERSIZE]; __aligned(CACHE_COHERENT_BYTES)

> +				/* obtained by REQUEST SENSE when
> +				 * CHECK CONDITION is received on original
> +				 * command (auto-sense) */
>  };
>  
>  extern struct scsi_cmnd *scsi_get_command(struct scsi_device *, gfp_t);

Thanks
Boaz

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 03/13] scsi: unify allocation of scsi command and sense buffer
  2009-05-25  7:50       ` Christoph Hellwig
  2009-05-25  7:54         ` Jens Axboe
@ 2009-05-25 10:33         ` Boaz Harrosh
  2009-05-25 10:42           ` Christoph Hellwig
  2009-05-26  4:36         ` FUJITA Tomonori
  2 siblings, 1 reply; 61+ messages in thread
From: Boaz Harrosh @ 2009-05-25 10:33 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Jens Axboe, linux-kernel, linux-fsdevel, chris.mason, david,
	akpm, jack, yanmin_zhang, linux-scsi, Matthew Wilcox, Andi Kleen,
	James Bottomley

On 05/25/2009 10:50 AM, Christoph Hellwig wrote:
> On Mon, May 25, 2009 at 09:46:47AM +0200, Jens Axboe wrote:
>>> But that patch looks good to me, avoiding one allocation for each
>>> command and simplifying the code.  I try to remember why these were
>>> two slabs to start with but can't find any reason.
>>>

I posted an answer to that here:
http://www.spinics.net/lists/kernel/msg889604.html

It was done for non-cache-coherent systems that need to DMA into the sense_buffer.

>>> Btw, we might just want to declare the sense buffer directly as a sized
>>> array in the scsi command as there really doesn't seem to be a reason
>>> not to allocate it.
>> That is also a workable solution. I've been trying to cut down on the
>> number of allocations required per-IO, and there's definitely still some
>> low hanging fruit there. Some of it is already included, like the inline
>> io_vecs in the bio.
> 
> Btw, one thing I wanted to do for years is to add ->alloc_cmnd and
> ->destroy_cmnd method to the host template which optionally move the
> command allocation to the LLDD.  That way we can embedd the scsi_cmnd
> into the drivers per-commad structure and eliminate another memory
> allocation.  

It is nice in theory, but when trying to implement it I encountered some
problems.

1. If we have a machine with a few types of hosts active, each with its own
cmnd_slab, we end up with many more slabs than today, even though in the
end they all happen to be of the same size. (With the pool reserves it
can also get big.)

2. Some considerations are system-wide and system-dependent (like the above
   problem) and should be centralized in one place, so that if/when things
   change they can be changed in one place.
2.1. Don't trust driver writers to do the right thing.

3. There are common needs that cut across drivers, and no code should be duplicated.
   For example Bidi-Commands, use of scsi_ptr, ISA_DMA, ... and so on.

I totally agree with the need and robustness this will give...

So I think we might approach this in a slightly different way.

Hosts specify a size_of_private_command in the host template, which might include
the common scsi_cmnd + sense_buffer + private_cmnd + optional scsi_ptr +
bidi_data_buffer + ...

scsi_ml has a base-two-sized set of slabs that get allocated on first use
(at host registration), and hosts get to share the pools of the same size.
[Alternatively, hosts just keep a reserved-commands list and regular use gets
 kmalloced.]

All handling is centralized, with special needs specified in the host template,
like dma_mask, ISA flags and such.
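
A self-contained sketch of roughly what I mean (all names below are invented
purely for illustration):

	#include <stdlib.h>

	struct host_template_sketch {
		/* common scsi_cmnd + sense + LLD private part + optional extras */
		size_t size_of_private_command;
	};

	#define NR_CMD_POOLS	8

	struct cmd_pool_sketch {
		size_t		object_size;	/* power of two */
		unsigned int	users;		/* hosts sharing this pool */
	};

	static struct cmd_pool_sketch cmd_pools[NR_CMD_POOLS];

	/* round the host's total command size up to a power of two and share
	 * an existing pool of that size if one is already in use */
	static struct cmd_pool_sketch *get_cmd_pool(size_t size)
	{
		size_t rounded = 256;
		int i;

		while (rounded < size)
			rounded <<= 1;

		for (i = 0; i < NR_CMD_POOLS; i++)
			if (cmd_pools[i].users && cmd_pools[i].object_size == rounded) {
				cmd_pools[i].users++;
				return &cmd_pools[i];
			}

		for (i = 0; i < NR_CMD_POOLS; i++)
			if (!cmd_pools[i].users) {
				cmd_pools[i].object_size = rounded;
				cmd_pools[i].users = 1;
				return &cmd_pools[i];
			}

		return NULL;
	}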

> Also this would naturally extend the keep one cmnd pool
> to drivers without requiring additional code.  As a second step it
> would also allow killing the scsi_host_cmd_pool byt just having
> a set of library routines that drivers which need SLAB_CACHE_DMA can
> use.
> 

I'm afraid this will need to be done first. Lay out the new facilities
and implement today's lowest common denominator on top of that. Then convert
driver by driver. Finally, remove the old cruft.

Let's all agree on a rough sketch and we can all get behind it. There are
a few people I know who will help: Matthew Wilcox, me, perhaps Jens
and Christoph.

This will also finally help Andi Kleen's needs with the masked allocators.

Boaz

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 03/13] scsi: unify allocation of scsi command and sense buffer
  2009-05-25 10:33         ` Boaz Harrosh
@ 2009-05-25 10:42           ` Christoph Hellwig
  2009-05-25 10:49             ` Jens Axboe
  0 siblings, 1 reply; 61+ messages in thread
From: Christoph Hellwig @ 2009-05-25 10:42 UTC (permalink / raw)
  To: Boaz Harrosh
  Cc: Christoph Hellwig, Jens Axboe, linux-kernel, linux-fsdevel,
	chris.mason, david, akpm, jack, yanmin_zhang, linux-scsi,
	Matthew Wilcox, Andi Kleen, James Bottomley

On Mon, May 25, 2009 at 01:33:42PM +0300, Boaz Harrosh wrote:
> 1. If we have a machine with few type of hosts active each with it's own
> cmnd_slab we end up with many more slabs then today. Even though at the
> end they all happen to be of the same size. (With the pool reserves it
> can get big also).

Note that this should be optional.  Devices not having their own
per-command structure would continue using the global pools.  Those
that have their own per-command structures already have their own pools
anyway.

> Hosts specify an size_of_private_command at host template, which might include
> the common-scsi_cmnd + sense_buffer + private_cmnd + optional scsi_ptr +
> bidi_data_buffer + ...

That sounds fine, too. 


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 03/13] scsi: unify allocation of scsi command and sense buffer
  2009-05-25 10:42           ` Christoph Hellwig
@ 2009-05-25 10:49             ` Jens Axboe
  0 siblings, 0 replies; 61+ messages in thread
From: Jens Axboe @ 2009-05-25 10:49 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Boaz Harrosh, linux-kernel, linux-fsdevel, chris.mason, david,
	akpm, jack, yanmin_zhang, linux-scsi, Matthew Wilcox, Andi Kleen,
	James Bottomley

On Mon, May 25 2009, Christoph Hellwig wrote:
> On Mon, May 25, 2009 at 01:33:42PM +0300, Boaz Harrosh wrote:
> > 1. If we have a machine with few type of hosts active each with it's own
> > cmnd_slab we end up with many more slabs then today. Even though at the
> > end they all happen to be of the same size. (With the pool reserves it
> > can get big also).
> 
> Note that this should be optional.  Device not having their own
> per-command structure would continue using the global pools.  Those
> that have their own per-command structures already have their own pools
> anyway.

The multiple pools of the same size "issue" can also easily be resolved
by having SCSI provide a way to set up/destroy these pools. Then it can
just reuse an existing pool if it has the same size.

However, I doubt that this is really a real-life issue that's worth
worrying about.

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 03/13] scsi: unify allocation of scsi command and sense buffer
  2009-05-25  8:15     ` Pekka Enberg
@ 2009-05-25 11:32     ` Nick Piggin
  -1 siblings, 0 replies; 61+ messages in thread
From: Nick Piggin @ 2009-05-25 11:32 UTC (permalink / raw)
  To: Pekka Enberg
  Cc: Jens Axboe, linux-kernel, linux-fsdevel, chris.mason, david, hch,
	akpm, jack, yanmin_zhang, Christoph Lameter, Matthew Wilcox

On Mon, May 25, 2009 at 11:15:42AM +0300, Pekka Enberg wrote:
> On Mon, May 25, 2009 at 10:30 AM, Jens Axboe <jens.axboe@oracle.com> wrote:
> > Fold the sense buffer into the command, thereby eliminating a slab
> > allocation and free per command.
> >
> > Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
> 
> Interesting. I wonder how this affects the SLAB vs. SLUB regression
> people are seeing on high end machines in OLTP benchmarks.

It could improve it. I think these allocations (bios, requests, commands, etc.)
are what SLUB has trouble with in that workload, so eliminating one of them
should help it. I guess it will help the other allocators as well, but maybe
with a smaller relative improvement?


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 03/13] scsi: unify allocation of scsi command and sense buffer
  2009-05-25  9:28   ` Boaz Harrosh
@ 2009-05-26  1:45     ` Roland Dreier
  2009-05-26  4:36       ` FUJITA Tomonori
  2009-05-26  5:23     ` FUJITA Tomonori
  1 sibling, 1 reply; 61+ messages in thread
From: Roland Dreier @ 2009-05-26  1:45 UTC (permalink / raw)
  To: Boaz Harrosh
  Cc: Jens Axboe, FUJITA Tomonori, linux-kernel, linux-fsdevel,
	chris.mason, david, hch, akpm, jack, yanmin_zhang

 > Ideally there should be a MACRO that is defined to WORD_SIZE on cache-coherent
 > ARCHs and to SMP_CACHE_BYTES on none-cache-coherent systems and use that size
 > at the __align() attribute. (So only stupid ARCHES get hurt)

this seems to come up repeatedly -- I had a proposal a _long_ time ago
that never quite got merged, cf http://lwn.net/Articles/2265/ and
http://lwn.net/Articles/2269/ -- from 2002 (!?).  The idea is to go a
step further and create a __dma_buffer annotation for structure members.

Maybe I should resurrect that work one more time?

 - R.

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 03/13] scsi: unify allocation of scsi command and sense buffer
  2009-05-26  1:45     ` Roland Dreier
@ 2009-05-26  4:36       ` FUJITA Tomonori
  2009-05-26  6:29         ` Jens Axboe
  0 siblings, 1 reply; 61+ messages in thread
From: FUJITA Tomonori @ 2009-05-26  4:36 UTC (permalink / raw)
  To: rdreier
  Cc: bharrosh, jens.axboe, fujita.tomonori, linux-kernel,
	linux-fsdevel, chris.mason, david, hch, akpm, jack, yanmin_zhang

On Mon, 25 May 2009 18:45:25 -0700
Roland Dreier <rdreier@cisco.com> wrote:

>  > Ideally there should be a MACRO that is defined to WORD_SIZE on cache-coherent
>  > ARCHs and to SMP_CACHE_BYTES on none-cache-coherent systems and use that size
>  > at the __align() attribute. (So only stupid ARCHES get hurt)
> 
> this seems to come up repeatedly -- I had a proposal a _long_ time ago
> that never quite got merged, cf http://lwn.net/Articles/2265/ and
> http://lwn.net/Articles/2269/ -- from 2002 (!?).  The idea is to go a

Yeah, I think Benjamin did that last time:

http://www.mail-archive.com/linux-scsi@vger.kernel.org/msg12632.html

IIRC, James didn't like it so I wrote the current code. I didn't see
any big performance difference with scsi_debug:

http://marc.info/?l=linux-scsi&m=120038907123706&w=2

Jens, do you see a performance difference due to this unification?


Personally, I don't fancy the __cached_alignment__ annotation much. I
prefer to keep that hidden behind the memory allocator.


> step further and create a __dma_buffer annotation for structure members.



^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 03/13] scsi: unify allocation of scsi command and sense buffer
  2009-05-25  7:50       ` Christoph Hellwig
  2009-05-25  7:54         ` Jens Axboe
  2009-05-25 10:33         ` Boaz Harrosh
@ 2009-05-26  4:36         ` FUJITA Tomonori
  2009-05-26  5:08           ` FUJITA Tomonori
  2 siblings, 1 reply; 61+ messages in thread
From: FUJITA Tomonori @ 2009-05-26  4:36 UTC (permalink / raw)
  To: hch
  Cc: jens.axboe, linux-kernel, linux-fsdevel, chris.mason, david,
	akpm, jack, yanmin_zhang, linux-scsi

On Mon, 25 May 2009 03:50:08 -0400
Christoph Hellwig <hch@infradead.org> wrote:

> On Mon, May 25, 2009 at 09:46:47AM +0200, Jens Axboe wrote:
> > > But that patch looks good to me, avoiding one allocation for each
> > > command and simplifying the code.  I try to remember why these were
> > > two slabs to start with but can't find any reason.
> > > 
> > > Btw, we might just want to declare the sense buffer directly as a sized
> > > array in the scsi command as there really doesn't seem to be a reason
> > > not to allocate it.
> > 
> > That is also a workable solution. I've been trying to cut down on the
> > number of allocations required per-IO, and there's definitely still some
> > low hanging fruit there. Some of it is already included, like the inline
> > io_vecs in the bio.
> 
> Btw, one thing I wanted to do for years is to add ->alloc_cmnd and
> ->destroy_cmnd method to the host template which optionally move the
> command allocation to the LLDD.  That way we can embedd the scsi_cmnd
> into the drivers per-commad structure and eliminate another memory
> allocation.  Also this would naturally extend the keep one cmnd pool
> to drivers without requiring additional code.  As a second step it
> would also allow killing the scsi_host_cmd_pool byt just having
> a set of library routines that drivers which need SLAB_CACHE_DMA can
> use.

We discussed this idea when I rewrote the sense allocation code, I
think.

I like the idea of unifying scsi_cmnd and the LLDs' per-command
structure; however, there is one tricky thing about it.

Currently, an LLD frees (or reuses) its per-command structure when it
calls scsi_done(). SCSI-ml uses the scsi_cmnd after that, so we need to
change the lifetime management (so we need to inspect all the LLDs;
e.g. this change will break the iscsi LLD).

With that change, we can't tell LLDs how many per-command structures are
possibly necessary. In general, LLDs want to know the maximum number
of per-command structures; drivers allocate a number of per-command
structures equal to host_template->can_queue.

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 03/13] scsi: unify allocation of scsi command and sense buffer
  2009-05-26  4:36         ` FUJITA Tomonori
@ 2009-05-26  5:08           ` FUJITA Tomonori
  0 siblings, 0 replies; 61+ messages in thread
From: FUJITA Tomonori @ 2009-05-26  5:08 UTC (permalink / raw)
  To: fujita.tomonori
  Cc: hch, jens.axboe, linux-kernel, linux-fsdevel, chris.mason, david,
	akpm, jack, yanmin_zhang, linux-scsi

On Tue, 26 May 2009 13:36:43 +0900
FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp> wrote:

> On Mon, 25 May 2009 03:50:08 -0400
> Christoph Hellwig <hch@infradead.org> wrote:
> 
> > On Mon, May 25, 2009 at 09:46:47AM +0200, Jens Axboe wrote:
> > > > But that patch looks good to me, avoiding one allocation for each
> > > > command and simplifying the code.  I try to remember why these were
> > > > two slabs to start with but can't find any reason.
> > > > 
> > > > Btw, we might just want to declare the sense buffer directly as a sized
> > > > array in the scsi command as there really doesn't seem to be a reason
> > > > not to allocate it.
> > > 
> > > That is also a workable solution. I've been trying to cut down on the
> > > number of allocations required per-IO, and there's definitely still some
> > > low hanging fruit there. Some of it is already included, like the inline
> > > io_vecs in the bio.
> > 
> > Btw, one thing I wanted to do for years is to add ->alloc_cmnd and
> > ->destroy_cmnd method to the host template which optionally move the
> > command allocation to the LLDD.  That way we can embedd the scsi_cmnd
> > into the drivers per-commad structure and eliminate another memory
> > allocation.  Also this would naturally extend the keep one cmnd pool
> > to drivers without requiring additional code.  As a second step it
> > would also allow killing the scsi_host_cmd_pool byt just having
> > a set of library routines that drivers which need SLAB_CACHE_DMA can
> > use.
> 
> We discussed this idea when I rewrote the sense allocation code, I
> think.
> 
> I like that idea that unifying scsi_cmnd and llds' per-commad
> structure however there is one tricky thing about it.
> 
> Currently, a lld frees (or reuses) its per-commad structure when it
> calls scsi_done(). SCSI-ml uses scsi_cmd after that so we need to
> change the lifetime management (so we need to inspect all the llds,
> e.g. this change will break iscsi ldd).

Oops, as you said, this can be optional (so we don't need to convert
all LLDs). But as I said, this changes the definition of when a
scsi_cmnd is freed, and LLDs won't like that change, I think.


> With that change, we can't tell llds how many per-commad structure are
> possibly necessary. In general, LLDs want to know the maximum number
> of per-commad structure; drivers allocates the number of per-commad
> structure equal to host_template->can_queue.

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 03/13] scsi: unify allocation of scsi command and sense buffer
  2009-05-25  9:28   ` Boaz Harrosh
  2009-05-26  1:45     ` Roland Dreier
@ 2009-05-26  5:23     ` FUJITA Tomonori
  1 sibling, 0 replies; 61+ messages in thread
From: FUJITA Tomonori @ 2009-05-26  5:23 UTC (permalink / raw)
  To: bharrosh
  Cc: jens.axboe, fujita.tomonori, linux-kernel, linux-fsdevel,
	chris.mason, david, hch, akpm, jack, yanmin_zhang

On Mon, 25 May 2009 12:28:01 +0300
Boaz Harrosh <bharrosh@panasas.com> wrote:

> On 05/25/2009 10:30 AM, Jens Axboe wrote:
> > Fold the sense buffer into the command, thereby eliminating a slab
> > allocation and free per command.
> > 
> > Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
> 
> Jens Hi.
> 
> I'm "TO:" this to Tomo.
> 
> This is the way it used to be for a long time. It was only recently changed by
> Tomo because of a bug on none-cache-coherent arches that need to dma-access the
> sense_buffer and also on the other hand change scsi_cmnd members by CPU.
> 
> In my opinion all you need is an __aligned(SMP_CACHE_BYTES) declaration at
> sense_buffer[] and let there be a hole at the end before the array. But Tomo
> did not like that, so he separated the two.

IIRC, it was not my opinion :) I don't think that putting cacheline
alignment there is a good idea, though.

If the separate sense buffer allocation actually hurts performance,
then I prefer the ->alloc_cmnd and ->destroy_cmnd hook idea. Most
llds would stay happy with the current sense buffer scheme, and the
ones that care could use the ->alloc_cmnd and ->destroy_cmnd hooks for
better performance.

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 03/13] scsi: unify allocation of scsi command and sense buffer
  2009-05-26  4:36       ` FUJITA Tomonori
@ 2009-05-26  6:29         ` Jens Axboe
  2009-05-26  7:25           ` FUJITA Tomonori
  0 siblings, 1 reply; 61+ messages in thread
From: Jens Axboe @ 2009-05-26  6:29 UTC (permalink / raw)
  To: FUJITA Tomonori
  Cc: rdreier, bharrosh, linux-kernel, linux-fsdevel, chris.mason,
	david, hch, akpm, jack, yanmin_zhang

On Tue, May 26 2009, FUJITA Tomonori wrote:
> On Mon, 25 May 2009 18:45:25 -0700
> Roland Dreier <rdreier@cisco.com> wrote:
> 
> >  > Ideally there should be a MACRO that is defined to WORD_SIZE on cache-coherent
> >  > ARCHs and to SMP_CACHE_BYTES on none-cache-coherent systems and use that size
> >  > at the __align() attribute. (So only stupid ARCHES get hurt)
> > 
> > this seems to come up repeatedly -- I had a proposal a _long_ time ago
> > that never quite got merged, cf http://lwn.net/Articles/2265/ and
> > http://lwn.net/Articles/2269/ -- from 2002 (!?).  The idea is to go a
> 
> Yeah, I think that Benjamin did last time:
> 
> http://www.mail-archive.com/linux-scsi@vger.kernel.org/msg12632.html
> 
> IIRC, James didn't like it so I wrote the current code. I didn't see
> any big performance difference with scsi_debug:
> 
> http://marc.info/?l=linux-scsi&m=120038907123706&w=2
> 
> Jens, you see the performance difference due to this unification?

Yes, it's definitely a worth while optimization. The problem isn't as
such this specific allocation, it's the total number of allocations we
do for a piece of IO. This sense buffer one is just one of many, I'm
continually working to reduce them. If we get rid of this one and add
the ->alloc_cmd() stuff, we can kill one more. The bio path already lost
one. So in the IO stack, we went from 6 allocations to 3 for a piece of
IO. And then it starts to add up. Even at just 30-50k iops, that's more
than 1% of time in the testing I did.

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 03/13] scsi: unify allocation of scsi command and sense buffer
  2009-05-26  6:29         ` Jens Axboe
@ 2009-05-26  7:25           ` FUJITA Tomonori
  2009-05-26  7:32             ` Jens Axboe
  0 siblings, 1 reply; 61+ messages in thread
From: FUJITA Tomonori @ 2009-05-26  7:25 UTC (permalink / raw)
  To: jens.axboe, James.Bottomley
  Cc: fujita.tomonori, rdreier, bharrosh, linux-kernel, linux-fsdevel,
	chris.mason, david, hch, akpm, jack, yanmin_zhang, linux-scsi

On Tue, 26 May 2009 08:29:53 +0200
Jens Axboe <jens.axboe@oracle.com> wrote:

> On Tue, May 26 2009, FUJITA Tomonori wrote:
> > On Mon, 25 May 2009 18:45:25 -0700
> > Roland Dreier <rdreier@cisco.com> wrote:
> > 
> > >  > Ideally there should be a MACRO that is defined to WORD_SIZE on cache-coherent
> > >  > ARCHs and to SMP_CACHE_BYTES on none-cache-coherent systems and use that size
> > >  > at the __align() attribute. (So only stupid ARCHES get hurt)
> > > 
> > > this seems to come up repeatedly -- I had a proposal a _long_ time ago
> > > that never quite got merged, cf http://lwn.net/Articles/2265/ and
> > > http://lwn.net/Articles/2269/ -- from 2002 (!?).  The idea is to go a
> > 
> > Yeah, I think that Benjamin did last time:
> > 
> > http://www.mail-archive.com/linux-scsi@vger.kernel.org/msg12632.html
> > 
> > IIRC, James didn't like it so I wrote the current code. I didn't see
> > any big performance difference with scsi_debug:
> > 
> > http://marc.info/?l=linux-scsi&m=120038907123706&w=2
> > 
> > Jens, you see the performance difference due to this unification?
> 
> Yes, it's definitely a worth while optimization. The problem isn't as
> such this specific allocation, it's the total number of allocations we
> do for a piece of IO. This sense buffer one is just one of many, I'm
> continually working to reduce them. If we get rid of this one and add
> the ->alloc_cmd() stuff, we can kill one more. The bio path already lost
> one. So in the IO stack, we went from 6 allocations to 3 for a piece of
> IO. And then it starts to add up. Even at just 30-50k iops, that's more
> than 1% of time in the testing I did.

I see, thanks. Hmm, possibly slab becomes slower. ;)

Then I think that we need something like the ->alloc_cmd()
method. Let's ask James. 

I don't think that it's just about simply adding the hook; there are
some issues that we need to think about. Though Boaz worries a bit too
much, I think.

I'm not sure about this patch if we add ->alloc_cmd(). I doubt that
any llds that don't use ->alloc_cmd() would worry about the overhead of
the separate sense buffer allocation. If a lld doesn't define its own
alloc_cmd, then I think it's fine to use the generic command
allocator that does the separate sense buffer allocation.

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 03/13] scsi: unify allocation of scsi command and sense buffer
  2009-05-26  7:25           ` FUJITA Tomonori
@ 2009-05-26  7:32             ` Jens Axboe
  2009-05-26  7:38               ` FUJITA Tomonori
  2009-05-26  7:56               ` FUJITA Tomonori
  0 siblings, 2 replies; 61+ messages in thread
From: Jens Axboe @ 2009-05-26  7:32 UTC (permalink / raw)
  To: FUJITA Tomonori
  Cc: James.Bottomley, rdreier, bharrosh, linux-kernel, linux-fsdevel,
	chris.mason, david, hch, akpm, jack, yanmin_zhang, linux-scsi

On Tue, May 26 2009, FUJITA Tomonori wrote:
> On Tue, 26 May 2009 08:29:53 +0200
> Jens Axboe <jens.axboe@oracle.com> wrote:
> 
> > On Tue, May 26 2009, FUJITA Tomonori wrote:
> > > On Mon, 25 May 2009 18:45:25 -0700
> > > Roland Dreier <rdreier@cisco.com> wrote:
> > > 
> > > >  > Ideally there should be a MACRO that is defined to WORD_SIZE on cache-coherent
> > > >  > ARCHs and to SMP_CACHE_BYTES on none-cache-coherent systems and use that size
> > > >  > at the __align() attribute. (So only stupid ARCHES get hurt)
> > > > 
> > > > this seems to come up repeatedly -- I had a proposal a _long_ time ago
> > > > that never quite got merged, cf http://lwn.net/Articles/2265/ and
> > > > http://lwn.net/Articles/2269/ -- from 2002 (!?).  The idea is to go a
> > > 
> > > Yeah, I think that Benjamin did last time:
> > > 
> > > http://www.mail-archive.com/linux-scsi@vger.kernel.org/msg12632.html
> > > 
> > > IIRC, James didn't like it so I wrote the current code. I didn't see
> > > any big performance difference with scsi_debug:
> > > 
> > > http://marc.info/?l=linux-scsi&m=120038907123706&w=2
> > > 
> > > Jens, you see the performance difference due to this unification?
> > 
> > Yes, it's definitely a worth while optimization. The problem isn't as
> > such this specific allocation, it's the total number of allocations we
> > do for a piece of IO. This sense buffer one is just one of many, I'm
> > continually working to reduce them. If we get rid of this one and add
> > the ->alloc_cmd() stuff, we can kill one more. The bio path already lost
> > one. So in the IO stack, we went from 6 allocations to 3 for a piece of
> > IO. And then it starts to add up. Even at just 30-50k iops, that's more
> > than 1% of time in the testing I did.
> 
> I see, thanks. Hmm, possibly slab becomes slower. ;)
> 
> Then I think that we need something like the ->alloc_cmd()
> method. Let's ask James. 
> 
> I don't think that it's just about simply adding the hook; there are
> some issues that we need to think about. Though Boaz worries too much
> a bit, I think.
> 
> I'm not sure about this patch if we add ->alloc_cmd(). I doubt that
> there are any llds don't use ->alloc_cmd() worry about the overhead of
> the separated sense buffer allocation. If a lld doesn't define the own
> alloc_cmd, then I think it's fine to use the generic command
> allocator that does the separate sense buffer allocation.

I think we should do the two things separately. If we can safely inline
the sense buffer in the command by doing the right alignment, then let's
do that. The ->alloc_cmd() approach will be easier to do with an inline
sense buffer.

But there's really no reason to tie the two things together.

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 03/13] scsi: unify allocation of scsi command and sense buffer
  2009-05-26  7:32             ` Jens Axboe
@ 2009-05-26  7:38               ` FUJITA Tomonori
  2009-05-26 14:47                 ` James Bottomley
  2009-05-26  7:56               ` FUJITA Tomonori
  1 sibling, 1 reply; 61+ messages in thread
From: FUJITA Tomonori @ 2009-05-26  7:38 UTC (permalink / raw)
  To: jens.axboe
  Cc: fujita.tomonori, James.Bottomley, rdreier, bharrosh,
	linux-kernel, linux-fsdevel, chris.mason, david, hch, akpm, jack,
	yanmin_zhang, linux-scsi

On Tue, 26 May 2009 09:32:29 +0200
Jens Axboe <jens.axboe@oracle.com> wrote:

> On Tue, May 26 2009, FUJITA Tomonori wrote:
> > On Tue, 26 May 2009 08:29:53 +0200
> > Jens Axboe <jens.axboe@oracle.com> wrote:
> > 
> > > On Tue, May 26 2009, FUJITA Tomonori wrote:
> > > > On Mon, 25 May 2009 18:45:25 -0700
> > > > Roland Dreier <rdreier@cisco.com> wrote:
> > > > 
> > > > >  > Ideally there should be a MACRO that is defined to WORD_SIZE on cache-coherent
> > > > >  > ARCHs and to SMP_CACHE_BYTES on none-cache-coherent systems and use that size
> > > > >  > at the __align() attribute. (So only stupid ARCHES get hurt)
> > > > > 
> > > > > this seems to come up repeatedly -- I had a proposal a _long_ time ago
> > > > > that never quite got merged, cf http://lwn.net/Articles/2265/ and
> > > > > http://lwn.net/Articles/2269/ -- from 2002 (!?).  The idea is to go a
> > > > 
> > > > Yeah, I think that Benjamin did last time:
> > > > 
> > > > http://www.mail-archive.com/linux-scsi@vger.kernel.org/msg12632.html
> > > > 
> > > > IIRC, James didn't like it so I wrote the current code. I didn't see
> > > > any big performance difference with scsi_debug:
> > > > 
> > > > http://marc.info/?l=linux-scsi&m=120038907123706&w=2
> > > > 
> > > > Jens, you see the performance difference due to this unification?
> > > 
> > > Yes, it's definitely a worth while optimization. The problem isn't as
> > > such this specific allocation, it's the total number of allocations we
> > > do for a piece of IO. This sense buffer one is just one of many, I'm
> > > continually working to reduce them. If we get rid of this one and add
> > > the ->alloc_cmd() stuff, we can kill one more. The bio path already lost
> > > one. So in the IO stack, we went from 6 allocations to 3 for a piece of
> > > IO. And then it starts to add up. Even at just 30-50k iops, that's more
> > > than 1% of time in the testing I did.
> > 
> > I see, thanks. Hmm, possibly slab becomes slower. ;)
> > 
> > Then I think that we need something like the ->alloc_cmd()
> > method. Let's ask James. 
> > 
> > I don't think that it's just about simply adding the hook; there are
> > some issues that we need to think about. Though Boaz worries too much
> > a bit, I think.
> > 
> > I'm not sure about this patch if we add ->alloc_cmd(). I doubt that
> > there are any llds don't use ->alloc_cmd() worry about the overhead of
> > the separated sense buffer allocation. If a lld doesn't define the own
> > alloc_cmd, then I think it's fine to use the generic command
> > allocator that does the separate sense buffer allocation.
> 
> I think we should do the two things seperately. If we can safely inline
> the sense buffer in the command by doing the right alignment, then lets
> do that. The ->alloc_cmd() approach will be easier to do with an inline
> sense buffer.

James rejected this in the past. Let's wait for his verdict.

Yeah, we can inline the sense buffer but as we discussed in the past
several times, there are some good reasons that we should not do so, I
think.


> But there's really no reason to tie the two things together.

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 03/13] scsi: unify allocation of scsi command and sense buffer
  2009-05-26  7:32             ` Jens Axboe
  2009-05-26  7:38               ` FUJITA Tomonori
@ 2009-05-26  7:56               ` FUJITA Tomonori
  1 sibling, 0 replies; 61+ messages in thread
From: FUJITA Tomonori @ 2009-05-26  7:56 UTC (permalink / raw)
  To: jens.axboe
  Cc: fujita.tomonori, James.Bottomley, rdreier, bharrosh,
	linux-kernel, linux-fsdevel, chris.mason, david, hch, akpm, jack,
	yanmin_zhang, linux-scsi

On Tue, 26 May 2009 09:32:29 +0200
Jens Axboe <jens.axboe@oracle.com> wrote:

> On Tue, May 26 2009, FUJITA Tomonori wrote:
> > On Tue, 26 May 2009 08:29:53 +0200
> > Jens Axboe <jens.axboe@oracle.com> wrote:
> > 
> > > On Tue, May 26 2009, FUJITA Tomonori wrote:
> > > > On Mon, 25 May 2009 18:45:25 -0700
> > > > Roland Dreier <rdreier@cisco.com> wrote:
> > > > 
> > > > >  > Ideally there should be a MACRO that is defined to WORD_SIZE on cache-coherent
> > > > >  > ARCHs and to SMP_CACHE_BYTES on none-cache-coherent systems and use that size
> > > > >  > at the __align() attribute. (So only stupid ARCHES get hurt)
> > > > > 
> > > > > this seems to come up repeatedly -- I had a proposal a _long_ time ago
> > > > > that never quite got merged, cf http://lwn.net/Articles/2265/ and
> > > > > http://lwn.net/Articles/2269/ -- from 2002 (!?).  The idea is to go a
> > > > 
> > > > Yeah, I think that Benjamin did last time:
> > > > 
> > > > http://www.mail-archive.com/linux-scsi@vger.kernel.org/msg12632.html
> > > > 
> > > > IIRC, James didn't like it so I wrote the current code. I didn't see
> > > > any big performance difference with scsi_debug:
> > > > 
> > > > http://marc.info/?l=linux-scsi&m=120038907123706&w=2
> > > > 
> > > > Jens, you see the performance difference due to this unification?
> > > 
> > > Yes, it's definitely a worth while optimization. The problem isn't as
> > > such this specific allocation, it's the total number of allocations we
> > > do for a piece of IO. This sense buffer one is just one of many, I'm
> > > continually working to reduce them. If we get rid of this one and add
> > > the ->alloc_cmd() stuff, we can kill one more. The bio path already lost
> > > one. So in the IO stack, we went from 6 allocations to 3 for a piece of
> > > IO. And then it starts to add up. Even at just 30-50k iops, that's more
> > > than 1% of time in the testing I did.
> > 
> > I see, thanks. Hmm, possibly slab becomes slower. ;)
> > 
> > Then I think that we need something like the ->alloc_cmd()
> > method. Let's ask James. 
> > 
> > I don't think that it's just about simply adding the hook; there are
> > some issues that we need to think about. Though Boaz worries too much
> > a bit, I think.
> > 
> > I'm not sure about this patch if we add ->alloc_cmd(). I doubt that
> > there are any llds don't use ->alloc_cmd() worry about the overhead of
> > the separated sense buffer allocation. If a lld doesn't define the own
> > alloc_cmd, then I think it's fine to use the generic command
> > allocator that does the separate sense buffer allocation.
> 
> I think we should do the two things seperately. If we can safely inline
> the sense buffer in the command by doing the right alignment, then lets
> do that. The ->alloc_cmd() approach will be easier to do with an inline
> sense buffer.

BTW, alignment alone is not enough (Boaz didn't point that out, I
think). You need alignment and a hole after the buffer:

http://lkml.org/lkml/2007/12/20/661


I think that is one of the good reasons we should not inline the sense
buffer: it would enlarge scsi_cmnd quite a lot.
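
To picture the cost, a minimal sketch of the layout an inline, DMA-able
sense buffer would force (this is not the real scsi_cmnd; the member names
are only illustrative):

/* Illustration only; not the actual scsi_cmnd definition. */
struct cmd_with_inline_sense {
        /* ... CPU-written fields ... */

        /* DMA target: must start on its own cache line ...              */
        unsigned char sense_buffer[SCSI_SENSE_BUFFERSIZE]
                                        ____cacheline_aligned;

        /* ... and the next CPU-written field must not share the sense
         * buffer's last cache line, or a cache invalidate after the DMA
         * can wipe out a CPU store on a non-coherent machine.  Pushing
         * it onto the next line is the "hole"; with L1 lines of up to
         * 128 bytes on some architectures, that padding adds up.        */
        int result ____cacheline_aligned;

        /* ... remaining fields ... */
};

Multiplied across every queued command, that padding is the scsi_cmnd
growth being objected to here.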

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 07/13] block: avoid indirect calls to enter cfq io  scheduler
  2009-05-25  7:30 ` [PATCH 07/13] block: avoid indirect calls to enter cfq io scheduler Jens Axboe
@ 2009-05-26  9:02     ` Nikanth K
  0 siblings, 0 replies; 61+ messages in thread
From: Nikanth K @ 2009-05-26  9:02 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-kernel, linux-fsdevel, chris.mason, david, hch, akpm, jack,
	yanmin_zhang, Nikanth Karthikesan

On Mon, May 25, 2009 at 1:00 PM, Jens Axboe <jens.axboe@oracle.com> wrote:
>
> They can be expensive, since CPUs generally do not branch predict
> well for them.
>

Can't gcc take care of this? Comparing a pointer and then calling the
function directly without using the pointer! Won't this increase the
text size of the kernel and possibly degrade performance? Do you have
any measurement of the improvement? Is this kind of optimization being
used elsewhere?

Thanks
Nikanth
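
For reference, the transformation being asked about has roughly this shape;
a plain-C illustration, not the actual patch:

/* Plain-C illustration of the "compare the ops pointer, then call
 * directly" pattern; names are only illustrative.                      */
#include <stdio.h>

struct ops { void (*insert)(int rq); };

static void cfq_insert(int rq)      { printf("cfq insert %d\n", rq); }
static void deadline_insert(int rq) { printf("deadline insert %d\n", rq); }

static const struct ops cfq_ops      = { .insert = cfq_insert };
static const struct ops deadline_ops = { .insert = deadline_insert };

static void dispatch(const struct ops *o, int rq)
{
        if (o == &cfq_ops)
                cfq_insert(rq);         /* direct call, easy to predict  */
        else
                o->insert(rq);          /* others keep the indirect call */
}

int main(void)
{
        dispatch(&cfq_ops, 1);
        dispatch(&deadline_ops, 2);
        return 0;
}

gcc cannot do this on its own because the ops pointer is loaded from runtime
data, so it cannot prove which function will be called; the price is the
extra compare and a little more text, which is exactly the trade-off the
question above is probing.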

^ permalink raw reply	[flat|nested] 61+ messages in thread


* Re: [PATCH 03/13] scsi: unify allocation of scsi command and sense buffer
  2009-05-26  7:38               ` FUJITA Tomonori
@ 2009-05-26 14:47                 ` James Bottomley
  2009-05-26 15:13                   ` Matthew Wilcox
                                     ` (2 more replies)
  0 siblings, 3 replies; 61+ messages in thread
From: James Bottomley @ 2009-05-26 14:47 UTC (permalink / raw)
  To: FUJITA Tomonori
  Cc: jens.axboe, rdreier, bharrosh, linux-kernel, linux-fsdevel,
	chris.mason, david, hch, akpm, jack, yanmin_zhang, linux-scsi

On Tue, 2009-05-26 at 16:38 +0900, FUJITA Tomonori wrote:
> On Tue, 26 May 2009 09:32:29 +0200
> Jens Axboe <jens.axboe@oracle.com> wrote:
> 
> > On Tue, May 26 2009, FUJITA Tomonori wrote:
> > > On Tue, 26 May 2009 08:29:53 +0200
> > > Jens Axboe <jens.axboe@oracle.com> wrote:
> > > 
> > > > On Tue, May 26 2009, FUJITA Tomonori wrote:
> > > > > On Mon, 25 May 2009 18:45:25 -0700
> > > > > Roland Dreier <rdreier@cisco.com> wrote:
> > > > > 
> > > > > >  > Ideally there should be a MACRO that is defined to WORD_SIZE on cache-coherent
> > > > > >  > ARCHs and to SMP_CACHE_BYTES on none-cache-coherent systems and use that size
> > > > > >  > at the __align() attribute. (So only stupid ARCHES get hurt)
> > > > > > 
> > > > > > this seems to come up repeatedly -- I had a proposal a _long_ time ago
> > > > > > that never quite got merged, cf http://lwn.net/Articles/2265/ and
> > > > > > http://lwn.net/Articles/2269/ -- from 2002 (!?).  The idea is to go a
> > > > > 
> > > > > Yeah, I think that Benjamin did last time:
> > > > > 
> > > > > http://www.mail-archive.com/linux-scsi@vger.kernel.org/msg12632.html
> > > > > 
> > > > > IIRC, James didn't like it so I wrote the current code. I didn't see
> > > > > any big performance difference with scsi_debug:
> > > > > 
> > > > > http://marc.info/?l=linux-scsi&m=120038907123706&w=2
> > > > > 
> > > > > Jens, you see the performance difference due to this unification?
> > > > 
> > > > Yes, it's definitely a worth while optimization. The problem isn't as
> > > > such this specific allocation, it's the total number of allocations we
> > > > do for a piece of IO. This sense buffer one is just one of many, I'm
> > > > continually working to reduce them. If we get rid of this one and add
> > > > the ->alloc_cmd() stuff, we can kill one more. The bio path already lost
> > > > one. So in the IO stack, we went from 6 allocations to 3 for a piece of
> > > > IO. And then it starts to add up. Even at just 30-50k iops, that's more
> > > > than 1% of time in the testing I did.
> > > 
> > > I see, thanks. Hmm, possibly slab becomes slower. ;)
> > > 
> > > Then I think that we need something like the ->alloc_cmd()
> > > method. Let's ask James. 
> > > 
> > > I don't think that it's just about simply adding the hook; there are
> > > some issues that we need to think about. Though Boaz worries too much
> > > a bit, I think.
> > > 
> > > I'm not sure about this patch if we add ->alloc_cmd(). I doubt that
> > > there are any llds don't use ->alloc_cmd() worry about the overhead of
> > > the separated sense buffer allocation. If a lld doesn't define the own
> > > alloc_cmd, then I think it's fine to use the generic command
> > > allocator that does the separate sense buffer allocation.
> > 
> > I think we should do the two things seperately. If we can safely inline
> > the sense buffer in the command by doing the right alignment, then lets
> > do that. The ->alloc_cmd() approach will be easier to do with an inline
> > sense buffer.
> 
> James rejected this in the past. Let's wait for his verdict.

OK, so the reason for the original problems where the sense buffer was
inlined with the scsi_command was that we need to DMA to the sense
buffer but not to the command.  Plus the command is in fairly constant
use so we get cacheline interference unless they're always in separate
caches.  This necessitates opening up a hole in the command to achieve
this (you can separate to the next cache line if you can guarantee that
the command always begins on a cacheline.  If not, it has to be
2*cacheline).  The L1 cacheline can be up to 128 bytes on some
architectures, so we'd need to know the waste of space is worth it in
terms of speed.  The other problem is that the entire command now has to
be allocated in DMAable memory, which restricts the allocation on some
systems.

> Yeah, we can inline the sense buffer but as we discussed in the past
> several times, there are some good reasons that we should not do so, I
> think.

There are several other approaches:

     1. Keep the sense buffer packed in the command but disallow DMA to
        it, which fixes all the alignment problems.  Then we supply a
        set of rotating DMA buffers to drivers which need to do the DMA
        (which isn't the majority).
     2. Sense is a comparative rarity, so use a more compact pooling
        scheme and discard sense for reuse as soon as we know it's not
        used (as in at softirq time when there's no sense collected).

I'd need a little more clarity on the actual size of the problem before
making any choices.

The other thing to bear in mind is that two allocations of M and N might
be more costly than a single allocation of N+M; however, an allocation
of M+N+extra can end up more costly if the extra causes more page
reclaim before we get an actual command.

James
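
To make option 2 above concrete, a sketch of where the early release could
sit in the completion path; cmd->sense_slot and sense_slot_put() are invented
names, not existing scsi fields or functions:

/* Hypothetical sketch of option 2: cmd->sense_slot and sense_slot_put()
 * do not exist, they only illustrate the early-release idea.            */
static void scsi_softirq_done_sketch(struct scsi_cmnd *cmd)
{
        if (status_byte(cmd->result) != CHECK_CONDITION) {
                /* Nobody will ever read sense data for this command, so
                 * hand the scarce sense slot back right now instead of
                 * keeping it until the command itself is freed.         */
                sense_slot_put(cmd->device->host, cmd->sense_slot);
                cmd->sense_slot = NULL;
        }

        /* ... normal completion processing continues ... */
}

Option 1 would look similar from the command's point of view, except the
borrowed buffer would have to be DMA-able and handed out before the command
is issued.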




^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 03/13] scsi: unify allocation of scsi command and sense buffer
  2009-05-26 14:47                 ` James Bottomley
@ 2009-05-26 15:13                   ` Matthew Wilcox
  2009-05-26 15:31                   ` FUJITA Tomonori
  2009-05-26 16:12                   ` Boaz Harrosh
  2 siblings, 0 replies; 61+ messages in thread
From: Matthew Wilcox @ 2009-05-26 15:13 UTC (permalink / raw)
  To: James Bottomley
  Cc: FUJITA Tomonori, jens.axboe, rdreier, bharrosh, linux-kernel,
	linux-fsdevel, chris.mason, david, hch, akpm, jack, yanmin_zhang,
	linux-scsi

On Tue, May 26, 2009 at 09:47:02AM -0500, James Bottomley wrote:
> > Yeah, we can inline the sense buffer but as we discussed in the past
> > several times, there are some good reasons that we should not do so, I
> > think.
> 
> There are several other approaches:
> 
>      1. Keep the sense buffer packed in the command but disallow DMA to
>         it, which fixes all the alignment problems.  Then we supply a
>         set of rotating DMA buffers to drivers which need to do the DMA
>         (which isn't the majority).
>      2. Sense is a comparative rarity, so us a more compact pooling
>         scheme and discard sense for reuse as soon as we know it's not
>         used (as in at softirq time when there's no sense collected).
> 
> I'd need a little more clarity on the actual size of the problem before
> making any choices.

I'm not sure if this is what you meant by option 2 or not, but one
proposal was to keep a number of sense buffers around per-host, and only
allocate extras when we run close to empty.

-- 
Matthew Wilcox				Intel Open Source Technology Centre
"Bill, look, we understand that you're interested in selling us this
operating system, but compare it to ours.  We can't possibly take such
a retrograde step."
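
A rough sketch of that kind of per-host pool (every name below is invented,
and the put side plus DMA-ability of the backing memory are glossed over):

/* Hypothetical per-host sense-buffer pool with a low watermark; none of
 * these names exist in the scsi midlayer.                               */
struct sense_buf {
        struct list_head        node;
        unsigned char           data[SCSI_SENSE_BUFFERSIZE];
};

struct sense_pool {
        spinlock_t              lock;
        struct list_head        free;
        unsigned int            nr_free;
        unsigned int            low_water;      /* refill threshold      */
        struct work_struct      refill_work;    /* tops the pool back up */
};

static struct sense_buf *sense_pool_get(struct sense_pool *pool)
{
        struct sense_buf *sb = NULL;
        unsigned long flags;

        spin_lock_irqsave(&pool->lock, flags);
        if (!list_empty(&pool->free)) {
                sb = list_first_entry(&pool->free, struct sense_buf, node);
                list_del(&sb->node);
                if (--pool->nr_free < pool->low_water)
                        schedule_work(&pool->refill_work);
        }
        spin_unlock_irqrestore(&pool->lock, flags);

        /* Pool ran dry: fall back to an on-demand allocation.  The put
         * side (not shown) simply adds the buffer back to ->free.       */
        return sb ? sb : kmalloc(sizeof(*sb), GFP_ATOMIC);
}

The refill work item is what lets the pool start small and only grow under
sense-heavy load.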

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 03/13] scsi: unify allocation of scsi command and sense buffer
  2009-05-26 14:47                 ` James Bottomley
  2009-05-26 15:13                   ` Matthew Wilcox
@ 2009-05-26 15:31                   ` FUJITA Tomonori
  2009-05-26 16:05                     ` Boaz Harrosh
  2009-05-26 16:12                   ` Boaz Harrosh
  2 siblings, 1 reply; 61+ messages in thread
From: FUJITA Tomonori @ 2009-05-26 15:31 UTC (permalink / raw)
  To: James.Bottomley
  Cc: fujita.tomonori, jens.axboe, rdreier, bharrosh, linux-kernel,
	linux-fsdevel, chris.mason, david, hch, akpm, jack, yanmin_zhang,
	linux-scsi

On Tue, 26 May 2009 09:47:02 -0500
James Bottomley <James.Bottomley@HansenPartnership.com> wrote:

> On Tue, 2009-05-26 at 16:38 +0900, FUJITA Tomonori wrote:
> > On Tue, 26 May 2009 09:32:29 +0200
> > Jens Axboe <jens.axboe@oracle.com> wrote:
> > 
> > > On Tue, May 26 2009, FUJITA Tomonori wrote:
> > > > On Tue, 26 May 2009 08:29:53 +0200
> > > > Jens Axboe <jens.axboe@oracle.com> wrote:
> > > > 
> > > > > On Tue, May 26 2009, FUJITA Tomonori wrote:
> > > > > > On Mon, 25 May 2009 18:45:25 -0700
> > > > > > Roland Dreier <rdreier@cisco.com> wrote:
> > > > > > 
> > > > > > >  > Ideally there should be a MACRO that is defined to WORD_SIZE on cache-coherent
> > > > > > >  > ARCHs and to SMP_CACHE_BYTES on none-cache-coherent systems and use that size
> > > > > > >  > at the __align() attribute. (So only stupid ARCHES get hurt)
> > > > > > > 
> > > > > > > this seems to come up repeatedly -- I had a proposal a _long_ time ago
> > > > > > > that never quite got merged, cf http://lwn.net/Articles/2265/ and
> > > > > > > http://lwn.net/Articles/2269/ -- from 2002 (!?).  The idea is to go a
> > > > > > 
> > > > > > Yeah, I think that Benjamin did last time:
> > > > > > 
> > > > > > http://www.mail-archive.com/linux-scsi@vger.kernel.org/msg12632.html
> > > > > > 
> > > > > > IIRC, James didn't like it so I wrote the current code. I didn't see
> > > > > > any big performance difference with scsi_debug:
> > > > > > 
> > > > > > http://marc.info/?l=linux-scsi&m=120038907123706&w=2
> > > > > > 
> > > > > > Jens, you see the performance difference due to this unification?
> > > > > 
> > > > > Yes, it's definitely a worth while optimization. The problem isn't as
> > > > > such this specific allocation, it's the total number of allocations we
> > > > > do for a piece of IO. This sense buffer one is just one of many, I'm
> > > > > continually working to reduce them. If we get rid of this one and add
> > > > > the ->alloc_cmd() stuff, we can kill one more. The bio path already lost
> > > > > one. So in the IO stack, we went from 6 allocations to 3 for a piece of
> > > > > IO. And then it starts to add up. Even at just 30-50k iops, that's more
> > > > > than 1% of time in the testing I did.
> > > > 
> > > > I see, thanks. Hmm, possibly slab becomes slower. ;)
> > > > 
> > > > Then I think that we need something like the ->alloc_cmd()
> > > > method. Let's ask James. 
> > > > 
> > > > I don't think that it's just about simply adding the hook; there are
> > > > some issues that we need to think about. Though Boaz worries too much
> > > > a bit, I think.
> > > > 
> > > > I'm not sure about this patch if we add ->alloc_cmd(). I doubt that
> > > > there are any llds don't use ->alloc_cmd() worry about the overhead of
> > > > the separated sense buffer allocation. If a lld doesn't define the own
> > > > alloc_cmd, then I think it's fine to use the generic command
> > > > allocator that does the separate sense buffer allocation.
> > > 
> > > I think we should do the two things seperately. If we can safely inline
> > > the sense buffer in the command by doing the right alignment, then lets
> > > do that. The ->alloc_cmd() approach will be easier to do with an inline
> > > sense buffer.
> > 
> > James rejected this in the past. Let's wait for his verdict.
> 
> OK, so the reason for the original problems where the sense buffer was
> inlined with the scsi_command was that we need to DMA to the sense
> buffer but not to the command.  Plus the command is in fairly constant
> use so we get cacheline interference unless they're always in separate
> caches.  This necessitates opening up a hole in the command to achieve
> this (you can separate to the next cache line if you can guarantee that
> the command always begins on a cacheline.  If not, it has to be
> 2*cacheline).  The L1 cacheline can be up to 128 bytes on some
> architectures, so we'd need to know the waste of space is worth it in
> terms of speed.  The other problem is that the entire command now has to
> be allocated in DMAable memory, which restricts the allocation on some
> systems.

Yeah, I think that there are good reasons why we shouldn't inline the
sense buffer. As I already wrote, it seems that the DMA requirement
wasn't properly understood; it's not just about the alignment.


> > Yeah, we can inline the sense buffer but as we discussed in the past
> > several times, there are some good reasons that we should not do so, I
> > think.
> 
> There are several other approaches:
> 
>      1. Keep the sense buffer packed in the command but disallow DMA to
>         it, which fixes all the alignment problems.  Then we supply a
>         set of rotating DMA buffers to drivers which need to do the DMA
>         (which isn't the majority).

Can we just fix some drivers not to do the DMA with the sense buffer in
scsi_cmnd? IIRC, there are only five or six drivers that do such.

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 03/13] scsi: unify allocation of scsi command and sense buffer
  2009-05-26 15:31                   ` FUJITA Tomonori
@ 2009-05-26 16:05                     ` Boaz Harrosh
  2009-05-27  1:36                       ` FUJITA Tomonori
  0 siblings, 1 reply; 61+ messages in thread
From: Boaz Harrosh @ 2009-05-26 16:05 UTC (permalink / raw)
  To: FUJITA Tomonori
  Cc: James.Bottomley, jens.axboe, rdreier, linux-kernel,
	linux-fsdevel, chris.mason, david, hch, akpm, jack, yanmin_zhang,
	linux-scsi

On 05/26/2009 06:31 PM, FUJITA Tomonori wrote:
> 
> Can we just fix some drivers not to do the DMA with the sense buffer in
> scsi_cmnd? IIRC, there are only five or six drivers that do such.

This is not so.
All drivers that go through scsi_eh_prep_cmnd() will eventually DMA through
the regular read path. Including all the drivers that do nothing and let
scsi-ml do the REQUEST_SENSE

Actually I have exact numbers, from the last time I did all that

Boaz

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 03/13] scsi: unify allocation of scsi command and sense buffer
  2009-05-26 14:47                 ` James Bottomley
  2009-05-26 15:13                   ` Matthew Wilcox
  2009-05-26 15:31                   ` FUJITA Tomonori
@ 2009-05-26 16:12                   ` Boaz Harrosh
  2009-05-26 16:28                     ` Boaz Harrosh
  2 siblings, 1 reply; 61+ messages in thread
From: Boaz Harrosh @ 2009-05-26 16:12 UTC (permalink / raw)
  To: James Bottomley
  Cc: FUJITA Tomonori, jens.axboe, rdreier, linux-kernel,
	linux-fsdevel, chris.mason, david, hch, akpm, jack, yanmin_zhang,
	linux-scsi

On 05/26/2009 05:47 PM, James Bottomley wrote:
> 
> There are several other approaches:
> 
>      1. Keep the sense buffer packed in the command but disallow DMA to
>         it, which fixes all the alignment problems.  Then we supply a
>         set of rotating DMA buffers to drivers which need to do the DMA
>         (which isn't the majority).

This one is not possible, because in the majority of cases it is scsi-ml
that issues the DMA request, through scsi_eh_prep_cmnd() and a regular
read. The drivers don't even know anything about it.

>      2. Sense is a comparative rarity, so us a more compact pooling
>         scheme and discard sense for reuse as soon as we know it's not
>         used (as in at softirq time when there's no sense collected).
> 

This is the way to go for sure. And it is only needed on ARCHs with
non-coherent caches; all the good ARCHs can just use an embedded sense
buffer just fine.

> I'd need a little more clarity on the actual size of the problem before
> making any choices.
> 
> The other thing to bear in mind is that two allocations of M and N might
> be more costly than a single allocation of N+M; however, an allocation
> of M+N+extra can end up more costly if the extra causes more page
> reclaim before we get an actual command.
> 
> James
> 
Boaz

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 03/13] scsi: unify allocation of scsi command and sense buffer
  2009-05-26 16:12                   ` Boaz Harrosh
@ 2009-05-26 16:28                     ` Boaz Harrosh
  0 siblings, 0 replies; 61+ messages in thread
From: Boaz Harrosh @ 2009-05-26 16:28 UTC (permalink / raw)
  To: James Bottomley
  Cc: FUJITA Tomonori, jens.axboe, rdreier, linux-kernel,
	linux-fsdevel, chris.mason, david, hch, akpm, jack, yanmin_zhang,
	linux-scsi

On 05/26/2009 07:12 PM, Boaz Harrosh wrote:
> On 05/26/2009 05:47 PM, James Bottomley wrote:
>> There are several other approaches:
>>
>>      1. Keep the sense buffer packed in the command but disallow DMA to
>>         it, which fixes all the alignment problems.  Then we supply a
>>         set of rotating DMA buffers to drivers which need to do the DMA
>>         (which isn't the majority).
> 
> This one is not possible because it is scsi-ml in majority of cases that
> does the DMA request through scsi_eh_prep_cmnd() and a regular read.
> The drivers don't even know anything about it.
> 

I retract that "no"; yes, it is possible, and scsi-ml would just be one
more client of the "rotating DMA buffers".

>>      2. Sense is a comparative rarity, so us a more compact pooling
>>         scheme and discard sense for reuse as soon as we know it's not
>>         used (as in at softirq time when there's no sense collected).
>>
> 
> This is the way to go for sure. And only on ARCHs with none-coherent-cache
> all the good ARCHs can just use embedded sense just fine.
> 
>> I'd need a little more clarity on the actual size of the problem before
>> making any choices.
>>
>> The other thing to bear in mind is that two allocations of M and N might
>> be more costly than a single allocation of N+M; however, an allocation
>> of M+N+extra can end up more costly if the extra causes more page
>> reclaim before we get an actual command.
>>
>> James
>>
> Boaz


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 03/13] scsi: unify allocation of scsi command and sense buffer
  2009-05-26 16:05                     ` Boaz Harrosh
@ 2009-05-27  1:36                       ` FUJITA Tomonori
  2009-05-27  7:54                         ` Boaz Harrosh
  0 siblings, 1 reply; 61+ messages in thread
From: FUJITA Tomonori @ 2009-05-27  1:36 UTC (permalink / raw)
  To: bharrosh
  Cc: fujita.tomonori, James.Bottomley, jens.axboe, rdreier,
	linux-kernel, linux-fsdevel, chris.mason, david, hch, akpm, jack,
	yanmin_zhang, linux-scsi

On Tue, 26 May 2009 19:05:05 +0300
Boaz Harrosh <bharrosh@panasas.com> wrote:

> On 05/26/2009 06:31 PM, FUJITA Tomonori wrote:
> > 
> > Can we just fix some drivers not to do the DMA with the sense buffer in
> > scsi_cmnd? IIRC, there are only five or six drivers that do such.
> 
> This is not so.
> All drivers that go through scsi_eh_prep_cmnd() will eventually DMA through
> the regular read path. Including all the drivers that do nothing and let
> scsi-ml do the REQUEST_SENSE
> 
> Actually I have exact numbers, from the last time I did all that

Hmm, we discussed this before, I think.

scsi-ml uses scsi_eh_prep_cmnd only via scsi_send_eh_cmnd(). There are
some users of scsi_send_eh_cmnd in scsi-ml but only scsi_request_sense
does the DMA in the sense_buffer of scsi_cmnd.

Only scsi_error_handler() uses scsi_request_sense(), and
scsi_send_eh_cmnd() works synchronously. So scsi-ml can easily avoid
the DMA into the sense_buffer of scsi_cmnd if we have one sense
buffer per scsi_host.
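
Concretely, something like the following; this is a sketch only, the
shost->eh_sense field and the eh_issue_request_sense() helper are invented
just to show where a single per-host buffer would slot in:

/* Sketch only: ->eh_sense and eh_issue_request_sense() are hypothetical. */
static int eh_request_sense_sketch(struct scsi_cmnd *scmd)
{
        struct Scsi_Host *shost = scmd->device->host;
        unsigned char *buf = shost->eh_sense;   /* hypothetical field */
        int rtn;

        /* Only the scsi_eh thread gets here, and scsi_send_eh_cmnd() is
         * synchronous, so a single buffer per host is never used by two
         * commands at once.                                              */
        memset(buf, 0, SCSI_SENSE_BUFFERSIZE);

        /* hypothetical helper: the REQUEST_SENSE data is DMA'd into
         * 'buf' rather than into scmd->sense_buffer                      */
        rtn = eh_issue_request_sense(scmd, buf);

        memcpy(scmd->sense_buffer, buf, SCSI_SENSE_BUFFERSIZE);
        return rtn;
}

Whether one buffer really can never be in use twice is exactly what the
exchange below turns on.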

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 03/13] scsi: unify allocation of scsi command and sense buffer
  2009-05-27  1:36                       ` FUJITA Tomonori
@ 2009-05-27  7:54                         ` Boaz Harrosh
  2009-05-27  8:26                           ` FUJITA Tomonori
  0 siblings, 1 reply; 61+ messages in thread
From: Boaz Harrosh @ 2009-05-27  7:54 UTC (permalink / raw)
  To: FUJITA Tomonori
  Cc: James.Bottomley, jens.axboe, rdreier, linux-kernel,
	linux-fsdevel, chris.mason, david, hch, akpm, jack, yanmin_zhang,
	linux-scsi

On 05/27/2009 04:36 AM, FUJITA Tomonori wrote:
> On Tue, 26 May 2009 19:05:05 +0300
> Boaz Harrosh <bharrosh@panasas.com> wrote:
> 
>> On 05/26/2009 06:31 PM, FUJITA Tomonori wrote:
>>> Can we just fix some drivers not to do the DMA with the sense buffer in
>>> scsi_cmnd? IIRC, there are only five or six drivers that do such.
>> This is not so.
>> All drivers that go through scsi_eh_prep_cmnd() will eventually DMA through
>> the regular read path. Including all the drivers that do nothing and let
>> scsi-ml do the REQUEST_SENSE
>>
>> Actually I have exact numbers, from the last time I did all that
> 
> Hmm, we discussed this before, I think.
> 

Sure we did; I sent these patches. To summarize, there are 3 types of drivers:
1. Only memcpy into sense_buffer				- 60%
2. Use scsi_eh_prep_cmnd and DMA read into sense.
2.1 Do nothing and scsi-ml does scsi_eh_prep_cmnd		- 30%
3. Prepare DMA descriptors for sense_buffer before execution	- 10%

> scsi-ml uses scsi_eh_prep_cmnd only via scsi_send_eh_cmnd(). There are
> some users of scsi_send_eh_cmnd in scsi-ml but only scsi_request_sense
> does the DMA in the sense_buffer of scsi_cmnd.
> 

Also drivers use scsi_eh_prep_cmnd at interrupt time and proceed to
DMA into the sense_buffer.

> Only scsi_error_handler() uses scsi_request_sense() and
> scsi_send_eh_cmnd() works synchronously. So scsi-ml can easily avoid
> the the DMA in the sense_buffer of scsi_cmnd if we have one sense
> buffer per scsi_host.

Not so. As James explained then, once you have a CHECK_CONDITION return, the
Q-per-host is frozen, yes. But as soon as you send the REQUEST_SENSE the 
target Q is unfrozen again and all in-flight commands can error, much before
the REQUEST_SENSE returns.

Boaz

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 03/13] scsi: unify allocation of scsi command and sense buffer
  2009-05-27  7:54                         ` Boaz Harrosh
@ 2009-05-27  8:26                           ` FUJITA Tomonori
  2009-05-27  9:11                             ` Boaz Harrosh
  0 siblings, 1 reply; 61+ messages in thread
From: FUJITA Tomonori @ 2009-05-27  8:26 UTC (permalink / raw)
  To: bharrosh
  Cc: fujita.tomonori, James.Bottomley, jens.axboe, rdreier,
	linux-kernel, linux-fsdevel, chris.mason, david, hch, akpm, jack,
	yanmin_zhang, linux-scsi

On Wed, 27 May 2009 10:54:41 +0300
Boaz Harrosh <bharrosh@panasas.com> wrote:

> On 05/27/2009 04:36 AM, FUJITA Tomonori wrote:
> > On Tue, 26 May 2009 19:05:05 +0300
> > Boaz Harrosh <bharrosh@panasas.com> wrote:
> > 
> >> On 05/26/2009 06:31 PM, FUJITA Tomonori wrote:
> >>> Can we just fix some drivers not to do the DMA with the sense buffer in
> >>> scsi_cmnd? IIRC, there are only five or six drivers that do such.
> >> This is not so.
> >> All drivers that go through scsi_eh_prep_cmnd() will eventually DMA through
> >> the regular read path. Including all the drivers that do nothing and let
> >> scsi-ml do the REQUEST_SENSE
> >>
> >> Actually I have exact numbers, from the last time I did all that
> > 
> > Hmm, we discussed this before, I think.
> > 
> 
> Sure we did I sent these patches. to summarize, 3 types of drivers:
> 1. Only memcpy into sense_buffer				- 60%
> 2. Use scsi_eh_prep_cmnd and DMA read into sense.
> 2.1 Do nothing and scsi-ml does scsi_eh_prep_cmnd		- 30%
> 3. Prepare DMA descriptors for sense_buffer before execution	- 10%
> 
> > scsi-ml uses scsi_eh_prep_cmnd only via scsi_send_eh_cmnd(). There are
> > some users of scsi_send_eh_cmnd in scsi-ml but only scsi_request_sense
> > does the DMA in the sense_buffer of scsi_cmnd.
> > 
> 
> Also drivers use scsi_eh_prep_cmnd at interrupt time and proceed to
> DMA into the sense_buffer.
> 
> > Only scsi_error_handler() uses scsi_request_sense() and
> > scsi_send_eh_cmnd() works synchronously. So scsi-ml can easily avoid
> > the the DMA in the sense_buffer of scsi_cmnd if we have one sense
> > buffer per scsi_host.
> 
> Not so. As James explained then, once you have a CHECK_CONDITION return, the
> Q-per-host is frozen, yes. But as soon as you send the REQUEST_SENSE the 
> target Q is unfrozen again and all in-flight commands can error, much before
> the REQUEST_SENSE returns.

Hmm, I'm not sure what you mean.

Why is 'all in-flight commands can error' a problem? The per-host
sense_buffer is used only by the scsi_eh kernel thread.

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 03/13] scsi: unify allocation of scsi command and sense buffer
  2009-05-27  8:26                           ` FUJITA Tomonori
@ 2009-05-27  9:11                             ` Boaz Harrosh
  0 siblings, 0 replies; 61+ messages in thread
From: Boaz Harrosh @ 2009-05-27  9:11 UTC (permalink / raw)
  To: FUJITA Tomonori
  Cc: James.Bottomley, jens.axboe, rdreier, linux-kernel,
	linux-fsdevel, chris.mason, david, hch, akpm, jack, yanmin_zhang,
	linux-scsi

On 05/27/2009 11:26 AM, FUJITA Tomonori wrote:
> On Wed, 27 May 2009 10:54:41 +0300
> Boaz Harrosh <bharrosh@panasas.com> wrote:
> 
>> On 05/27/2009 04:36 AM, FUJITA Tomonori wrote:
>>> On Tue, 26 May 2009 19:05:05 +0300
>>> Boaz Harrosh <bharrosh@panasas.com> wrote:
>>>
>>>> On 05/26/2009 06:31 PM, FUJITA Tomonori wrote:
>>>>> Can we just fix some drivers not to do the DMA with the sense buffer in
>>>>> scsi_cmnd? IIRC, there are only five or six drivers that do such.
>>>> This is not so.
>>>> All drivers that go through scsi_eh_prep_cmnd() will eventually DMA through
>>>> the regular read path. Including all the drivers that do nothing and let
>>>> scsi-ml do the REQUEST_SENSE
>>>>
>>>> Actually I have exact numbers, from the last time I did all that
>>> Hmm, we discussed this before, I think.
>>>
>> Sure we did I sent these patches. to summarize, 3 types of drivers:
>> 1. Only memcpy into sense_buffer				- 60%
>> 2. Use scsi_eh_prep_cmnd and DMA read into sense.
>> 2.1 Do nothing and scsi-ml does scsi_eh_prep_cmnd		- 30%
>> 3. Prepare DMA descriptors for sense_buffer before execution	- 10%
>>
>>> scsi-ml uses scsi_eh_prep_cmnd only via scsi_send_eh_cmnd(). There are
>>> some users of scsi_send_eh_cmnd in scsi-ml but only scsi_request_sense
>>> does the DMA in the sense_buffer of scsi_cmnd.
>>>
>> Also drivers use scsi_eh_prep_cmnd at interrupt time and proceed to
>> DMA into the sense_buffer.
>>
>>> Only scsi_error_handler() uses scsi_request_sense() and
>>> scsi_send_eh_cmnd() works synchronously. So scsi-ml can easily avoid
>>> the the DMA in the sense_buffer of scsi_cmnd if we have one sense
>>> buffer per scsi_host.
>> Not so. As James explained then, once you have a CHECK_CONDITION return, the
>> Q-per-host is frozen, yes. But as soon as you send the REQUEST_SENSE the 
>> target Q is unfrozen again and all in-flight commands can error, much before
>> the REQUEST_SENSE returns.
> 
> Hmm, I'm not sure what you mean.
> 
> Why is 'all in-flight commands can error' a problem? The sense_buffer
> per host is used by only scsi_eh kernel thread.

I agree. But then the current situation has a problem:

A target has commands A and B in its queue.
- A returns CHECK_CONDITION, the scsi_eh thread kicks in and sends a REQUEST_SENSE.
- Immediately, command B returns with CHECK_CONDITION and the target queue is frozen again.
- A message is queued for the scsi_eh thread, but that thread is stuck waiting for the
  first REQUEST_SENSE to return, so the second REQUEST_SENSE is never sent and the
  target queue stays frozen forever.

I guess all the drivers that support target queueing do not depend on the scsi_eh
thread to issue the REQUEST_SENSE command. As I said, there are very few drivers
that do nothing and let scsi_eh take care of REQUEST_SENSE.

This will not, however, help the drivers that might need many
concurrent sense buffers.

Boaz

^ permalink raw reply	[flat|nested] 61+ messages in thread

end of thread, other threads:[~2009-05-27  9:11 UTC | newest]

Thread overview: 61+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2009-05-25  7:30 [PATCH 0/12] Per-bdi writeback flusher threads #5 Jens Axboe
2009-05-25  7:30 ` [PATCH 01/13] libata: get rid of ATA_MAX_QUEUE loop in ata_qc_complete_multiple() Jens Axboe
2009-05-25  7:30 ` [PATCH 01/12] ntfs: remove old debug check for dirty data in ntfs_put_super() Jens Axboe
2009-05-25  7:30 ` [PATCH 02/13] block: add static rq allocation cache Jens Axboe
2009-05-25  7:30 ` [PATCH 02/12] btrfs: properly register fs backing device Jens Axboe
2009-05-25  7:30 ` [PATCH 03/13] scsi: unify allocation of scsi command and sense buffer Jens Axboe
2009-05-25  7:41   ` Christoph Hellwig
2009-05-25  7:46     ` Jens Axboe
2009-05-25  7:50       ` Christoph Hellwig
2009-05-25  7:54         ` Jens Axboe
2009-05-25 10:33         ` Boaz Harrosh
2009-05-25 10:42           ` Christoph Hellwig
2009-05-25 10:49             ` Jens Axboe
2009-05-26  4:36         ` FUJITA Tomonori
2009-05-26  5:08           ` FUJITA Tomonori
2009-05-25  8:15   ` Pekka Enberg
2009-05-25  8:15     ` Pekka Enberg
2009-05-25 11:32     ` Nick Piggin
2009-05-25  9:28   ` Boaz Harrosh
2009-05-26  1:45     ` Roland Dreier
2009-05-26  4:36       ` FUJITA Tomonori
2009-05-26  6:29         ` Jens Axboe
2009-05-26  7:25           ` FUJITA Tomonori
2009-05-26  7:32             ` Jens Axboe
2009-05-26  7:38               ` FUJITA Tomonori
2009-05-26 14:47                 ` James Bottomley
2009-05-26 15:13                   ` Matthew Wilcox
2009-05-26 15:31                   ` FUJITA Tomonori
2009-05-26 16:05                     ` Boaz Harrosh
2009-05-27  1:36                       ` FUJITA Tomonori
2009-05-27  7:54                         ` Boaz Harrosh
2009-05-27  8:26                           ` FUJITA Tomonori
2009-05-27  9:11                             ` Boaz Harrosh
2009-05-26 16:12                   ` Boaz Harrosh
2009-05-26 16:28                     ` Boaz Harrosh
2009-05-26  7:56               ` FUJITA Tomonori
2009-05-26  5:23     ` FUJITA Tomonori
2009-05-25  7:30 ` [PATCH 03/12] writeback: move dirty inodes from super_block to backing_dev_info Jens Axboe
2009-05-25  7:30 ` [PATCH 04/13] scsi: get rid of lock in __scsi_put_command() Jens Axboe
2009-05-25  7:30 ` [PATCH 04/12] writeback: switch to per-bdi threads for flushing data Jens Axboe
2009-05-25  7:30 ` [PATCH 05/13] aio: mostly crap Jens Axboe
2009-05-25  9:09   ` Jan Kara
2009-05-25  7:30 ` [PATCH 05/12] writeback: get rid of pdflush completely Jens Axboe
2009-05-25  7:30 ` [PATCH 06/13] block: move elevator ops into the queue Jens Axboe
2009-05-25  7:30 ` [PATCH 06/12] writeback: separate the flushing state/task from the bdi Jens Axboe
2009-05-25  7:30 ` [PATCH 07/13] block: avoid indirect calls to enter cfq io scheduler Jens Axboe
2009-05-26  9:02   ` Nikanth K
2009-05-25  7:30 ` [PATCH 07/12] writeback: support > 1 flusher thread per bdi Jens Axboe
2009-05-25  7:30 ` [PATCH 08/13] block: change the tag sync vs async restriction logic Jens Axboe
2009-05-25  7:30 ` [PATCH 08/12] writeback: include default_backing_dev_info in writeback Jens Axboe
2009-05-25  7:31 ` [PATCH 09/13] libata: switch to using block layer tagging support Jens Axboe
2009-05-25  7:31 ` [PATCH 09/12] writeback: allow sleepy exit of default writeback task Jens Axboe
2009-05-25  7:31 ` [PATCH 10/13] block: add function for waiting for a specific free tag Jens Axboe
2009-05-25  7:31 ` [PATCH 10/12] writeback: add some debug inode list counters to bdi stats Jens Axboe
2009-05-25  7:31 ` [PATCH 11/13] block: disallow merging of read-ahead bits into normal request Jens Axboe
2009-05-25  7:31 ` [PATCH 11/12] writeback: add name to backing_dev_info Jens Axboe
2009-05-25  7:31 ` [PATCH 12/13] block: first cut at implementing a NAPI approach for block devices Jens Axboe
2009-05-25  7:31 ` [PATCH 12/12] writeback: check for registered bdi in flusher add and inode dirty Jens Axboe
2009-05-25  7:31 ` [PATCH 13/13] block: unlocked completion test patch Jens Axboe
2009-05-25  7:33 ` [PATCH 0/12] Per-bdi writeback flusher threads #5 Jens Axboe
