From: Vivek Goyal <vgoyal@redhat.com>
To: linux-kernel@vger.kernel.org,
	containers@lists.linux-foundation.org, dm-devel@redhat.com,
	jens.axboe@oracle.com, ryov@valinux.co.jp,
	balbir@linux.vnet.ibm.com, righi.andrea@gmail.com
Cc: nauman@google.com, dpshah@google.com, lizf@cn.fujitsu.com,
	mikew@google.com, fchecconi@gmail.com, paolo.valente@unimore.it,
	fernando@oss.ntt.co.jp, s-uchida@ap.jp.nec.com,
	taka@valinux.co.jp, guijianfeng@cn.fujitsu.com,
	jmoyer@redhat.com, dhaval@linux.vnet.ibm.com,
	m-ikeda@ds.jp.nec.com, agk@redhat.com, vgoyal@redhat.com,
	akpm@linux-foundation.org, peterz@infradead.org,
	jmarchan@redhat.com
Subject: [PATCH 12/24] io-controller: Wait for requests to complete from last queue before new queue is scheduled
Date: Sun, 16 Aug 2009 15:30:34 -0400	[thread overview]
Message-ID: <1250451046-9966-13-git-send-email-vgoyal@redhat.com> (raw)
In-Reply-To: <1250451046-9966-1-git-send-email-vgoyal@redhat.com>

o Currently requests can be dispatched to the disk from multiple queues. This
  is true for hardware which supports queuing. So if a disk supports a queue
  depth of 31, it is possible that 20 requests are dispatched from queue 1
  and then the next queue is scheduled in, which dispatches more requests.

o This multiple queue dispatch introduces issues for accurate accounting of
  the disk time consumed by a particular queue. For example, if one async
  queue is scheduled in, it can dispatch 31 requests to the disk and then be
  expired, and a new sync queue might get scheduled in. These 31 requests
  might take a long time to finish, but this time is never accounted to the
  async queue which dispatched them.
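
  To see the size of the error, consider some illustrative numbers (made up
  for the example, not measurements): a queue is only charged for the time
  between being scheduled in and being expired, so the completion time of
  requests still in flight is absorbed by whichever queue is active next. A
  minimal user-space sketch of that arithmetic:

#include <stdio.h>

/*
 * Illustrative only: hypothetical numbers, and no overlap inside the
 * device is modelled. Shows how dispatch-time-only accounting
 * undercharges an async queue that fills the device queue and is
 * expired immediately.
 */
int main(void)
{
	unsigned int depth = 31;	/* requests dispatched by the async queue */
	unsigned int dispatch_ms = 2;	/* time the async queue was active */
	unsigned int service_ms = 4;	/* assumed device service time per request */

	unsigned int consumed_ms = dispatch_ms + depth * service_ms;

	printf("charged to async queue : %u ms\n", dispatch_ms);
	printf("actually consumed      : %u ms\n", consumed_ms);
	printf("charged to other queues: %u ms\n", consumed_ms - dispatch_ms);
	return 0;
}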

o This patch introduces the functionality to wait for all requests from the
  previous queue to finish before the next queue is scheduled in. That way a
  queue is more accurately accounted for the disk time it has consumed. Note
  that this still does not take care of errors introduced by disk write
  caching.
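
  The core of the change can be summarized by the check below. This is a
  simplified stand-alone model (hypothetical struct and function names, not
  the elevator code itself): while "fairness" is set and the outgoing queue
  still has dispatched-but-uncompleted requests, no new queue is selected;
  selection is retried from the completion path once they drain.

#include <stdio.h>

struct ioq {
	const char *name;
	unsigned long dispatched;	/* dispatched to disk, not yet completed */
};

/* Model of the new check: keep the old queue until its requests complete. */
static struct ioq *select_queue(struct ioq *active, struct ioq *next,
				int fairness, int force)
{
	if (fairness && !force && active && active->dispatched)
		return active;		/* wait; don't schedule a new queue yet */
	return next;			/* expire 'active' and switch as before */
}

int main(void)
{
	struct ioq async = { "async", 31 };	/* 31 writes still in flight */
	struct ioq sync = { "sync", 0 };

	printf("fairness=0  -> run %s\n", select_queue(&async, &sync, 0, 0)->name);
	printf("fairness=1  -> run %s\n", select_queue(&async, &sync, 1, 0)->name);

	async.dispatched = 0;		/* all completions have arrived */
	printf("after drain -> run %s\n", select_queue(&async, &sync, 1, 0)->name);
	return 0;
}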

o Because the above behavior can result in reduced throughput, it is enabled
  only if the user sets the "fairness" tunable to 1.
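
  The attribute is added to the cfq sysfs attributes (under
  CONFIG_GROUP_IOSCHED), so it is expected to show up alongside the other CFQ
  tunables. A minimal sketch of enabling it from user space, assuming the
  usual /sys/block/<dev>/queue/iosched/ location (the device name below is
  just an example):

#include <stdio.h>

int main(void)
{
	const char *path = "/sys/block/sdb/queue/iosched/fairness";
	FILE *f = fopen(path, "w");

	if (!f) {
		perror(path);
		return 1;
	}
	fputs("1\n", f);	/* 0 = default (favor throughput), 1 = wait for completions */
	fclose(f);
	return 0;
}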

o This patch helps in achieving more isolation between reads and buffered
  writes in different cgroups. Buffered writes typically utilize the full
  queue depth and then expire the queue. Sequential reads, on the contrary,
  typically drive a queue depth of 1. So despite the fact that writes use
  more disk time, that time is never accounted to the write queue, because we
  don't wait for requests to finish after dispatching them. This patch allows
  more accurate accounting of disk time, especially for buffered writes,
  hence providing better fairness and better isolation between two cgroups
  running read and write workloads.

Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
---
 block/cfq-iosched.c |    1 +
 block/elevator-fq.c |   19 +++++++++++++++++++
 block/elevator-fq.h |   10 +++++++++-
 3 files changed, 29 insertions(+), 1 deletions(-)

diff --git a/block/cfq-iosched.c b/block/cfq-iosched.c
index 11ae473..52c4710 100644
--- a/block/cfq-iosched.c
+++ b/block/cfq-iosched.c
@@ -2123,6 +2123,7 @@ static struct elv_fs_entry cfq_attrs[] = {
 	ELV_ATTR(slice_async),
 #ifdef CONFIG_GROUP_IOSCHED
 	ELV_ATTR(group_idle),
+	ELV_ATTR(fairness),
 #endif
 	__ATTR_NULL
 };
diff --git a/block/elevator-fq.c b/block/elevator-fq.c
index 09377d0..b1b7dc8 100644
--- a/block/elevator-fq.c
+++ b/block/elevator-fq.c
@@ -677,6 +677,8 @@ SHOW_FUNCTION(elv_slice_sync_show, efqd->elv_slice[1], 1);
 EXPORT_SYMBOL(elv_slice_sync_show);
 SHOW_FUNCTION(elv_slice_async_show, efqd->elv_slice[0], 1);
 EXPORT_SYMBOL(elv_slice_async_show);
+SHOW_FUNCTION(elv_fairness_show, efqd->fairness, 0);
+EXPORT_SYMBOL(elv_fairness_show);
 #undef SHOW_FUNCTION
 
 #define STORE_FUNCTION(__FUNC, __PTR, MIN, MAX, __CONV)			\
@@ -701,6 +703,8 @@ STORE_FUNCTION(elv_slice_sync_store, &efqd->elv_slice[1], 1, UINT_MAX, 1);
 EXPORT_SYMBOL(elv_slice_sync_store);
 STORE_FUNCTION(elv_slice_async_store, &efqd->elv_slice[0], 1, UINT_MAX, 1);
 EXPORT_SYMBOL(elv_slice_async_store);
+STORE_FUNCTION(elv_fairness_store, &efqd->fairness, 0, 1, 0);
+EXPORT_SYMBOL(elv_fairness_store);
 #undef STORE_FUNCTION
 
 void elv_schedule_dispatch(struct request_queue *q)
@@ -2260,6 +2264,17 @@ void *elv_select_ioq(struct request_queue *q, int force)
 	}
 
 expire:
+	if (efqd->fairness && !force && ioq && ioq->dispatched) {
+		/*
+		 * If there are requests dispatched from this queue, don't
+		 * dispatch requests from a new queue until all the requests from
+		 * this queue have completed.
+		 */
+		elv_log_ioq(efqd, ioq, "select: wait for requests to finish"
+				" disp=%lu", ioq->dispatched);
+		ioq = NULL;
+		goto keep_queue;
+	}
 	elv_slice_expired(q);
 new_queue:
 	ioq = elv_set_active_ioq(q, new_ioq);
@@ -2375,6 +2390,10 @@ void elv_ioq_completed_request(struct request_queue *q, struct request *rq)
 				goto done;
 			}
 
+			/* Wait for requests to finish from this queue */
+			if (efqd->fairness && ioq->dispatched)
+				goto done;
+
 			/* Expire the queue */
 			elv_slice_expired(q);
 			goto done;
diff --git a/block/elevator-fq.h b/block/elevator-fq.h
index 5f2cb8b..9b9ebf4 100644
--- a/block/elevator-fq.h
+++ b/block/elevator-fq.h
@@ -179,6 +179,12 @@ struct elv_fq_data {
 
 	/* Fallback dummy ioq for extreme OOM conditions */
 	struct io_queue oom_ioq;
+
+	/*
+	 * If set to 1, waits for all request completions from current
+	 * queue before new queue is scheduled in
+	 */
+	unsigned int fairness;
 };
 
 /* Logging facilities. */
@@ -436,7 +442,9 @@ extern ssize_t elv_slice_sync_store(struct elevator_queue *q, const char *name,
 extern ssize_t elv_slice_async_show(struct elevator_queue *q, char *name);
 extern ssize_t elv_slice_async_store(struct elevator_queue *q, const char *name,
 						size_t count);
-
+extern ssize_t elv_fairness_show(struct elevator_queue *q, char *name);
+extern ssize_t elv_fairness_store(struct elevator_queue *q, const char *name,
+					size_t count);
 /* Functions used by elevator.c */
 extern struct elv_fq_data *elv_alloc_fq_data(struct request_queue *q,
 					struct elevator_queue *e);
-- 
1.6.0.6

