From: Vivek Goyal <vgoyal@redhat.com>
To: linux-kernel@vger.kernel.org, containers@lists.linux-foundation.org,
	dm-devel@redhat.com, jens.axboe@oracle.com, nauman@google.com,
	dpshah@google.com, lizf@cn.fujitsu.com, mikew@google.com,
	fchecconi@gmail.com, paolo.valente@unimore.it, ryov@valinux.co.jp,
	fernando@oss.ntt.co.jp, s-uchida@ap.jp.nec.com, taka@valinux.co.jp,
	guijianfeng@cn.fujitsu.com, jmoyer@redhat.com,
	dhaval@linux.vnet.ibm.com, balbir@linux.vnet.ibm.com,
	righi.andrea@gmail.com, m-ikeda@ds.jp.nec.com, jbaron@redhat.com
Cc: agk@redhat.com, snitzer@redhat.com, vgoyal@redhat.com,
	akpm@linux-foundation.org, peterz@infradead.org
Subject: [PATCH 25/25] io-controller: experimental debug patch for async queue wait before expiry
Date: Thu, 2 Jul 2009 16:01:57 -0400
Message-ID: <1246564917-19603-26-git-send-email-vgoyal@redhat.com> (raw)
In-Reply-To: <1246564917-19603-1-git-send-email-vgoyal@redhat.com>

o A debug patch which waits for the next IO from an async queue once it
  becomes empty.

o For async writes, the traffic seen by the IO scheduler is not in
  proportion to the weight of the cgroup the task/page belongs to. So if
  there are two processes doing heavy writeouts in two cgroups with
  weights 1000 and 500 respectively, the IO scheduler does not see more
  traffic/IO from the higher weight cgroup even if it tries to give that
  cgroup more disk time. Effectively, the async queue belonging to the
  higher weight cgroup becomes empty and drops out of contention for the
  disk, the lower weight cgroup gets to use the disk, and user space gets
  the impression that the higher weight cgroup did not get more disk time.

o This is more of a problem at the page cache level, where a higher weight
  process might be writing out the pages of a lower weight process, and
  should be fixed there.
o While we fix those issues, introduce this debug patch, which allows one
  to idle on an async queue (tunable via
  /sys/block/<disk>/queue/async_slice_idle) so that once a higher weight
  queue becomes empty, instead of expiring it we try to wait for the next
  request to come from that queue, hence giving it more disk time. A
  higher value of async_slice_idle, around 300ms, helps me get the right
  numbers for my setup.

Note: more disk time does not necessarily translate into more IO done, as
the higher weight group is not pushing enough IO to the io scheduler. This
is just a debugging aid to prove the correctness of the IO controller by
providing more disk time to the higher weight cgroup.

Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
---
 block/cfq-iosched.c |    1 +
 block/elevator-fq.c |   39 ++++++++++++++++++++++++++++++++++++---
 block/elevator-fq.h |    5 +++++
 3 files changed, 42 insertions(+), 3 deletions(-)

diff --git a/block/cfq-iosched.c b/block/cfq-iosched.c
index a40a2fa..fbe56a9 100644
--- a/block/cfq-iosched.c
+++ b/block/cfq-iosched.c
@@ -2093,6 +2093,7 @@ static struct elv_fs_entry cfq_attrs[] = {
 	ELV_ATTR(slice_sync),
 	ELV_ATTR(slice_async),
 	ELV_ATTR(fairness),
+	ELV_ATTR(async_slice_idle),
 	__ATTR_NULL
 };

diff --git a/block/elevator-fq.c b/block/elevator-fq.c
index 5b3f068..7c83d1e 100644
--- a/block/elevator-fq.c
+++ b/block/elevator-fq.c
@@ -24,6 +24,7 @@ const int elv_slice_sync = HZ / 10;
 int elv_slice_async = HZ / 25;
 const int elv_slice_async_rq = 2;
 int elv_slice_idle = HZ / 125;
+int elv_async_slice_idle = 0;

 static struct kmem_cache *elv_ioq_pool;
 /* Maximum Window length for updating average disk rate */
@@ -2819,6 +2820,8 @@ SHOW_FUNCTION(elv_slice_async_show, efqd->elv_slice[0], 1);
 EXPORT_SYMBOL(elv_slice_async_show);
 SHOW_FUNCTION(elv_fairness_show, efqd->fairness, 0);
 EXPORT_SYMBOL(elv_fairness_show);
+SHOW_FUNCTION(elv_async_slice_idle_show, efqd->elv_async_slice_idle, 1);
+EXPORT_SYMBOL(elv_async_slice_idle_show);
 #undef SHOW_FUNCTION

 #define STORE_FUNCTION(__FUNC, __PTR, MIN, MAX, __CONV)			\
@@ -2845,6 +2848,8 @@ STORE_FUNCTION(elv_slice_async_store, &efqd->elv_slice[0], 1, UINT_MAX, 1);
 EXPORT_SYMBOL(elv_slice_async_store);
 STORE_FUNCTION(elv_fairness_store, &efqd->fairness, 0, 2, 0);
 EXPORT_SYMBOL(elv_fairness_store);
+STORE_FUNCTION(elv_async_slice_idle_store, &efqd->elv_async_slice_idle, 0, UINT_MAX, 1);
+EXPORT_SYMBOL(elv_async_slice_idle_store);
 #undef STORE_FUNCTION

 void elv_schedule_dispatch(struct request_queue *q)
@@ -3018,7 +3023,7 @@ int elv_init_ioq(struct elevator_queue *eq, struct io_queue *ioq,
 	ioq->pid = current->pid;
 	ioq->sched_queue = sched_queue;
-	if (is_sync && !elv_ioq_class_idle(ioq))
+	if (!elv_ioq_class_idle(ioq) && (is_sync || efqd->fairness))
 		elv_mark_ioq_idle_window(ioq);
 	bfq_init_entity(&ioq->entity, iog);
 	ioq->entity.budget = elv_prio_to_slice(efqd, ioq);
@@ -3699,7 +3704,12 @@ static void elv_ioq_arm_slice_timer(struct request_queue *q, int wait_for_busy)
 	/*
 	 * idle is disabled, either manually or by past process history
 	 */
-	if (!efqd->elv_slice_idle || !elv_ioq_idle_window(ioq))
+	if ((elv_ioq_sync(ioq) && !efqd->elv_slice_idle) ||
+	    !elv_ioq_idle_window(ioq))
+		return;
+
+	/* If this is async queue and async_slice_idle is disabled, return */
+	if (!elv_ioq_sync(ioq) && !efqd->elv_async_slice_idle)
 		return;

 	/*
@@ -3708,7 +3718,10 @@ static void elv_ioq_arm_slice_timer(struct request_queue *q, int wait_for_busy)
 	 */
 	if (wait_for_busy) {
 		elv_mark_ioq_wait_busy(ioq);
-		sl = efqd->elv_slice_idle;
+		if (elv_ioq_sync(ioq))
+			sl = efqd->elv_slice_idle;
+		else
+			sl = efqd->elv_async_slice_idle;
 		mod_timer(&efqd->idle_slice_timer, jiffies + sl);
 		elv_log_ioq(efqd, ioq, "arm idle: %lu wait busy=1", sl);
 		return;
@@ -3882,6 +3895,18 @@ void *elv_fq_select_ioq(struct request_queue *q, int force)
 		goto keep_queue;
 	}

+	/*
+	 * If this is an async queue which has time slice left but not
+	 * requests. Wait busy is also not on (may be because when last
+	 * request completed, ioq was not empty). Wait for the request
+	 * completion. May be completion will turn wait busy on.
+	 */
+	if (efqd->fairness && efqd->elv_async_slice_idle && !elv_ioq_sync(ioq)
+	    && elv_ioq_nr_dispatched(ioq)) {
+		ioq = NULL;
+		goto keep_queue;
+	}
+
 	slice_expired = 0;
expire:
 	if (efqd->fairness >= 2 && !force && ioq && ioq->dispatched
@@ -4076,6 +4101,13 @@ void elv_ioq_completed_request(struct request_queue *q, struct request *rq)
 		goto done;
 	}

+	/* For async queue try to do wait busy */
+	if (efqd->fairness && !elv_ioq_sync(ioq) && !ioq->nr_queued
+	    && (elv_iog_nr_active(iog) <= 1)) {
+		elv_ioq_arm_slice_timer(q, 1);
+		goto done;
+	}
+
 	/*
 	 * If there are no requests waiting in this queue, and
 	 * there are other queues ready to issue requests, AND
@@ -4215,6 +4247,7 @@ int elv_init_fq_data(struct request_queue *q, struct elevator_queue *e)
 	efqd->elv_slice[0] = elv_slice_async;
 	efqd->elv_slice[1] = elv_slice_sync;
 	efqd->elv_slice_idle = elv_slice_idle;
+	efqd->elv_async_slice_idle = elv_async_slice_idle;
 	efqd->hw_tag = 1;

 	/* For the time being keep fairness enabled by default */
diff --git a/block/elevator-fq.h b/block/elevator-fq.h
index 19ac8ca..f089a55 100644
--- a/block/elevator-fq.h
+++ b/block/elevator-fq.h
@@ -362,6 +362,8 @@ struct elv_fq_data {
 	 * users of this functionality.
 	 */
 	unsigned int elv_slice_idle;
+	/* idle slice for async queue */
+	unsigned int elv_async_slice_idle;

 	struct timer_list idle_slice_timer;
 	struct work_struct unplug_work;
@@ -647,6 +649,9 @@ extern ssize_t elv_slice_async_store(struct elevator_queue *q, const char *name,
 extern ssize_t elv_fairness_show(struct elevator_queue *q, char *name);
 extern ssize_t elv_fairness_store(struct elevator_queue *q, const char *name,
					size_t count);
+extern ssize_t elv_async_slice_idle_show(struct elevator_queue *q, char *name);
+extern ssize_t elv_async_slice_idle_store(struct elevator_queue *q,
+					const char *name, size_t count);

 /* Functions used by elevator.c */
 extern int elv_init_fq_data(struct request_queue *q, struct elevator_queue *e);
-- 
1.6.0.6