From: Andrea Righi <righi.andrea-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
To: Vivek Goyal <vgoyal-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
Cc: dhaval-23VcF4HTsmIX0ybBhKVfKdBPR1lH4CV8@public.gmane.org,
	snitzer-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org,
	dm-devel-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org,
	jens.axboe-QHcLZuEGTsvQT0dZR+AlfA@public.gmane.org,
	agk-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org,
	balbir-23VcF4HTsmIX0ybBhKVfKdBPR1lH4CV8@public.gmane.org,
	paolo.valente-rcYM44yAMweonA0d6jMUrA@public.gmane.org,
	fernando-gVGce1chcLdL9jVzuh4AOg@public.gmane.org,
	jmoyer-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org,
	fchecconi-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org,
	containers-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA@public.gmane.org,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org,
	Andrew Morton
	<akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b@public.gmane.org>
Subject: Re: IO scheduler based IO Controller V2
Date: Thu, 7 May 2009 00:02:51 +0200	[thread overview]
Message-ID: <20090506220250.GD4282@linux> (raw)
In-Reply-To: <20090506212121.GI8180-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>

On Wed, May 06, 2009 at 05:21:21PM -0400, Vivek Goyal wrote:
> > Well, IMHO the big concern is at which level we want to implement the
> > logic of control: at the IO scheduler, when the IO requests have already
> > been submitted and need to be dispatched, or at a higher level, when the
> > applications generate IO requests (or maybe both).
> > 
> > And, as pointed out by Andrew, do everything via a cgroup-based controller.
> 
> I am not sure what the rationale behind that is. Why do it at a higher
> layer? Doing it at the IO scheduler layer will make sure that one does not
> break the IO scheduler's properties within a cgroup. (See my other mail
> with some io-throttling test results.)
> 
> The advantage of a higher layer mechanism is that it can also cover software
> RAID devices well. 
> 
> > 
> > The other features (proportional BW, throttling, taking the current ioprio
> > model into account, etc.) are implementation details, and any of the
> > proposed solutions can be extended to support all of them. I
> > mean, io-throttle can be extended to support proportional BW (from a
> > certain perspective it is already provided by the throttling water mark
> > in v16), just as the IO scheduler based controller can be extended to
> > support absolute BW limits. The same goes for dm-ioband. I don't think
> > there are huge obstacles to merging the functionalities in this sense.
> 
> Yes, from a technical point of view, one can implement a proportional BW
> controller at a higher layer too. But that would practically mean almost
> re-implementing the CFQ logic at a higher layer. Why get into all
> that complexity? Why not simply make CFQ hierarchical so it can also handle
> groups?

Making CFQ aware of cgroups is very important too. I could be wrong, but
I don't think we would need to re-implement the exact same CFQ logic at
higher layers. CFQ dispatches IO requests; at higher layers, applications
submit IO requests. We're talking about different things, and applying
different logic doesn't sound too strange IMHO. I mean, at the very least
we should consider and test this different approach before deciding to
drop it.

This solution also guarantees no changes in the IO schedulers for those
who are not interested in using the cgroup IO controller. What is the
impact of the IO scheduler based controller on those users?

> 
> Secondly, think of the following odd scenarios if we implement a higher level
> proportional BW controller which can offer the same features as CFQ and
> can also handle group scheduling.
> 
> Case1:
> ======	 
>            (Higher level proportional BW controller)
> 			/dev/sda (CFQ)
> 
> So if somebody wants group scheduling, we will be doing the same IO control
> in two places (within a group): once at the higher level and a second time at
> the CFQ level. That does not sound too logical to me.
> 
> Case2:
> ======
> 
>            (Higher level proportional BW controller)
> 			/dev/sda (NOOP)
> 	
> This is the other extreme. The lower level IO scheduler does not offer any
> notion of class or priority within a class, and the higher level scheduler
> will still be maintaining all that infrastructure unnecessarily.
> 
> That's why I get back to this simple question again: why not extend the
> IO schedulers to handle group scheduling and do both proportional BW and
> max BW control there?
> 
> > 
> > > 
> > > Andrea, last time you were planning to have a look at my patches and see
> > > if a max bw controller can be implemented there. I got a feeling that it
> > > should not be too difficult to implement it there. We already have the
> > > hierarchical tree of io queues and groups in the elevator layer, and we run
> > > the BFQ (WF2Q+) algorithm to select the next queue to dispatch IO from. It
> > > is just a matter of also keeping track of the IO rate per queue/group, and
> > > we should easily be able to delay the dispatch of IO from a queue if its
> > > group has crossed the specified max bw.
> > 
> > Yes, sorry for my late reply. I quickly tested your patchset, but I still
> > need to understand many details of your solution. In the next few days I'll
> > re-read everything carefully and I'll try to do a detailed review of
> > your patchset (so far I have just re-built the kernel with your patchset
> > applied).
> > 
> 
> Sure. My patchset is still in its infancy, so don't expect great
> results. But it does highlight the idea and design very well.
> 
> > > 
> > > This should lead to less code and reduced complexity (compared with the
> > > case where we do max BW control with the io-throttling patches and
> > > proportional BW control using the IO scheduler based control patches).
> > 
> > mmmh... changing the logic in the elevator and all the IO schedulers doesn't
> > sound like reduced complexity and less code changed. With io-throttle we
> > just need to place the cgroup_io_throttle() hook in the right functions
> > where we want to apply throttling. This is quite an easy approach to
> > extend IO control also to logical devices (more generally, devices
> > that use their own make_request_fn), or even network-attached devices, as
> > well as network filesystems, etc.
> > 
> > But I may be wrong. As I said, I still need to review your solution
> > in detail.
> 
> Well, I meant reduced code in the sense that we implement both max BW and
> proportional BW at the IO scheduler level, instead of proportional BW at the
> IO scheduler and max BW at a higher level.

OK.

> 
> I agree that doing max BW control at a higher level has the advantage that
> it covers all kinds of devices (higher level logical devices), while an IO
> scheduler level solution does not. But this comes at the price
> of broken IO scheduler properties within a cgroup.
> 
> Maybe we can then implement both: a higher level max BW controller, and a
> max BW feature implemented alongside the proportional BW controller at the
> IO scheduler level. Folks who use hardware RAID or single disk devices can
> use the max BW control of the IO scheduler, and those using software RAID
> devices can use the higher level max BW controller.

OK, maybe.

> 
> > 
> > >  
> > > So do you think it would make sense to do max BW control along with the
> > > proportional weight IO controller at the IO scheduler? If yes, then we can
> > > work together and continue to develop this patchset to also support max
> > > BW control, meet your requirements, and drop the io-throttling patches.
> > 
> > It is surely worth exploring. Honestly, I don't know whether it would be
> > a better solution or not. Probably comparing some results with different
> > IO workloads is the best way to proceed and decide which is the right
> > way to go. This is necessary IMHO before totally dropping one solution
> > or the other.
> 
> Sure. My patches have started giving some basic results, but there is a lot
> of work remaining before a fair comparison can be done on the basis of
> performance under various workloads. So some more time to go before we can
> do a fair comparison based on numbers.
>  
> > 
> > > 
> > > The only thing which concerns me is the fact that the IO scheduler does
> > > not have a view of the higher level logical device. So if somebody has set
> > > up a software RAID and wants to put a max BW limit on the software RAID
> > > device, this solution will not work. One shall have to live with max BW
> > > limits on the individual disks (where the IO scheduler is actually
> > > running). Do your patches allow putting limits on software RAID devices
> > > as well?
> > 
> > No, but as said above, my patchset provides the interfaces to apply IO
> > control and accounting wherever we want. At the moment there's just
> > one interface, cgroup_io_throttle().
> 
> Sorry, I did not get it clearly. I guess I did not ask the question right.
> So let's say I have a setup where there are two physical devices, /dev/sda
> and /dev/sdb, and I create a logical device (say, using device mapper
> facilities) on top of these two physical disks. And some application is
> generating the IO for logical device lv0.
> 
> 				Appl
> 				 |
> 				lv0
> 			       /  \
> 			    sda	   sdb
> 
> 
> Where should I put the bandwidth limiting rules now for io-throttle? Should
> I specify them for the lv0 device or for the sda and sdb devices?

The BW limiting rules would be applied in the make_request_fn provided
by the lv0 device. If one is not provided, they would be applied before
calling generic_make_request(). A problem could be that the driver must
be aware of the particular lv0 device at that point.

> 
> Thanks
> Vivek

OK. I definitely need to look at your patchset before offering any further
opinion... :)

Thanks,
-Andrea

2009-05-07  5:36       ` Li Zefan
     [not found]         ` <4A027348.6000808-BthXqXjhjHXQFUHtdCDX3A@public.gmane.org>
2009-05-08 13:37           ` Vivek Goyal
2009-05-08 13:37             ` Vivek Goyal
2009-05-11  2:59             ` Gui Jianfeng
     [not found]             ` <20090508133740.GD7293-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2009-05-11  2:59               ` Gui Jianfeng
2009-05-07  5:47       ` Gui Jianfeng
     [not found]       ` <20090506161012.GC8180-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2009-05-07  5:36         ` Li Zefan
2009-05-07  5:47         ` Gui Jianfeng
2009-05-08  9:45 ` [PATCH] io-controller: Add io group reference handling for request Gui Jianfeng
     [not found]   ` <4A03FF3C.4020506-BthXqXjhjHXQFUHtdCDX3A@public.gmane.org>
2009-05-08 13:57     ` Vivek Goyal
2009-05-08 13:57       ` Vivek Goyal
     [not found]       ` <20090508135724.GE7293-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2009-05-08 17:41         ` Nauman Rafique
2009-05-08 17:41       ` Nauman Rafique
2009-05-08 17:41         ` Nauman Rafique
2009-05-08 18:56         ` Vivek Goyal
     [not found]           ` <20090508185644.GH7293-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2009-05-08 19:06             ` Nauman Rafique
2009-05-08 19:06           ` Nauman Rafique
2009-05-08 19:06             ` Nauman Rafique
2009-05-11  1:33         ` Gui Jianfeng
2009-05-11 15:41           ` Vivek Goyal
     [not found]             ` <20090511154127.GD6036-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2009-05-15  5:15               ` Gui Jianfeng
2009-05-15  5:15                 ` Gui Jianfeng
2009-05-15  7:48                 ` Andrea Righi
2009-05-15  8:16                   ` Gui Jianfeng
     [not found]                     ` <4A0D24E6.6010807-BthXqXjhjHXQFUHtdCDX3A@public.gmane.org>
2009-05-15 14:09                       ` Vivek Goyal
2009-05-15 14:09                         ` Vivek Goyal
2009-05-15 14:06                   ` Vivek Goyal
2009-05-17 10:26                     ` Andrea Righi
2009-05-18 14:01                       ` Vivek Goyal
2009-05-18 14:01                         ` Vivek Goyal
2009-05-18 14:39                         ` Andrea Righi
2009-05-26 11:34                           ` Ryo Tsuruta
2009-05-27  6:56                             ` Ryo Tsuruta
2009-05-27  6:56                               ` Ryo Tsuruta
2009-05-27  8:17                               ` Andrea Righi
2009-05-27  8:17                                 ` Andrea Righi
2009-05-27 11:53                                 ` Ryo Tsuruta
2009-05-27 17:32                               ` Vivek Goyal
2009-05-27 17:32                                 ` Vivek Goyal
     [not found]                               ` <20090527.155631.226800550.ryov-jCdQPDEk3idL9jVzuh4AOg@public.gmane.org>
2009-05-27  8:17                                 ` Andrea Righi
2009-05-27 17:32                                 ` Vivek Goyal
     [not found]                             ` <20090526.203424.39179999.ryov-jCdQPDEk3idL9jVzuh4AOg@public.gmane.org>
2009-05-27  6:56                               ` Ryo Tsuruta
2009-05-19 12:18                         ` Ryo Tsuruta
     [not found]                         ` <20090518140114.GB27080-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2009-05-18 14:39                           ` Andrea Righi
2009-05-19 12:18                           ` Ryo Tsuruta
     [not found]                     ` <20090515140643.GB19350-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2009-05-17 10:26                       ` Andrea Righi
     [not found]                 ` <4A0CFA6C.3080609-BthXqXjhjHXQFUHtdCDX3A@public.gmane.org>
2009-05-15  7:48                   ` Andrea Righi
2009-05-15  7:40               ` Gui Jianfeng
2009-05-15  7:40                 ` Gui Jianfeng
2009-05-15 14:01                 ` Vivek Goyal
     [not found]                 ` <4A0D1C55.9040700-BthXqXjhjHXQFUHtdCDX3A@public.gmane.org>
2009-05-15 14:01                   ` Vivek Goyal
     [not found]           ` <4A078051.5060702-BthXqXjhjHXQFUHtdCDX3A@public.gmane.org>
2009-05-11 15:41             ` Vivek Goyal
     [not found]         ` <e98e18940905081041r386e52a5q5a2b1f13f1e8c634-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2009-05-08 18:56           ` Vivek Goyal
2009-05-11  1:33           ` Gui Jianfeng
2009-05-13  2:00 ` [PATCH] IO Controller: Add per-device weight and ioprio_class handling Gui Jianfeng
2009-05-13 14:44   ` Vivek Goyal
     [not found]     ` <20090513144432.GA7696-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2009-05-14  0:59       ` Gui Jianfeng
2009-05-14  0:59     ` Gui Jianfeng
2009-05-13 15:29   ` Vivek Goyal
2009-05-14  1:02     ` Gui Jianfeng
     [not found]     ` <20090513152909.GD7696-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2009-05-14  1:02       ` Gui Jianfeng
2009-05-13 15:59   ` Vivek Goyal
2009-05-14  1:51     ` Gui Jianfeng
     [not found]     ` <20090513155900.GA15623-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2009-05-14  1:51       ` Gui Jianfeng
2009-05-14  2:25       ` Gui Jianfeng
2009-05-14  2:25     ` Gui Jianfeng
     [not found]   ` <4A0A29B5.7030109-BthXqXjhjHXQFUHtdCDX3A@public.gmane.org>
2009-05-13 14:44     ` Vivek Goyal
2009-05-13 15:29     ` Vivek Goyal
2009-05-13 15:59     ` Vivek Goyal
2009-05-13 17:17     ` Vivek Goyal
2009-05-13 19:09     ` Vivek Goyal
2009-05-13 17:17   ` Vivek Goyal
     [not found]     ` <20090513171734.GA18371-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2009-05-14  1:24       ` Gui Jianfeng
2009-05-14  1:24     ` Gui Jianfeng
2009-05-13 19:09   ` Vivek Goyal
2009-05-14  1:35     ` Gui Jianfeng
     [not found]     ` <20090513190929.GB18371-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2009-05-14  1:35       ` Gui Jianfeng
2009-05-14  7:26       ` Gui Jianfeng
2009-05-14  7:26     ` Gui Jianfeng
2009-05-14 15:15       ` Vivek Goyal
2009-05-18 22:33       ` IKEDA, Munehiro
2009-05-20  1:44         ` Gui Jianfeng
     [not found]           ` <4A136090.5090705-BthXqXjhjHXQFUHtdCDX3A@public.gmane.org>
2009-05-20 15:41             ` IKEDA, Munehiro
2009-05-20 15:41               ` IKEDA, Munehiro
     [not found]         ` <4A11E244.2000305-MDRzhb/z0dd8UrSeD/g0lQ@public.gmane.org>
2009-05-20  1:44           ` Gui Jianfeng
     [not found]       ` <4A0BC7AB.8030703-BthXqXjhjHXQFUHtdCDX3A@public.gmane.org>
2009-05-14 15:15         ` Vivek Goyal
2009-05-18 22:33         ` IKEDA, Munehiro
  -- strict thread matches above, loose matches on Subject: below --
2009-05-05 19:58 IO scheduler based IO Controller V2 Vivek Goyal

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=20090506220250.GD4282@linux \
    --to=righi.andrea-re5jqeeqqe8avxtiumwx3w@public.gmane.org \
    --cc=agk-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org \
    --cc=akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b@public.gmane.org \
    --cc=balbir-23VcF4HTsmIX0ybBhKVfKdBPR1lH4CV8@public.gmane.org \
    --cc=containers-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA@public.gmane.org \
    --cc=dhaval-23VcF4HTsmIX0ybBhKVfKdBPR1lH4CV8@public.gmane.org \
    --cc=dm-devel-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org \
    --cc=fchecconi-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org \
    --cc=fernando-gVGce1chcLdL9jVzuh4AOg@public.gmane.org \
    --cc=jens.axboe-QHcLZuEGTsvQT0dZR+AlfA@public.gmane.org \
    --cc=jmoyer-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org \
    --cc=linux-kernel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org \
    --cc=paolo.valente-rcYM44yAMweonA0d6jMUrA@public.gmane.org \
    --cc=snitzer-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org \
    --cc=vgoyal-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link

Be sure your reply has a Subject: header at the top and a blank line before the message body.
This is an external index of several public inboxes;
see the mirroring instructions for how to clone and mirror
all data and code used by this external index.