From: Vivek Goyal <vgoyal@redhat.com>
To: Nauman Rafique <nauman@google.com>
Cc: linux-kernel@vger.kernel.org, jens.axboe@oracle.com, containers@lists.linux-foundation.org, dm-devel@redhat.com, dpshah@google.com, lizf@cn.fujitsu.com, mikew@google.com, fchecconi@gmail.com, paolo.valente@unimore.it, ryov@valinux.co.jp, fernando@oss.ntt.co.jp, s-uchida@ap.jp.nec.com, taka@valinux.co.jp, guijianfeng@cn.fujitsu.com, jmoyer@redhat.com, dhaval@linux.vnet.ibm.com, balbir@linux.vnet.ibm.com, righi.andrea@gmail.com, m-ikeda@ds.jp.nec.com, agk@redhat.com, akpm@linux-foundation.org, peterz@infradead.org, jmarchan@redhat.com, torvalds@linux-foundation.org, mingo@elte.hu, riel@redhat.com, yoshikawa.takuya@oss.ntt.co.jp
Subject: Re: IO scheduler based IO controller V10
Date: Mon, 28 Sep 2009 23:22:55 -0400
Message-ID: <20090929032255.GA10664@redhat.com>
In-Reply-To: <e98e18940909281737q142c788dpd20b8bdc05dd0eff@mail.gmail.com>

On Mon, Sep 28, 2009 at 05:37:28PM -0700, Nauman Rafique wrote:
> Hi Vivek,
> Me, Divyesh, Fernando and Yoshikawa had a chance to have a chat with
> Jens about IO controller during Linux Plumbers Conference '09. Jens
> expressed his concerns about the size and complexity of the patches. I
> believe that is a reasonable concern. We talked about things that
> could be done to reduce the size of the patches. The requirement that
> the "solution has to work with all IO schedulers" seems like a
> secondary concern at this point; and it came out as one thing that can
> help to reduce the size of the patch set.

Initially doing cgroup based IO control only for CFQ should help a lot
in reducing the patchset size.

> Another possibility is to
> use a simpler scheduling algorithm e.g. weighted round robin, instead
> of BFQ scheduler. BFQ indeed has great properties, but we cannot deny
> the fact that it is complex to understand, and might be cumbersome to
> maintain.

I have already gotten rid of the core of BFQ.
The remaining parts are the idle tree and the data structures. I will
see how I can simplify it further.

> Also, hierarchical scheduling is something that could be
> unnecessary in the first set of patches, even though cgroups are
> hierarchical in nature.

Sure. Though I don't think that a lot of the code is there because of
the hierarchical nature. If we solve the issue at the CFQ layer, we
have to maintain at least two levels: one for queues and one for
groups. So even the simplest solution becomes almost hierarchical in
nature. But I will still see how to get rid of some code here too...

> We are starting from a point where there is no cgroup based IO
> scheduling in the kernel. And it is probably not reasonable to satisfy
> all IO scheduling related requirements in one patch set. We can start
> with something simple, and build on top of that. So a very simple
> patch set that enables cgroup based proportional scheduling for CFQ
> seems like the way to go at this point.

Sure, we can start with CFQ only. But a bigger question we need to
answer is whether CFQ is the right place to solve the issue. Jens, do
you think that CFQ is the right place to solve the problem? Andrew
seems to favor a high level approach, so that IO schedulers are less
complex and we can provide fairness at high level logical devices also.

I will again try to summarize my understanding so far about the
pros/cons of each approach and then we can take the discussion forward.

Fairness in terms of size of IO or disk time used
=================================================
On seeky media, fairness in terms of disk time can get us better
results than fairness in terms of size of IO or number of IOs. If we
implement some kind of time based solution at a higher layer, then
that higher layer needs to know how much time each group used. We can
probably do some kind of timestamping in the bio to get a sense of
when it got to the disk and when it finished.
But on multi queue hardware there can be multiple requests in the disk
at once, either from the same queue or from different queues, and with
a pure timestamping based approach I could not so far think of a way
for the higher level to figure out who used how much time. So this is
the first point of contention: do we want to provide fairness in terms
of disk time used, or in terms of size/number of IOs?

Max bandwidth Controller or Proportional bandwidth controller
=============================================================
What is our primary requirement here? A weight based proportional
bandwidth controller, where we use the resources optimally and any
kind of throttling kicks in only if there is contention for the disk?
Or max bandwidth control, where a group is not allowed to use the disk
beyond its limit even if the disk is free? Or do we need both? I would
think that at some point we will need both, but we can start with
proportional bandwidth control first.

Fairness for higher level logical devices
=========================================
Do we want good fairness numbers for higher level logical devices
also, or is it sufficient to provide fairness at the leaf nodes?
Providing fairness at leaf nodes can help us use the resources
optimally, and in the process we get fairness at the higher level also
in many of the cases. But do we want strict fairness numbers on higher
level logical devices even if it means sub-optimal usage of the
underlying physical devices?

I think that for proportional bandwidth control it should be ok to
provide fairness at the leaf nodes, but for max bandwidth control it
might make more sense to provide control at the higher level logical
device. Consider a case where on a striped device a customer wants to
limit a group to 30MB/s; with leaf node control, if every leaf node
provides 30MB/s, it might accumulate to much more than the specified
rate at the logical device.
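The accumulation effect can be sketched with a bit of arithmetic (a
minimal illustration only; the 4-way stripe count is a hypothetical
value, and `worst_case_rate` is not from any posted patch):

```python
# Toy arithmetic for the striped-device case above. The 30MB/s limit
# is the one from the example; the 4-way stripe is an assumption.

def worst_case_rate(per_leg_limit_mbps, num_legs):
    # With max-bw control enforced independently at each leaf node, a
    # group striped across all legs can consume up to the sum of the
    # per-leg limits at the logical device.
    return per_leg_limit_mbps * num_legs

print(worst_case_rate(30, 4))  # the "30MB/s" group can reach 120MB/s
```

This is why max bandwidth control at the leaf nodes does not compose
into the limit the customer actually asked for at the logical device.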
Latency Control and strong isolation between groups
===================================================
Do we want better latencies and stronger isolation between groups? I
think that if the problem is solved at the IO scheduler level, we can
achieve better latency control and hence stronger isolation between
groups. Higher level solutions will find it hard to provide the same
kind of latency control and isolation as an IO scheduler based
solution.

Fairness for buffered writes
============================
Doing IO control at any place below the page cache has the
disadvantage that the page cache might not dispatch more writes from
the higher weight group, and hence the higher weight group might not
see more IO done. Andrew says that we don't have a solution to this
problem in the kernel and he would like to see it handled properly.

The only way to solve this seems to be to slow down the writers before
they write into the page cache. The IO throttling patch handled it by
slowing down a writer if it crossed the max specified rate. Other
suggestions have come in the form of a dirty_ratio per memory cgroup,
or a separate cgroup controller altogether where some kind of per
group write limit can be specified. So whether the solution is
implemented at the IO scheduler layer or at the device mapper layer,
both shall have to rely on another co-mounted controller to handle
buffered writes properly.

Fairness within group
=====================
One of the issues with a higher level controller is how to do fair
throttling so that fairness within the group is not impacted,
especially making sure that we don't break the notion of ioprio of the
processes within the group. The IO throttling patch was very bad in
terms of prio within a group: throttling treated everyone equally and
the difference between process prios disappeared.

Reads Vs Writes
===============
A higher level controller will most likely change the ratio in which
reads and writes are dispatched to the disk within a group.
So far that ratio was decided by the IO scheduler, but with higher
level groups doing throttling, and possibly buffering the bios and
releasing them later, they will have to come up with their own policy
on the proportion in which reads and writes are dispatched. With IO
scheduler based control, all the queuing takes place at the IO
scheduler and it retains control over the ratio in which reads and
writes are dispatched.

Summary
=======
- An IO scheduler based IO controller can provide better latencies,
  stronger isolation between groups, time based fairness, and will not
  interfere with IO scheduler policies like class, ioprio, and reader
  vs writer issues. But it cannot guarantee fairness at higher level
  logical devices. Especially in the case of max bw control, leaf node
  control does not sound like the most appropriate thing.

- IO throttling provides max bw control in terms of an absolute rate.
  It has the advantage that it can provide control at higher level
  logical devices and can also control buffered writes without needing
  an additional co-mounted controller. But it does only max bw control
  and not proportional control, so one might not be using resources
  optimally. It loses the sense of task prio and class within a group,
  as any task within the group can be throttled. Because throttling
  does not kick in till you hit the max bw limit, it will find it hard
  to provide the same latencies as IO scheduler based control.

- dm-ioband also has the advantage that it can provide fairness at
  higher level logical devices. But fairness is provided only in terms
  of size or number of IOs; there is no time based fairness. It is
  very throughput oriented and does not throttle a high speed group if
  another group is running a slow random reader. This results in bad
  latencies for the random reader group and weaker isolation between
  groups. Also, it does not provide fairness if a group is not
  continuously backlogged.
So if one is running 1-2 dd/sequential readers in the group, one does
not get fairness until the workload is increased to a point where the
group becomes continuously backlogged. This also results in poor
latencies and limited fairness.

At this point it does not look like a single IO controller can cover
all the scenarios/requirements. This means a few things to me.

- Drop some of the requirements and go with one implementation which
  meets the reduced set of requirements.

- Have more than one IO controller implementation in the kernel: one
  for lower level control with better latencies, stronger isolation
  and optimal resource usage, and another for fairness at higher level
  logical devices and max bandwidth control. And let the user decide
  which one to use based on his/her needs.

- Come up with a more intelligent way of doing IO control where a
  single controller covers all the cases.

At this point, I am more inclined towards option 2 of having more than
one implementation in the kernel. :-) (Unless and until we can
brainstorm and come up with ideas to make option 3 happen.)

> It would be great if we discuss our plans on the mailing list, so we
> can get early feedback from everyone.

This is what comes to my mind so far. Please add to the list if I have
missed some points. Also correct me if I am wrong about the pros/cons
of the approaches.

Thoughts/ideas/opinions are welcome...

Thanks
Vivek
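As a rough illustration of what the weight based proportional time
allocation discussed above means (a toy sketch only; the group names,
weights, and round length are hypothetical and not taken from any
posted patch):

```python
def time_slices(group_weights, round_ms=100):
    # Split each scheduling round's disk time among backlogged groups
    # in proportion to their weights, weighted-round-robin style.
    total = sum(group_weights.values())
    return {g: round_ms * w / total for g, w in group_weights.items()}

# A group with 3x the weight gets 3x the disk time per round:
slices = time_slices({"grp_a": 300, "grp_b": 100})
print(slices)  # {'grp_a': 75.0, 'grp_b': 25.0}
```

Note the contrast with max bw control: here a lone backlogged group
would simply receive the whole round, since throttling only emerges
from contention, not from an absolute cap.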
<1254509838.8667.30.camel-YqMYhexLQo1vAv1Ojkdn7Q@public.gmane.org> 2009-10-02 20:47 ` Mike Galbraith 2009-10-03 5:48 ` Mike Galbraith 2009-10-03 5:56 ` Mike Galbraith 2009-10-03 6:31 ` tweaking IO latency [was Re: IO scheduler based IO controller V10] Mike Galbraith 2009-10-03 7:24 ` IO scheduler based IO controller V10 Jens Axboe 2009-10-03 9:00 ` Mike Galbraith 2009-10-03 9:12 ` Corrado Zoccolo 2009-10-03 9:12 ` Corrado Zoccolo [not found] ` <4e5e476b0910030212y50f97d97nc2e17c35d855cc63-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org> 2009-10-03 13:18 ` Jens Axboe 2009-10-03 13:18 ` Jens Axboe 2009-10-03 13:18 ` Jens Axboe [not found] ` <1254560434.17052.14.camel-YqMYhexLQo1vAv1Ojkdn7Q@public.gmane.org> 2009-10-03 9:12 ` Corrado Zoccolo 2009-10-03 13:17 ` Jens Axboe 2009-10-03 13:17 ` Jens Axboe 2009-10-03 13:17 ` Jens Axboe [not found] ` <20091003072401.GV31616-tSWWG44O7X1aa/9Udqfwiw@public.gmane.org> 2009-10-03 9:00 ` Mike Galbraith 2009-10-03 11:29 ` Vivek Goyal 2009-10-03 11:29 ` Vivek Goyal [not found] ` <20091003112915.GA12925-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> 2009-10-03 12:40 ` Do not overload dispatch queue (Was: Re: IO scheduler based IO controller V10) Vivek Goyal 2009-10-03 12:40 ` Vivek Goyal 2009-10-03 12:40 ` Vivek Goyal 2009-10-03 13:21 ` Jens Axboe [not found] ` <20091003132115.GB31616-tSWWG44O7X1aa/9Udqfwiw@public.gmane.org> 2009-10-03 13:56 ` Vivek Goyal 2009-10-03 13:56 ` Vivek Goyal 2009-10-03 13:56 ` Vivek Goyal 2009-10-03 14:02 ` Mike Galbraith [not found] ` <1254578553.7499.5.camel-YqMYhexLQo1vAv1Ojkdn7Q@public.gmane.org> 2009-10-03 14:28 ` Jens Axboe 2009-10-03 14:28 ` Jens Axboe 2009-10-03 14:33 ` Mike Galbraith 2009-10-03 14:33 ` Mike Galbraith 2009-10-03 14:51 ` Mike Galbraith 2009-10-03 14:51 ` Mike Galbraith [not found] ` <1254581496.8293.8.camel-YqMYhexLQo1vAv1Ojkdn7Q@public.gmane.org> 2009-10-03 15:14 ` Jens Axboe 2009-10-03 15:14 ` Jens Axboe 2009-10-03 15:14 ` Jens Axboe [not found] ` 
<20091003151445.GF31616-tSWWG44O7X1aa/9Udqfwiw@public.gmane.org> 2009-10-03 15:57 ` Mike Galbraith 2009-10-03 15:57 ` Mike Galbraith 2009-10-03 17:35 ` Jens Axboe 2009-10-03 17:35 ` Jens Axboe [not found] ` <20091003173532.GG31616-tSWWG44O7X1aa/9Udqfwiw@public.gmane.org> 2009-10-03 17:45 ` Linus Torvalds 2009-10-03 17:45 ` Linus Torvalds [not found] ` <alpine.LFD.2.01.0910031042560.6996-bi+AKbBUZKY6gyzm1THtWbp2dZbC/Bob@public.gmane.org> 2009-10-03 17:51 ` Jens Axboe 2009-10-03 17:51 ` Jens Axboe 2009-10-03 19:07 ` Mike Galbraith 2009-10-03 19:07 ` Mike Galbraith 2009-10-03 19:07 ` Mike Galbraith 2009-10-03 19:11 ` Mike Galbraith 2009-10-03 19:11 ` Mike Galbraith 2009-10-03 19:23 ` Jens Axboe 2009-10-03 19:23 ` Jens Axboe 2009-10-03 19:49 ` Mike Galbraith 2009-10-03 19:49 ` Mike Galbraith 2009-10-04 10:50 ` Mike Galbraith 2009-10-04 11:33 ` Mike Galbraith [not found] ` <1254653434.7237.18.camel-YqMYhexLQo1vAv1Ojkdn7Q@public.gmane.org> 2009-10-04 17:39 ` Jens Axboe 2009-10-04 17:39 ` Jens Axboe [not found] ` <20091004173901.GD26573-tSWWG44O7X1aa/9Udqfwiw@public.gmane.org> 2009-10-04 18:23 ` Mike Galbraith 2009-10-04 18:23 ` Mike Galbraith 2009-10-04 18:23 ` Mike Galbraith [not found] ` <1254680622.27889.2.camel-YqMYhexLQo1vAv1Ojkdn7Q@public.gmane.org> 2009-10-04 18:38 ` Jens Axboe 2009-10-04 18:38 ` Jens Axboe 2009-10-04 18:38 ` Jens Axboe [not found] ` <20091004183822.GF26573-tSWWG44O7X1aa/9Udqfwiw@public.gmane.org> 2009-10-04 19:47 ` Mike Galbraith 2009-10-04 19:47 ` Mike Galbraith 2009-10-04 19:47 ` Mike Galbraith 2009-10-04 20:17 ` Jens Axboe 2009-10-04 20:17 ` Jens Axboe [not found] ` <20091004201708.GJ26573-tSWWG44O7X1aa/9Udqfwiw@public.gmane.org> 2009-10-04 22:15 ` Mike Galbraith 2009-10-04 22:15 ` Mike Galbraith 2009-10-04 22:15 ` Mike Galbraith [not found] ` <1254685638.7637.6.camel-YqMYhexLQo1vAv1Ojkdn7Q@public.gmane.org> 2009-10-04 20:17 ` Jens Axboe [not found] ` <20091003192321.GA26573-tSWWG44O7X1aa/9Udqfwiw@public.gmane.org> 2009-10-03 19:49 ` Mike 
Galbraith [not found] ` <1254596864.7153.9.camel-YqMYhexLQo1vAv1Ojkdn7Q@public.gmane.org> 2009-10-03 19:11 ` Mike Galbraith 2009-10-03 19:23 ` Jens Axboe [not found] ` <1254585420.7539.2.camel-YqMYhexLQo1vAv1Ojkdn7Q@public.gmane.org> 2009-10-03 17:35 ` Jens Axboe [not found] ` <20091003142840.GE31616-tSWWG44O7X1aa/9Udqfwiw@public.gmane.org> 2009-10-03 14:33 ` Mike Galbraith 2009-10-03 14:51 ` Mike Galbraith [not found] ` <20091003135623.GD12925-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> 2009-10-03 14:02 ` Mike Galbraith [not found] ` <20091003124049.GB12925-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> 2009-10-03 13:21 ` Jens Axboe 2009-10-03 13:57 ` Mike Galbraith 2009-10-03 13:57 ` Mike Galbraith [not found] ` <1254549378.8299.21.camel-YqMYhexLQo1vAv1Ojkdn7Q@public.gmane.org> 2009-10-03 6:31 ` tweaking IO latency [was Re: IO scheduler based IO controller V10] Mike Galbraith 2009-10-03 7:24 ` IO scheduler based IO controller V10 Jens Axboe 2009-10-03 11:29 ` Vivek Goyal [not found] ` <1254548931.8299.18.camel-YqMYhexLQo1vAv1Ojkdn7Q@public.gmane.org> 2009-10-03 5:56 ` Mike Galbraith 2009-10-03 7:20 ` Ingo Molnar 2009-10-03 7:20 ` Ingo Molnar 2009-10-03 7:20 ` Ingo Molnar [not found] ` <20091003072021.GB21407-X9Un+BFzKDI@public.gmane.org> 2009-10-03 7:25 ` Jens Axboe 2009-10-03 7:25 ` Jens Axboe 2009-10-03 7:25 ` Jens Axboe 2009-10-03 8:53 ` Mike Galbraith [not found] ` <20091003072540.GW31616-tSWWG44O7X1aa/9Udqfwiw@public.gmane.org> 2009-10-03 8:53 ` Mike Galbraith 2009-10-03 9:01 ` Corrado Zoccolo 2009-10-03 9:01 ` Corrado Zoccolo [not found] ` <20091002181903.GN31616-tSWWG44O7X1aa/9Udqfwiw@public.gmane.org> 2009-10-02 18:57 ` Mike Galbraith 2009-10-03 5:48 ` Mike Galbraith [not found] ` <1254507215.8667.7.camel-YqMYhexLQo1vAv1Ojkdn7Q@public.gmane.org> 2009-10-02 18:19 ` Jens Axboe [not found] ` <20091002172554.GJ31616-tSWWG44O7X1aa/9Udqfwiw@public.gmane.org> 2009-10-02 17:28 ` Ingo Molnar [not found] ` <20091002172046.GA2376-X9Un+BFzKDI@public.gmane.org> 2009-10-02 
17:25 ` Jens Axboe [not found] ` <20091002171129.GG31616-tSWWG44O7X1aa/9Udqfwiw@public.gmane.org> 2009-10-02 17:20 ` Ingo Molnar [not found] ` <alpine.LFD.2.01.0910020811490.6996-bi+AKbBUZKY6gyzm1THtWbp2dZbC/Bob@public.gmane.org> 2009-10-02 16:01 ` jim owens 2009-10-02 17:11 ` Jens Axboe 2009-10-02 16:33 ` Ray Lee 2009-10-02 17:13 ` Jens Axboe 2009-10-02 17:13 ` Jens Axboe [not found] ` <2c0942db0910020933l6d312c6ahae0e00619f598b39-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org> 2009-10-02 17:13 ` Jens Axboe [not found] ` <20091002145610.GD31616-tSWWG44O7X1aa/9Udqfwiw@public.gmane.org> 2009-10-02 15:14 ` Linus Torvalds 2009-10-02 16:33 ` Ray Lee [not found] ` <alpine.LFD.2.01.0910020715160.6996-bi+AKbBUZKY6gyzm1THtWbp2dZbC/Bob@public.gmane.org> 2009-10-02 14:45 ` Mike Galbraith 2009-10-02 14:56 ` Jens Axboe 2009-10-02 16:22 ` Ingo Molnar 2009-10-02 16:22 ` Ingo Molnar 2009-10-02 16:22 ` Ingo Molnar [not found] ` <20091002092409.GA19529-X9Un+BFzKDI@public.gmane.org> 2009-10-02 9:28 ` Jens Axboe 2009-10-02 9:36 ` Mike Galbraith 2009-10-02 9:36 ` Mike Galbraith 2009-10-02 16:37 ` Ingo Molnar 2009-10-02 16:37 ` Ingo Molnar [not found] ` <1254476214.11022.8.camel-YqMYhexLQo1vAv1Ojkdn7Q@public.gmane.org> 2009-10-02 16:37 ` Ingo Molnar [not found] ` <20091001185816.GU14918-tSWWG44O7X1aa/9Udqfwiw@public.gmane.org> 2009-10-02 6:23 ` Mike Galbraith 2009-10-02 18:08 ` Jens Axboe 2009-10-02 18:08 ` Jens Axboe 2009-10-02 18:29 ` Mike Galbraith 2009-10-02 18:36 ` Jens Axboe [not found] ` <1254508197.8667.22.camel-YqMYhexLQo1vAv1Ojkdn7Q@public.gmane.org> 2009-10-02 18:36 ` Jens Axboe [not found] ` <20091002180857.GM31616-tSWWG44O7X1aa/9Udqfwiw@public.gmane.org> 2009-10-02 18:29 ` Mike Galbraith [not found] ` <1254341139.7695.36.camel-YqMYhexLQo1vAv1Ojkdn7Q@public.gmane.org> 2009-09-30 20:24 ` Vivek Goyal 2009-09-27 17:00 ` Corrado Zoccolo 2009-09-28 14:56 ` Vivek Goyal 2009-09-28 14:56 ` Vivek Goyal [not found] ` <20090928145655.GB8192-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> 2009-09-28 
15:35 ` Corrado Zoccolo 2009-09-28 15:35 ` Corrado Zoccolo 2009-09-28 17:14 ` Vivek Goyal 2009-09-28 17:14 ` Vivek Goyal 2009-09-29 7:10 ` Corrado Zoccolo [not found] ` <20090928171420.GA3643-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> 2009-09-29 7:10 ` Corrado Zoccolo 2009-09-28 17:51 ` Mike Galbraith 2009-09-28 18:18 ` Vivek Goyal 2009-09-28 18:18 ` Vivek Goyal 2009-09-28 18:53 ` Mike Galbraith 2009-09-29 7:14 ` Corrado Zoccolo [not found] ` <1254164034.9820.81.camel-YqMYhexLQo1vAv1Ojkdn7Q@public.gmane.org> 2009-09-29 7:14 ` Corrado Zoccolo [not found] ` <20090928181846.GC3643-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> 2009-09-28 18:53 ` Mike Galbraith 2009-09-29 5:55 ` Mike Galbraith [not found] ` <1254160274.9820.25.camel-YqMYhexLQo1vAv1Ojkdn7Q@public.gmane.org> 2009-09-28 18:18 ` Vivek Goyal 2009-09-29 5:55 ` Mike Galbraith [not found] ` <4e5e476b0909280835w3410d58aod93a29d1dcda8909-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org> 2009-09-28 17:14 ` Vivek Goyal 2009-09-28 17:51 ` Mike Galbraith [not found] ` <4e5e476b0909271000u69d79346s27cccad219e49902-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org> 2009-09-28 14:56 ` Vivek Goyal [not found] ` <20090925202636.GC15007-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> 2009-09-26 14:51 ` Mike Galbraith 2009-09-27 17:00 ` Corrado Zoccolo 2009-09-29 0:37 ` Nauman Rafique 2009-09-29 0:37 ` Nauman Rafique 2009-09-29 3:22 ` Vivek Goyal [this message] 2009-09-29 3:22 ` Vivek Goyal [not found] ` <20090929032255.GA10664-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> 2009-09-29 9:56 ` Ryo Tsuruta 2009-09-29 9:56 ` Ryo Tsuruta 2009-09-29 10:49 ` Takuya Yoshikawa 2009-09-29 14:10 ` Vivek Goyal 2009-09-29 14:10 ` Vivek Goyal 2009-09-29 19:53 ` Nauman Rafique [not found] ` <20090929141049.GA12141-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> 2009-09-29 19:53 ` Nauman Rafique 2009-09-30 8:43 ` Ryo Tsuruta 2009-09-30 8:43 ` Ryo Tsuruta 2009-09-30 11:05 ` Vivek Goyal 2009-09-30 11:05 ` Vivek Goyal 2009-10-01 6:41 ` Ryo Tsuruta 2009-10-01 6:41 ` Ryo Tsuruta [not 
found] ` <20091001.154125.104044685.ryov-jCdQPDEk3idL9jVzuh4AOg@public.gmane.org> 2009-10-01 13:31 ` Vivek Goyal 2009-10-01 13:31 ` Vivek Goyal 2009-10-01 13:31 ` Vivek Goyal 2009-10-02 2:57 ` Vivek Goyal 2009-10-02 2:57 ` Vivek Goyal 2009-10-02 20:27 ` Munehiro Ikeda 2009-10-02 20:27 ` Munehiro Ikeda [not found] ` <4AC6623F.70600-MDRzhb/z0dd8UrSeD/g0lQ@public.gmane.org> 2009-10-05 10:38 ` Ryo Tsuruta 2009-10-05 10:38 ` Ryo Tsuruta 2009-10-05 10:38 ` Ryo Tsuruta [not found] ` <20091005.193808.104033719.ryov-jCdQPDEk3idL9jVzuh4AOg@public.gmane.org> 2009-10-05 12:31 ` Vivek Goyal 2009-10-05 12:31 ` Vivek Goyal 2009-10-05 12:31 ` Vivek Goyal 2009-10-05 14:55 ` Ryo Tsuruta 2009-10-05 14:55 ` Ryo Tsuruta 2009-10-05 17:10 ` Vivek Goyal 2009-10-05 17:10 ` Vivek Goyal 2009-10-05 18:11 ` Nauman Rafique 2009-10-05 18:11 ` Nauman Rafique [not found] ` <e98e18940910051111r110dc776l5105bf931761b842-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org> 2009-10-06 7:17 ` Ryo Tsuruta 2009-10-06 7:17 ` Ryo Tsuruta 2009-10-06 7:17 ` Ryo Tsuruta 2009-10-06 11:22 ` Vivek Goyal 2009-10-06 11:22 ` Vivek Goyal 2009-10-07 14:38 ` Ryo Tsuruta 2009-10-07 14:38 ` Ryo Tsuruta 2009-10-07 15:09 ` Vivek Goyal 2009-10-07 15:09 ` Vivek Goyal 2009-10-08 2:18 ` Ryo Tsuruta 2009-10-08 2:18 ` Ryo Tsuruta [not found] ` <20091007150929.GB3674-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> 2009-10-08 2:18 ` Ryo Tsuruta 2009-10-07 16:41 ` Rik van Riel 2009-10-07 16:41 ` Rik van Riel 2009-10-07 20:23 ` Andy [not found] ` <4ACCC4B7.4050805-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> 2009-10-08 10:22 ` Ryo Tsuruta 2009-10-08 10:22 ` Ryo Tsuruta 2009-10-08 10:22 ` Ryo Tsuruta [not found] ` <20091007.233805.183040347.ryov-jCdQPDEk3idL9jVzuh4AOg@public.gmane.org> 2009-10-07 15:09 ` Vivek Goyal 2009-10-07 16:41 ` Rik van Riel [not found] ` <20091006112201.GA27866-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> 2009-10-07 14:38 ` Ryo Tsuruta [not found] ` <20091006.161744.189719641.ryov-jCdQPDEk3idL9jVzuh4AOg@public.gmane.org> 2009-10-06 
11:22 ` Vivek Goyal [not found] ` <20091005171023.GG22143-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> 2009-10-05 18:11 ` Nauman Rafique [not found] ` <20091005.235535.193690928.ryov-jCdQPDEk3idL9jVzuh4AOg@public.gmane.org> 2009-10-05 17:10 ` Vivek Goyal [not found] ` <20091005123148.GB22143-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> 2009-10-05 14:55 ` Ryo Tsuruta [not found] ` <20091002025731.GA2738-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> 2009-10-02 20:27 ` Munehiro Ikeda [not found] ` <20091001133109.GA4058-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> 2009-10-02 2:57 ` Vivek Goyal [not found] ` <20090930110500.GA26631-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> 2009-10-01 6:41 ` Ryo Tsuruta [not found] ` <20090930.174319.183036386.ryov-jCdQPDEk3idL9jVzuh4AOg@public.gmane.org> 2009-09-30 11:05 ` Vivek Goyal [not found] ` <20090929.185653.183056711.ryov-jCdQPDEk3idL9jVzuh4AOg@public.gmane.org> 2009-09-29 10:49 ` Takuya Yoshikawa 2009-09-29 14:10 ` Vivek Goyal 2009-09-30 3:11 ` Vivek Goyal 2009-09-30 3:11 ` Vivek Goyal 2009-09-30 3:11 ` Vivek Goyal [not found] ` <e98e18940909281737q142c788dpd20b8bdc05dd0eff-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org> 2009-09-29 3:22 ` Vivek Goyal -- strict thread matches above, loose matches on Subject: below -- 2009-10-02 10:55 Corrado Zoccolo 2009-10-02 10:55 Corrado Zoccolo 2009-10-02 11:04 ` Jens Axboe [not found] ` <200910021255.27689.czoccolo-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org> 2009-10-02 11:04 ` Jens Axboe 2009-10-02 12:49 ` Vivek Goyal 2009-10-02 12:49 ` Vivek Goyal 2009-10-02 12:49 ` Vivek Goyal [not found] ` <20091002124921.GA4494-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> 2009-10-02 15:27 ` Corrado Zoccolo 2009-10-02 15:27 ` Corrado Zoccolo 2009-10-02 15:31 ` Vivek Goyal 2009-10-02 15:31 ` Vivek Goyal [not found] ` <4e5e476b0910020827s23e827b1n847c64e355999d4a-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org> 2009-10-02 15:31 ` Vivek Goyal 2009-10-02 15:32 ` Mike Galbraith 2009-10-02 15:32 ` Mike Galbraith 2009-10-02 15:32 ` Mike Galbraith 
2009-10-02 15:40 ` Vivek Goyal 2009-10-02 15:40 ` Vivek Goyal [not found] ` <20091002154020.GC4494-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> 2009-10-02 16:03 ` Mike Galbraith 2009-10-02 16:50 ` Valdis.Kletnieks-PjAqaU27lzQ 2009-10-02 16:03 ` Mike Galbraith 2009-10-02 16:50 ` Valdis.Kletnieks 2009-10-02 16:50 ` Valdis.Kletnieks [not found] ` <12774.1254502217-+bZmOdGhbsPr6rcHtW+onFJE71vCis6O@public.gmane.org> 2009-10-02 19:58 ` Vivek Goyal 2009-10-02 19:58 ` Vivek Goyal 2009-10-02 19:58 ` Vivek Goyal 2009-10-02 22:14 ` Corrado Zoccolo 2009-10-02 22:14 ` Corrado Zoccolo [not found] ` <4e5e476b0910021514i1b461229t667bed94fd67f140-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org> 2009-10-02 22:27 ` Vivek Goyal 2009-10-02 22:27 ` Vivek Goyal 2009-10-02 22:27 ` Vivek Goyal 2009-10-03 12:43 ` Corrado Zoccolo [not found] ` <20091002222756.GG4494-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> 2009-10-03 12:43 ` Corrado Zoccolo [not found] ` <20091002195815.GE4494-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> 2009-10-02 22:14 ` Corrado Zoccolo [not found] ` <1254497520.10392.11.camel-YqMYhexLQo1vAv1Ojkdn7Q@public.gmane.org> 2009-10-02 15:40 ` Vivek Goyal 2009-09-24 19:25 Vivek Goyal