From: Munehiro Ikeda <m-ikeda-MDRzhb/z0dd8UrSeD/g0lQ@public.gmane.org>
To: Gui Jianfeng <guijianfeng-BthXqXjhjHXQFUHtdCDX3A@public.gmane.org>
Cc: dhaval-23VcF4HTsmIX0ybBhKVfKdBPR1lH4CV8@public.gmane.org,
	snitzer-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org,
	dm-devel-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org,
	jens.axboe-QHcLZuEGTsvQT0dZR+AlfA@public.gmane.org,
	agk-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org,
	balbir-23VcF4HTsmIX0ybBhKVfKdBPR1lH4CV8@public.gmane.org,
	paolo.valente-rcYM44yAMweonA0d6jMUrA@public.gmane.org,
	fernando-gVGce1chcLdL9jVzuh4AOg@public.gmane.org,
	jmoyer-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org,
	righi.andrea-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org,
	fchecconi-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org,
	akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b@public.gmane.org,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org,
	containers-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA@public.gmane.org
Subject: Re: [PATCH] io-controller: implement per group request allocation limitation
Date: Mon, 03 Aug 2009 22:00:42 -0400	[thread overview]
Message-ID: <4A77964A.7040602__34980.8533479974$1249351517$gmane$org@ds.jp.nec.com> (raw)
In-Reply-To: <4A5C377F.4040105-BthXqXjhjHXQFUHtdCDX3A@public.gmane.org>


Gui Jianfeng wrote, on 07/14/2009 03:45 AM:
> Munehiro Ikeda wrote:
>> Vivek Goyal wrote, on 07/13/2009 12:03 PM:
>>> On Fri, Jul 10, 2009 at 09:56:21AM +0800, Gui Jianfeng wrote:
>>>> Hi Vivek,
>>>>
>>>> This patch exports a cgroup-based per-group request limits interface
>>>> and removes the global one.  Now we can use this interface to set
>>>> different request allocation limits for different groups.
>>>>
>>> Thanks Gui.  A few points come to mind.
>>>
>>> - You seem to be making this a per cgroup limit on all devices.  I guess
>>>     that different devices in the system can have different settings of
>>>     q->nr_requests and hence will probably want different per group limits.
>>>     So we might have to make it a per cgroup per device limit.
>> From the viewpoint of implementation, there is a difficulty in my mind
>> with implementing a per cgroup per device limit, arising from the fact
>> that an io_group is allocated when the associated device is first used.
>> I guess Gui chose the per cgroup limit on all devices approach because
>> of this, right?
>
>    Yes, I chose this solution from the simplicity point of view; the code
>    would get complicated with a per cgroup per device limit.  But per
>    cgroup per device limits do seem more reasonable.
>
>>
>>> - There does not seem to be any check for making sure that children
>>>     cgroups don't have more request descriptors allocated than the
>>>     parent group.
>>>
>>> - I am re-thinking what the advantage is of also configuring request
>>>     descriptors through cgroups.  It does bring in additional
>>>     complexity, and that should justify the advantages.  Can you
>>>     think of some?
>>>
>>>     Until and unless we can come up with some significant advantages, I
>>>     will prefer to continue to use the per group limit through the
>>>     q->nr_group_requests interface instead of cgroup.  Once things
>>>     stabilize, we can revisit it and see how this interface can be
>>>     improved.
>> I agree.  I will try to clarify through some tests whether per group
>> per device limitation is needed (or whether its advantage outweighs
>> the complexity).
>
>    Great, hope to hear from you soon.

Sorry for the late reply.  I tried it; the results and my opinion are
written below...


Scenario
====================

The scenario in which I think a per-cgroup nr_requests limitation is
beneficial is the following:

- Process P1 in cgroup "g1" is running and submitting many requests
    to a device.  The number of requests in the device queue is
    almost nr_requests for the device.

- After a while, process P2 in cgroup "g2" starts running.  P2 also
    tries to submit as many requests as P1.

- Assume that the user wants P2 to grab bandwidth as soon as possible
    and keep it at a certain level.

In this scenario, I predicted the bandwidth behavior of P2 under
different tunings of the global nr_group_requests, as below.

- If nr_group_requests is almost the same as nr_requests, P1 can
    allocate requests up to nr_requests and there is no room for P2 at
    the beginning of its run.  As a result, P2 has to wait for a while
    until P1's requests are completed, and the rise of its bandwidth
    is delayed.

- If nr_group_requests is made smaller to restrict requests from P1
    and make room for P2, the bandwidth of P2 may be lower than in the
    case where P1 can allocate more requests.

If this prediction is correct and a per-cgroup nr_requests limitation
can improve the situation, then per-cgroup nr_requests should be
considered beneficial.
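
For concreteness, the two tunings above correspond roughly to the knob
settings sketched below.  This is only a sketch: the sysfs path of the
global nr_group_requests knob is my assumption about how Vivek's
per-queue interface is exposed, and sdb is the device used in the
tests below.

----- sketch : from here -----

# Standard block-layer queue depth for the device.
echo 500 > /sys/block/sdb/queue/nr_requests

# Tuning 1: per-group limit almost the same as nr_requests; P1 can
# fill the queue and leave no room for P2 when P2 starts.
echo 500 > /sys/block/sdb/queue/nr_group_requests

# Tuning 2: smaller per-group limit; room is kept for P2, but P1's
# (and P2's) request allocation is capped.
echo 100 > /sys/block/sdb/queue/nr_group_requests

----- sketch : till here -----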


Verification Conditions
========================

- Kernel:
    2.6.31-rc1
    + Patches from Vivek on Jul 2, 2009
      (IO scheduler based IO controller V6)
      https://lists.linux-foundation.org/pipermail/containers/2009-July/018948.html
    + Patches from Gui Jianfeng on Jul 7, 2009 (Bug fixes)
      https://lists.linux-foundation.org/pipermail/containers/2009-July/019086.html
      https://lists.linux-foundation.org/pipermail/containers/2009-July/019087.html
    + Patch from Gui Jianfeng on Jul 9, 2009 (per-cgroup requests limit)
      https://lists.linux-foundation.org/pipermail/containers/2009-July/019123.html
    + Patch from me on Jul 16, 2009 (Bug fix)
      https://lists.linux-foundation.org/pipermail/containers/2009-July/019286.html
    + 2 local bug fix patches
        (Not posted yet; I'm posting them in following mails)

- All results are measured under nr_requests=500.

- fio was used to generate I/O.  The job file is shown below.  libaio
    and direct I/O were used, and iodepth was tuned so that rl->count[1]
    stays at approximately 500.

----- fio job file : from here -----

[global]
size=128m
directory=/mnt/b1

runtime=30
time_based

write_bw_log
bwavgtime=200

rw=randread
direct=1
ioengine=libaio
iodepth=500

[g1]
exec_prerun=./pre.sh /mnt/cgroups/g1
exec_postrun=./log.sh /mnt/cgroups/g1 sdb "_post"

[g2]
startdelay=10
exec_prerun=./pre.sh /mnt/cgroups/g2
exec_postrun=./log.sh /mnt/cgroups/g2 sdb "_post"

----- fio job file : till here -----

Note:
pre.sh and log.sh, used in exec_{pre|post}run, assign the processes to
the expected cgroups and record the conditions.  I omit their details
because they are not a fundamental part of this verification; a rough
sketch of what pre.sh might look like follows.
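
This is a hypothetical reconstruction, not the script actually used;
it assumes fio runs exec_prerun from the job process, so the shell's
parent ($PPID) is the fio job to be classified.

----- pre.sh sketch : from here -----

#!/bin/sh
# Move the calling fio job into the cgroup given as $1
# (e.g. /mnt/cgroups/g1), then record the group's settings.
CGDIR=$1
echo $PPID > $CGDIR/tasks            # classify the fio job into the cgroup
cat $CGDIR/io.nr_group_requests      # record the conditions
cat $CGDIR/io.weight
cat $CGDIR/io.ioprio_class

----- pre.sh sketch : till here -----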


Results
====================

The bandwidth of g2 (= P2) was measured under several conditions.  The
conditions and bandwidth logs are shown below.
Only the beginning of each bandwidth log is shown (from the start of P2
to approximately 3000[ms] after), because the full logs are quite long.
The average bandwidth from the beginning of the log to ~10[sec] is also
calculated.
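
(The averages can be reproduced along the lines of the one-liner below;
the log file name g2_bw.log and the "time_ms, bw_KiB/s, ..." column
layout of fio's write_bw_log output are assumptions:)

awk -F, '$1 <= 10000 { s += $2; n++ } END { if (n) print s/n " KiB/s" }' g2_bw.log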

Note1:
fio seems to log bandwidth only when actual data transfer occurs
(correct me if that's not true).  This means that there is no line
with BW=0; if there is no data transfer, no time-stamped entry is
recorded.

Note2:
A graph of the bandwidth logs is attached.
    Result(1): orange
    Result(2): green
    Result(3): brown
    Result(4): black


---------- Result (1) ----------

* Both of g1 and g2 have nr_group_requests=500

< Conditions >
nr_requests = 500
g1/
    io.nr_group_requests = 500
    io.weight = 500
    io.ioprio_class = 2
g2/
    io.nr_group_requests = 500
    io.weight = 500
    io.ioprio_class = 2

< Bandwidth log of g2 >
t [ms]	bw[KiB/s]
969	4
1170	1126
1374	1084
1576	876
1776	901
1980	1069
2191	1087
2400	1117
2612	1087
2822	1136
...

< Average bandwidth >
1063 [KiB/s]
(969~9979[ms])


---------- Result (2) ----------

* Both of g1 and g2 have nr_group_requests=100

< Conditions >
nr_requests = 500
g1/
    io.nr_group_requests = 100
    io.weight = 500
    io.ioprio_class = 2
g2/
    io.nr_group_requests = 100
    io.weight = 500
    io.ioprio_class = 2

< Bandwidth log of g2 >
t [ms]	bw[KiB/s]
1498	2
1733	892
2096	722
2311	1224
2534	1180
2753	1197
2988	1137
...

< Average bandwidth >
998 [KiB/s]
(1498~9898[ms])


---------- Result (3) ----------

* Different nr_group_requests on g1 and g2

< Conditions >
nr_requests = 500
g1/
    io.nr_group_requests = 100
    io.weight = 500
    io.ioprio_class = 2
g2/
    io.nr_group_requests = 500
    io.weight = 500
    io.ioprio_class = 2

< Bandwidth log of g2 >
t [ms]	bw[KiB/s]
244	839
451	1133
659	964
877	1038
1088	1125
1294	979
1501	1068
1708	934
1916	1048
2117	1126
2328	1111
2533	1118
2758	1206
2969	990
...

< Average bandwidth >
1048 [KiB/s]
(244~9906[ms])


---------- Result (4) ----------

* g2/io.ioprio_class set to RT

< Conditions >
nr_requests = 500
g1/
    io.nr_group_requests = 500
    io.weight = 500
    io.ioprio_class = 2
g2/
    io.nr_group_requests = 500
    io.weight = 500
    io.ioprio_class = 1

< Bandwidth log of g2 >
t [ms]	bw[KiB/s]
476	8
677	2211
878	2221
1080	2486
1281	2241
1481	2109
1681	2334
1882	2129
2082	2211
2283	1915
2489	1778
2690	1915
2891	1997
...

< Average bandwidth >
2132[KiB/s]
(476~9954[ms])


Consideration and Conclusion
=============================

From result (1), it is observed that it takes 1000~1200[ms] for P2's
bandwidth to rise.  In result (2), where both g1 and g2 have
nr_group_requests=100, the delay gets longer, 1800~2000[ms].  In
addition, the average bandwidth becomes ~5% lower than in result (1),
presumably because P2 could not allocate enough requests.
Result (3) then shows that the bandwidth of P2 can rise quickly
(~300[ms]) if nr_group_requests can be set per cgroup.  Result (4)
shows that the delay can be shortened by setting g2 to the RT class;
however, the delay is still longer than in result (3).

I think this confirms that per-cgroup nr_requests limitation is useful
in certain situations.  Beyond that, the point to discuss is whether
the benefit shown above justifies the complexity of the implementation.
IMHO, the implementation of per-cgroup request limitation is not too
complicated to accept.  On the other hand, I suspect it suddenly gets
complicated if we try to implement anything further, especially
hierarchical support.  It is also true that I feel an implementation
without per-device limitation and hierarchical support is like
unfinished work.

So, my opinion so far is that per-cgroup nr_requests limitation should
be merged only if hierarchical support is concluded to be unnecessary
for it.  If merging it would invite hierarchical support, it shouldn't
be merged.
What is everyone's opinion?

My reasoning or verification method might be wrong; please correct me
if so.  And if you have any other idea for a scenario that would
verify the effect of per-cgroup nr_requests limitation, please let me
know.  I'll try it.



-- 
IKEDA, Munehiro
    NEC Corporation of America
      m-ikeda-MDRzhb/z0dd8UrSeD/g0lQ@public.gmane.org



[-- Attachment #2: g2_bw.png --]
[-- Type: image/png, Size: 62770 bytes --]

