From: Andrea Righi <righi.andrea@gmail.com>
To: Theodore Tso <tytso@mit.edu>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>,
	akpm@linux-foundation.org, randy.dunlap@oracle.com,
	Carl Henrik Lunde <chlunde@ping.uio.no>,
	Jens Axboe <jens.axboe@oracle.com>,
	eric.rannaud@gmail.com, Balbir Singh <balbir@linux.vnet.ibm.com>,
	fernando@oss.ntt.co.jp, dradford@bluehost.com,
	Gui@smtp1.linux-foundation.org, agk@sourceware.org,
	subrata@linux.vnet.ibm.com, Paul Menage <menage@google.com>,
	containers@lists.linux-foundation.org,
	linux-kernel@vger.kernel.org, dave@linux.vnet.ibm.com,
	matt@bluehost.com, roberto@unbit.it, ngupta@google.com
Subject: Re: [PATCH 9/9] ext3: do not throttle metadata and journal IO
Date: Thu, 23 Apr 2009 23:13:04 +0200
Message-ID: <20090423211300.GA20176@linux>
In-Reply-To: <20090423121745.GC2723@mit.edu>

On Thu, Apr 23, 2009 at 08:17:45AM -0400, Theodore Tso wrote:
> On Thu, Apr 23, 2009 at 11:44:24AM +0200, Andrea Righi wrote:
> > This is true in part. Actually io-throttle v12 has been largely tested,
> > also in production environments (Matt and David in cc can confirm
> > this) with quite interesting results.
> > 
> > I tested the previous versions usually with many parallel iozone, dd,
> > using many different configurations.
> > 
> > In v12 writeback IO is not actually limited, what io-throttle did was to
> > account and limit reads and direct IO in submit_bio() and limit and
> > account page cache writes in balance_dirty_pages_ratelimited_nr().
> 
> Did the testing include what happened if the system was also
> simultaneously under memory pressure?  What you might find happening
> then is that the cgroups which have lots of dirty pages, which are not
> getting written out, have their memory usage "protected", while
> cgroups that have lots of clean pages have more of their pages
> (unfairly) evicted from memory.  The worst case, of course, would be
> if the memory pressure is coming from an uncapped cgroup.

This is an interesting case that should of course be considered. The
tests I did were mainly focused on distinct environments, where each
cgroup writes its own files and dirties its own memory. I'll add this
case to the next round of tests I'll do with io-throttle.
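
Just to fix the idea, the pressure generator for that test can be as
trivial as the following userspace sketch (purely illustrative, none
of this is io-throttle code): run it from an uncapped cgroup while a
capped cgroup keeps dirtying page cache, then check whose pages get
evicted.

    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    #define CHUNK (64 << 20)            /* grab 64 MB at a time */

    int main(void)
    {
            /* Allocate and touch memory until allocation fails,
             * keeping the system under sustained memory pressure. */
            for (;;) {
                    char *p = malloc(CHUNK);
                    if (!p)
                            break;
                    memset(p, 0xaa, CHUNK); /* fault in every page */
                    sleep(1);
            }
            return 0;
    }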

But IMHO it's a general problem that doesn't depend only on the
presence of an IO controller. The same issue can happen if one cgroup
reads a file from a slow device while another cgroup dirties pages
fast enough to evict the first cgroup's memory.

Maybe this kind of cgroup unfairness should be addressed by the memory
controller; in this particular case the IO controller is just like any
other slow device.

> 
> > In a previous discussion (http://lkml.org/lkml/2008/11/4/565) we decided
> > to split the problems: the decision was that IO controller should
> > consider only IO requests and the memory controller should take care of
> > the OOM / dirty pages problems. Distinct memcg dirty_ratio seemed to be
> > a good start. Anyway, I think we're not so far from having an acceptable
> > solution, also looking at the recent thoughts and discussions in this
> > thread. For the implementation part, as pointed by Kamezawa per bdi /
> > task dirty ratio is a very similar problem. Probably we can simply
> > replicate the same concepts per cgroup.
> 
> I looked at that discussion, and it doesn't seem to be about splitting
> the problem between the IO controller and the memory controller at
> all.  Instead, Andrew is talking about how throttling dirty memory page
> writeback on a per-cpuset basis (which is what Christoph Lameter
> wanted for large SGI systems) made sense as compared to controlling
> the rate at which pages got dirty, which is considered much higher
> priority:
> 
>     Generally, I worry that this is a specific fix to a specific problem
>     encountered on specific machines with specific setups and specific
>     workloads, and that it's just all too low-level and myopic.
> 
>     And now we're back in the usual position where there's existing code and
>     everyone says it's terribly wonderful and everyone is reluctant to step
>     back and look at the big picture.  Am I wrong?
> 
>     Plus: we need per-memcg dirty-memory throttling, and this is more
>     important than per-cpuset, I suspect.  How will the (already rather
>     buggy) code look once we've stuffed both of them in there?

You're right. That thread was mainly focused on the dirty-page issue. My
fault, sorry.

I've looked back through my old mail archives for earlier discussions
of the dirty-page and IO-controller issue. I report some of them here
for completeness:

https://lists.linux-foundation.org/pipermail/virtualization/2008-August/011474.html
https://lists.linux-foundation.org/pipermail/virtualization/2008-August/011466.html
https://lists.linux-foundation.org/pipermail/virtualization/2008-August/011482.html
https://lists.linux-foundation.org/pipermail/virtualization/2008-August/011472.html

>    
> So that's basically the same worry I have; which is we're looking at
> things at a too-low-level basis, and not at the big picture.
> 
> There wasn't discussion about the I/O controller on this thread at
> all, at least as far as I could find; nor that splitting the problem
> was the right way to solve the problem.  Maybe somewhere there was a
> call for someone to step back and take a look at the "big picture"
> (what I've been calling the high level design), but I didn't see it in
> the thread.
> 
> It would seem to be much simpler if there was a single tuning knob for
> the I/O controller and for dirty page writeback --- after all, why
> *else* would you be trying to control the rate at which pages get
> dirty?  And if you have a cgroup which sometimes does a lot of writes

Actually, we already control the rate at which dirty pages are
generated: in balance_dirty_pages() the dirtying task is made to wait
in congestion_wait() when the bdi is congested.

We do that when we write to a slow device, for example: slow because
it is intrinsically slow, or because it is limited by some IO
throttling rule.

It is a very similar issue IMHO.
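
To make the analogy concrete, here is a minimal userspace sketch of
that feedback loop (this is not the real balance_dirty_pages() code;
names and numbers are invented): the dirtier is simply put to sleep
whenever it gets ahead of the drain rate, which is conceptually what
congestion_wait() achieves.

    #include <stdio.h>
    #include <time.h>

    #define BUCKET_CAPACITY (4L << 20)  /* tolerate 4 MB of dirty data */
    #define DRAIN_PER_TICK  (1L << 17)  /* drain 128 KB per 100 ms tick */

    static long dirty;                  /* outstanding dirty bytes */

    /* Charge a buffered write, block while over the limit. */
    static void throttled_write(long bytes)
    {
            dirty += bytes;
            while (dirty > BUCKET_CAPACITY) {
                    /* sleep 100 ms, like congestion_wait() */
                    struct timespec ts = { 0, 100000000L };
                    nanosleep(&ts, NULL);
                    dirty -= DRAIN_PER_TICK; /* writeback progresses */
            }
    }

    int main(void)
    {
            int i;

            for (i = 0; i < 16; i++) {
                    throttled_write(1L << 20); /* dirty 1 MB */
                    printf("iteration %d, dirty: %ld\n", i, dirty);
            }
            return 0;
    }

Whether the device is slow by nature or slowed down by a throttling
rule, the dirtier sees exactly the same thing: it is forced to wait.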

> via direct I/O, and sometimes does a lot of writes through the page
> cache, and sometimes does *both*, it would seem to me that if you want
> to be able to smoothly limit the amount of I/O it does, you would want
> to account and charge for direct I/O and page cache I/O under the same
> "bucket".   Is that what the user would want?   
> 
> Suppose you only have 200 MB/sec worth of disk bandwidth, and you
> parcel it out in 50 MB/sec chunks to 4 cgroups.  But you also parcel
> out 50MB/sec of dirty writepages quota to each of the 4 cgroups.  Now
> suppose one of the cgroups, which was normally doing not much of
> anything, suddenly starts doing a database backup which does 50 MB/sec
> of direct I/O reading from the database file, and 50 MB/sec dirtying
> pages in the page cache as it writes the backup file.  Suddenly that
> one cgroup is using half of the system's I/O bandwidth!

Agreed, the bucket should be the same. For this case dirty memory
should probably be limited in terms of "space" rather than bandwidth.

And we should guarantee that a cgroup doesn't unfairly fill the memory
(system-wide or in other cgroups) with dirty pages.
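
Something along these lines is what I have in mind (a userspace
pseudo-model, every name here is made up): one bandwidth bucket
charged by both direct and buffered IO, plus a separate cap on dirty
space.

    #include <stdio.h>

    struct iothrottle_cgroup {
            long bw_budget;   /* bytes still spendable this period */
            long dirty_bytes; /* outstanding dirty page cache */
            long dirty_limit; /* max dirty "space" for this cgroup */
    };

    /* Single charge path for both direct and buffered IO. */
    static int charge_io(struct iothrottle_cgroup *cg, long bytes,
                         int buffered)
    {
            if (cg->bw_budget < bytes)
                    return -1;  /* would exceed BW: throttle */
            if (buffered && cg->dirty_bytes + bytes > cg->dirty_limit)
                    return -1;  /* would exceed dirty space */
            cg->bw_budget -= bytes;
            if (buffered)
                    cg->dirty_bytes += bytes;
            return 0;
    }

    int main(void)
    {
            struct iothrottle_cgroup cg = {
                    .bw_budget   = 50L << 20, /* 50 MB this period */
                    .dirty_bytes = 0,
                    .dirty_limit = 16L << 20, /* 16 MB dirty space */
            };

            /* Your backup example: direct reads + buffered writes. */
            printf("direct 20 MB: %s\n",
                   charge_io(&cg, 20L << 20, 0) ? "throttled" : "ok");
            printf("buffered 40 MB: %s\n",
                   charge_io(&cg, 40L << 20, 1) ? "throttled" : "ok");
            return 0;
    }

With a single bucket the backup in your example cannot consume half of
the system's bandwidth just because it mixes direct and buffered IO:
both kinds of writes drain the same budget.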

> 
> And before you say this is "correct" from a definitional point of
> view, is it "correct" from what a system administrator would want to
> control?  Is it the right __feature__?  If you just say, well, we
> defined the problem that way, and we're doing things the way we
> defined it, that's a case of garbage in, garbage out.  You also have
> to ask the question, "did we define the _problem_ in the right way?"
> What does the user of this feature really want to do?  
> 
> It would seem to me that the system administrator would want a single
> knob, saying "I don't know or care how the processes in a cgroup does
> its I/O; I just want to limit things so that the cgroup can only hog
> 25% of the I/O bandwidth."

Agreed.

> 
> And note this is completely separate from the question of what happens
> if you throttle I/O in the page cache writeback loop, and you end up
> with an imbalance in the clean/dirty ratios of the cgroups.  And
> looking at this thread, life gets even *more* amusing on NUMA machines
> if you do this; what if you end up starving a cpuset as a result of
> this I/O balancing decision, so a particular cpuset doesn't have
> enough memory?  That's when you'll *definitely* start having OOM
> problems.
> 
> So maybe someone has thought about all of these issues --- if so, may
> I gently suggest that someone write all of this down?  The design
> issues here are subtle, at least to my little brain, and relying on
> people remembering that something was discussed on LKML six months ago
> doesn't seem like a good long-term strategy.  Eventually this code
> will need to be maintained, and maybe some of the engineers working on
> it will have moved on to other projects.  So this is something that
> rather definitely deserves to be written up and dropped into
> Documentation/ or in ample code comments discussing how the
> various subsystems interact.

I agree about the documentation. As Balbir also suggested, we should
definitely start writing things down in a common place (a wiki?) to
collect all the concepts and objectives we have defined in the past
and to propose a coherent solution.

Otherwise the risk is that we keep going around in circles, discussing
the same issues with each of us proposing a different solution to a
specific problem.

I can start extending the io-throttle documentation and
collecting/integrating the concepts we've discussed in the past, but
IMHO we first need to define all the possible use cases.

Honestly, I had never considered the interactions between cgroups,
such as the unfair distribution of dirty pages among cgroups that Ted
correctly pointed out.

Thanks,
-Andrea


