From: Vivek Goyal <vgoyal@redhat.com>
To: Tejun Heo <tj@kernel.org>
Cc: Jens Axboe <axboe@kernel.dk>,
	ctalbott@google.com, Jan Kara <jack@suse.cz>, rni@google.com,
	andrea@betterlinux.com, containers@lists.linux-foundation.org,
	linux-kernel@vger.kernel.org, sjayaraman@suse.com,
	lsf@lists.linux-foundation.org, linux-mm@kvack.org,
	jmoyer@redhat.com, linux-fsdevel@vger.kernel.org,
	cgroups@vger.kernel.org,
	Fengguang Wu <fengguang.wu@intel.com>
Subject: Re: [RFC] writeback and cgroup
Date: Wed, 4 Apr 2012 16:18:16 -0400
Message-ID: <20120404201816.GL12676@redhat.com>
In-Reply-To: <20120404193355.GD29686@dhcp-172-17-108-109.mtv.corp.google.com>

On Wed, Apr 04, 2012 at 12:33:55PM -0700, Tejun Heo wrote:
> Hey, Fengguang.
> 
> On Wed, Apr 04, 2012 at 10:51:24AM -0700, Fengguang Wu wrote:
> > Yeah it should be trivial to apply the balance_dirty_pages()
> > throttling algorithm to the read/direct IOs. However up to now I don't
> > see much added value to *duplicate* the current block IO controller
> > functionalities, assuming the current users and developers are happy
> > with it.
> 
> Heh, trust me.  It's half broken and people ain't happy.  I get that
> your algorithm can be updated to consider all IOs and I believe that
> but what I don't get is how would such information get to writeback
> and in turn how writeback would enforce the result on reads and direct
> IOs.  Through what path?  Will all reads and direct IOs travel through
> balance_dirty_pages() even direct IOs on raw block devices?  Or would
> the writeback algorithm take the configuration from cfq, apply the
> algorithm and give back the limits to enforce to cfq?  If the latter,
> isn't that at least somewhat messed up?

I think he wanted to get the configuration with the help of the blkcg
interface and just implement those policies up there, without any
further interaction with CFQ or lower layers.

[..]
> > The sweet split point would be for balance_dirty_pages() to do cgroup
> > aware buffered write throttling and leave other IOs to the current
> > blkcg. For this to work well as a total solution for end users, I hope
> > we can cooperate and figure out ways for the two throttling entities
> > to work well with each other.
> 
> There's where I'm confused.  How is the said split supposed to work?
> They aren't independent.  I mean, who gets to decide what and where
> are those decisions enforced?

As you said, the split is just temporary gap filling in the absence of a
good solution for throttling buffered writes (which are often a source
of problems for sync IO latencies). With this solution one could
independently control the buffered write rate of a cgroup. Lower layers
will not throttle that traffic again, as it would show up in the root
cgroup. Hence blkcg and writeback need not communicate much, except for
configuration knobs and possibly some stats.
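
To make that concrete, here is a minimal user-space sketch of the idea
(not kernel code; struct wb_cgroup and cgroup_dirty_throttle() are
made-up names for illustration): each cgroup gets an independent
buffered-write rate, and the dirtier is paused whenever its group runs
ahead of that rate, so whatever IO eventually reaches the block layer
is already paced and lower layers have nothing left to throttle.

/*
 * Illustrative simulation only -- the struct, the function names and
 * the simple single-window accounting are assumptions, not existing
 * kernel interfaces.
 */
#include <stdio.h>
#include <time.h>
#include <unistd.h>

struct wb_cgroup {
	const char *name;
	double write_bps;	/* configured buffered-write limit (bytes/s) */
	double dirtied;		/* bytes dirtied so far */
	double window_start;	/* when accounting started */
};

static double now_sec(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return ts.tv_sec + ts.tv_nsec / 1e9;
}

/*
 * Called when a task in @cg dirties @bytes of page cache.  If the group
 * is running ahead of its configured rate, sleep long enough to bring it
 * back on target -- the analogue of a balance_dirty_pages() pause.
 */
static void cgroup_dirty_throttle(struct wb_cgroup *cg, double bytes)
{
	double elapsed, ahead;

	cg->dirtied += bytes;
	elapsed = now_sec() - cg->window_start;
	ahead = cg->dirtied / cg->write_bps - elapsed;
	if (ahead > 0)
		usleep((useconds_t)(ahead * 1e6));
}

int main(void)
{
	struct wb_cgroup cg = { "demo", 16 << 20, 0, now_sec() };
	double start = now_sec();
	int i;

	/* Dirty 8MB in 1MB chunks; at 16MB/s this takes about 0.5s. */
	for (i = 0; i < 8; i++)
		cgroup_dirty_throttle(&cg, 1 << 20);

	printf("dirtied 8MB in %.2fs\n", now_sec() - start);
	return 0;
}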

[..]
> > - running concurrent flusher threads for cgroups, which adds back the
> >   disk seeks and lock contentions. And still has problems with sync
> >   and shared inodes.
> 

Or, export the notion of per-group, per-bdi congestion, and have the
flusher not try to submit IO from an inode if the device is congested
for that group. That way the flusher will not get blocked, we don't
have to create one flusher thread per cgroup, and we can be happy with
one flusher per bdi.

And with the compromise of one inode belonging to one cgroup, we will
still dispatch a chunk of IO from one inode and then move to the next.
Depending on the size of that chunk we can reduce seeking a bit. The
size of the quantum will decide the tradeoff between seeks and fairness
of writes across inodes.
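
As a rough illustration of that loop (all the types and names below --
struct demo_group, flush_one_pass(), QUANTUM_PAGES -- are invented for
this sketch and are not kernel interfaces), a single per-bdi flusher
could walk the dirty inodes, skip any inode whose group is currently
congested on this bdi, and write at most one quantum per inode per
pass.  A larger QUANTUM_PAGES means fewer seeks, a smaller one means
better fairness across inodes.

#include <stdbool.h>
#include <stdio.h>

#define NR_INODES	4
#define QUANTUM_PAGES	256	/* pages dispatched per inode per pass */

struct demo_group {
	const char *name;
	bool congested;		/* per-group, per-bdi congestion state */
};

struct demo_inode {
	int ino;
	struct demo_group *grp;	/* compromise: one inode, one cgroup */
	int dirty_pages;
};

/* One pass of the single per-bdi flusher over the dirty inode list. */
static void flush_one_pass(struct demo_inode *inodes, int n)
{
	int i;

	for (i = 0; i < n; i++) {
		struct demo_inode *inode = &inodes[i];
		int chunk;

		if (inode->dirty_pages == 0)
			continue;
		if (inode->grp->congested)
			continue;	/* don't block; revisit later */

		chunk = inode->dirty_pages < QUANTUM_PAGES ?
			inode->dirty_pages : QUANTUM_PAGES;
		inode->dirty_pages -= chunk;
		printf("ino %d (%s): wrote %d pages, %d left\n",
		       inode->ino, inode->grp->name, chunk,
		       inode->dirty_pages);
	}
}

int main(void)
{
	struct demo_group fast = { "fast", false };
	struct demo_group slow = { "slow", true };	/* throttled group */
	struct demo_inode inodes[NR_INODES] = {
		{ 1, &fast, 1000 }, { 2, &slow, 1000 },
		{ 3, &fast,  300 }, { 4, &slow,  300 },
	};
	int pass;

	for (pass = 0; pass < 3; pass++) {
		printf("-- pass %d --\n", pass);
		flush_one_pass(inodes, NR_INODES);
	}
	return 0;
}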

[..]
> > - the mess of metadata handling
> 
> Does throttling from writeback actually solve this problem?  What
> about fsync()?  Does that already go through balance_dirty_pages()?

By throttling the process at the time it dirties memory, you have
already admitted only as much IO from the process as the limits allow.
fsync() then only has to send those pages to the disk and does not have
to be throttled again.

So throttling the process while admitting the IO avoids these issues
with filesystem metadata.

But at the same time it does not feel right to throttle reads and AIO
synchronously. The current kernel behavior of queuing up the bio and
throttling it asynchronously is desirable. Only buffered writes are a
special case, as we already throttle them actively based on the amount
of dirty memory.

[..]
> 
> > - unnecessarily coupled with memcg, in order to take advantage of the
> >   per-memcg dirty limits for balance_dirty_pages() to actually convert
> >   the "pushed back" dirty pages pressure into lowered dirty rate. Why
> >   the hell the users *have to* setup memcg (suffering from all the
> >   inconvenience and overheads) in order to do IO throttling?  Please,
> >   this is really ugly! And the "back pressure" may constantly push the
> >   memcg dirty pages to the limits. I'm not going to support *misuse*
> >   of per-memcg dirty limits like this!
> 
> Writeback sits between blkcg and memcg and it indeed can be hairy to
> consider both sides especially given the current sorry complex state
> of cgroup and I can see why it would seem tempting to add a separate
> controller or at least knobs to support that.  That said, I *think*
> given that memcg controls all other memory parameters it probably
> would make most sense giving that parameter to memcg too.  I don't
> think this is really relevant to this discussion tho.  Who owns
> dirty_limits is a separate issue.

I agree that the dirty_limit control resembles memcg more closely than
blkcg, as it is all about writing to memory and memory is the resource
controlled by memcg.

I think Fengguang wanted to keep those knobs in blkcg because he thinks
the writeback logic can actively throttle readers and direct IO too.
But that sounds a little messy to me as well.

Hey, how about reconsidering my other proposal, for which I had posted
patches? That is, keep the throttling at the device level: reads and
direct IO get throttled asynchronously, but buffered writes get
throttled synchronously.

Advantages of this scheme:

- There are no separate knobs.

- All the IO (reads, direct IO and buffered writes) is controlled using
  the same set of knobs and goes into the queue of the same cgroup.

- The writeback logic has no knowledge of throttling. It just invokes a
  hook into the throttling logic of the device queue.

I guess this is a hybrid of active writeback throttling and the back
pressure mechanism.
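
Roughly, that hook could look like the sketch below (plain user-space C
with made-up names -- struct tg_budget, tg_may_dispatch() and friends
are assumptions, not the actual blk-throttle code): reads and direct IO
consume a per-cgroup, per-device budget asynchronously, with the bio
queued when credit runs out, while the dirtying path calls a
synchronous helper that charges the same budget and returns a sleep
time.

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical per-cgroup, per-device budget. */
struct tg_budget {
	long bps;	/* configured limit, bytes per second */
	long credit;	/* bytes still allowed in the current 1s slice */
};

/* Refill the budget at the start of each slice. */
static void tg_replenish(struct tg_budget *tg)
{
	tg->credit = tg->bps;
}

/*
 * Async path (reads, direct IO): if there is credit, the bio may be
 * dispatched immediately; otherwise the caller queues it and a timer
 * dispatches it once the budget is replenished.
 */
static bool tg_may_dispatch(struct tg_budget *tg, long bytes)
{
	if (tg->credit >= bytes) {
		tg->credit -= bytes;
		return true;
	}
	return false;
}

/*
 * Sync hook for the writeback/dirtying path: instead of queuing a bio,
 * charge the same budget and tell the dirtier how long to sleep, so
 * buffered writes are paced without ever blocking the flusher.
 */
static double tg_buffered_write_delay(struct tg_budget *tg, long bytes)
{
	double delay;

	if (tg_may_dispatch(tg, bytes))
		return 0.0;
	delay = (double)(bytes - tg->credit) / tg->bps;
	tg->credit = 0;		/* rest of this slice is used up */
	return delay;
}

int main(void)
{
	struct tg_budget tg = { 8 << 20, 0 };	/* 8MB/s limit */

	tg_replenish(&tg);
	printf("direct IO of 4MB dispatched now? %d\n",
	       (int)tg_may_dispatch(&tg, 4 << 20));
	printf("buffered write of 8MB -> dirtier sleeps %.2fs\n",
	       tg_buffered_write_delay(&tg, 8 << 20));
	return 0;
}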

But it still does not solve the NFS issue, and for direct IO
filesystems can still get serialized, so the metadata issue still needs
to be resolved. So one can argue: why not go for the full "back
pressure" method, despite it being more complex?

Here is the link, just to refresh the memory. Something to keep in mind
while assessing alternatives.

https://lkml.org/lkml/2011/6/28/243

Thanks
Vivek
