From: Jan Kara <jack@suse.cz>
To: Vivek Goyal <vgoyal@redhat.com>
Cc: Jens Axboe <axboe@kernel.dk>, ctalbott@google.com,
	Jan Kara <jack@suse.cz>, rni@google.com, andrea@betterlinux.com,
	containers@lists.linux-foundation.org, linux-kernel@vger.kernel.org,
	sjayaraman@suse.com, lsf@lists.linux-foundation.org,
	linux-mm@kvack.org, jmoyer@redhat.com, cgroups@vger.kernel.org,
	Tejun Heo <tj@kernel.org>, linux-fsdevel@vger.kernel.org,
	Fengguang Wu <fengguang.wu@intel.com>
Subject: Re: [RFC] writeback and cgroup
Date: Wed, 11 Apr 2012 21:44:25 +0200	[thread overview]
Message-ID: <20120411194425.GG16008@quack.suse.cz> (raw)
In-Reply-To: <20120411172311.GF16692@redhat.com>

On Wed 11-04-12 13:23:11, Vivek Goyal wrote:
> On Wed, Apr 11, 2012 at 07:05:42PM +0200, Jan Kara wrote:
> > On Wed 11-04-12 11:45:31, Vivek Goyal wrote:
> > > On Wed, Apr 11, 2012 at 11:40:05AM -0400, Vivek Goyal wrote:
> > > > On Wed, Apr 11, 2012 at 12:24:25AM +0200, Jan Kara wrote:
> > > > 
> > > > [..]
> > > > > > I have implemented and posted patches for a per-bdi, per-cgroup
> > > > > > congestion flag. The only problem I see with that is that a group
> > > > > > might be congested for a long time because of lots of other IO
> > > > > > happening (say, direct IO), and if you keep on backing off and
> > > > > > never submit the metadata IO (transaction), you get starved. And if
> > > > > > you go ahead and submit IO in a congested group, we are back to the
> > > > > > serialization issue.
> > > > >   Clearly, we mustn't throttle metadata IO once it gets to the block
> > > > > layer. That's why we discuss throttling processes at transaction
> > > > > start after all. But I agree starvation is an issue - I originally
> > > > > thought blk-throttle throttled synchronously, which wouldn't have
> > > > > starvation issues.
> > > 
> > > Current bio throttling is asynchronous. A process can submit the bio
> > > and go back and wait for the bio to finish. That bio will be queued at
> > > the device in a per-cgroup queue and dispatched to the device according
> > > to the configured IO rate for the cgroup.
> > > 
> > > The additional feature of buffered-write throttling (which never went
> > > upstream) was synchronous in nature. That is, we were actively putting
> > > the writer to sleep on a per-cgroup wait queue in the request queue and
> > > waking it up when it could do further IO based on cgroup limits.
> >   Hmm, but then there would be similar starvation issues as with my simple
> > scheme because async IO could always use the whole available bandwidth.
> 
> It depends on how the throttling logic decides to divide bandwidth between
> sync and async. I had chosen a round-robin policy of dispatching some
> bios and then allowing some async IO, etc. So async IO was not consuming
> the whole available bandwidth. We could easily tilt it in favor of sync IO
> with a tunable knob.
  Ah, OK.
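
As a concrete toy model of that round-robin policy (all names below are
made up here, and this only sketches the dispatch logic, not the real
blk-throttle code): each round issues up to sync_share sync bios for every
async bio, so the ratio is exactly the tunable knob mentioned above.

#include <stdio.h>

/* Toy stand-ins for a group's sync and async bio queues. */
struct bio_fifo { int nr; };

static int dispatch_one(struct bio_fifo *q, const char *tag)
{
	if (q->nr == 0)
		return 0;
	q->nr--;				/* "issue" one bio */
	printf("dispatched %s bio\n", tag);
	return 1;
}

/* One throttling round: up to sync_share sync bios per async bio,
 * so sync IO is favored by the tunable ratio. */
static void dispatch_round(struct bio_fifo *sq, struct bio_fifo *aq,
			   int sync_share, int budget)
{
	while (budget > 0) {
		int issued = 0;

		for (int i = 0; i < sync_share && budget > 0; i++) {
			if (!dispatch_one(sq, "sync"))
				break;
			issued++;
			budget--;
		}
		if (budget > 0 && dispatch_one(aq, "async")) {
			issued++;
			budget--;
		}
		if (!issued)			/* both queues drained */
			break;
	}
}

int main(void)
{
	struct bio_fifo sync_q = { .nr = 6 }, async_q = { .nr = 6 };

	dispatch_round(&sync_q, &async_q, 3, 8);	/* 3:1 sync:async */
	return 0;
}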

> > Mixing of sync & async throttling is really problematic... I'm wondering
> > how useful the async throttling is.
> 
> If sync throttling is useful, then async throttling has to be useful too?
> Especially given that async IO often consumes all the available bandwidth,
> impacting sync latencies.
  I wasn't clear enough, I guess. I meant to ask whether async throttling
brings some serious advantage over the sync one. And I think your answer is
that we want to have at least some IO prepared for submission to maintain
reasonable device utilization.
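
For reference, the synchronous buffered-throttle scheme described earlier
in the thread can be modeled by a small userspace C sketch (hypothetical
names, and only a model of the policy: the real patches slept writers on a
per-cgroup wait queue in the request queue rather than computing sleep
times in the caller). The writer itself charges its IO against the group's
configured rate and sleeps until the budget allows:

#include <stdint.h>
#include <stdio.h>
#include <time.h>

/* Hypothetical per-cgroup throttle state: a simple rate budget. */
struct cg_throttle {
	uint64_t bps;      /* configured bytes per second */
	uint64_t next_ns;  /* earliest time the next IO may start */
};

static uint64_t now_ns(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
}

/* Write path: block the caller until the group's configured rate
 * allows 'bytes' more IO - i.e. the throttling is synchronous. */
static void cg_throttle_wait(struct cg_throttle *tg, uint64_t bytes)
{
	uint64_t t = now_ns();

	if (tg->next_ns < t)
		tg->next_ns = t;		/* budget fully replenished */
	tg->next_ns += bytes * 1000000000ull / tg->bps;	/* charge this IO */
	if (tg->next_ns > t) {
		uint64_t delta = tg->next_ns - t;
		struct timespec d = {
			.tv_sec  = delta / 1000000000ull,
			.tv_nsec = delta % 1000000000ull,
		};
		nanosleep(&d, NULL);		/* the writer itself sleeps */
	}
}

int main(void)
{
	struct cg_throttle tg = { .bps = 1 << 20 };	/* 1 MB/s limit */

	for (int i = 0; i < 4; i++)
		cg_throttle_wait(&tg, 256 * 1024);	/* four 256 KB writes */
	printf("done after roughly one second\n");
	return 0;
}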

> > Because we will block on request
> > allocation once there are more than nr_requests pending requests, so at
> > that point throttling becomes sync anyway.
> 
> First of all, it is the flushers that will block on nr_requests, not the
> actual writers.
  Well, but as soon as you are going to do real IO (not just use the
cache), you can block - e.g. direct IO writers, fsync, or readers can
block.

> And secondly we thought of having per-group request descriptors so that
> writes of one group don't impact others. So once the writes of a group
> are backlogged, the flusher can query the congestion status of the group
> and not submit any more writes to it. As some writes are already
> queued in that group, writes will not be starved. Well, in the case of
> deadline, even direct writes go into the write queue, so theoretically we
> can hit the starvation issue (flush not being able to submit writes
> without risking blocking) there too.
> 
> To avoid this starvation, ideally we need a per-bdi, per-cgroup flusher,
> so that the flusher can simply block if there are not enough request
> descriptors in the cgroup.
  Yeah, on one hand this would simplify some things, but on the other hand
you would possibly create a performance issue with interleaving IO from
different flusher threads (although that shouldn't be a big problem because
they would work on disjoint sets of inodes and should submit large enough
chunks), and also fs-wide operations such as sync(2) would need some
thinking.
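
The per-group request descriptor idea quoted above reduces to bookkeeping
roughly like the sketch below (hypothetical names, far simpler than the
real request-list code): the flusher checks a per-group congestion
predicate and backs off instead of blocking on a device-wide nr_requests
pool.

#include <stdio.h>

/* Hypothetical per-group request descriptor accounting. */
struct cg_rq_list {
	int nr_requests;	/* per-group descriptor limit */
	int in_flight;		/* write requests queued for this group */
};

static int cg_write_congested(const struct cg_rq_list *rl)
{
	return rl->in_flight >= rl->nr_requests;
}

/* Flusher side: queue up to 'nr' writes, but back off as soon as the
 * group is congested instead of blocking on a shared pool. */
static int flusher_submit(struct cg_rq_list *rl, int nr)
{
	int n = 0;

	while (n < nr && !cg_write_congested(rl)) {
		rl->in_flight++;	/* queue one write request */
		n++;
	}
	return n;	/* caller moves on to the next group if short */
}

int main(void)
{
	struct cg_rq_list rl = { .nr_requests = 8, .in_flight = 6 };

	printf("queued %d of 4 writes\n", flusher_submit(&rl, 4)); /* 2 */
	return 0;
}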

Actually, handling of sync(2) is interesting on its own because if it
should obey throttling limits for each cgroup whose inodes are written, it
may take a *really* long time to complete...
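
To put a rough number on that: if, say, ten cgroups are each throttled to
1 MB/s and each has 1 GB of dirty data, a sync(2) that honors every
group's limit through a single flusher needs on the order of
10 * 1024 s, i.e. close to three hours.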
 
> So trying to throttle buffered writes synchronously in balance_dirty_pages()
> at least simplifies the implementation. I like my implementation better
> than Fengguang's approach to throttling for the simple reason that buffered
> writes and direct writes can be subjected to the same throttling limits
> instead of separate limits for buffered writes.
  I guess we all agree (including Fengguang) that this is desirable.
-- 
Jan Kara <jack@suse.cz>
SUSE Labs, CR
