From: Tejun Heo <tj@kernel.org>
To: Fengguang Wu <fengguang.wu@intel.com>
Cc: Jens Axboe <axboe@kernel.dk>,
	ctalbott@google.com,
	Jan Kara <jack@suse.cz>,
	rni@google.com,
	andrea@betterlinux.com,
	containers@lists.linux-foundation.org,
	linux-kernel@vger.kernel.org,
	sjayaraman@suse.com,
	lsf@lists.linux-foundation.org,
	linux-mm@kvack.org,
	jmoyer@redhat.com,
	linux-fsdevel@vger.kernel.org,
	cgroups@vger.kernel.org,
	vgoyal@redhat.com,
	Mel Gorman <mgorman@suse.de>
Subject: Re: [RFC] writeback and cgroup
Date: Mon, 23 Apr 2012 09:56:26 -0700	[thread overview]
Message-ID: <20120423165626.GB5406@google.com> (raw)
In-Reply-To: <20120422144649.GA7066@localhost>

Hello, Fengguang.

On Sun, Apr 22, 2012 at 10:46:49PM +0800, Fengguang Wu wrote:
> OK. Sorry, I should have explained why the memcg dirty limit is not
> the right tool for back-pressure-based throttling.

I have two questions.  Why do we need memcg for this?  Writeback
currently works without memcg, right?  Why does that change with
blkcg-aware bdis?

> Basically, the more memcgs have dirty limits, the harder it is for
> the flusher to serve them fairly and knock down their dirty pages in
> time. Because the flusher works inode by inode, each one may take up
> to 0.5 seconds, and there may be many memcgs asking for the flusher's
> attention. Also, the more memcgs there are, the more the global dirty
> page pool is partitioned into smaller pieces, which means a smaller
> safety margin for each memcg. Adding these two effects up, some
> memcgs may be constantly hitting their dirty limits once there are
> dozens of memcgs.

And how is this different from a machine with less memory?  If it is
different, why?

> Such cross-subsystem coordination still looks natural to me because
> "weight" is a fundamental and general parameter. It's really a blkcg
> thing (determined by the blkio.weight user interface) rather than
> something specifically tied to cfq. When another kernel entity (e.g.
> NFS or noop) decides to add support for proportional-weight IO
> control in the future, it can make use of the weights calculated by
> balance_dirty_pages(), too.

It is neither fundamental nor natural, and it has already been made
cfq-specific in the devel branch.  You seem to think "weight" is
somehow a global concept which everyone can agree on, but it is not.
Weight of what?  Is it disk time, bandwidth, iops or something else?
cfq deals primarily with disk time because that makes sense for
spinning drives with a single head.  For SSDs with smart enough FTLs,
the unit should probably be iops.  For storage technology bottlenecked
on bus speed, bandwidth would make sense.

IIUC, writeback primarily deals with an abstracted bandwidth applied
per-inode.  That is fine at that layer: details like block allocation
aren't and shouldn't be visible there, and files (or inodes) are the
right level of abstraction.
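
To make the abstraction concrete, here is a rough sketch of
bandwidth-denominated throttling in the spirit of what
balance_dirty_pages() does.  This is illustrative only, with made-up
names; the real logic lives in mm/page-writeback.c and is considerably
more involved:

  #include <stdint.h>

  struct bdi_sketch {
          uint64_t write_bandwidth; /* estimated device bandwidth, bytes/sec */
          uint64_t dirty;           /* dirty pages attributed to this bdi */
          uint64_t thresh;          /* dirty page threshold */
  };

  /* How long should a task pause after dirtying @pages pages? */
  static uint64_t dirty_pause_ms(const struct bdi_sketch *bdi,
                                 uint64_t pages, uint64_t page_size)
  {
          /* Allowed dirtying rate shrinks as dirty approaches thresh. */
          uint64_t headroom = bdi->dirty < bdi->thresh ?
                              bdi->thresh - bdi->dirty : 0;
          uint64_t ratelimit = bdi->write_bandwidth * headroom / bdi->thresh;

          if (ratelimit == 0)
                  return 200; /* hard pause at/over the threshold */
          return pages * page_size * 1000 / ratelimit;
  }

Note that every quantity above is denominated in bytes and bytes/sec;
bandwidth is the only unit writeback thinks in.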

However, this doesn't necessarily translate easily into the actual
underlying IO resource.  For devices with spindles, seek time
dominates, and the same amount of IO may consume vastly different
amounts of disk time, so disk time becomes the primary resource, not
iops or bandwidth.  Naturally, people want to allocate and limit the
primary resource, so cfq distributes disk time across the different
cgroups as configured.
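
For contrast, a conceptual sketch of proportional disk-time
scheduling, roughly in the spirit of cfq's group scheduling; the names
are hypothetical and this is not the actual block/cfq-iosched.c code:

  struct grp_sketch {
          unsigned int weight;          /* blkio.weight */
          unsigned long long disk_time; /* disk time consumed so far, usecs */
  };

  /*
   * Pick the group with the least weight-scaled virtual time.  A group
   * with twice the weight accumulates vtime half as fast, so it ends
   * up receiving twice the disk time in the long run.
   */
  static int pick_next_grp(const struct grp_sketch *g, int n)
  {
          unsigned long long best_vtime = ~0ULL;
          int i, best = 0;

          for (i = 0; i < n; i++) {
                  unsigned long long vtime = g[i].disk_time / g[i].weight;
                  if (vtime < best_vtime) {
                          best_vtime = vtime;
                          best = i;
                  }
          }
          return best;
  }

Here the only unit is disk time; bandwidth and iops never enter the
picture.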

Your suggested solution applies the same number - the weight - to one
portion of a mostly arbitrarily split resource, using a different
unit.  I don't even understand what that achieves.

The requirement is to be able to split the IO resource among cgroups
in a configurable way and to enforce the limits established by that
configuration, which we're currently failing to do for async IOs.
Your proposed solution applies some arbitrary ratio, derived from some
arbitrary interpretation of cfq's IO-time weight, way up in the stack.
When propagated to the lower layer, that would cause a significant
amount of delay and fluctuation behaving completely independently of
how the actual IO resource is handled, split and accounted (using what
unit, at what granularity and on what time scale).  The result would,
at its luckiest moments, have some semblance of interpreting
blkcg.weight as a vague best-effort priority.
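
A hypothetical back-of-the-envelope example of that unit mismatch:
give two cgroups equal weights, let cgroup A write sequentially at
100MB per second of disk time and cgroup B seek-heavily at 10MB per
second of disk time.  An upstream 50/50 bandwidth split then turns
into a wildly lopsided split in the unit cfq actually schedules:

  #include <stdio.h>

  int main(void)
  {
          /* Hypothetical throughputs: MB moved per second of disk time. */
          double seq_mb_per_dsec  = 100.0; /* cgroup A, sequential */
          double seek_mb_per_dsec = 10.0;  /* cgroup B, seeky */

          /* Upstream splits bandwidth 50/50: both dirty 1MB per second. */
          double mb_each = 1.0;

          /* What that costs the device in its own unit, disk time. */
          double dtime_a = mb_each / seq_mb_per_dsec;  /* 0.01 sec */
          double dtime_b = mb_each / seek_mb_per_dsec; /* 0.10 sec */

          printf("disk-time split: A %.0f%%, B %.0f%%\n",
                 100.0 * dtime_a / (dtime_a + dtime_b),
                 100.0 * dtime_b / (dtime_a + dtime_b));
          /* prints: disk-time split: A 9%, B 91% */
          return 0;
  }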

So, I don't think your suggested solution is a solution at all.  In
fact, I'm not even sure what it achieves at the cost of the gross
layering violation and fundamental design braindamage.

>         - No more latency
>         - No performance drop
>         - No bumpy progress and stalls
>         - No need to attach memcg to blkcg
>         - Feel free to create 1000+ IO controllers, to your heart's
>           content, w/o worrying about costs (if any, they would be
>           existing scalability issues)

I'm not sure why memcg suddenly becomes necessary with blkcg, and I
don't think having per-blkcg writeback plus reasonable async
optimization in the iosched would be considerably worse.  It sure will
add some overhead (e.g. from split buffering), but there will be
proper, working isolation, which is what this fuss is all about.
Also, I just don't see how creating 1000+ (relatively active, I
presume) blkcgs on a single spindle would be sane, or how the end
result would be significantly better with your suggested solution, so
let's please put aside the silly non-use case.

In terms of overhead, I suspect the biggest would be the increased
buffering coming from the split channels, but that seems like the cost
of doing business to me.

Thanks.

-- 
tejun
