From: Fengguang Wu <fengguang.wu-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
To: Tejun Heo <tj-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org>
Cc: Jens Axboe <axboe-tSWWG44O7X1aa/9Udqfwiw@public.gmane.org>,
	ctalbott-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org,
	Jan Kara <jack-AlSwsSmVLrQ@public.gmane.org>,
	rni-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org,
	andrea-oIIqvOZpAevzfdHfmsDf5w@public.gmane.org,
	containers-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA@public.gmane.org,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org,
	sjayaraman-IBi9RG/b67k@public.gmane.org,
	lsf-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA@public.gmane.org,
	linux-mm-Bw31MaZKKs3YtjvyW6yDsg@public.gmane.org,
	jmoyer-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org,
	linux-fsdevel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org,
	cgroups-u79uwXL29TY76Z2rM5mHXA@public.gmane.org,
	Vivek Goyal <vgoyal-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
Subject: Re: [RFC] writeback and cgroup
Date: Sat, 14 Apr 2012 22:36:39 +0800	[thread overview]
Message-ID: <20120414143639.GA31241@localhost> (raw)
In-Reply-To: <20120412205148.GA24056-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>

[-- Attachment #1: Type: text/plain, Size: 4887 bytes --]

On Thu, Apr 12, 2012 at 01:51:48PM -0700, Tejun Heo wrote:
> Hello, Vivek.
> 
> On Thu, Apr 12, 2012 at 04:37:19PM -0400, Vivek Goyal wrote:
> > I mean how are we supposed to put cgroup throttling rules using cgroup
> > interface for network filesystems and for btrfs global bdi. Using "dev_t"
> > associated with bdi? I see that all the bdis are showing up in
> > /sys/class/bdi, but how do I know which one I am interested in, or which
> > one belongs to the filesystem I am interested in putting a throttling rule on.
> > 
> > For block devices, we simply use "major:min limit" format to write to
> > a cgroup file and this configuration will sit in one of the per queue
> > per cgroup data structure.
> > 
> > I am assuming that when you say throttling should happen at bdi, you
> > are thinking of maintaining per cgroup per bdi data structures and user
> > is somehow supposed to pass "bdi_maj:bdi_min  limit" through cgroup files?
> > If yes, how does one map a filesystem's bdi we want to put rules on?
> 
> I think you're worrying way too much.  One of the biggest reasons we
> have layers and abstractions is to avoid worrying about everything
> from everywhere.  Let block device implement per-device limits.  Let
> writeback work from the backpressure it gets from the relevant IO
> channel, bdi-cgroup combination in this case.
> 
> For stacked or combined devices, let the combining layer deal with
> piping the congestion information.  If it's per-file split, the
> combined bdi can simply forward information from the matching
> underlying device.  If the file is striped / duplicated somehow, the
> *only* layer which knows what to do is and should be the layer
> performing the striping and duplication.  There's no need to worry
> about it from blkcg and if you get the layering correct it isn't
> difficult to slice such logic inbetween.  In fact, most of it
> (backpressure propagation) would just happen as part of the usual
> buffering between layers.

Yeah, the backpressure idea would work nicely with all possible
intermediate stacking between the bdi and leaf devices. In my attempt
to do combined IO bandwidth control for

- buffered writes, in balance_dirty_pages()
- direct IO, in the cfq IO scheduler

I have had to dig into the cfq code over the past few days to get an
idea of how the two throttling layers can cooperate (and to suffer the
pains arising from the layering violations). It's also rather tricky
to get two previously independent throttling mechanisms to work
seamlessly with each other to provide the desired _unified_ user
interface. It took a lot of reasoning and experiments to work out the
basic scheme...

But here is the first result. The attached graph shows progress of 4
tasks:
- cgroup A: 1 direct dd + 1 buffered dd
- cgroup B: 1 direct dd + 1 buffered dd

The 4 tasks are mostly progressing at the same pace. The top 2
smoother lines are for the buffered dirtiers; the bottom 2 lines are
for the direct writers. As you may notice, the two direct writers
stall once or twice, which widens the gaps between the lines.
Otherwise, the algorithm works as expected to distribute the
bandwidth to each task.

The current code aims to satisfy the more realistic user demand of
distributing bandwidth equally to each cgroup and, inside each
cgroup, equally between buffered and direct writes. On top of that,
weights can be specified to change the default distribution.

The implementation involves adding a "weight for direct IO" to the cfq
groups and a "weight for buffered writes" to the root cgroup. Note that
the current cfq proportional IO controller does not offer explicit
control over the direct:buffered ratio.

When there are both direct and buffered writers in a cgroup,
balance_dirty_pages() will kick in and adjust the weights for cfq to
execute. Note that cfq will continue to send all flusher IOs to the
root cgroup; balance_dirty_pages() will compute an overall async
weight for it, so that in the above test case the computed weights
will be

- 1000 async weight for the root cgroup (2 buffered dds)
- 500 dio weight for cgroup A (1 direct dd)
- 500 dio weight for cgroup B (1 direct dd)
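To make the arithmetic concrete, the equal-split scheme described
above can be sketched as follows. This is a hypothetical helper
illustrating only how the numbers come out (the function name and the
per-cgroup share of 1000 are assumptions from the example, not the
actual balance_dirty_pages() code):

```python
def split_weights(cgroups, share_per_cgroup=1000):
    """Sketch of the scheme above: each cgroup's share is split equally
    among its writers.  The buffered portions are summed into a single
    async weight for the root cgroup (where cfq sends all flusher IO),
    while the direct portions stay as per-cgroup dio weights.
    Hypothetical illustration only, not the kernel code."""
    async_weight = 0
    dio_weights = {}
    for name, (n_buffered, n_direct) in cgroups.items():
        per_writer = share_per_cgroup // (n_buffered + n_direct)
        async_weight += per_writer * n_buffered       # flusher IO -> root cgroup
        dio_weights[name] = per_writer * n_direct     # direct IO  -> this cgroup
    return async_weight, dio_weights

# The test case above: each cgroup runs 1 buffered dd + 1 direct dd.
aw, dw = split_weights({"A": (1, 1), "B": (1, 1)})
# aw == 1000 (root async weight), dw == {"A": 500, "B": 500}
```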

The second graph shows the result of another test case:
- cgroup A, weight 300: 1 buffered cp
- cgroup B, weight 600: 1 buffered dd + 1 direct dd
- cgroup C, weight 300: 1 direct dd
which is also working as expected.
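Assuming the same equal-split rule applies within each cgroup, the
sketch extends naturally to per-cgroup weights. The resulting numbers
below are my extrapolation of the scheme, not figures stated in the
original mail:

```python
def split_weights_weighted(cgroups):
    """cgroups maps name -> (cgroup_weight, n_buffered, n_direct).
    Bandwidth is distributed proportionally to cgroup weight, then
    split equally among that cgroup's writers; the buffered portions
    accumulate into the root cgroup's async weight.
    Hypothetical sketch only."""
    async_weight = 0
    dio_weights = {}
    for name, (weight, n_buffered, n_direct) in cgroups.items():
        per_writer = weight // (n_buffered + n_direct)
        async_weight += per_writer * n_buffered
        if n_direct:
            dio_weights[name] = per_writer * n_direct
    return async_weight, dio_weights

aw, dw = split_weights_weighted({
    "A": (300, 1, 0),   # weight 300: 1 buffered cp
    "B": (600, 1, 1),   # weight 600: 1 buffered dd + 1 direct dd
    "C": (300, 0, 1),   # weight 300: 1 direct dd
})
# aw == 600 (A's 300 + B's 300), dw == {"B": 300, "C": 300}
```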

Once cfq properly grants the total async IO share to the flusher,
balance_dirty_pages() will then do its original job of distributing
the buffered write bandwidth among the buffered dd tasks.

It will have to assume that the devices under the same bdi are
"symmetric". It also needs further stats feedback on IOPS or disk time
in order to do IOPS/time-based IO distribution. Anyway, it will be
interesting to see how far this scheme can go. I'll clean up the code
and post it soon.

Thanks,
Fengguang

[-- Attachment #2: balance_dirty_pages-task-bw.png --]
[-- Type: image/png, Size: 72619 bytes --]

[-- Attachment #3: balance_dirty_pages-task-bw.png --]
[-- Type: image/png, Size: 69646 bytes --]



