From: Tejun Heo <tj@kernel.org>
To: Fengguang Wu <fengguang.wu@intel.com>
Cc: Jan Kara <jack@suse.cz>, vgoyal@redhat.com, Jens Axboe <axboe@kernel.dk>,
	linux-mm@kvack.org, sjayaraman@suse.com, andrea@betterlinux.com,
	jmoyer@redhat.com, linux-fsdevel@vger.kernel.org,
	linux-kernel@vger.kernel.org, kamezawa.hiroyu@jp.fujitsu.com,
	lizefan@huawei.com, containers@lists.linux-foundation.org,
	cgroups@vger.kernel.org, ctalbott@google.com, rni@google.com,
	lsf@lists.linux-foundation.org, Mel Gorman <mgorman@suse.de>
Subject: Re: [RFC] writeback and cgroup
Date: Wed, 25 Apr 2012 08:47:06 -0700
Message-ID: <20120425154706.GA6370@google.com>
In-Reply-To: <20120424075853.GA8391@localhost>

Hey, Fengguang.

On Tue, Apr 24, 2012 at 03:58:53PM +0800, Fengguang Wu wrote:
> > I have two questions. Why do we need memcg for this? Writeback
> > currently works without memcg, right? Why does that change with blkcg
> > aware bdi?
>
> Yeah, currently writeback does not depend on memcg. As for blkcg, it's
> necessary to keep a number of dirty pages for each blkcg, so that the
> cfq groups' async IO queue does not go empty and lose its turn to do
> IO. memcg provides the proper infrastructure to account dirty pages.
>
> In a previous email, we have an example of two 10:1 weight cgroups,
> each running one dd. They will make two IO pipes, each holding a number
> of dirty pages. Since cfq grants dd-1 much more IO bandwidth, dd-1's
> dirty pages are consumed quickly. However balance_dirty_pages(),
> without knowing about cfq's bandwidth divisions, is throttling the
> two dd tasks equally. So dd-1 will be producing dirty pages much
> slower than cfq is consuming them. The flusher thus won't send enough
> dirty pages down to fill the corresponding async IO queue for dd-1.
> cfq cannot really give dd-1 more bandwidth share due to lack of data
> feed. The end result will be: the two cgroups get the 1:1 bandwidth
> share enforced by balance_dirty_pages() even though cfq honors 10:1
> weights between them.

My question is: why can't each cgroup-bdi pair be handled the same way,
or a similar way, to how each bdi is handled now? I haven't looked
through the code yet, but something is determining, even if
inadvertently, the dirty memory usage among different bdi's, right?
What I'm curious about is why cgroupfying bdi makes any difference to
that. If it's indeterministic w/o memcg, let it be that way with blkcg
too. Just treat cgroup-bdis as separate bdis. So, what changes?

> However if it's a large memory machine whose dirty pages get
> partitioned to 100 cgroups, the flusher will be serving them
> in round robin fashion.

Just treat a cgroup-bdi as a separate bdi. Run an independent flusher
on it. They're separate channels.

> blkio.weight will be the "number" shared and interpreted by all IO
> controller entities, whether it be cfq, NFS or balance_dirty_pages().

It already isn't. blk-throttle is an IO controller entity but doesn't
make use of weight.

> > However, this doesn't necessarily translate easily into the actual
> > underlying IO resource. For devices with spindles, seek time
> > dominates, the same amount of IO may consume vastly different
> > amounts of disk time, and disk time becomes the primary resource,
> > not the iops or bandwidth. Naturally, people want to allocate and
> > limit the primary resource, so cfq distributes disk time across
> > different cgroups as configured.
>
> Right. balance_dirty_pages() is always doing dirty throttling wrt.
> bandwidth, even in your back pressure scheme, isn't it? In this
> regard, there is nothing fundamentally different between our
> proposals.

If balance_dirty_pages() fails to keep the IO buffer full, that's
balance_dirty_pages()'s failure (and failing that way from time to time
could be fine, given enough benefits), but no matter what writeback
does, blkcg *should* enforce the configured limits, so the two are
quite different in terms of encapsulation and functionality.

> > Your suggested solution is applying the same number - the weight -
> > to one portion of a mostly arbitrarily split resource using a
> > different unit. I don't even understand what that achieves.
>
> You seem to miss my stated plan: as a next step, balance_dirty_pages()
> will get some feedback information from cfq to adjust its bandwidth
> targets accordingly. That information will be
>
>	io_cost = charge / sectors
>
> The charge value is exactly the value computed in cfq_group_served(),
> which is the slice time or the number of IOs dispatched, depending on
> the mode cfq is operating in. By dividing the ratelimit by the
> normalized io_cost, balance_dirty_pages() will automatically get the
> same weight interpretation as cfq. For example, on spinning disks, it
> will be able to allocate lower bandwidth to seeky cgroups due to the
> larger io_cost reported by cfq.

So, cfq is basing its cost calculation on disk time spent by sync IOs,
which fluctuates with uncategorized async IOs, and you're going to
apply that number to async IOs in some magical way? What the hell does
that achieve?

Please take a step back, look at the whole stack, and think about what
each part is supposed to do and how the parts are supposed to interact.
If you still can't see the mess you're trying to make, ummm... I don't
know.

Thanks.

--
tejun
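The steady-state arithmetic behind the 10:1 dd example and the io_cost
feedback discussed above can be sketched as a toy model. This is an
illustration only, not kernel code: the numbers, the dictionaries, and
the io_cost helper are all hypothetical stand-ins for the behavior the
thread describes.

```python
# Toy steady-state model of the 10:1 example in the thread.
# Assumption: with a work-conserving scheduler, each cgroup's sustained
# writeback throughput ends up equal to the rate at which its tasks are
# allowed to dirty pages, since cfq can only arbitrate among async IO
# that actually reaches its queues.

disk_bw = 110.0                      # MB/s, device bandwidth (illustrative)
weights = {"dd-1": 10, "dd-2": 1}    # blkio.weight values
total_w = sum(weights.values())

# balance_dirty_pages() with no knowledge of cfq weights splits the
# dirty ratelimit equally, so both cgroups dirty pages (and hence get
# written back) at the same rate: 1:1 despite the 10:1 weights.
naive = {g: disk_bw / len(weights) for g in weights}

# The proposed feedback: scale each cgroup's dirty ratelimit in
# proportion to its weight, so the data feed matches cfq's intent.
feedback = {g: disk_bw * w / total_w for g, w in weights.items()}

def io_cost(charge, sectors):
    # Hypothetical helper mirroring io_cost = charge / sectors above:
    # charge is cfq's slice time (or IOs dispatched), so a seeky cgroup
    # reports a larger per-sector cost and would be handed a
    # correspondingly lower dirty ratelimit.
    return charge / sectors

print(naive)     # {'dd-1': 55.0, 'dd-2': 55.0}
print(feedback)  # {'dd-1': 100.0, 'dd-2': 10.0}
```

Under this toy model the naive split collapses to 55:55, while the
weight-fed split recovers 100:10, matching the 1:1 versus 10:1 outcomes
described in the exchange.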