From: Jan Kara <jack@suse.cz>
To: Vivek Goyal <vgoyal@redhat.com>
Cc: Tejun Heo <tj@kernel.org>, Fengguang Wu <fengguang.wu@intel.com>, Jan Kara <jack@suse.cz>, Jens Axboe <axboe@kernel.dk>, linux-mm@kvack.org, sjayaraman@suse.com, andrea@betterlinux.com, jmoyer@redhat.com, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, kamezawa.hiroyu@jp.fujitsu.com, lizefan@huawei.com, containers@lists.linux-foundation.org, cgroups@vger.kernel.org, ctalbott@google.com, rni@google.com, lsf@lists.linux-foundation.org
Subject: Re: [RFC] writeback and cgroup
Date: Sat, 7 Apr 2012 10:00:27 +0200 [thread overview]
Message-ID: <20120407080027.GA2584@quack.suse.cz> (raw)
In-Reply-To: <20120404145134.GC12676@redhat.com>

Hi Vivek,

On Wed 04-04-12 10:51:34, Vivek Goyal wrote:
> On Tue, Apr 03, 2012 at 11:36:55AM -0700, Tejun Heo wrote:
> [..]
> > IIUC, without cgroup, the current writeback code works more or less
> > like this. Throwing in cgroup doesn't really change the fundamental
> > design. Instead of a single pipe going down, we just have multiple
> > pipes to the same device, each of which should be treated separately.
> > Of course, a spinning disk can't be divided that easily and their
> > performance characteristics will be inter-dependent, but the place to
> > solve that problem is where the problem is, the block layer.
>
> How do you take care of throttling IO in the NFS case in this model? The
> current throttling logic is tied to a block device, and in the case of
> NFS there is no block device.
  Yeah, for throttling NFS or other network filesystems we'd have to come
up with a throttling mechanism at some other level. The problem with
throttling at higher levels is that you have to somehow extract information
from the lower levels about the amount of work involved, so I'm not
completely certain where the right place would be. Possibly it also depends
on the intended use case - so far I don't know of any real user for this
functionality...

> [..]
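The device-independent throttling discussed above would need some admission control that doesn't live in the block layer at all. One classic shape for such a mechanism is a per-cgroup token bucket; below is a user-space Python model for illustration only (the class, names, and rate policy are all invented here, not kernel code or a proposed API):

```python
import time

class TokenBucket:
    """Hypothetical per-cgroup throttle: admit up to `rate_bytes`
    bytes per second, with a burst allowance of `burst_bytes`."""

    def __init__(self, rate_bytes, burst_bytes, now=time.monotonic):
        self.rate = rate_bytes
        self.capacity = burst_bytes
        self.tokens = burst_bytes     # start with a full bucket
        self.now = now                # injectable clock for testing
        self.last = now()

    def delay_for(self, nbytes):
        """Seconds the caller must wait before issuing `nbytes` of IO."""
        t = self.now()
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (t - self.last) * self.rate)
        self.last = t
        if self.tokens >= nbytes:
            self.tokens -= nbytes
            return 0.0
        need = nbytes - self.tokens
        self.tokens -= nbytes         # go negative; debt is paid by waiting
        return need / self.rate
```

A caller above the filesystem (where no block device exists, as in the NFS case) would sleep for `delay_for(nbytes)` before submitting the IO. The hard part the email points out remains untouched by this sketch: knowing how much lower-level work `nbytes` actually represents.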
> > In the discussion, for such an implementation, the following obstacles
> > were identified.
> >
> > * There are a lot of cases where IOs are issued by a task which isn't
> >   the originator. ie. Writeback issues IOs for pages which are
> >   dirtied by some other tasks. So, by the time an IO reaches the
> >   block layer, we don't know which cgroup the IO belongs to.
> >
> >   Recently, the block layer has grown support to attach a task to a
> >   bio, which causes the bio to be handled as if it were issued by the
> >   associated task regardless of the actual issuing task. It currently
> >   only allows attaching %current to a bio - bio_associate_current() -
> >   but changing it to support other tasks is trivial.
> >
> >   We'll need to update the async issuers to tag the IOs they issue but
> >   the mechanism is already there.
>
> Most likely this tagging will take place in "struct page" and I am not
> sure we will be allowed to grow the size of "struct page" for this
> reason.
  We can tag inodes and then bios, so this should be fine.

> > * Unlike dirty data pages, metadata tends to have strict ordering
> >   requirements and thus is susceptible to priority inversion. Two
> >   solutions were suggested - 1. allow overdraw for metadata writes so
> >   that low prio metadata writes don't block the whole FS, 2. provide
> >   an interface to query and wait for bdi-cgroup congestion which can
> >   be called from FS metadata paths to throttle metadata operations
> >   before they enter the stream of ordered operations.
>
> So that will probably mean changing the order of operations as well.
> IIUC, in the case of fsync (ordered mode), we opened a metadata
> transaction first, then tried to flush all the cached data and then
> flushed metadata. So if fsync is throttled, all the metadata operations
> behind it will get serialized for ext3/ext4.
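The "tag inodes and then bios" idea above can be modeled in a few lines. This is a hypothetical Python sketch, not the real kernel data structures; in particular the first-dirtier-wins policy is an assumption made purely for illustration:

```python
# Toy model of the tagging chain: the dirtying task's cgroup is recorded
# on the inode, and writeback copies it onto each bio it builds, so the
# block layer can charge the IO to the right cgroup even though writeback
# (not the dirtier) is the task actually issuing it.

class Inode:
    def __init__(self):
        self.dirty_cgroup = None      # cgroup that dirtied this inode

class Bio:
    def __init__(self, cgroup):
        self.cgroup = cgroup          # cgroup this IO is charged to

def mark_inode_dirty(inode, task_cgroup):
    # Assumed policy: the first dirtier owns the inode's writeback.
    if inode.dirty_cgroup is None:
        inode.dirty_cgroup = task_cgroup

def writeback_make_bio(inode):
    # Writeback runs in a kernel thread, but the bio still carries the
    # owning cgroup taken from the inode (falling back to the root).
    return Bio(inode.dirty_cgroup or "root")
```

The point of the model: no per-page tag is needed, so `struct page` does not have to grow; the inode carries the ownership and the bio inherits it at submission time.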
>
> So you seem to be suggesting that we change the design so that a metadata
> operation is not thrown into the ordered stream until we have finished
> writing all the data back to disk? I am not a filesystem developer, so
> I don't know how feasible this change is.
>
> This is just one of the points. In the past, while talking to Dave
> Chinner, he mentioned that in XFS, if two cgroups fall into the same
> allocation group then there were cases where the IO of one cgroup can
> get serialized behind the other.
>
> In general, the core of the issue is that filesystems are not cgroup
> aware, and if you do throttling below filesystems, then invariably one
> or another serialization issue will come up, and I am concerned that we
> will be constantly fixing those serialization issues. Or the design
> point could be so central to filesystem design that it can't be changed.
  We talked about this at LSF, and Dave Chinner had the idea that we could
make processes wait at the time when a transaction is started. At that time
we don't hold any global locks, so a process can be throttled without
serializing other processes. This effectively builds some cgroup awareness
into filesystems, but a pretty simple one, so it should be doable.

> In general, if you do throttling deeper in the stack and build back
> pressure, then all the layers sitting above should be cgroup aware to
> avoid problems. Two layers identified so far are writeback and
> filesystems. Is it really worth the complexity? How about doing
> throttling in higher layers when IO is entering the kernel, keeping
> proportional IO logic at the lowest level, so that the current mechanism
> of building pressure continues to work?
  I would like to keep a single throttling mechanism for the different
limiting methods - i.e. handle proportional IO the same way as IO hard
limits. So we cannot really rely on the fact that throttling is work
preserving.
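Why the placement of the wait matters can be shown with a toy timing model: compare throttling a process before it enters the ordered/locked section (Dave Chinner's transaction-start idea) against throttling it once it already holds its slot in the ordered stream. Everything here is made up for illustration - the 1-time-unit critical section, the numbers, the function name:

```python
def finish_times(delays, throttle_before_lock):
    """Toy model: each task needs 1 time unit inside a shared ordered
    section, and delays[i] is task i's throttle wait.  Returns each
    task's completion time."""
    if throttle_before_lock:
        # Sleep out the throttle first, then queue for the ordered
        # section: an unthrottled task is never stuck behind a
        # throttled one.
        order = sorted(range(len(delays)), key=lambda i: delays[i])
        t, out = 0.0, [0.0] * len(delays)
        for i in order:
            t = max(t, delays[i]) + 1.0
            out[i] = t
        return out
    # Throttle inside the ordered section: strict FIFO, so every task
    # queued behind a throttled one waits out its delay too -- this is
    # the priority inversion the thread is worried about.
    t, out = 0.0, []
    for d in delays:
        t += d + 1.0
        out.append(t)
    return out
```

With a heavily throttled task (delay 10) ahead of an unthrottled one, throttling inside the ordered stream makes the fast task finish at time 12; throttling before entering it lets the fast task finish at time 1.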
  The advantage of throttling at the IO layer is that we can keep all the
details inside it and export only pretty minimal information (like "is this
bdi congested for a given cgroup") to upper layers. If we wanted to do
throttling at upper layers (such as Fengguang's buffered write throttling),
we would need to export the internal details to allow effective
throttling...

								Honza
--
Jan Kara <jack@suse.cz>
SUSE Labs, CR
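The "pretty minimal information" exported to upper layers could look roughly like the following. This is a user-space Python mock; the class and method names (`BdiCgroupCongestion`, `set_congested`, `congested`) are invented for the sketch and are not a real kernel interface:

```python
class BdiCgroupCongestion:
    """Toy model of the minimal exported interface: upper layers only
    ever ask "is this bdi congested for my cgroup?" -- the throttling
    layer's internal accounting stays hidden behind that one question."""

    def __init__(self):
        self._congested = set()       # (bdi, cgroup) pairs over their limit

    def set_congested(self, bdi, cgroup, on):
        # Called from the (hypothetical) throttling-layer side whenever a
        # cgroup crosses or drops back under its limit on this bdi.
        if on:
            self._congested.add((bdi, cgroup))
        else:
            self._congested.discard((bdi, cgroup))

    def congested(self, bdi, cgroup):
        # Called from FS metadata paths (or writeback) before entering an
        # ordered stream, so work can be deferred instead of blocking it.
        return (bdi, cgroup) in self._congested
```

The design point the email makes is visible in the shape of the sketch: a single boolean query crosses the layer boundary, rather than rates, token counts, or any other internal detail of the throttling mechanism.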