From: Johannes Weiner <hannes@cmpxchg.org>
To: azurIt <azurit@pobox.sk>
Cc: Andrew Morton <akpm@linux-foundation.org>, Michal Hocko <mhocko@suse.cz>,
	David Rientjes <rientjes@google.com>,
	KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>,
	KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>,
	linux-mm@kvack.org, cgroups@vger.kernel.org, x86@kernel.org,
	linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [patch 0/7] improve memcg oom killer robustness v2
Date: Tue, 10 Sep 2013 18:03:29 -0400
Message-ID: <20130910220329.GK856@cmpxchg.org>
In-Reply-To: <20130910233247.9EDF4DBA@pobox.sk>

On Tue, Sep 10, 2013 at 11:32:47PM +0200, azurIt wrote:
> >On Tue, Sep 10, 2013 at 11:08:53PM +0200, azurIt wrote:
> >> >On Tue, Sep 10, 2013 at 09:32:53PM +0200, azurIt wrote:
> >> >> Here is full kernel log between 6:00 and 7:59:
> >> >> http://watchdog.sk/lkml/kern6.log
> >> >
> >> >Wow, your apaches are like the hydra.  Whenever one is OOM killed,
> >> >more show up!
> >>
> >> Yeah, it's supposed to do this ;)

How are you expecting the machine to recover from an OOM situation,
though?  I guess I don't really understand what these machines are
doing.  But if you are overloading them like crazy, isn't that the
expected outcome?

> >> >> >> What do you think? I'm now running kernel with your previous
> >> >> >> patch, not with the newest one.
> >> >> >
> >> >> >Which one exactly? Can you attach the diff?
> >> >>
> >> >> I meant that the problem above occurred on a kernel with your
> >> >> latest patch:
> >> >> http://watchdog.sk/lkml/7-2-memcg-fix.patch
> >> >
> >> >The above log has the following callstack:
> >> >
> >> >Sep 10 07:59:43 server01 kernel: [ 3846.337628] [<ffffffff810d19fe>] dump_header+0x7e/0x1e0
> >> >Sep 10 07:59:43 server01 kernel: [ 3846.337707] [<ffffffff810d18ff>] ? find_lock_task_mm+0x2f/0x70
> >> >Sep 10 07:59:43 server01 kernel: [ 3846.337790] [<ffffffff810d18ff>] ? find_lock_task_mm+0x2f/0x70
> >> >Sep 10 07:59:43 server01 kernel: [ 3846.337874] [<ffffffff81094bb0>] ? __css_put+0x50/0x90
> >> >Sep 10 07:59:43 server01 kernel: [ 3846.337952] [<ffffffff810d1ec5>] oom_kill_process+0x85/0x2a0
> >> >Sep 10 07:59:43 server01 kernel: [ 3846.338037] [<ffffffff810d2448>] mem_cgroup_out_of_memory+0xa8/0xf0
> >> >Sep 10 07:59:43 server01 kernel: [ 3846.338120] [<ffffffff81110858>] T.1154+0x8b8/0x8f0
> >> >Sep 10 07:59:43 server01 kernel: [ 3846.338201] [<ffffffff81110fa6>] mem_cgroup_charge_common+0x56/0xa0
> >> >Sep 10 07:59:43 server01 kernel: [ 3846.338283] [<ffffffff81111035>] mem_cgroup_newpage_charge+0x45/0x50
> >> >Sep 10 07:59:43 server01 kernel: [ 3846.338364] [<ffffffff810f3039>] handle_pte_fault+0x609/0x940
> >> >Sep 10 07:59:43 server01 kernel: [ 3846.338451] [<ffffffff8102ab1f>] ? pte_alloc_one+0x3f/0x50
> >> >Sep 10 07:59:43 server01 kernel: [ 3846.338532] [<ffffffff8107e455>] ? sched_clock_local+0x25/0x90
> >> >Sep 10 07:59:43 server01 kernel: [ 3846.338617] [<ffffffff810f34d7>] handle_mm_fault+0x167/0x340
> >> >Sep 10 07:59:43 server01 kernel: [ 3846.338699] [<ffffffff8102714b>] do_page_fault+0x13b/0x490
> >> >Sep 10 07:59:43 server01 kernel: [ 3846.338781] [<ffffffff810f8848>] ? do_brk+0x208/0x3a0
> >> >Sep 10 07:59:43 server01 kernel: [ 3846.338865] [<ffffffff812dba22>] ? gr_learn_resource+0x42/0x1e0
> >> >Sep 10 07:59:43 server01 kernel: [ 3846.338951] [<ffffffff815cb7bf>] page_fault+0x1f/0x30
> >> >
> >> >The charge code seems to be directly invoking the OOM killer, which is
> >> >not possible with 7-2-memcg-fix.  Are you sure this is the right patch
> >> >for this log?  This _looks_ more like what 7-1-memcg-fix was doing,
> >> >with a direct kill in the charge context and a fixup later on.
> >>
> >> I, luckily, still have the kernel source from which that kernel was built.
> >> I tried to re-apply the 7-2-memcg-fix.patch:
> >>
> >> # patch -p1 --dry-run < 7-2-memcg-fix.patch
> >> patching file arch/x86/mm/fault.c
> >> Reversed (or previously applied) patch detected!  Assume -R? [n]
> >> Apply anyway? [n]
> >> Skipping patch.
> >> 4 out of 4 hunks ignored -- saving rejects to file arch/x86/mm/fault.c.rej
> >> patching file include/linux/memcontrol.h
> >> Hunk #1 succeeded at 141 with fuzz 2 (offset 21 lines).
> >> Hunk #2 succeeded at 391 with fuzz 1 (offset 39 lines).
> >
> >Uhm, some of it applied...  I have absolutely no idea what state that
> >tree is in now...
>
> I used '--dry-run' so it should be ok :)

Ah, right.

> >> patching file include/linux/mm.h
> >> Reversed (or previously applied) patch detected!  Assume -R? [n]
> >> Apply anyway? [n]
> >> Skipping patch.
> >> 1 out of 1 hunk ignored -- saving rejects to file include/linux/mm.h.rej
> >> patching file include/linux/sched.h
> >> Reversed (or previously applied) patch detected!  Assume -R? [n]
> >> Apply anyway? [n]
> >> Skipping patch.
> >> 1 out of 1 hunk ignored -- saving rejects to file include/linux/sched.h.rej
> >> patching file mm/memcontrol.c
> >> Reversed (or previously applied) patch detected!  Assume -R? [n]
> >> Apply anyway? [n]
> >> Skipping patch.
> >> 10 out of 10 hunks ignored -- saving rejects to file mm/memcontrol.c.rej
> >> patching file mm/memory.c
> >> Reversed (or previously applied) patch detected!  Assume -R? [n]
> >> Apply anyway? [n]
> >> Skipping patch.
> >> 2 out of 2 hunks ignored -- saving rejects to file mm/memory.c.rej
> >> patching file mm/oom_kill.c
> >> Reversed (or previously applied) patch detected!  Assume -R? [n]
> >> Apply anyway? [n]
> >> Skipping patch.
> >> 1 out of 1 hunk ignored -- saving rejects to file mm/oom_kill.c.rej
> >>
> >> Can you tell from this if the source has the right patch?
> >
> >Not reliably, I don't think.
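[The dry-run technique in the session above can be shown in miniature. A quick way to tell whether a patch is already present in a tree is that the forward dry-run reports "previously applied" while the reverse (`-R`) dry-run applies cleanly. A minimal sketch; every path and file content here is made up for illustration, not taken from the thread:]

```shell
# Illustration of detecting an already-applied patch with --dry-run:
# reverse (-R) application succeeding means the tree already contains
# the change. --dry-run guarantees nothing on disk is modified.
set -e
work=$(mktemp -d)
cd "$work"
mkdir -p a b
printf 'old line\n' > a/file.txt
printf 'new line\n' > b/file.txt
diff -u a/file.txt b/file.txt > fix.patch || true  # diff exits 1 when files differ
cp b/file.txt file.txt    # the "tree" already contains the patched content
if patch -p1 -R --dry-run < fix.patch >/dev/null 2>&1; then
    result="already applied"
else
    result="not applied"
fi
echo "$result"
```

[On that reading, "Reversed (or previously applied) patch detected!" on every hunk of the session above is consistent with the tree already carrying the patch, which is presumably why the quoted run was harmless.]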
> >Can you send me
> >
> >	include/linux/memcontrol.h
> >	mm/memcontrol.c
> >	mm/memory.c
> >	mm/oom_kill.c
> >
> >from those sources?
> >
> >It might be easier to start the application from scratch...  Keep in
> >mind that 7-2 was not an incremental fix, you need to remove the
> >previous memcg patches (as opposed to 7-1).
>
> Yes, i used only 7-2 from your patches. Here are the files:
> http://watchdog.sk/lkml/kernel
>
> orig - kernel source which was used to build the kernel i was talking about earlier
> new - newly unpacked and patched 3.2.50 with all of 'my' patches

Ok, thanks!

> Here is how your patch was applied:
>
> # patch -p1 < 7-2-memcg-fix.patch
> patching file arch/x86/mm/fault.c
> Hunk #1 succeeded at 944 (offset 102 lines).
> Hunk #2 succeeded at 970 (offset 102 lines).
> Hunk #3 succeeded at 1273 with fuzz 1 (offset 212 lines).
> Hunk #4 succeeded at 1382 (offset 223 lines).

Ah, I forgot about this one.  Could you provide that file (fault.c) as
well, please?

> patching file include/linux/memcontrol.h
> Hunk #1 succeeded at 122 with fuzz 2 (offset 2 lines).
> Hunk #2 succeeded at 354 (offset 2 lines).

Looks good, still.

> patching file include/linux/mm.h
> Hunk #1 succeeded at 163 (offset 7 lines).
> patching file include/linux/sched.h
> Hunk #1 succeeded at 1644 (offset 76 lines).
> patching file mm/memcontrol.c
> Hunk #1 succeeded at 1752 (offset 9 lines).
> Hunk #2 succeeded at 1777 (offset 9 lines).
> Hunk #3 succeeded at 1828 (offset 9 lines).
> Hunk #4 succeeded at 1867 (offset 9 lines).
> Hunk #5 succeeded at 2256 (offset 9 lines).
> Hunk #6 succeeded at 2317 (offset 9 lines).
> Hunk #7 succeeded at 2348 (offset 9 lines).
> Hunk #8 succeeded at 2411 (offset 9 lines).
> Hunk #9 succeeded at 2419 (offset 9 lines).
> Hunk #10 succeeded at 2432 (offset 9 lines).
> patching file mm/memory.c
> Hunk #1 succeeded at 3712 (offset 273 lines).
> Hunk #2 succeeded at 3812 (offset 317 lines).
> patching file mm/oom_kill.c

These look good as well.
That leaves the weird, impossible stack trace.  Did you double-check
that this crash came from a kernel built with those exact files?

I'm also confused about the freezer.  You used to freeze cgroups that
were out of memory in the past, right?  Are you no longer doing this?
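[The freezer referred to here is the cgroup v1 freezer controller, driven through the `freezer.state` file. A hedged sketch of that workflow, assuming the conventional v1 mount point; the group name and PID variable are illustrative assumptions, not taken from this thread, and the commands require root:]

```shell
# Sketch of the cgroup v1 freezer workflow mentioned above.
# Mount point, group name, and $APACHE_PID are assumptions.
mkdir -p /sys/fs/cgroup/freezer/box1
echo "$APACHE_PID" > /sys/fs/cgroup/freezer/box1/tasks    # move a task in
echo FROZEN > /sys/fs/cgroup/freezer/box1/freezer.state   # freeze the group
cat /sys/fs/cgroup/freezer/box1/freezer.state             # FREEZING while in progress, then FROZEN
echo THAWED > /sys/fs/cgroup/freezer/box1/freezer.state   # resume the group
```

[Freezing an out-of-memory group this way lets an admin inspect or kill its tasks before thawing, which is presumably why it matters whether it is still in use here.]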