From: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
To: mhocko@suse.com
Cc: akpm@linux-foundation.org, linux-mm@kvack.org,
	xiyou.wangcong@gmail.com, dave.hansen@intel.com,
	hannes@cmpxchg.org, mgorman@suse.de, vbabka@suse.cz,
	sergey.senozhatsky.work@gmail.com, pmladek@suse.com,
	penguin-kernel@I-love.SAKURA.ne.jp
Subject: Re: [PATCH] mm,page_alloc: Serialize warn_alloc() if schedulable.
Date: Wed, 9 Aug 2017 22:01:40 +0900
Message-ID: <201708092201.DJI65113.QJHMOFtFFVOLSO@I-love.SAKURA.ne.jp>
In-Reply-To: <201708091514.IDG64043.MtFLOQHJFFVOSO@I-love.SAKURA.ne.jp>

Tetsuo Handa wrote:
> I'm failing to test your "mm, oom: fix oom_reaper fallouts" patches using
> http://lkml.kernel.org/r/201708072228.FAJ09347.tOOVOFFQJSHMFL@I-love.SAKURA.ne.jp
> because it fails to invoke the OOM killer for an unknown reason. I analyzed it using
> kmallocwd and confirmed that two dozen concurrent allocating threads are
> sufficient for hitting this warn_alloc() vs. printk() lockup.
> Since printk offloading is not yet available, serialization is the only way we
> can mitigate this problem for now. How much longer will we have to wait?

The above explanation was not accurate.

As part of testing your "mm, oom: fix oom_reaper fallouts" patches, I'm trying
various stress patterns, including varying the number of threads. For some
unknown reason (though it is not caused by your patches), it sometimes takes far
too long (on the order of minutes) to invoke the OOM killer, regardless of whether
warn_alloc() is printed periodically or not printed at all. To find out which
stage is taking so long, I'm using kmallocwd with a 1-second timeout, because
SysRq-t etc. are useless for tracking how long threads are waiting at a specific
location: they carry no in-flight allocation information, and they take many
seconds while printing mostly noise (including idle threads that are simply
sleeping). But due to kmallocwd's 1-second timeout setting (which I want in order
to see at which stage allocating threads are waiting for so long), printk() from
out_of_memory() with oom_lock held gets trapped by kmallocwd's periodic printk()
output, and that caused the lockup.
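
In case the mechanism is not obvious from the traces below, here is a
hand-written sketch of the console_unlock() flushing behavior (this is
not the actual kernel/printk/printk.c code, and all helper names are
made up):

----------
#include <linux/types.h>

/*
 * Illustration only (not the real code; these helpers are made up).
 * The task that owns the console lock inside console_unlock() keeps
 * flushing records until the log buffer is empty. If other contexts
 * (kmallocwd here, once per second) append records at least as fast
 * as the slow console writes them out, the owner -- the task printing
 * the OOM report with oom_lock held -- never leaves this loop.
 */
extern bool log_buf_has_pending(void);        /* made-up helper */
extern void emit_one_record_to_console(void); /* made-up; slow on serial */
extern void release_console_ownership(void);  /* made-up helper */

void console_unlock_sketch(void)
{
	/* Other CPUs can call printk() and refill the buffer meanwhile. */
	while (log_buf_has_pending())
		emit_one_record_to_console();
	release_console_ownership();
}
----------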

So this is not a direct warn_alloc() vs. printk() lockup, but what kmallocwd
tries to warn about is similar to what warn_alloc() warns about. I can't
automate testing of the "mm, oom: fix oom_reaper fallouts" patches because I
sometimes need SysRq-f etc. to unstick the test.
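
For reference, the idea of the patch in the Subject boils down to something
like the sketch below (a rough outline, not the exact hunk that was posted):
let only one schedulable caller at a time feed the stall report into printk().

----------
/* Rough sketch of the warn_alloc() serialization idea; not the exact patch. */
static DEFINE_MUTEX(warn_alloc_mutex);

void warn_alloc(gfp_t gfp_mask, nodemask_t *nodemask, const char *fmt, ...)
{
	bool can_sleep = gfpflags_allow_blocking(gfp_mask);

	if (can_sleep)
		mutex_lock(&warn_alloc_mutex);

	/* ... emit the "page allocation stalls" report via printk() ... */

	if (can_sleep)
		mutex_unlock(&warn_alloc_mutex);
}
----------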

> 
> ----------
> [  645.993827] MemAlloc-Info: stalling=18 dying=0 exiting=0 victim=0 oom_count=29
> (...snipped...)
> [  645.996694] MemAlloc: vmtoolsd(2221) flags=0x400100 switches=5607 seq=3740 gfp=0x14200ca(GFP_HIGHUSER_MOVABLE) order=0 delay=7541
> [  645.996695] vmtoolsd        R  running task    11960  2221      1 0x00000080
> [  645.996699] Call Trace:
> [  645.996708]  ? console_unlock+0x373/0x4a0
> [  645.996709]  ? vprintk_emit+0x211/0x2f0
> [  645.996714]  ? vprintk_emit+0x21a/0x2f0
> [  645.996720]  ? vprintk_default+0x1a/0x20
> [  645.996722]  ? vprintk_func+0x22/0x60
> [  645.996724]  ? printk+0x53/0x6a
> [  645.996731]  ? dump_stack_print_info+0xab/0xb0
> [  645.996736]  ? dump_stack+0x5e/0x9e
> [  645.996739]  ? dump_header+0x9d/0x3fa
> [  645.996744]  ? trace_hardirqs_on+0xd/0x10
> [  645.996751]  ? oom_kill_process+0x226/0x650
> [  645.996757]  ? out_of_memory+0x13d/0x570
> [  645.996758]  ? out_of_memory+0x20d/0x570
> [  645.996763]  ? __alloc_pages_nodemask+0xbc8/0xed0
> [  645.996780]  ? alloc_pages_current+0x65/0xb0
> [  645.996784]  ? __page_cache_alloc+0x10b/0x140
> [  645.996789]  ? filemap_fault+0x3df/0x6a0
> [  645.996790]  ? filemap_fault+0x2ab/0x6a0
> [  645.996797]  ? xfs_filemap_fault+0x34/0x50
> [  645.996799]  ? __do_fault+0x19/0x120
> [  645.996803]  ? __handle_mm_fault+0xa99/0x1260
> [  645.996814]  ? handle_mm_fault+0x1b2/0x350
> [  645.996816]  ? handle_mm_fault+0x46/0x350
> [  645.996820]  ? __do_page_fault+0x1da/0x510
> [  645.996828]  ? do_page_fault+0x21/0x70
> [  645.996832]  ? page_fault+0x22/0x30
> (...snipped...)
> [  645.998748] MemAlloc-Info: stalling=18 dying=0 exiting=0 victim=0 oom_count=29
> (...snipped...)
> [ 1472.484590] MemAlloc-Info: stalling=25 dying=0 exiting=0 victim=0 oom_count=29
> (...snipped...)
> [ 1472.487341] MemAlloc: vmtoolsd(2221) flags=0x400100 switches=5607 seq=3740 gfp=0x14200ca(GFP_HIGHUSER_MOVABLE) order=0 delay=834032
> [ 1472.487342] vmtoolsd        R  running task    11960  2221      1 0x00000080
> [ 1472.487346] Call Trace:
> [ 1472.487353]  ? console_unlock+0x373/0x4a0
> [ 1472.487355]  ? vprintk_emit+0x211/0x2f0
> [ 1472.487360]  ? vprintk_emit+0x21a/0x2f0
> [ 1472.487367]  ? vprintk_default+0x1a/0x20
> [ 1472.487369]  ? vprintk_func+0x22/0x60
> [ 1472.487370]  ? printk+0x53/0x6a
> [ 1472.487377]  ? dump_stack_print_info+0xab/0xb0
> [ 1472.487381]  ? dump_stack+0x5e/0x9e
> [ 1472.487384]  ? dump_header+0x9d/0x3fa
> [ 1472.487389]  ? trace_hardirqs_on+0xd/0x10
> [ 1472.487396]  ? oom_kill_process+0x226/0x650
> [ 1472.487402]  ? out_of_memory+0x13d/0x570
> [ 1472.487403]  ? out_of_memory+0x20d/0x570
> [ 1472.487408]  ? __alloc_pages_nodemask+0xbc8/0xed0
> [ 1472.487426]  ? alloc_pages_current+0x65/0xb0
> [ 1472.487429]  ? __page_cache_alloc+0x10b/0x140
> [ 1472.487434]  ? filemap_fault+0x3df/0x6a0
> [ 1472.487435]  ? filemap_fault+0x2ab/0x6a0
> [ 1472.487441]  ? xfs_filemap_fault+0x34/0x50
> [ 1472.487444]  ? __do_fault+0x19/0x120
> [ 1472.487447]  ? __handle_mm_fault+0xa99/0x1260
> [ 1472.487459]  ? handle_mm_fault+0x1b2/0x350
> [ 1472.487460]  ? handle_mm_fault+0x46/0x350
> [ 1472.487465]  ? __do_page_fault+0x1da/0x510
> [ 1472.487472]  ? do_page_fault+0x21/0x70
> [ 1472.487476]  ? page_fault+0x22/0x30
> (...snipped...)
> [ 1472.489975] MemAlloc-Info: stalling=25 dying=0 exiting=0 victim=0 oom_count=29
> ----------
> 



Thread overview: 43+ messages
2017-06-01 11:43 [PATCH] mm,page_alloc: Serialize warn_alloc() if schedulable Tetsuo Handa
2017-06-01 11:59 ` Michal Hocko
2017-06-01 13:11   ` Tetsuo Handa
2017-06-01 13:28     ` Michal Hocko
2017-06-01 22:10       ` Andrew Morton
2017-06-02  7:18         ` Michal Hocko
2017-06-02 11:13           ` Tetsuo Handa
2017-06-02 12:15             ` Michal Hocko
2017-06-02 17:13               ` Tetsuo Handa
2017-06-02 21:57             ` Cong Wang
2017-06-04  8:58               ` Tetsuo Handa
2017-06-04 15:05                 ` Michal Hocko
2017-06-04 21:43                   ` Tetsuo Handa
2017-06-05  5:37                     ` Michal Hocko
2017-06-05 18:15                       ` Cong Wang
2017-06-06  9:17                         ` Michal Hocko
2017-06-05 18:25                 ` Cong Wang
2017-06-22 10:35                   ` Tetsuo Handa
2017-06-22 22:53                     ` Cong Wang
2017-06-02 16:59           ` Cong Wang
2017-06-02 19:59           ` Andrew Morton
2017-06-03  2:57             ` Tetsuo Handa
2017-06-03  7:32             ` Michal Hocko
2017-06-03  8:36               ` Tetsuo Handa
2017-06-05  7:10                 ` Sergey Senozhatsky
2017-06-05  9:36                   ` Sergey Senozhatsky
2017-06-05 15:02                     ` Tetsuo Handa
2017-06-03 13:21               ` Tetsuo Handa
2017-07-08  4:59           ` Tetsuo Handa
2017-07-10 13:21             ` Michal Hocko
2017-07-10 13:54               ` Tetsuo Handa
2017-07-10 14:14                 ` Michal Hocko
2017-07-11 13:10                   ` Tetsuo Handa
2017-07-11 13:49                     ` Michal Hocko
2017-07-11 14:58                       ` Petr Mladek
2017-07-11 22:06                       ` Tetsuo Handa
2017-07-12  8:54                         ` Michal Hocko
2017-07-12 12:23                           ` Tetsuo Handa
2017-07-12 12:41                             ` Michal Hocko
2017-07-14 12:30                               ` Tetsuo Handa
2017-07-14 12:48                                 ` Michal Hocko
2017-08-09  6:14                                   ` Tetsuo Handa
2017-08-09 13:01                                     ` Tetsuo Handa [this message]
