From: Naresh Kamboju <naresh.kamboju@linaro.org>
To: Michal Hocko <mhocko@kernel.org>
Cc: Arnd Bergmann <arnd@arndb.de>,
	"Linux F2FS DEV,
	Mailing List"  <linux-f2fs-devel@lists.sourceforge.net>,
	linux-ext4 <linux-ext4@vger.kernel.org>,
	linux-block <linux-block@vger.kernel.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	open list <linux-kernel@vger.kernel.org>,
	Linux-Next Mailing List <linux-next@vger.kernel.org>,
	linux-mm <linux-mm@kvack.org>,
	Andreas Dilger <adilger.kernel@dilger.ca>,
	Jaegeuk Kim <jaegeuk@kernel.org>, "Theodore Ts'o" <tytso@mit.edu>,
	Chao Yu <chao@kernel.org>, Hugh Dickins <hughd@google.com>,
	Andrea Arcangeli <aarcange@redhat.com>,
	Matthew Wilcox <willy@infradead.org>,
	Chao Yu <yuchao0@huawei.com>,
	lkft-triage@lists.linaro.org
Subject: Re: mm: mkfs.ext4 invoked oom-killer on i386 - pagecache_get_page
Date: Wed, 20 May 2020 17:26:17 +0530	[thread overview]
Message-ID: <CA+G9fYvzLm7n1BE7AJXd8_49fOgPgWWTiQ7sXkVre_zoERjQKg@mail.gmail.com> (raw)
In-Reply-To: <20200519084535.GG32497@dhcp22.suse.cz>

FYI,

This issue is specific to 32-bit architectures (i386 and arm) on the linux-next tree.
According to the test results history, the problem started between these tags:
Bad : next-20200430
Good : next-20200429

Steps to reproduce:
dd if=/dev/disk/by-id/ata-SanDisk_SSD_PLUS_120GB_190504A00573 of=/dev/null bs=1M count=2048
or
mkfs -t ext4 /dev/disk/by-id/ata-SanDisk_SSD_PLUS_120GB_190804A00BE5


Problem:
[   38.802375] dd invoked oom-killer: gfp_mask=0x100cc0(GFP_USER), order=0, oom_score_adj=0

i386 crash log:  https://pastebin.com/Hb8U89vU
arm crash log: https://pastebin.com/BD9t3JTm
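
While reproducing, the per-zone free pages and watermarks can be watched to see the
lowmem depletion as it happens. A minimal sketch, assuming the usual /proc/zoneinfo
field names on recent kernels:

  # print each zone's free pages and min/low watermarks once per second
  while sleep 1; do
      awk '/^Node/      {zone=$0}
           /pages free/ {print zone, "free:", $3}
           /^ *min/     {print zone, "min:",  $2}
           /^ *low/     {print zone, "low:",  $2}' /proc/zoneinfo
      echo ---
  done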

On Tue, 19 May 2020 at 14:15, Michal Hocko <mhocko@kernel.org> wrote:
>
> On Tue 19-05-20 10:11:25, Arnd Bergmann wrote:
> > On Tue, May 19, 2020 at 9:52 AM Michal Hocko <mhocko@kernel.org> wrote:
> > >
> > > On Mon 18-05-20 19:40:55, Naresh Kamboju wrote:
> > > > Thanks for looking into this problem.
> > > >
> > > > On Sat, 2 May 2020 at 02:28, Andrew Morton <akpm@linux-foundation.org> wrote:
> > > > >
> > > > > On Fri, 1 May 2020 18:08:28 +0530 Naresh Kamboju <naresh.kamboju@linaro.org> wrote:
> > > > >
> > > > > > mkfs -t ext4 invoked the oom-killer on an i386 kernel running on an x86_64 device.
> > > > > > This started happening on the linux-next master branch, kernel tags next-20200430
> > > > > > and next-20200501. We did not bisect this problem.
> > > [...]
> > > > Creating journal (131072 blocks): [   31.251333] mkfs.ext4 invoked
> > > > oom-killer: gfp_mask=0x101cc0(GFP_USER|__GFP_WRITE), order=0,
> > > > oom_score_adj=0
> > > [...]
> > > > [   31.500943] DMA free:187396kB min:22528kB low:28160kB high:33792kB
> > > > reserved_highatomic:0KB active_anon:0kB inactive_anon:0kB
> > > > active_file:4736kB inactive_file:431688kB unevictable:0kB
> > > > writepending:62020kB present:783360kB managed:668264kB mlocked:0kB
> > > > kernel_stack:888kB pagetables:0kB bounce:0kB free_pcp:880kB
> > > > local_pcp:216kB free_cma:163840kB
> > >
> > > This is really unexpected. You are saying this is a regular i386, where the
> > > DMA zone should be the bottom 16MB, yet yours is 780MB, and the rest of the
> > > low memory should be in the Normal zone, which is completely missing here.
> > > How did you get to that configuration? I have to say I haven't seen anything
> > > like that on i386.
> >
> > I think that line comes from an ARM32 beaglebone-X15 machine showing
> > the same symptom. The i386 line from the log file that Naresh linked to at
> > https://lkft.validation.linaro.org/scheduler/job/1406110#L1223  is less
> > unusual:
>
> OK, that makes more sense! At least for the memory layout.
>
> > [   34.931663] Node 0 active_anon:21464kB inactive_anon:8688kB
> > active_file:16604kB inactive_file:849976kB unevictable:0kB
> > isolated(anon):0kB isolated(file):0kB mapped:25284kB dirty:58952kB
> > writeback:27772kB shmem:8944kB writeback_tmp:0kB unstable:0kB
> > all_unreclaimable? yes
> > [   34.955523] DMA free:3356kB min:68kB low:84kB high:100kB
> > reserved_highatomic:0KB active_anon:0kB inactive_anon:0kB
> > active_file:0kB inactive_file:11964kB unevictable:0kB
> > writepending:11980kB present:15964kB managed:15876kB mlocked:0kB
> > kernel_stack:0kB pagetables:0kB bounce:0kB free_pcp:0kB local_pcp:0kB
> > free_cma:0kB
> > [   34.983385] lowmem_reserve[]: 0 825 1947 825
> > [   34.987678] Normal free:3948kB min:7732kB low:8640kB high:9548kB
> > reserved_highatomic:0KB active_anon:0kB inactive_anon:0kB
> > active_file:1096kB inactive_file:786400kB unevictable:0kB
> > writepending:65432kB present:884728kB managed:845576kB mlocked:0kB
> > kernel_stack:1112kB pagetables:0kB bounce:0kB free_pcp:2908kB
> > local_pcp:500kB free_cma:0kB
>
> The lowmem is really low (way below the min watermark), so even the memory
> reserves for high-priority and atomic requests are depleted. There is
> still 786MB of inactive page cache to be reclaimed. It doesn't seem to
> be dirty or under writeback, but it still might be pinned by the
> filesystem. I would suggest watching the vmscan reclaim tracepoints and
> checking why the reclaim fails to reclaim anything.
> --
> Michal Hocko
> SUSE Labs
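
A minimal sketch of capturing the vmscan reclaim tracepoints suggested above, assuming
tracefs is mounted at /sys/kernel/debug/tracing and using a placeholder for the device
name from the reproducer:

  cd /sys/kernel/debug/tracing
  echo 1 > events/vmscan/enable           # enable all mm_vmscan_* tracepoints
  echo 1 > tracing_on
  mkfs -t ext4 /dev/disk/by-id/<device>   # run the reproducer
  echo 0 > tracing_on
  cat trace > /tmp/vmscan-trace.txt       # e.g. inspect mm_vmscan_lru_shrink_inactive lines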

