From: Cong Wang <xiyou.wangcong@gmail.com>
To: Mel Gorman <mgorman@techsingularity.net>
Cc: LKML <linux-kernel@vger.kernel.org>,
Andrew Morton <akpm@linux-foundation.org>,
Michal Hocko <mhocko@suse.com>, linux-mm <linux-mm@kvack.org>
Subject: Re: [PATCH] mm: avoid blocking lock_page() in kcompactd
Date: Mon, 20 Jan 2020 14:41:50 -0800
Message-ID: <CAM_iQpWfnFD9YUetACbYf0u-be6u3m3Y62iAcbWSc1Ykrf4XCA@mail.gmail.com>
In-Reply-To: <20200110092256.GN3466@techsingularity.net>

On Fri, Jan 10, 2020 at 1:22 AM Mel Gorman <mgorman@techsingularity.net> wrote:
>
> On Thu, Jan 09, 2020 at 02:56:46PM -0800, Cong Wang wrote:
> > We observed kcompactd hung at __lock_page():
> >
> > INFO: task kcompactd0:57 blocked for more than 120 seconds.
> > Not tainted 4.19.56.x86_64 #1
> > "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> > kcompactd0 D 0 57 2 0x80000000
> > Call Trace:
> > ? __schedule+0x236/0x860
> > schedule+0x28/0x80
> > io_schedule+0x12/0x40
> > __lock_page+0xf9/0x120
> > ? page_cache_tree_insert+0xb0/0xb0
> > ? update_pageblock_skip+0xb0/0xb0
> > migrate_pages+0x88c/0xb90
> > ? isolate_freepages_block+0x3b0/0x3b0
> > compact_zone+0x5f1/0x870
> > kcompactd_do_work+0x130/0x2c0
> > ? __switch_to_asm+0x35/0x70
> > ? __switch_to_asm+0x41/0x70
> > ? kcompactd_do_work+0x2c0/0x2c0
> > ? kcompactd+0x73/0x180
> > kcompactd+0x73/0x180
> > ? finish_wait+0x80/0x80
> > kthread+0x113/0x130
> > ? kthread_create_worker_on_cpu+0x50/0x50
> > ret_from_fork+0x35/0x40
> >
> > which faddr2line maps to:
> >
> > migrate_pages+0x88c/0xb90:
> > lock_page at include/linux/pagemap.h:483
> > (inlined by) __unmap_and_move at mm/migrate.c:1024
> > (inlined by) unmap_and_move at mm/migrate.c:1189
> > (inlined by) migrate_pages at mm/migrate.c:1419
> >
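
For reference, the lock_page() here is the sleeping variant (from
include/linux/pagemap.h, as I read it in 4.19):

	static inline void lock_page(struct page *page)
	{
		might_sleep();
		if (!trylock_page(page))
			__lock_page(page);	/* sleeps (via io_schedule) until
						 * the holder unlocks the page */
	}

which matches the io_schedule() frame in the stack trace above.
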
> > Sometimes kcompactd eventually got out of this situation; sometimes it
> > did not.
> >
> > I think memory compaction is a best-effort attempt to migrate pages, so
> > it does not have to wait for I/O to complete. It is fine to call
> > trylock_page() here, similar to what buffer_migrate_lock_buffers() does.
> >
> > Given that MIGRATE_SYNC_LIGHT is used on the compaction path, just relax
> > the check for it.
> >
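
To make the proposal concrete: the idea is to give up on a contended
page lock under MIGRATE_SYNC_LIGHT instead of sleeping on it. Roughly,
against __unmap_and_move() in mm/migrate.c (a sketch of the intent, not
the exact diff):

	if (!trylock_page(page)) {
		/*
		 * Before: only !force or MIGRATE_ASYNC bailed out here.
		 * After: MIGRATE_SYNC_LIGHT (what kcompactd uses) also
		 * bails out rather than blocking in lock_page().
		 */
		if (!force || mode == MIGRATE_ASYNC ||
		    mode == MIGRATE_SYNC_LIGHT)
			goto out;

		lock_page(page);
	}
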
>
> Is this a single page being locked for a long time or multiple pages
> being locked without reaching a reschedule point?

I am not sure whether it is a single page or multiple pages, but I did
locate the process holding the page lock (or locks), and I used perf to
capture its stack trace:

ffffffffa722aa06 shrink_inactive_list
ffffffffa722b3d7 shrink_node_memcg
ffffffffa722b85f shrink_node
ffffffffa722bc89 do_try_to_free_pages
ffffffffa722c179 try_to_free_mem_cgroup_pages
ffffffffa7298703 try_charge
ffffffffa729a886 mem_cgroup_try_charge
ffffffffa720ec03 __add_to_page_cache_locked
ffffffffa720ee3a add_to_page_cache_lru
ffffffffa7312ddb iomap_readpages_actor
ffffffffa73133f7 iomap_apply
ffffffffa73135da iomap_readpages
ffffffffa722062e read_pages
ffffffffa7220b3f __do_page_cache_readahead
ffffffffa7210554 filemap_fault
ffffffffc039e41f __xfs_filemap_fault
ffffffffa724f5e7 __do_fault
ffffffffa724c5f2 __handle_mm_fault
ffffffffa724cbc6 handle_mm_fault
ffffffffa70a313e __do_page_fault
ffffffffa7a00dfe page_fault

This process has been stuck in this state for a long time (since I sent
out this patch) without making any progress. It behaves as if it were
spinning in an infinite loop, although the instruction pointer still
moves around within mem_cgroup_try_charge().
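
For what it's worth, my reading of try_charge() in 4.19 (a simplified
sketch below, not verbatim) is that it bounces between the charge
attempt and memcg reclaim, and the page-fault path retries the whole
charge again on failure, which would match the instruction pointer
moving without any forward progress:

	int nr_retries = MEM_CGROUP_RECLAIM_RETRIES;	/* 5 in 4.19 */
	unsigned long nr_reclaimed;
retry:
	if (page_counter_try_charge(&memcg->memory, batch, &counter))
		goto done_restock;	/* under the limit, charge succeeds */

	/* Over the limit: reclaim from this memcg, then try again. */
	nr_reclaimed = try_to_free_mem_cgroup_pages(mem_over_limit, nr_pages,
						    gfp_mask, may_swap);
	if (mem_cgroup_margin(mem_over_limit) >= nr_pages)
		goto retry;
	if (nr_reclaimed && nr_pages <= (1 << PAGE_ALLOC_COSTLY_ORDER))
		goto retry;
	if (nr_retries--)
		goto retry;
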
I also enabled the mm_vmscan_lru_shrink_inactive trace event; here is
what I collected:

<...>-455459 [003] .... 2691911.664706: mm_vmscan_lru_shrink_inactive: nid=0 nr_scanned=1 nr_reclaimed=0 nr_dirty=0 nr_writeback=0 nr_congested=0 nr_immediate=0 nr_activate=0 nr_ref_keep=0 nr_unmap_fail=0 priority=0 flags=RECLAIM_WB_FILE|RECLAIM_WB_ASYNC
<...>-455459 [003] .... 2691911.664711: mm_vmscan_lru_shrink_inactive: nid=0 nr_scanned=1 nr_reclaimed=0 nr_dirty=0 nr_writeback=0 nr_congested=0 nr_immediate=0 nr_activate=0 nr_ref_keep=0 nr_unmap_fail=0 priority=4 flags=RECLAIM_WB_FILE|RECLAIM_WB_ASYNC
<...>-455459 [003] .... 2691911.664714: mm_vmscan_lru_shrink_inactive: nid=0 nr_scanned=2 nr_reclaimed=0 nr_dirty=0 nr_writeback=0 nr_congested=0 nr_immediate=0 nr_activate=0 nr_ref_keep=0 nr_unmap_fail=0 priority=3 flags=RECLAIM_WB_FILE|RECLAIM_WB_ASYNC
<...>-455459 [003] .... 2691911.664717: mm_vmscan_lru_shrink_inactive: nid=0 nr_scanned=5 nr_reclaimed=0 nr_dirty=0 nr_writeback=0 nr_congested=0 nr_immediate=0 nr_activate=0 nr_ref_keep=0 nr_unmap_fail=0 priority=2 flags=RECLAIM_WB_FILE|RECLAIM_WB_ASYNC
<...>-455459 [003] .... 2691911.664720: mm_vmscan_lru_shrink_inactive: nid=0 nr_scanned=5 nr_reclaimed=0 nr_dirty=0 nr_writeback=0 nr_congested=0 nr_immediate=0 nr_activate=0 nr_ref_keep=0 nr_unmap_fail=0 priority=1 flags=RECLAIM_WB_FILE|RECLAIM_WB_ASYNC
<...>-455459 [003] .... 2691911.664725: mm_vmscan_lru_shrink_inactive: nid=0 nr_scanned=7 nr_reclaimed=0 nr_dirty=0 nr_writeback=0 nr_congested=0 nr_immediate=0 nr_activate=0 nr_ref_keep=0 nr_unmap_fail=0 priority=0 flags=RECLAIM_WB_FILE|RECLAIM_WB_ASYNC
<...>-455459 [003] .... 2691911.664730: mm_vmscan_lru_shrink_inactive: nid=0 nr_scanned=1 nr_reclaimed=0 nr_dirty=0 nr_writeback=0 nr_congested=0 nr_immediate=0 nr_activate=0 nr_ref_keep=0 nr_unmap_fail=0 priority=2 flags=RECLAIM_WB_FILE|RECLAIM_WB_ASYNC
<...>-455459 [003] .... 2691911.664732: mm_vmscan_lru_shrink_inactive: nid=0 nr_scanned=1 nr_reclaimed=0 nr_dirty=0 nr_writeback=0 nr_congested=0 nr_immediate=0 nr_activate=0 nr_ref_keep=0 nr_unmap_fail=0 priority=0 flags=RECLAIM_WB_FILE|RECLAIM_WB_ASYNC
<...>-455459 [003] .... 2691911.664736: mm_vmscan_lru_shrink_inactive: nid=0 nr_scanned=1 nr_reclaimed=0 nr_dirty=0 nr_writeback=0 nr_congested=0 nr_immediate=0 nr_activate=0 nr_ref_keep=0 nr_unmap_fail=0 priority=4 flags=RECLAIM_WB_FILE|RECLAIM_WB_ASYNC
<...>-455459 [003] .... 2691911.664739: mm_vmscan_lru_shrink_inactive: nid=0 nr_scanned=2 nr_reclaimed=0 nr_dirty=0 nr_writeback=0 nr_congested=0 nr_immediate=0 nr_activate=0 nr_ref_keep=0 nr_unmap_fail=0 priority=3 flags=RECLAIM_WB_FILE|RECLAIM_WB_ASYNC
<...>-455459 [003] .... 2691911.664744: mm_vmscan_lru_shrink_inactive: nid=0 nr_scanned=5 nr_reclaimed=0 nr_dirty=0 nr_writeback=0 nr_congested=0 nr_immediate=0 nr_activate=0 nr_ref_keep=0 nr_unmap_fail=0 priority=2 flags=RECLAIM_WB_FILE|RECLAIM_WB_ASYNC
<...>-455459 [003] .... 2691911.664747: mm_vmscan_lru_shrink_inactive: nid=0 nr_scanned=4 nr_reclaimed=0 nr_dirty=0 nr_writeback=0 nr_congested=0 nr_immediate=0 nr_activate=0 nr_ref_keep=0 nr_unmap_fail=0 priority=1 flags=RECLAIM_WB_FILE|RECLAIM_WB_ASYNC
<...>-455459 [003] .... 2691911.664752: mm_vmscan_lru_shrink_inactive: nid=0 nr_scanned=12 nr_reclaimed=0 nr_dirty=0 nr_writeback=0 nr_congested=0 nr_immediate=0 nr_activate=0 nr_ref_keep=0 nr_unmap_fail=0 priority=0 flags=RECLAIM_WB_FILE|RECLAIM_WB_ASYNC
<...>-455459 [003] .... 2691911.664755: mm_vmscan_lru_shrink_inactive: nid=0 nr_scanned=1 nr_reclaimed=0 nr_dirty=0 nr_writeback=0 nr_congested=0 nr_immediate=0 nr_activate=0 nr_ref_keep=0 nr_unmap_fail=0 priority=4 flags=RECLAIM_WB_FILE|RECLAIM_WB_ASYNC
<...>-455459 [003] .... 2691911.664761: mm_vmscan_lru_shrink_inactive: nid=0 nr_scanned=1 nr_reclaimed=0 nr_dirty=0 nr_writeback=0 nr_congested=0 nr_immediate=0 nr_activate=0 nr_ref_keep=0 nr_unmap_fail=0 priority=2 flags=RECLAIM_WB_FILE|RECLAIM_WB_ASYNC
<...>-455459 [003] .... 2691911.664762: mm_vmscan_lru_shrink_inactive: nid=0 nr_scanned=1 nr_reclaimed=0 nr_dirty=0 nr_writeback=0 nr_congested=0 nr_immediate=0 nr_activate=0 nr_ref_keep=0 nr_unmap_fail=0 priority=1 flags=RECLAIM_WB_FILE|RECLAIM_WB_ASYNC
<...>-455459 [003] .... 2691911.664764: mm_vmscan_lru_shrink_inactive: nid=0 nr_scanned=1 nr_reclaimed=0 nr_dirty=0 nr_writeback=0 nr_congested=0 nr_immediate=0 nr_activate=0 nr_ref_keep=0 nr_unmap_fail=0 priority=0 flags=RECLAIM_WB_FILE|RECLAIM_WB_ASYNC
<...>-455459 [003] .... 2691911.664770: mm_vmscan_lru_shrink_inactive: nid=0 nr_scanned=4 nr_reclaimed=0 nr_dirty=0 nr_writeback=0 nr_congested=0 nr_immediate=0 nr_activate=0 nr_ref_keep=0 nr_unmap_fail=0 priority=1 flags=RECLAIM_WB_FILE|RECLAIM_WB_ASYNC
<...>-455459 [003] .... 2691911.664777: mm_vmscan_lru_shrink_inactive: nid=0 nr_scanned=21 nr_reclaimed=0 nr_dirty=0 nr_writeback=0 nr_congested=0 nr_immediate=0 nr_activate=0 nr_ref_keep=0 nr_unmap_fail=0 priority=0 flags=RECLAIM_WB_FILE|RECLAIM_WB_ASYNC
<...>-455459 [003] .... 2691911.664780: mm_vmscan_lru_shrink_inactive: nid=0 nr_scanned=1 nr_reclaimed=0 nr_dirty=0 nr_writeback=0 nr_congested=0 nr_immediate=0 nr_activate=0 nr_ref_keep=0 nr_unmap_fail=0 priority=4 flags=RECLAIM_WB_FILE|RECLAIM_WB_ASYNC
<...>-455459 [003] .... 2691911.664783: mm_vmscan_lru_shrink_inactive: nid=0 nr_scanned=2 nr_reclaimed=0 nr_dirty=0 nr_writeback=0 nr_congested=0 nr_immediate=0 nr_activate=0 nr_ref_keep=0 nr_unmap_fail=0 priority=3 flags=RECLAIM_WB_FILE|RECLAIM_WB_ASYNC

>
> If it's a single page being locked, it's important to identify what held
> the page lock for 2 minutes, because that is potentially a missing
> unlock_page. The kernel in question is old -- 4.19.56. Are there any
> other modifications to that kernel?

We only backported some networking and hardware driver patches, no MM
changes.

Please let me know if there is any other information I can collect for
you before the machine gets rebooted.

Thanks.