From: Yu Xu <xuyu@linux.alibaba.com>
To: Hugh Dickins <hughd@google.com>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	akpm@linux-foundation.org, gavin.dg@linux.alibaba.com,
	Greg Thelen <gthelen@google.com>, Wei Xu <weixugc@google.com>,
	Matthew Wilcox <willy@infradead.org>,
	Nicholas Piggin <npiggin@gmail.com>,
	Vlastimil Babka <vbabka@suse.cz>
Subject: Re: [PATCH] mm, thp: relax migration wait when failed to get tail page
Date: Mon, 7 Jun 2021 15:24:41 +0800	[thread overview]
Message-ID: <6c4e0df7-1f06-585f-d113-f38db6c819b5@linux.alibaba.com> (raw)
In-Reply-To: <alpine.LSU.2.11.2106020831590.6388@eggly.anvils>

On 6/2/21 11:57 PM, Hugh Dickins wrote:
> On Wed, 2 Jun 2021, Yu Xu wrote:
>> On 6/2/21 12:55 AM, Hugh Dickins wrote:
>>> On Wed, 2 Jun 2021, Xu Yu wrote:
>>>
>>>> We notice that a hung task happens in a corner-case but practical scenario
>>>> when CONFIG_PREEMPT_NONE is enabled, as follows.
>>>>
>>>> Process 0                       Process 1                     Process 2..Inf
>>>> split_huge_page_to_list
>>>>       unmap_page
>>>>           split_huge_pmd_address
>>>>                                   __migration_entry_wait(head)
>>>>                                                                 __migration_entry_wait(tail)
>>>>       remap_page (roll back)
>>>>           remove_migration_ptes
>>>>               rmap_walk_anon
>>>>                   cond_resched
>>>>
>>>> Here __migration_entry_wait(tail) occurs in kernel space, e.g., during
>>>> copy_to_user, which immediately faults again without rescheduling and
>>>> thus occupies the CPU fully.
>>>>
>>>> When too many processes are performing __migration_entry_wait on the
>>>> tail page, remap_page never gets to complete after cond_resched.
>>>>
>>>> This patch relaxes __migration_entry_wait on the tail page, thus giving
>>>> remap_page a chance to complete.
>>>>
>>>> Signed-off-by: Gang Deng <gavin.dg@linux.alibaba.com>
>>>> Signed-off-by: Xu Yu <xuyu@linux.alibaba.com>
>>>
>>> Well caught: you're absolutely right that there's a bug there.
>>> But isn't cond_resched() just papering over the real bug, and
>>> what it should do is a "page = compound_head(page);" before the
>>> get_page_unless_zero()? How does that work out in your testing?
>>
>> compound_head works. The patched kernel has stayed alive for hours under
>> our reproducer, which usually hangs the vanilla kernel within tens of
>> minutes at most.
> 
> Oh, that's good news, thanks.
> 
> (It's still likely that a well-placed cond_resched() somewhere in
> mm/gup.c would also be a good idea, but none of us have yet got
> around to identifying where.)

Neither have we. If we really had to do it outside of __migration_entry_wait,
the return value of __migration_entry_wait would be needed, and many related
functions would have to be updated, which may be undesirable.
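
Roughly, the plumbing would look something like the sketch below. To be
clear, this is purely hypothetical -- none of these signature changes
exist today, and the list of callers is from memory:

    /*
     * Hypothetical: let the wait functions report whether they actually
     * slept, so callers can cond_resched() before retrying the fault.
     */
    bool __migration_entry_wait(struct mm_struct *mm, pte_t *ptep,
                                spinlock_t *ptl);
    bool migration_entry_wait(struct mm_struct *mm, pmd_t *pmd,
                              unsigned long address);

and then every caller (do_swap_page() in mm/memory.c, the FOLL_MIGRATION
retry in mm/gup.c, the hugetlb variant, ...) would have to check the
result and cond_resched() before retrying, which is a lot of churn.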

> 
>>
>> If we use compound_head, the behavior of __migration_entry_wait(tail)
>> changes from "retry fault" to "prevent THP from being split". Is that
>> right?  Then which is preferred? If it were me, I would prefer "retry
>> fault".
> 
> As Matthew remarked, you are asking very good questions, and split
> migration entries are difficult to think about.  But I believe you'll
> find it works out okay.
> 
> The point of *put_and_* wait_on_page_locked() is that it does drop
> the page reference you acquired with get_page_unless_zero, as soon
> as the page is on the wait queue, before actually waiting.
> 
> So splitting the THP is only prevented for a brief interval.  Now,
> it's true that if there are very many tasks faulting on portions
> of the huge page, in that interval between inserting the migration
> entries and freezing the huge page's refcount to 0, they can reduce
> the chance of splitting considerably.  But that's not an excuse
> for doing get_page_unless_zero() on the wrong thing, as it was doing.
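
Right.  For other readers of the thread, my understanding of the ordering
that matters is roughly the following.  This is a deliberately simplified
sketch, not the real wait_on_page_bit_common() code in mm/filemap.c, and
the two waitqueue helpers are made up:

    static void put_and_wait_sketch(struct page *page)
    {
    	/* caller already did get_page_unless_zero(page) under the ptl */
    	add_self_to_page_waitqueue(page);	/* hypothetical helper */

    	/*
    	 * The extra reference is dropped *before* sleeping, so the waiter
    	 * pins the compound page only for the short window needed to get
    	 * onto the wait queue; it does not hold up the refcount freeze in
    	 * split_huge_page_to_list() for the whole wait.
    	 */
    	put_page(page);

    	/* simplified: the real code loops and re-checks PG_locked */
    	io_schedule();				/* woken by unlock_page() */

    	remove_self_from_page_waitqueue(page);	/* hypothetical helper */
    }

So with compound_head, a waiter only delays the split for that brief
window, rather than pinning the head page for the whole wait.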

We have finally come around to your solution, i.e., compound_head.

In that case, who should resend the compound_head patch for this issue?
Shall we send it with your Signed-off-by?
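
For reference, the change boils down to the one-liner you suggested;
roughly the hunk below (context lines are from our local tree, so the
final patch may look slightly different):

--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ ... @@ void __migration_entry_wait(struct mm_struct *mm, pte_t *ptep,
 	page = migration_entry_to_page(entry);
+	/*
+	 * A THP keeps its reference count on the head page, so
+	 * get_page_unless_zero() on a tail page fails and the fault is
+	 * simply retried, spinning; take the reference on, and wait on,
+	 * the head page instead.
+	 */
+	page = compound_head(page);
 
 	if (!get_page_unless_zero(page))
 		goto out;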

> 
> Hugh
> 

-- 
Thanks,
Yu



Thread overview: 15+ messages
2021-06-01 16:31 [PATCH] mm, thp: relax migration wait when failed to get tail page Xu Yu
2021-06-01 16:49 ` Matthew Wilcox
2021-06-01 16:55 ` Hugh Dickins
2021-06-01 17:11   ` Matthew Wilcox
2021-06-01 19:10     ` Hugh Dickins
2021-06-01 20:27       ` Matthew Wilcox
2021-06-02  3:27       ` Yu Xu
2021-06-02 11:58         ` Matthew Wilcox
2021-06-02 12:59           ` Yu Xu
2021-06-02 13:20   ` Yu Xu
2021-06-02 15:57     ` Hugh Dickins
2021-06-07  7:24       ` Yu Xu [this message]
2021-06-08  4:44         ` Hugh Dickins
2021-06-08  5:43           ` Yu Xu
2021-06-08  6:53             ` Hugh Dickins
