Subject: Re: [PATCH] mm, thp: relax migration wait when failed to get tail page
To: Hugh Dickins
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 akpm@linux-foundation.org, gavin.dg@linux.alibaba.com, Greg Thelen,
 Wei Xu, Matthew Wilcox, Nicholas Piggin, Vlastimil Babka
From: Yu Xu <xuyu@linux.alibaba.com>
Message-ID: <6c4e0df7-1f06-585f-d113-f38db6c819b5@linux.alibaba.com>
Date: Mon, 7 Jun 2021 15:24:41 +0800

On 6/2/21 11:57 PM, Hugh Dickins wrote:
> On Wed, 2 Jun 2021, Yu Xu wrote:
>> On 6/2/21 12:55 AM, Hugh Dickins wrote:
>>> On Wed, 2 Jun 2021, Xu Yu wrote:
>>>
>>>> We noticed that a hung task happens in a corner-case but practical
>>>> scenario when CONFIG_PREEMPT_NONE is enabled, as follows.
>>>>
>>>> Process 0                       Process 1                       Process 2..Inf
>>>> split_huge_page_to_list
>>>>     unmap_page
>>>>         split_huge_pmd_address
>>>>                                 __migration_entry_wait(head)
>>>>                                                                 __migration_entry_wait(tail)
>>>>     remap_page (roll back)
>>>>         remove_migration_ptes
>>>>             rmap_walk_anon
>>>>                 cond_resched
>>>>
>>>> Where __migration_entry_wait(tail) occurs in kernel space, e.g., in
>>>> copy_to_user, which will immediately fault again without rescheduling,
>>>> and thus occupies the CPU fully.
>>>>
>>>> When there are too many processes performing __migration_entry_wait on
>>>> the tail page, remap_page will never be done after cond_resched.
>>>>
>>>> This relaxes __migration_entry_wait on the tail page, thus giving
>>>> remap_page a chance to complete.
>>>>
>>>> Signed-off-by: Gang Deng <gavin.dg@linux.alibaba.com>
>>>> Signed-off-by: Xu Yu <xuyu@linux.alibaba.com>
>>>
>>> Well caught: you're absolutely right that there's a bug there.
>>> But isn't cond_resched() just papering over the real bug, and
>>> what it should do is a "page = compound_head(page);" before the
>>> get_page_unless_zero()?  How does that work out in your testing?
>>
>> compound_head works.  The patched kernel has stayed alive for hours
>> under our reproducer, which usually makes the vanilla kernel hang
>> after tens of minutes at most.
>
> Oh, that's good news, thanks.
>
> (It's still likely that a well-placed cond_resched() somewhere in
> mm/gup.c would also be a good idea, but none of us have yet got
> around to identifying where.)

Neither have we.  If it really has to be done outside of
__migration_entry_wait, the return value of __migration_entry_wait is
needed, and many related functions would have to be updated, which may
be undesirable.

>
>> If we use compound_head, the behavior of __migration_entry_wait(tail)
>> changes from "retry fault" to "prevent THP from being split".  Is that
>> right?  Then which is preferred?  If it were me, I would prefer "retry
>> fault".
>
> As Matthew remarked, you are asking very good questions, and split
> migration entries are difficult to think about.
> But I believe you'll
> find it works out okay.
>
> The point of *put_and_* wait_on_page_locked() is that it does drop
> the page reference you acquired with get_page_unless_zero, as soon
> as the page is on the wait queue, before actually waiting.
>
> So splitting the THP is only prevented for a brief interval.  Now,
> it's true that if there are very many tasks faulting on portions
> of the huge page, in that interval between inserting the migration
> entries and freezing the huge page's refcount to 0, they can reduce
> the chance of splitting considerably.  But that's not an excuse
> for doing get_page_unless_zero() on the wrong thing, as it was doing.

Then we will go with your solution, i.e., compound_head.  In that case,
who should resend the compound_head patch for this issue?  Shall we
send it with your Signed-off-by?

>
> Hugh
>

--
Thanks,
Yu