Date: Tue, 8 Jun 2021 08:41:26 +0200
From: Michal Hocko <mhocko@suse.com>
To: Yang Shi
Cc: Zi Yan, nao.horiguchi@gmail.com, "Kirill A. Shutemov", Hugh Dickins,
 Andrew Morton, Linux MM <linux-mm@kvack.org>, Linux Kernel Mailing List
Subject: Re: [PATCH] mm: mempolicy: don't have to split pmd for huge zero page
References: <20210604203513.240709-1-shy828301@gmail.com>

On Mon 07-06-21 15:02:39, Yang Shi wrote:
> On Mon, Jun 7, 2021 at 11:55 AM Michal Hocko wrote:
> >
> > On Mon
07-06-21 10:00:01, Yang Shi wrote:
> > > On Sun, Jun 6, 2021 at 11:21 PM Michal Hocko wrote:
> > > >
> > > > On Fri 04-06-21 13:35:13, Yang Shi wrote:
> > > > > When trying to migrate pages to obey mempolicy, the huge zero page is
> > > > > split then the page table walk at PTE level just skips zero page. So it
> > > > > seems pointless to split huge zero page, it could be just skipped like
> > > > > base zero page.
> > > >
> > > > My THP knowledge is not the best but this is incorrect AFAICS. Huge zero
> > > > page is not split. We do split the pmd which is mapping the said page. I
> > > > suspect you refer to vm_normal_page when talking about a zero page but
> > > > please be aware that huge zero page is not a normal zero page. It is
> > > > allocated dynamically (see get_huge_zero_page).
> > >
> > > For a normal huge page, yes, split_huge_pmd() just splits the pmd. But
> > > actually the base zero pfn will be inserted into the PTEs when splitting
> > > a huge zero pmd. Please check __split_huge_zero_page_pmd() out.
> >
> > My bad. I didn't have a look all the way down there. The naming
> > suggested that this is purely a page table operation and I suspected
> > that the ptes just point to the offset of the THP.
> >
> > But I am obviously wrong here. Sorry about that.
> >
> > > I should make this point clearer in the commit log. Sorry for the confusion.
> > >
> > > >
> > > > So in the end your patch disables mbind of zero pages to a target node
> > > > and that is a regression.
> > >
> > > Do we really migrate the zero page? IIUC the zero page is just skipped
> > > by the vm_normal_page() check in queue_pages_pte_range(), isn't it?
> >
> > Yeah, normal zero pages are skipped indeed. I haven't studied why this
> > is the case yet. It surely sounds a bit suspicious because this is an
> > explicit request to migrate memory and if the zero page is misplaced it
> > should be moved. On the other hand this would increase RSS so maybe this
> > is the point.
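[For reference, an abridged sketch of mm/huge_memory.c:__split_huge_zero_page_pmd() as it looked around this era; locking, TLB flush and mmu-notifier details are elided, so this is illustrative rather than compilable. It shows the point made above: the huge zero page itself is never split, its pmd is simply replaced by a page table whose PTEs all map the base zero pfn.]

```c
static void __split_huge_zero_page_pmd(struct vm_area_struct *vma,
		unsigned long haddr, pmd_t *pmd)
{
	struct mm_struct *mm = vma->vm_mm;
	pgtable_t pgtable;
	pmd_t _pmd;
	int i;

	/* ... pmdp_huge_clear_flush() and notifier calls elided ... */
	pgtable = pgtable_trans_huge_withdraw(mm, pmd);
	pmd_populate(mm, &_pmd, pgtable);

	for (i = 0; i < HPAGE_PMD_NR; i++, haddr += PAGE_SIZE) {
		pte_t *pte, entry;

		/*
		 * Insert the base zero pfn, marked pte_special() so a
		 * later vm_normal_page() will skip it.
		 */
		entry = pfn_pte(my_zero_pfn(haddr), vma->vm_page_prot);
		entry = pte_mkspecial(entry);
		pte = pte_offset_map(&_pmd, haddr);
		set_pte_at(mm, haddr, pte, entry);
		pte_unmap(pte);
	}
	smp_wmb(); /* make ptes visible before the pmd */
	pmd_populate(mm, pmd, pgtable);
}
```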
>
> The zero page is a global shared page, I don't think "misplace"
> applies to it. It doesn't make too much sense to migrate a shared
> page. Actually there is a page mapcount check in migrate_page_add() to
> skip shared normal pages as well.

I didn't really mean to migrate the zero page itself. What I meant was to
instantiate a new page when the global one is on a different NUMA node
than the one the bind() requests. This can either be done by having a
per-NUMA-node zero page or by simply allocating a new page for the
exclusive mapping.

> > > > Have you tested the patch?
> > >
> > > No, just a build test. I thought this change was straightforward.
> > >
> > > > > Set ACTION_CONTINUE to prevent the walk_page_range() split the pmd for
> > > > > this case.
> > > >
> > > > Btw. this changelog is missing a problem statement. I suspect there is
> > > > no actual problem that it should fix and it is likely driven by reading
> > > > the code. Right?
> > >
> > > The actual problem is that it is pointless to split a huge zero pmd.
> > > Yes, it is driven by visual inspection.
> >
> > Is there any actual workload that cares? This is quite a subtle area so
> > I would be careful to do changes just because...
>
> I'm not sure whether there is a measurable improvement for actual
> workloads, but I believe this change does eliminate some unnecessary
> work.

I can see why being consistent here is a good argument. On the other
hand it would be imho better to look for reasons why zero pages are left
misplaced before making the code consistent. From a very quick git
archeology it seems that vm_normal_page has been used since MPOL_MF_MOVE
was introduced. At the time (dc9aa5b9d65fd) vm_normal_page hasn't skipped
through the zero page AFAICS. I do not remember all the details about zero
page (wrt. pte special) handling though so it might be hidden at some
other place. In any case the existing code doesn't really work properly.
The question is whether anybody actually cares but this is definitely
something worth looking into IMHO.

> I think the test shown in the previous email gives us some confidence
> that the change doesn't have regression.

Yes, this is true.
-- 
Michal Hocko
SUSE Labs
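[For context, a sketch of the change the thread is debating, i.e. skipping the huge-zero pmd in mm/mempolicy.c:queue_pages_pmd() via ACTION_CONTINUE. This is reconstructed from the discussion above, not the exact merged code, and is illustrative only.]

```c
static int queue_pages_pmd(pmd_t *pmd, spinlock_t *ptl, unsigned long addr,
			   unsigned long end, struct mm_walk *walk)
{
	struct page *page = pmd_page(*pmd);

	if (is_huge_zero_page(page)) {
		/*
		 * Tell the page walker to continue past this pmd, which
		 * stops walk_pmd_range() from calling split_huge_pmd():
		 * the huge zero page is skipped here, just as the base
		 * zero page is skipped by vm_normal_page() at PTE level.
		 */
		walk->action = ACTION_CONTINUE;
		goto unlock;
	}
	/* ... regular THP queueing/migration handling elided ... */
unlock:
	spin_unlock(ptl);
	return 0;
}
```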