From mboxrd@z Thu Jan  1 00:00:00 1970
From: Huang Ying <ying.huang@intel.com>
To: Peter Zijlstra, Mel Gorman
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Feng Tang,
	Huang Ying, Dave Hansen, Baolin Wang, Andrew Morton,
	Michal Hocko, Rik van Riel, Mel Gorman, Yang Shi, Zi Yan,
	Wei Xu, osalvador, Shakeel Butt, zhongjiang-ali
Subject: [PATCH -V11 3/3] memory tiering: skip to scan fast memory
Date: Fri, 28 Jan 2022 16:27:51 +0800
Message-Id: <20220128082751.593478-4-ying.huang@intel.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20220128082751.593478-1-ying.huang@intel.com>
References: <20220128082751.593478-1-ying.huang@intel.com>
MIME-Version: 1.0
If NUMA balancing is used only to optimize page placement among memory
types, not among sockets, the hot pages in the fast memory node cannot
be migrated (promoted) anywhere.  So it is unnecessary to scan the
pages in the fast memory node by changing their PTE/PMD mappings to
PROT_NONE; skipping the scan avoids the corresponding page faults as
well.

In testing, with only the memory tiering NUMA balancing mode enabled,
the patch reduces the number of NUMA balancing hint faults for the
DRAM node to almost 0, while the benchmark score does not change
visibly.

Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Suggested-by: Dave Hansen
Tested-by: Baolin Wang
Reviewed-by: Baolin Wang
Cc: Andrew Morton
Cc: Michal Hocko
Cc: Rik van Riel
Cc: Mel Gorman
Cc: Peter Zijlstra
Cc: Yang Shi
Cc: Zi Yan
Cc: Wei Xu
Cc: osalvador
Cc: Shakeel Butt
Cc: zhongjiang-ali
Cc: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org
---
 mm/huge_memory.c | 30 +++++++++++++++++++++---------
 mm/mprotect.c    | 13 ++++++++++++-
 2 files changed, 33 insertions(+), 10 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 406a3c28c026..9ce126cb0cfd 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -34,6 +34,7 @@
 #include <linux/oom.h>
 #include <linux/numa.h>
 #include <linux/page_owner.h>
+#include <linux/sched/sysctl.h>
 
 #include <asm/tlb.h>
 #include <asm/pgalloc.h>
@@ -1766,17 +1767,28 @@ int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
 	}
 #endif
 
-	/*
-	 * Avoid trapping faults against the zero page. The read-only
-	 * data is likely to be read-cached on the local CPU and
-	 * local/remote hits to the zero page are not interesting.
-	 */
-	if (prot_numa && is_huge_zero_pmd(*pmd))
-		goto unlock;
+	if (prot_numa) {
+		struct page *page;
+		/*
+		 * Avoid trapping faults against the zero page. The read-only
+		 * data is likely to be read-cached on the local CPU and
+		 * local/remote hits to the zero page are not interesting.
+		 */
+		if (is_huge_zero_pmd(*pmd))
+			goto unlock;
 
-	if (prot_numa && pmd_protnone(*pmd))
-		goto unlock;
+		if (pmd_protnone(*pmd))
+			goto unlock;
 
+		page = pmd_page(*pmd);
+		/*
+		 * Skip scanning top tier node if normal numa
+		 * balancing is disabled
+		 */
+		if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_NORMAL) &&
+		    node_is_toptier(page_to_nid(page)))
+			goto unlock;
+	}
 	/*
 	 * In case prot_numa, we are under mmap_read_lock(mm). It's critical
 	 * to not clear pmd intermittently to avoid race with MADV_DONTNEED
diff --git a/mm/mprotect.c b/mm/mprotect.c
index 0138dfcdb1d8..2fe03e695c81 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -29,6 +29,7 @@
 #include <linux/uaccess.h>
 #include <linux/mm_inline.h>
 #include <linux/pgtable.h>
+#include <linux/sched/sysctl.h>
 #include <asm/cacheflush.h>
 #include <asm/mmu_context.h>
 #include <asm/tlbflush.h>
@@ -83,6 +84,7 @@ static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 		 */
 		if (prot_numa) {
 			struct page *page;
+			int nid;
 
 			/* Avoid TLB flush if possible */
 			if (pte_protnone(oldpte))
@@ -109,7 +111,16 @@ static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 			 * Don't mess with PTEs if page is already on the node
 			 * a single-threaded process is running on.
 			 */
-			if (target_node == page_to_nid(page))
+			nid = page_to_nid(page);
+			if (target_node == nid)
+				continue;
+
+			/*
+			 * Skip scanning top tier node if normal numa
+			 * balancing is disabled
+			 */
+			if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_NORMAL) &&
+			    node_is_toptier(nid))
 				continue;
 		}
 
-- 
2.30.2
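
A note for readers trying this out: in the tiering-only mode introduced
earlier in this series (echo 2 > /proc/sys/kernel/numa_balancing), the
NUMA_BALANCING_NORMAL bit is clear, so the checks above skip top-tier
(DRAM) pages while slow-tier (e.g. PMEM) pages are still scanned.  The
following is a minimal user-space sketch of that decision, not kernel
code: the mode bits mirror the values this series adds to
include/linux/sched/sysctl.h, and node_is_toptier() is a stand-in (the
real one is derived from the node's memory tier).

#include <stdbool.h>
#include <stdio.h>

#define NUMA_BALANCING_DISABLED		0x0
#define NUMA_BALANCING_NORMAL		0x1
#define NUMA_BALANCING_MEMORY_TIERING	0x2

/* Stand-in: assume node 0 is DRAM (top tier) and node 1 is PMEM. */
static bool node_is_toptier(int nid)
{
	return nid == 0;
}

/* Mirrors the new checks in change_huge_pmd()/change_pte_range(). */
static bool skip_prot_numa_scan(unsigned int mode, int nid)
{
	return !(mode & NUMA_BALANCING_NORMAL) && node_is_toptier(nid);
}

int main(void)
{
	unsigned int mode = NUMA_BALANCING_MEMORY_TIERING; /* tiering only */

	printf("DRAM node scanned: %s\n",
	       skip_prot_numa_scan(mode, 0) ? "no" : "yes");	/* no */
	printf("PMEM node scanned: %s\n",
	       skip_prot_numa_scan(mode, 1) ? "no" : "yes");	/* yes */
	return 0;
}

Because the mode is a bitmask, enabling both modes (echo 3) sets the
NORMAL bit and the scan covers all nodes again, which is why the new
checks leave the combined mode unaffected.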