From mboxrd@z Thu Jan 1 00:00:00 1970
Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by smtp.subspace.kernel.org (Postfix) with ESMTPS id 02A3E6465
	for ; Tue, 22 Mar 2022 21:46:27 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id B01E4C340EC;
	Tue, 22 Mar 2022 21:46:27 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linux-foundation.org;
	s=korg; t=1647985587;
	bh=9IqJpby1rXcDo2CXiHZeKmOcsdDkqvvMLJ8P6jqL7js=;
	h=Date:To:From:In-Reply-To:Subject:From;
	b=NU13ZgZaOecsXMgL969FB7+Dl297YXkEvTedEQ4LZBaEGlT4wEsxmGCAJOdZtwpB2
	 vHsINt3DiUbjgW7TeVCBlSGNcapqIDleBr0qzT/lFULeY0Q50c/6DxtMtfJiUVYBlc
	 ywzQL9ynkOqN8Gu8/wH6aqviwoLhO6KX2+K/fYOQ=
Date: Tue, 22 Mar 2022 14:46:27 -0700
To: ziy@nvidia.com, zhongjiang-ali@linux.alibaba.com, weixugc@google.com,
	shy828301@gmail.com, shakeelb@google.com, riel@surriel.com,
	rdunlap@infradead.org, peterz@infradead.org, osalvador@suse.de,
	mhocko@suse.com, mgorman@techsingularity.net, hannes@cmpxchg.org,
	feng.tang@intel.com, dave.hansen@linux.intel.com,
	baolin.wang@linux.alibaba.com, ying.huang@intel.com,
	akpm@linux-foundation.org, patches@lists.linux.dev, linux-mm@kvack.org,
	mm-commits@vger.kernel.org, torvalds@linux-foundation.org,
	akpm@linux-foundation.org
From: Andrew Morton
In-Reply-To: <20220322143803.04a5e59a07e48284f196a2f9@linux-foundation.org>
Subject: [patch 156/227] memory tiering: skip to scan fast memory
Message-Id: <20220322214627.B01E4C340EC@smtp.kernel.org>
Precedence: bulk
X-Mailing-List: patches@lists.linux.dev

From: Huang Ying
Subject: memory tiering: skip to scan fast memory

If NUMA balancing is used to optimize page placement only among memory
types (tiers), not among sockets, hot pages in the fast memory node
cannot be migrated (promoted) anywhere.  It is therefore unnecessary to
scan the pages in the fast memory node by changing their PTE/PMD
mappings to PROT_NONE, and the corresponding NUMA hint page faults are
avoided as well.

In testing with only the memory tiering NUMA balancing mode enabled,
the patch reduces the number of NUMA balancing hint faults on the DRAM
node to almost zero, while the benchmark score does not change visibly.

Link: https://lkml.kernel.org/r/20220221084529.1052339-4-ying.huang@intel.com
Signed-off-by: "Huang, Ying"
Suggested-by: Dave Hansen
Tested-by: Baolin Wang
Reviewed-by: Baolin Wang
Acked-by: Johannes Weiner
Reviewed-by: Oscar Salvador
Reviewed-by: Yang Shi
Cc: Michal Hocko
Cc: Rik van Riel
Cc: Mel Gorman
Cc: Peter Zijlstra
Cc: Zi Yan
Cc: Wei Xu
Cc: Shakeel Butt
Cc: zhongjiang-ali
Cc: Feng Tang
Cc: Randy Dunlap
Signed-off-by: Andrew Morton
---

 mm/huge_memory.c |   30 +++++++++++++++++++++---------
 mm/mprotect.c    |   13 ++++++++++++-
 2 files changed, 33 insertions(+), 10 deletions(-)

--- a/mm/huge_memory.c~memory-tiering-skip-to-scan-fast-memory
+++ a/mm/huge_memory.c
@@ -34,6 +34,7 @@
 #include
 #include
 #include
+#include <linux/sched/sysctl.h>
 
 #include
 #include
@@ -1766,17 +1767,28 @@ int change_huge_pmd(struct vm_area_struc
 	}
 #endif
 
-	/*
-	 * Avoid trapping faults against the zero page. The read-only
-	 * data is likely to be read-cached on the local CPU and
-	 * local/remote hits to the zero page are not interesting.
-	 */
-	if (prot_numa && is_huge_zero_pmd(*pmd))
-		goto unlock;
+	if (prot_numa) {
+		struct page *page;
+		/*
+		 * Avoid trapping faults against the zero page. The read-only
+		 * data is likely to be read-cached on the local CPU and
+		 * local/remote hits to the zero page are not interesting.
+		 */
+		if (is_huge_zero_pmd(*pmd))
+			goto unlock;
 
-	if (prot_numa && pmd_protnone(*pmd))
-		goto unlock;
+		if (pmd_protnone(*pmd))
+			goto unlock;
+		page = pmd_page(*pmd);
+		/*
+		 * Skip scanning top tier node if normal numa
+		 * balancing is disabled
+		 */
+		if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_NORMAL) &&
+		    node_is_toptier(page_to_nid(page)))
+			goto unlock;
+	}
 
 	/*
 	 * In case prot_numa, we are under mmap_read_lock(mm). It's critical
 	 * to not clear pmd intermittently to avoid race with MADV_DONTNEED
--- a/mm/mprotect.c~memory-tiering-skip-to-scan-fast-memory
+++ a/mm/mprotect.c
@@ -29,6 +29,7 @@
 #include
 #include
 #include
+#include <linux/sched/sysctl.h>
 #include
 #include
 #include
@@ -83,6 +84,7 @@ static unsigned long change_pte_range(st
 	 */
 	if (prot_numa) {
 		struct page *page;
+		int nid;
 
 		/* Avoid TLB flush if possible */
 		if (pte_protnone(oldpte))
@@ -109,7 +111,16 @@ static unsigned long change_pte_range(st
 			 * Don't mess with PTEs if page is already on the node
 			 * a single-threaded process is running on.
 			 */
-			if (target_node == page_to_nid(page))
+			nid = page_to_nid(page);
+			if (target_node == nid)
+				continue;
+
+			/*
+			 * Skip scanning top tier node if normal numa
+			 * balancing is disabled
+			 */
+			if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_NORMAL) &&
+			    node_is_toptier(nid))
 				continue;
 		}
 
_
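The check added by both hunks is identical and can be exercised in
isolation.  The sketch below is a hypothetical userspace harness, not
kernel code: the NUMA_BALANCING_* values mirror the mode bitmask this
patch series uses for sysctl_numa_balancing_mode, and node_is_toptier()
is stubbed out here, since the real helper consults the machine's
memory-tier topology.

/* numa_gate.c - minimal sketch of the scan-gating logic (hypothetical). */
#include <stdbool.h>
#include <stdio.h>

/* Mirrors the kernel's NUMA balancing mode bitmask. */
#define NUMA_BALANCING_DISABLED		0x0
#define NUMA_BALANCING_NORMAL		0x1	/* balance among sockets */
#define NUMA_BALANCING_MEMORY_TIERING	0x2	/* promote slow -> fast only */

static int sysctl_numa_balancing_mode = NUMA_BALANCING_MEMORY_TIERING;

/* Stub for node_is_toptier(): assume node 0 is DRAM, node 1 is slow memory. */
static bool node_is_toptier(int nid)
{
	return nid == 0;
}

/*
 * The condition from the patch: when normal NUMA balancing is disabled,
 * a page on a top-tier node has nowhere to be promoted, so making its
 * PTE/PMD PROT_NONE would only generate useless hint faults.
 */
static bool skip_numa_hint_scan(int nid)
{
	return !(sysctl_numa_balancing_mode & NUMA_BALANCING_NORMAL) &&
	       node_is_toptier(nid);
}

int main(void)
{
	/* In tiering-only mode this prints "node 0: skip" and "node 1: scan". */
	for (int nid = 0; nid <= 1; nid++)
		printf("node %d: %s\n", nid,
		       skip_numa_hint_scan(nid) ? "skip" : "scan");
	return 0;
}

With only the memory tiering mode enabled, fast-node pages are never
made PROT_NONE and so take no hint faults, while slow-node pages remain
candidates for promotion; enabling the NUMA_BALANCING_NORMAL bit as
well restores conventional socket-to-socket scanning of top-tier nodes.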