From: Huang Ying <ying.huang@intel.com>
To: linux-kernel@vger.kernel.org
Cc: Huang Ying, Andrew Morton, Michal Hocko, Rik van Riel, Mel Gorman,
	Peter Zijlstra, Dave Hansen, Yang Shi, Zi Yan, Wei Xu, osalvador,
	Shakeel Butt, linux-mm@kvack.org
Subject: [PATCH -V7 6/6] memory tiering: adjust hot threshold automatically
Date: Thu, 22 Jul 2021 11:18:19 +0800
Message-Id: <20210722031819.3446711-7-ying.huang@intel.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20210722031819.3446711-1-ying.huang@intel.com>
References: <20210722031819.3446711-1-ying.huang@intel.com>

It isn't easy for an administrator to determine the hot threshold.
So in this patch, a method to adjust the hot threshold automatically
is implemented.  The basic idea is to control the number of the
candidate promotion pages to match the promotion rate limit.  If the
hint page fault latency of a page is less than the hot threshold, we
will try to promote the page, and the page is called a candidate
promotion page.

If the number of the candidate promotion pages in the statistics
interval is much larger than the promotion rate limit, the hot
threshold will be decreased to reduce the number of the candidate
promotion pages.  Otherwise, the hot threshold will be increased to
increase the number of the candidate promotion pages.

To make the above method work, in each statistics interval, the total
number of the pages to check (on which the hint page faults occur)
and the hot/cold distribution need to be stable.  Because the page
tables are scanned linearly in NUMA balancing, but the hot/cold
distribution isn't uniform along the address space, the statistics
interval should be larger than the NUMA balancing scan period.  So in
the patch, the max scan period is used as the statistics interval,
and it works well in our tests.
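As a rough illustration (not part of the patch), the feedback loop
can be sketched in userspace C as below.  The 11/10 and 9/10 dead
band and the ref_th/16 adjustment step follow the kernel code in this
patch; the starting threshold, the reference candidate count, and the
simulated per-interval candidate counts are made-up numbers.

	/*
	 * Userspace sketch of the hot threshold feedback loop.  Each
	 * call represents one statistics interval (one max scan
	 * period in the kernel).
	 */
	#include <stdio.h>

	#define ADJUST_STEPS	16

	static unsigned long adjust_threshold(unsigned long th,
					      unsigned long ref_th,
					      unsigned long ref_cand,
					      unsigned long diff_cand)
	{
		unsigned long unit_th = ref_th / ADJUST_STEPS;

		if (diff_cand > ref_cand * 11 / 10)
			/* too many candidates: lower the threshold one step */
			th = (th >= 2 * unit_th) ? th - unit_th : unit_th;
		else if (diff_cand < ref_cand * 9 / 10)
			/* too few candidates: raise it, capped at ref_th */
			th = (th + unit_th <= ref_th) ? th + unit_th : ref_th;
		return th;
	}

	int main(void)
	{
		unsigned long th = 1000;	/* initial threshold (ms) */
		unsigned long ref_cand = 100;	/* rate limit per interval */
		/* simulated candidate promotions per interval */
		unsigned long diff_cand[] = { 150, 140, 130, 95, 60, 60 };
		int n = sizeof(diff_cand) / sizeof(diff_cand[0]);

		for (int i = 0; i < n; i++) {
			th = adjust_threshold(th, 1000, ref_cand, diff_cand[i]);
			printf("interval %d: %lu candidates -> threshold %lu ms\n",
			       i, diff_cand[i], th);
		}
		return 0;
	}

Running it shows the threshold dropping while candidates exceed 110%
of the target, holding steady inside the dead band, and climbing back
(never above the sysctl value) once candidates fall below 90%.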
The sysctl knob kernel.numa_balancing_hot_threshold_ms becomes the
initial value and the maximum value of the hot threshold.

The patch improves the score of the pmbench memory accessing
benchmark with an 80:20 read/write ratio and a normal access address
distribution by 3.9%, with 32.4% fewer NUMA page migrations, on a
2-socket Intel server with Optane DC Persistent Memory, because it
improves the accuracy of the hot page selection.

Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Cc: Andrew Morton
Cc: Michal Hocko
Cc: Rik van Riel
Cc: Mel Gorman
Cc: Peter Zijlstra
Cc: Dave Hansen
Cc: Yang Shi
Cc: Zi Yan
Cc: Wei Xu
Cc: osalvador
Cc: Shakeel Butt
Cc: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org
---
 include/linux/mmzone.h |  3 +++
 kernel/sched/fair.c    | 40 ++++++++++++++++++++++++++++++++++++----
 2 files changed, 39 insertions(+), 4 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index c71beb262c44..ed7eef917856 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -883,6 +883,9 @@ typedef struct pglist_data {
 #ifdef CONFIG_NUMA_BALANCING
 	unsigned long numa_ts;
 	unsigned long numa_nr_candidate;
+	unsigned long numa_threshold_ts;
+	unsigned long numa_threshold_nr_candidate;
+	unsigned long numa_threshold;
 #endif
 
 	/* Fields commonly accessed by the page reclaim scanner */
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index d4ce77cc22d3..572fc4efa3db 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1433,6 +1433,35 @@ static bool numa_migration_check_rate_limit(struct pglist_data *pgdat,
 	return true;
 }
 
+#define NUMA_MIGRATION_ADJUST_STEPS	16
+
+static void numa_migration_adjust_threshold(struct pglist_data *pgdat,
+					    unsigned long rate_limit,
+					    unsigned long ref_th)
+{
+	unsigned long now = jiffies, last_th_ts, th_period;
+	unsigned long unit_th, th;
+	unsigned long nr_cand, ref_cand, diff_cand;
+
+	th_period = msecs_to_jiffies(sysctl_numa_balancing_scan_period_max);
+	last_th_ts = pgdat->numa_threshold_ts;
+	if (now > last_th_ts + th_period &&
+	    cmpxchg(&pgdat->numa_threshold_ts, last_th_ts, now) == last_th_ts) {
+		ref_cand = rate_limit *
+			sysctl_numa_balancing_scan_period_max / 1000;
+		nr_cand = node_page_state(pgdat, PGPROMOTE_CANDIDATE);
+		diff_cand = nr_cand - pgdat->numa_threshold_nr_candidate;
+		unit_th = ref_th / NUMA_MIGRATION_ADJUST_STEPS;
+		th = pgdat->numa_threshold ? : ref_th;
+		if (diff_cand > ref_cand * 11 / 10)
+			th = max(th - unit_th, unit_th);
+		else if (diff_cand < ref_cand * 9 / 10)
+			th = min(th + unit_th, ref_th);
+		pgdat->numa_threshold_nr_candidate = nr_cand;
+		pgdat->numa_threshold = th;
+	}
+}
+
 bool should_numa_migrate_memory(struct task_struct *p, struct page * page,
 				int src_nid, int dst_cpu)
 {
@@ -1447,19 +1476,22 @@ bool should_numa_migrate_memory(struct task_struct *p, struct page * page,
 	if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING &&
 	    !node_is_toptier(src_nid)) {
 		struct pglist_data *pgdat;
-		unsigned long rate_limit, latency, th;
+		unsigned long rate_limit, latency, th, def_th;
 
 		pgdat = NODE_DATA(dst_nid);
 		if (pgdat_free_space_enough(pgdat))
 			return true;
 
-		th = sysctl_numa_balancing_hot_threshold;
+		def_th = sysctl_numa_balancing_hot_threshold;
+		rate_limit =
+			sysctl_numa_balancing_rate_limit << (20 - PAGE_SHIFT);
+		numa_migration_adjust_threshold(pgdat, rate_limit, def_th);
+
+		th = pgdat->numa_threshold ? : def_th;
 		latency = numa_hint_fault_latency(page);
 		if (latency > th)
 			return false;
 
-		rate_limit =
-			sysctl_numa_balancing_rate_limit << (20 - PAGE_SHIFT);
 		return numa_migration_check_rate_limit(pgdat, rate_limit,
 						       thp_nr_pages(page));
 	}
-- 
2.30.2