From: Andrew Morton <akpm@linux-foundation.org>
To: akpm@linux-foundation.org, dan.j.williams@intel.com,
	dave.hansen@linux.intel.com, david@redhat.com,
	gthelen@google.com, kbusch@kernel.org, linux-mm@kvack.org,
	mhocko@suse.com, mm-commits@vger.kernel.org, osalvador@suse.de,
	rientjes@google.com, shy828301@gmail.com, tglx@linutronix.de,
	torvalds@linux-foundation.org, weixugc@google.com,
	ying.huang@intel.com, ziy@nvidia.com
Subject: [patch 05/19] mm/migrate: fix CPUHP state to update node demotion order
Date: Mon, 18 Oct 2021 15:15:35 -0700
Message-ID: <20211018221535.8ljd1xMT1%akpm@linux-foundation.org>
In-Reply-To: <20211018151438.f2246e2656c041b6753a8bdd@linux-foundation.org>

From: Huang Ying <ying.huang@intel.com>
Subject: mm/migrate: fix CPUHP state to update node demotion order

The node demotion order needs to be updated during CPU hotplug, because
whether a NUMA node has any CPUs may influence the demotion order.  The
update function must therefore run during CPU online/offline only after
node_states[N_CPU] has been updated; that update happens in
CPUHP_AP_ONLINE_DYN during CPU online and in CPUHP_MM_VMSTAT_DEAD during
CPU offline.  But in commit 884a6e5d1f93 ("mm/migrate: update node
demotion order on hotplug events"), the function that updates the node
demotion order is called in CPUHP_AP_ONLINE_DYN for both CPU online and
offline, which doesn't satisfy the ordering requirement.
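For reference, the registration that 884a6e5d1f93 introduced looked
roughly like this (a condensed sketch of the code removed in the hunk
below; both callbacks just recompute the demotion order).  Because both
directions hang off one dynamic AP state, the teardown callback runs in
the AP phase of offlining, i.e. before CPUHP_MM_VMSTAT_DEAD has cleared
the dying CPU from node_states[N_CPU]:

	static int migration_online_cpu(unsigned int cpu)
	{
		set_migration_target_nodes();
		return 0;
	}

	static int migration_offline_cpu(unsigned int cpu)
	{
		/* Torn down before CPUHP_MM_VMSTAT_DEAD: N_CPU is stale here. */
		set_migration_target_nodes();
		return 0;
	}

	ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "migrate on reclaim",
				migration_online_cpu,
				migration_offline_cpu);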

For example, consider 4 CPUs (P0, P1, P2, P3) in 2 sockets (P0 and P1 in
S0; P2 and P3 in S1).  Initially, the demotion order is

- S0 -> NUMA_NO_NODE
- S1 -> NUMA_NO_NODE

After P2 and P3 are offlined, S1 has no CPUs, so the demotion order
should have been changed to

- S0 -> S1
- S1 -> NUMA_NO_NODE

but it isn't changed, because the order-updating callback for CPU
hotplug runs before node_states[N_CPU] is refreshed and so doesn't see
the new nodemask.  Only on the next hotplug event (say, offlining P1)
does the callback see the updated nodemask and switch to the expected
order above.
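The stale mask matters because the update function seeds its walk with
the set of nodes that currently have CPUs.  A condensed fragment
(assuming the 5.15-era __set_migration_target_nodes() in mm/migrate.c;
elisions marked):

	static void __set_migration_target_nodes(void)
	{
		nodemask_t next_pass = NODE_MASK_NONE;
		/* ... */

		/*
		 * The demotion path is assumed to start at the nodes
		 * that have CPUs.  If node_states[N_CPU] has not yet
		 * been updated for this hotplug event, the order built
		 * below reflects the old topology.
		 */
		next_pass = node_states[N_CPU];

		/* ... walk outward from next_pass, assigning targets ... */
	}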

So this patch adds two new CPU hotplug states, CPUHP_AP_MM_DEMOTION_ONLINE
and CPUHP_MM_DEMOTION_DEAD, which run after CPUHP_AP_ONLINE_DYN during
CPU online and after CPUHP_MM_VMSTAT_DEAD during CPU offline
respectively, and registers the update function on them.
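Two details of the placement are worth spelling out.  The hotplug core
brings states up in increasing enum order on CPU online and tears them
down in decreasing order on CPU offline, so CPUHP_MM_DEMOTION_DEAD is
placed immediately *before* CPUHP_MM_VMSTAT_DEAD in the enum precisely
so that its teardown runs *after* vmstat has updated node_states[N_CPU].
And the offline state is registered with cpuhp_setup_state_nocalls()
because it has no startup callback to invoke, while the online state
uses cpuhp_setup_state() so that migration_online_cpu() also runs for
the already-online CPUs at registration time, computing the initial
demotion order.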

Link: https://lkml.kernel.org/r/20210929060351.7293-1-ying.huang@intel.com
Fixes: 884a6e5d1f93 ("mm/migrate: update node demotion order on hotplug events")
Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Yang Shi <shy828301@gmail.com>
Cc: Zi Yan <ziy@nvidia.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Wei Xu <weixugc@google.com>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: David Rientjes <rientjes@google.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Greg Thelen <gthelen@google.com>
Cc: Keith Busch <kbusch@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 include/linux/cpuhotplug.h |    4 ++++
 mm/migrate.c               |    8 +++++---
 2 files changed, 9 insertions(+), 3 deletions(-)

--- a/include/linux/cpuhotplug.h~mm-migrate-fix-cpuhp-state-to-update-node-demotion-order
+++ a/include/linux/cpuhotplug.h
@@ -72,6 +72,8 @@ enum cpuhp_state {
 	CPUHP_SLUB_DEAD,
 	CPUHP_DEBUG_OBJ_DEAD,
 	CPUHP_MM_WRITEBACK_DEAD,
+	/* Must be after CPUHP_MM_VMSTAT_DEAD */
+	CPUHP_MM_DEMOTION_DEAD,
 	CPUHP_MM_VMSTAT_DEAD,
 	CPUHP_SOFTIRQ_DEAD,
 	CPUHP_NET_MVNETA_DEAD,
@@ -240,6 +242,8 @@ enum cpuhp_state {
 	CPUHP_AP_BASE_CACHEINFO_ONLINE,
 	CPUHP_AP_ONLINE_DYN,
 	CPUHP_AP_ONLINE_DYN_END		= CPUHP_AP_ONLINE_DYN + 30,
+	/* Must be after CPUHP_AP_ONLINE_DYN for node_states[N_CPU] update */
+	CPUHP_AP_MM_DEMOTION_ONLINE,
 	CPUHP_AP_X86_HPET_ONLINE,
 	CPUHP_AP_X86_KVM_CLK_ONLINE,
 	CPUHP_AP_DTPM_CPU_ONLINE,
--- a/mm/migrate.c~mm-migrate-fix-cpuhp-state-to-update-node-demotion-order
+++ a/mm/migrate.c
@@ -3288,9 +3288,8 @@ static int __init migrate_on_reclaim_ini
 {
 	int ret;
 
-	ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "migrate on reclaim",
-				migration_online_cpu,
-				migration_offline_cpu);
+	ret = cpuhp_setup_state_nocalls(CPUHP_MM_DEMOTION_DEAD, "mm/demotion:offline",
+					NULL, migration_offline_cpu);
 	/*
 	 * In the unlikely case that this fails, the automatic
 	 * migration targets may become suboptimal for nodes
@@ -3298,6 +3297,9 @@ static int __init migrate_on_reclaim_ini
 	 * rare case, do not bother trying to do anything special.
 	 */
 	WARN_ON(ret < 0);
+	ret = cpuhp_setup_state(CPUHP_AP_MM_DEMOTION_ONLINE, "mm/demotion:online",
+				migration_online_cpu, NULL);
+	WARN_ON(ret < 0);
 
 	hotplug_memory_notifier(migrate_on_reclaim_callback, 100);
 	return 0;
_

