From: Yang Shi
Date: Mon, 8 Mar 2021 16:03:47 -0800
Subject: Re: [PATCH 03/10] mm/migrate: update node demotion order on hotplug events
To: Dave Hansen
Cc: Linux Kernel Mailing List, Linux MM, Yang Shi, David Rientjes,
 Huang Ying, Dan Williams, David Hildenbrand, Oscar Salvador
In-Reply-To: <20210304235955.05514241@viggo.jf.intel.com>
References: <20210304235949.7922C1C3@viggo.jf.intel.com>
 <20210304235955.05514241@viggo.jf.intel.com>

On Thu, Mar 4, 2021 at 4:00 PM Dave Hansen wrote:
>
> From: Dave Hansen
>
> Reclaim-based migration attempts to optimize data placement in
> memory based on the system topology.  If the system changes, so must
> the migration ordering.
>
> The implementation is conceptually simple and entirely unoptimized.
> On any memory or CPU hotplug event, assume that a node was added or
> removed and recalculate all migration targets.  This ensures that the
> node_demotion[] array is always ready to be used in case the new
> reclaim mode is enabled.
>
> This recalculation is far from optimal, most glaringly in that it does
> not even attempt to figure out whether the hotplug event would have any
> *actual* effect on the demotion order.  But, given the expected
> paucity of hotplug events, this should be fine.
>
> === What does RCU provide? ===
>
> Imagine a simple loop which walks down the demotion path looking
> for the last node:
>
>         terminal_node = start_node;
>         while (node_demotion[terminal_node] != NUMA_NO_NODE) {
>                 terminal_node = node_demotion[terminal_node];
>         }
>
> The initial values are:
>
>         node_demotion[0] = 1;
>         node_demotion[1] = NUMA_NO_NODE;
>
> and are updated to:
>
>         node_demotion[0] = NUMA_NO_NODE;
>         node_demotion[1] = 0;
>
> What guarantees that the loop does not observe:
>
>         node_demotion[0] = 1;
>         node_demotion[1] = 0;
>
> and loop forever?
>
> With RCU, an rcu_read_lock()/rcu_read_unlock() pair can be placed
> around the loop.  Since the write side does a synchronize_rcu(), any
> loop that observed the old contents is known to have completed once
> synchronize_rcu() returns.
>
> RCU, combined with disable_all_migrate_targets(), ensures that
> the old migration state is not visible by the time
> __set_migration_target_nodes() is called.
>
> === What does READ_ONCE() provide? ===
>
> READ_ONCE() forbids the compiler from merging or reordering
> successive reads of node_demotion[].  This ensures that any
> updates are *eventually* observed.
>
> Consider the above loop again.  The compiler could theoretically
> read the entirety of node_demotion[] into local storage
> (registers), never go back to memory, and *permanently*
> observe stale values for node_demotion[].
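
Putting those two guarantees together, a reader that needs a
consistent view of a whole demotion path would look roughly like the
sketch below.  This is illustrative only (last_demotion_node() is a
made-up name); it just combines the loop quoted above with the rules
the patch documents:

        /*
         * Hold rcu_read_lock() across *all* reads so that a
         * concurrent update cannot expose a transient cycle, and
         * use READ_ONCE() so the compiler cannot merge or cache
         * the loads.
         */
        static int last_demotion_node(int start_node)
        {
                int terminal_node = start_node;
                int next;

                rcu_read_lock();
                while ((next = READ_ONCE(node_demotion[terminal_node])) != NUMA_NO_NODE)
                        terminal_node = next;
                rcu_read_unlock();

                return terminal_node;
        }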
>
> Note: RCU does not provide any universal compiler-ordering
> guarantees:
>
>         https://lore.kernel.org/lkml/20150921204327.GH4029@linux.vnet.ibm.com/
>
> Signed-off-by: Dave Hansen
> Cc: Yang Shi
> Cc: David Rientjes
> Cc: Huang Ying
> Cc: Dan Williams
> Cc: David Hildenbrand
> Cc: osalvador
> ---
>
>  b/mm/migrate.c |  159 +++++++++++++++++++++++++++++++++++++++++++++++--------
>  1 file changed, 137 insertions(+), 22 deletions(-)
>
> diff -puN mm/migrate.c~enable-numa-demotion mm/migrate.c
> --- a/mm/migrate.c~enable-numa-demotion 2021-03-04 15:35:53.670806436 -0800
> +++ b/mm/migrate.c      2021-03-04 15:35:53.677806436 -0800
> @@ -49,6 +49,7 @@
>  #include <...>
>  #include <...>
>  #include <...>
> +#include <linux/memory.h>
>
>  #include <...>
>
> @@ -1192,8 +1193,12 @@ out:
>   */
>
>  /*
> - * Writes to this array occur without locking.  READ_ONCE()
> - * is recommended for readers to ensure consistent reads.
> + * Writes to this array occur without locking.  Cycles are
> + * not allowed: Node X demotes to Y which demotes to X...
> + *
> + * If multiple reads are performed, a single rcu_read_lock()
> + * must be held over all reads to ensure that no cycles are
> + * observed.
>   */
>  static int node_demotion[MAX_NUMNODES] __read_mostly =
>         {[0 ...  MAX_NUMNODES - 1] = NUMA_NO_NODE};
>
> @@ -1209,13 +1214,22 @@ static int node_demotion[MAX_NUMNODES] _
>   */
>  int next_demotion_node(int node)
>  {
> +       int target;
> +
>         /*
> -        * node_demotion[] is updated without excluding
> -        * this function from running.  READ_ONCE() avoids
> -        * reading multiple, inconsistent 'node' values
> -        * during an update.
> +        * node_demotion[] is updated without excluding this
> +        * function from running.  RCU doesn't provide any
> +        * compiler barriers, so the READ_ONCE() is required
> +        * to avoid compiler reordering or read merging.
> +        *
> +        * Make sure to use RCU over entire code blocks if
> +        * node_demotion[] reads need to be consistent.
>          */
> -       return READ_ONCE(node_demotion[node]);
> +       rcu_read_lock();
> +       target = READ_ONCE(node_demotion[node]);
> +       rcu_read_unlock();
> +
> +       return target;
>  }
>
>  /*
> @@ -3220,8 +3234,9 @@ void migrate_vma_finalize(struct migrate
>  EXPORT_SYMBOL(migrate_vma_finalize);
>  #endif /* CONFIG_DEVICE_PRIVATE */
>
> +#if defined(CONFIG_MEMORY_HOTPLUG)
>  /* Disable reclaim-based migration. */
> -static void disable_all_migrate_targets(void)
> +static void __disable_all_migrate_targets(void)
>  {
>         int node;
>
> @@ -3229,6 +3244,25 @@ static void disable_all_migrate_targets(
>                 node_demotion[node] = NUMA_NO_NODE;
>  }
>
> +static void disable_all_migrate_targets(void)
> +{
> +       __disable_all_migrate_targets();
> +
> +       /*
> +        * Ensure that the "disable" is visible across the system.
> +        * Readers will see either a combination of before+disable
> +        * state or disable+after.  They will never see before and
> +        * after state together.
> +        *
> +        * The before+after state together might have cycles and
> +        * could cause readers to do things like loop until this
> +        * function finishes.  This ensures they can only see a
> +        * single "bad" read and would, for instance, only loop
> +        * once.
> +        */
> +       synchronize_rcu();
> +}
> +
>  /*
>   * Find an automatic demotion target for 'node'.
>   * Failing here is OK.  It might just indicate
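
The ordering that wrapper enforces is easiest to see with the
two-node example from the changelog above.  Sketching the update
sequence (not the patch's literal code, which splits these steps
between disable_all_migrate_targets() and
__set_migration_target_nodes()):

        /* Step 1: disable; readers now find only NUMA_NO_NODE. */
        node_demotion[0] = NUMA_NO_NODE;
        node_demotion[1] = NUMA_NO_NODE;

        /*
         * Step 2: wait for pre-existing readers.  After this
         * returns, nobody can still be walking the old 0 -> 1 path.
         */
        synchronize_rcu();

        /* Step 3: publish the new order. */
        node_demotion[1] = 0;

A reader inside rcu_read_lock() can observe old+disabled or
disabled+new entries, but never a mix of old and new that would form
the 0 -> 1 -> 0 cycle.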
> - * > - * The before+after state together might have cycles and > - * could cause readers to do things like loop until this > - * function finishes. This ensures they can only see a > - * single "bad" read and would, for instance, only loop > - * once. > - */ > - smp_wmb(); > - > - /* > * Allocations go close to CPUs, first. Assume that > * the migration path starts at the nodes with CPUs. > */ > @@ -3347,3 +3367,98 @@ static void set_migration_target_nodes(v > __set_migration_target_nodes(); > put_online_mems(); > } > + > +/* > + * React to hotplug events that might affect the migration targets > + * like events that online or offline NUMA nodes. > + * > + * The ordering is also currently dependent on which nodes have > + * CPUs. That means we need CPU on/offline notification too. > + */ > +static int migration_online_cpu(unsigned int cpu) > +{ > + set_migration_target_nodes(); > + return 0; > +} > + > +static int migration_offline_cpu(unsigned int cpu) > +{ > + set_migration_target_nodes(); > + return 0; > +} > + > +/* > + * This leaves migrate-on-reclaim transiently disabled between > + * the MEM_GOING_OFFLINE and MEM_OFFLINE events. This runs > + * whether reclaim-based migration is enabled or not, which > + * ensures that the user can turn reclaim-based migration at > + * any time without needing to recalculate migration targets. > + * > + * These callbacks already hold get_online_mems(). That is why > + * __set_migration_target_nodes() can be used as opposed to > + * set_migration_target_nodes(). > + */ > +static int __meminit migrate_on_reclaim_callback(struct notifier_block *self, > + unsigned long action, void *arg) > +{ > + switch (action) { > + case MEM_GOING_OFFLINE: > + /* > + * Make sure there are not transient states where > + * an offline node is a migration target. This > + * will leave migration disabled until the offline > + * completes and the MEM_OFFLINE case below runs. > + */ > + disable_all_migrate_targets(); > + > + /* > + * Ensure the disable operation is globally visible. > + * This avoids readers ever being able to > + * simultaneously observe the old (pre-hotplug) and > + * new (post-hotplug) migration targets. > + */ > + synchronize_rcu(); It seems disable_all_migrate_targets() already has synchronize_rcu() called. We don't need to call it twice. Otherwise, it looks good to me. Reviewed-by: Yang Shi > + break; > + case MEM_OFFLINE: > + case MEM_ONLINE: > + /* > + * Recalculate the target nodes once the node > + * reaches its final state (online or offline). > + */ > + __set_migration_target_nodes(); > + break; > + case MEM_CANCEL_OFFLINE: > + /* > + * MEM_GOING_OFFLINE disabled all the migration > + * targets. Reenable them. > + */ > + __set_migration_target_nodes(); > + break; > + case MEM_GOING_ONLINE: > + case MEM_CANCEL_ONLINE: > + break; > + } > + > + return notifier_from_errno(0); > +} > + > +static int __init migrate_on_reclaim_init(void) > +{ > + int ret; > + > + ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "migrate on reclaim", > + migration_online_cpu, > + migration_offline_cpu); > + /* > + * In the unlikely case that this fails, the automatic > + * migration targets may become suboptimal for nodes > + * where N_CPU changes. With such a small impact in a > + * rare case, do not bother trying to do anything special. > + */ > + WARN_ON(ret < 0); > + > + hotplug_memory_notifier(migrate_on_reclaim_callback, 100); > + return 0; > +} > +late_initcall(migrate_on_reclaim_init); > +#endif /* CONFIG_MEMORY_HOTPLUG */ > _ >