From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S932921AbdDFLIA (ORCPT );
	Thu, 6 Apr 2017 07:08:00 -0400
Received: from mx2.suse.de ([195.135.220.15]:47799 "EHLO mx2.suse.de"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1754902AbdDFLHw (ORCPT );
	Thu, 6 Apr 2017 07:07:52 -0400
Date: Thu, 6 Apr 2017 13:07:47 +0200
From: Michal Hocko 
To: Reza Arbab 
Cc: Mel Gorman , linux-mm@kvack.org, Andrew Morton ,
	Vlastimil Babka , Andrea Arcangeli , Yasuaki Ishimatsu ,
	Tang Chen , qiuxishi@huawei.com, Kani Toshimitsu , slaoub@gmail.com,
	Joonsoo Kim , Andi Kleen , Zhang Zhen , David Rientjes ,
	Daniel Kiper , Igor Mammedov , Vitaly Kuznetsov , LKML ,
	Chris Metcalf , Dan Williams , Heiko Carstens , Lai Jiangshan ,
	Martin Schwidefsky 
Subject: Re: [PATCH 0/6] mm: make movable onlining suck less
Message-ID: <20170406110747.GJ5497@dhcp22.suse.cz>
References: <20170404183012.a6biape5y7vu6cjm@arbab-laptop>
 <20170404194122.GS15132@dhcp22.suse.cz>
 <20170404214339.6o4c4uhwudyhzbbo@arbab-laptop>
 <20170405064239.GB6035@dhcp22.suse.cz>
 <20170405092427.GG6035@dhcp22.suse.cz>
 <20170405145304.wxzfavqxnyqtrlru@arbab-laptop>
 <20170405154258.GR6035@dhcp22.suse.cz>
 <20170405173248.4vtdgk2kolbzztya@arbab-laptop>
 <20170405181502.GU6035@dhcp22.suse.cz>
 <20170405210214.GX6035@dhcp22.suse.cz>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20170405210214.GX6035@dhcp22.suse.cz>
User-Agent: Mutt/1.5.23 (2014-03-12)
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

On Wed 05-04-17 23:02:14, Michal Hocko wrote:
[...]
> OK, I was staring into the code and I guess I finally understand what is
> going on here. Looking at arch_add_memory->...->register_mem_sect_under_node
> was just misleading.
> I am still not 100% sure why, but we try to do the
> same thing later from register_one_node->link_mem_sections for nodes
> which were offline. I should have noticed this path before. And here
> is the difference from the previous code. We are past arch_add_memory
> and that path used to do __add_zone, which among other things will also
> resize node boundaries. I am not doing that anymore because I postpone
> that to the onlining phase. Jeez, this code is so convoluted my head
> spins.
>
> I am not really sure how to fix this. I suspect register_mem_sect_under_node
> should just ignore the online state of the node. But I wouldn't
> be all that surprised if this had some subtle reason as well. An
> alternative would be to actually move register_mem_sect_under_node out
> of register_new_memory and move it up the call stack, most probably to
> add_memory_resource. We have the range and can map it to the memblock,
> and so will not rely on the node range. I will sleep over it and
> hopefully come up with something tomorrow.

OK, so this is the most sensible way I was able to come up with. I
didn't get to test it yet, but from the above analysis it should work.
---
>From 6c99a3284ea70262e3f25cbe71826a57aeaa7ffd Mon Sep 17 00:00:00 2001
From: Michal Hocko 
Date: Thu, 6 Apr 2017 11:59:37 +0200
Subject: [PATCH] mm, memory_hotplug: split up register_one_node

Memory hotplug (add_memory_resource) has to reinitialize the node
infrastructure if the node is offline (i.e. one which went through the
complete add_memory(); remove_memory() cycle). That involves node
registration in the kobj infrastructure (register_node), the proper
association with cpus (register_cpu_under_node) and finally the
creation of node<->memblock symlinks (link_mem_sections).

The last part requires knowing node_start_pfn and node_spanned_pages,
which we currently have, but a later patch will postpone their
initialization to the onlining phase, which happens later.
In fact we do not need to rely on the early initialization even now,
because we know which range is currently being hot-added.

Split register_one_node into a core part which does all the work common
to the boot-time NUMA initialization and hotplug (__register_one_node).
register_one_node keeps the full initialization, while hotplug calls
__register_one_node and then calls link_mem_sections manually for the
proper range.

This shouldn't introduce any functional change.

Signed-off-by: Michal Hocko 
---
 drivers/base/node.c  | 51 ++++++++++++++++++++-------------------------------
 include/linux/node.h | 35 ++++++++++++++++++++++++++++++++++-
 mm/memory_hotplug.c  | 17 ++++++++++++++++-
 3 files changed, 70 insertions(+), 33 deletions(-)

diff --git a/drivers/base/node.c b/drivers/base/node.c
index 06294d69779b..dff5b53f7905 100644
--- a/drivers/base/node.c
+++ b/drivers/base/node.c
@@ -461,10 +461,9 @@ int unregister_mem_sect_under_nodes(struct memory_block *mem_blk,
 	return 0;
 }
 
-static int link_mem_sections(int nid)
+int link_mem_sections(int nid, unsigned long start_pfn, unsigned long nr_pages)
 {
-	unsigned long start_pfn = NODE_DATA(nid)->node_start_pfn;
-	unsigned long end_pfn = start_pfn + NODE_DATA(nid)->node_spanned_pages;
+	unsigned long end_pfn = start_pfn + nr_pages;
 	unsigned long pfn;
 	struct memory_block *mem_blk = NULL;
 	int err = 0;
@@ -552,10 +551,7 @@ static int node_memory_callback(struct notifier_block *self,
 	return NOTIFY_OK;
 }
 #endif /* CONFIG_HUGETLBFS */
-#else	/* !CONFIG_MEMORY_HOTPLUG_SPARSE */
-
-static int link_mem_sections(int nid) { return 0; }
-#endif /* CONFIG_MEMORY_HOTPLUG_SPARSE */
+#endif /* CONFIG_MEMORY_HOTPLUG_SPARSE */
 
 #if !defined(CONFIG_MEMORY_HOTPLUG_SPARSE) || \
 	!defined(CONFIG_HUGETLBFS)
@@ -569,39 +565,32 @@ static void init_node_hugetlb_work(int nid) { }
 #endif
 
-int register_one_node(int nid)
+int __register_one_node(int nid)
 {
-	int error = 0;
+	int p_node = parent_node(nid);
+	struct node *parent = NULL;
+	int error;
 	int cpu;
 
-	if (node_online(nid)) {
-		int p_node = parent_node(nid);
-		struct node *parent = NULL;
-
-		if (p_node != nid)
-			parent = node_devices[p_node];
-
-		node_devices[nid] = kzalloc(sizeof(struct node), GFP_KERNEL);
-		if (!node_devices[nid])
-			return -ENOMEM;
-
-		error = register_node(node_devices[nid], nid, parent);
+	if (p_node != nid)
+		parent = node_devices[p_node];
 
-		/* link cpu under this node */
-		for_each_present_cpu(cpu) {
-			if (cpu_to_node(cpu) == nid)
-				register_cpu_under_node(cpu, nid);
-		}
+	node_devices[nid] = kzalloc(sizeof(struct node), GFP_KERNEL);
+	if (!node_devices[nid])
+		return -ENOMEM;
 
-		/* link memory sections under this node */
-		error = link_mem_sections(nid);
+	error = register_node(node_devices[nid], nid, parent);
 
-		/* initialize work queue for memory hot plug */
-		init_node_hugetlb_work(nid);
+	/* link cpu under this node */
+	for_each_present_cpu(cpu) {
+		if (cpu_to_node(cpu) == nid)
+			register_cpu_under_node(cpu, nid);
 	}
 
-	return error;
+	/* initialize work queue for memory hot plug */
+	init_node_hugetlb_work(nid);
+	return error;
 }
 
 void unregister_one_node(int nid)
diff --git a/include/linux/node.h b/include/linux/node.h
index 2115ad5d6f19..2baa640d0b92 100644
--- a/include/linux/node.h
+++ b/include/linux/node.h
@@ -30,9 +30,38 @@ struct memory_block;
 extern struct node *node_devices[];
 typedef void (*node_registration_func_t)(struct node *);
 
+#ifdef CONFIG_MEMORY_HOTPLUG_SPARSE
+extern int link_mem_sections(int nid, unsigned long start_pfn, unsigned long nr_pages);
+#else
+static inline int link_mem_sections(int nid, unsigned long start_pfn, unsigned long nr_pages)
+{
+	return 0;
+}
+#endif
+
 extern void unregister_node(struct node *node);
 #ifdef CONFIG_NUMA
-extern int register_one_node(int nid);
+/* Core of the node registration - only memory hotplug should use this */
+extern int __register_one_node(int nid);
+
+/* Registers an online node */
+static inline int register_one_node(int nid)
+{
+	int error = 0;
+
+	if (node_online(nid)) {
+		struct pglist_data *pgdat = NODE_DATA(nid);
+
+		error = __register_one_node(nid);
+		if (error)
+			return error;
+		/* link memory sections under this node */
+		error = link_mem_sections(nid, pgdat->node_start_pfn, pgdat->node_spanned_pages);
+	}
+
+	return error;
+}
+
 extern void unregister_one_node(int nid);
 extern int register_cpu_under_node(unsigned int cpu, unsigned int nid);
 extern int unregister_cpu_under_node(unsigned int cpu, unsigned int nid);
@@ -46,6 +75,10 @@ extern void register_hugetlbfs_with_node(node_registration_func_t doregister,
 					 node_registration_func_t unregister);
 #endif
 #else
+static inline int __register_one_node(int nid)
+{
+	return 0;
+}
 static inline int register_one_node(int nid)
 {
 	return 0;
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index c2b018c808b7..2c731bdfa845 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1220,7 +1220,22 @@ int __ref add_memory_resource(int nid, struct resource *res, bool online)
 	node_set_online(nid);
 
 	if (new_node) {
-		ret = register_one_node(nid);
+		unsigned long start_pfn = start >> PAGE_SHIFT;
+		unsigned long nr_pages = size >> PAGE_SHIFT;
+
+		ret = __register_one_node(nid);
+		if (ret)
+			goto register_fail;
+
+		/*
+		 * link memory sections under this node. This is already
+		 * done when creating memory sections in register_new_memory,
+		 * but that depends on the node being registered, so offline
+		 * nodes have to go through register_node.
+		 * TODO clean up this mess.
+		 */
+		ret = link_mem_sections(nid, start_pfn, nr_pages);
+register_fail:
 		/*
 		 * If sysfs file of new node can't create, cpu on the node
 		 * can't be hot-added. There is no rollback way now.
-- 
2.11.0

-- 
Michal Hocko
SUSE Labs