From: Xianting Tian <tian.xianting@h3c.com>
Subject: [PATCH] mm: Make allocator take care of memoryless numa node
Date: Mon, 12 Oct 2020 16:27:39 +0800
Message-ID: <20201012082739.15661-1-tian.xianting@h3c.com>
X-Mailer: git-send-email 2.17.1

On architectures such as powerpc, a CPU can belong to a NUMA node that has no
local memory attached; such a node is a memoryless node. In many places the
current kernel code does not check whether a node is memoryless before passing
it to an allocator interface. This patch makes the allocator interfaces map the
requested node through local_memory_node(), which is guaranteed to return a
node that has memory. local_memory_node() is a no-op on architectures that do
not support memoryless nodes.

The page allocator call path is:

  alloc_pages_node
    __alloc_pages_node
      __alloc_pages_nodemask

Because __alloc_pages_node and __alloc_pages_nodemask may also be called
directly, local_memory_node() is added only in __alloc_pages_nodemask, the
bottom of the call path, rather than in each caller.

Signed-off-by: Xianting Tian <tian.xianting@h3c.com>
---
(An illustrative, hypothetical caller showing the situation this patch
addresses is sketched after the diff.)

 include/linux/slab.h |  3 +++
 mm/page_alloc.c      |  1 +
 mm/slab.c            |  6 +++++-
 mm/slob.c            |  1 +
 mm/slub.c            | 10 ++++++++--
 5 files changed, 18 insertions(+), 3 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index 24df2393e..527e811e0 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -574,6 +574,7 @@ static __always_inline void *kmalloc_node(size_t size, gfp_t flags, int node)
 						flags, node, size);
 	}
 #endif
+	node = local_memory_node(node);
 	return __kmalloc_node(size, flags, node);
 }
 
@@ -626,6 +627,8 @@ static inline void *kmalloc_array_node(size_t n, size_t size, gfp_t flags,
 		return NULL;
 	if (__builtin_constant_p(n) && __builtin_constant_p(size))
 		return kmalloc_node(bytes, flags, node);
+
+	node = local_memory_node(node);
 	return __kmalloc_node(bytes, flags, node);
 }
 
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 6866533de..be63c62c2 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4878,6 +4878,7 @@ __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order, int preferred_nid,
 		return NULL;
 	}
 
+	preferred_nid = local_memory_node(preferred_nid);
 	gfp_mask &= gfp_allowed_mask;
 	alloc_mask = gfp_mask;
 	if (!prepare_alloc_pages(gfp_mask, order, preferred_nid, nodemask, &ac, &alloc_mask, &alloc_flags))
diff --git a/mm/slab.c b/mm/slab.c
index f658e86ec..263c2f2e1 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3575,7 +3575,10 @@ EXPORT_SYMBOL(kmem_cache_alloc_trace);
  */
 void *kmem_cache_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid)
 {
-	void *ret = slab_alloc_node(cachep, flags, nodeid, _RET_IP_);
+	void *ret;
+
+	nodeid = local_memory_node(nodeid);
+	ret = slab_alloc_node(cachep, flags, nodeid, _RET_IP_);
 
 	trace_kmem_cache_alloc_node(_RET_IP_, ret,
 				    cachep->object_size, cachep->size,
@@ -3593,6 +3596,7 @@ void *kmem_cache_alloc_node_trace(struct kmem_cache *cachep,
 {
 	void *ret;
 
+	nodeid = local_memory_node(nodeid);
 	ret = slab_alloc_node(cachep, flags, nodeid, _RET_IP_);
 
 	ret = kasan_kmalloc(cachep, ret, size, flags);
diff --git a/mm/slob.c b/mm/slob.c
index 7cc9805c8..1f1c25e06 100644
--- a/mm/slob.c
+++ b/mm/slob.c
@@ -636,6 +636,7 @@ EXPORT_SYMBOL(__kmalloc_node);
 
 void *kmem_cache_alloc_node(struct kmem_cache *cachep, gfp_t gfp, int node)
 {
+	node = local_memory_node(node);
 	return slob_alloc_node(cachep, gfp, node);
 }
 EXPORT_SYMBOL(kmem_cache_alloc_node);
diff --git a/mm/slub.c b/mm/slub.c
index 6d3574013..6e5e12b04 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2921,7 +2921,10 @@ EXPORT_SYMBOL(kmem_cache_alloc_trace);
 #ifdef CONFIG_NUMA
 void *kmem_cache_alloc_node(struct kmem_cache *s, gfp_t gfpflags, int node)
 {
-	void *ret = slab_alloc_node(s, gfpflags, node, _RET_IP_);
+	void *ret;
+
+	node = local_memory_node(node);
+	ret = slab_alloc_node(s, gfpflags, node, _RET_IP_);
 
 	trace_kmem_cache_alloc_node(_RET_IP_, ret,
 				    s->object_size, s->size, gfpflags, node);
@@ -2935,7 +2938,10 @@ void *kmem_cache_alloc_node_trace(struct kmem_cache *s,
 				    gfp_t gfpflags,
 				    int node, size_t size)
 {
-	void *ret = slab_alloc_node(s, gfpflags, node, _RET_IP_);
+	void *ret;
+
+	node = local_memory_node(node);
+	ret = slab_alloc_node(s, gfpflags, node, _RET_IP_);
 
 	trace_kmalloc_node(_RET_IP_, ret,
 			   size, s->size, gfpflags, node);
-- 
2.17.1
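
For illustration only, not part of the patch: below is a minimal, hypothetical
caller of the kind this change is meant to cover. The helper name
example_alloc_scratch_buf() and the 4 KiB size are made up. The point is that
cpu_to_node() can return a memoryless node on powerpc, and with this patch
kmalloc_node() remaps such a node via local_memory_node() internally, so the
caller no longer has to do it by hand.

#include <linux/slab.h>
#include <linux/topology.h>

/* Hypothetical example: allocate a scratch buffer near the CPU that uses it. */
static void *example_alloc_scratch_buf(int cpu)
{
	/* On powerpc this node may have no memory of its own. */
	int node = cpu_to_node(cpu);

	/*
	 * Previously a careful caller had to remap the node itself:
	 *	node = local_memory_node(node);
	 * With this patch kmalloc_node() performs that mapping internally,
	 * so a memoryless node id never reaches the low-level allocator.
	 */
	return kmalloc_node(4096, GFP_KERNEL, node);
}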