Date: Wed, 01 Apr 2020 21:04:30 -0700
From: Andrew Morton
To: akpm@linux-foundation.org, bharata@linux.ibm.com, cl@linux.com, iamjoonsoo.kim@lge.com, ktkhai@virtuozzo.com, linux-mm@kvack.org, mgorman@techsingularity.net, mhocko@kernel.org, mm-commits@vger.kernel.org, mpe@ellerman.id.au, nathanl@linux.ibm.com, penberg@kernel.org, puvichakravarthy@in.ibm.com, rientjes@google.com, sachinp@linux.vnet.ibm.com, srikar@linux.vnet.ibm.com, torvalds@linux-foundation.org, vbabka@suse.cz
Subject: [patch 026/155] revert "topology: add support for node_to_mem_node() to determine the fallback node"
Message-ID: <20200402040430.KbIxSa9RQ%akpm@linux-foundation.org>
In-Reply-To: <20200401210155.09e3b9742e1c6e732f5a7250@linux-foundation.org>

From: Vlastimil Babka
Subject: revert "topology: add support for node_to_mem_node() to determine the fallback node"

This reverts commit ad2c8144418c6a81cefe65379fd47bbe8344cef2.

The function node_to_mem_node() was introduced by that commit for use in
SLUB on systems with memoryless nodes, but it turned out to be unreliable
on some architectures/configurations and a simpler solution exists than
fixing it up.  Thus commit 0715e6c516f1 ("mm, slub: prevent kmalloc_node
crashes and memory leaks") removed the only user of node_to_mem_node()
and we can revert the commit that introduced the function.

Link: http://lkml.kernel.org/r/20200320115533.9604-2-vbabka@suse.cz
Signed-off-by: Vlastimil Babka
Reviewed-by: Srikar Dronamraju
Cc: Joonsoo Kim
Cc: Bharata B Rao
Cc: Christopher Lameter
Cc: David Rientjes
Cc: Kirill Tkhai
Cc: Mel Gorman
Cc: Michael Ellerman
Cc: Michal Hocko
Cc: Nathan Lynch
Cc: Pekka Enberg
Cc: PUVICHAKRAVARTHY RAMACHANDRAN
Cc: Sachin Sant
Signed-off-by: Andrew Morton
---

 include/linux/topology.h |   17 -----------------
 mm/page_alloc.c          |    1 -
 2 files changed, 18 deletions(-)

--- a/include/linux/topology.h~revert-topology-add-support-for-node_to_mem_node-to-determine-the-fallback-node
+++ a/include/linux/topology.h
@@ -130,20 +130,11 @@ static inline int numa_node_id(void)
  * Use the accessor functions set_numa_mem(), numa_mem_id() and cpu_to_mem().
  */
 DECLARE_PER_CPU(int, _numa_mem_);
-extern int _node_numa_mem_[MAX_NUMNODES];
 
 #ifndef set_numa_mem
 static inline void set_numa_mem(int node)
 {
 	this_cpu_write(_numa_mem_, node);
-	_node_numa_mem_[numa_node_id()] = node;
-}
-#endif
-
-#ifndef node_to_mem_node
-static inline int node_to_mem_node(int node)
-{
-	return _node_numa_mem_[node];
 }
 #endif
 
@@ -166,7 +157,6 @@ static inline int cpu_to_mem(int cpu)
 static inline void set_cpu_numa_mem(int cpu, int node)
 {
 	per_cpu(_numa_mem_, cpu) = node;
-	_node_numa_mem_[cpu_to_node(cpu)] = node;
 }
 #endif
 
@@ -180,13 +170,6 @@ static inline int numa_mem_id(void)
 }
 #endif
 
-#ifndef node_to_mem_node
-static inline int node_to_mem_node(int node)
-{
-	return node;
-}
-#endif
-
 #ifndef cpu_to_mem
 static inline int cpu_to_mem(int cpu)
 {
--- a/mm/page_alloc.c~revert-topology-add-support-for-node_to_mem_node-to-determine-the-fallback-node
+++ a/mm/page_alloc.c
@@ -95,7 +95,6 @@ DEFINE_STATIC_KEY_TRUE(vm_numa_stat_key)
  */
 DEFINE_PER_CPU(int, _numa_mem_);	/* Kernel "local memory" node */
 EXPORT_PER_CPU_SYMBOL(_numa_mem_);
-int _node_numa_mem_[MAX_NUMNODES];
 #endif
 
 /* work_structs for global per-cpu drains */
_
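
For readers unfamiliar with the reverted interface: node_to_mem_node() simply
returned _node_numa_mem_[node], the nearby node with memory that
set_numa_mem()/set_cpu_numa_mem() recorded for each node.  Below is a minimal
userspace-style sketch of those semantics, assuming a hypothetical four-node
layout in which nodes 1 and 3 are memoryless; it is an illustration only, not
kernel code, and the node layout is invented for the example.

#include <stdio.h>

#define MAX_NUMNODES 4

/* Models _node_numa_mem_[]: per-node "nearest node with memory". */
static int node_numa_mem[MAX_NUMNODES];

/* Models the reverted node_to_mem_node(): look up the fallback node. */
static int node_to_mem_node(int node)
{
	return node_numa_mem[node];
}

int main(void)
{
	/* Assumed topology: nodes 1 and 3 are memoryless. */
	node_numa_mem[0] = 0;
	node_numa_mem[1] = 0;	/* node 1 falls back to node 0 */
	node_numa_mem[2] = 2;
	node_numa_mem[3] = 2;	/* node 3 falls back to node 2 */

	for (int node = 0; node < MAX_NUMNODES; node++)
		printf("node %d allocates from node %d\n",
		       node, node_to_mem_node(node));
	return 0;
}

The simpler solution mentioned in the changelog, commit 0715e6c516f1, removed
SLUB's reliance on this per-node table entirely, which is what makes the
revert safe.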