Subject: Re: [PATCH 2/4] mm/slub: Use mem_node to allocate a new slab
From: Vlastimil Babka <vbabka@suse.cz>
To: Srikar Dronamraju
Cc: Andrew Morton, linux-mm@kvack.org, Mel Gorman, Michael Ellerman,
 Sachin Sant, Michal Hocko, Christopher Lameter,
 linuxppc-dev@lists.ozlabs.org, Joonsoo Kim, Kirill Tkhai, Bharata B Rao
Date: Tue, 17 Mar 2020 14:53:26 +0100
Message-ID: <3d9629d4-4a6d-d2b5-28b7-58af497671c7@suse.cz>
In-Reply-To: <20200317134523.GB4334@linux.vnet.ibm.com>
References: <3381CD91-AB3D-4773-BA04-E7A072A63968@linux.vnet.ibm.com>
 <20200317131753.4074-1-srikar@linux.vnet.ibm.com>
 <20200317131753.4074-3-srikar@linux.vnet.ibm.com>
 <20200317134523.GB4334@linux.vnet.ibm.com>

On 3/17/20 2:45 PM, Srikar Dronamraju wrote:
> * Vlastimil Babka [2020-03-17 14:34:25]:
>
>> On 3/17/20 2:17 PM, Srikar Dronamraju wrote:
>> > Currently, while allocating a slab for an offline node, we use its
>> > associated node_numa_mem to search for a partial slab.
>> > If we don't find a partial slab, we try allocating a slab from the
>> > offline node using __alloc_pages_node. However, this is bound to fail.
>> >
>> > NIP [c00000000039a300] __alloc_pages_nodemask+0x130/0x3b0
>> > LR [c00000000039a3c4] __alloc_pages_nodemask+0x1f4/0x3b0
>> > Call Trace:
>> > [c0000008b36837f0] [c00000000039a3b4] __alloc_pages_nodemask+0x1e4/0x3b0 (unreliable)
>> > [c0000008b3683870] [c0000000003d1ff8] new_slab+0x128/0xcf0
>> > [c0000008b3683950] [c0000000003d6060] ___slab_alloc+0x410/0x820
>> > [c0000008b3683a40] [c0000000003d64a4] __slab_alloc+0x34/0x60
>> > [c0000008b3683a70] [c0000000003d78b0] __kmalloc_node+0x110/0x490
>> > [c0000008b3683af0] [c000000000343a08] kvmalloc_node+0x58/0x110
>> > [c0000008b3683b30] [c0000000003ffd44] mem_cgroup_css_online+0x104/0x270
>> > [c0000008b3683b90] [c000000000234e08] online_css+0x48/0xd0
>> > [c0000008b3683bc0] [c00000000023dedc] cgroup_apply_control_enable+0x2ec/0x4d0
>> > [c0000008b3683ca0] [c0000000002416f8] cgroup_mkdir+0x228/0x5f0
>> > [c0000008b3683d10] [c000000000520360] kernfs_iop_mkdir+0x90/0xf0
>> > [c0000008b3683d50] [c00000000043e400] vfs_mkdir+0x110/0x230
>> > [c0000008b3683da0] [c000000000441ee0] do_mkdirat+0xb0/0x1a0
>> > [c0000008b3683e20] [c00000000000b278] system_call+0x5c/0x68
>> >
>> > Mitigate this by allocating the new slab from the node_numa_mem.
>>
>> Are you sure this is really needed, and that the other 3 patches are not
>> enough for the current SLUB code to work as needed? It seems you are
>> changing the semantics here...
>>
>
> The other 3 patches are not enough because we don't carry the searchnode
> when the actual alloc_pages_node gets called.
>
> With only the 3 patches, we see the above panic; its signature is slightly
> different from the one Sachin first reported, which I have carried in the
> 1st patch.

Ah, I see. So that's the missing pgdat after your series [1], right?

That sounds like an argument for Michal's suggestion that pgdats exist and
have correctly populated zonelists for all possible nodes.
node_to_mem_node() could then be just a shortcut for the first zone's node
in the zonelist, so that fallback follows the topology.

[1] https://lore.kernel.org/linuxppc-dev/20200311110237.5731-1-srikar@linux.vnet.ibm.com/t/#m76e5b4c4084380b1d4b193d5aa0359b987f2290e
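
Something like this, perhaps (an untested sketch of that idea, not part of
this series; the helper name is made up, and it assumes every possible node
has a pgdat with a populated zonelist):

	/*
	 * Resolve a memoryless or offline node to the nearest node with
	 * memory, by taking the first zone of the node's fallback
	 * zonelist. Fallback then follows the NUMA topology encoded in
	 * the zonelist, instead of a separately maintained table.
	 */
	static int node_to_mem_node_via_zonelist(int node)
	{
		struct zonelist *zonelist = node_zonelist(node, GFP_KERNEL);
		struct zoneref *z;

		z = first_zones_zonelist(zonelist, gfp_zone(GFP_KERNEL), NULL);
		return z->zone ? zone_to_nid(z->zone) : numa_mem_id();
	}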
>> > --- a/mm/slub.c
>> > +++ b/mm/slub.c
>> > @@ -1970,14 +1970,8 @@ static void *get_partial(struct kmem_cache *s, gfp_t flags, int node,
>> >  		struct kmem_cache_cpu *c)
>> >  {
>> >  	void *object;
>> > -	int searchnode = node;
>> >
>> > -	if (node == NUMA_NO_NODE)
>> > -		searchnode = numa_mem_id();
>> > -	else if (!node_present_pages(node))
>> > -		searchnode = node_to_mem_node(node);
>> > -
>> > -	object = get_partial_node(s, get_node(s, searchnode), c, flags);
>> > +	object = get_partial_node(s, get_node(s, node), c, flags);
>> >  	if (object || node != NUMA_NO_NODE)
>> >  		return object;
>> >
>> >  	return get_any_partial(s, flags, c);
>>
>> I.e., in this if(), node will now never equal NUMA_NO_NODE (thanks to
>> the hunk below), thus the get_any_partial() call becomes dead code?
>>
>> > @@ -2470,6 +2464,11 @@ static inline void *new_slab_objects(struct kmem_cache *s, gfp_t flags,
>> >
>> >  	WARN_ON_ONCE(s->ctor && (flags & __GFP_ZERO));
>> >
>> > +	if (node == NUMA_NO_NODE)
>> > +		node = numa_mem_id();
>> > +	else if (!node_present_pages(node))
>> > +		node = node_to_mem_node(node);
>> > +
>> >  	freelist = get_partial(s, flags, node, c);
>> >
>> >  	if (freelist)
>> > @@ -2569,12 +2568,10 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
>> >  redo:
>> >
>> >  	if (unlikely(!node_match(page, node))) {
>> > -		int searchnode = node;
>> > -
>> >  		if (node != NUMA_NO_NODE && !node_present_pages(node))
>> > -			searchnode = node_to_mem_node(node);
>> > +			node = node_to_mem_node(node);
>> >
>> > -		if (unlikely(!node_match(page, searchnode))) {
>> > +		if (unlikely(!node_match(page, node))) {
>> >  			stat(s, ALLOC_NODE_MISMATCH);
>> >  			deactivate_slab(s, page, c->freelist, c);
>> >  			goto new_slab;
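
To spell out the dead-code concern: assuming new_slab_objects() remains
get_partial()'s only caller, as in the tree this patch is against, the
series effectively reduces get_partial() to the following (reconstructed
from the hunks above, untested):

	static void *get_partial(struct kmem_cache *s, gfp_t flags, int node,
				 struct kmem_cache_cpu *c)
	{
		void *object = get_partial_node(s, get_node(s, node), c, flags);

		/*
		 * new_slab_objects() has already replaced NUMA_NO_NODE
		 * with numa_mem_id(), so node != NUMA_NO_NODE always
		 * holds here and this return is always taken...
		 */
		if (object || node != NUMA_NO_NODE)
			return object;

		/* ...leaving the any-node fallback unreachable. */
		return get_any_partial(s, flags, c);
	}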