Date: Mon, 19 Jun 2023 13:16:41 +0100
From: Catalin Marinas
To: Stephen Rothwell
Cc: Vlastimil Babka, Andrew Morton, Linux Kernel Mailing List, Linux Next Mailing List
Subject: Re: linux-next: manual merge of the slab tree with the mm tree
In-Reply-To: <20230619140330.28437ac3@canb.auug.org.au>
References: <20230619140330.28437ac3@canb.auug.org.au>

On Mon, Jun 19, 2023 at 02:03:30PM +1000, Stephen Rothwell wrote:
> diff --cc mm/slab_common.c
> index 43c008165f56,90ecaface410..000000000000
> --- a/mm/slab_common.c
> +++ b/mm/slab_common.c
> @@@ -892,24 -876,17 +890,24 @@@ new_kmalloc_cache(int idx, enum kmalloc
>  		flags |= SLAB_CACHE_DMA;
>  	}
>  
> +	if (minalign > ARCH_KMALLOC_MINALIGN) {
> +		aligned_size = ALIGN(aligned_size, minalign);
> +		aligned_idx = __kmalloc_index(aligned_size, false);
> +	}
> +
> +	/*
> +	 * If CONFIG_MEMCG_KMEM is enabled, disable cache merging for
> +	 * KMALLOC_NORMAL caches.
> +	 */
> +	if (IS_ENABLED(CONFIG_MEMCG_KMEM) && (type == KMALLOC_NORMAL))
> +		flags |= SLAB_NO_MERGE;
> +
> -	kmalloc_caches[type][idx] = create_kmalloc_cache(
> -					kmalloc_info[idx].name[type],
> -					kmalloc_info[idx].size, flags, 0,
> -					kmalloc_info[idx].size);
> +	if (!kmalloc_caches[type][aligned_idx])
> +		kmalloc_caches[type][aligned_idx] = create_kmalloc_cache(
> +					kmalloc_info[aligned_idx].name[type],
> +					aligned_size, flags);
> +	if (idx != aligned_idx)
> +		kmalloc_caches[type][idx] = kmalloc_caches[type][aligned_idx];
> -
> -	/*
> -	 * If CONFIG_MEMCG_KMEM is enabled, disable cache merging for
> -	 * KMALLOC_NORMAL caches.
> -	 */
> -	if (IS_ENABLED(CONFIG_MEMCG_KMEM) && (type == KMALLOC_NORMAL))
> -		kmalloc_caches[type][idx]->refcount = -1;
>  }
>  
>  /*

Thanks Stephen. The resolution looks fine to me.

--
Catalin