Date: Sat, 30 Apr 2022 11:48:56 +0000
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Vlastimil Babka
Cc: Marco Elver, Matthew WilCox, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Roman Gushchin,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2 10/23] mm/slab_common: cleanup kmem_cache_alloc{,node,lru}
Message-ID: <20220430114854.GB24925@ip-172-31-27-201.ap-northeast-1.compute.internal>
References: <20220414085727.643099-1-42.hyeyoo@gmail.com>
 <20220414085727.643099-11-42.hyeyoo@gmail.com>
 <228411f0-96b9-60b4-b734-444ea39a354b@suse.cz>
In-Reply-To: <228411f0-96b9-60b4-b734-444ea39a354b@suse.cz>

On Tue, Apr 26, 2022 at 08:01:27PM +0200, Vlastimil Babka wrote:
> On 4/14/22 10:57, Hyeonggon Yoo wrote:
> > Implement only __kmem_cache_alloc_node() in slab allocators and make
> > kmem_cache_alloc{,node,lru} wrapper of it.
> > 
> > Now that kmem_cache_alloc{,node,lru} is inline function, we should
> > use _THIS_IP_ instead of _RET_IP_ for consistency.
> 
> Hm yeah looks like this actually fixes some damage of obscured actual
> __RET_IP_ by the recent addition and wrapping of __kmem_cache_alloc_lru().
> 
> > Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> 
> Reviewed-by: Vlastimil Babka
> 
> Some nits:
> 
> > ---
> >  include/linux/slab.h | 52 ++++++++++++++++++++++++++++++++-----
> >  mm/slab.c            | 61 +++++---------------------------------------
> >  mm/slob.c            | 27 ++++++--------------
> >  mm/slub.c            | 35 +++++--------------------
> >  4 files changed, 67 insertions(+), 108 deletions(-)
> > 
> > diff --git a/include/linux/slab.h b/include/linux/slab.h
> > index 143830f57a7f..1b5bdcb0fd31 100644
> > --- a/include/linux/slab.h
> > +++ b/include/linux/slab.h
> > @@ -429,9 +429,52 @@ void *__kmalloc(size_t size, gfp_t flags)
> >  	return __kmalloc_node(size, flags, NUMA_NO_NODE);
> >  }
> >  
> > -void *kmem_cache_alloc(struct kmem_cache *s, gfp_t flags) __assume_slab_alignment __malloc;
> > -void *kmem_cache_alloc_lru(struct kmem_cache *s, struct list_lru *lru,
> > -			   gfp_t gfpflags) __assume_slab_alignment __malloc;
> > +
> > +void *__kmem_cache_alloc_node(struct kmem_cache *s, struct list_lru *lru,
> > +			      gfp_t gfpflags, int node, unsigned long caller __maybe_unused)
> > +			      __assume_slab_alignment __malloc;
> 
> I don't think caller needs to be __maybe_unused in the declaration nor any
> of the implementations of __kmem_cache_alloc_node(), all actually pass it on?

My intention was to give a hint to the compiler when CONFIG_TRACING=n.

I'll check whether the compiler optimizes it away even without __maybe_unused.

Thanks!
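
For context, the wrapper shape being discussed looks roughly like the sketch
below. This is a simplified illustration built from the quoted declaration and
the commit message, not the exact hunks of the patch; the function bodies are
paraphrased and only stand in for whatever the real wrappers do:

	/*
	 * Sketch: kmem_cache_alloc{,_node,_lru} become inline wrappers
	 * around a single __kmem_cache_alloc_node().  Because the wrappers
	 * are inlined into their callers, _THIS_IP_ already points into the
	 * caller, so it replaces _RET_IP_ as the recorded call site.
	 */
	static __always_inline void *
	kmem_cache_alloc(struct kmem_cache *s, gfp_t gfpflags)
	{
		return __kmem_cache_alloc_node(s, NULL, gfpflags,
					       NUMA_NO_NODE, _THIS_IP_);
	}

	static __always_inline void *
	kmem_cache_alloc_node(struct kmem_cache *s, gfp_t gfpflags, int node)
	{
		return __kmem_cache_alloc_node(s, NULL, gfpflags,
					       node, _THIS_IP_);
	}

	static __always_inline void *
	kmem_cache_alloc_lru(struct kmem_cache *s, struct list_lru *lru,
			     gfp_t gfpflags)
	{
		/* Same entry point; only the lru argument differs. */
		return __kmem_cache_alloc_node(s, lru, gfpflags,
					       NUMA_NO_NODE, _THIS_IP_);
	}

Since every wrapper passes caller straight into __kmem_cache_alloc_node(),
the parameter is consumed there regardless of CONFIG_TRACING, which is the
basis of the nit about dropping __maybe_unused.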