From: Marco Elver
Date: Fri, 11 Sep 2020 14:24:06 +0200
Subject: Re: [PATCH RFC 04/10] mm, kfence: insert KFENCE hooks for SLAB
To: Dmitry Vyukov
Cc: Alexander Potapenko, Andrew Morton, Catalin Marinas, Christoph Lameter,
	David Rientjes, Joonsoo Kim, Mark Rutland, Pekka Enberg, "H. Peter Anvin",
	"Paul E. McKenney", Andrey Konovalov, Andrey Ryabinin, Andy Lutomirski,
	Borislav Petkov, Dave Hansen, Eric Dumazet, Greg Kroah-Hartman,
	Ingo Molnar, Jann Horn, Jonathan Corbet, Kees Cook, Peter Zijlstra,
	Qian Cai, Thomas Gleixner, Will Deacon, "the arch/x86 maintainers",
	"open list:DOCUMENTATION", LKML, kasan-dev, Linux ARM, Linux-MM
References: <20200907134055.2878499-1-elver@google.com> <20200907134055.2878499-5-elver@google.com>

On Fri, 11 Sep 2020 at 09:17, Dmitry Vyukov wrote:
>
> On Mon, Sep 7, 2020 at 3:41 PM Marco Elver wrote:
> >
> > From: Alexander Potapenko
> >
> > Inserts KFENCE hooks into the SLAB allocator.
> >
> > We note the addition of the 'orig_size' argument to slab_alloc*()
> > functions, to be able to pass the originally requested size to KFENCE.
> > When KFENCE is disabled, there is no additional overhead, since these
> > functions are __always_inline.
> >
> > Co-developed-by: Marco Elver
> > Signed-off-by: Marco Elver
> > Signed-off-by: Alexander Potapenko
> > ---
> >  mm/slab.c        | 46 ++++++++++++++++++++++++++++++++++------------
> >  mm/slab_common.c |  6 +++++-
> >  2 files changed, 39 insertions(+), 13 deletions(-)
> >
> > diff --git a/mm/slab.c b/mm/slab.c
> > index 3160dff6fd76..30aba06ae02b 100644
> > --- a/mm/slab.c
> > +++ b/mm/slab.c
> > @@ -100,6 +100,7 @@
> >  #include
> >  #include
> >  #include
> > +#include
> >  #include
> >  #include
> >  #include
> > @@ -3206,7 +3207,7 @@ static void *____cache_alloc_node(struct kmem_cache *cachep, gfp_t flags,
> >  }
> >
> >  static __always_inline void *
> > -slab_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid,
> > +slab_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid, size_t orig_size,
> >  		unsigned long caller)
> >  {
> >  	unsigned long save_flags;
> > @@ -3219,6 +3220,10 @@ slab_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid,
> >  	if (unlikely(!cachep))
> >  		return NULL;
> >
> > +	ptr = kfence_alloc(cachep, orig_size, flags);
> > +	if (unlikely(ptr))
> > +		goto out_hooks;
> > +
> >  	cache_alloc_debugcheck_before(cachep, flags);
> >  	local_irq_save(save_flags);
> >
> > @@ -3251,6 +3256,7 @@ slab_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid,
> >  	if (unlikely(slab_want_init_on_alloc(flags, cachep)) && ptr)
> >  		memset(ptr, 0, cachep->object_size);
> >
> > +out_hooks:
> >  	slab_post_alloc_hook(cachep, objcg, flags, 1, &ptr);
> >  	return ptr;
> >  }
> > @@ -3288,7 +3294,7 @@ __do_cache_alloc(struct kmem_cache *cachep, gfp_t flags)
> >  #endif /* CONFIG_NUMA */
> >
> >  static __always_inline void *
> > -slab_alloc(struct kmem_cache *cachep, gfp_t flags, unsigned long caller)
> > +slab_alloc(struct kmem_cache *cachep, gfp_t flags, size_t orig_size, unsigned long caller)
> >  {
> >  	unsigned long save_flags;
> >  	void *objp;
> > @@ -3299,6 +3305,10 @@ slab_alloc(struct kmem_cache *cachep, gfp_t flags, unsigned long caller)
> >  	if (unlikely(!cachep))
> >  		return NULL;
> >
> > +	objp = kfence_alloc(cachep, orig_size, flags);
> > +	if (unlikely(objp))
> > +		goto leave;
> > +
> >  	cache_alloc_debugcheck_before(cachep, flags);
> >  	local_irq_save(save_flags);
> >  	objp = __do_cache_alloc(cachep, flags);
> > @@ -3309,6 +3319,7 @@ slab_alloc(struct kmem_cache *cachep, gfp_t flags, unsigned long caller)
> >  	if (unlikely(slab_want_init_on_alloc(flags, cachep)) && objp)
> >  		memset(objp, 0, cachep->object_size);
> >
> > +leave:
> >  	slab_post_alloc_hook(cachep, objcg, flags, 1, &objp);
> >  	return objp;
> >  }
> > @@ -3414,6 +3425,11 @@ static void cache_flusharray(struct kmem_cache *cachep, struct array_cache *ac)
> >  static __always_inline void __cache_free(struct kmem_cache *cachep, void *objp,
> >  					 unsigned long caller)
> >  {
> > +	if (kfence_free(objp)) {
> > +		kmemleak_free_recursive(objp, cachep->flags);
> > +		return;
> > +	}
> > +
> >  	/* Put the object into the quarantine, don't touch it for now. */
> >  	if (kasan_slab_free(cachep, objp, _RET_IP_))
> >  		return;
> > @@ -3479,7 +3495,7 @@ void ___cache_free(struct kmem_cache *cachep, void *objp,
> >   */
> >  void *kmem_cache_alloc(struct kmem_cache *cachep, gfp_t flags)
> >  {
> > -	void *ret = slab_alloc(cachep, flags, _RET_IP_);
> > +	void *ret = slab_alloc(cachep, flags, cachep->object_size, _RET_IP_);
>
> It's kinda minor, but since we are talking about malloc fast path:
> will passing 0 instead of cachep->object_size (here and everywhere
> else) and then using cachep->object_size on the slow path if 0 is
> passed as size improve codegen?

It doesn't save us much, maybe 1 instruction based on what I'm looking
at right now.

The main worry I have is that the 'orig_size' argument is now part of
slab_alloc, and changing its semantics may cause problems in future if
it's no longer just passed to kfence_alloc().
Today, we can do the 'size = size ?: cache->object_size' trick inside
kfence_alloc(), but at the cost of breaking the intuitive semantics of
slab_alloc's orig_size argument for future users. Is it worth it?

Thanks,
-- Marco