From mboxrd@z Thu Jan 1 00:00:00 1970
From: Marco Elver
Date: Fri, 11 Sep 2020 14:24:06 +0200
Subject: Re: [PATCH RFC 04/10] mm, kfence: insert KFENCE hooks for SLAB
To: Dmitry Vyukov
Cc: Alexander Potapenko, Andrew Morton, Catalin Marinas, Christoph Lameter,
	David Rientjes, Joonsoo Kim, Mark Rutland, Pekka Enberg,
	"H. Peter Anvin", "Paul E. McKenney", Andrey Konovalov,
	Andrey Ryabinin, Andy Lutomirski, Borislav Petkov, Dave Hansen,
	Eric Dumazet, Greg Kroah-Hartman, Ingo Molnar, Jann Horn,
	Jonathan Corbet, Kees Cook, Peter Zijlstra, Qian Cai,
	Thomas Gleixner, Will Deacon, "the arch/x86 maintainers",
	"open list:DOCUMENTATION", LKML, kasan-dev, Linux ARM, Linux-MM
References: <20200907134055.2878499-1-elver@google.com> <20200907134055.2878499-5-elver@google.com>
Content-Type: text/plain; charset="UTF-8"
List-ID: X-Mailing-List: linux-kernel@vger.kernel.org

On Fri, 11 Sep 2020 at 09:17, Dmitry Vyukov wrote:
>
> On Mon, Sep 7, 2020 at 3:41 PM Marco Elver wrote:
> >
> > From: Alexander Potapenko
> >
> > Inserts KFENCE hooks into the SLAB allocator.
> >
> > We note the addition of the 'orig_size' argument to slab_alloc*()
> > functions, to be able to pass the originally requested size to KFENCE.
> > When KFENCE is disabled, there is no additional overhead, since these
> > functions are __always_inline.
> >
> > Co-developed-by: Marco Elver
> > Signed-off-by: Marco Elver
> > Signed-off-by: Alexander Potapenko
> > ---
> >  mm/slab.c        | 46 ++++++++++++++++++++++++++++++++++------------
> >  mm/slab_common.c |  6 +++++-
> >  2 files changed, 39 insertions(+), 13 deletions(-)
> >
> > diff --git a/mm/slab.c b/mm/slab.c
> > index 3160dff6fd76..30aba06ae02b 100644
> > --- a/mm/slab.c
> > +++ b/mm/slab.c
> > @@ -100,6 +100,7 @@
> >  #include
> >  #include
> >  #include
> > +#include
> >  #include
> >  #include
> >  #include
> > @@ -3206,7 +3207,7 @@ static void *____cache_alloc_node(struct kmem_cache *cachep, gfp_t flags,
> >  }
> >
> >  static __always_inline void *
> > -slab_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid,
> > +slab_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid, size_t orig_size,
> >                 unsigned long caller)
> >  {
> >         unsigned long save_flags;
> > @@ -3219,6 +3220,10 @@ slab_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid,
> >         if (unlikely(!cachep))
> >                 return NULL;
> >
> > +       ptr = kfence_alloc(cachep, orig_size, flags);
> > +       if (unlikely(ptr))
> > +               goto out_hooks;
> > +
> >         cache_alloc_debugcheck_before(cachep, flags);
> >         local_irq_save(save_flags);
> >
> > @@ -3251,6 +3256,7 @@ slab_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid,
> >         if (unlikely(slab_want_init_on_alloc(flags, cachep)) && ptr)
> >                 memset(ptr, 0, cachep->object_size);
> >
> > +out_hooks:
> >         slab_post_alloc_hook(cachep, objcg, flags, 1, &ptr);
> >         return ptr;
> >  }
> > @@ -3288,7 +3294,7 @@ __do_cache_alloc(struct kmem_cache *cachep, gfp_t flags)
> >  #endif /* CONFIG_NUMA */
> >
> >  static __always_inline void *
> > -slab_alloc(struct kmem_cache *cachep, gfp_t flags, unsigned long caller)
> > +slab_alloc(struct kmem_cache *cachep, gfp_t flags, size_t orig_size, unsigned long caller)
> >  {
> >         unsigned long save_flags;
> >         void *objp;
> > @@ -3299,6 +3305,10 @@ slab_alloc(struct kmem_cache *cachep, gfp_t flags, unsigned long caller)
> >         if (unlikely(!cachep))
> >                 return NULL;
> >
> > +       objp = kfence_alloc(cachep, orig_size, flags);
> > +       if (unlikely(objp))
> > +               goto leave;
> > +
> >         cache_alloc_debugcheck_before(cachep, flags);
> >         local_irq_save(save_flags);
> >         objp = __do_cache_alloc(cachep, flags);
> > @@ -3309,6 +3319,7 @@ slab_alloc(struct kmem_cache *cachep, gfp_t flags, unsigned long caller)
> >         if (unlikely(slab_want_init_on_alloc(flags, cachep)) && objp)
> >                 memset(objp, 0, cachep->object_size);
> >
> > +leave:
> >         slab_post_alloc_hook(cachep, objcg, flags, 1, &objp);
> >         return objp;
> >  }
> > @@ -3414,6 +3425,11 @@ static void cache_flusharray(struct kmem_cache *cachep, struct array_cache *ac)
> >  static __always_inline void __cache_free(struct kmem_cache *cachep, void *objp,
> >                                          unsigned long caller)
> >  {
> > +       if (kfence_free(objp)) {
> > +               kmemleak_free_recursive(objp, cachep->flags);
> > +               return;
> > +       }
> > +
> >         /* Put the object into the quarantine, don't touch it for now. */
> >         if (kasan_slab_free(cachep, objp, _RET_IP_))
> >                 return;
> > @@ -3479,7 +3495,7 @@ void ___cache_free(struct kmem_cache *cachep, void *objp,
> >  */
> >  void *kmem_cache_alloc(struct kmem_cache *cachep, gfp_t flags)
> >  {
> > -       void *ret = slab_alloc(cachep, flags, _RET_IP_);
> > +       void *ret = slab_alloc(cachep, flags, cachep->object_size, _RET_IP_);
>
> It's kinda minor, but since we are talking about malloc fast path:
> will passing 0 instead of cachep->object_size (here and everywhere
> else) and then using cachep->object_size on the slow path if 0 is
> passed as size improve codegen?

It doesn't save us much, maybe 1 instruction based on what I'm looking
at right now.

The main worry I have is that the 'orig_size' argument is now part of
slab_alloc, and changing its semantics may cause problems in future if
it's no longer just passed to kfence_alloc(). Today, we can do the
'size = size ?: cache->object_size' trick inside kfence_alloc(), but at
the cost of breaking the intuitive semantics of slab_alloc's orig_size
argument for future users. Is it worth it?
Thanks,
-- Marco