From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Wed, 30 Jun 2021 21:13:27 +0200
From: Marco Elver <elver@google.com>
To: yee.lee@mediatek.com
Cc: andreyknvl@gmail.com, wsd_upstream@mediatek.com,
    Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
    Andrew Morton, Matthias Brugger, "open list:KASAN",
    "open list:MEMORY MANAGEMENT", open list,
"moderated list:ARM/Mediatek SoC support" , "moderated list:ARM/Mediatek SoC support" Subject: Re: [PATCH v3 1/1] kasan: Add memzero init for unaligned size under SLUB debug Message-ID: References: <20210630134943.20781-1-yee.lee@mediatek.com> <20210630134943.20781-2-yee.lee@mediatek.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20210630134943.20781-2-yee.lee@mediatek.com> User-Agent: Mutt/2.0.5 (2021-01-21) Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On Wed, Jun 30, 2021 at 09:49PM +0800, yee.lee@mediatek.com wrote: > From: Yee Lee > > Issue: when SLUB debug is on, hwtag kasan_unpoison() would overwrite > the redzone of object with unaligned size. > > An additional memzero_explicit() path is added to replacing init by > hwtag instruction for those unaligned size at SLUB debug mode. > > The penalty is acceptable since they are only enabled in debug mode, > not production builds. A block of comment is added for explanation. > > Signed-off-by: Yee Lee > Suggested-by: Marco Elver > Suggested-by: Andrey Konovalov > Cc: Andrey Ryabinin > Cc: Alexander Potapenko > Cc: Dmitry Vyukov > Cc: Andrew Morton In future, please add changes to each version after an additional '---'. Example: --- v2: * Use IS_ENABLED(CONFIG_SLUB_DEBUG) in if-statement. > --- > mm/kasan/kasan.h | 10 ++++++++++ > 1 file changed, 10 insertions(+) > > diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h > index 8f450bc28045..6f698f13dbe6 100644 > --- a/mm/kasan/kasan.h > +++ b/mm/kasan/kasan.h > @@ -387,6 +387,16 @@ static inline void kasan_unpoison(const void *addr, size_t size, bool init) > > if (WARN_ON((unsigned long)addr & KASAN_GRANULE_MASK)) > return; > + /* > + * Explicitly initialize the memory with the precise object size > + * to avoid overwriting the SLAB redzone. This disables initialization > + * in the arch code and may thus lead to performance penalty. > + * The penalty is accepted since SLAB redzones aren't enabled in production builds. > + */ Can we please format the comment properly: diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h index 6f698f13dbe6..1972ec5736cb 100644 --- a/mm/kasan/kasan.h +++ b/mm/kasan/kasan.h @@ -388,10 +388,10 @@ static inline void kasan_unpoison(const void *addr, size_t size, bool init) if (WARN_ON((unsigned long)addr & KASAN_GRANULE_MASK)) return; /* - * Explicitly initialize the memory with the precise object size - * to avoid overwriting the SLAB redzone. This disables initialization - * in the arch code and may thus lead to performance penalty. - * The penalty is accepted since SLAB redzones aren't enabled in production builds. + * Explicitly initialize the memory with the precise object size to + * avoid overwriting the SLAB redzone. This disables initialization in + * the arch code and may thus lead to performance penalty. The penalty + * is accepted since SLAB redzones aren't enabled in production builds. */ if (IS_ENABLED(CONFIG_SLUB_DEBUG) && init && ((unsigned long)size & KASAN_GRANULE_MASK)) { init = false; > + if (IS_ENABLED(CONFIG_SLUB_DEBUG) && init && ((unsigned long)size & KASAN_GRANULE_MASK)) { > + init = false; > + memzero_explicit((void *)addr, size); > + } > size = round_up(size, KASAN_GRANULE_SIZE); > > hw_set_mem_tag_range((void *)addr, size, tag, init); I think this solution might be fine for now, as I don't see an easy way to do this without some major refactor to use kmem_cache_debug_flags(). 
However, I think there's an intermediate solution where we only check
the static key 'slub_debug_enabled'. Because I've checked, and various
major distros _do_ enable CONFIG_SLUB_DEBUG -- the static branch just
makes sure there's no performance overhead in that case (a standalone
sketch of the static-key pattern follows the patch at the end).

Checking the static branch requires including mm/slab.h into
mm/kasan/kasan.h, which we currently don't do and perhaps wanted to
avoid. Although I don't see a reason there, because there's no circular
dependency even if we did.

Andrey, any opinion?

In case you guys think checking the static key is the better solution,
I think the below would work together with the pre-requisite patch at
the end:

diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 1972ec5736cb..9130d025612c 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -6,6 +6,8 @@
 #include <linux/kasan.h>
 #include <linux/stackdepot.h>
 
+#include "../slab.h"
+
 #ifdef CONFIG_KASAN_HW_TAGS
 
 #include <linux/static_key.h>
@@ -393,7 +395,8 @@ static inline void kasan_unpoison(const void *addr, size_t size, bool init)
 	 * the arch code and may thus lead to performance penalty. The penalty
 	 * is accepted since SLAB redzones aren't enabled in production builds.
 	 */
-	if (IS_ENABLED(CONFIG_SLUB_DEBUG) && init && ((unsigned long)size & KASAN_GRANULE_MASK)) {
+	if (slub_debug_enabled_unlikely() &&
+	    init && ((unsigned long)size & KASAN_GRANULE_MASK)) {
 		init = false;
 		memzero_explicit((void *)addr, size);
 	}

[ Note: You can pick the below patch up by extracting it from this
  email and running 'git am -s'. You could then use it as part of a
  patch series together with your original patch. ]

From: Marco Elver <elver@google.com>
Date: Wed, 30 Jun 2021 20:56:57 +0200
Subject: [PATCH] mm: introduce helper to check slub_debug_enabled

Introduce a helper to check slub_debug_enabled, so that we can confine
the use of #ifdef to the definition of the slub_debug_enabled_unlikely()
helper.
Signed-off-by: Marco Elver <elver@google.com>
---
 mm/slab.h | 15 +++++++++++----
 1 file changed, 11 insertions(+), 4 deletions(-)

diff --git a/mm/slab.h b/mm/slab.h
index 18c1927cd196..9439da434712 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -215,10 +215,18 @@ DECLARE_STATIC_KEY_TRUE(slub_debug_enabled);
 DECLARE_STATIC_KEY_FALSE(slub_debug_enabled);
 #endif
 extern void print_tracking(struct kmem_cache *s, void *object);
+static inline bool slub_debug_enabled_unlikely(void)
+{
+	return static_branch_unlikely(&slub_debug_enabled);
+}
 #else
 static inline void print_tracking(struct kmem_cache *s, void *object)
 {
 }
+static inline bool slub_debug_enabled_unlikely(void)
+{
+	return false;
+}
 #endif
 
 /*
@@ -228,11 +236,10 @@ static inline void print_tracking(struct kmem_cache *s, void *object)
  */
 static inline bool kmem_cache_debug_flags(struct kmem_cache *s, slab_flags_t flags)
 {
-#ifdef CONFIG_SLUB_DEBUG
-	VM_WARN_ON_ONCE(!(flags & SLAB_DEBUG_FLAGS));
-	if (static_branch_unlikely(&slub_debug_enabled))
+	if (IS_ENABLED(CONFIG_SLUB_DEBUG))
+		VM_WARN_ON_ONCE(!(flags & SLAB_DEBUG_FLAGS));
+	if (slub_debug_enabled_unlikely())
 		return s->flags & flags;
-#endif
 	return false;
 }
-- 
2.32.0.93.g670b81a890-goog
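P.S. For reference, a minimal standalone sketch of the static-key
pattern the helper wraps (illustrative only: the module scaffolding and
the demo_* names are made up for this example; the real key lives in
mm/slub.c and is flipped when slub_debug is enabled at boot):

#include <linux/module.h>
#include <linux/jump_label.h>

static DEFINE_STATIC_KEY_FALSE(demo_debug_enabled);

static bool demo_debug_enabled_unlikely(void)
{
	/* Compiles down to a patchable no-op/jump rather than a
	 * load-and-branch, so the disabled case is (nearly) free. */
	return static_branch_unlikely(&demo_debug_enabled);
}

static int __init demo_init(void)
{
	pr_info("before: %d\n", demo_debug_enabled_unlikely());
	static_branch_enable(&demo_debug_enabled);	/* runtime flip */
	pr_info("after: %d\n", demo_debug_enabled_unlikely());
	return 0;
}
module_init(demo_init);

static void __exit demo_exit(void)
{
}
module_exit(demo_exit);

MODULE_LICENSE("GPL");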